Architecture test scenario for scalable adaptive production systems through AI-based resilience optimization

Scalable adaptive production systems through AI-based resilience optimization.
Powered by MyDataEconomy

Motivation

Testing scenario

The goal is to test and evaluate, by example, the requirements created in previous work packages. Among other aspects, the execution of external services, the roles of data producers, and central architectural elements such as the catalog and the vocabulary provider are to be tested.

In order to focus on testing the architecture, we have decided to use a translation service. However, this service is arbitrarily replaceable: the scenario could equally be performed with object recognition, anomaly detection, or other machine learning and AI services.

Spaicer

In a globalized and interconnected business world, production interruptions, including supply chain disruptions, have been the leading business risk for many years.

The ability of a company to permanently adapt to internal and external changes and disruptions is the "quest for resilience". Reinforced by a significant increase in complexity in production due to Industry 4.0, resilience management thus becomes an indispensable success factor for manufacturing companies.

The SPAICER project is developing a data-driven ecosystem based on lifelong, collaborative and low-threshold Smarter Resilience services by leveraging leading AI technologies and Industrie 4.0 standards with the goal of anticipating disruptions (anticipation) and optimally adapting production plans to active disruptions (reaction) at any time.

Architecture

It is a decentralized architecture in which the participating parties exchange data in a sovereign way. The federated catalog serves as a search engine for data sources and for data created by services.

Scenario

We have a data producer who regularly publishes texts; this can happen every minute or every second. The data producer makes the data available to participating SPAICER partners, who may use it according to SPAICER guidelines.
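As a sketch, the producer side might look like the following. The `worker` client with its `connect`/`publish` methods is assumed to be the same MyDataEconomy library used by the service code in this README; since its import path is not shown here, the client is passed in as a parameter, and the payload shape is inferred from what the service reads (`body.data.text`):

```javascript
// Sketch of the text producer. The "worker" client (connect/publish) is
// assumed to match the MyDataEconomy API used elsewhere in this README.
function makePayload(text) {
    // shape inferred from what the service reads: body.data.text
    return { text }
}

async function startPublishing(worker, config, texts) {
    await worker.connect(config)
    for (const text of texts) {
        // publish one raw text; in the real scenario this runs
        // every second or every minute
        await worker.publish(makePayload(text))
    }
}
```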

A service provider would like to use this data to build up its own database and improve its translation algorithms, and in return give the data producer the opportunity to use its service. However, its USP (the translation algorithm) must not be published. The service provider obtains authorization for the data producer's (Company Y's) texts via the catalog. This is only possible because both parties belong to the same project (SPAICER).

The actual transfer of the data then takes place over a peer-to-peer connection. This can be done via batch downloads; in our case, we decided to use an event-based data stream (WebSockets).

    //the websocket client API used here (socket.on) matches the "ws" npm package
    const WebSocket = require('ws')

    //initially connect the service to the node of Company X (service provider)
    await worker.connect(config)

    //initialize a websocket connection to the data-providing node (Company Y)
    const socket = new WebSocket("wss://<your-receipt-id>:<your-permission-key>@<your-provider-url>/uploader/")
    socket.on('open', function () {
        //wait for incoming data; this fires every time Company Y stores data at its node
        socket.on('message', async function (message) {
            const parsed = JSON.parse(message)
            const body = parsed.message.data

            //execute the service
            const res = await translateText(body.data.text)

            //upload the result to the node of Company X (service provider)
            await worker.publish({ text: res.text })
        })
        //...
    })
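The unpacking of the incoming message in the handler above can be isolated into a small, testable helper. This is a sketch; the envelope shape is inferred from the snippet and may differ in practice:

```javascript
// Helper that unpacks the message envelope used in the snippet above.
// The envelope shape ({ message: { data: { data: { text } } } }) is
// inferred from the handler code.
function extractText(rawMessage) {
    const parsed = JSON.parse(rawMessage)
    return parsed.message.data.data.text
}

// Example: a message as Company Y's node might deliver it
const raw = JSON.stringify({ message: { data: { data: { text: 'Hallo Welt' } } } })
console.log(extractText(raw)) // → "Hallo Welt"
```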

The translated texts are stored on the service provider's node and released for Company Y.

Company Y can now use the catalog to find the data and either transfer the data to its own network or use it freely via open APIs (event-based or batch-based).

Install

Prerequisites

You need to install Node.js.

Create an Azure translate service

You can simply follow the instructions and create a free service. You will then need the URL and the permission keys.
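The `translateText` function used in the service snippet above is not shown in this README. A minimal sketch against the Azure Translator v3 REST API could look like this; the environment variable names and the default target language `en` are assumptions, while the endpoint, headers, and response shape follow Azure's documented v3 API:

```javascript
// Sketch of translateText against the Azure Translator v3 REST API.
// AZURE_TRANSLATOR_KEY / AZURE_TRANSLATOR_REGION are assumed env var names.
function buildTranslateUrl(to) {
    return 'https://api.cognitive.microsofttranslator.com/translate'
        + '?api-version=3.0&to=' + encodeURIComponent(to)
}

async function translateText(text, to = 'en') {
    // global fetch is available in Node.js 18+
    const response = await fetch(buildTranslateUrl(to), {
        method: 'POST',
        headers: {
            'Ocp-Apim-Subscription-Key': process.env.AZURE_TRANSLATOR_KEY,
            'Ocp-Apim-Subscription-Region': process.env.AZURE_TRANSLATOR_REGION,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify([{ Text: text }])
    })
    const result = await response.json()
    // v3 returns one result object per input, each with a translations array
    return { text: result[0].translations[0].text }
}
```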

Installation

Clone this repository by

git clone git@github.com:Senseering/spaicer-translation-srs.git
cd spaicer-translation-srs

Install the node modules by

cd textpublisher
npm install
cd ..
cd service
npm install

Configuration

To configure the two workers you need to register a new data source at the providing Manager. This data source publishes the raw texts that need to be translated. ....
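What the worker `config` object passed to `worker.connect(config)` might contain is not documented in this README. As a purely illustrative sketch, reusing the placeholders from the WebSocket URL above:

```javascript
// Purely illustrative config sketch; every field name here is an
// assumption, since the real worker config is not shown in this README.
const config = {
    url: 'wss://<your-provider-url>',   // node of the providing Manager
    apiKey: '<your-permission-key>',    // credential obtained at registration
    receiptId: '<your-receipt-id>'      // identifier of the registered data source
}
```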
