Manufacturing Ontologies

Introduction

An ontology defines the language used to describe a system. In the manufacturing domain, such a system can be a factory or plant, but also an enterprise application or a supply chain. There are several established ontologies in the manufacturing domain, and most of them have long been standardized. In this repository, we focus on two of these ontologies: ISA95, to describe a factory, and the IEC 63278 Asset Administration Shell, to describe a manufacturing supply chain. Furthermore, we have included a factory simulation and an end-to-end solution architecture for you to try out the ontologies, leveraging IEC 62541 OPC UA and the Microsoft Azure cloud.

Digital Twin Definition Language

The ontologies defined in this repository are described using the Digital Twin Definition Language (DTDL), which is specified here.
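
For a flavor of DTDL, here is a minimal sketch (using the Azure.DigitalTwins.Core SDK) that defines a trivial interface and uploads it to an Azure Digital Twins instance. The model and instance URL are placeholders, not the repository's actual ISA95 models:

    // Minimal sketch: define a trivial DTDL v2 interface and upload it to
    // Azure Digital Twins. The model and instance URL are placeholders,
    // not the repository's actual ISA95 models.
    using Azure.DigitalTwins.Core;
    using Azure.Identity;

    const string dtdl = """
    {
      "@id": "dtmi:example:WorkCenter;1",
      "@type": "Interface",
      "@context": "dtmi:dtdl:context;2",
      "displayName": "Work Center",
      "contents": [
        { "@type": "Property", "name": "equipmentLevel", "schema": "string" }
      ]
    }
    """;

    var client = new DigitalTwinsClient(
        new Uri("https://yourADTinstance.api.wcus.digitaltwins.azure.net"),
        new DefaultAzureCredential());

    await client.CreateModelsAsync(new[] { dtdl });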

International Society of Automation 95 (ISA95/IEC 62264)

ISA95 / IEC 62264 is one of the ontologies leveraged by this solution. It is an established standard, described here and here.

IEC 63278 Asset Administration Shell (AAS)

The IEC 63278 Asset Administration Shell (AAS) is the second ontology leveraged by this solution. The AAS is used across all lifecycle stages of an industrial asset and is a platform-independent data sharing service between enterprises along the manufacturing supply chain. The standard is described here, and tools to convert Asset Administration Shell models to DTDL are provided in this repository here. Furthermore, the reference solution provided in this repository contains an AAS Repository service from the Digital Twin Consortium's reference implementation here, which makes the Product Carbon Footprint (PCF) of the products built by the simulated production lines available to customers.

IEC 62541 Open Platform Communications Unified Architecture (OPC UA)

This solution leverages IEC 62541 Open Platform Communications Unified Architecture (OPC UA) for all Operational Technology (OT) data. This standard is described here.

Reference Solution Architecture

This repository contains a reference solution that leverages the ontologies described above, with an implementation on Microsoft Azure. Other implementations can easily be added by implementing the open interface IDigitalTwin within the UA Cloud Twin application.

(Architecture diagram)

Here are the components involved in this solution:

| Component | Description |
| --- | --- |
| Industrial Assets | A set of simulated, OPC UA-enabled production lines hosted in Docker containers. |
| UA Cloud Publisher | This edge application converts OPC UA Client/Server requests into OPC UA PubSub cloud messages. It's hosted in a Docker container. |
| UA Cloud Commander | This edge application converts messages sent to an MQTT or Kafka broker (possibly in the cloud) into OPC UA Client/Server requests for a connected OPC UA server. It's hosted in a Docker container. |
| AKS Edge Essentials | This Kubernetes implementation (both K3S and K8S are supported) runs at the edge and provides single- and multi-node Kubernetes clusters for a fault-tolerant edge configuration on embedded or PC-class hardware, like an industrial gateway. |
| Azure Event Hubs | The cloud message broker that receives OPC UA PubSub messages from edge gateways and stores them until they're retrieved by subscribers like UA Cloud Twin. Separately, it also forwards data history events emitted by the Azure Digital Twins instance to the Azure Data Explorer instance. |
| UA Cloud Twin | This cloud application converts OPC UA PubSub cloud messages into digital twin updates and creates digital twins automatically by processing the cloud messages. Twins are instantiated from the models of the ISA95-compatible DTDL ontology. It's hosted in a Docker container. |
| Azure Digital Twins | The platform that enables the creation of a digital representation of real-world assets, places, business processes, and people. |
| Azure Data Explorer | The time series database and front-end dashboard service for advanced cloud analytics, including built-in anomaly detection and predictions. |
| Pressure Relief Azure Function | This Azure Function queries Azure Data Explorer for a specific data value (the pressure in one of the simulated production line machines) and calls UA Cloud Commander via Azure Event Hubs when a certain threshold (4000 mbar) is reached. UA Cloud Commander then calls the OpenPressureReliefValve method on the machine via OPC UA. |
| Azure Arc | This cloud service is used to manage the on-premises Kubernetes cluster at the edge. New workloads can be deployed via Flux. |
| Azure Storage | This cloud service is used to manage the OPC UA certificate store and the settings of the edge Kubernetes workloads. |
| Azure 3D Scenes Studio | This cloud app allows the creation of 3D immersive viewers for your manufacturing data. |
| Azure Digital Twins Explorer | This cloud app allows you to view your digital twins in an interactive UI. |
| Azure Data Explorer Dashboards | This cloud app allows the creation of 2D viewers for your manufacturing data. |
| Asset Admin Shell Repository | This REST web service and UI allows you to host Asset Administration Shells containing product information for your customers in a machine-readable format. |
| AASX Package Explorer | This app allows you to view and modify Asset Administration Shells on your PC. |
| UA Cloud Metaverse | This Industrial Metaverse app allows you to view digital twins of your manufacturing assets via Augmented Reality or Virtual Reality headsets. Work in progress! |
| Microsoft Sustainability Manager | An extensible solution that unifies data intelligence and provides comprehensive, integrated, and automated sustainability management for organizations at any stage of their sustainability journey. It automates manual processes, enabling organizations to more efficiently record, report, and reduce their emissions. |

❗ In a real-world deployment, something as critical as opening a pressure relief valve would of course be done on-premises; this is just a simple example of how to achieve the digital feedback loop.

Here are the data flow steps:

  1. The UA Cloud Publisher reads OPC UA data from each simulated factory and forwards it via OPC UA PubSub to Azure Event Hubs.
  2. The UA Cloud Twin reads and processes the OPC UA data from Azure Event Hubs and forwards it to an Azure Digital Twins instance.
    1. The UA Cloud Twin also automatically creates digital twins in Azure Digital Twins in response, mapping each OPC UA element (publisher, server, namespace, and node) to a separate digital twin.
    2. The UA Cloud Twin also automatically updates the state of the digital twins based on the data changes in their corresponding OPC UA nodes (see the sketch after this list).
  3. Updates to digital twins in Azure Digital Twins are automatically historized to an Azure Data Explorer cluster via the data history feature. Data history generates time series data that can be used for analytics, such as OEE (Overall Equipment Effectiveness) calculation and predictive maintenance scenarios.
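
To illustrate step 2b, a minimal sketch of such a twin update with the Azure Digital Twins SDK could look as follows; this is not UA Cloud Twin's actual code, and the twin ID and property name are assumptions:

    // Sketch: patch a digital twin property when a new OPC UA value
    // arrives. Twin ID and property name are hypothetical.
    using Azure;
    using Azure.DigitalTwins.Core;
    using Azure.Identity;

    var client = new DigitalTwinsClient(
        new Uri("https://yourADTinstance.api.wcus.digitaltwins.azure.net"),
        new DefaultAzureCredential());

    var patch = new JsonPatchDocument();
    patch.AppendReplace("/OPCUANodeValue", 4100.0); // latest telemetry value
    await client.UpdateDigitalTwinAsync("seattle-assembly-pressure", patch);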

UA Cloud Twin

The simulation makes use of UA Cloud Twin, also available from the Digital Twin Consortium here. It automatically detects OPC UA assets from the OPC UA telemetry messages sent to the cloud and registers ISA95-compatible digital twins in the Azure Digital Twins service for you.

(Screenshot: digital twin graph)

Mapping OPC UA Servers to the ISA95 Hierarchy Model

UA Cloud Twin takes the combination of the OPC UA Application URI and the OPC UA Namespace URIs discovered in the OPC UA telemetry stream (specifically, in the OPC UA PubSub metadata messages) and creates OPC UA Nodeset digital twin instances (inherited from the ISA95 Work Center digital twin model) for each combination. UA Cloud Publisher sends the OPC UA PubSub metadata messages to a separate broker topic to make sure all metadata can be read by UA Cloud Twin before the processing of the telemetry messages starts.
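
To make the mapping concrete, the following purely illustrative sketch (not UA Cloud Twin's actual logic) derives a deterministic, character-safe twin ID from an Application URI and a Namespace URI:

    // Illustrative only (not UA Cloud Twin's actual logic): derive a
    // deterministic twin ID from ApplicationUri + NamespaceUri, replacing
    // characters that are awkward in Azure Digital Twins IDs.
    using System.Text;

    Console.WriteLine(ToTwinId("urn:assembly.munich",
        "http://opcfoundation.org/UA/Station/"));
    // -> urn-assembly-munich-http---opcfoundation-org-UA-Station-

    static string ToTwinId(string applicationUri, string namespaceUri)
    {
        var raw = $"{applicationUri};{namespaceUri}";
        var sb = new StringBuilder(raw.Length);
        foreach (var c in raw)
            sb.Append(char.IsLetterOrDigit(c) ? c : '-'); // sanitize
        return sb.ToString();
    }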

Mapping OPC UA PubSub Publishers to the ISA95 Hierarchy Model

UA Cloud Twin takes the OPC UA Publisher ID and creates ISA95 Area digital twin instances (derived from the digital twin model of the same name) for each one.

Mapping OPC UA PubSub Datasets to the ISA95 Hierarchy Model

UA Cloud Twin takes each OPC UA Field discovered in the received Dataset metadata and creates an OPC UA Node digital twin instance (inherited from the ISA95 Work Unit digital twin model) for each.

A Cloud-based OPC UA Certificate Store and Persisted Storage

When running OPC UA applications, their OPC UA configuration files, keys, and certificates must be persisted. While Kubernetes has the ability to persist these files in volumes, a safer place for them is the cloud, especially on single-node clusters, where the volume would be lost if the node fails. This is why the OPC UA applications used in this solution (i.e. UA Cloud Publisher, the MES, and the simulated machines/production line stations) store their configuration files, keys, and certificates in the cloud. This also has the advantage of providing a single location for mutually trusted certificates for all OPC UA applications.
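
As an illustration, persisting a certificate store file to Azure Blob Storage with the Azure.Storage.Blobs SDK takes only a few lines; the container and blob paths below are assumptions, not the solution's actual layout:

    // Sketch: persist an OPC UA application certificate to a cloud blob.
    // Container and blob paths are hypothetical.
    using Azure.Storage.Blobs;

    var container = new BlobContainerClient(
        Environment.GetEnvironmentVariable("STORAGE_ACCOUNT_CS"),
        "uacertstore");
    await container.CreateIfNotExistsAsync();
    await container.GetBlobClient("own/certs/UACloudPublisher.der")
        .UploadAsync("pki/own/certs/UACloudPublisher.der", overwrite: true);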

Production Line Simulation

The solution leverages a production line simulation made up of several Stations, leveraging an OPC UA information model, as well as a simple Manufacturing Execution System (MES). Both the Stations and the MES are containerized for easy deployment.

Default Simulation Configuration

The simulation is configured to include 2 production lines. The default configuration is depicted below:

| Production Line | Ideal Cycle Time (in seconds) |
| --- | --- |
| Munich | 6 |
| Seattle | 10 |

OPC UA Node IDs of Station OPC UA Server

The following OPC UA Node IDs are used in the Station OPC UA Server for telemetry to the cloud (a minimal client-side read sketch follows the list):

  • i=379 - manufactured product serial number
  • i=385 - number of manufactured products
  • i=391 - number of discarded products
  • i=398 - running time
  • i=399 - faulty time
  • i=400 - status (0=station ready to do work, 1=work in progress, 2=work done and good part manufactured, 3=work done and scrap manufactured, 4=station in fault state)
  • i=406 - energy consumption
  • i=412 - ideal cycle time
  • i=418 - actual cycle time
  • i=434 - pressure
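
The following sketch reads one of these counters with the OPC Foundation .NET Standard stack (Opc.Ua.Client NuGet package). The endpoint URL, port, and namespace index 2 are assumptions, and config stands for an already loaded and validated ApplicationConfiguration:

    // Sketch: read the "number of manufactured products" counter (i=385)
    // from a station. Endpoint URL, port, and namespace index are
    // assumptions; 'config' is a validated Opc.Ua.ApplicationConfiguration.
    using Opc.Ua;
    using Opc.Ua.Client;

    var endpoint = CoreClientUtils.SelectEndpoint(
        "opc.tcp://assembly.munich:50000/", false /* no security for the sketch */);
    var configured = new ConfiguredEndpoint(
        null, endpoint, EndpointConfiguration.Create(config));

    using var session = await Session.Create(
        config,          // validated ApplicationConfiguration
        configured,
        false,           // do not update configuration before connect
        "StationReader", // session name
        60000,           // session timeout in ms
        new UserIdentity(new AnonymousIdentityToken()),
        null);           // preferred locales

    DataValue value = session.ReadValue(new NodeId(385, 2));
    Console.WriteLine($"Manufactured products: {value.Value}");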

Calculating the Product Carbon Footprint (PCF)

One of the most popular use cases for the Asset Administration Shell (AAS) is to make the Product Carbon Footprint (PCF) of manufactured products available to the customers of those products. In fact, the AAS will most likely become the underlying technology of the upcoming Digital Product Passport (DPP) initiative from the European Union. To calculate the PCF, all three scopes (1, 2 & 3) of emissions need to be taken into account. See here for how to enable the AAS to calculate and provide the PCF to external consumers via a standardized REST interface, and see here for how to enable Microsoft Sustainability Manager (MSM) to calculate the PCF.

Scope 1 Emissions

These emissions come from the fossil fuels the manufacturer burns, either during production (for example, when the manufacturer runs a natural gas-powered production process), before it (for example, trucks picking up parts), or afterwards (for example, the cars of sales people or the trucks delivering the produced products). They are relatively easy to calculate, as the emissions of fossil fuel-powered engines are a well-understood quantity. This reference solution simply adds a fixed value for scope 1 emissions to the total product carbon footprint.

Scope 2 Emissions

These emissions come from the electricity used during production. If the manufacturer uses a 100% renewable energy provider, the scope 2 emissions are zero. However, most manufacturers have long-term contracts with energy providers and need to ask their energy provider for the carbon intensity per kWh of the energy delivered. If this data is not available, an average for the electricity grid region the manufacturing site is in should be used. This data is available through services like WattTime, which is what this reference solution uses via the built-in Asset Admin Shell Repository, also available open-source from the Digital Twin Consortium. Please see below for how to configure this part of the reference solution after deployment. The PCF calculation first checks whether a new product was successfully produced by the production line, then retrieves the product's serial number and the energy each machine of the production line consumed while it worked on the product, and finally applies the carbon intensity to the sum of all machines' energy consumption (see the sketch below).
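
Reduced to its core, the scope 2 arithmetic is simply the grid carbon intensity applied to the summed energy consumption; the values in this sketch are illustrative:

    // Sketch: apply grid carbon intensity to the energy all machines of
    // the line consumed while producing one product. All values are
    // illustrative; the real solution obtains carbon intensity via WattTime.
    using System.Linq;

    double carbonIntensityKgPerKWh = 0.4;             // from the grid data service
    double[] machineEnergyKWh = { 0.12, 0.31, 0.08 }; // per-machine consumption
    double scope2Kg = machineEnergyKWh.Sum() * carbonIntensityKgPerKWh;
    Console.WriteLine($"Scope 2 emissions: {scope2Kg:F3} kg CO2e");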

Scope 3 Emissions

These emissions come from the parts and raw materials used within the manufactured product, as well as from getting the product into the end customer's hands in the first place and its subsequent use by the customer. They are the hardest to calculate, simply due to a lack of data from the worldwide suppliers a manufacturer uses today. Unfortunately, scope 3 emissions make up almost 90% of the emissions in manufacturing. However, this is where the AAS can help to create a standardized interface and data model for providing and retrieving scope 3 emissions. This reference solution does just that: it makes an AAS available for each manufactured product built by the simulated production line, and it also reads PCF data from another AAS simulating a manufacturing supply chain.

Installation of Production Line Simulation and Cloud Services

Clicking on the button below will deploy all required resources (on Microsoft Azure):

Deploy to Azure

You can also visualize the resources that will get deployed by clicking the button below:

Visualize

Once the deployment completes, follow these steps to set up a single-node edge Kubernetes cluster and finish configuring the simulation:

  1. Connect to the deployed Windows VM with an RDP (remote desktop) connection. You can download the RDP file in the Azure portal page for the VM, under the Connect options. Sign in using the credentials you provided during deployment.

  2. From the VM, download and install Azure Kubernetes Services Edge Essentials.

  3. Download and install the Azure CLI.

  4. Download this repository from here and extract to a directory of your choice.

  5. From a Windows command prompt, navigate to the ./AKSEdgeTools directory of the extracted repository and run AksEdgePrompt. On first run, after a few configuration steps, this will reboot the VM. Log in again and run AksEdgePrompt from a command prompt again. This will open a PowerShell window:

    (Screenshot: AksEdgePrompt PowerShell window)
  6. Run New-AksEdgeDeployment -JsonConfigFilePath .\aksedge-config.json from the PowerShell window.

Once the script is finished, your Kubernetes installation is complete and you can start deploying workloads.

Note: To get logs from all your Kubernetes workloads and services at any time, simply run Get-AksEdgeLogs from the PowerShell window that can be opened via AksEdgePrompt.

Running the Production Line Simulation

On the deployed VM, navigate to the ./Tools/FactorySimulation/OnPremAssets directory of the extracted repository downloaded earlier and run the StartSimulation command from a Windows command prompt, supplying the following parameters:

Syntax:

StartSimulation <EventHubsCS> <StorageAccountCS> <AzureSubscriptionID>

Parameters:

| Parameter | Description |
| --- | --- |
| EventHubsCS | Copy the Event Hubs namespace connection string as described here. |
| StorageAccountCS | In the Azure Portal, navigate to the Storage Account created by this solution. Select "Access keys" from the left-hand navigation menu, then copy the connection string for key1. |
| AzureSubscriptionID | In the Azure Portal, browse your Subscriptions and copy the ID of the subscription used in this solution. |

Example:

StartSimulation Endpoint=sb://ontologies.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abcdefgh= DefaultEndpointsProtocol=https;AccountName=ontologiesstorage;AccountKey=abcdefgh==;EndpointSuffix=core.windows.net 9dd2eft0-3dad-4aeb-85d8-c3adssd8127a

Note: On first run, a tool to copy files to Azure Storage needs to be installed. When prompted, simply press enter to proceed with the installation.

Note: In this solution, the OPC UA application certificate stores for UA Cloud Publisher, the simulated production line's MES, and the individual machines are located in the cloud, in the deployed Azure Storage account.

View Digital Twins in Azure Digital Twins Explorer

You can use Azure Digital Twins Explorer to monitor twin property updates and add more relationships to the digital twins that are created. For example, you might want to add Next and Previous relationships between machines on each production line to add more context to your solution.

To access Azure Digital Twins Explorer, first make sure you have the Azure Digital Twins Data Owner role on your Azure Digital Twins instance. Then open the explorer.
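
If you prefer to create such relationships in code rather than in the Explorer UI, a minimal sketch with the Azure Digital Twins SDK follows; the twin IDs are assumptions, and the twins' models must define a Next relationship for the call to succeed:

    // Sketch: add a "Next" relationship between two machine twins.
    // Twin IDs are hypothetical; the models must define the relationship.
    using Azure.DigitalTwins.Core;
    using Azure.Identity;

    var client = new DigitalTwinsClient(
        new Uri("https://yourADTinstance.api.wcus.digitaltwins.azure.net"),
        new DefaultAzureCredential());

    var rel = new BasicRelationship
    {
        Id = "assembly-to-test",
        SourceId = "munich-assembly",
        TargetId = "munich-test",
        Name = "Next"
    };
    await client.CreateOrReplaceRelationshipAsync(rel.SourceId, rel.Id, rel);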

Condition Monitoring, Calculating OEE, Detecting Anomalies and Making Predictions in Azure Data Explorer

You can also visit the Azure Data Explorer documentation to learn how to create no-code dashboards for condition monitoring, yield or maintenance predictions, or anomaly detection. There are a number of sample queries in the ./Tools/FactorySimulation/ADXQueries folder of this repository to get you started, and we have provided a sample dashboard in the same folder that you can deploy by following the steps outlined here.

Note: After importing the ontologies dashboard, you need to run all the provided ADX queries once in the Query tab of your ADX cluster to register the Kusto functions. You also need to set your ADT instance URL in the CalculateOEEForLine.kql query. Then, set the data source by providing your ADX cluster URI in the dashboard's hamburger menu (top-right-hand corner) under Data sources.
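
If you want to run a registered function programmatically instead of via a dashboard, a hedged sketch with the Microsoft.Azure.Kusto.Data client package looks like this; the database name and the function's (possibly empty) parameter list are assumptions:

    // Sketch: invoke a registered Kusto function from C# using the
    // Microsoft.Azure.Kusto.Data package. Cluster URI, database name,
    // and the function's parameter list are assumptions.
    using Kusto.Data;
    using Kusto.Data.Common;
    using Kusto.Data.Net.Client;

    var kcsb = new KustoConnectionStringBuilder(
            "https://ontologies.eastus2.kusto.windows.net/", "ontologies")
        .WithAadUserPromptAuthentication();

    using var provider = KustoClientFactory.CreateCslQueryProvider(kcsb);
    using var reader = provider.ExecuteQuery(
        "ontologies", "CalculateOEEForLine() | take 10",
        new ClientRequestProperties());

    while (reader.Read())
        Console.WriteLine(reader[0]);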

Using 3D Scenes Studio

If you want to add a 3D viewer to the simulation, you can follow the steps to configure the 3D Scenes Studio outlined here and map the 3D robot model from here to the digital twins automatically generated by the UA Cloud Twin:

(Screenshot: 3D Scenes Studio viewer)

Enabling the Digital Feedback Loop with UA Cloud Commander and the Pressure Relief Azure Function

If you want to test a "digital feedback loop", i.e. triggering a command on one of the OPC UA servers in the simulation from the cloud based on a time series reaching a certain threshold (the simulated pressure), deploy the PressureRelief Azure Function in your Azure subscription and create an application registration for your ADX instance as described here. You also need to define the following environment variables in the Azure portal for the Function (a compressed sketch of the Function's logic follows the list):

  • ADX_INSTANCE_URL - the endpoint of your ADX cluster, e.g. https://ontologies.eastus2.kusto.windows.net/
  • ADX_DB_NAME - the name of your ADX database
  • ADX_TABLE_NAME - the name of your ADX table
  • AAD_TENANT_ID - the GUID of the AAD tenant of your Azure subscription
  • APPLICATION_KEY - the secret you created during pressure relief function app registration
  • APPLICATION_ID - the GUID assigned to the pressure relief function during app registration
  • BROKER_NAME - the name of your event hubs namespace, e.g. ontologies-eventhubs.servicebus.windows.net
  • BROKER_USERNAME - set to "$ConnectionString"
  • BROKER_PASSWORD - the primary key connection string of your event hubs namespace
  • TOPIC - set to "commander.command"
  • RESPONSE_TOPIC - set to "commander.response"
  • UA_SERVER_ENDPOINT - set to "opc.tcp://assembly.seattle/" to open the pressure relief valve of the Seattle assembly machine
  • UA_SERVER_METHOD_ID - set to "ns=2;i=435"
  • UA_SERVER_OBJECT_ID - set to "ns=2;i=424"
  • UA_SERVER_APPLICATION_NAME - set to "assembly"
  • UA_SERVER_LOCATION_NAME - set to "seattle"
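
For orientation, here is a compressed sketch of what the Function does with these variables; this is not its actual source, and the JSON command shape expected by UA Cloud Commander as well as the ADX helper are assumptions:

    // Compressed sketch of the pressure relief loop (not the actual
    // Function source). The JSON command shape expected by UA Cloud
    // Commander and the ADX helper below are assumptions.
    using System.Text;
    using Azure.Messaging.EventHubs;
    using Azure.Messaging.EventHubs.Producer;

    double pressure = await ReadLatestPressureFromAdxAsync();
    if (pressure > 4000) // mbar threshold used by the reference solution
    {
        await using var producer = new EventHubProducerClient(
            Environment.GetEnvironmentVariable("BROKER_PASSWORD"), // connection string
            Environment.GetEnvironmentVariable("TOPIC"));          // "commander.command"

        var command = Encoding.UTF8.GetBytes("""
        { "Command": "MethodCall",
          "Endpoint": "opc.tcp://assembly.seattle/",
          "MethodNodeId": "ns=2;i=435",
          "ParentNodeId": "ns=2;i=424" }
        """);
        await producer.SendAsync(new[] { new EventData(command) });
    }

    // Hypothetical stand-in for the real ADX query.
    static Task<double> ReadLatestPressureFromAdxAsync() => Task.FromResult(4100.0);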

Onboarding the Kubernetes Instance for Management via Azure Arc

  1. On your virtual machine, from a command prompt, navigate to the AKSEdgeTools directory and run AksEdgePrompt.

  2. Run notepad aide-userconfig.json and provide the following information:

    | Attribute | Description |
    | --- | --- |
    | SubscriptionName | The name of your Azure subscription. You can find this in the Azure portal. |
    | SubscriptionId | Your subscription ID. In the Azure portal, click on the subscription you're using and copy/paste the subscription ID. |
    | TenantId | Your tenant ID. In the Azure portal, click on Azure Active Directory and copy/paste the tenant ID. |
    | ResourceGroupName | The name of the Azure resource group which was deployed for this solution. |
    | ServicePrincipalName | The name of the Azure Service Principal to use as credentials. AKS uses this service principal to connect your cluster to Arc. Set this to the same name as your ResourceGroupName for simplicity. |
  3. Save the file and run .\scripts\AksEdgeAzureSetup\AksEdgeAzureSetup.ps1 .\aide-userconfig.json -spContributorRole from the PowerShell window.

  4. Run Read-AideUserConfig from the PowerShell window.

  5. Run Initialize-AideArc from the PowerShell window.

  6. Run Connect-AideArcKubernetes from the PowerShell window.

You can now manage your Kubernetes cluster from the cloud via the newly deployed Azure Arc instance. In the Azure Portal, browse to the Azure Arc instance and select Workloads. The required service token can be retrieved via Get-AideArcKubernetesServiceToken from the AksEdgePrompt on your virtual machine.

Enabling the Product Carbon Footprint Calculation (PCF) in the Asset Admin Shell (AAS) Repository

The Asset Admin Shell (AAS) Repository is automatically configured during deployment of the reference solution, but for the Product Carbon Footprint (PCF) calculation, a WattTime service account needs to be provided. Please refer to the WattTime API documentation for how to register for an account. Once your account has been activated, provide your username and password in the settings of the AAS Repo website, reachable from the Azure Portal via YourDeploymentName-AAS-Repo -> Configuration -> Application settings.

Replacing the Production Line Simulation with a Real Production Line

Once you are ready to connect your own production line, simply delete the VM from the Azure Portal.

  1. Edit the UA-CloudPublisher.yaml file provided in the Deployment folder of this repository, replacing [yourstorageaccountname] with the name of your Azure Storage Account and [key] with the key1 of your Azure Storage Account. You can access this information from the Azure Portal on your deployed Azure Storage Account under Access keys.

  2. Run UA Cloud Publisher with the following command, from a PC that has Kubernetes support and Internet access (via port 9093) and that can connect to the OPC UA-enabled machines in your production line:

     kubectl apply -f UA-CloudPublisher.yaml
    
  3. Open a browser on the Edge PC and navigate to http://localhost:[kubernetesPortForYourPublisherService]. You are now connected to the UA Cloud Publisher's interactive UI. Select the Configuration menu item and enter the following information, replacing [myeventhubsnamespace] with the name of your Event Hubs namespace and [myeventhubsnamespaceprimarykeyconnectionstring] with the primary key connection string of your Event Hubs namespace. The primary key connection string can be read in the Azure Portal under your Event Hubs namespace's "Shared access policies" -> "RootManageSharedAccessKey". Then click Update:

     BrokerClientName: "UACloudPublisher"  
     BrokerUrl: "[myeventhubsnamespace].servicebus.windows.net"
     BrokerPort: 9093  
     BrokerUsername: "$ConnectionString"  
     BrokerPassword: "[myeventhubsnamespaceprimarykeyconnectionstring]"  
     BrokerMessageTopic: "data"
     BrokerMetadataTopic: "metadata"  
     SendUAMetadata: true  
     MetadataSendInterval: 43200  
     BrokerCommandTopic: ""
     BrokerResponseTopic: ""  
     BrokerMessageSize: 262144  
     CreateBrokerSASToken: false  
     UseTLS: false  
     PublisherName: "UACloudPublisher"  
     InternalQueueCapacity: 1000  
     DefaultSendIntervalSeconds: 1  
     DiagnosticsLoggingInterval: 30  
     DefaultOpcSamplingInterval: 500  
     DefaultOpcPublishingInterval: 1000  
     UAStackTraceMask: 645  
     ReversiblePubSubEncoding: false  
     AutoLoadPersistedNodes: true  
    
  4. Configure the OPC UA data nodes from your machines (or connectivity adapter software). To do so, select the OPC UA Server Connect menu item, enter the OPC UA server IP address and port, and click Connect. You can now browse the OPC UA server you want to send telemetry data from. Once you have found an OPC UA node you want, right-click it and select Publish.

Note: UA Cloud Publisher stores its configuration and log files in the cloud within the Azure Storage Account deployed in this solution.

Note: You can check what is currently being published by selecting the Published Nodes tab. You can also see diagnostics information from UA Cloud Publisher on the Diagnostics tab.

License


This work is licensed under a Creative Commons Attribution 4.0 International License.
