Latest Research Trends: Applicable Technologies in Smart Manufacturing [Blockchain and Digital Twins]

Digital Twin

【Paper 1: Digital twin for smart manufacturing: A review of concepts towards a practical industrial implementation】
The latest trends and developments in digital technology make a new manufacturing model possible. Digital systems can monitor, optimize, and control processes by creating virtual copies of the physical world and making decentralized decisions. This model relies on the development of a digital counterpart, the digital twin, of each production resource participating in the manufacturing process. While practical applications of digital twins may vary in technical and operational details, work in the past few years has aimed to identify and define their key functions and attributes.
One of the main challenges of modern manufacturing is increasing product variety through small-batch production. This requires production resources that can quickly adapt and respond to changes in the production environment, providing flexibility, reconfigurability, disturbance rejection, and higher efficiency for the entire production process. Stages such as simulation, code generation, debugging, and testing need to run continuously and "just in time", avoiding time wasted on work that is incompatible with constantly changing products and their requirements. Thus, with Industry 4.0 technologies, these stages are virtualized and integrated into the process itself, becoming a real-time simulation and control counterpart of the physical process. Feedback from the digital manufacturing layer can improve the process even before production starts. According to this vision, digital twin models improve responsiveness in production: optimization algorithms allow online reconfiguration, and virtual commissioning shortens the delivery time of dedicated manufacturing systems.

  • Internet of Things (IoT) solutions, especially Industrial IoT, provide ubiquitous sensing capabilities that can collect data from different shop floor resources, factories and processes;
  • Cyber-physical systems (CPS) integrate computing and physical capabilities, enabling physical resources to compute, communicate, and control;
  • Cloud computing provides powerful computing capabilities for manipulating complex models;
  • Edge computing provides computing power to dispersed resources when latency, data security, and bandwidth concerns may hinder the adoption of cloud-based solutions;
  • Big data and artificial intelligence provide intelligence to entities, models and systems.

Existing problems:

  1. Communication protocols: how to ensure synchronization and consistency between the two worlds, and how to properly implement bidirectional communication protocols? (See the sketch after this list.)
  2. Generation and integration of high-fidelity models: how to build and keep synchronized high-fidelity models when they must be derived from physical resource data subject to variability, disturbance, and uncertainty?
  3. Integration of different domains: how to store, manage, inspect, and verify digital twin data across domains?
  4. Economic cost analysis: the ratio of investment to benefit remains a deployment conundrum.
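
As a toy illustration of the first issue, the sketch below shows one way a bidirectional channel might detect loss of synchronization between a physical asset and its twin. All names here are hypothetical and not taken from the paper.

```python
import time
from dataclasses import dataclass

@dataclass
class StateMessage:
    seq: int          # monotonically increasing sequence number
    timestamp: float  # wall-clock time of the measurement
    payload: dict     # sensor readings or commands

class TwinChannel:
    """Toy bidirectional channel: detects lost or out-of-order messages
    so the two worlds can re-synchronize."""
    def __init__(self):
        self.last_seq_seen = -1

    def receive(self, msg: StateMessage):
        if msg.seq != self.last_seq_seen + 1:
            # Gap detected: the twin and the physical asset have diverged;
            # request a full state snapshot instead of an incremental update.
            return "REQUEST_SNAPSHOT"
        self.last_seq_seen = msg.seq
        return "ACK"

channel = TwinChannel()
print(channel.receive(StateMessage(0, time.time(), {"temp": 71.2})))  # ACK
print(channel.receive(StateMessage(2, time.time(), {"temp": 71.9})))  # gap -> REQUEST_SNAPSHOT
```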

【Paper 2: Digital Twin-driven smart manufacturing: Connotation, reference model, applications and research issues】
This paper reviews recent developments of digital twin-driven technology in manufacturing systems and processes, and analyzes the connotation, application scenarios, and research questions of digital twin-driven smart manufacturing in the context of Industry 4.0. To understand the digital twin and its future potential in manufacturing, the definition and latest developments of the digital twin are summarized. On the basis of a digital twin reference model, existing digital twin technology for smart manufacturing is reviewed and its development methods are systematized. Representative applications are reviewed, focusing on their consistency with the proposed reference model. Finally, outstanding research issues for developing digital twins for smart manufacturing are pointed out.
From physical equipment to factory management to production networks, manufacturing at all levels is becoming intelligent, gaining the ability to learn, configure, and execute through cognitive intelligence. This part outlines the development trend of smart manufacturing, discusses the connotation of digital twin-driven smart manufacturing, and focuses on the possible impact of digital twins on future manufacturing.
The concept of smart manufacturing has been promoted by several agencies, including the U.S. Department of Energy (DoE) and the National Institute of Standards and Technology (NIST). In the view of Davis et al., smart manufacturing refers to intelligentization across the entire manufacturing supply chain enterprise. It includes real-time understanding, reasoning, planning, and management of all aspects of the manufacturing process, facilitated by the extensive use of advanced sensor-based data analytics, modeling, and simulation. NIST defines smart manufacturing systems as fully integrated, collaborative manufacturing systems that respond in real time to meet changing needs and conditions in the factory, the supply network, and customer demands.
In smart manufacturing, the entities in the factory are connected with the industrial Internet through standard network gateways, and abstracted as digital twins in cyberspace. Every "digital twin" in cyberspace is an abstraction of its counterpart in the physical world, reflecting its physical state. Cyberspace stores and processes streaming data from connected physical objects. These data are used to model, simulate and predict the behavior of individual physical objects under dynamic operating conditions. The popularization of smart technologies such as Big Data Processing and Artificial Intelligence enables the extraction of manufacturing intelligence at every moment of manufacturing activities. The collective intelligence of locally connected factories and cyberspace paves the way for some dramatic changes in terms of enterprise internal operations, inter-enterprise collaboration, and production models, as shown in Figure 3.
Digital twins play a pivotal role in the vision of smart manufacturing. It turns analyzing the past into predicting the future. The real-time presentation of reality through digital twins allows us to develop from post-data collection and analysis to real-time and pre-event business practices. Echoing the vision of smart manufacturing in Figure 3, Digital Twin can affect future manufacturing in the following ways:
A manufacturing asset can be connected and abstracted into cyberspace through its digital twin. With near real-time data captured from assets, manufacturers can gain a clearer understanding of the actual performance and operating conditions of production assets and make proactive decisions about optimal operations. Through the flow of real-time information from manufacturing assets, manufacturers can improve situational awareness and increase operational resilience and flexibility, especially in a large-scale personalization environment.
Digital Twins can also connect workers on the shop floor. A profile of a person, including personal data such as weight, health data, activity data, and emotional state, can help build models of individual well-being and working conditions for humans in factories. Understanding the state of the workforce can inform the design of human-centered human-robot collaboration strategies that improve worker well-being and achieve optimal production performance. Workers can also improve their skills through ultra-realistic training programs that combine actual factory settings with virtual what-if scenarios. The ability to build personalized virtual training programs based on digital twins of workers and factories can deliver enormous resource optimization and operational efficiency gains.
The "digital twin" can also work for the factory, replicating the real factory environment. Digital twins and data-driven production operations can establish a self-organizing factory environment with complete operational visibility and flexibility. Connectivity and data tracking throughout the manufacturing process transforms factory operations into data-driven, evidence-based practices, providing the ability to track sources of product failures, analyze production efficiency bottlenecks, and predict future resource needs.
By connecting manufacturing assets, people, and services through a "digital twin," every aspect of an enterprise can be represented virtually. Connecting distributed Digital Twins between companies will allow companies to build virtually connected production networks. Leveraging big data capabilities, this strategy provides unprecedented visibility into operational performance and creates the possibility to predict the future needs of the digital twin network.
Digital twins reflect the bidirectional dynamic mapping between physical objects and their virtual models in cyberspace. A digital twin provides a middleware architecture that abstracts its physical object so that advanced engineering management systems can make near real-time decisions. Figure 4 shows a digital twin reference model. At its technical core, the development of a digital twin requires three components: (1) an information model that abstracts the specifications of physical objects; (2) a communication mechanism that transmits bidirectional data between the digital twin and its physical counterpart; and (3) a data processing module that can extract information from heterogeneous multi-source data and construct a real-time representation of physical objects. These three components must work together to build a digital twin. Without an information model to abstract the characteristics of physical entities, data transmitted into cyberspace loses its meaning and context. Without a data synchronization mechanism between the physical object and the information model, the connection between these two endpoints is broken, and the information model becomes a one-time snapshot of the physical object. High-performance data processing is key to bridging the gap between heterogeneous data streams and digital twin information models.
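
To make these three components concrete, here is a minimal Python sketch of an information model, a toy communication mechanism, and a data processing step working together. Everything here (class names, fields, the JSON framing) is illustrative and hypothetical rather than the paper's implementation.

```python
import json
import statistics

# (1) Information model: abstracts the specification of a physical object.
class MachineModel:
    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.state = {"spindle_speed": 0.0, "temperature": 0.0}

    def update(self, readings: dict):
        self.state.update(readings)

# (2) Communication mechanism: here just JSON decoding of a telemetry frame;
# in practice this would be OPC UA, MQTT, or a custom protocol.
def decode_frame(raw: bytes) -> dict:
    return json.loads(raw.decode("utf-8"))

# (3) Data processing: turns a heterogeneous stream into a usable
# representation, e.g. smoothing noisy temperature readings.
def process(samples):
    return {"temperature": statistics.median(s["temperature"] for s in samples)}

twin = MachineModel("CNC-01")
frames = [decode_frame(b'{"temperature": 70.9}'),
          decode_frame(b'{"temperature": 71.4}'),
          decode_frame(b'{"temperature": 95.0}')]  # outlier suppressed by median
twin.update(process(frames))
print(twin.state)
```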

【Paper 3: Digital-Twin-Based Job Shop Scheduling Toward Smart Manufacturing】
Job shop scheduling has always played an important role in the manufacturing process and is one of the decisive factors in manufacturing efficiency. In actual production scheduling, uncertain events, information asymmetry, and abnormal disturbances lead to execution deviations and degrade the efficiency and quality of schedule execution; traditional scheduling methods cannot solve this problem well. With the rise of digital twin technology, a job shop scheduling method based on digital twins is proposed. The method features virtual-physical interaction, real-time mapping, and symbiotic evolution, which reduce scheduling deviation. The paper presents the architecture and working principle of the new job shop scheduling model, then proposes a scheduling resource parameter update method and a dynamic interactive scheduling strategy to realize real-time precise scheduling. Finally, a prototype system is designed to verify the effectiveness of the job shop scheduling model.

The DT-based job shop scheduling architecture consists of two parts, physical space and virtual space, as shown in Figure 1. The two parts communicate through the CPS unit. In the virtual space, scheduling data can be obtained from monitoring resources in the physical space, such as equipment, workers, task information, etc. Based on the obtained resource data, a scheduling model and algorithm are established, a scheduling strategy is obtained and a simulation is carried out. The final verified scheduling plan is fed back to the physical space for execution. In the physical space, planning is broken down into machine execution, operator assignment, and material movement.
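
A highly simplified sketch of this monitor-schedule-simulate-feedback loop follows, assuming a toy shortest-processing-time dispatching rule; none of the function names or numbers come from the paper itself.

```python
import random

def monitor_physical_space():
    """Stand-in for CPS-unit data collection: machines, workers, tasks."""
    return {"machine_speed": random.uniform(0.8, 1.2),
            "pending_jobs": [("J1", 10), ("J2", 4), ("J3", 7)]}  # (job, base time)

def build_schedule(state):
    """Toy dispatching rule: shortest processing time first,
    with times scaled by the observed machine speed."""
    jobs = sorted(state["pending_jobs"], key=lambda j: j[1])
    return [(job, base / state["machine_speed"]) for job, base in jobs]

def simulate(schedule):
    """Verify the plan in virtual space: here just the total makespan."""
    return sum(t for _, t in schedule)

state = monitor_physical_space()
plan = build_schedule(state)
if simulate(plan) < 30:  # accept only if the verified makespan is acceptable
    print("feed back to physical space:", plan)
```
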
Through this new scheduling mechanism, on the one hand, data on relevant parameters (processing time, worker skill level, cost, energy consumption, etc.) can be used to fit exact probability distributions for the scheduling parameters. On the other hand, through data exchange between the two spaces, the various disturbances in the workshop can be analyzed and the corresponding constraints adjusted for each kind of disturbance. The dispatch plan can therefore be updated and returned to the physical shop floor, giving better adaptability and timely response.
Virtual-physical interaction builds a bridge between the virtual space and the real space. As shown in Figure 2, implementing it involves three steps. First, to realize real-time interaction between the virtual space and the physical space, various data from the physical workshop are needed; radio frequency identification, wireless sensor networks, smart instruments, and various other sensors collect and monitor data during workshop production. Second, based on the collected workshop data, a high-speed, stable, customized data transmission protocol is designed to synchronize and fuse virtual-space and physical-space data; when abnormal events occur in the workshop, such as process anomalies or equipment failures, the workshop transmits the abnormal data. Finally, through modeling, simulation, and optimization analysis in the virtual space, a timely and accurate new plan is obtained and fed back to the execution system of the physical workshop. Through the interaction between the physical workshop and the virtual workshop, dynamic changes in the workshop can be grasped and responded to in real time, and the production process can be continuously optimized.
In traditional job shop scheduling, the parameters of scheduling resources are inconsistent with the actual production conditions on the shop floor: the parameters used in scheduling are rough estimates based on production experience, usually obtained through statistics and treated as known constants. For example, the processing time of a task on a certain machine is assumed to be a fixed constant. But in actual workshop production, the processing time of a job depends not only on the machine but also on the workers and on the job being processed; the fixed-constant assumption ignores the influence of different workers and of changes in processing time over time. Therefore, schedules generated from fixed time parameters do not perform well in real workshops. The parameter update method is shown in Figure 3.
Job shop scheduling data includes physical execution data and virtual shop-floor data, such as processing times, costs, operation data, and historical simulation data. Using the scheduling data obtained from the fusion and analysis of virtual-space and physical-space data, the efficiency distribution functions of machines and workers are fitted. Through continuous data iteration and fusion, the parameters are enriched and refined, making them more precise. After simulation and verification, the scheduling plan is fed back to the physical workshop for execution.
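
For instance, fitting a probability distribution to observed processing times, rather than fixing them as constants, could look like the following sketch; it assumes SciPy and a lognormal model purely for illustration, and the data values are invented.

```python
import numpy as np
from scipy import stats

# Observed processing times (minutes) for one job type on one machine,
# pooled from physical execution logs and virtual simulation runs.
observed = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.2, 13.0, 10.4])

# Fit a lognormal distribution instead of assuming a fixed constant.
shape, loc, scale = stats.lognorm.fit(observed)
dist = stats.lognorm(shape, loc=loc, scale=scale)

# The scheduler can now use a robust point estimate or a safety quantile.
print("median processing time:", dist.median())
print("90th percentile (for conservative plans):", dist.ppf(0.9))

# As new shop-floor data arrives, refit to keep the parameters current.
new_data = np.append(observed, [11.8, 10.6])
shape, loc, scale = stats.lognorm.fit(new_data)
```
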
Under this new scheduling mechanism, precise parameter values can be obtained from the fitted distribution functions, and the model can produce an accurate scheduling plan through the corresponding scheduling algorithm in the virtual space. The distributions of the relevant parameters are updated during successive loop executions, generating new schedules that better match reality.

【Paper 4: Digital twin-driven carbon emission prediction and low-carbon control of intelligent manufacturing job-shop】
With the development of sensing and data processing technology, intelligent manufacturing based on Cyber-Physical Systems (CPS) is the development trend of the manufacturing industry, and digital twins are considered an implementation method for CPS. Given the complexity and uncertainty of discrete manufacturing workshops, the automatic integration of carbon emission data and the low-carbon control of manufacturing systems are two major challenges. To achieve carbon emission reduction in intelligent manufacturing workshops, a digital twin-driven carbon emission prediction and low-carbon control approach for intelligent manufacturing job-shops is proposed, comprising a digital twin model of the low-carbon manufacturing workshop, digital twin data interaction and integration for low-carbon manufacturing, and digital twin-driven carbon emission prediction and low-carbon control. Three key enabling technologies are studied: digital twin data processing for low-carbon manufacturing workshops, carbon emission evaluation and forecasting services based on digital twins, and low-carbon control methods for manufacturing workshops driven by digital twin data. The approach combines the latest information and computing technologies with low-carbon manufacturing to validate and optimize control schemes through a virtual workshop. At the same time, carbon emission assessment and forecasting can be packaged as a service provided by machine tools to customers.
The digital twin virtual workshop can not only simulate the physical world but also be updated synchronously with it. To achieve this, a data sensor network must first be constructed. For a low-carbon manufacturing workshop, the carbon emission collection sensor network is the data collection network for low-carbon manufacturing, covering carbon emission data, machine tool status data, and work-in-process data. Combined with the actual production process, the sensor network has two parts: a static network structure and a dynamic network structure. The static network is configured during the design of the manufacturing system and realizes the system's physical configuration process. The dynamic sensor network, in contrast, is aimed at one or several specific production tasks after production planning and scheduling: appropriate sensors are selected to build a sub-network serving the logical configuration process of the task. Through dynamic network construction, the required data can be obtained and the sensors are used efficiently.
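
A minimal sketch of the static/dynamic distinction: the static network is fixed at design time, while a dynamic sub-network is selected per task. The sensor names and fields below are hypothetical.

```python
# Static network: every sensor installed at design time, keyed by data type.
STATIC_NETWORK = {
    "power_meter_1": {"machine": "M1", "data": "carbon"},
    "power_meter_2": {"machine": "M2", "data": "carbon"},
    "status_tag_1":  {"machine": "M1", "data": "status"},
    "rfid_gate_1":   {"machine": "M3", "data": "wip"},
}

def build_subnetwork(task_machines, needed_data):
    """Dynamic configuration: pick only the sensors relevant to the
    machines and data types of the scheduled task."""
    return [sid for sid, meta in STATIC_NETWORK.items()
            if meta["machine"] in task_machines and meta["data"] in needed_data]

# A task routed over M1 that needs carbon-emission and status data:
print(build_subnetwork({"M1"}, {"carbon", "status"}))
# -> ['power_meter_1', 'status_tag_1']
```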

Digital twin data mainly comes from four sources: data related to the physical workshop, data related to the virtual workshop, data related to the workshop service system, and data fused from the above three parts. Since digital twin data involves different types of data, data processing is crucial for subsequent simulation, prediction, and decision-making. A digital twin data processing method for low-carbon manufacturing workshops is proposed, as shown in Figure 2.
Since discrete manufacturing processes are stochastic, production conditions change constantly over time. Using the deviations as input data, a prediction model is trained with an artificial intelligence algorithm, i.e., incremental learning. Data processing comprises three steps: data preprocessing, deviation comparison, and an incremental-learning-based predictive model.
Data preprocessing: since raw data such as carbon emissions, machine status, and work-in-process status may contain errors, repetitions, or redundancy, preprocessing is required. It consists of three steps: data cleaning, data integration, and data compression. a) Data cleaning: because energy-related production data is collected automatically and in real time, it contains many errors such as duplicate and incomplete records; an effective cleaning method removes them. b) Data integration: the cleaned data usually comes from different data sources and needs to be merged into a consistent data store, such as a data warehouse. c) Data compression: the acquired data is so large and complex that analyzing and mining it directly would be impractically slow; compression reduces the data volume without destroying the integrity of the original data.
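
The three preprocessing steps might be sketched with pandas as follows; the column names and CO2 factors are made up for illustration.

```python
import pandas as pd

raw = pd.DataFrame({
    "ts":     ["09:00", "09:00", "09:01", "09:02", None],
    "kwh":    [1.2, 1.2, 1.3, None, 1.4],
    "status": ["run", "run", "run", "run", "idle"],
})

# a) Data cleaning: drop duplicate and incomplete records.
clean = raw.drop_duplicates().dropna(subset=["ts", "kwh"])

# b) Data integration: join with a second source (machine master data)
# into one consistent store.
machines = pd.DataFrame({"status": ["run", "idle"], "co2_factor": [0.58, 0.05]})
merged = clean.merge(machines, on="status", how="left")

# c) Data compression: aggregate per status instead of keeping every raw
# record, shrinking volume while preserving the analytic content.
compressed = merged.groupby("status", as_index=False).agg(
    kwh=("kwh", "sum"), co2=("co2_factor", "mean"))
print(compressed)
```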

  • Deviation comparison
    After obtaining the preprocessed data, the actual status is compared with the planned status to realize the deviation comparison, covering process status, logistics status, and buffer status, since these three kinds of data affect the processing progress of the machine tools.
  • Prediction and low-carbon control model based on incremental learning
    Through data preprocessing and deviation comparison, a predictive model based on incremental learning can predict the carbon emissions of a given machine tool. The model is trained on historical data, including process data, machine data, planning data, and logistics data; the trained model is then used to predict machine-tool carbon emissions (see the sketch below).
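
A minimal sketch of such an incremental predictor, using scikit-learn's `partial_fit` as a stand-in for whatever incremental-learning algorithm the paper actually uses; the features and values are invented.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

# Features: [spindle load, processing time, logistics delay]; target: kg CO2.
X_hist = np.array([[0.6, 10.0, 1.0], [0.8, 12.0, 0.5],
                   [0.7, 11.0, 2.0], [0.9, 14.0, 0.0]])
y_hist = np.array([4.1, 5.3, 4.9, 6.0])

scaler = StandardScaler().fit(X_hist)
model = SGDRegressor(random_state=0)
model.partial_fit(scaler.transform(X_hist), y_hist)  # initial training

# Incremental learning: each new deviation batch refines the model
# without retraining from scratch.
X_new = np.array([[0.75, 11.5, 1.5]])
y_new = np.array([5.0])
model.partial_fit(scaler.transform(X_new), y_new)

print("predicted CO2:", model.predict(scaler.transform([[0.8, 12.0, 1.0]])))
```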

Blockchain

【Paper 5: Blockchain-Secured Smart Manufacturing in Industry 4.0: A Survey】
Blockchain is a new generation of secure information technology that promotes business and industrial innovation. Much research has been conducted on key enabling technologies for resource organization and system operation in blockchain-secured smart manufacturing under Industry 4.0. However, the development and adoption of these blockchain applications are fundamentally hampered by various issues of scalability, flexibility, and cybersecurity. This survey discusses how blockchain systems can overcome potential cybersecurity barriers to enable manufacturing intelligence in Industry 4.0. Eight cybersecurity issues (CIs) are identified in manufacturing systems, and ten indicators for implementing blockchain applications in manufacturing systems are designed from the survey of research on blockchain-secured smart manufacturing. The study sheds light on how these CIs are treated in the literature. Based on the insights gained from this analysis, future research directions for blockchain-secured smart manufacturing are proposed, which may guide research on pressing cybersecurity issues toward realizing Industry 4.0 intelligence.
【Paper 6: Blockchain-Based Trust Mechanism for IoT-Based Smart Manufacturing System】
As Internet of Things technology yields massive data, integrated collaborative manufacturing systems have emerged. However, the "trust tax" levied on manufacturers in their countless engagements with customers, suppliers, distributors, governments, service providers, and other manufacturers is very high. Blockchain is an emerging technology that enables more transparent, secure, and efficient transactions. It represents a new paradigm and a new way of thinking about how to securely store, integrate, and communicate data among disparate stakeholders, organizations, and systems that do not need to trust each other. Blockchain can greatly help lower the trust tax, especially for SMEs, which bear a heavier trust tax than established manufacturers. This paper studies a blockchain-based security and trust mechanism and expounds the specific application of blockchain to quality assurance, one of the key points of an intelligent manufacturing strategy. Using data generated during the intelligent manufacturing process, it becomes possible to trace the source of materials, facilitate equipment management, improve transaction efficiency, and create flexible pricing mechanisms. The dairy industry is used to instantiate blockchain's value proposition for quality assurance.
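
The core idea of blockchain-backed traceability, retrieving the source of materials through a tamper-evident chain of records, can be sketched in a few lines. This is a toy hash chain, not the paper's system, and the dairy steps are invented for illustration.

```python
import hashlib
import json
import time

def make_block(prev_hash, record):
    """One traceability record, chained to its predecessor by hash."""
    body = {"prev": prev_hash, "time": time.time(), "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain = [make_block("0" * 64, {"step": "raw milk received", "farm": "F-17"})]
chain.append(make_block(chain[-1]["hash"], {"step": "pasteurized", "batch": "B-204"}))
chain.append(make_block(chain[-1]["hash"], {"step": "packaged", "line": "L-2"}))

# Any later tampering with a record breaks the chain of hashes,
# which is what makes the quality trail auditable.
for prev, blk in zip(chain, chain[1:]):
    assert blk["prev"] == prev["hash"]
print("traceable steps:", [b["record"]["step"] for b in chain])
```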

【Paper 7: Integration of Blockchain, IoT and Machine Learning for Multistage Quality Control and Enhancing Security in Smart Manufacturing】
Smart manufacturing systems are developing around requirements for equipment reliability and quality prediction, and many machine learning techniques are being researched for this purpose. Another issue considered an important part of the industry is data security and management. To address these problems, the paper combines blockchain and machine learning to secure system transactions and to screen datasets against falsified data, and uses big data technology to manage and analyze the collected data. The blockchain system is implemented on the private Hyperledger Fabric platform, and the predictive side of fault diagnosis is evaluated with hybrid predictive techniques. Nonlinear machine learning techniques are used to evaluate the system's quality control, and the complex environment is modeled to obtain the true positive rate of the quality-control method (a sketch of this evaluation follows the list below). The main contributions of the paper are as follows:

• Real-time monitoring based on Internet of Things environmental sensors.
• Use blockchain to reduce delays in decision-making.
• Apply blockchain to secure decentralized and transparent transactions.
• Use smart contracts to enhance manufacturing networks.
• Predictive analytics based on fault diagnosis of manufacturing systems.
• Apply big data techniques to manage large-scale manufacturing datasets.
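
As a rough illustration of that last evaluation step, measuring the true positive rate of a nonlinear quality-control classifier, consider this sketch with simulated data; the features, labels, and choice of a random forest are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Simulated sensor features (vibration, temperature, current) and a
# defect label with a nonlinear dependence on the features.
X = rng.normal(size=(500, 3))
y = ((X[:, 0] ** 2 + X[:, 1] * X[:, 2]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# True positive rate = recall on the defect class.
tpr = recall_score(y_te, clf.predict(X_te))
print(f"true positive rate: {tpr:.2f}")
```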

【Paper 8: DeepBlockScheme: A Deep Learning-Based Blockchain-Driven Scheme for Secure Smart City】
Today, the continuous deployment of sensors and Internet of Things (IoT) devices drives the growth of manufacturing data in smart cities; factories around the world are rapidly becoming more connected through sensory data. Big data is generally characterized by five Vs: volume, value, veracity, variety, and velocity. Latency, scalability, centralization, reliability, security, and privacy are major challenges for advanced smart city applications such as smart manufacturing and smart factories. Meanwhile, blockchain is an emerging distributed technology deployed in recent applications to minimize central-authority control and provide a secure environment. Deep learning, in turn, is a cutting-edge technology providing modern analytical tools for processing and analyzing data, enabling scalable production in smart factory applications in smart cities. The paper proposes DeepBlockScheme, a deep-learning-based blockchain-driven scheme for secure smart cities, where blockchain is used in a distributed manner at the fog layer to ensure the integrity, decentralization, and security of manufacturing data, while deep learning is used at the cloud layer to increase production, automate data analysis, and increase the communication bandwidth of smart factory and smart manufacturing applications in smart cities. A case study of automotive manufacturing with a state-of-the-art service scenario of the proposed scheme is presented and compared with existing studies on key parameters such as security and privacy tooling. Finally, open research challenges of the scheme are discussed.

The first layer contains various physical IoT devices, divided into three groups: industrial equipment (flow meters, power meters, speedometers, etc.), sensor devices (RFID, light, proximity, pressure, and ultrasonic sensors), and IoT devices (smart cars, watches, cameras, monitors, and speakers). These devices collect raw IoT data for the various smart cities (SC1, SC2, SC3).
The second layer includes various industrial gateways at the edge layer of the smart city as a data communication medium between the device layer and the blockchain layer.
The third layer, at the edge of the smart city, adopts blockchain-based distributed information hub edge nodes. This consortium blockchain is controlled by trusted entities (designated by governments and/or smart manufacturers), and its main role is to verify and validate data before it is added to the blockchain.
The fourth layer is the data processing and analysis layer, which analyzes all the data collected by the device layer and verified by the network layer, and uses methods based on deep learning to extract knowledge.
The fifth and last layer is the application layer. All knowledge and results extracted from the data analysis layer are directly applied to the application layer to realize self-management, self-distribution, self-automation, scalable production and rapid development of intelligent manufacturing.


In the method flow of the proposed secure smart city scheme, functionality is divided into three modules: data acquisition, data communication, and data processing and analysis. Blockchain technology at the fog layer secures communication and stores data on an immutable, tamper-proof ledger, while deep learning handles data processing and analysis at the cloud layer. As deep learning improves, smart manufacturing production increases in line with smart city requirements. The methodological flow of the proposed scheme is shown in Figure 2, and the modules are discussed below.
Data collection: as part of the device layer, data collection is the first module of the scheme. This layer contains various types of devices, including industrial equipment, sensors, and IoT devices (flow meters, power meters, ultrasonic and proximity sensors, smart watches, and smart monitors). All these devices generate raw data such as temperature, rotational speed, power, and current, which is transmitted to smart city applications including smart manufacturing and smart grid; these applications derive useful information from the raw data.
Data communication: located at the edge layer as the second module of the proposed solution. With the help of industrial gateways, useful information or data is transferred to the blockchain network. Conventional smart manufacturing is centralized and therefore suffers security and privacy disadvantages; to alleviate this, data is passed to the blockchain network.
Blockchain-based distributed information hub: the fog layer consists of a blockchain-based distributed information hub module with blockchain networks of government miners and local nodes. Verification and validation are performed by miners using a proof-of-work consensus algorithm. First, smart manufacturing transmits information to the blockchain network, and the miners initiate the verification process. One miner completes the verification first by solving a computational puzzle; the result is then broadcast across the blockchain network and checked by all other miner nodes. When more than 51% of the miner nodes have verified the information, the verification process is complete and a block is added to the blockchain; otherwise the block is not added. The distributed blockchain network thus provides the security and privacy of a tamper-proof, immutable ledger.
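
A toy version of this verify-then-vote flow, with a simplified proof-of-work puzzle and a 51% threshold; the difficulty, node count, and single-transaction framing are illustrative only.

```python
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash has `difficulty` leading zeros --
    the 'computational puzzle' the first miner solves."""
    nonce = 0
    prefix = "0" * difficulty
    while not hashlib.sha256(f"{data}{nonce}".encode()).hexdigest().startswith(prefix):
        nonce += 1
    return nonce

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Every other miner re-checks the solution cheaply."""
    return hashlib.sha256(f"{data}{nonce}".encode()).hexdigest().startswith("0" * difficulty)

tx = "M1:temperature=71.4"
nonce = proof_of_work(tx)

# The block is appended only when more than 51% of the nodes agree.
votes = [verify(tx, nonce) for _ in range(10)]  # 10 miner nodes
if sum(votes) / len(votes) > 0.51:
    print("block appended, nonce =", nonce)
```
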
Data analysis and processing: the last module of the proposed scheme, located at the cloud layer, provides deep learning; intelligent cloud functions based on deep learning are used here. The network consists of three parts: an input layer, multiple hidden layers, and an output layer. The input layer takes current production and analysis data as input; the data then passes through the hidden layers (A1, A2, A3 in Figure 2), which predict future outputs and increase data yield and scalability; the result is passed to the output layer and then on to the application layer. In smart city applications such as smart manufacturing and smart grid, this supports rapid data development, scalable data production, self-automation, self-management, self-distribution, and self-contribution.
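
The input/hidden/output structure described here can be sketched as a bare-bones forward pass in NumPy; the layer sizes, random weights, and feature values are all placeholders rather than the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One dense layer with ReLU activation (random weights for illustration)."""
    w = rng.normal(scale=0.5, size=(x.shape[-1], n_out))
    b = np.zeros(n_out)
    return np.maximum(0.0, x @ w + b)

# Input layer: current production and analysis data (e.g. 4 features).
x = np.array([0.7, 12.0, 1.5, 0.9])

# Hidden layers A1, A2, A3 as in Figure 2.
a1 = layer(x, 8)
a2 = layer(a1, 8)
a3 = layer(a2, 8)

# Output layer: the predicted future output passed on to the application layer.
w_out = rng.normal(scale=0.5, size=(8, 1))
y = a3 @ w_out
print("predicted future output:", y)
```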
