Ali Bemani, Niclas Björsell.
Abstract
Industry 4.0 enables industry to build compact, precise, and connected assets, and it has made modern industrial assets a massive source of data that can be used for process optimization, product quality assessment, and predictive maintenance (PM). Large amounts of data are collected from machines, processed, and analyzed by different machine learning (ML) algorithms to achieve effective PM. These machines, regarded as edge devices, transmit their data readings to the cloud for processing and modeling. Transmitting massive amounts of data between the edge and the cloud is costly, increases latency, and raises privacy concerns. To address this issue, efforts have been made to use edge computing in PM applications, reducing data transmission costs and increasing processing speed. Federated learning (FL) has been proposed as a mechanism that can create a model from distributed data in the edge, fog, and cloud layers without violating privacy, and it offers new opportunities for a collaborative approach to PM applications. However, FL faces challenges in industrial asset management, especially in PM applications, that must be addressed before it is fully compatible with these applications. This study describes distributed ML for PM applications and proposes two federated algorithms: a federated support vector machine (FedSVM) with memory for anomaly detection and a federated long short-term memory (FedLSTM) for remaining useful life (RUL) estimation, which enable factories at the fog level to maximize the accuracy of their PM models without compromising their privacy. A global model at the cloud level has also been generated based on these algorithms. We have evaluated the approach using the Commercial Modular Aero-Propulsion System Simulation (CMAPSS) dataset to predict engines' RUL. Experimental results demonstrate the advantage of FedSVM and FedLSTM in terms of model accuracy, model convergence time, and network resource usage.
Keywords: aggregation strategy; distributed machine learning algorithm; edge and fog computing; federated learning; resource allocation
Year: 2022 PMID: 36016014 PMCID: PMC9415777 DOI: 10.3390/s22166252
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
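The abstract describes fog-level models being combined into a global model at the cloud level. The paper's exact aggregation code is not reproduced here; the following is a minimal sketch of a FedAvg-style weighted parameter average, with the function name, flattened parameter vectors, and client sample counts assumed purely for illustration.

```python
import numpy as np

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of parameter vectors from fog/edge clients (FedAvg-style).

    client_params : list of 1-D numpy arrays, one flattened parameter vector per client
    client_sizes  : list of ints, number of local training samples per client
    """
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    # Weighted sum of the clients' parameter vectors
    return sum(w * p for w, p in zip(weights, client_params))

# Toy usage: three fog nodes with different amounts of local data
params = [np.random.randn(10) for _ in range(3)]
sizes = [120, 300, 80]
global_params = fedavg_aggregate(params, sizes)
print(global_params.shape)  # (10,)
```

In one FL round, each fog node would send its updated parameters and local sample count to the cloud, which would broadcast the averaged result back for the next round of local training.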
Figure 1. Hierarchical and collaborative edge-fog-cloud architecture.
Figure 2. Proposed collaborative PM at the edge, fog, and cloud levels.
Summary of the related research.
| Approach | Ref | Key Ideas |
|---|---|---|
| Distributed data | [ ] | Wireless communication in edge learning |
| | [ ] | Deep neural network for fog-cloud-based processing that adapts to dynamic resource variations |
| | [ ] | Genetic algorithm for scheduling to minimize overall latency |
| ML at the edge | [ ] | Distributed ML and implementation challenges (hardware, security, privacy, and communication) |
| | [ ] | A survey of distributed machine learning |
| | [ ] | Distributed gradient descent algorithm suited to non-iid data |
| Federated ML | [ ] | Stochastic variance-reduction method for solving the federated learning problem |
| | [ ] | Challenges of non-iid data for model training in horizontal and vertical FL |
| | [ ] | Overview of FL technologies, protocols, and applications |
| | [ ] | Horizontal federated learning, vertical federated learning, and federated transfer learning |
| | [ ] | Analysis of FL regarding data partitioning, privacy, model, and communication |
| Federated aggregation algorithms | [ ] | FedAvg, FedProx, CO-OP, FSVRG |
| | [ ] | FSVRG on fog or cloud |
| | [ ] | FedProx |
| Distribution strategies and aggregation | [ ] | Hierarchical FL based on the number of aggregations relative to the number of iterations (epochs) |
| | [ ] | Hierarchical FL to minimize training loss and latency |
| Distributed ML for PM | [ ] | Distributed PM algorithm based on FL and blockchain |
| | [ ] | Cross-device FL for collaborative PM |
| | [ ] | Real-time fault detection system for edge computing |
| | [ ] | Edge computing in IoT-based manufacturing |
| | [ ] | Federated SVM for horizontal FL and federated random forest for vertical FL |
| | [ ] | Novel FL algorithm for LSTM-based anomaly detection |
| | [ ] | Combination of CNN and LSTM in distributed anomaly detection applications |
Figure 3. FedSVM architecture.
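Figure 3 outlines the FedSVM architecture. As a hedged sketch only (the learning rate, regularization constant, and label convention below are assumptions, not the paper's settings), one local linear-SVM update with hinge-loss stochastic gradient descent, the kind of step an edge client could run before sending its parameters to the fog/cloud aggregator, might look like this:

```python
import numpy as np

def local_svm_sgd(w, b, X, y, lr=0.01, lam=0.001, epochs=1):
    """One client's local update of a linear SVM via SGD on the hinge loss.

    X : (n_samples, n_features) local sensor features
    y : (n_samples,) labels in {-1, +1} (e.g., healthy vs. anomalous)
    Returns the updated (w, b) to be sent to the aggregator.
    """
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (np.dot(w, xi) + b)
            if margin < 1:                      # point violates the margin
                w = w - lr * (lam * w - yi * xi)
                b = b + lr * yi
            else:                               # only the regularizer acts
                w = w - lr * lam * w
    return w, b
```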
Figure 4. LSTM architecture and memory blocks.
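Figure 4 shows the LSTM memory blocks used in FedLSTM. A minimal LSTM regressor mapping a window of multivariate sensor readings to a single RUL value is sketched below; the use of PyTorch, the layer sizes, and the sensor count are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class RULRegressor(nn.Module):
    """Minimal LSTM that maps a window of sensor readings to one RUL estimate."""
    def __init__(self, n_sensors=14, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, window_len, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)  # RUL from the last time step

# Toy forward pass: batch of 8 windows, 30 time steps, 14 sensors
model = RULRegressor()
rul = model(torch.randn(8, 30, 14))
print(rul.shape)  # torch.Size([8])
```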
Figure 5. Random topology formation of FedLSTM.
Figure 6. Undirected graph of communication between edge devices and fog servers for synchronous FL.
Figure 7. Undirected graph of communication between edge devices and fog servers for asynchronous FL; edge devices 5 and 6 play the fog role.
Figure 8. Moving window strategy over sensor measurements of an engine.
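Figure 8 refers to a moving-window strategy over an engine's sensor measurements. A minimal sketch of how overlapping windows and their RUL labels could be built from one run-to-failure trajectory follows; the window length and the piecewise-linear RUL cap are assumed values, not necessarily those used in the paper.

```python
import numpy as np

def make_windows(sensor_trace, window=30, max_rul=125):
    """Slice one engine's (n_cycles, n_sensors) trace into overlapping windows.

    Each window ends at cycle t and is labeled with the remaining useful life
    at t, capped at max_rul (a common piecewise-linear labeling for CMAPSS).
    """
    n_cycles = len(sensor_trace)
    X, y = [], []
    for t in range(window, n_cycles + 1):
        X.append(sensor_trace[t - window:t])
        y.append(min(n_cycles - t, max_rul))
    return np.stack(X), np.array(y)

# Toy engine: 200 cycles, 14 sensors
trace = np.random.randn(200, 14)
X, y = make_windows(trace)
print(X.shape, y.shape)  # (171, 30, 14) (171,)
```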
Figure 9. Convergence time of synchronous FedSVM based on the communication topology of Figure 6 with different optimizers.
Figure 10. Four examples of labeled RUL predictions for the test engines based on the synchronous FedSVM model.
Performance analysis of the synchronous FedSVM.
| Optimizer | Evaluation Metric | FD001 | FD002 | FD003 | FD004 |
|---|---|---|---|---|---|
| GD | Runtime (s) | 61.6 | 150 | 69 | 143.5 |
| | Final acc (%) | 92.4 | 77.9 | 94.2 | 78.4 |
| SGD | Runtime (s) | 18.9 | 43 | 20.5 | 45.5 |
| | Final acc (%) | 92.5 | 78.9 | 92.2 | 71.7 |
| FSVRG | Runtime (s) | 140 | 362 | 161 | 337 |
| | Final acc (%) | 90.3 | 74 | 91.7 | 86.8 |
Performance analysis of the asynchronous FedSVM.
| Optimizer | Evaluation Metric | FD001 | FD002 | FD003 | FD004 |
|---|---|---|---|---|---|
| GD | Runtime (s) | 61.8 | 145 | 68.6 | 141.4 |
| | Final acc (%) | 92.2 | 77.2 | 93.8 | 77.1 |
| SGD | Runtime (s) | 19.8 | 42.5 | 21.4 | 43 |
| | Final acc (%) | 90 | 77.5 | 92.1 | 83.1 |
Figure 11. Convergence time of synchronous FedLSTM based on the communication topology of Figure 6.
Figure 12. Four examples of RUL predictions for the test engines based on the synchronous FedLSTM model.
Performance analysis of the synchronous FedLSTM.
| Number of Epochs | Evaluation Metric | FD001 | FD002 | FD003 | FD004 |
|---|---|---|---|---|---|
| 1 | RMSE | 13.33 | 22.83 | 12.57 | 25.1 |
| | SF | 242 | 226 | 156 | 560 |
| 2 | RMSE | 15.47 | 22.4 | 10.73 | 26.25 |
| | SF | 720 | 690 | 2469 | 470 |
| 3 | RMSE | 15.53 | 22.93 | 9.7 | 24.85 |
| | SF | 690 | 348 | 617 | 202 |
| 4 | RMSE | 14.5 | 21.68 | 9.65 | 17.14 |
| | SF | 709 | 753 | 895 | 337 |
Performance analysis of the asynchronous FedLSTM.
| Number of Epochs | Evaluation Metric | FD001 | FD002 | FD003 | FD004 |
|---|---|---|---|---|---|
| 1 | RMSE | 16.14 | 22.11 | 14.68 | 29.5 |
| | SF | 174 | 2877 | 1452 | 492 |
| 2 | RMSE | 16.01 | 21.15 | 11.8 | 26.4 |
| | SF | 2097 | 2039 | 2769 | 3167 |
| 3 | RMSE | 15.81 | 21.16 | 11.87 | 26.1 |
| | SF | 410 | 1852 | 2796 | 2535 |
| 4 | RMSE | 15.36 | 22.29 | 11.85 | 27.1 |
| | SF | 1147 | 1473 | 5026 | 2650 |
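In the FedLSTM tables above and the comparison table below, RMSE is the root-mean-square error of the RUL predictions, and SF presumably denotes the asymmetric scoring function commonly used with CMAPSS, which penalizes late predictions more heavily than early ones. A minimal sketch under that assumption (whether the paper uses exactly these constants is not confirmed here):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error of the RUL predictions."""
    return float(np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)))

def score_function(y_true, y_pred):
    """Asymmetric CMAPSS-style score: late predictions (d > 0) cost more than early ones.

    d = predicted RUL - true RUL; constants 13 and 10 follow the widely used
    PHM08 scoring rule and are assumed, not taken from the paper.
    """
    d = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sum(np.where(d < 0, np.exp(-d / 13) - 1, np.exp(d / 10) - 1)))
```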
Results of other research for centralized RUL prediction on CMAPSS.
| Prediction Model | Evaluation Metric | FD001 | FD002 | FD003 | FD004 |
|---|---|---|---|---|---|
| DCNN [ ] | RMSE | 12.61 | 22.36 | 12.64 | 23.31 |
| | SF | 273 | 10412 | 284 | 12466 |
| Deep CNN [ ] | RMSE | 18.45 | 30.29 | 19.81 | 29.16 |
| | SF | 1286 | 13570 | 1596 | 7886 |
| MODBNE [ ] | RMSE | 15.04 | 25.05 | 12.51 | 28.66 |
| | SF | 334 | 5585 | 6557 | 6557 |
| CNN-XGB [ ] | RMSE | 12.61 | 19.61 | 13.01 | 19.41 |
| | SF | 224 | 2525 | 279 | 2930 |
Figure 13. Results of random neural connection on synchronous FedLSTM.
Performance analysis of FedSVM with the MNIST dataset.
| Optimizer | Evaluation Metric | Synchronous (iid) | Synchronous (non-iid) | Asynchronous (iid) | Asynchronous (non-iid) |
|---|---|---|---|---|---|
| GD | Runtime (s) | 109.84 | 97.15 | 93.89 | 86.26 |
| | Final acc (%) | 97.41 | 97.31 | 97.41 | 97.32 |
| SGD | Runtime (s) | 1.52 | 1.61 | 2.37 | 2.36 |
| | Final acc (%) | 97.69 | 97.32 | 96.86 | 96.76 |
| FSVRG | Runtime (s) | 328.93 | 332.68 | - | - |
| | Final acc (%) | 96.95 | 96.23 | - | - |
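The MNIST results distinguish iid from non-iid client data. The paper's exact partitioning scheme is not reproduced here; the sketch below illustrates the usual way such splits are built in FL experiments, dealing shuffled samples evenly for the iid case and assigning label-sorted shards for the non-iid case.

```python
import numpy as np

def split_iid(labels, n_clients, seed=0):
    """Shuffle all samples and deal them out evenly: every client sees all classes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    return np.array_split(idx, n_clients)

def split_non_iid(labels, n_clients, shards_per_client=2, seed=0):
    """Sort by label and hand each client a few label shards: skewed class mix per client."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)
    n_shards = n_clients * shards_per_client
    shards = np.array_split(order, n_shards)
    shard_ids = rng.permutation(n_shards)
    return [np.concatenate([shards[s] for s in shard_ids[c::n_clients]])
            for c in range(n_clients)]

# Toy usage: 1000 samples, 10 classes, 5 clients
labels = np.repeat(np.arange(10), 100)
parts = split_non_iid(labels, n_clients=5)
print([len(p) for p in parts])  # five clients, 200 samples each
```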