Yinghui Liu, Youyang Qu, Chenhao Xu, Zhicheng Hao, Bruce Gu.
Abstract
The rapid proliferation of edge computing devices is producing ever-growing volumes of data, which directly drives the development of machine learning (ML) technology. However, privacy issues during data collection for ML tasks raise extensive concerns. To address this, synchronous federated learning (FL) has been proposed, which enables central servers and end devices to maintain the same ML models by exchanging only model parameters. However, the diversity of computing power and data sizes leads to significant differences in local training data consumption, and thereby makes FL inefficient. Besides, the centralized processing of FL is vulnerable to single-point failure and poisoning attacks. Motivated by this, we propose an innovative method, federated learning with asynchronous convergence (FedAC), which incorporates a staleness coefficient and uses a blockchain network instead of the classic central server to aggregate the global model. This avoids real-world issues such as interruption by abnormal local device training failures, dedicated attacks, etc. We implement the proposed method on a real-world dataset, MNIST, and compare it with baseline models, achieving accuracy rates of 98.96% and 95.84% in the horizontal and vertical FL modes, respectively. Extensive evaluation results show that FedAC outperforms most existing models.
Keywords: asynchronous convergence; blockchain; edge computing; federated learning
Year: 2021 PMID: 34064942 PMCID: PMC8151195 DOI: 10.3390/s21103335
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
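The abstract's key mechanism, weighting each device's contribution to the global model by how stale its local update is, can be sketched as follows. This is a minimal illustration, not FedAC's published algorithm: the record only states that a staleness coefficient is used, so the polynomial decay in `staleness_weight`, the mixing rate `eta`, and the function names are all assumptions.

```python
def staleness_weight(tau, a=0.5):
    # Hypothetical polynomial decay: updates that are tau rounds behind
    # the current global model receive a smaller weight.
    return (1.0 + tau) ** (-a)

def async_aggregate(global_w, local_w, tau, eta=0.5):
    # Blend one local update into the global model as soon as it arrives,
    # instead of waiting for all devices as in synchronous FL. The mixing
    # weight alpha shrinks as staleness tau grows.
    alpha = eta * staleness_weight(tau)
    return [(1 - alpha) * g + alpha * l for g, l in zip(global_w, local_w)]

# Toy run: a fresh update (tau = 0) moves the global model further than a
# stale update (tau = 8) of the same magnitude.
fresh = async_aggregate([0.0, 0.0], [1.0, 1.0], tau=0)
stale = async_aggregate([0.0, 0.0], [1.0, 1.0], tau=8)
```

Under these assumptions a fresh update is blended with weight 0.5, while one that is eight rounds stale is blended with weight 0.5/3, so slow devices no longer stall the round but also cannot drag the global model backwards.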
Figure 1. Workflow of FedAC and FedBlock.
List of Notations.
| Notation | Description | Notation | Description |
|---|---|---|---|
| — | Total number of edge devices | — | Federated learning loss function |
| — | The … | — | Staleness coefficient |
| — | Total number of miners | — | Federated learning learning rate |
| — | A miner associated with edge device | — | |
| — | Sample space | — | A scalar value |
| — | A subset of the sample space on edge device | — | |
| — | Total number of training rounds | — | A small positive constant |
| — | The … | — | Time |
| — | Final global model | — | Waiting time |
| — | Local model on edge device | — | Block generation rate |
| — | Gradient | — | Block size |
| — | Model weights | — | Head size in block |
| — | Edge device local model weights | — | Updated local model size in block |
Figure 2. Raspberry Pi (4B) diagram.
Specifications of Raspberry Pi (4B).
| Items | Description |
|---|---|
| CPU | Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz |
| Memory | 2 GB, 4 GB or 8 GB LPDDR4-3200 SDRAM (depending on model) |
| Wireless LAN | 2.4 GHz and 5.0 GHz IEEE 802.11ac wireless, Bluetooth 5.0, BLE |
| LAN | Gigabit Ethernet |
| USB | 2 USB 3.0 ports; 2 USB 2.0 ports |
| GPIO | Standard 40 pin GPIO header |
| HDMI | 2 × micro-HDMI ports (up to 4kp60 supported) |
| Display | 2-lane MIPI DSI display port |
| Camera | 2-lane MIPI CSI camera port |
| Audio | 4-pole stereo audio and composite video port |
| Video | H.265 (4kp60 decode), H.264 (1080p60 decode, 1080p30 encode) |
| Graphics API | OpenGL ES 3.0 graphics |
| External Storage | Micro-SD card slot for loading operating system and data storage |
| USB-C Power | 5V DC via USB-C connector (minimum 3A) |
| GPIO Power | 5V DC via GPIO header (minimum 3A) |
| Power over Ethernet (PoE) | enabled (requires separate PoE HAT) |
| Operating Temperature | 0–50 °C ambient |
Figure 3. Accuracy vs. number of edge devices.
Figure 4. Asynchronous vs. synchronous.
Figure 5. Convergence of horizontal and vertical FL on MNIST.
Figure 6. Time consumption.
Figure 7. Blockchain evaluation regarding block generation rate.