Nidal Nasser, Zubair Md Fadlullah, Mostafa M Fouda, Asmaa Ali, Muhammad Imran.
Abstract
The concept of an intelligent pandemic response network is gaining momentum during the current novel coronavirus disease (COVID-19) era. A heterogeneous communication architecture is essential to facilitate collaborative and intelligent medical analytics in the fifth generation and beyond (B5G) networks to intelligently learn and disseminate pandemic-related information and diagnostic results. However, such a technique raises privacy issues pertaining to the health data of the patients. In this paper, we envision a privacy-preserving pandemic response network using a proof-of-concept, aerial-terrestrial network system serving mobile user entities/equipment (UEs). By leveraging the unmanned aerial vehicles (UAVs), a lightweight federated learning model is proposed to collaboratively yet privately learn medical (e.g., COVID-19) symptoms with high accuracy using the data collected by individual UEs using ambient sensors and wearable devices. An asynchronous weight updating technique is introduced in federated learning to avoid redundant learning and save precious networking as well as computing resources of the UAVs/UEs. A use-case where an Artificial Intelligence (AI)-based model is employed for COVID-19 detection from radiograph images is presented to demonstrate the effectiveness of our proposed approach.Entities:
Keywords: 5G; Artificial intelligence (AI); Beyond 5G (B5G); Edge computing; Federated learning; Pandemic; Unmanned aerial vehicle (UAV)
Year: 2021 PMID: 35023995 PMCID: PMC8702301 DOI: 10.1016/j.comnet.2021.108672
Source DB: PubMed Journal: Comput Netw ISSN: 1389-1286 Impact factor: 4.474
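The asynchronous weight-update idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the thresholded upload rule, learning rate, model size, and number of UEs are all assumptions introduced here. Each UE performs a local update and uploads its weights to the UAV aggregator only when they have drifted sufficiently since the last upload, so redundant exchanges are skipped and the aggregator averages only the fresh updates.

```python
import numpy as np

def local_update(weights, grad, lr=0.1):
    # One hypothetical local training step (plain gradient descent).
    return weights - lr * grad

def should_upload(new_w, last_uploaded_w, threshold=0.01):
    # Asynchronous-update heuristic: a UE uploads only when its weights
    # have drifted enough since its last upload, skipping redundant
    # exchanges with the UAV aggregator.
    return np.linalg.norm(new_w - last_uploaded_w) > threshold

def federated_average(client_weights):
    # UAV-side aggregation: plain federated averaging over received updates.
    return np.mean(client_weights, axis=0)

# Toy run: 3 UEs jointly train a 2-parameter model over 5 time rounds.
rng = np.random.default_rng(0)
global_w = np.zeros(2)
last_uploaded = [global_w.copy() for _ in range(3)]
for rnd in range(5):
    received = []
    for ue in range(3):
        grad = rng.normal(size=2)          # stand-in for a real local gradient
        w = local_update(global_w, grad)
        if should_upload(w, last_uploaded[ue]):
            last_uploaded[ue] = w
            received.append(w)
    if received:                           # aggregate only the fresh updates
        global_w = federated_average(received)
```

In a real deployment the drift threshold would trade off model freshness against UAV/UE bandwidth; here it is set arbitrarily for illustration.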
A taxonomy of related research work on federated learning techniques and scenarios depicting their suitability in B5G networks.
| Reference | Federated learning technique and scenario | B5G-ready? |
|---|---|---|
| | Horizontal, vertical, and hybrid data partitioning and/or cryptographic, perturbative, and anonymization techniques for privacy | No |
| | Specific focus on privacy preservation in federated learning setups without any wireless communication parameter | No |
| | Privacy leakage scenarios with internal and external attackers, active/passive attacks, inference attacks | No |
| | Temporally asynchronous weight update of federated learning | No |
| | Federated machine learning to address energy, bandwidth, delay, and data privacy concerns in wireless communications by performing decentralized model training | Yes |
Fig. 1Considered UAV-based B5G pandemic response network architecture exploiting heterogeneous B5G network links and federated learning.
Fig. 2Proposed system model illustrating the B5G network data, control, and application planes.
Fig. 3Proposed algorithm.
Performance comparison of federated learning architectures for a varying number of users and time rounds (learning phase).
| Number of LNs | Fed-ANN: Time round | Fed-ANN: Iterations to converge | Fed-ANN: Accuracy | Fed-ANN: Loss | Fed-CNN: Time round | Fed-CNN: Iterations to converge | Fed-CNN: Accuracy | Fed-CNN: Loss |
|---|---|---|---|---|---|---|---|---|
| 2 | 15 | 6 | 0.8968 | 0.0629 | 10 | 6 | 0.9477 | 0.0293 |
| 4 | 10 | 10 | 0.9038 | 0.0599 | 10 | 16 | 0.9456 | 0.0314 |
| 6 | 10 | 16 | 0.9157 | 0.0561 | 5 | 27 | 0.9468 | 0.0309 |
| 8 | 10 | 26 | 0.9208 | 0.0527 | 10 | 31 | 0.9444 | 0.03201 |
| 10 | 15 | 27 | | 0.0507 | 10 | 50 | | 0.02901 |
Fig. 4Learning accuracy and loss comparison of adopted federated learning architectures over different time rounds.
Fig. 5The value of the loss function over growing number of iterations for the ANN and CNN-based federated learning architectures.
Fig. 6Required execution time and memory consumption comparison (Learning phase).
Performance comparison among diverse AI methods (testing phase).
| Method | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| Centralized deep learning benchmark | 0.94630 | 0.94513 | 0.94352 | 0.94433 |
| Fed-ANN | 0.91914 | 0.91763 | 0.91914 | 0.91419 |
| Fed-CNN | 0.93891 | 0.93773 | 0.93891 | 0.93557 |
Fig. 7Overhead reduction for the federated learning asynchronous update method over different time rounds and deep parameter exchange ratios.
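The overhead reduction reported in Fig. 7 can be illustrated with a back-of-envelope calculation. The function and all numbers below are hypothetical assumptions, not values from the paper: if a UE exchanges only a fraction of its deep-layer parameters each round and skips uploads entirely in some rounds, the fraction of traffic saved relative to a full synchronous exchange is:

```python
def overhead_reduction(total_params, exchange_ratio, skipped_rounds, total_rounds):
    """Fraction of parameter traffic saved vs. full synchronous exchange.

    Hypothetical model: baseline sends all parameters every round; the
    asynchronous scheme sends only `exchange_ratio` of the parameters,
    and only in the rounds that are not skipped.
    """
    baseline = total_params * total_rounds
    actual = total_params * exchange_ratio * (total_rounds - skipped_rounds)
    return 1 - actual / baseline

# e.g. exchanging half of 1M deep parameters and skipping 2 of 10 rounds
saving = overhead_reduction(1_000_000, 0.5, 2, 10)  # 0.6, i.e. 60% less traffic
```

This simple model shows why both the deep parameter exchange ratio and the number of skipped (redundant) rounds drive the savings trend over time rounds in Fig. 7.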