Abdullah Lakhan1, Mazin Abed Mohammed2, Seifedine Kadry3, Karrar Hameed Abdulkareem4, Fahad Taha Al-Dhief5, Ching-Hsien Hsu6,7,8.
Abstract
The intelligent reflecting surface (IRS) is a ground-breaking technology that can boost the efficiency of wireless data transmission systems. Specifically, the wireless signal propagation environment is reconfigured by adjusting a large number of small reflecting units simultaneously. IRS has therefore been suggested as a possible solution for improving several aspects of future wireless communication. However, although individual nodes are empowered in IRS, decisions and data learning are still made by a centralized node in the IRS mechanism, and previous works have largely overlooked the problem of energy-efficient and delay-aware learning in IRS-assisted communications. This paper proposes the federated learning aware Intelligent Reconfigurable Surface Task Scheduling (FL-IRSTS) algorithm to achieve high-speed communication with energy- and delay-efficient offloading and scheduling. Model training is divided among different nodes, and the trained model decides the IRSTS configuration that best meets the communication-rate goals. Multiple local models, each trained on the local healthcare fog-cloud network for its workload, are combined via federated learning (FL) into a global model; each trained model then shares its initial configuration with the global model for the next training round. Each application's healthcare data is handled and processed locally during training. Simulation results show that the proposed algorithm's achievable rate can effectively approach that of centralized machine learning (ML) while meeting the study's energy and delay objectives.
Keywords: Delay; Energy; IRSTS; ML; Objectives; Offloading
Year: 2021 PMID: 34901423 PMCID: PMC8627228 DOI: 10.7717/peerj-cs.758
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
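The abstract describes FL training in which local models stay on fog nodes and only model parameters reach the global aggregator. A minimal FedAvg-style sketch of one such round (the scalar "model", `toy_update`, and all function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def federated_round(global_w, local_datasets, local_update):
    """One FL round: each fog node refines the global model on its own
    (locally kept) data; the server then averages the results,
    weighted by local dataset size (FedAvg-style)."""
    updates = [local_update(global_w, d) for d in local_datasets]  # training stays local
    sizes = np.array([len(d) for d in local_datasets], dtype=float)
    weights = sizes / sizes.sum()
    # Weighted average of locally trained models -> new global model
    return sum(w * u for w, u in zip(weights, updates))

# Toy "model": one scalar weight; toy local update: half-step toward local mean
def toy_update(w, data):
    return w + 0.5 * (np.mean(data) - w)

g = np.array([0.0])
g = federated_round(g, [np.array([1.0, 1.0]), np.array([3.0])], toy_update)
# g is now a size-weighted blend of the two local updates
```

Only `g` (the global parameters) crosses the network; the raw healthcare data never leaves the node, which matches the locality property the abstract claims.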
Existing IRS methods and systems.
| Research | Parameters | Decision | Training | Environment | Methods | Objective |
|---|---|---|---|---|---|---|
|  | Single Para. | Static | Network | Loss Function | IRS | Min. Energy |
|  | Single Para. | Static | Program | Distributed | IRS | Min. Energy |
|  | Single Para. | Static | REST API | Centralized | IRS | Min. Computation |
|  | Two Para. | Dynamic | RPC | ML | Centralized | Max. Utilization |
|  | Multi-Para. | Dynamic | Monitoring | Adaptive | Centralized | Max. Throughput |
|  | Multi-Para. | Dynamic | Resource | Adaptive | Centralized | Min. Delay |
|  | Multi-Para. | Dynamic | Monitoring | Adaptive | Centralized | Min. Energy |
|  | Multi-Para. | Hybrid | Monitoring | Mobility | Centralized | Min. Rent |
|  | Many-Para. | Hybrid | SDN-Controller | Mobility | Centralized | Min. Cost |
|  | Many-Para. | Hybrid | SDN-Controller | Mobility | Centralized | Min. Budget |
|  | Many-Para. | Hybrid | OS | Mobility | Centralized | Min. Renting Cost |
| Proposed Work | Energy/Latency | Fog-Nodes | Federated learning | Energy/Latency | Node Learning | Global decision |
Figure 1: IRS-enabled federated learning aware system.
Mathematical notation.
| Notations | Descriptions |
|---|---|
|  | Number of IIoT applications |
|  | The deadline of |
|  | The workload of application |
|  | Execution delay of app. |
|  | Power consumption of app. |
|  | Number of computing nodes |
|  | The |
|  | The power consumption of |
|  | The resource capacity of |
|  | The speed of node |
|  | Energy power of node |
|  | Delay execution of node |
|  | Number of base-stations |
|  | Resource capability of base-stations |
|  | Attributes of blockchain in block |
|  | Training model of any random node |
|  | Power consumption during model training |
|  | Assignment of workload |
Figure 2: Energy consumption cases.
FL-IRSTS algorithm framework
1.
2. Call Initial Local Training to Global Training;
3. Initially schedule all workloads based on their deadlines;
4. Call
5. Reschedule initial workloads to minimize the overall delay of applications;
6. Call
7. Reschedule initial workloads to minimize the overall energy consumption of nodes;
8.
9. End-Loop;
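The framework above chains three phases: local-to-global training, an initial deadline-based schedule, and two rescheduling passes (delay, then energy). A hypothetical Python sketch of that control flow, with every function and field name an assumption supplied for illustration:

```python
def fl_irsts(workloads, nodes, local_train, aggregate, delay_pass, energy_pass):
    """Top-level FL-IRSTS flow as listed above (all names are placeholders):
    local training feeds a global model, then an EDF schedule is refined
    by a delay-minimizing pass and an energy-minimizing pass."""
    global_model = aggregate([local_train(w) for w in workloads])  # steps 1-2
    schedule = sorted(workloads, key=lambda w: w["deadline"])      # step 3: EDF
    schedule = delay_pass(schedule, nodes)                         # steps 4-5
    schedule = energy_pass(schedule, nodes)                        # steps 6-7
    return global_model, schedule

# Minimal smoke run with identity passes and a trivial "model"
gm, sched = fl_irsts(
    [{"id": 1, "deadline": 2}, {"id": 2, "deadline": 1}],
    nodes=[],
    local_train=lambda w: 1,
    aggregate=sum,
    delay_pass=lambda s, n: s,
    energy_pass=lambda s, n: s,
)
```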
Initial training of deadline sensor
1.
2. Schedule-list[] = null;
3.
4. Sort all workloads by their deadlines using the Earliest Deadline First method;
5. if ( )
6. Train initial model on possible node
7. Initially schedule all workloads based on their deadlines;
8. Add Schedule-list[ ];
9. End-Loop;
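Steps 2, 4, and 7-8 above amount to building the initial schedule list in Earliest Deadline First order. A minimal sketch (dictionary field names are assumptions):

```python
def initial_schedule(workloads):
    """Initial schedule per the listing above: sort workloads by deadline
    (Earliest Deadline First) and append them to the schedule list."""
    schedule_list = []
    for w in sorted(workloads, key=lambda w: w["deadline"]):  # EDF order
        schedule_list.append(w["id"])
    return schedule_list

jobs = [{"id": "w2", "deadline": 9}, {"id": "w1", "deadline": 3}]
order = initial_schedule(jobs)  # w1 (deadline 3) runs before w2 (deadline 9)
```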
Lateness of sensor workloads
1.
2.
3. Sort all computing nodes by their speed and resource capacity;
4.
5. Reschedule possible tasks from one node to another node;
6. Add Schedule-list[ ];
7. Optimize the objective function τ ← Schedule-list[ ];
8. End-Loop;
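The lateness pass sorts nodes by speed and migrates tasks between nodes to reduce overall delay. A hedged greedy sketch, assuming execution delay equals workload divided by node speed (the cost model and all field names are assumptions, not the paper's exact formulation):

```python
def reschedule_for_delay(tasks, nodes):
    """Greedy delay pass: visit tasks in EDF order and place each on the
    node with the earliest finish time, assuming delay = load / speed."""
    finish = {n["id"]: 0.0 for n in nodes}
    assignment = {}
    for t in sorted(tasks, key=lambda t: t["deadline"]):
        best = min(nodes, key=lambda n: finish[n["id"]] + t["load"] / n["speed"])
        finish[best["id"]] += t["load"] / best["speed"]
        assignment[t["id"]] = best["id"]
    return assignment, max(finish.values())  # overall delay = makespan

nodes = [{"id": "n1", "speed": 2.0}, {"id": "n2", "speed": 1.0}]
tasks = [{"id": "a", "deadline": 1, "load": 4.0},
         {"id": "b", "deadline": 2, "load": 2.0}]
plan, delay = reschedule_for_delay(tasks, nodes)
```

Here task `b` lands on the slower but idle node `n2`, which is exactly the kind of cross-node move step 5 describes.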
Energy efficient task scheduling
1.
2.
3.
4. Minimize the offloading based on
5.
6. Minimize the offloading based on
7.
8.
9. Optimize
10. Sort all computing nodes in ascending order of power consumption;
11.
12. Reschedule possible tasks from one node to another node;
13. Add Schedule-list[ ];
14. Optimize the objective function E ← Schedule-list[ ];
15. End-Loop;
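Steps 10-14 sort nodes by power draw and shift tasks onto cheaper nodes while deadlines still hold. A sketch under the assumption that energy = power × (load / speed) and that nodes serve one task at a time (cost model and field names are illustrative, not the paper's):

```python
def reschedule_for_energy(tasks, nodes):
    """Greedy energy pass: nodes sorted by power draw (ascending); each task
    moves to the cheapest node that still meets its deadline.
    Assumes energy = power * (load / speed)."""
    by_power = sorted(nodes, key=lambda n: n["power"])  # step 10 equivalent
    assignment, energy = {}, 0.0
    for t in sorted(tasks, key=lambda t: t["deadline"]):
        for n in by_power:
            if t["load"] / n["speed"] <= t["deadline"]:  # deadline still met
                assignment[t["id"]] = n["id"]
                energy += n["power"] * t["load"] / n["speed"]
                break
    return assignment, energy

nodes = [{"id": "fast", "speed": 4.0, "power": 10.0},
         {"id": "slow", "speed": 1.0, "power": 2.0}]
tasks = [{"id": "a", "load": 2.0, "deadline": 3.0}]
plan, joules = reschedule_for_energy(tasks, nodes)
```

The task is moved to the low-power node because its deadline tolerates the longer runtime; a tighter deadline would keep it on the fast, power-hungry node.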
Simulation parameters.
| Parameter | Value |
|---|---|
|  | 1,000 − |
|  | 20, 30 |
|  | 12.4, 15.7, 11.33 jpu |
|  | 29, 12, 7 jpu |
| Mobility | Density |
Figure 3: Energy consumption during requests.
Figure 4: Energy consumption during requests.
Figure 5: Federated learning delay.
Figure 6: Energy consumption during distributed federated learning.
Figure 7: Energy consumption between sensors and base-stations.
Figure 8: Energy consumption between base-station and base-station.
Figure 9: Energy consumption between sensors and base-stations during high requests.
Figure 10: Energy consumption between sensors and base-stations during peak hours.
Figure 11: Offloading between sensors and base-stations.
Figure 12: Execution delay at nodes.