Liang Huang, Xu Feng, Luxin Zhang, Liping Qian, Yuan Wu.
Abstract
This paper studies mobile edge computing (MEC) networks in which multiple wireless devices (WDs) offload their computation tasks to multiple edge servers and one cloud server. Since different WDs carry different real-time computation tasks, each task is either processed locally at its WD or offloaded to and processed at one of the edge servers or the cloud server. We investigate low-complexity computation offloading policies that guarantee the quality of service of the MEC network while minimizing the WDs' energy consumption. Specifically, both a linear programming relaxation-based (LR-based) algorithm and a distributed deep learning-based offloading (DDLO) algorithm are studied independently for MEC networks. We further propose a heterogeneous DDLO that achieves better convergence performance than DDLO. Extensive numerical results show that the DDLO algorithms outperform the LR-based algorithm. Furthermore, the DDLO algorithm generates an offloading decision in less than 1 millisecond, several orders of magnitude faster than the LR-based algorithm.
Keywords: computation offloading; deep reinforcement learning; mobile edge computing
Year: 2019 PMID: 30909657 PMCID: PMC6470783 DOI: 10.3390/s19061446
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
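As a back-of-envelope illustration of why low-complexity offloading policies matter: with each task choosing between local execution, one of the edge servers, or the cloud, the decision space grows exponentially in the number of tasks. A minimal sketch (the function name and the example sizes are illustrative, not from the paper):

```python
def num_offloading_decisions(num_tasks: int, num_edges: int) -> int:
    """Each task runs locally, on one of num_edges edge servers, or on the cloud."""
    choices_per_task = 1 + num_edges + 1  # local + edge servers + cloud
    return choices_per_task ** num_tasks

# 8 tasks and 3 edge servers already give 5**8 = 390625 candidate decisions,
# which is why exhaustive search over offloading decisions does not scale.
print(num_offloading_decisions(8, 3))
```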
Related works on computation offloading in mobile edge computing (MEC) networks. The original table classified each work by whether it considers single or multiple tasks, users, edge servers, and remote servers; the works compared are Liu et al., Bi et al., Dinh et al., Huang et al., Wei et al., You et al., Munoz et al., Huang et al., Chen et al., Wang et al., Dinh et al., You et al., Chen et al., and Li et al. Our work considers multiple tasks, multiple users, multiple edge servers, and a single remote (cloud) server.
Figure 1. System model of a multi-server multi-user multi-task mobile edge computing (MEC) network.
Notations used in this paper.
| Notation | Definition |
|---|---|
| | Input data size of the task |
| | Output data size of the task |
| | Number of CPU cycles required to process the task |
| | Transmission rates between WDs and edge servers |
| | Transmission channel bandwidths between WDs and edge servers |
| | Transmission powers of WDs |
| | Transmission channel gains between WDs and edge servers |
| | White noise power level |
| | Uplink and downlink transmission latency of the task |
| | Transmission latency between an edge server and the cloud server |
| | Total transmission latency of a WD |
| | Transmission power |
| | Total transmission energy consumption of a WD |
| | CPU clock frequency |
| | Computing latency of the task |
| | Total computing latency of a WD |
| | Effective switched capacitance |
| | Computing energy consumption of a WD |
| | Total computing energy consumption of a WD |
| | Scalar weight of latency |
| | Scalar weight of energy consumption |
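The quantities in the notation table correspond to the standard MEC latency/energy model: local computing latency is (CPU cycles)/(clock frequency) with energy κ·f²·(cycles), while offloading pays uplink and downlink transmission latency plus radio energy at the WD. A minimal sketch of this model, assuming these standard formulas rather than the paper's exact equations:

```python
def local_cost(cycles, f, kappa):
    """Local execution: latency = C / f, energy = kappa * f**2 * C."""
    return cycles / f, kappa * f**2 * cycles

def offload_cost(d_in, d_out, r_up, r_down, p_tx, f_edge, cycles):
    """Offload to an edge server: send input, compute remotely, receive output.
    The WD spends energy only while its radio is transmitting/receiving."""
    t_up, t_down = d_in / r_up, d_out / r_down
    latency = t_up + cycles / f_edge + t_down
    energy = p_tx * (t_up + t_down)
    return latency, energy

# A gzip-like task needing 3.3e8 cycles on a 1 GHz WD CPU:
print(local_cost(3.3e8, 1e9, 1e-27))   # ~0.33 s, ~0.33 J
# Same task offloaded: 8e6-bit input, 8e5-bit output, 10 Mbps links, 10 GHz edge CPU:
print(offload_cost(8e6, 8e5, 1e7, 1e7, 0.1, 1e10, 3.3e8))
```

A decision policy then simply picks, per task, the placement with the smaller weighted latency/energy cost.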
Figure 2. Architecture of distributed deep learning-based offloading (DDLO) [23].
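The DDLO architecture runs several DNNs in parallel: each maps the task state to a relaxed offloading decision, the candidates are quantized to feasible binary decisions, the lowest-cost candidate is executed, and the (state, best decision) pair is stored in a shared replay memory from which all DNNs are trained. A minimal sketch of that selection loop, with random stand-ins instead of trained DNNs and an assumed cost function:

```python
import numpy as np

def ddlo_step(state, dnns, cost_fn, memory):
    """One DDLO decision: quantize each DNN's output, keep the cheapest candidate."""
    candidates = [(dnn(state) > 0.5).astype(int) for dnn in dnns]
    costs = [cost_fn(state, a) for a in candidates]
    best = candidates[int(np.argmin(costs))]
    memory.append((state, best))  # shared replay memory used to train every DNN
    return best

# Stand-in "DNNs" (each ignores the state and applies a fixed random logit)
# and a toy cost that penalizes offloading more tasks.
rng = np.random.default_rng(0)
dnns = [lambda s, w=rng.standard_normal(24): 1 / (1 + np.exp(-w))
        for _ in range(5)]
cost = lambda s, a: a.sum()
memory = []
decision = ddlo_step(np.zeros(6), dnns, cost, memory)
```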
Application complexity [30,35].
| Application | Label | Computation-to-Data Ratio |
|---|---|---|
| Gzip | A | 330 |
| pdf2text (N900 data sheet) | B | 960 |
| x264 CBR encode | C | 1900 |
| html2text | D | 5900 |
| pdf2text (E72 data sheet) | E | 8900 |
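The computation-to-data ratio ties workload to data size: required CPU cycles ≈ ratio × input size. (The ratio is commonly given in cycles per byte in the cited measurement studies; the unit here is an assumption.) A small worked example:

```python
# Computation-to-data ratios from the application-complexity table,
# assumed to be in cycles per byte; the dict keys are illustrative labels.
CYCLES_PER_BYTE = {"gzip": 330, "pdf2text_n900": 960, "x264_cbr": 1900,
                   "html2text": 5900, "pdf2text_e72": 8900}

def required_cycles(app, input_bytes):
    """Workload = computation-to-data ratio (cycles/byte) x input size (bytes)."""
    return CYCLES_PER_BYTE[app] * input_bytes

# Compressing 1 MB with gzip on a 1 GHz CPU:
cycles = required_cycles("gzip", 1_000_000)   # 330,000,000 cycles
latency = cycles / 1e9                        # 0.33 s at 1 GHz
```

Higher-ratio applications (e.g., the E-labelled pdf2text job) gain the most from offloading, since more computation is bought per byte transmitted.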
DNN structures used in DDLO and heterogeneous DDLO with 2 hidden layers.
| DNNs | Number of Neurons in DDLO | Number of Neurons in Het. DDLO | ||||||
|---|---|---|---|---|---|---|---|---|
| Input | 1st Hidden | 2nd Hidden | Output | Input | 1st Hidden | 2nd Hidden | Output | |
| DNN 1 | 6 | 120 | 80 | 24 | 6 | 30 | 320 | 24 |
| DNN 2 | 6 | 120 | 80 | 24 | 6 | 60 | 160 | 24 |
| DNN 3 | 6 | 120 | 80 | 24 | 6 | 120 | 80 | 24 |
| DNN 4 | 6 | 120 | 80 | 24 | 6 | 240 | 40 | 24 |
| DNN 5 | 6 | 120 | 80 | 24 | 6 | 480 | 20 | 24 |
DNN structures used in DDLO and heterogeneous DDLO with 3 hidden layers.
| DNNs | Number of Neurons in DDLO | Number of Neurons in Het. DDLO | ||||||||
|---|---|---|---|---|---|---|---|---|---|---|
| Input | 1st Hidden | 2nd Hidden | 3rd Hidden | Output | Input | 1st Hidden | 2nd Hidden | 3rd Hidden | Output |
| DNN 1 | 6 | 80 | 60 | 40 | 24 | 6 | 320 | 60 | 10 | 24 |
| DNN 2 | 6 | 80 | 60 | 40 | 24 | 6 | 160 | 60 | 20 | 24 |
| DNN 3 | 6 | 80 | 60 | 40 | 24 | 6 | 80 | 60 | 40 | 24 |
| DNN 4 | 6 | 80 | 60 | 40 | 24 | 6 | 40 | 60 | 80 | 24 |
| DNN 5 | 6 | 80 | 60 | 40 | 24 | 6 | 20 | 60 | 160 | 24 |
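The layer widths in the two tables can be instantiated directly: in heterogeneous DDLO every DNN keeps the same depth and input/output sizes but uses a different hidden-width profile. A minimal NumPy sketch (random weights, ReLU hidden layers, sigmoid output; the activation choices are assumptions):

```python
import numpy as np

def build_mlp(widths, rng):
    """Random-init weight matrices for a list of layer widths, e.g. [6, 120, 80, 24]."""
    return [rng.standard_normal((m, n)) * 0.1 for m, n in zip(widths, widths[1:])]

def forward(weights, x):
    for w in weights[:-1]:
        x = np.maximum(x @ w, 0.0)                    # ReLU hidden layers
    return 1.0 / (1.0 + np.exp(-(x @ weights[-1])))   # sigmoid -> relaxed decision

rng = np.random.default_rng(0)
# Homogeneous DDLO: five identical DNNs (two-hidden-layer table).
ddlo = [build_mlp([6, 120, 80, 24], rng) for _ in range(5)]
# Heterogeneous DDLO: same depth and I/O sizes, different hidden widths per DNN.
het = [build_mlp([6, w1, w2, 24], rng)
       for (w1, w2) in [(30, 320), (60, 160), (120, 80), (240, 40), (480, 20)]]

out = forward(ddlo[0], np.zeros(6))   # 24 values in (0, 1)
```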
Figure 3. Convergence performance of DDLO and heterogeneous DDLO: (a) the two-hidden-layer deep neural network (DNN) structures shown in Table 4; (b) the three-hidden-layer DNN structures shown in Table 5.
Figure 4. Convergence performance under different numbers of DNNs.
Figure 5. Convergence performance under different learning rates.
Figure 6. Convergence performance under different batch sizes.
Figure 7. Convergence performance under different training intervals.
Figure 8. Algorithm comparison under different values of the latency weight.
Figure 9. Algorithm comparison under different values of the energy-consumption weight.
Figure 10. Algorithm comparison under different numbers of WDs.
Figure 11. Algorithm comparison under different numbers of tasks.
Figure 12. Algorithm comparison under different numbers of edge servers.
Figure 13. Algorithm comparison under different types of applications.
Average CPU computation time under various numbers of WDs.
| Number of WDs | DDLO (s) | Het. DDLO (s) | LR-Based Alg. (s) |
|---|---|---|---|
| 1 | 6.11 × 10⁻⁴ | 6.28 × 10⁻⁴ | 3.30 × 10⁻¹ |
| 2 | 6.42 × 10⁻⁴ | 6.47 × 10⁻⁴ | 9.66 × 10⁻¹ |
| 3 | 6.69 × 10⁻⁴ | 6.67 × 10⁻⁴ | 1.68 |
| 4 | 6.88 × 10⁻⁴ | 6.82 × 10⁻⁴ | 2.41 |
| 5 | 6.99 × 10⁻⁴ | 7.02 × 10⁻⁴ | 3.66 |
| 6 | 7.19 × 10⁻⁴ | 7.20 × 10⁻⁴ | 4.41 |
| 7 | 7.36 × 10⁻⁴ | 7.39 × 10⁻⁴ | 5.75 |