| Literature DB >> 35684707 |
Shicheng Yang1, Gongwei Lee2, Liang Huang2.
Abstract
This paper investigates the computation offloading problem in mobile edge computing (MEC) networks with dynamic weighted tasks. We aim to minimize the system utility of the MEC network by jointly optimizing the offloading decisions and the bandwidth allocation. The joint optimization is formulated as a mixed-integer programming (MIP) problem. In general, such a problem can be solved by using deep learning-based algorithms to efficiently generate offloading decisions and then applying traditional optimization methods to allocate bandwidth. However, these methods adapt poorly to new environments and require a large number of training samples to retrain the deep learning model once the environment changes. To overcome this weakness, we propose a deep supervised learning-based computation offloading (DSLO) algorithm for dynamic computation tasks in MEC networks. We further introduce batch normalization to speed up model convergence and improve model robustness. Numerical results show that DSLO requires only a few training samples and can quickly adapt to new MEC scenarios. Specifically, it achieves 99% normalized system utility with only four training samples per MEC scenario. DSLO therefore enables the fast deployment of computation offloading algorithms in future MEC networks.
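The decision pipeline the abstract describes — a deep model emits relaxed per-device offloading scores that are then thresholded into binary decisions — can be sketched as below. This is a minimal illustration under assumed shapes (a 20-dimensional feature vector, one hidden layer, 10 WDs) and random weights, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def offloading_decisions(features, w1, b1, w2, b2, threshold=0.5):
    """Map task features to binary offloading decisions for each WD.

    The model emits a relaxed decision in (0, 1) per wireless device;
    thresholding yields the binary offload-vs-local choice.
    """
    h = np.maximum(features @ w1 + b1, 0.0)          # ReLU hidden layer
    relaxed = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # sigmoid output layer
    return (relaxed >= threshold).astype(int)

# Assumed input encoding: a 20-dimensional feature vector, e.g. task sizes.
features = rng.uniform(10.0, 30.0, size=20)
w1, b1 = rng.normal(scale=0.1, size=(20, 120)), np.zeros(120)
w2, b2 = rng.normal(scale=0.1, size=(120, 10)), np.zeros(10)

decisions = offloading_decisions(features, w1, b1, w2, b2)
print(decisions)  # binary vector: 1 = offload to the edge, 0 = compute locally
```

In the paper's setting the model is trained with supervised labels from an optimization solver; here the weights are random, so only the input/output interface is meaningful.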
Keywords: computation offloading; deep learning; mobile-edge computing
Year: 2022 PMID: 35684707 PMCID: PMC9185259 DOI: 10.3390/s22114088
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. System model of an MEC network with multiple WDs.
Figure 2. The two-level optimization structure for solving problem (P1).
Figure 3. The pipeline of DSLO training.
Figure 4. The process of the DSLO-CNN algorithm.
Figure 5. The process of the DSLO-DNN algorithm.
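The two-level structure of Figure 2 can be illustrated with a toy version of problem (P1): an outer level enumerates binary offloading decisions, and an inner level allocates bandwidth for each fixed decision. The proportional allocation and transmission-only utility below are simplifying assumptions for illustration (the paper's inner subproblem and utility also account for computation latency), and the exhaustive outer search is exactly the step DSLO replaces with a learned model:

```python
import itertools
import numpy as np

def inner_bandwidth_allocation(decision, sizes_mb, weights, total_mbps=100.0):
    """Inner level: for a fixed offloading decision, split the total
    bandwidth among offloading WDs in proportion to their weighted task
    sizes (a simple closed-form stand-in for the inner subproblem)."""
    mask = np.asarray(decision, dtype=bool)
    share = weights[mask] * sizes_mb[mask]
    bw = np.zeros_like(sizes_mb)
    bw[mask] = total_mbps * share / share.sum()
    # Weighted transmission latency in seconds: 8 * MB / Mbps, weight-scaled.
    utility = float(np.sum(weights[mask] * 8.0 * sizes_mb[mask] / bw[mask]))
    return bw, utility

def outer_search(sizes_mb, weights):
    """Outer level: enumerate binary decisions (feasible only for a few
    WDs, which is why a learned model is attractive at scale)."""
    best = None
    for d in itertools.product([0, 1], repeat=len(sizes_mb)):
        if not any(d):
            continue
        _, u = inner_bandwidth_allocation(d, sizes_mb, weights)
        if best is None or u < best[1]:
            best = (d, u)
    return best

sizes = np.array([10.0, 20.0, 30.0, 15.0, 25.0])  # MB, from the table's range
w = np.array([1.0, 1.5, 1.0, 1.5, 1.0])           # example weight factors
decision, utility = outer_search(sizes, w)
print(decision, round(utility, 2))                # → (1, 0, 0, 0, 0) 0.8
```

With this toy utility, offloading only the WD with the smallest weighted task minimizes the objective; the real objective trades transmission latency against local computation latency.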
The parameters of the DSLO-CNN and DSLO-DNN algorithm structures.
| (a) DSLO-CNN Algorithm | | | |
|---|---|---|---|
| - | 16 | ReLU | 16 |
| - | 16 | ReLU | 16 |
| - | 3 | ReLU | - |
| - | 21 | ReLU | - |
| - | 64 | ReLU | - |
| - | 10 | Sigmoid | 10 |

| (b) DSLO-DNN Algorithm | | | |
|---|---|---|---|
| - | 20 | ReLU | - |
| - | 120 | ReLU | - |
| - | 80 | ReLU | - |
| - | 10 | Sigmoid | 10 |
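The DSLO-DNN panel describes a fully connected stack (20, 120, and 80 ReLU units, then a 10-unit sigmoid output giving one relaxed offloading decision per WD). A plain forward pass through that stack can be sketched as follows; the input dimension and the random weights are assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Layer widths from the DSLO-DNN panel: three ReLU layers, then a sigmoid
# output. The input size of 10 is an assumed placeholder.
widths = [10, 20, 120, 80, 10]

def dslo_dnn_forward(x, params):
    """Forward pass through the fully connected DSLO-DNN sketch."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i == len(params) - 1:
            x = 1.0 / (1.0 + np.exp(-x))   # sigmoid output layer
        else:
            x = np.maximum(x, 0.0)         # ReLU hidden layers
    return x

params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(widths[:-1], widths[1:])]

out = dslo_dnn_forward(rng.normal(size=(4, 10)), params)  # batch of 4 samples
print(out.shape)  # (4, 10): relaxed decisions in (0, 1) for 10 WDs
```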
Simulation parameters.
| Notation | Value | Notation | Value |
|---|---|---|---|
| - | 100 Mbps | - | 10–30 MB |
| - | 1900 cycles/byte | CPU rate | - |
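The table's processing density supports a quick sanity check of local execution latency: required cycles = task size × 1900 cycles/byte, and latency = cycles / CPU rate. The 2.5 GHz CPU rate below is an assumed example value, not a figure from the paper:

```python
# Back-of-the-envelope local execution latency from the table's
# processing density. The CPU rate is an assumed example value.
task_mb = 10                       # task size, lower end of the 10-30 MB range
density = 1900                     # CPU cycles required per byte
cpu_hz = 2.5e9                     # assumed local CPU rate: 2.5 GHz

cycles = task_mb * 1e6 * density   # total cycles: 1.9e10 for a 10 MB task
latency_s = cycles / cpu_hz
print(round(latency_s, 2))         # → 7.6 seconds at 2.5 GHz
```

Latencies of several seconds for purely local execution are what make offloading to the edge attractive in this setting.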
Figure 6. Convergence performance of DSLO with plenty of training samples.
Figure 7. Comparisons of system utility performance for different offloading algorithms.
Weight factors of different WDs.
| MEC Task Scenarios | Weight | | |
|---|---|---|---|
| - | {1.0, 1.5, 1.0, 1.5, 1.0} | {1.0, 1.0, 1.5, 1.5, 1.0} | {1.0, 1.0, 1.5, 1.5, 1.5} |
| - | {1.0, 1.5, 1.5, 1.5, 1.0} | {1.0, 1.5, 1.0, 1.5, 1.0} | {1.0, 1.5, 1.0, 1.5, 1.0} |
Comparisons of CPU execution latency.
| # of WDs | DSLO-CNN | | DSLO-DNN | | LR |
|---|---|---|---|---|---|
| | Train | Test | Train | Test | |
| 5 | - | - | - | - | - |
| 10 | - | - | - | - | - |
| 15 | - | - | - | - | - |
Figure 8. Performance evaluation of the BN layer. (a) DSLO-CNN; (b) DSLO-DNN.
Figure 9. DSLO with few training samples per MEC scenario.
Figure 10. Convergence performance of DSLO under different scales of training MEC scenarios.
Figure 11. Test performance with different scales of the training dataset.
Figure 12. Test performance of different computation offloading algorithms.