Abstract
OBJECTIVE: To explore the application of deep neural networks (DNNs) and deep reinforcement learning (DRL) in wireless communication, and to accelerate the development of the wireless communication industry.
Year: 2020 PMID: 32614858 PMCID: PMC7332070 DOI: 10.1371/journal.pone.0235447
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Schematic diagram of a typical communication-cache system for a wireless communication network.
Fig 2. Structure of the DNNs.
Flow of the DQN algorithm for power control.
| Algorithm name: DQN algorithm for power control |
| Initialize the DNN Q and randomly set its weights θ. |
| Randomly initialize p1(1) and p2(1), and obtain s(1). |
| For k = 1 to K do |
| Obtain p1(k+1) according to the primary user's power-control update strategy. |
| The secondary user selects a random action with probability εk, and the greedy action a(k) = argmax_a Q(s(k), a; θ) with probability 1-εk. |
| Obtain the state s(k+1) according to the random observation model, and observe the reward r(k). |
| Store the transition (s(k), a(k), r(k), s(k+1)) in the replay memory. |
| If the replay memory holds enough transitions then |
| Randomly sample a mini-batch of transitions from the replay memory. |
| Update the weights θ by gradient descent on the loss between Q(s(k), a(k); θ) and the target r(k) + γ max_a' Q(s(k+1), a'; θ). |
| End if |
| If s(k) is the terminal state then |
| Randomly initialize p1(k) and p2(k), and obtain s(k). |
| End if |
| End for |
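The loop above can be sketched in code. This is a minimal illustration under assumptions the excerpt does not spell out: a tiny discrete toy environment stands in for the primary/secondary-user channel, a simple table replaces the DNN Q so the sketch stays self-contained, and the names (run_dqn, n_states, the "matching" reward) are illustrative rather than the paper's own.

```python
import random

def run_dqn(steps=300, n_states=4, n_actions=3,
            gamma=0.9, eps=0.2, lr=0.05, batch=8, seed=0):
    rng = random.Random(seed)
    # Q "network": a table of action values (stand-in for the DNN).
    Q = [[0.0] * n_actions for _ in range(n_states)]
    memory = []                      # experience-replay buffer
    s = rng.randrange(n_states)      # initial state s(1)
    for k in range(steps):
        # epsilon-greedy: random action with prob. eps, greedy a(k) otherwise
        if rng.random() < eps:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        # Toy reward: 1 when the chosen power level "matches" the channel
        # state, a stand-in for meeting the SINR target in the paper.
        r = 1.0 if a == s % n_actions else 0.0
        s_next = rng.randrange(n_states)       # random observation model
        memory.append((s, a, r, s_next))       # store the transition
        if len(memory) >= batch:               # enough stored transitions
            for si, ai, ri, sn in rng.sample(memory, batch):
                target = ri + gamma * max(Q[sn])      # Bellman target
                Q[si][ai] += lr * (target - Q[si][ai])
        s = s_next
    return Q

Q = run_dqn()
```

After training on this toy problem, the greedy action for each state tends toward the rewarded one, mirroring how the secondary user's policy converges in the paper's experiments.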
Fig 3. Comparative analysis of the change in the loss function with the number of iterations k under different strategies.
Fig 4. Comparative analysis of the change in success rate with the number of iterations k under different strategies.
Fig 5. Comparative analysis of the average number of transfers with the number of iterations k under different strategies.
Fig 6. Analysis of the change in the signal-to-noise ratio of the primary and secondary users over the state-transition process.