Paulo V. Klaine, João P. B. Nadas, Richard D. Souza, Muhammad A. Imran.
Abstract
Due to the unpredictability of natural disasters, whenever a catastrophe happens, it is vital not only that emergency rescue teams are prepared, but also that a functional communication network infrastructure is available. Hence, in order to prevent additional loss of human lives, it is crucial that network operators be able to deploy an emergency infrastructure as fast as possible. In this sense, the deployment of an intelligent, mobile, and adaptable network, through the use of drones (unmanned aerial vehicles), is being considered as one possible alternative for emergency situations. In this paper, an intelligent solution based on reinforcement learning is proposed to find the best positions of multiple drone small cells (DSCs) in an emergency scenario. The proposed solution's main goal is to maximize the number of users covered by the system, while drones are limited by both backhaul and radio access network constraints. Results show that the proposed Q-learning solution largely outperforms all other approaches with respect to all metrics considered. Hence, intelligent DSCs are a good alternative for enabling the rapid and efficient deployment of an emergency communication network.
Keywords: Emergency communication network; Machine learning; Reinforcement learning; Unmanned aerial vehicles
Year: 2018 PMID: 30363787 PMCID: PMC6182572 DOI: 10.1007/s12559-018-9559-8
Source DB: PubMed Journal: Cognit Comput ISSN: 1866-9956 Impact factor: 5.418
List of symbols
| Symbol | Definition |
|---|---|
| Scenario | |
| | Set of all base stations |
| | Side of the considered area |
| | Number of base stations |
| | Number of users |
| | Set of all users |
| ITU-R | |
| | Ratio of buildup to land area |
| | Building density |
| | Scale parameter for building height distribution |
| | Width of buildings |
| | Separation between buildings |
| Link | |
| | Bandwidth |
| | Speed of light |
| | Antenna height correction factor |
| | Distance between user and macro cell/drone |
| EIRP | Equivalent isotropically radiated power |
| | Carrier frequency |
| | Base station height |
| hd | Drone height |
| | Height of user device |
| | Additive white Gaussian noise power |
| PLm/d | Path loss between user and macro cell/drone |
| RSRP | Reference signal received power |
| | Drone coverage radius |
| SINR | Signal to interference plus noise ratio |
| | Throughput |
| θ | Drone antenna major lobe angle |
| | Additional path loss |
| Algorithm | |
| | Action |
| ε | Chance of choosing a random action |
| γ | Discount factor |
| α | Learning rate |
| MAXit | Max iterations per episode |
| MAXit,r | Max iterations with same reward |
| MINit | Min iterations per episode |
| Q | Action-value function |
| | Reward (total number of users allocated) |
| | Agent state |
| | Time instant |
| Performance metrics | |
| | Average throughput dissatisfaction |
| | Percentage of users in outage |
| | Number of users in outage |
| Ψ | Set of unsatisfied users in terms of throughput |
| | User required throughput |
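The Algorithm symbols above correspond to a standard one-step Q-learning update with ε-greedy exploration. A minimal sketch follows, assuming a discrete per-drone move set (hover or one step along ±x, ±y, ±z); the state/action encoding is illustrative, not the paper's exact implementation:

```python
import random

# Sketch of one per-drone Q-learning step.  The discrete action set and
# the (x, y, z) state encoding are illustrative assumptions.
ACTIONS = ("stay", "+x", "-x", "+y", "-y", "+z", "-z")

def choose_action(q, state, epsilon=0.1):
    """Epsilon-greedy selection: explore with probability epsilon,
    otherwise pick the action with the highest Q-value."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def q_update(q, state, action, reward, next_state, alpha=0.9, gamma=0.9):
    """One-step Q-learning update; the reward is the total number of
    users allocated, as defined in the symbol list."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
q_update(q, (0, 0, 200), "+x", reward=10, next_state=(50, 0, 200))
# With an empty table, the update reduces to alpha * reward = 9.0
```

With α = γ = 0.9 as in the simulation parameters, fresh reward information dominates the old estimate, which suits the short episodes used in the evaluation.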
Fig. 1 Manhattan grid urban layout
Fig. 2 DSC flying at a height, hd, and with an antenna with aperture angle θ
Fig. 3 Considered scenario: a DSC providing coverage to a number of users, both regular and rescue team users, in an emergency situation
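Given the geometry of Fig. 2 (a drone at height hd whose antenna major lobe is a cone of full aperture angle θ), the ground coverage radius follows from simple trigonometry. A minimal sketch of that relation, assuming the lobe axis points straight down:

```python
import math

def coverage_radius(h_d, theta_deg):
    """Ground coverage radius (m) of a DSC at height h_d (m) with a
    downward-pointing conical major lobe of full angle theta_deg."""
    return h_d * math.tan(math.radians(theta_deg) / 2.0)

# With the 60-degree directivity angle used in the simulation parameters:
coverage_radius(200.0, 60.0)    # ~115.5 m at the minimum drone height
coverage_radius(1000.0, 60.0)   # ~577.4 m at the maximum drone height
```

This is why the height range (200 m to 1000 m) trades coverage area against interference between neighboring DSCs, as seen in Fig. 5.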
Simulation parameters
| Parameters | Value |
|---|---|
| Ratio of buildup to total land area | 0.3 |
| Average number of buildings | 500 buildings/km² |
| Scale parameter for building heights | 15 m |
| Additional path loss (LoS) | 1 dB |
| Additional path loss (NLoS) | 20 dB |
| Side of the square area | 1 km |
| Drone step size, x | 50 m |
| Drone step size, y | 50 m |
| Drone step size, z | 100 m |
| Minimum drone height | 200 m |
| Maximum drone height | 1000 m |
| Low mobility user step size, x | 3 m |
| Low mobility user step size, y | 3 m |
| Low mobility user step size, z | 0 m |
| High mobility user step size, x | 10 m |
| High mobility user step size, y | 10 m |
| High mobility user step size, z | 0 m |
| Number of users | 768 |
| User height | 1.5 m |
| Ratio of rescue team users | 20% |
| Number of hot spots | 16 |
| Number of DSCs | 16 |
| Ratio of users in or near hot spots | 2/3 |
| Macro BS EIRP | 0 dBW |
| Macro BS height | 20 m |
| DSC EIRP | −3 dBW |
| DSC antenna directivity angle, θ | 60° |
| RBs in macro cell | 50 |
| RBs in DSCs | 50 |
| Macro cell backhaul capacity | 100 Gbps |
| Microwave backhaul capacity per drone | 37.5 Mbps/drone |
| Bandwidth of one RB | 180 kHz |
| Carrier frequency | 1 GHz |
| High SINR requirement | 5 dB |
| Low SINR requirement | 0 dB |
| Total number of episodes | 100 |
| Number of independent runs | 100 |
| Max iterations per episode, MAXit | 1000 |
| Max iterations with same reward, MAXit,r | 100 |
| Min iterations per episode, MINit | 200 |
| Learning rate (α) | 0.9 |
| Discount factor (γ) | 0.9 |
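The RAN figures in the table imply a Shannon upper bound per resource block, which shows how the 37.5 Mbps microwave backhaul interacts with the radio access constraints. A rough sketch, where the idealized log2(1 + SINR) spectral efficiency is an assumption rather than the paper's exact link model:

```python
import math

RB_BANDWIDTH_HZ = 180e3   # bandwidth of one RB, from the table
RBS_PER_DSC = 50          # RBs in DSCs, from the table
BACKHAUL_BPS = 37.5e6     # microwave backhaul capacity per drone

def rb_throughput_bps(sinr_db):
    """Shannon upper bound for one resource block at the given SINR,
    assuming ideal log2(1 + SINR) spectral efficiency."""
    sinr = 10.0 ** (sinr_db / 10.0)
    return RB_BANDWIDTH_HZ * math.log2(1.0 + sinr)

low = rb_throughput_bps(0.0)    # low SINR requirement: exactly 180 kbps
high = rb_throughput_bps(5.0)   # high SINR requirement: ~370 kbps
ran_total = RBS_PER_DSC * high  # ~18.5 Mbps across all 50 RBs
```

At these SINR targets a fully loaded DSC stays below its 37.5 Mbps backhaul, so the radio access side is the binding constraint in this regime; at higher spectral efficiencies the backhaul limit would bind instead, which is why both constraints appear in the problem formulation.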
Fig. 4 Upper view of the simulation scenario. The macro cell, in orange, is positioned near the center of the area, while the drones are shown as colored triangles. The coverage radius of each DSC is represented by a colored circle, users served by a BS (either the truck BS or a DSC) are displayed in matching colors, and users in outage are marked with black X's. The trajectory of one drone is plotted (dashed)
Fig. 5 Isometric view of the simulation scenario. DSCs adjust their 3D positions to maximize the number of users covered. As can be seen, different DSCs settle at different heights, minimizing interference between DSCs while maximizing their coverage. The trajectory of one drone is plotted (dashed)
User characteristics
| User types | Rescue team | Regular |
|---|---|---|
| Mobility | High | Low |
| SINR | High/low | Low |
Fig. 6 Average number of users in outage per episode
Fig. 7 Average DSC RAN load per episode
Fig. 8 Average macro cell RAN load per episode
Fig. 9 Average dissatisfaction of users with low throughput requirement
Fig. 10 Average dissatisfaction of users with high throughput requirement
Fig. 11 Average backhaul throughput for the drones per episode
Fig. 12 Users in outage per episode, considering different learning rates for the Q-learning positioning strategy
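The outage and dissatisfaction metrics plotted in Figs. 6, 9, and 10 can be computed from the symbols defined earlier. A minimal sketch, where the per-user (achieved, required) throughput record layout, and averaging the relative shortfall over the unsatisfied set Ψ, are assumptions about how the metrics are formed:

```python
def outage_percentage(num_outage, num_users):
    """Percentage of users in outage (covered by no base station)."""
    return 100.0 * num_outage / num_users

def avg_dissatisfaction(throughputs):
    """Average relative throughput shortfall over Psi, the set of users
    whose achieved throughput falls below their requirement.  Input is
    a list of (achieved_bps, required_bps) pairs (assumed layout)."""
    psi = [(t, r) for t, r in throughputs if t < r]
    if not psi:
        return 0.0
    return sum((r - t) / r for t, r in psi) / len(psi)

# One user at half its requirement, one satisfied user:
avg_dissatisfaction([(1.0e6, 2.0e6), (3.0e6, 2.0e6)])  # -> 0.5
outage_percentage(40, 768)                              # ~5.2% of users
```

Only users already below their requirement enter the average, so a single badly served user is not diluted by the satisfied majority.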