| Literature DB >> 35808228 |
Lincoln Herbert Teixeira, Árpád Huszák.
Abstract
Ad hoc vehicular networks have been identified as a suitable technology for intelligent communication amongst smart city stakeholders as the intelligent transportation system has progressed. However, in a highly mobile area, the growing usage of wireless technologies creates a challenging context. To increase communication reliability in this environment, it is necessary to use intelligent tools to solve the routing problem to create a more stable communication system. Reinforcement Learning (RL) is an excellent tool to solve this problem. We propose creating a complex objective space with geo-positioning information of vehicles, propagation signal strength, and environmental path loss with obstacles (city map, with buildings) to train our model and get the best route based on route stability and hop number. The obtained results show significant improvement in the routes' strength compared with traditional communication protocols and even with other RL tools when only one parameter is used for decision making.Entities:
Keywords: advanced vehicular ad-hoc network; reinforcement learning; routing network
Year: 2022 PMID: 35808228 PMCID: PMC9269236 DOI: 10.3390/s22134732
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
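The abstract describes selecting next hops with Reinforcement Learning over a reward that combines signal strength, path loss, and hop count. The paper's exact formulation is not reproduced here; the following is a minimal Q-learning sketch under assumed reward weights and hyperparameters (all names and constants are illustrative, not the authors' values).

```python
import random
from collections import defaultdict

# Illustrative hyperparameters (assumed, not from the paper):
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def reward(rssi_dbm, path_loss_db, hops):
    # Composite reward over the three objectives named in the abstract.
    # Stronger signal, lower path loss, and fewer hops score higher.
    # The weights 0.5 / 0.3 / 0.2 are placeholders.
    return 0.5 * (rssi_dbm + 100) / 70 - 0.3 * path_loss_db / 120 - 0.2 * hops

class QRouter:
    """Epsilon-greedy Q-learning over (node, next_hop) pairs."""

    def __init__(self):
        self.q = defaultdict(float)  # (node, next_hop) -> estimated value

    def choose(self, node, neighbors):
        # Explore with probability EPSILON, otherwise pick the best-known hop.
        if random.random() < EPSILON:
            return random.choice(neighbors)
        return max(neighbors, key=lambda n: self.q[(node, n)])

    def update(self, node, nxt, r, next_neighbors):
        # Standard Q-learning update toward r + GAMMA * max future value.
        best_next = max((self.q[(nxt, n)] for n in next_neighbors), default=0.0)
        key = (node, nxt)
        self.q[key] += ALPHA * (r + GAMMA * best_next - self.q[key])
```

In a simulation loop, each forwarding decision would call `choose`, observe the link's RSSI/path-loss outcome, compute `reward`, and feed it back through `update`.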
Figure 1. Reinforcement Learning (RL) flowchart.
RSSI table.
| Signal Strength | Meaning |
|---|---|
| < | Very Good |
| < | Good |
| > | Not Good |
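The RSSI table above maps signal-strength bands to three quality labels, but the numeric thresholds did not survive extraction. A hedged sketch of such a classifier follows; it compares signal-strength magnitudes (in dB, larger magnitude = weaker signal, matching the table's `<`/`>` directions), and the two cutoff constants are placeholders, not the authors' values.

```python
# Placeholder thresholds on signal-strength magnitude in dB.
# These are NOT the paper's cutoffs, which are missing from the source.
VERY_GOOD_MAX_DB = 60
GOOD_MAX_DB = 80

def classify_rssi(magnitude_db):
    """Map a signal-strength magnitude to the table's three quality bands.

    Smaller magnitude means the received signal is closer to 0 dBm,
    i.e. stronger, hence the '<' comparisons for the better bands.
    """
    if magnitude_db < VERY_GOOD_MAX_DB:
        return "Very Good"
    if magnitude_db < GOOD_MAX_DB:
        return "Good"
    return "Not Good"
```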
Figure 2. Simulation diagram.
Figure 3. Three-dimensional map of the city.
Figure 4. Path lifetime.
Figure 5. Number of reconnections.
Figure 6. Hop count of the chosen path.
Figure 7. Normalized maximum distance reached by the path.
Figure 8. Histogram of path length.
Figure 9. Normalized maximum path loss.