Wafa Njima, Iness Ahriz, Rafik Zayani, Michel Terre, Ridha Bouallegue.
Abstract
Currently, indoor localization is among the most challenging issues related to the Internet of Things (IoT). Most state-of-the-art indoor localization solutions require high computational complexity to achieve satisfying localization accuracy and do not meet the memory limitations of IoT devices. In this paper, we develop a localization framework that shifts the online prediction complexity to an offline preprocessing step, based on Convolutional Neural Networks (CNN). Motivated by the outstanding performance of such networks in the image classification field, the indoor localization problem is formulated as 3D radio image-based region recognition: a sensor node is localized by determining the region in which it lies. The 3D radio images are constructed from Received Signal Strength Indicator (RSSI) fingerprints. The simulation results justify the choice of the different parameters, optimization algorithms, and model architectures used. Considering the trade-off between localization accuracy and computational complexity, our proposed method outperforms other popular approaches.
Keywords: Convolutional Neural Networks (CNN); RSSI fingerprinting; deep learning; image classification; indoor localization; kurtosis
Year: 2019 PMID: 31311205 PMCID: PMC6679294 DOI: 10.3390/s19143127
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
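As a rough illustration of the abstract's 3D radio images: one plausible construction (the paper's exact layout is not reproduced in this record) arranges per-AP RSSI values on a small 2D grid and stacks T successive RSSI snapshots as channels. The grid side, snapshot count, and RSSI values below are hypothetical.

```python
def build_radio_image(rssi_samples, side):
    """Stack T RSSI snapshots into a side x side x T '3D radio image'.

    rssi_samples: list of T snapshots, each a flat list of side*side RSSI
    values (one per access point, padded if fewer APs are available).
    Returns image[i][j][t]: a 2D grid with T channels.
    """
    num_channels = len(rssi_samples)
    return [[[rssi_samples[t][i * side + j] for t in range(num_channels)]
             for j in range(side)]
            for i in range(side)]

# Two hypothetical snapshots of four RSSI values arranged on a 2 x 2 grid:
image = build_radio_image([[-40, -55, -60, -70], [-42, -53, -61, -69]], side=2)
```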
Figure 1. Different steps of our CNN-based localization system.
Figure 2. Region partition.
Figure 3. The structure of RSSI databases at each training point.
Figure 4. The structure of the radio images.
Figure 5. An example of a CNN architecture with two convolution layers and one fully-connected layer.
Figure 6. Max-pooling operation on radio images (2 × 2 window and stride two).
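The max-pooling operation of Figure 6 (2 × 2 window, stride two) can be sketched in plain Python; the input values are illustrative.

```python
def max_pool_2d(image, window=2, stride=2):
    """2D max-pooling: slide a window over the image, keep the max of each patch."""
    rows = (len(image) - window) // stride + 1
    cols = (len(image[0]) - window) // stride + 1
    return [[max(image[r * stride + dr][c * stride + dc]
                 for dr in range(window) for dc in range(window))
             for c in range(cols)]
            for r in range(rows)]

pooled = max_pool_2d([[1, 2, 3, 4],
                      [5, 6, 7, 8],
                      [9, 10, 11, 12],
                      [13, 14, 15, 16]])
# → [[6, 8], [14, 16]]
```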
List of the proposed hyperparameters.
| Parameter | Value |
|---|---|
| Number of regions | 16 (grid size is 5 m × 5 m) and 100 (grid size is 2 m × 2 m) |
| Number of convolutional layers | 0, 2, 3, 4, 5 |
| Number of fully-connected layers | 1 |
| Max-pooling window size | 2 × 2 |
| Max-pooling placement | Used once after the first convolutional layer |
| Convolution filter size | 2 with stride 1 or 2 |
| T | 2, 10, 20, 25, 30 |
| Optimization algorithm | SGD, RMSProp, and Adam |
| Activation functions | ReLU for convolutional layers and softmax for FC layer |
Optimization algorithms’ adjusted values.
| Parameter | Value |
|---|---|
| | 0.0005 |
| | 0.99 |
| | 0.9 |
| | 0.8 |
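For reference, the three optimizers compared (SGD, RMSProp, and Adam) differ only in how they turn a gradient into a parameter update. The sketch below minimizes f(x) = x²; the learning rate and decay constants are generic textbook defaults, not necessarily the adjusted values listed above.

```python
import math

def minimize(opt, x0=1.0, lr=0.01, steps=300):
    """Minimize f(x) = x^2 with one of the three optimizers compared above."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = 2 * x  # gradient of x^2
        if opt == "sgd":
            x -= lr * g
        elif opt == "rmsprop":
            v = 0.99 * v + 0.01 * g * g          # running average of squared gradients
            x -= lr * g / (math.sqrt(v) + 1e-8)
        elif opt == "adam":
            m = 0.9 * m + 0.1 * g                # first-moment estimate
            v = 0.999 * v + 0.001 * g * g        # second-moment estimate
            m_hat = m / (1 - 0.9 ** t)           # bias correction
            v_hat = v / (1 - 0.999 ** t)
            x -= lr * m_hat / (math.sqrt(v_hat) + 1e-8)
    return x
```

All three drive x toward the minimizer 0; RMSProp and Adam adapt the step size per parameter, which is why their iteration counts in the tables below differ from SGD's.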
Variation of the accuracy according to the optimization algorithm on the validation data using a grid of size 5 m × 5 m.
| Number of APs | Mini Batch Size | Algorithm | Accuracy (%) | Number of Iterations |
|---|---|---|---|---|
| | 60 | SGD | 89.93 | 690 |
| | | RMSProp | 90.8 | 1610 |
| | | Adam | | |
| | 60 | SGD | 97.57 | 575 |
| | | RMSProp | 98.9 | 1610 |
| | | Adam | | |
Variation of the accuracy according to the optimization algorithm on the validation data using a grid of size 2 m × 2 m.
| Number of APs | Mini Batch Size | Algorithm | Accuracy (%) | Number of Iterations |
|---|---|---|---|---|
| | 200 | SGD | 80.43 | 4350 |
| | | RMSProp | 80.53 | 2900 |
| | | Adam | | |
| | 300 | SGD | 91.14 | 3800 |
| | | RMSProp | 90.71 | 1900 |
| | | Adam | | |
Figure 7. Variation of the accuracy depending on the parameter T.
Variation of the accuracy according to T using a grid of size 5 m × 5 m.
| Number of APs | Mini Batch Size | T | Accuracy (%) | Training Time (min) | Prediction Time (s) |
|---|---|---|---|---|---|
| | 60 | 2 | 79.34 | 0:06 | |
| | | 10 | 84 | 0:07 | |
| | | 20 | | | |
| | | 25 | 95.92 | 0:50 | |
| | | 30 | 96.35 | 1:10 | |
| | 60 | 2 | 91.57 | 0:09 | |
| | | 10 | 96.88 | 0:09 | |
| | | 20 | | | |
| | | 25 | 99.31 | 1:23 | |
| | | 30 | 99.88 | 1:50 | |
Variation of the accuracy according to T using a grid of size 2 m × 2 m.
| Number of APs | Mini Batch Size | T | Accuracy (%) | Training Time (min) | Prediction Time (s) |
|---|---|---|---|---|---|
| | 200 | 2 | 47.71 | 0:13 | |
| | | 10 | 65 | 0:24 | |
| | | 20 | | | |
| | | 25 | 82.35 | 4:31 | |
| | | 30 | 82.86 | 5:15 | |
| | 300 | 2 | 72.07 | 0:47 | |
| | | 10 | 85.36 | 1:15 | |
| | | 20 | | | |
| | | 25 | X | – | – |
| | | 30 | X | – | – |
Adam’s accuracy on the validation data using a grid of size 2 m × 2 m and 10 anchors.
| Model | Mini Batch Size | Accuracy (%) | Number of Iterations |
|---|---|---|---|
| CNNLocWoC | 300 | 91.57 | 285 |
| CNNLocWC | 400 | 94.13 | 190 |
Variation of the accuracy according to the number of layers using a grid of size 2 m × 2 m and 10 anchors.
| Number of Convolutional Layers | Accuracy (%) | Feature Extraction Module Architecture |
|---|---|---|
| 0 | 83.3 | – |
| 2 | 91.57 | Conv(200,2) → Max-pooling(2,2) → Conv(120,2) |
| 3 | 88.43 | Conv(120,2) → Max-pooling(2,2) → Conv(200,2) → Conv(300,2) |
| 4 | 83.29 | Conv(40,2) → Max-pooling(2,2) → Conv(90,2) → Conv(300,2) → Conv(400,2) |
| 5 | 82 | Conv(40,2) → Max-pooling(2,2) → Conv(90,2) → Conv(300,2) → Conv(400,2) → Conv(700,2) |
Comparison of the accuracy associated with different algorithms using a grid of size 2 m × 2 m and 10 anchors.
| Indoor Localization Technique | Accuracy (%) |
|---|---|
| Trilateration | 30 |
| Classic NN | 80.76 |
| Classic NN2 | 84.75 |
| CNNLocWoC | 91.57 |
| CNNLocWC | 94.13 |
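The trilateration baseline in the comparison above estimates a node's position from its distances to anchors at known coordinates. A minimal 2D, three-anchor version (linearized least squares, with made-up anchor coordinates and noise-free distances) looks like:

```python
def trilaterate(anchors, distances):
    """2D trilateration with three anchors: subtracting the first circle
    equation from the other two yields a 2x2 linear system in (x, y),
    solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical anchors and exact distances to the point (3, 4):
x, y = trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65 ** 0.5, 45 ** 0.5])
# → (3.0, 4.0)
```

In practice the distances come from an RSSI path-loss model, whose indoor inaccuracy explains the low 30% figure in the table.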
The deep learning network architectures used.
| Deep Learning Algorithm | Network Architecture |
|---|---|
| Classic NN | FC(1500) → FC(3000) → FC(2000) → FC(1200) → FC(120) |
| Classic NN2 | FC(100) → FC(200) → FC(120) |
| CNNLocWoC | Conv(200,2) → Max-pooling(2,2) → Conv(120,2) → FC(120) |
| CNNLocWC | Conv(200,2) → Max-pooling(2,2) → Conv(300,2) → FC(120) |
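To make the CNNLocWoC row concrete, the sketch below traces tensor shapes through Conv(200,2) → Max-pooling(2,2) → Conv(120,2) → FC(120). It assumes unpadded 2 × 2 convolutions with stride 1, 2 × 2 pooling with stride two (as in Figure 6), and a hypothetical 10 × 10 × 30 input radio image; the paper's actual input size may differ.

```python
def cnnlocwoc_shapes(h, w, t):
    """Trace tensor shapes through the CNNLocWoC feature extraction module.

    Assumes unpadded 2x2 convolutions with stride 1 and
    2x2 max-pooling with stride 2.
    """
    shapes = [(h, w, t)]           # input radio image
    h, w = h - 1, w - 1            # Conv(200,2): 200 filters of size 2x2
    shapes.append((h, w, 200))
    h, w = h // 2, w // 2          # Max-pooling(2,2), stride 2
    shapes.append((h, w, 200))
    h, w = h - 1, w - 1            # Conv(120,2)
    shapes.append((h, w, 120))
    shapes.append((h * w * 120,))  # flatten
    shapes.append((120,))          # FC(120), followed by softmax over regions
    return shapes

print(cnnlocwoc_shapes(10, 10, 30))
# → [(10, 10, 30), (9, 9, 200), (4, 4, 200), (3, 3, 120), (1080,), (120,)]
```

The walkthrough shows why the spatial footprint stays small: with 2 × 2 kernels, almost all computation sits in the channel dimension, which suits the memory limits mentioned in the abstract.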