Joaquín Torres-Sospedra, Antonio R. Jiménez, Stefan Knauth, Adriano Moreira, Yair Beer, Toni Fetzer, Viet-Cuong Ta, Raul Montoliu, Fernando Seco, Germán M. Mendoza-Silva, Oscar Belmonte, Athanasios Koukofikis, Maria João Nicolau, António Costa, Filipe Meneses, Frank Ebner, Frank Deinzer, Dominique Vaufreydaz, Trung-Kien Dao, Eric Castelli.
Abstract
This paper presents the analysis and discussion of the off-site localization competition track, which took place during the Seventh International Conference on Indoor Positioning and Indoor Navigation (IPIN 2016). Five international teams proposed different strategies for smartphone-based indoor positioning using the same reference data. The competitors were provided with several smartphone-collected signal datasets, some of which were used for training (known trajectories) and others for evaluation (unknown trajectories). The competition permits a coherent evaluation of the competitors' estimates, since no inside information for fine-tuning their systems was offered, and thus provides, in our opinion, a good starting point for a fair comparison among the smartphone-based systems found in the literature. The methodology, experience, feedback from competitors and future lines of work are described.
Keywords: evaluation and benchmarking; indoor localization technology; indoor navigation; smartphone applications
Year: 2017 PMID: 28287447 PMCID: PMC5375843 DOI: 10.3390/s17030557
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Satellite view of the four buildings used in the Indoor Positioning and Indoor Navigation (IPIN) 2016 competition. The building identifiers are also included in the figure (see Section 2.3).
Figure 2. An excerpt from a logfile, as provided to the competitors. Note that the example includes a POSI entry (ground-truth location, highlighted in red in the excerpt); such entries are not included in the evaluation files, since those files do not contain ground-truth locations.
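As a rough illustration of how a competitor might extract the ground-truth POSI entries from a training logfile, the sketch below assumes a semicolon-separated record format in which each line starts with a sensor tag (e.g., ACCE, WIFI, POSI) and POSI lines carry a timestamp, a counter, latitude, longitude, floor and building. The exact field layout shown here is an assumption for illustration, not the official logfile specification.

```python
def parse_posi_lines(lines):
    """Return (timestamp, lat, lon, floor) tuples for POSI entries.

    Assumed layout (hypothetical): POSI;AppTimestamp;Counter;Lat;Lon;Floor;Building
    """
    points = []
    for line in lines:
        fields = line.strip().split(';')
        if fields and fields[0] == 'POSI':
            timestamp = float(fields[1])
            lat, lon = float(fields[3]), float(fields[4])
            floor = int(fields[5])
            points.append((timestamp, lat, lon, floor))
    return points

# Hypothetical two-line excerpt: one accelerometer record, one POSI record.
sample = [
    "ACCE;0.112;1;0.01;9.79;0.12",
    "POSI;0.250;1;39.9929;-0.0689;2;UJIUB",
]
print(parse_posi_lines(sample))  # → [(0.25, 39.9929, -0.0689, 2)]
```

Only the POSI lines are returned; all other sensor records are skipped, mirroring how the evaluation files (which omit POSI entries) would simply yield an empty list.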
Description of the training logfiles.
| # | Building * | Route | Floors | Landmarks | Duration (s) | Smartphone |
|---|---|---|---|---|---|---|
| 01 | CAR | 1 | 1 | 75 | 1257 | S3 |
| 02 | CAR | 1 | 1 | 75 | 1260 | S3mini |
| 03 | CAR | 2 | 1 | 52 | 888 | S3 |
| 04 | CAR | 2 | 1 | 52 | 887 | S4 |
| 05 | UAH | 1 | 3 | 67 | 1101 | S3 |
| 06 | UAH | 1 | 3 | 67 | 1101 | S4 |
| 07 | UAH | 2 | 4 | 64 | 1192 | S3 |
| 08 | UAH | 2 | 4 | 64 | 1188 | S4 |
| 09 | UAH | 4 | 2 | 29 | 508 | S3 |
| 10 | UAH | 4 | 2 | 29 | 508 | S4 |
| 11 | UJIUB | 1 | 6 | 58 | 529 | S3 |
| 12 | UJIUB | 1’ | 6 | 58 | 467 | S3 |
| 13 | UJIUB | 2 | 6 | 59 | 397 | S3 |
| 14 | UJIUB | 2’ | 6 | 59 | 375 | S3 |
| 15 | UJIUB | 3 | 6 | 60 | 516 | S3 |
| 16 | UJITI | 1 | 3 | 360 | 1134 | GN5 |
| 17 | UJITI | 2 | 3 | 291 | 590 | GN5 |
* See Figure 1; S3: Samsung Galaxy S3, Samsung, South Korea; S3mini: Samsung Galaxy S3 mini, Samsung, South Korea; S4: Samsung Galaxy S4, Samsung, South Korea; GN5: Google Nexus 5, LG Electronics, South Korea.
Description of the evaluation logfiles.
| # | Building * | Route | Floors | Landmarks | Duration (s) | Smartphone |
|---|---|---|---|---|---|---|
| 01 | UJITI | 3 | 3 | 46 | 241 | HW |
| 02 | UJIUB | 4 | 6 | 91 | 730 | S3 |
| 03 | UAH | 3 | 4 | 65 | 1476 | S3 |
| 04 | UJITI | 4 | 3 | 75 | 430 | SP |
| 05 | UAH | 5 | 3 | 42 | 899 | S4 |
| 06 | CAR | 3 | 1 | 76 | 1223 | S3 |
| 07 | UAH | 3 | 4 | 65 | 1477 | S4 |
| 08 | UAH | 5 | 3 | 42 | 899 | S3 |
| 09 | CAR | 3 | 1 | 76 | 1223 | S4 |
* See Figure 1; S3: Samsung Galaxy S3, Samsung, South Korea; S4: Samsung Galaxy S4, Samsung, South Korea; SP: Sony Xperia SP, Sony, Japan; HW: Huawei G630, Huawei, China.
Overall results and main statistics per logfile (best values in bold font). The 3rd-quartile and mean errors are in meters; floor hit rates (flr in the table) are percentages of correct floor estimates. The values in the 'all logfiles' column correspond to the error and floor hit rate computed over all 578 validation points, not to the average of the nine logfiles.
[Results table damaged during extraction: the column headers and several row labels were lost, leaving per-logfile 3rd-quartile errors (in meters) and floor hit rates (in %) for the competing teams, including UMinho, BlockDox, FHWS and Marauder, that can no longer be reliably aligned to logfiles.]
Figure 3. Cumulative distribution function (CDF) of the positioning error plus floor/building penalties obtained by each competitor.
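The scoring described above (horizontal error plus a penalty for floor mismatches, summarized by the 3rd quartile and the floor hit rate) can be sketched as follows. The 15 m per-floor penalty, the local metric coordinate frame, and the nearest-rank quartile rule are assumptions for illustration, not the competition's official scoring code.

```python
import math

def scored_errors(est, truth, floor_penalty=15.0):
    """Horizontal error plus an assumed fixed penalty per floor of mismatch.

    est, truth: lists of (x, y, floor), with x and y in meters
    in a local metric frame (an assumption for this sketch).
    """
    errs = []
    for (ex, ey, ef), (tx, ty, tf) in zip(est, truth):
        e = math.hypot(ex - tx, ey - ty) + floor_penalty * abs(ef - tf)
        errs.append(e)
    return sorted(errs)

def third_quartile(errs):
    """3rd quartile (75th percentile) by the nearest-rank method."""
    k = max(0, math.ceil(0.75 * len(errs)) - 1)
    return errs[k]

def floor_hit_rate(est, truth):
    """Fraction of points whose estimated floor matches the true floor."""
    hits = sum(1 for (_, _, ef), (_, _, tf) in zip(est, truth) if ef == tf)
    return hits / len(truth)
```

Sorting the penalized errors also yields the CDF plotted in Figure 3 directly: each sorted error value corresponds to a cumulative probability of rank/N.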