Literature DB record: PMID 29207564
Dawid Połap, Karolina Kęsik, Kamil Książek, Marcin Woźniak.
Abstract
Augmented reality (AR) is becoming increasingly popular due to its numerous applications. This is especially evident in games, medicine, education, and other areas that support our everyday activities. Moreover, this kind of computer system not only improves our vision and our perception of the world that surrounds us, but also adds elements, modifies existing ones, and gives additional guidance. In this article, we focus on real-time evaluation of the surrounding environment in order to inform the user about impending obstacles. The proposed solution is based on a hybrid architecture that is capable of evaluating as much incoming information as possible. The proposed solution has been tested, and the advantages and disadvantages of different approaches to this type of vision are discussed.
Keywords: augmented reality; convolutional neural network; hybrid architecture; obstacle detection; spiking neural network
Year: 2017 PMID: 29207564 PMCID: PMC5751448 DOI: 10.3390/s17122803
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Graph of entropy values for spectrograms with respect to parameters (on ), (on ), and average entropy (on ).
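Figure 1 reports entropy values computed over audio spectrograms; the specific parameters on each axis are not preserved in this record. As a rough illustration only, the Shannon entropy of a normalized magnitude spectrogram can be computed as in the sketch below (the STFT window length and the normalization are assumptions, not taken from the paper).

```python
import numpy as np
from scipy import signal


def spectrogram_entropy(samples, fs, nperseg=256):
    """Shannon entropy of a magnitude spectrogram.

    Illustrative only: the window length and the normalization scheme
    are assumptions, not the paper's configuration.
    """
    # Magnitude spectrogram via the short-time Fourier transform.
    _, _, sxx = signal.spectrogram(samples, fs=fs, nperseg=nperseg)
    # Normalize to a probability distribution over time-frequency bins.
    p = sxx / np.sum(sxx)
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))


if __name__ == "__main__":
    # Example: entropy of one second of white noise sampled at 16 kHz.
    fs = 16000
    noise = np.random.default_rng(0).standard_normal(fs)
    print(spectrogram_entropy(noise, fs))
```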
Figure 2. Sample presentation of the idea for an augmented reality detection system that uses the proposed deep learning techniques to evaluate sensor readings in real time.
Figure 3. Visualization of the proposed architecture for data analysis in order to inform system users about possible obstacles.
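The keywords and the per-network error columns in the tables below indicate a composition of three models: a CNN over camera frames, a CNN over audio, and a spiking neural network. This record does not state how their outputs are combined, so the sketch below shows only one plausible late-fusion scheme (weighted averaging of per-model obstacle scores); the weights, threshold, and names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class ModelOutputs:
    """Obstacle probabilities in [0, 1] from the three assumed models."""
    cnn_frames: float   # CNN applied to camera frames
    cnn_audio: float    # CNN applied to audio spectrograms
    snn: float          # spiking neural network on sensor readings


def fuse_scores(out: ModelOutputs,
                weights=(0.5, 0.25, 0.25),
                threshold: float = 0.5) -> bool:
    """Late fusion by weighted averaging; weights and threshold are illustrative."""
    score = (weights[0] * out.cnn_frames
             + weights[1] * out.cnn_audio
             + weights[2] * out.snn)
    return score >= threshold  # True -> raise an obstacle alert in the AR view


# Example: the frame model is confident, audio and SNN are uncertain.
print(fuse_scores(ModelOutputs(cnn_frames=0.9, cnn_audio=0.4, snn=0.5)))
```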
Table 1. Average correctness for various neural compositions applied for User 1. CNN: convolutional neural network; SNN: spiking neural network.
| CNN Frames Error | CNN for Audio Error | SNN Error | Average Correctness |
|---|---|---|---|
| 0.1 | 0.1 | 0.1 | 32% |
| 0.1 | 0.1 | 0.01 | 35% |
| 0.1 | 0.1 | 0.001 | 36.5% |
| 0.1 | 0.01 | 0.1 | 38% |
| 0.1 | 0.01 | 0.01 | 39% |
| 0.1 | 0.01 | 0.001 | 39.5% |
| 0.1 | 0.001 | 0.1 | 41% |
| 0.1 | 0.001 | 0.01 | 39% |
| 0.1 | 0.001 | 0.001 | 44% |
Table 2. Average correctness for various neural compositions applied for User 2.
| CNN Frames Error | CNN for Audio Error | SNN Error | Average Correctness |
|---|---|---|---|
| 0.01 | 0.1 | 0.1 | 41% |
| 0.01 | 0.1 | 0.01 | 43.5% |
| 0.01 | 0.1 | 0.001 | 44% |
| 0.01 | 0.01 | 0.1 | 43% |
| 0.01 | 0.01 | 0.01 | 46% |
| 0.01 | 0.01 | 0.001 | 49% |
| 0.01 | 0.001 | 0.1 | 48.5% |
| 0.01 | 0.001 | 0.01 | 53% |
| 0.01 | 0.001 | 0.001 | 55% |
Table 3. Average correctness for various neural compositions applied for User 3.
| CNN Frames Error | CNN for Audio Error | SNN Error | Average Correctness |
|---|---|---|---|
| 0.001 | 0.1 | 0.1 | 63% |
| 0.001 | 0.1 | 0.01 | 64.5% |
| 0.001 | 0.1 | 0.001 | 66% |
| 0.001 | 0.01 | 0.1 | 62% |
| 0.001 | 0.01 | 0.01 | 68% |
| 0.001 | 0.01 | 0.001 | 71% |
| 0.001 | 0.001 | 0.1 | 65% |
| 0.001 | 0.001 | 0.01 | 76% |
| 0.001 | 0.001 | 0.001 | 79% |
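The error values in the three tables above appear to be per-network training-error thresholds, and lower thresholds consistently yield higher average correctness. A minimal sketch of the general idea of training until an error threshold is reached is given below; the toy model, loss, and learning rate are placeholders and not the paper's configuration.

```python
import numpy as np


def train_until_error(x, y, error_threshold=0.01, lr=0.1, max_epochs=10000):
    """Toy gradient descent on a linear model that stops once the mean squared
    error drops below the given threshold; illustrates only the role of the
    'error value' used in the tables (e.g. 0.1, 0.01, 0.001)."""
    rng = np.random.default_rng(0)
    w, b = rng.standard_normal(), 0.0
    mse = float("inf")
    for epoch in range(max_epochs):
        pred = w * x + b
        err = pred - y
        mse = float(np.mean(err ** 2))
        if mse < error_threshold:
            return w, b, epoch, mse
        # Gradient step for the linear model.
        w -= lr * float(np.mean(2 * err * x))
        b -= lr * float(np.mean(2 * err))
    return w, b, max_epochs, mse


x = np.linspace(0, 1, 50)
y = 3 * x + 0.5
print(train_until_error(x, y, error_threshold=0.001))
```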
Table 4. Results of user verification for the proposed methodology.
| User | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1912 | 614 | 265 | 402 | 0.79 | 0.85 | 0.74 | 0.82 | 0.7 |
| 2 | 1405 | 231 | 190 | 600 | 0.67 | 0.78 | 0.64 | 0.7 | 0.54 |
| 3 | 2512 | 401 | 111 | 174 | 0.91 | 0.95 | 0.9 | 0.94 | 0.78 |
| Average | 1943 | 415.33 | 188.67 | 392 | 0.79 | 0.86 | 0.76 | 0.82 | 0.68 |
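The column headers of the verification table are not preserved in this record. If the four count columns are confusion-matrix counts (true positives, true negatives, false positives, and false negatives, as in Figures 4, 5, and 6), the standard verification metrics follow from the usual definitions; the sketch below computes them for illustration, with that column mapping treated as an assumption.

```python
def verification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard confusion-matrix metrics; the mapping of the table's unlabeled
    columns to these counts is an assumption, not stated in the record."""
    total = tp + tn + fp + fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity
    return {
        "accuracy": (tp + tn) / total,
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }


# Example with the first row of the table, assuming the counts are TP, TN, FP, FN;
# under that assumption the accuracy comes out to about 0.79.
print(verification_metrics(1912, 614, 265, 402))
```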
Figure 4. Confusion matrix for User 1.
Figure 5. Confusion matrix for User 2.
Figure 6. Confusion matrix for User 3.
Figure 7. Sample frames from selected video files recorded with different devices in different weather conditions. Yellow circles indicate detected obstacles. The users recorded in various conditions: from moving vehicles with low visibility (User 1), in corridors and buildings with various levels of lighting (User 2), and in an open city space with good visibility and lighting (User 3).