Lorena Calavia, Carlos Baladrón, Javier M. Aguiar, Belén Carro, Antonio Sánchez-Esguevillas.
Abstract
This paper proposes an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, allowing a large number of cameras to be deployed and making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. The system therefore employs a high-level conceptual language that is easy for human operators to understand, can raise enriched alarms with descriptions of what is happening in the image, and can automate reactions such as alerting the appropriate emergency services through the Smart City safety network.
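The abstract describes mapping low-level trajectory parameters to high-level concepts via ontology rules (the paper itself uses OWL/SWRL, shown in Figure 3). The plain-Python sketch below is purely illustrative of that rule-based classification idea; the concept names, the `speed`/`zone` attributes, and the thresholds are hypothetical examples, not taken from the paper.

```python
# Illustrative sketch of rule-based alarm classification, in the spirit of
# the paper's OWL/SWRL reasoning. All concept names, attributes and
# thresholds here are hypothetical, not taken from the paper.

def classify(obj):
    """Map low-level trajectory parameters to a high-level concept."""
    if obj["speed"] > 3.0 and obj["zone"] == "sidewalk":
        return "RunningPedestrian"
    if obj["speed"] == 0.0 and obj["zone"] == "road":
        return "StoppedVehicle"
    return "NormalObject"

def raise_alarm(obj):
    """Emit an enriched, human-readable alarm for abnormal concepts."""
    concept = classify(obj)
    if concept != "NormalObject":
        return f"ALARM: {concept} detected in zone '{obj['zone']}'"
    return None

print(raise_alarm({"speed": 0.0, "zone": "road"}))
# → ALARM: StoppedVehicle detected in zone 'road'
```

In the actual system this classification is done by a semantic reasoner over an ontology, which also lets operators extend the rule set in a high-level conceptual language rather than in code.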
Keywords: safety and security; semantics; smart sensors; surveillance
Year: 2012 PMID: 23112607 PMCID: PMC3472835 DOI: 10.3390/s120810407
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Basic scheme of the proposed surveillance system.
Figure 2. Two example applications of route detection (one on a synthetic video, one on a real video). Green and blue lines are the center lines of routes; yellow and red lines are the route envelopes. Cyan and magenta “x” marks are entry and exit points, respectively (points where an object appeared or disappeared). White squares are sources (clusters of entry points).
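Figure 2's “sources” are obtained by clustering entry points. A minimal sketch of such grouping is shown below with a simple greedy distance-threshold clustering; the threshold value and the sample points are hypothetical, and the paper's actual clustering algorithm may differ.

```python
# Minimal sketch of grouping entry points into "sources" (the white squares
# in Figure 2) via greedy distance-threshold clustering. The threshold and
# sample coordinates are hypothetical examples.
import math

def cluster_points(points, threshold=10.0):
    """Assign each point to the first cluster whose centroid lies within
    `threshold`; otherwise start a new cluster."""
    clusters = []  # each cluster is a list of (x, y) points
    for p in points:
        for c in clusters:
            cx = sum(q[0] for q in c) / len(c)
            cy = sum(q[1] for q in c) / len(c)
            if math.hypot(p[0] - cx, p[1] - cy) <= threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

entries = [(0, 0), (2, 1), (50, 50), (51, 49)]
print(len(cluster_points(entries)))  # → 2
```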
Figure 3. OWL and SWRL code.
Figure 4. Hierarchical structure of the ontology.
Figure 5. Unclassified individual.
Figure 6. Classified individual.
Figure 7. Example of semantic reasoning for the pedestrian use case.
Figure 8. Example of semantic reasoning for the vehicle use case.
Figure 9. Subway alarm.
Performance results for the route detector. N is the number of points per trajectory.

| Scenario | N | Video length (s) | Processing time (s) | Trajectories | Real routes | Detected routes | False routes | Detection rate | False positive rate |
|---|---|---|---|---|---|---|---|---|---|
| Synthetic video of a highway with two lanes, one direction per lane | 5 | 600 | 318 | 359 | 2 | 2 | 2 | 100% | 100% |
| | 10 | 600 | 283 | 359 | 2 | 2 | 0 | 100% | 0% |
| | 20 | 600 | 402 | 359 | 2 | 2 | 0 | 100% | 0% |
| Synthetic video of a highway with four lanes, two per direction | 5 | 480 | 26 | 58 | 4 | 3 | 1 | 75% | 25% |
| | 10 | 480 | 52 | 58 | 4 | 4 | 0 | 100% | 0% |
| | 20 | 480 | 123 | 58 | 4 | 4 | 0 | 100% | 0% |
| Real video of a complex intersection with three two-way roads and several sidewalks | 5 | 180 | 118 | 191 | 6 (cars) | 6 (cars) | 1 (cars) | 100% (cars) | 16% (cars) |
| | 10 | 180 | 284 | 191 | 6 (cars) | 6 (cars) | 1 (cars) | 100% (cars) | 16% (cars) |
| | 20 | 180 | 1696 | 191 | 6 (cars) | 5 (cars) | 2 (cars) | 86% (cars) | 33% (cars) |
| Part of the MIT video benchmark showing a complex intersection with roads and sidewalks | 5 | 202 | 196 | 145 | 6 (cars) | 3 (cars) | 3 (cars) | 50% (cars) | 50% (cars) |
| | 10 | 202 | 831 | 145 | 6 (cars) | 4 (cars) | 2 (cars) | 66% (cars) | 33% (cars) |
Figure 10. Frame processing time against the number of objects and routes in the image and the points-per-trajectory parameter.