Sajid Nazir, Scott Newey, R Justin Irvine, Fabio Verdicchio, Paul Davidson, Gorry Fairhurst, René van der Wal.
Abstract
The widespread availability of relatively cheap, reliable and easy-to-use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open-source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which Passive Infrared (PIR) triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive and false negative images across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
Year: 2017 PMID: 28076444 PMCID: PMC5226779 DOI: 10.1371/journal.pone.0169758
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
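The confirmatory-sensing concept described in the abstract — accepting a PIR trigger only when a second modality (radar or pixel change) also reports motion — can be sketched as follows. This is a hypothetical illustration, not the actual WiseEye source code; the `SensorReadings` class and `confirmed_trigger` function are invented names for the purpose of the sketch.

```python
"""Sketch of confirmatory sensing: a PIR trigger is accepted as a
capture event only if at least one secondary modality (radar or
frame-differencing pixel change) confirms it, suppressing false
positives. All names here are illustrative, not the WiseEye API."""

from dataclasses import dataclass


@dataclass
class SensorReadings:
    pir: bool           # Passive Infrared trigger fired
    radar: bool         # radar motion flag (hypothetical sensor)
    pixel_change: bool  # pixel-change (frame-differencing) motion flag


def confirmed_trigger(readings: SensorReadings) -> bool:
    """Return True only when the PIR trigger is confirmed by at
    least one other modality."""
    if not readings.pir:
        return False
    return readings.radar or readings.pixel_change


# A PIR-only event is rejected as a likely false positive:
print(confirmed_trigger(SensorReadings(pir=True, radar=False, pixel_change=False)))  # False
# PIR confirmed by pixel change is accepted:
print(confirmed_trigger(SensorReadings(pir=True, radar=False, pixel_change=True)))   # True
```

In a deployed system the two confirmation checks would be polled within a short window after the PIR event rather than sampled simultaneously, but the accept/reject logic is the same.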
Fig 1Raspberry Pi with the core components of WiseEye and examples of sensors and peripherals that could be added.
Fig 2The WiseEye system.
(a) Simplified diagram showing the interconnection of components, with red lines indicating the power supply and green lines depicting the control/information flow to/from the Raspberry Pi. (b) Inside view of WiseEye. The waterproof box measures 150 × 200 × 80 mm.
Fig 3The layout of the rooftop study area, showing the positions of the cameras, the detection zone, and the camera field of view for WiseEye.
The numbers on the grid indicate the locations and counts of false positive events outside the field of view of the camera.
Comparative data for the WiseEye and Bushnell cameras during the three-day rooftop trial.
The tabulated numbers refer to the number of images recorded or, in the case of false negatives, the number of potential detections missed.
| Image Category | Trigger/Operation | Day 1: Bushnell | Day 1: WiseEye | Day 2: Bushnell | Day 2: WiseEye | Day 3: Bushnell | Day 3: WiseEye |
|---|---|---|---|---|---|---|---|
| True Positive | PIR | 3 | 29 | 4 | 6 | 38 | 97 |
| False Positive | PIR | 0 | 6 | 1 | 1 | 8 | 39 |
| False Positive | After background subtraction | N/A | 0 | N/A | 0 | N/A | 0 |
| False Negative | Video | 6 | 3 | 4 | 4 | 10 | 8 |
PIR—Passive Infrared sensor, N/A—Not applicable.
Fig 4The image processing steps to determine whether an actual target object is present in the field of view by comparing the motion-activated image with a background image.
(a) A time-lapse image (with no target) used as the background image for subsequent operations. The Region of Interest (RoI) used for image processing is indicated by a red dashed rectangle. (b) Motion-activated image showing the objects of interest and the RoI. (c) Difference image between the RoIs of (a) and (b), showing the birds as clusters of white “difference” pixels.
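The background-subtraction step illustrated in Fig 4 can be sketched with NumPy alone. This is an illustrative reconstruction, not the WiseEye implementation: the function name `target_present`, the RoI convention, and the threshold values are all assumptions made for the example.

```python
"""Sketch of the Fig 4 pipeline: compare a motion-activated frame
against a time-lapse background frame inside a Region of Interest
(RoI); pixels whose absolute difference exceeds a threshold form the
"difference" image, and the event is kept only if enough pixels
changed. Names and parameter values are illustrative assumptions."""

import numpy as np


def target_present(background: np.ndarray,
                   frame: np.ndarray,
                   roi: tuple,            # (row0, row1, col0, col1)
                   threshold: int = 30,   # per-pixel intensity change
                   min_pixels: int = 50   # cluster size to count as a target
                   ) -> bool:
    r0, r1, c0, c1 = roi
    # Cast to a signed type so the subtraction cannot wrap around.
    bg = background[r0:r1, c0:c1].astype(np.int16)
    fg = frame[r0:r1, c0:c1].astype(np.int16)
    diff = np.abs(fg - bg)                      # difference image (Fig 4c)
    changed = np.count_nonzero(diff > threshold)
    return changed >= min_pixels


# Synthetic example: a bright 10x10 "bird" patch appears inside the RoI.
bg = np.zeros((100, 100), dtype=np.uint8)
frame = bg.copy()
frame[40:50, 40:50] = 200                       # 100 changed pixels
print(target_present(bg, frame, roi=(20, 80, 20, 80)))  # True
print(target_present(bg, bg, roi=(20, 80, 20, 80)))     # False
```

Restricting the comparison to the RoI and requiring a minimum cluster of changed pixels is what allows PIR events with no visible target (the "After background subtraction" row of the table above) to be discarded automatically.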