Waseem Abbas, David Masip Rodo.
Abstract
Neuroscience has traditionally relied on manually observing laboratory animals in controlled environments. Researchers usually record animals behaving freely or in a restrained manner and then annotate the data by hand. Manual annotation is undesirable for three reasons: (i) it is time-consuming, (ii) it is prone to human error, and (iii) no two human annotators agree completely, so it is not reproducible. Consequently, automated annotation of such data has gained traction because it is efficient and replicable. Automatic annotation of neuroscience data usually relies on computer vision and machine learning techniques. In this article, we cover most of the approaches researchers have taken for locomotion and gesture tracking of a specific class of laboratory animals, i.e., rodents. We divide these papers into categories based on the hardware they use and the software approach they take, and summarize their strengths and weaknesses.
Keywords: automated annotation; behavioral phenotyping; gesture tracking; locomotion tracking; machine learning; neuroscience
Year: 2019 PMID: 31349617 PMCID: PMC6696321 DOI: 10.3390/s19153274
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
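The abstract's reproducibility argument (no two human annotators agree completely) is commonly quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is illustrative only and is not from the survey; the per-frame behavior labels and the two raters are hypothetical.

```python
# Minimal sketch: inter-annotator agreement via Cohen's kappa.
# The label sequences below are hypothetical per-frame behavior annotations
# from two human raters; they are not data from the survey.
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two annotators over the same frames."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of frames where both raters gave the same label.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's label marginals.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)


a = ["walk", "rear", "walk", "groom", "walk", "rear"]
b = ["walk", "rear", "groom", "groom", "walk", "walk"]
print(round(cohens_kappa(a, b), 3))  # prints 0.478
```

A kappa well below 1.0, as here, is the kind of annotator disagreement that motivates the automated approaches the survey compares.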
Figure 1. Frontal view of a mouse with its moving limbs marked.
Figure 2. Lateral view of a mouse with its moving limbs marked.
Figure 3. Hierarchy of the categories of approaches covered in this survey.
Comparison of different approaches. Legend: invasive: approaches that require surgery to implant tracking markers; semi-invasive: approaches that need markers but no surgery to insert them; non-invasive: no markers needed. Real time means the system can process frames at the same rate they are acquired. The last column notes whether the approach needs specialized equipment beyond standard video cameras and a housing setup.
| Approach | Type | Availability | Reported accuracy / reliability | Real time | Specialized equipment (invasiveness) |
|---|---|---|---|---|---|
| [ | Commercial | Paid | Comparison with ground truth not provided; one paper reports reproducibility of 2.65% max SD | Yes | Yes |
| [ | Commercial | Paid | Comparison with ground truth not provided; one paper reports reproducibility of 1.57% max SD | Yes | Yes |
| [ | Research | Data and code for demo available at | Tracking performance not reported; behavioral classification of 12 traits reported at a maximum of 71% | Tracking real time, classification offline | Yes |
| [ | Research | Not available | Tracking: SD of only 0.034% against ground truth; max SD of 1.71 degrees in joint-angle estimation | Real-time leg and joint tracking | Yes, invasive |
| [ | Research | Not available | Tracking performance not reported explicitly | Real-time whisker tracking | Yes, semi-invasive |
| [ | Research | Available on request | Whisker tracking performance not reported explicitly | Real-time single-whisker tracking | Yes, semi-invasive |
| [ | Research | Not available | Head motion tracked correctly with a max false positive rate of 13% | Real-time head and snout tracking | Yes, semi-invasive |
| [ | Research | Not available | Head motion tracked continuously with a reported SD of only 0.5 mm | Real-time head and snout tracking | Yes, semi-invasive |
| [ | Research | Not available | Head motion tracked with an accuracy of 96.3%; tracking reproducible over multiple studies with a correlation coefficient of 0.78 | Real-time head tracking | Yes, semi-invasive |
| [ | Research | Code and demo data available at | Correlation between whisking amplitude and velocity reported as a reliability measure, R = 0.89 | Offline head and whisker tracking | No, invasive |
| [ | Research | Not available | Tracking and gait prediction with 95% confidence; deviation between human annotator and computer of 8% | Offline | Yes, semi-invasive |
| [ | Research | Not available | Paw tracked with an accuracy of 88.5% on a transparent floor and 83.2% on an opaque floor | Offline | Yes, semi-invasive |
| [ | Research | Code available at | Tail and paws tracked with an accuracy >90% | Real time | Yes, semi-invasive |
| [ | Research | Not available | 5-class behavioral classification: accuracy of 95.34% in bright conditions and 89.4% in dark conditions | Offline | Yes, non-invasive |
| [ | Research | Not available | 6-class behavioral accuracy: 66.9%; 4-class behavioral accuracy: 76.3% | Offline | Yes, non-invasive |
| [ | Research | Code available at | Whisker detection rate: 76.9%; peak spatial error in whisker detection: 10 pixels | Offline | Yes, non-invasive |
| [ | Research | Not available | Peak deviation between human and automated annotation: 0.5 mm with a camera resolution of 6 pixels/mm | Offline | Yes, non-invasive |
| [ | Research | Not available | Tracking accuracy >90% after the algorithm was assisted by human users in 3–5% of frames | Offline | Yes, semi-invasive |
| [ | Research | Code available at | Max deviation of 17.7% between human and automated whisker annotation | Offline | Yes, non-invasive |
| [ | Research | Not available | Maximum paw detection error: 5.9%; minimum error: 0.4% | Offline | No, non-invasive |
| [ | Research | Source code at | Behavioral classification: 1% false positive rate | Offline | No, semi-invasive |
| [ | Research | Source code available at | Whisker tracing accuracy: max error of 0.45 pixels | Offline | No, non-invasive |
| [ | Research | Not available | Correlation with annotated data: r = 0.78 for whiskers, r = 0.85 for limbs | Real time | No, non-invasive |
| [ | Research | Code available at | Velocity computed by AGATHA deviated from manually calculated velocity by 1.5% | Real time | No, non-invasive |
| [ | Research | Code available at | Detected pose matched ground truth with an accuracy of | Real time on GPUs | No, non-invasive |
| [ | Research | Code available at | No performance metric reported | Offline | No, non-invasive |