Matthew R Whiteway, Dan Biderman, Yoni Friedman, Mario Dipoppa, E Kelly Buchanan, Anqi Wu, John Zhou, Niccolò Bonacchi, Nathaniel J Miska, Jean-Paul Noel, Erica Rodriguez, Michael Schartner, Karolina Socha, Anne E Urai, C Daniel Salzman, John P Cunningham, Liam Paninski.
Abstract
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g., DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
Year: 2021 PMID: 34550974 PMCID: PMC8489729 DOI: 10.1371/journal.pcbi.1009439
Source DB: PubMed Journal: PLoS Comput Biol ISSN: 1553-734X Impact factor: 4.779