
Preterm Infants' Pose Estimation With Spatio-Temporal Features.

Sara Moccia, Lucia Migliorelli, Virgilio Carnielli, Emanuele Frontoni.   

Abstract

OBJECTIVE: Preterm infants' limb monitoring in neonatal intensive care units (NICUs) is of primary importance for assessing infants' health status and motor/cognitive development. Herein, we propose a new approach to preterm infants' limb-pose estimation that exploits spatio-temporal information to detect and track limb joints in depth videos with high reliability.
METHODS: Limb-pose estimation is performed with a deep-learning framework consisting of a detection and a regression convolutional neural network (CNN) for rough and precise joint localization, respectively. The CNNs encode connectivity in the temporal direction through 3D convolution. The proposed framework is assessed in a comprehensive study of sixteen depth videos acquired in actual clinical practice from sixteen preterm infants (the babyPose dataset).
RESULTS: When applied to pose estimation, the median root mean square distance between the estimated and ground-truth poses, computed across all limbs, was 9.06 pixels, outperforming approaches based on spatial features only (11.27 pixels).
CONCLUSION: Results showed that the spatio-temporal features had a significant influence on pose-estimation performance, especially in challenging cases (e.g., homogeneous image intensity).
SIGNIFICANCE: This article significantly advances the state of the art in automatic assessment of preterm infants' health status by introducing spatio-temporal features for limb detection and tracking, and by being the first study to use depth videos acquired in actual clinical practice for limb-pose estimation. The babyPose dataset has been released as the first annotated dataset for infants' pose estimation.
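The RESULTS report a median root mean square distance (RMSD), in pixels, between estimated and ground-truth joint positions. As an illustrative sketch only (not the authors' code; the array shapes and function name are assumptions), a per-limb RMSD over the frames of a depth video could be computed as:

```python
import numpy as np

def rmsd_per_limb(estimated, ground_truth):
    """Root mean square distance (pixels) between estimated and
    ground-truth joint positions for one limb.

    estimated, ground_truth: arrays of shape (n_frames, n_joints, 2)
    holding (x, y) pixel coordinates per frame and joint.
    """
    diffs = estimated - ground_truth            # per-joint coordinate errors
    dists = np.linalg.norm(diffs, axis=-1)      # Euclidean distance per joint
    return np.sqrt(np.mean(dists ** 2))         # RMS over frames and joints
```

The paper's 9.06-pixel figure is then the median of such values over all limbs and videos.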


Year:  2019        PMID: 31870974     DOI: 10.1109/TBME.2019.2961448

Source DB:  PubMed          Journal:  IEEE Trans Biomed Eng        ISSN: 0018-9294            Impact factor:   4.538


  5 in total

1.  Using spatial-temporal ensembles of convolutional neural networks for lumen segmentation in ureteroscopy.

Authors:  Jorge F Lazo; Aldo Marzullo; Sara Moccia; Michele Catellani; Benoit Rosa; Michel de Mathelin; Elena De Momi
Journal:  Int J Comput Assist Radiol Surg       Date:  2021-04-28       Impact factor: 2.924

2.  Machine Learning-Based Automatic Classification of Video Recorded Neonatal Manipulations and Associated Physiological Parameters: A Feasibility Study.

Authors:  Harpreet Singh; Satoshi Kusuda; Ryan M McAdams; Shubham Gupta; Jayant Kalra; Ravneet Kaur; Ritu Das; Saket Anand; Ashish Kumar Pandey; Su Jin Cho; Satish Saluja; Justin J Boutilier; Suchi Saria; Jonathan Palma; Avneet Kaur; Gautam Yadav; Yao Sun
Journal:  Children (Basel)       Date:  2020-12-22

3.  [Review] The future of General Movement Assessment: The role of computer vision and machine learning - A scoping review.

Authors:  Nelson Silva; Dajie Zhang; Tomas Kulvicius; Alexander Gail; Carla Barreiros; Stefanie Lindstaedt; Marc Kraft; Sven Bölte; Luise Poustka; Karin Nielsen-Saines; Florentin Wörgötter; Christa Einspieler; Peter B Marschik
Journal:  Res Dev Disabil       Date:  2021-02-08

4.  [Review] Video-Based Automatic Baby Motion Analysis for Early Neurological Disorder Diagnosis: State of the Art and Future Directions.

Authors:  Marco Leo; Giuseppe Massimo Bernava; Pierluigi Carcagnì; Cosimo Distante
Journal:  Sensors (Basel)       Date:  2022-01-24       Impact factor: 3.576

5.  Deep learning-based quantitative analyses of spontaneous movements and their association with early neurological development in preterm infants.

Authors:  Hyun Iee Shin; Hyung-Ik Shin; Moon Suk Bang; Don-Kyu Kim; Seung Han Shin; Ee-Kyung Kim; Yoo-Jin Kim; Eun Sun Lee; Seul Gi Park; Hye Min Ji; Woo Hyung Lee
Journal:  Sci Rep       Date:  2022-02-24       Impact factor: 4.379

