
Learning Deep Representations for Video-Based Intake Gesture Detection.

Philipp V Rouast, Marc T P Adam.   

Abstract

Automatic detection of individual intake gestures during eating occasions has the potential to improve dietary monitoring and support dietary recommendations. Existing studies typically make use of on-body solutions such as inertial and audio sensors, while video is used as ground truth. Intake gesture detection directly based on video has rarely been attempted. In this study, we address this gap and show that deep learning architectures can successfully be applied to the problem of video-based detection of intake gestures. For this purpose, we collect and label video data of eating occasions using 360-degree video of 102 participants. Applying state-of-the-art approaches from video action recognition, our results show that (1) the best model achieves an F1 score of 0.858, (2) appearance features contribute more than motion features, and (3) temporal context in the form of multiple video frames is essential for top model performance.
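The F1 score reported above is the harmonic mean of precision and recall, a standard metric for per-gesture detection. A minimal sketch of the computation (the precision and recall values in the usage example are hypothetical, chosen only to illustrate a value near the reported 0.858; they are not taken from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example values, not from the paper:
print(round(f1_score(0.90, 0.82), 3))  # prints 0.858
```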


Year:  2019        PMID: 31567103     DOI: 10.1109/JBHI.2019.2942845

Source DB:  PubMed          Journal:  IEEE J Biomed Health Inform        ISSN: 2168-2194            Impact factor:   5.772


Related articles:  2 in total

1.  A texture-aware U-Net for identifying incomplete blinking from eye videography.

Authors:  Qinxiang Zheng; Xin Zhang; Juan Zhang; Furong Bai; Shenghai Huang; Jiantao Pu; Wei Chen; Lei Wang
Journal:  Biomed Signal Process Control       Date:  2022-03-16       Impact factor: 5.076

2.  Deep Learning-Based Multimodal Data Fusion: Case Study in Food Intake Episodes Detection Using Wearable Sensors.

Authors:  Nooshin Bahador; Denzil Ferreira; Satu Tamminen; Jukka Kortelainen
Journal:  JMIR Mhealth Uhealth       Date:  2021-01-28       Impact factor: 4.773

