| Literature DB >> 29060105 |
Vasileios Papapanagiotou, Christos Diou, Anastasios Delopoulos.
Abstract
Detecting chewing sounds from a microphone placed inside the outer ear for eating behaviour monitoring still remains a challenging task. This is mainly due to the difficulty in discriminating non-chewing sounds (e.g. speech or sounds caused by walking) from chews, as well as to the high variability of the chewing sounds of different food types. Most approaches rely on detecting distinctive structures on the sound wave, or on extracting a set of features and using a classifier to detect chews. In this work, we propose to use feature learning in the time domain with 1-dimensional convolutional neural networks for chewing detection. We apply a network of convolutional layers followed by fully connected layers directly on windows of the audio samples to detect chewing activity, and then aggregate individual chews to eating events. Experimental results on a large, semi-free living dataset collected in the context of the SPLENDID project indicate high effectiveness, with an accuracy of 0.980 and F1 score of 0.883.
Mesh:
Year: 2017 PMID: 29060105 DOI: 10.1109/EMBC.2017.8037060
Source DB: PubMed Journal: Conf Proc IEEE Eng Med Biol Soc ISSN: 1557-170X
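The abstract describes applying 1-D convolutional layers followed by fully connected layers directly to windows of raw audio samples and thresholding the output as chew / no-chew. The following is a minimal NumPy sketch of that forward pass; the window length, sampling rate, filter bank size, kernel width, and stride below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels, stride):
    """Valid 1-D convolution of a mono signal with a bank of kernels, then ReLU."""
    n_filters, width = kernels.shape
    n_out = (len(x) - width) // stride + 1
    out = np.empty((n_filters, n_out))
    for i in range(n_out):
        # Each output column is the filter-bank response at one time offset.
        out[:, i] = kernels @ x[i * stride : i * stride + width]
    return np.maximum(out, 0.0)

# Hypothetical audio window of 1000 raw samples (the paper's actual
# window length and sampling rate are not given in this record).
window = rng.standard_normal(1000)

# One convolutional layer: 8 filters of width 64, stride 8 (all assumed).
feat = conv1d_relu(window, rng.standard_normal((8, 64)) * 0.05, stride=8)

# Fully connected output unit with a sigmoid -> chew probability for the window.
flat = feat.ravel()
w = rng.standard_normal(flat.size) * 0.01
p_chew = 1.0 / (1.0 + np.exp(-float(flat @ w)))
print(f"P(chew) = {p_chew:.3f}")
```

In the pipeline the abstract outlines, such per-window decisions would then be aggregated over time into individual chews and, further, into eating events; that aggregation step is not sketched here.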