
Snoring classified: The Munich-Passau Snore Sound Corpus.

Christoph Janott, Maximilian Schmitt, Yue Zhang, Kun Qian, Vedhas Pandit, Zixing Zhang, Clemens Heiser, Winfried Hohenhorst, Michael Herzog, Werner Hemmert, Björn Schuller.

Abstract

OBJECTIVE: Snoring can be excited at different locations within the upper airways during sleep. It was hypothesised that the excitation location correlates with distinct acoustic characteristics of the snoring noise. To verify this hypothesis, a database of snore sounds has been developed, labelled with the location of sound excitation.
METHODS: Video and audio recordings taken during drug-induced sleep endoscopy (DISE) examinations at three medical centres have been semi-automatically screened for snore events, which have subsequently been classified by ENT experts into four classes based on the VOTE classification. The resulting dataset, containing 828 snore events from 219 subjects, has been split into Train, Development, and Test sets. An SVM classifier has been trained using low-level descriptors (LLDs) related to energy, spectral features, mel-frequency cepstral coefficients (MFCCs), formants, voicing, harmonic-to-noise ratio (HNR), spectral harmonicity, pitch, and microprosodic features.
RESULTS: An unweighted average recall (UAR) of 55.8% was achieved using the full set of LLDs including formants. The best-performing subset was the MFCC-related set of LLDs. A strong difference in performance was observed between the permutations of the Train, Development, and Test partitions, which may be caused by the relatively low number of subjects in the smaller classes of the strongly unbalanced dataset.
CONCLUSION: A database of snoring sounds is presented, classified according to the sound excitation location based on objective criteria and verifiable video material. Using the database, it has been demonstrated that machine classifiers can distinguish different excitation locations of snoring sounds in the upper airway based on acoustic parameters.
Copyright © 2018 Elsevier Ltd. All rights reserved.
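For context on the RESULTS above: unweighted average recall (UAR) is the mean of the per-class recalls, so each of the four VOTE classes counts equally regardless of how many samples it contributes, which is why it is the metric of choice for a strongly unbalanced dataset like this one. A minimal pure-Python sketch (the function name and the example labels are illustrative, not taken from the paper):

```python
from collections import defaultdict

def unweighted_average_recall(y_true, y_pred):
    """UAR: mean of per-class recalls. Each class is weighted equally,
    however few samples it contains, unlike plain accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Hypothetical VOTE labels: V=velum, O=oropharyngeal, T=tongue base, E=epiglottis
y_true = ["V", "V", "V", "O", "T", "E"]
y_pred = ["V", "V", "O", "O", "T", "V"]
print(round(unweighted_average_recall(y_true, y_pred), 3))  # → 0.667
```

Note how the single misclassified "E" sample pulls the UAR down as much as an entire misclassified majority class would; a plain accuracy score on the same labels would read 4/6 ≈ 0.667 only by coincidence of this tiny example, and in general diverges from UAR on unbalanced data.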

Keywords:  Drug-Induced Sleep Endoscopy; Machine learning; Obstructive Sleep Apnea; Primary snoring; Snore sound classification

Mesh:

Year:  2018        PMID: 29407995     DOI: 10.1016/j.compbiomed.2018.01.007

Source DB:  PubMed          Journal:  Comput Biol Med        ISSN: 0010-4825            Impact factor:   4.589


  5 in total

1.  [VOTE versus ACLTE: comparison of two snoring noise classifications using machine learning methods].

Authors:  C Janott; M Schmitt; C Heiser; W Hohenhorst; M Herzog; M Carrasco Llatas; W Hemmert; B Schuller
Journal:  HNO       Date:  2019-09       Impact factor: 1.284

2.  Computer Audition for Fighting the SARS-CoV-2 Corona Crisis-Introducing the Multitask Speech Corpus for COVID-19.

Authors:  Kun Qian; Maximilian Schmitt; Huaiyuan Zheng; Tomoya Koike; Jing Han; Juan Liu; Wei Ji; Junjun Duan; Meishu Song; Zijiang Yang; Zhao Ren; Shuo Liu; Zixing Zhang; Yoshiharu Yamamoto; Bjorn W Schuller
Journal:  IEEE Internet Things J       Date:  2021-03-22       Impact factor: 10.238

3.  Automatic classification of excitation location of snoring sounds.

Authors:  Jingpeng Sun; Xiyuan Hu; Silong Peng; Chung-Kang Peng; Yan Ma
Journal:  J Clin Sleep Med       Date:  2021-05-01       Impact factor: 4.062

4.  DeepSpectrumLite: A Power-Efficient Transfer Learning Framework for Embedded Speech and Audio Processing From Decentralized Data.

Authors:  Shahin Amiriparian; Tobias Hübner; Vincent Karas; Maurice Gerczuk; Sandra Ottl; Björn W Schuller
Journal:  Front Artif Intell       Date:  2022-03-17

5.  PSG-Audio, a scored polysomnography dataset with simultaneous audio recordings for sleep apnea studies.

Authors:  Georgia Korompili; Anastasia Amfilochiou; Lampros Kokkalas; Stelios A Mitilineos; Nicolas-Alexander Tatlas; Marios Kouvaras; Emmanouil Kastanakis; Chrysoula Maniou; Stelios M Potirakis
Journal:  Sci Data       Date:  2021-08-03       Impact factor: 6.444

