Matthew R Hoffman, Ketan Surender, Erin E Devine, Jack J Jiang
Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, University of Wisconsin-Madison School of Medicine and Public Health, Madison, Wisconsin, USA.

Abstract
OBJECTIVE: Laryngeal function can be evaluated from multiple perspectives, including aerodynamic input, acoustic output, and mucosal wave vibratory characteristics. To determine the classifying power of each of these, we used a multilayer perceptron artificial neural network (ANN) to classify data as normal, glottic insufficiency, or tension asymmetry.
STUDY DESIGN: Case series analyzing data obtained from excised larynges simulating different conditions.
METHODS: Aerodynamic, acoustic, and videokymographic data were collected from excised canine larynges simulating normal, glottic insufficiency, and tension asymmetry. Classification of samples was performed using a multilayer perceptron ANN.
RESULTS: A classification accuracy of 84% was achieved when including all parameters. Classification accuracy dropped below 75% when using only aerodynamic or acoustic parameters and below 65% when using only videokymographic parameters.
CONCLUSIONS: Samples were classified with the greatest accuracy when using a wide range of parameters. Decreased classification accuracies for individual groups of parameters demonstrate the importance of a comprehensive voice assessment when evaluating dysphonia.
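The classifier described above is a standard multilayer perceptron trained on feature vectors and producing one of three class labels. As a minimal illustrative sketch only (the feature values, network size, and training settings below are invented stand-ins, not the authors' data or model), a one-hidden-layer perceptron with a softmax output can be trained on synthetic three-class data like so:

```python
# Minimal sketch of a multilayer perceptron (MLP) three-class classifier.
# All data here are synthetic stand-ins for the paper's aerodynamic,
# acoustic, and videokymographic features; nothing below reproduces the
# authors' dataset or their exact network architecture.
import numpy as np

rng = np.random.default_rng(0)

# Three synthetic classes (stand-ins for normal, glottic insufficiency,
# tension asymmetry), each drawn around a different mean feature vector.
n_per_class, n_features = 60, 6
means = np.array([[0.0] * 6, [2.0] * 6, [-2.0] * 6])
X = np.vstack([rng.normal(m, 1.0, (n_per_class, n_features)) for m in means])
y = np.repeat(np.arange(3), n_per_class)
T = np.eye(3)[y]  # one-hot targets

# One hidden layer of tanh units, softmax output.
hidden = 16
W1 = rng.normal(0, 0.1, (n_features, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, 3)); b2 = np.zeros(3)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    Z = Z - Z.max(axis=1, keepdims=True)       # numerical stability
    P = np.exp(Z); P /= P.sum(axis=1, keepdims=True)
    return H, P

# Batch gradient descent on cross-entropy loss.
lr = 0.1
for _ in range(300):
    H, P = forward(X)
    G = (P - T) / len(X)                        # grad of softmax + cross-entropy
    GH = (G @ W2.T) * (1 - H ** 2)              # backprop through tanh
    W2 -= lr * H.T @ G; b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

_, P = forward(X)
accuracy = (P.argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because the synthetic classes are well separated, this toy network classifies nearly all samples correctly; the paper's point is the analogous comparison when the network is restricted to only one feature group at a time.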