Annalise R Fletcher1, Alan A Wisler2, Megan J McAuliffe1, Kaitlin L Lansford3, Julie M Liss4. 1. Department of Communication Disorders, University of Canterbury, Christchurch, New Zealand. 2. School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe. 3. School of Communication Science & Disorders, Florida State University, Tallahassee. 4. Department of Speech and Hearing Science, Arizona State University, Tempe.
Abstract
Purpose: Behavioral speech modifications have variable effects on the intelligibility of speakers with dysarthria. In the companion article, a significant relationship was found between measures of speakers' baseline speech and their intelligibility gains following cues to speak louder and reduce rate (Fletcher, McAuliffe, Lansford, Sinex, & Liss, 2017). This study reexamines these features and assesses whether automated acoustic assessments can also be used to predict intelligibility gains.

Method: Fifty speakers (7 older individuals and 43 with dysarthria) read a passage in habitual, loud, and slow speaking modes. Automated measurements of long-term average spectra, envelope modulation spectra, and Mel-frequency cepstral coefficients were extracted from short segments of participants' baseline speech. Intelligibility gains were statistically modeled, and the predictive power of the baseline speech measures was assessed using cross-validation.

Results: Statistical models could predict the intelligibility gains of speakers they had not been trained on. The automated acoustic features were better able to predict speakers' improvement in the loud condition than the manual measures reported in the companion article.

Conclusions: These acoustic analyses present a promising tool for rapidly assessing treatment options. Automated measures of baseline speech patterns may enable more selective inclusion criteria and stronger group outcomes within treatment studies.
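The prediction setup described in the Method can be illustrated with a minimal sketch: each speaker contributes a vector of baseline acoustic features (stand-ins here for the LTAS, envelope modulation spectrum, and MFCC summaries), and a regression model is evaluated with leave-one-speaker-out cross-validation, so every speaker's intelligibility gain is predicted by a model that never saw that speaker. The data below are synthetic and the ordinary-least-squares model is an assumption for illustration; neither reflects the study's actual features or statistical models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 speakers x 5 baseline acoustic features
# (illustrative stand-ins, not the study's measurements).
n_speakers, n_features = 50, 5
X = rng.normal(size=(n_speakers, n_features))
true_w = rng.normal(size=n_features)
# Simulated intelligibility gains with measurement noise.
y = X @ true_w + rng.normal(scale=0.5, size=n_speakers)

def loso_predictions(X, y):
    """Leave-one-speaker-out CV: fit OLS (with intercept) on all
    other speakers, then predict the held-out speaker's gain."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr = np.column_stack([np.ones(mask.sum()), X[mask]])
        w, *_ = np.linalg.lstsq(Xtr, y[mask], rcond=None)
        preds[i] = np.concatenate([[1.0], X[i]]) @ w
    return preds

preds = loso_predictions(X, y)
# Agreement between held-out predictions and observed gains.
r = np.corrcoef(preds, y)[0, 1]
```

Because each prediction comes from a model trained without that speaker, the correlation `r` estimates how well baseline features would predict gains for a new, unseen speaker, which is the clinical question the cross-validation in this study addresses.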
References
Cannito, M. P., Suiter, D. M., Beverly, D., Chorna, L., Wolf, T., & Pfeiffer, R. M. (2011). Journal of Voice.
Van Nuffelen, G., Middag, C., De Bodt, M., & Martens, J.-P. (2009). International Journal of Language & Communication Disorders.
Berisha, V., Sandoval, S., Utianski, R., Liss, J., & Spanias, A. (2013). Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.
McAuliffe, M. J., Fletcher, A. R., Kerr, S. E., O'Beirne, G. A., & Anderson, T. (2017). American Journal of Speech-Language Pathology.
Fletcher, A. R., McAuliffe, M. J., Lansford, K. L., Sinex, D. G., & Liss, J. M. (2017). Journal of Speech, Language, and Hearing Research.