Ehsan Akafi, Mansour Vali, Negin Moradi, Kowsar Baghban.
Abstract
Hypernasality is a frequently occurring resonance disorder in children with cleft palate. Surgery is generally necessary to reduce hypernasality, so an assessment of hypernasality is imperative both to quantify the effect of the surgery and to design the speech therapy sessions that are crucial afterwards. In this paper, a new quantitative method is proposed to estimate hypernasality. The proposed method exploits the fact that an autoregressive (AR) model of the vocal tract is inaccurate for hypernasal speech, because zeros appear in the frequency response of the vocal tract system. Hypernasality was therefore estimated by a quantity computed from the distance between the sequences of cepstrum coefficients extracted from an AR model and from an autoregressive moving average (ARMA) model. K-means clustering and Bayes' theorem were used to classify the subjects' utterances by means of the proposed index. We achieved accuracies of up to 81.12% on utterances and 97.14% on subjects. Since the proposed method requires only computer processing of speech data, it provides a simpler evaluation of hypernasality than other clinical methods.
Keywords: Cepstrum; cleft palate; hypernasality; speech processing; speech therapy
Year: 2013 PMID: 24696798 PMCID: PMC3967423
Source DB: PubMed Journal: J Med Signals Sens ISSN: 2228-7477
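The core idea of the abstract can be sketched in code: fit an all-pole (AR/LPC) model to a speech frame, derive its cepstrum, and compare it with a cepstrum that also reflects spectral zeros. The paper's index compares AR-model and ARMA-model cepstra; as a simplification, the sketch below substitutes the signal's full real cepstrum (which, like an ARMA model, captures zeros) for the ARMA-derived cepstrum. All function names, the model order, and the number of cepstrum coefficients are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def levinson_ar(x, order):
    """Fit AR (all-pole/LPC) coefficients by the autocorrelation
    method with Levinson-Durbin recursion. Returns [1, a1, ..., ap]."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1]         # update a[1..i-1]
        a[i] = k
        err *= (1.0 - k * k)
    return a

def ar_cepstrum(a, n_ceps):
    """Cepstrum of the all-pole model H(z) = 1/A(z) via the
    standard LPC-to-cepstrum recursion (c0/gain term omitted)."""
    p = len(a) - 1
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        s = a[n] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                s += (k / n) * c[k] * a[n - k]
        c[n] = -s
    return c[1:]

def real_cepstrum(x, n_ceps, nfft=1024):
    """Real cepstrum of the signal itself; its log spectrum
    contains the zeros an all-pole model cannot represent."""
    logmag = np.log(np.abs(np.fft.rfft(x, nfft)) + 1e-12)
    return np.fft.irfft(logmag, nfft)[1:n_ceps + 1]

def distance_index(x, order=12, n_ceps=30):
    """Euclidean distance between the two cepstrum sequences;
    a stand-in for the paper's distance index (DI)."""
    c_ar = ar_cepstrum(levinson_ar(x, order), n_ceps)
    c_full = real_cepstrum(x, n_ceps)
    return float(np.linalg.norm(c_ar - c_full))
```

The intuition, following the abstract, is that for normal speech an all-pole model fits well, so the two cepstra agree and the index is small; for hypernasal speech the nasal side branch introduces zeros, the all-pole fit degrades, and the index grows.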
Figure 1: Simple model of the human vocal tract
Figure 2: Flow chart of the hypernasality detection method
Table: P values for different parameters of our method
Figure 3: Boxplot of DI for subjects with cleft palate and normal subjects, using 120 normalized cepstrum coefficients and an autoregressive moving average model with two zeros (left) and five zeros (right)
Figure 4: Mean of DI for each subject, using 120 normalized cepstrum coefficients and an autoregressive moving average model with two zeros
Figure 5: Mean of DI for each subject, using 120 normalized cepstrum coefficients and an autoregressive moving average model with five zeros
Table: Confusion matrix for utterance classification
Table: Confusion matrix for subject classification
Table: Result of the classification on utterances (given in %)
Table: Result of the classification on subjects (given in %)
Table: Result of the classification on utterances, LPC method (given in %)
Table: Result of the classification on subjects, LPC method (given in %)