Abstract
Many studies of speech perception assess the intelligibility of spoken sentence stimuli by means of transcription tasks ('type out what you hear'). The intelligibility of a given stimulus is then often expressed in terms of percentage of words correctly reported from the target sentence. Yet scoring the participants' raw responses for words correctly identified from the target sentence is a time-consuming task, and hence resource-intensive. Moreover, there is no consensus among speech scientists about what specific protocol to use for the human scoring, limiting the reliability of human scores. The present paper evaluates various forms of fuzzy string matching between participants' responses and target sentences, as automated metrics of listener transcript accuracy. We demonstrate that one particular metric, the token sort ratio, is a consistent, highly efficient, and accurate metric for automated assessment of listener transcripts, as evidenced by high correlations with human-generated scores (best correlation: r = 0.940) and a strong relationship to acoustic markers of speech intelligibility. Thus, fuzzy string matching provides a practical tool for assessment of listener transcript accuracy in large-scale speech intelligibility studies. See https://tokensortratio.netlify.app for an online implementation.
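The token sort ratio described in the abstract can be sketched in a few lines: lowercase both strings, strip punctuation, sort the word tokens alphabetically (so word order no longer matters), and compute a similarity ratio between the re-joined strings. A minimal sketch, using Python's standard-library `difflib.SequenceMatcher` for the ratio step (the FuzzyWuzzy/RapidFuzz implementations referenced by this line of work use a Levenshtein-based ratio instead, so scores can differ slightly):

```python
import difflib
import re


def token_sort_ratio(response: str, target: str) -> float:
    """Order-insensitive similarity between two transcripts, 0-100.

    Tokens are lowercased, stripped of punctuation, and sorted
    alphabetically before the string comparison, so 'sat the cat'
    matches 'the cat sat' perfectly.
    """
    def normalize(s: str) -> str:
        tokens = re.findall(r"[a-z0-9']+", s.lower())
        return " ".join(sorted(tokens))

    a, b = normalize(response), normalize(target)
    return 100.0 * difflib.SequenceMatcher(None, a, b).ratio()


# Word order differs, but the sorted-token strings are identical:
print(token_sort_ratio("the cat sat", "Sat the cat!"))  # 100.0
```

Because sorting discards word order, the metric is robust to listeners reporting words in a different order than the target sentence, while still penalizing missing or misheard words.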
Keywords: Automated assessment; Fuzzy string matching; Speech intelligibility; Token sort ratio; Transcription accuracy
Year: 2021 PMID: 33694079 PMCID: PMC8516752 DOI: 10.3758/s13428-021-01542-4
Source DB: PubMed Journal: Behav Res Methods ISSN: 1554-351X
Examples of various metrics of participants’ accuracy in speech intelligibility tasks
PWC = percentage words correct; LS = Levenshtein distance; J = Jaro distance; TSR = token sort ratio. PWC is calculated as a percentage (number of shared words divided by the total number of words in the target, multiplied by 100), allowing for misspellings. Autoscore values are given as percentages using the default settings
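The PWC formula above (shared words divided by the number of words in the target, times 100) can be sketched as follows. Note this is a simplified illustration: the human scoring protocol described in the note also credits misspellings, which this exact-match sketch does not attempt to handle:

```python
from collections import Counter


def pwc(response: str, target: str) -> float:
    """Percentage words correct: count word tokens shared between
    response and target (ignoring order, respecting repetitions),
    divide by the number of words in the target, multiply by 100."""
    resp = Counter(response.lower().split())
    targ = Counter(target.lower().split())
    shared = sum(min(resp[word], count) for word, count in targ.items())
    return 100.0 * shared / sum(targ.values())


# 3 of the 4 target words were reported:
print(pwc("the cat sat", "the cat sat down"))  # 75.0
```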
Correlations (Pearson’s r) of automated metrics of listener transcripts with human PWC scores. PWC = percentage words correct; LS = Levenshtein distance; J = Jaro distance; TSR = token sort ratio
| Dataset | LS ~ PWC | J ~ PWC | TSR ~ PWC | Autoscore ~ PWC |
|---|---|---|---|---|
| A: ‘cocktail party’ listening | −0.803 | −0.808 | 0.893 | 0.854 |
| B: speech-in-noise | −0.790 | −0.808 | 0.940 | 0.929 |
| Overall | −0.790 | −0.803 | 0.922 | 0.898 |
Model comparisons for predicting speech type (plain vs. Lombard). PWC = percentage words correct; LS = Levenshtein distance; J = Jaro distance; TSR = token sort ratio; df = degrees of freedom. Log-likelihood values closer to zero demonstrate better fit to the data
| Model comparison | Log-likelihood | χ² | df | p |
|---|---|---|---|---|
| Null model | −2603.5 | | | |
| PWC vs. null model | −2399.6 | 407.74 | 5 | < 0.001 |
| LS vs. null model | −2495.6 | 215.69 | 5 | < 0.001 |
| J vs. null model | −2446.4 | 314.04 | 5 | < 0.001 |
| TSR vs. null model | −2386.7 | 433.51 | 5 | < 0.001 |
| Autoscore vs. null model | −2400.2 | 406.46 | 5 | < 0.001 |
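The test statistics in the table follow from a standard likelihood-ratio test: twice the difference between the full model's and the null model's log-likelihoods, compared against a chi-squared distribution with df equal to the number of added parameters. A minimal sketch of the statistic (recomputing it from the rounded log-likelihoods reported above gives 433.6 for TSR, agreeing with the table's 433.51 up to rounding):

```python
def lrt_statistic(ll_null: float, ll_model: float) -> float:
    """Likelihood-ratio test statistic: 2 * (LL_model - LL_null).

    Under the null hypothesis this is asymptotically chi-squared
    distributed with df = number of extra parameters in the model.
    """
    return 2.0 * (ll_model - ll_null)


# TSR vs. null model, from the rounded log-likelihoods in the table:
print(lrt_statistic(-2603.5, -2386.7))  # ≈ 433.6 (table: 433.51)
```

Larger statistics (equivalently, log-likelihoods closer to zero) indicate that the metric explains more of the variation in speech type, which is why TSR, with the largest χ², provides the best fit.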
Model comparisons for predicting individual talkers’ normalized amplitude modulation power. PWC = percentage words correct; LS = Levenshtein distance; J = Jaro distance; TSR = token sort ratio; df = degrees of freedom. Log-likelihood values closer to zero demonstrate better fit to the data
| Model comparison | Log-likelihood | χ² | df | p |
|---|---|---|---|---|
| Null model | −4441.4 | | | |
| PWC vs. null model | −4166.9 | 549.04 | 5 | < 0.001 |
| LS vs. null model | −4307.4 | 267.98 | 5 | < 0.001 |
| J vs. null model | −4252.3 | 378.24 | 5 | < 0.001 |
| TSR vs. null model | −4142.2 | 598.55 | 5 | < 0.001 |
| Autoscore vs. null model | −4180.2 | 522.50 | 5 | < 0.001 |