| Literature DB >> 30384803 |
Raul Sanchez Lopez, Federica Bianchi, Michal Fereczkowski, Sébastien Santurette, Torsten Dau.
Abstract
Pure-tone audiometry still represents the main measure to characterize individual hearing loss and the basis for hearing-aid fitting. However, the perceptual consequences of hearing loss are typically associated not only with a loss of sensitivity but also with a loss of clarity that is not captured by the audiogram. A detailed characterization of a hearing loss may be complex and needs to be simplified to efficiently explore the specific compensation needs of the individual listener. Here, it is hypothesized that any listener's hearing profile can be characterized along two dimensions of distortion: Type I and Type II. While Type I can be linked to factors affecting audibility, Type II reflects non-audibility-related distortions. To test this hypothesis, the individual performance data from two previous studies were reanalyzed using an unsupervised-learning technique to identify extreme patterns in the data, thus forming the basis for different auditory profiles. Next, a decision tree was determined to classify the listeners into one of the profiles. The analysis provides evidence for the existence of four profiles in the data. The most significant predictors for profile identification were related to binaural processing, auditory nonlinearity, and speech-in-noise perception. This approach could be valuable for analyzing other data sets to select the most relevant tests for auditory profiling and propose more efficient hearing-deficit compensation strategies.
Keywords: auditory profile; data-driven; hearing loss; suprathreshold auditory perception
Year: 2018 PMID: 30384803 PMCID: PMC6236853 DOI: 10.1177/2331216518807400
Source DB: PubMed Journal: Trends Hear ISSN: 2331-2165 Impact factor: 3.293
Figure 1. Sketch of the hypothesis. Hearing deficits arise from two independent types of distortions. Distortion Type I: distortions that accompany the loss of sensitivity. Distortion Type II: distortions that do not covary with sensitivity loss. Profile A: low distortion for both types. Profile B: high Type I distortion and low Type II distortion. Profile C: high distortion for both types. Profile D: low Type I distortion and high Type II distortion. NH = normal hearing.
Figure 2. Sketch of the method considered in this study. The upper panel shows the unsupervised learning techniques applied to the whole data set. The bottom panel shows the supervised learning method, which uses the original data as the input and the identified profiles from the archetypal analysis as the output. PC = principal component.
Table 1. Results From the Dimensionality Reduction of the Two Data Sets. Left four columns: Study I; right four columns: Study II, Johannesen et al. (2016).

| Variable | Test | PC1 | PC2 | Variable | Test | PC1 | PC2 |
|---|---|---|---|---|---|---|---|
| HLLF | Hearing loss at low frequencies | 0.45 | −0.03 | HLHF | Hearing loss at high frequencies | 0.51 | 0.08 |
| HLHF | Hearing loss at high frequencies | 0.41 | −0.22 | OHC lossHF | Outer hair cell loss estimated at high frequencies | 0.55 | 0.05 |
| SRTQ | Speech reception threshold (SRT) in quiet | 0.46 | −0.01 | IHC lossHF | Inner hair cell loss estimated at high frequencies | 0.37 | −0.03 |
| SRTISTS | SRT in noise using international speech test signal | 0.47 | −0.17 | BM CompHF | Basilar membrane compression at high frequencies | 0.51 | −0.01 |
| DS | Word discrimination score | 0.33 | −0.24 | HLLF | Hearing loss at low frequencies | −0.00 | 0.62 |
| MCLLF | Most comfortable level at low frequencies | 0.14 | 0.46 | FMDT | Frequency modulation discrimination threshold | −0.02 | 0.42 |
| Bpdicho | BP dichotic condition | 0.20 | 0.53 | OHC lossLF | Outer hair cell loss estimated at low frequencies | 0.03 | 0.45 |
| Bptot | BP diotic + dichotic | 0.16 | 0.61 | IHC lossLF | Inner hair cell loss estimated at low frequencies | −0.11 | 0.45 |
Note. For each study, the table lists the variables most strongly correlated with PC1 (Distortion Type I, top four rows) and PC2 (Distortion Type II, bottom four rows), together with their loadings from the principal component analysis.
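As a purely illustrative sketch of the dimensionality-reduction step, the snippet below builds a synthetic data set with the same layout as Table 1 (eight variables, the first four driven by one latent distortion dimension and the last four by another) and computes PC1/PC2 loadings via an SVD-based principal component analysis. All data, scales, and seeds are assumptions for illustration, not the study's measurements.

```python
import numpy as np

# Synthetic sketch of the dimensionality-reduction step. The eight columns
# mimic the table layout: the first four variables load on one latent
# "distortion" dimension, the last four on another. Data and scales are
# illustrative assumptions only.
rng = np.random.default_rng(0)
n = 60
type1 = 2.0 * rng.normal(size=n)   # latent Distortion Type I (larger scale)
type2 = rng.normal(size=n)         # latent Distortion Type II
X = np.column_stack(
    [type1 + 0.3 * rng.normal(size=n) for _ in range(4)]
    + [type2 + 0.3 * rng.normal(size=n) for _ in range(4)]
)

Xc = X - X.mean(axis=0)                 # center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T[:, :2]                  # loadings of the 8 variables on PC1/PC2

# Each variable's largest-magnitude loading falls on "its" component,
# reproducing the block pattern of the table.
print(np.round(loadings, 2))
```

In this toy setup the first four rows of `loadings` are dominated by one principal component and the last four by the other, which is the pattern the table reports for the Type I and Type II variables.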
Figure 3. Archetypes (Artyp): extreme exemplars of the different patterns found in the data. (a) Normalized performance of each of the four archetypes from Study 1. (b) The same for Study 2. The variables are divided according to Table 1. The first four variables correspond to Distortion Type I and the remaining four to Distortion Type II. HLLF = hearing loss at low frequencies; HLHF = hearing loss at high frequencies; SRTQ = speech reception threshold (SRT) in quiet; SRTISTS = SRT in noise using international speech test signal; DS = word discrimination score; MCLLF = most comfortable level at low frequencies; Bpdicho = BP dichotic condition; Bptot = BP diotic + dichotic; OHC lossHF = outer hair cell loss estimated at high frequencies; IHC lossHF = inner hair cell loss estimated at high frequencies; BM CompHF = basilar membrane compression at high frequencies; FMDT = frequency modulation discrimination threshold; OHC lossLF = outer hair cell loss estimated at low frequencies; IHC lossLF = inner hair cell loss estimated at low frequencies.
Figure 4. Simplex plots for (a) Study 1 and (b) Study 2. Representation of the listeners in a two-dimensional space. The four archetypes are located at the corners, and the remaining observations are placed in the simplex plot depending on their similarity with the archetypes. HI = hearing impaired; NH = normal hearing; OD = obscure dysfunction.
Figure 5. Decision trees and confusion matrices of the classifiers from the analysis of both data sets. For each study, the resulting classifier has three nodes. The right branch corresponds to poor performance and the left branch to good performance. The accuracy of the classifier is shown as a confusion matrix in which actual and predicted classes are compared. HLHF = hearing loss at high frequencies; Bpdicho = BP dichotic condition; Bptot = BP diotic + dichotic; HLLF = hearing loss at low frequencies; OHC lossLF = outer hair cell loss estimated at low frequencies.