| Literature DB >> 33526004 |
Sarineh Keshishzadeh, Markus Garrett, Sarah Verhulst.
Abstract
Over the past decades, different types of auditory models have been developed to study the functioning of normal and impaired auditory processing. Several models can simulate frequency-dependent sensorineural hearing loss (SNHL) and can in this way be used to develop personalized audio-signal processing for hearing aids. However, to determine individualized SNHL profiles, we rely on indirect and noninvasive markers of cochlear and auditory-nerve (AN) damage. Our progressive knowledge of the functional aspects of different SNHL subtypes stresses the importance of incorporating them into the simulated SNHL profile, but has at the same time complicated the task of accomplishing this on the basis of noninvasive markers. In particular, different auditory-evoked potential (AEP) types can show a different sensitivity to outer-hair-cell (OHC), inner-hair-cell (IHC), or AN damage, but it is not clear which AEP-derived metric is best suited to develop personalized auditory models. This study investigates how simulated and recorded AEPs can be used to derive individual AN- or OHC-damage patterns and personalize auditory processing models. First, we individualized the cochlear model parameters using common methods of frequency-specific OHC-damage quantification, after which we simulated AEPs for different degrees of AN damage. Using a classification technique, we determined the recorded AEP metric that best predicted the simulated individualized cochlear synaptopathy profiles. We cross-validated our method using the data set at hand, but also applied the trained classifier to recorded AEPs from a new cohort to illustrate the generalizability of the method.
Keywords: auditory modeling; auditory-evoked potentials; cochlear synaptopathy; electrophysiology; envelope following response; individualized hearing-loss profile; sensorineural hearing loss
Year: 2021 PMID: 33526004 PMCID: PMC7871356 DOI: 10.1177/2331216520988406
Source DB: PubMed Journal: Trends Hear ISSN: 2331-2165 Impact factor: 3.293
Figure 1. Measured Audiograms and DPOAE Thresholds. A: Audiograms. B: DPOAE thresholds (DPTHs) of the participants in the first experiment. DPOAE = distortion-product otoacoustic emission; yNH = young normal-hearing; oNH = older normal-hearing; oHI = older hearing-impaired.
Figure 2. Comparison of Exemplary NH and HI RAM-EFRs and ABRs. A: RAM-EFR of a yNH subject (yNH15) and the corresponding noise floor (NF). Arrows indicate the peak-to-noise-floor magnitudes at the modulation frequency (120 Hz) and its following harmonics. B: RAM-EFR of an oHI subject (oHI12) and the corresponding NF. C: ABR of a yNH subject (yNH15). Arrows indicate the extracted wave-I and wave-V amplitudes and latencies. D: ABR of an oHI subject (oHI12). NH = normal-hearing; EFR = envelope-following response; HI = hearing-impaired; ABR = auditory brainstem response.
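The RAM-EFR magnitude metric described in this caption (peak-to-noise-floor magnitudes summed at 120 Hz and the following harmonics) can be sketched on a synthetic signal. Everything below — the sampling rate, the synthetic waveform, and the neighboring-bin noise-floor estimator — is an illustrative assumption, not the paper's exact implementation:

```python
import numpy as np

fs = 8000                      # sampling rate in Hz (illustrative)
f0 = 120                       # modulation frequency from the caption
t = np.arange(fs) / fs         # 1-s epoch -> 1-Hz FFT resolution

# Synthetic "EFR": energy at 120 Hz and its next three harmonics plus noise.
rng = np.random.default_rng(0)
amps = [1.0, 0.5, 0.25, 0.125]
x = sum(a * np.sin(2 * np.pi * k * f0 * t) for k, a in zip(range(1, 5), amps))
x = x + 0.05 * rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(x)) / t.size      # single-sided magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def peak_to_noisefloor(f):
    """Peak magnitude at f minus the mean of nearby bins (noise-floor estimate)."""
    i = int(np.argmin(np.abs(freqs - f)))
    floor = np.r_[spec[i - 6:i - 1], spec[i + 2:i + 7]].mean()
    return max(spec[i] - floor, 0.0)

# Sum the peak-to-noise-floor magnitudes at f0 and its following harmonics.
efr_metric = sum(peak_to_noisefloor(k * f0) for k in range(1, 5))
```

With exact-bin sinusoids, each spectral peak is half the component amplitude, so the metric approaches the sum of those peaks once the small noise floor is subtracted.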
Extracted AEP-Metric Definitions and Corresponding Standard Deviations.

The 13 extracted metrics (defining formulas lost in extraction) are: the RAM-EFR (rectangular-wave amplitude-modulated EFR) magnitude; the ABR wave-I and wave-V amplitudes at 70 and 100 dB peSPL (ABR-70, ABR-100); the ABR wave-I and wave-V latencies at 70 and 100 dB peSPL; and the ABR wave-I and wave-V amplitude-growth (w-I-growth, w-V-growth) and latency-growth metrics.

Note. EFR = envelope-following response; ABR = auditory brainstem response. The last column of the original table gave the standard deviation of each bootstrapped metric.
Figure 3. Auditory-Model Individualization. The block diagram on the left depicts the stages of the employed auditory-periphery model (Verhulst et al., 2018). Experimentally measured audiometric thresholds were fed to the transmission-line cochlear model to adjust the poles of the BM admittance function. The box in the top-right corner shows the nonuniform AN-population distribution across CF for the different simulated CS profiles. The profile without CS is shown in dark brown (indicated with N), and higher degrees of CS are shown according to the color map. BM = basilar membrane; CF = characteristic frequency; HSR = high-spontaneous-rate; MSR = medium-spontaneous-rate; LSR = low-spontaneous-rate; EFR = envelope-following response; FFT = fast Fourier transform; CS = cochlear synaptopathy; AN = auditory nerve; DPOAE = distortion-product otoacoustic emission.
Figure 4. A Comparison Between the Measured and Simulated AudTHs and DPTHs. The average (solid) and standard deviation (shaded area) of the measured (gray) and simulated (red) AudTHs and DPTHs are shown in Panels A and B, respectively. A comparison between the sAudTH and mAudTH (s = simulated, m = measured) of a yNH and an oHI listener is shown in Panel C. Panel D compares the sDPTH (dotted) and mDPTH (solid) of the same yNH and oHI listeners as in Panel C. Frequency-specific group-averaged absolute prediction errors of AudTH and DPTH are shown in Panels E and F, respectively (yNH: blue, oNH: black, oHI: orange). AudTH = audiometric threshold; DPTH = DPOAE threshold; yNH = young normal-hearing; oNH = older normal-hearing; oHI = older hearing-impaired.
Figure 5. Neural-Network-Based Cochlear-Model Individualization Using Measured and Simulated DPTHs. A: Random cochlear-filter poles are generated and the corresponding DPTHs are simulated with the TL model (sDPTH). B: The normalized sDPTHs at four frequencies are fed to the neural network as inputs; the random pole values generated in (A) serve as the training targets. C: After preprocessing, measured DPTHs (mDPTHs) are fed into the trained neural network, which predicts individualized cochlear-filter pole functions. TL = transmission line; DPTH = DPOAE threshold; DPOAE = distortion-product otoacoustic emission.
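The three steps in Figure 5 (random poles, simulated thresholds, a network trained to invert the mapping, then measured thresholds in and individual poles out) could be sketched as below. The linear `simulate`-style stand-in, the pole range, the measured values, and the network size are all hypothetical placeholders; the real pipeline runs every pole set through the transmission-line model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# A: random cochlear-filter pole values (range is a stand-in, not the model's).
n_train, n_freq = 2000, 4
poles = rng.uniform(0.06, 0.3, size=(n_train, n_freq))

# Hypothetical stand-in for the TL-model DPOAE-threshold simulation; the real
# pipeline would run each pole set through the transmission-line model instead.
sdpth = 120 * poles + rng.normal(0.0, 0.5, size=poles.shape)

# B: min-max normalize the simulated thresholds and train sDPTH -> poles.
lo, hi = sdpth.min(axis=0), sdpth.max(axis=0)

def norm(d):
    return (d - lo) / (hi - lo)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(norm(sdpth), poles)

# C: preprocessed measured DPTHs in, individualized pole values out.
mdpth = np.array([[10.0, 15.0, 25.0, 30.0]])   # hypothetical measured thresholds (dB)
pred_poles = net.predict(norm(mdpth))
```

The design point is that the network is trained entirely on simulations, so no ground-truth poles are ever needed for real listeners.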
Figure 6. The Forward-Backward Classification Method. A: Forward classification: classifier (1) is trained with the individualized simulated AEP-derived metrics for the six CS profiles and tested with the measured AEP-derived metrics. The predicted labels for the study participants are entered into block (B). B: The backward classification trains classifier (2) on the measured AEP-derived metrics together with the labels predicted by the forward classification. Classifier (2) is then tested on the simulated metrics, and the resulting labels are used to assess the classifier performance. AEP = auditory-evoked potential; CS = cochlear synaptopathy.
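The forward-backward loop can be sketched on synthetic data. Here a one-dimensional RAM-EFR-like metric, the noise levels, and the choice of a kNN classifier are all assumptions made for illustration; the paper's actual classifier and feature sets are not restated here:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
profiles = np.array(["N", "A", "B", "C", "D", "E"])   # six CS profiles

# Simulated, individualized AEP metric (a single hypothetical RAM-EFR-like
# value) for every subject under every CS profile; CS shrinks the metric.
n_subj = 35
base = rng.uniform(0.8, 1.2, n_subj)
sim = np.concatenate([base * (1 - 0.12 * d) for d in range(6)])
sim_labels = np.repeat(profiles, n_subj)

# Measured metrics: each subject has an unknown true CS degree.
true_deg = rng.integers(0, 6, n_subj)
meas = base * (1 - 0.12 * true_deg) + rng.normal(0.0, 0.02, n_subj)

# Forward step: classifier (1) trains on simulations, labels the recordings.
clf1 = KNeighborsClassifier(n_neighbors=3).fit(sim.reshape(-1, 1), sim_labels)
pred = clf1.predict(meas.reshape(-1, 1))

# Backward step: classifier (2) trains on measured metrics + predicted labels,
# then is tested on the simulated metrics to score the whole procedure.
clf2 = KNeighborsClassifier(n_neighbors=3).fit(meas.reshape(-1, 1), pred)
accuracy = float((clf2.predict(sim.reshape(-1, 1)) == sim_labels).mean())
```

The backward pass matters because the true CS profiles of real listeners are unknown: agreement between the two directions is the only available consistency check.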
Combination of Metrics With the Highest Mean Accuracy (acc) Values for Each Number i of Combined Metrics. The reported results are based on the AudTH-based cochlear-model individualization method.

| Involved metrics (i) | Involved subjects | Best combination of metrics | acc (%) | σ |
|---|---|---|---|---|
| 1 | 35 | RAM-EFR | 68.57 | 2.95 |
| 2 | 35 | RAM-EFR, | 64.76 | 1.73 |
| 3 | 35 | RAM-EFR, | 53.33 | 7.86 |
| 4 | 35 | RAM-EFR, | 51.90 | 9.28 |
| 5 | 35 | RAM-EFR, | 52.86 | 8.69 |
| 6 | 35 | RAM-EFR, | 51.43 | 6.97 |
| 7 | 35 | RAM-EFR, | 45.24 | 6.79 |
| 8 | 35 | RAM-EFR, | 45.24 | 6.59 |
| 9 | 35 | RAM-EFR, w-V-growth, w-I-growth, | 36.19 | 7.11 |
| 10 | 35 | RAM-EFR, w-V-growth, w-I-growth, | 32.86 | 6.67 |
| 11 | 35 | RAM-EFR, w-V-growth, w-I-growth, | 27.62 | 6.49 |
| 12 | 35 | RAM-EFR, w-V-growth, w-I-growth, | 18.10 | 6.65 |
| 13 | 35 | RAM-EFR, w-V-growth, w-I-growth, | 17.14 | 6.75 |

Note. EFR = envelope-following response. The standard deviations of the obtained accuracies are shown in the σ column.
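The subset search behind tables like this one (the best metric combination of each size i, scored by a mean accuracy and its standard deviation) might be sketched as follows. The feature matrix, labels, kNN classifier, and fold scheme are all synthetic assumptions, with metric 0 playing the role of the informative RAM-EFR:

```python
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)

# Hypothetical data: 35 subjects x 13 AEP metrics, labels standing in for the
# six predicted CS profiles. Metric 0 is made strongly informative.
X = rng.standard_normal((35, 13))
y = rng.integers(0, 6, 35)
X[:, 0] += 3.0 * y

def mean_accuracy(cols):
    """Cross-validated mean accuracy and its standard deviation for one subset."""
    clf = KNeighborsClassifier(n_neighbors=3)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X[:, list(cols)], y, cv=cv)
    return scores.mean(), scores.std()

# For each combination size i, keep the best-scoring subset of metrics
# (sizes capped at 2 here; the tables above go up to all 13 metrics).
best = {i: max(combinations(range(13), i), key=lambda c: mean_accuracy(c)[0])
        for i in (1, 2)}
```

Because the uninformative metrics only add noise, accuracy tends to fall as more of them are forced into the combination — the same trend visible down the tables.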
Combination of Metrics With the Highest Mean Accuracy (acc) Values for Each Number i of Combined Metrics. The reported results are based on the DPTH-based cochlear-model individualization method.

| Involved metrics (i) | Involved subjects | Best combination of metrics | acc (%) | σ |
|---|---|---|---|---|
| 1 | 35 | RAM-EFR | 83.81 | 2.66 |
| 2 | 35 | RAM-EFR, | 58.57 | 1.34 |
| 3 | 35 | RAM-EFR, | 54.29 | 8.34 |
| 4 | 35 | RAM-EFR, | 61.90 | 8.22 |
| 5 | 35 | RAM-EFR, | 58.10 | 8.90 |
| 6 | 35 | RAM-EFR, | 48.10 | 7.40 |
| 7 | 35 | RAM-EFR, | 40.95 | 6.96 |
| 8 | 35 | RAM-EFR, | 35.71 | 7.12 |
| 9 | 35 | RAM-EFR, w-I-growth, w-V-growth, | 34.29 | 7.30 |
| 10 | 35 | RAM-EFR, w-I-growth, w-V-growth, | 29.52 | 6.53 |
| 11 | 35 | RAM-EFR, w-I-growth, w-V-growth, | 17.14 | 6.13 |
| 12 | 35 | RAM-EFR, w-I-growth, w-V-growth, | 16.67 | 2.63 |
| 13 | 35 | RAM-EFR, w-V-growth, w-I-growth, | 16.67 | 2.93 |

Note. EFR = envelope-following response. The standard deviations of the obtained accuracies are shown in the σ column.
Figure 7. Confusion Tables at the Subgroup and Group Levels for Both the AudTH- and DPTH-Based Cochlear-Model Individualization Methods. The tables summarize the accuracy of classifier (2) in Figure 6B for each subgroup as well as for all groups together. AudTH = audiometric threshold; DPTH = DPOAE threshold.
Predicted Individual CS Profiles Obtained From the AudTH- and DPTH-Based Cochlear Individualization Methods, Based on the RAM-EFR Metric.

| Group | No. | AudTH | AudTH (ind.) | DPTH | DPTH (ind.) |
|---|---|---|---|---|---|
| yNH | 1 | C | B | B | B |
| | 2 | A | A | A | B |
| | 5 | N | N | N | N |
| | 7 | N | N | N | N |
| | 8 | N | N | N | N |
| | 9 | N | N | N | N |
| | 10 | N | N | N | A |
| | 11 | A | B | B | B |
| | 12 | N | N | N | A |
| | 13 | A | A | A | A |
| | 14 | N | N | N | N |
| | 15 | N | N | N | N |
| oNH | 1 | D | D | C | D |
| | 3 | E | E | E | E |
| | 4 | D | E | D | D |
| | 6 | D | D | D | D |
| | 7 | C | D | D | D |
| | 8 | E | E | E | E |
| | 9 | N | A | N | A |
| | 10 | B | B | B | B |
| | 11 | C | D | D | D |
| | 12 | N | N | N | N |
| | 13 | E | E | E | E |
| | 14 | C | D | C | C |
| oHI | 1 | E | E | E | E |
| | 2 | E | D | E | D |
| | 3 | E | E | E | E |
| | 4 | E | E | E | E |
| | 5 | E | D | E | E |
| | 7 | E | D | E | E |
| | 8 | E | E | E | E |
| | 9 | E | E | E | E |
| | 10 | E | E | E | E |
| | 11 | E | E | E | E |
| | 12 | E | E | E | E |

Note. The "(ind.)" columns list the CS profiles predicted by designing individualized classifiers based on the RAM-EFR metric. CS = cochlear synaptopathy; AudTH = audiometric threshold; DPTH = DPOAE threshold.
Figure 8. A Comparison Between Simulated and Measured AEPs for a yNH Subject (yNH15). This subject was predicted to have a normal (N) CS profile, that is, without CS. A: Simulated RAM-EFR spectra for the six CS profiles. The sum of the drawn arrows yields the RAM-EFR magnitude metric. B: Simulated ABR wave-I responses to 70 and 100 dB-peSPL clicks; waveforms were shifted by 1 ms to match the experimental data. C: Simulated ABR wave-V responses to 70 and 100 dB-peSPL clicks; waveforms were shifted by 3 ms to match the experimental data. The arrows in (B) and (C) indicate the extracted metrics. D: Measured RAM-EFR of the same listener (yNH15); the arrows indicate the peak-to-noise-floor values. As in (A), the measured RAM-EFR metric was calculated by summing the arrow amplitudes. E: Measured ABR waveform to 70 dB-peSPL clicks. F: Measured ABR waveform to 100 dB-peSPL clicks. The arrows in (E) and (F) indicate the extracted metrics. The simulated waveforms shown were predicted with the DPTH-based cochlear individualization method. The table gives the exact EFR and ABR metric values derived from the recordings, together with the predicted CS profile, "N," of the same listener. AEP = auditory-evoked potential; EFR = envelope-following response; ABR = auditory brainstem response; CS = cochlear synaptopathy.
Figure 9. Implementation of the Validation Method. Measured RAM-EFRs (M) from the first cohort, with the labels predicted in Figure 6, are scaled between zero and one to train a kNN classifier. The trained classifier is then tested with scaled RAM-EFRs recorded from a second cohort of yNH listeners. The bar plot shows the predicted CS profiles for the second-cohort listeners; the CS-profile labels match those defined in Figure 3. kNN = k-nearest neighbors; EFR = envelope-following response; CS = cochlear synaptopathy.
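The validation step in Figure 9 (scale the labeled first-cohort RAM-EFRs to [0, 1], train a kNN, then classify the scaled second-cohort recordings) can be sketched as below. The magnitudes, label assignment, and cohort sizes are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# Measured RAM-EFRs (M) of the 35 first-cohort subjects, paired with the CS
# labels predicted by the forward-backward step; both are stand-ins here.
M = rng.uniform(50.0, 300.0, 35)                       # hypothetical magnitudes (nV)
labels = np.array(list("NABCDE" * 5) + list("NABCD"))  # placeholder CS labels

# Scale between zero and one, then train the kNN classifier, as in the figure.
def scale(x):
    return (x - M.min()) / (M.max() - M.min())

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(scale(M).reshape(-1, 1), labels)

# Apply the trained classifier to scaled RAM-EFRs from the second (yNH) cohort.
M2 = rng.uniform(80.0, 280.0, 15)
pred2 = knn.predict(scale(M2).reshape(-1, 1))
```

Note that the second cohort is scaled with the first cohort's minimum and maximum, so both data sets live on the same normalized axis before classification.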