Emily B. J. Coffey, Isabelle Arseneau-Bruneau, Xiaochen Zhang, Robert J. Zatorre.
Abstract
The ability to segregate target sounds in noisy backgrounds is relevant both to neuroscience and to clinical applications. Recent research suggests that hearing-in-noise (HIN) problems are solved using combinations of sub-skills that are applied according to task demand and information availability. While evidence is accumulating for a musician advantage in HIN, the exact nature of the reported training effect is not fully understood. Existing HIN tests focus on tasks requiring understanding of speech in the presence of competing sound. Because visual, spatial, and predictive cues are not systematically considered in these tasks, few tools exist to investigate the most relevant components of the cognitive processes involved in stream segregation. We present the Music-In-Noise Task (MINT) as a flexible tool to expand HIN measures beyond speech perception, and to address research questions pertaining to the relative contributions of HIN sub-skills, inter-individual differences in their use, and their neural correlates. The MINT uses a match-mismatch trial design: in four conditions (Baseline, Rhythm, Spatial, and Visual), subjects first hear a short instrumental musical excerpt embedded in an informational masker of "multi-music" noise, followed by either a matching or a scrambled repetition of the target musical excerpt presented in silence; the four conditions differ according to the presence or absence of additional cues. In a fifth condition (Prediction), subjects hear the excerpt in silence first, which helps listeners anticipate the incoming information when the target is then embedded in masking sound. Data from samples of young adults show that the MINT has good reliability and internal consistency, and demonstrate selective benefits of musicianship in the Prediction, Rhythm, and Visual subtasks. We also report a performance benefit of multilingualism that is separable from that of musicianship.
Average MINT scores were correlated with scores on a sentence-in-noise perception task, but only accounted for a relatively small percentage of the variance, indicating that the MINT is sensitive to additional factors and can provide a complement and extension of speech-based tests for studying stream segregation. A customizable version of the MINT is made available for use and extension by the scientific community.
Keywords: auditory stream segregation; auditory working memory; hearing-in-noise; interindividual variability; multilingualism; musical training; neuroplasticity; skill assessment tool
Year: 2019 PMID: 30930734 PMCID: PMC6427094 DOI: 10.3389/fnins.2019.00199
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1. Spectra of the target (red) and multi-music masking sound (gray), demonstrating that the multi-music masking sound has broad spectral content, extending above and below that of the target stimuli (the target spectrum shown here is averaged across all Baseline condition melodies).
FIGURE 2. The Music-In-Noise Task (MINT) comprised five conditions: (A) Baseline, (B) Rhythm, (C) Prediction, (D) Spatial, and (E) Visual. In the Spatial condition, the time course resembled that of (A), and an icon directed the listener’s attention, before sound onset, to the side to which they should attend. In the Visual condition, the time course resembled that of (A), and a scrolling graphic representation provided timing and approximate pitch cues (an example frame is shown). The given examples schematically represent mismatch trials.
Percentile scores based on the fitted normal distribution.
| Percentile | MINT average (proportion correct) | Percentile | MINT average (proportion correct) |
|---|---|---|---|
| 5 | 0.52 | 50 | 0.78 |
| 10 | 0.55 | 55 | 0.81 |
| 15 | 0.58 | 60 | 0.84 |
| 20 | 0.61 | 65 | 0.86 |
| 25 | 0.64 | 70 | 0.89 |
| 30 | 0.67 | 75 | 0.92 |
| 35 | 0.70 | 80 | 0.95 |
| 40 | 0.72 | 85 | 0.98 |
| 45 | 0.75 | ≥90 | 1 |
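The percentile mapping in the table above can be sketched as a lookup on the cumulative distribution function of the fitted normal. A minimal sketch follows; note that the mean and standard deviation used here are illustrative placeholders inferred roughly from the table, not the fit parameters reported in the paper.

```python
from math import erf, sqrt

def mint_percentile(score, mean=0.78, sd=0.16):
    """Map a MINT average (proportion correct) to a percentile under a
    normal distribution. The default mean/sd are hypothetical values for
    illustration only, not the published fit parameters."""
    z = (score - mean) / sd
    # Normal CDF via the error function, scaled to a 0-100 percentile.
    return min(100.0, 50.0 * (1.0 + erf(z / sqrt(2.0))))
```

For example, a score equal to the assumed mean maps to the 50th percentile, and scores near ceiling map to percentiles at or above 90, consistent with the shape of the table.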
FIGURE 3. Music-In-Noise Task average score vs. (A) fine pitch discrimination ability, and (B) auditory working memory (AWM) performance. Only musicians and non-musicians are included in this illustration in order to visually emphasize group differences.
FIGURE 4. Musical training effects as a function of the signal-to-noise ratio (SNR) of MINT stimuli, averaged across all MINT conditions. Musicians had significantly better scores overall. Error bars indicate the 95% confidence interval.
FIGURE 5. Musical training effects on MINT and HINT scores. Musicians demonstrated a perceptual advantage when both linguistic and musical stimuli were presented in noisy conditions (∗∗∗p < 0.001, ∗∗p < 0.01). Error bars indicate the 95% confidence interval.
FIGURE 6. Musical training effects on MINT scores by subtask and grand average. Error bars indicate the 95% confidence interval.
FIGURE 7. Music-In-Noise Task performance vs. cumulative practice hours, with linguistic experience indicated by color and symbol. Irrespective of musical training, multilinguals had superior performance on the MINT.