Attentional resources contribute to the perceptual learning of talker idiosyncrasies in audiovisual speech.

Alexandra Jesse, Elina Kaplan.

Abstract

To recognize audiovisual speech, listeners evaluate and combine information obtained from the auditory and visual modalities. Listeners also use information from one modality to adjust their phonetic categories to a talker's idiosyncrasy encountered in the other modality. In this study, we examined whether the outcome of this cross-modal recalibration relies on attentional resources. In Experiment 1, a standard recalibration experiment, participants heard an ambiguous sound, disambiguated by the accompanying visual speech as either /p/ or /t/. Participants' primary task was to attend to the audiovisual speech while either monitoring a tone sequence for a target tone or ignoring the tones. Listeners subsequently categorized the steps of an auditory /p/-/t/ continuum more often in line with their exposure. The aftereffect of phonetic recalibration was reduced, but not eliminated, by attentional load during exposure. In Experiment 2, participants saw an ambiguous visual speech gesture that was disambiguated auditorily as either /p/ or /t/. At test, listeners categorized the steps of a visual /p/-/t/ continuum more often in line with the prior exposure. Imposing load in the auditory modality during exposure did not reduce the aftereffect of this type of cross-modal phonetic recalibration. Together, these results suggest that auditory attentional resources are needed for the processing of auditory speech and/or for the shifting of auditory phonetic category boundaries. Listeners thus need to dedicate attentional resources to accommodate talker idiosyncrasies in audiovisual speech.

Keywords:  Multisensory processing; Perceptual learning; Speech perception

Year:  2019        PMID: 30684204     DOI: 10.3758/s13414-018-01651-x

Source DB:  PubMed          Journal:  Atten Percept Psychophys        ISSN: 1943-3921            Impact factor:   2.199

