
The effects of high variability training on voice identity learning.

Nadine Lavan, Sarah Knight, Valerie Hazan, Carolyn McGettigan.

Abstract

High variability training has been shown to benefit the learning of new face identities. In three experiments, we investigated whether this is also the case for voice identity learning. In Experiment 1a, we contrasted high variability training sets (which included stimuli extracted from a number of different recording sessions, speaking environments and speaking styles) with low variability stimulus sets that included only a single speaking style (read speech) extracted from one recording session (see Ritchie & Burton, 2017, for faces). Listeners were tested on an old/new recognition task using read sentences (i.e. the test materials fully overlapped with the low variability training stimuli), and we found a high variability disadvantage. In Experiment 1b, listeners were trained in a similar way; however, there was now no overlap in speaking style or recording session between the training sets and the test stimuli. Here, we found a high variability advantage. In Experiment 2, variability was manipulated in terms of the number of unique items rather than the number of unique speaking styles. Here, we contrasted the high variability training sets used in Experiment 1a with low variability training sets that included the same breadth of styles but fewer unique items; instead, individual items were repeated (see Murphy, Ipser, Gaigg, & Cook, 2015, for faces). We found only weak evidence for a high variability advantage, which could be explained by stimulus-specific effects. We propose that high variability advantages may be particularly pronounced when listeners are required to generalise from trained stimuli to different-sounding, previously unheard stimuli. We discuss these findings in the context of mechanisms thought to underpin advantages for high variability training.
Copyright © 2019 Elsevier B.V. All rights reserved.

Keywords:  High variability training; Person perception; Voice identity; Voice learning

Year:  2019        PMID: 31323377     DOI: 10.1016/j.cognition.2019.104026

Source DB:  PubMed          Journal:  Cognition        ISSN: 0010-0277


  2 in total

1.  An online headphone screening test based on dichotic pitch.

Authors:  Alice E Milne; Roberta Bianco; Katarina C Poole; Sijia Zhao; Andrew J Oxenham; Alexander J Billig; Maria Chait
Journal:  Behav Res Methods       Date:  2020-12-09

2.  Unimodal and cross-modal identity judgements using an audio-visual sorting task: Evidence for independent processing of faces and voices.

Authors:  Nadine Lavan; Harriet M J Smith; Carolyn McGettigan
Journal:  Mem Cognit       Date:  2021-07-12
