
Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users.

Karen Chan Barrett1, Monita Chatterjee2, Meredith T Caldwell3, Mickael L D Deroche4, Patpong Jiradejvong1, Aditya M Kulkarni2, Charles J Limb1.   

Abstract

OBJECTIVES: Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, because CIs cannot transmit the spectro-temporal detail needed to decode affective cues. This issue is particularly important for children with CIs, yet little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and the differential developmental trajectories known in this population prompted us to question the extent to which exaggerated prosody facilitates performance on this task. The authors therefore revisited the question with both adult-directed and child-directed stimuli.
DESIGN: Vocal emotion recognition was measured under both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users aged 7 to 19 years, with no cognitive or visual impairments, who communicated orally with English as their primary language, participated in the experiment (n = 27). Stimuli comprised 12 semantically emotion-neutral sentences selected from the HINT database, spoken by a male and a female talker in a CDS or ADS manner in each of five target emotions (happy, sad, neutral, scared, and angry). Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires on CI and hearing history. It was predicted that the reduced prosodic variation in the ADS condition would result in lower vocal emotion recognition scores than in the CDS condition. It was further hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history would predict performance on vocal emotion recognition.
RESULTS: Consistent with our hypothesis, pediatric CI users scored higher with CDS than with ADS stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed: scores were higher for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed that results depended on the specific emotion: for the CDS condition's female talker, participants showed high sensitivity (d' scores) to happy sentences and low sensitivity to neutral sentences, while in the ADS condition, sensitivity was low for scared sentences.
CONCLUSIONS: In general, participants showed higher vocal emotion recognition in the CDS condition, which had greater variability in pitch and intensity, and thus more exaggerated prosody, than the ADS condition. The results suggest that pediatric CI users struggle with vocal emotion perception in general, and particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, from both an auditory communication standpoint and a socio-developmental perspective.

Year:  2020        PMID: 32149924      PMCID: PMC8323060          DOI: 10.1097/AUD.0000000000000862

Source DB:  PubMed          Journal:  Ear Hear        ISSN: 0196-0202            Impact factor:   3.570


