
Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation.

Minke J de Boer, Tim Jürgens, Frans W Cornelissen, Deniz Başkent

Abstract

Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss only, we degraded the visual and auditory information in audiovisual video recordings and presented these to a group of healthy young volunteers. These degradations were intended to approximate some aspects of vision and hearing impairment in simulation. Other aspects, related to advanced age and potential health issues, but also to long-term adaptation and cognitive compensation strategies, were not included in the simulations. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual and auditory information are presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts both recognition performance and viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little (additional) effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitates performance, even though adding audio does not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior in response to degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.
Copyright © 2020 The Authors. Published by Elsevier Ltd. All rights reserved.


Keywords:  Age-related hearing loss; Audiovisual; Central scotoma; Dynamic; Emotion perception; Eye-tracking


Year:  2020        PMID: 33360918     DOI: 10.1016/j.visres.2020.12.002

Source DB:  PubMed          Journal:  Vision Res        ISSN: 0042-6989            Impact factor:   1.886


Related articles: 2 in total

1.  Stock prediction based on bidirectional gated recurrent unit with convolutional neural network and feature selection.

Authors:  Qihang Zhou; Changjun Zhou; Xiao Wang
Journal:  PLoS One       Date:  2022-02-04       Impact factor: 3.240

2.  COVID-19 masks: A barrier to facial and vocal information.

Authors:  Nadia Aguillon-Hernandez; Renaud Jusiak; Marianne Latinus; Claire Wardak
Journal:  Front Neurosci       Date:  2022-09-23       Impact factor: 5.152

