
Identification of environmental sounds with varying spectral resolution.

Valeriy Shafiro

Abstract

OBJECTIVES: This study investigated the identification of familiar environmental sounds with varying spectral resolution to establish (1) the number of frequency channels needed to perceive a large heterogeneous set of familiar environmental sounds, (2) the role of cross-channel asynchrony in identification performance, and (3) the acoustic correlates of the spectral resolution required for identification.
DESIGN: In experiment 1, 60 normal-hearing listeners identified environmental sounds in a 60-alternative closed-set response task as a function of six spectral resolution conditions (i.e., 2, 4, 8, 16, 24, and 32 frequency channels) obtained with an envelope vocoder. In experiment 2, identification accuracy for varying amounts of cross-channel asynchrony was determined for sounds with preserved and degraded fine spectral structure in 10 normal-hearing listeners. Experiment 3 examined the identification performance of 72 listeners across the same six spectral resolution conditions as in experiment 1, but using three different signal processing methods designed to minimize cross-channel asynchrony. Follow-up acoustic and discriminant analyses were carried out to identify parameters that can distinguish environmental sounds based on the spectral resolution required for their identification.
RESULTS: Identification accuracy tended to improve with increasing spectral resolution, reaching a maximum of 76%. However, in experiment 1, performance did not change significantly beyond eight channels, while identification accuracy for some sounds declined with increasing spectral resolution. In experiment 2, increases in cross-channel asynchrony for sounds with preserved fine spectra had a small but significant negative effect on identification. However, minimizing the amount of asynchrony had no significant effect on the overall identification of spectrally degraded sounds in experiment 3. Acoustic analysis indicated several spectral and temporal measures that differed significantly between sounds that required eight or fewer channels and those that required 16 or more channels for 70% correct identification. Discriminant analysis revealed that the sounds could be classified into high- and low-required-spectral-resolution groups with 83% accuracy based on only two acoustic parameters: the number of bursts in the envelope and the standard deviation of spectral centroid velocity.
CONCLUSIONS: Increasing spectral resolution generally had a positive effect on the identification of familiar environmental sounds. However, across conditions, performance accuracy remained well below that of control stimuli with preserved fine spectra, despite becoming asymptotic above eight channels. Cross-channel asynchrony introduced during vocoder processing, although detrimental for some sounds, was not a major factor preventing further improvement in overall accuracy. A spectral resolution greater than 32 channels, along with additional fine spectral and temporal information, may be required for identification of a number of environmental sounds. This study provides a preliminary basis for optimizing environmental sound perception by cochlear implant users by highlighting several acoustic factors important for environmental sound identification.
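The two classifying parameters from the discriminant analysis can be approximated as below. The frame/hop sizes, envelope smoothing window, and burst-detection threshold are illustrative assumptions, not the analysis settings of the study.

```python
import numpy as np

def spectral_centroid_velocity_sd(x, fs, frame_len=512, hop=256):
    """SD of the frame-to-frame rate of change of the spectral
    centroid (Hz/s). Frame and hop sizes are assumptions."""
    win = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1 / fs)
    centroids = []
    for start in range(0, len(x) - frame_len, hop):
        mag = np.abs(np.fft.rfft(x[start:start + frame_len] * win))
        if mag.sum() > 0:                      # skip silent frames
            centroids.append((freqs * mag).sum() / mag.sum())
    velocity = np.diff(centroids) / (hop / fs)
    return float(np.std(velocity))

def count_envelope_bursts(x, fs, thresh=0.3, smooth_ms=10):
    """Count contiguous regions where the smoothed amplitude envelope
    exceeds a fraction of its peak (threshold is an assumption)."""
    n = max(1, int(fs * smooth_ms / 1000))
    env = np.convolve(np.abs(x), np.ones(n) / n, mode="same")
    above = env > thresh * env.max()
    # a burst begins at each rising edge of the thresholded envelope
    return int(np.sum(np.diff(above.astype(int)) == 1) + above[0])
```

Intuitively, sounds with few, slow envelope bursts and a stable spectral centroid (e.g., steady hums) need little spectral resolution, whereas sounds whose centroid moves rapidly require more channels to identify.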


Year:  2008        PMID: 18344871     DOI: 10.1097/AUD.0b013e31816a0cf1

Source DB:  PubMed          Journal:  Ear Hear        ISSN: 0196-0202            Impact factor:   3.570


Related articles (18 in total)

1.  The Relationship Between Environmental Sound Awareness and Speech Recognition Skills in Experienced Cochlear Implant Users.

Authors:  Michael S Harris; Lauren Boyce; David B Pisoni; Valeriy Shafiro; Aaron C Moberly
Journal:  Otol Neurotol       Date:  2017-10       Impact factor: 2.311

2.  Effects of real-time cochlear implant simulation on speech production.

Authors:  Elizabeth D Casserly
Journal:  J Acoust Soc Am       Date:  2015-05       Impact factor: 1.840

3.  Word Identification With Temporally Interleaved Competing Sounds by Younger and Older Adult Listeners.

Authors:  Karen S Helfer; Sarah F Poissant; Gabrielle R Merchant
Journal:  Ear Hear       Date:  2020 May/Jun       Impact factor: 3.570

4.  The incongruency advantage for environmental sounds presented in natural auditory scenes.

Authors:  Brian Gygi; Valeriy Shafiro
Journal:  J Exp Psychol Hum Percept Perform       Date:  2011-04       Impact factor: 3.332

5.  The influence of environmental sound training on the perception of spectrally degraded speech and environmental sounds.

Authors:  Valeriy Shafiro; Stanley Sheft; Brian Gygi; Kim Thien N Ho
Journal:  Trends Amplif       Date:  2012-08-12

6.  Supra-Segmental Changes in Speech Production as a Result of Spectral Feedback Degradation: Comparison with Lombard Speech.

Authors:  Elizabeth D Casserly; Yeling Wang; Nicholas Celestin; Lily Talesnick; David B Pisoni
Journal:  Lang Speech       Date:  2017-06-27       Impact factor: 1.500

7.  Perception of environmental sounds by experienced cochlear implant patients.

Authors:  Valeriy Shafiro; Brian Gygi; Min-Yu Cheng; Jay Vachhani; Megan Mulvey
Journal:  Ear Hear       Date:  2011 Jul-Aug       Impact factor: 3.570

8.  Objective measures of electrode discrimination with electrically evoked auditory change complex and speech-perception abilities in children with auditory neuropathy spectrum disorder.

Authors:  Shuman He; John H Grose; Holly F B Teagle; Craig A Buchman
Journal:  Ear Hear       Date:  2014 May-Jun       Impact factor: 3.570

9.  Processing pitch in a nonhuman mammal (Chinchilla laniger).

Authors:  William P Shofner; Megan Chaney
Journal:  J Comp Psychol       Date:  2012-09-17       Impact factor: 2.231

10.  Environmental Sound Awareness in Experienced Cochlear Implant Users and Cochlear Implant Candidates.

Authors:  Kevin R McMahon; Aaron C Moberly; Valeriy Shafiro; Michael S Harris
Journal:  Otol Neurotol       Date:  2018-12       Impact factor: 2.311

