
On the number of channels needed to understand speech.

P C Loizou, M Dorman, Z Tu.

Abstract

Recent studies have shown that high levels of speech understanding can be achieved when the speech spectrum is divided into four channels and then reconstructed as a sum of four noise bands or sine waves with frequencies equal to the center frequencies of the channels. In these studies speech understanding was assessed using sentences produced by a single male talker. The aim of experiment 1 was to assess the number of channels necessary for a high level of speech understanding when sentences were produced by multiple talkers. In experiment 1, sentences produced by 135 different talkers were processed through n (2 ≤ n ≤ 16) channels, synthesized as a sum of n sine waves with frequencies equal to the center frequencies of the filters, and presented to normal-hearing listeners for identification. A minimum of five channels was needed to achieve a high level (90%) of speech understanding. Asymptotic performance was achieved with eight channels, at least for the speech material used in this study. The outcome of experiment 1 demonstrated that the number of channels needed to reach asymptotic performance varies as a function of the recognition task and/or the need for listeners to attend to fine phonetic detail.

In experiment 2, sentences were processed through 6 and 16 channels, and the spectral amplitudes were quantized into a small number of steps. The purpose of this experiment was to investigate whether listeners use across-channel differences in amplitude to code frequency information, particularly when speech is processed through a small number of channels. For sentences processed through six channels there was a significant reduction in speech understanding when the spectral amplitudes were quantized into a small number (<8) of steps. High levels (92%) of speech understanding were maintained for sentences processed through 16 channels quantized into only 2 steps. The findings of experiment 2 suggest an inverse relationship between the importance of spectral amplitude resolution (number of steps) and spectral resolution (number of channels).
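The processing described above (band-splitting, envelope extraction, resynthesis as a sum of sine waves at the channel center frequencies, and optional amplitude quantization as in experiment 2) can be sketched as a minimal sine-wave channel vocoder. This is an illustrative sketch only: the filter edges (80–6000 Hz), log-spaced bands, 400 Hz envelope cutoff, and geometric center frequencies are assumptions for demonstration, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def sine_vocoder(x, fs, n_channels, f_lo=80.0, f_hi=6000.0, n_steps=None):
    """Sketch of a sine-wave channel vocoder.

    Splits x into n_channels log-spaced bands, extracts each band's
    amplitude envelope, and resynthesizes the signal as a sum of sine
    waves at the band center frequencies. If n_steps is given, each
    envelope is quantized into that many amplitude steps (cf. the
    quantization manipulation in experiment 2).
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # band edges (Hz)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    # Low-pass filter used to smooth the rectified band signal into
    # an amplitude envelope (400 Hz cutoff is an illustrative choice).
    sos_env = butter(2, 400.0, btype="low", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                      # band-pass filter
        env = sosfiltfilt(sos_env, np.abs(band))        # rectify + smooth
        env = np.maximum(env, 0.0)
        if n_steps is not None:                         # quantize envelope
            peak = env.max() if env.max() > 0 else 1.0
            env = np.round(env / peak * (n_steps - 1)) / (n_steps - 1) * peak
        fc = np.sqrt(lo * hi)                           # geometric center freq
        out += env * np.sin(2 * np.pi * fc * t)         # sine-wave carrier
    return out

# Usage: vocode a synthetic two-tone signal through 6 channels with
# 8-step envelope quantization.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
y = sine_vocoder(x, fs, n_channels=6, n_steps=8)
```

With `n_steps=2` and a large `n_channels`, the envelopes reduce to near on/off patterns per channel, mirroring the condition in which intelligibility remained high with 16 channels and only 2 amplitude steps.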


Year:  1999        PMID: 10530032     DOI: 10.1121/1.427954

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


Related articles: 54 in total

1.  Features of stimulation affecting tonal-speech perception: implications for cochlear prostheses.

Authors:  Li Xu; Yuhjung Tsai; Bryan E Pfingst
Journal:  J Acoust Soc Am       Date:  2002-07       Impact factor: 1.840

2.  The intelligibility of noise-vocoded speech: spectral information available from across-channel comparison of amplitude envelopes.

Authors:  Brian Roberts; Robert J Summers; Peter J Bailey
Journal:  Proc Biol Sci       Date:  2010-11-10       Impact factor: 5.349

3.  Relative contributions of spectral and temporal cues for phoneme recognition.

Authors:  Li Xu; Catherine S Thompson; Bryan E Pfingst
Journal:  J Acoust Soc Am       Date:  2005-05       Impact factor: 1.840

4. (Review) The development of the Nucleus Freedom Cochlear implant system.

Authors:  James F Patrick; Peter A Busby; Peter J Gibson
Journal:  Trends Amplif       Date:  2006-12

5.  Evaluation of TIMIT sentence list equivalency with adult cochlear implant recipients.

Authors:  Sarah E King; Jill B Firszt; Ruth M Reeder; Laura K Holden; Michael Strube
Journal:  J Am Acad Audiol       Date:  2012-05       Impact factor: 1.664

6.  Music perception and appraisal: cochlear implant users and simulated cochlear implant listening.

Authors:  Rose Wright; Rosalie M Uchanski
Journal:  J Am Acad Audiol       Date:  2012-05       Impact factor: 1.664

7.  Spectral and temporal cues for speech recognition: implications for auditory prostheses.

Authors:  Li Xu; Bryan E Pfingst
Journal:  Hear Res       Date:  2007-12-28       Impact factor: 3.208

8.  Age-Related Differences in the Processing of Temporal Envelope and Spectral Cues in a Speech Segment.

Authors:  Matthew J Goupell; Casey R Gaskins; Maureen J Shader; Erin P Walter; Samira Anderson; Sandra Gordon-Salant
Journal:  Ear Hear       Date:  2017 Nov/Dec       Impact factor: 3.570

9.  The process of spoken word recognition in the face of signal degradation.

Authors:  Ashley Farris-Trimble; Bob McMurray; Nicole Cigrand; J Bruce Tomblin
Journal:  J Exp Psychol Hum Percept Perform       Date:  2013-09-16       Impact factor: 3.332

10.  Contribution of consonant landmarks to speech recognition in simulated acoustic-electric hearing.

Authors:  Fei Chen; Philipos C Loizou
Journal:  Ear Hear       Date:  2010-04       Impact factor: 3.570

