
Speech comprehension across multiple CI processor generations: Scene dependent signal processing.

Matthias Hey, Britta Böhnke, Alexander Mewes, Patrick Munder, Stefan J Mauger, Thomas Hocke

Abstract

OBJECTIVES: In clinical practice, speech comprehension of cochlear implant (CI) patients is typically characterized by a set of suprathreshold measurements in quiet and in noise. This study investigates speech comprehension with the three most recent cochlear implant sound processors: the CP810, CP910, and CP1000 (Cochlear Limited). To compare sound processor performance across generations and input dynamic range changes, the state-of-the-art signal processing technologies available in each sound processor were enabled. Outcomes were assessed across a range of stimulation intensities and analyzed with respect to normal hearing listeners.
METHODS: In a prospective study, 20 experienced postlingually deafened CI patients who had received a Nucleus CI in the ENT department of the University Hospital Schleswig-Holstein in Kiel were recruited. Speech comprehension was measured in quiet at 40, 50, and 65 dBSPL with monosyllabic words as well as by the speech reception threshold for two-digit numbers. In noise, speech reception thresholds were measured with the adaptive German matrix test with speech and noise from the front.
RESULTS: We found that high levels of open-set speech comprehension are achieved at suprathreshold presentation levels in quiet. However, results at lower test levels remained mostly unchanged across the tested sound processors with the default dynamic range. Expanding the lower limit of the acoustic input dynamic range yielded better speech comprehension at lower presentation levels. In noise, the application of ForwardFocus improved speech reception. Overall, a continuous improvement of speech perception across three generations of CI sound processors was found.
CONCLUSIONS: Findings motivate further development of signal pre-processing, an additional focus of clinical work on lower stimulation levels, and automation of ForwardFocus.
LEVEL OF EVIDENCE: 2.
© 2021 The Authors. Laryngoscope Investigative Otolaryngology published by Wiley Periodicals LLC. on behalf of The Triological Society.


Keywords:  Cochlear implant; ForwardFocus; noise reduction; signal processing; speech audiometry; speech intelligibility

Year:  2021        PMID: 34401506      PMCID: PMC8356868          DOI: 10.1002/lio2.564

Source DB:  PubMed          Journal:  Laryngoscope Investig Otolaryngol        ISSN: 2378-8038


INTRODUCTION

Severe to profound sensorineural hearing loss can nowadays be treated successfully by cochlear implantation. However, cochlear implant (CI) recipients still face challenges in some everyday communication situations. Speech perception outcomes in clinical environments can be regarded as a surrogate for treatment success. One main outcome measure of cochlear implantation is speech perception in quiet, normally tested at suprathreshold levels between 60 and 70 dBSPL. A great proportion of recipients are able to achieve high levels of open-set speech perception in such situations. Another main outcome measure is speech perception in noise. This can be assessed at fixed signal-to-noise ratios, yielding percent correct scores, or at variable signal-to-noise ratios by adaptively administering a speech reception threshold (SRT) test. In noisy listening situations, CI recipients experience a larger deterioration of speech perception than normal hearing subjects, accompanied by considerable variability in quiet as well as in noise. Recent studies have provided new insights into the auditory environment of CI recipients. Using CI data logging, which classifies and records recipient listening environments into one of six scenes (speech in noise, speech, noise, wind, quiet, and music), a large variability of classified listening situations was found across and within all age groups. These findings were complemented by an analysis of the acoustic signal levels in the corresponding classified listening situations. These studies found that a considerable portion of speech-related listening situations occurred below a 60 dBSPL signal level. The conclusion was that measurements at lower presentation levels might help to complete the audiometric evaluation of CI patients. Some studies have already included additional measurements at lower presentation levels, for example, 50 dBSPL or even 40 dBSPL.
Other recent studies have focused on the mapping of CI patients and investigated the effect in different listening situations. These studies focused specifically on setting T-levels and described improved speech comprehension with such CI fitting methods. It can be concluded that the optimization of CI system mapping for different acoustic environments remains challenging, as a clear rule for such fitting has not yet been found. During the last two decades, a range of dedicated signal processing algorithms for CI sound processors, such as conventional beamformers, dynamic range optimization, and spatial post-filter technologies, were introduced. Each has aimed to improve speech perception in a specific listening situation, such as speech in quiet, speech in noise, speech in spatially distributed noise, and speech in fluctuating competing signals. The most recent sound processor generation from Cochlear (CP1000) offers a dedicated preprocessing technology for spatially distributed fluctuating competing signals, enabling CI recipients to better understand conversations in one of the most challenging listening situations, commonly referred to as cocktail party noise. The primary aim of this study was to investigate to which degree a clinical audiometric test battery is able to assess the suitability of signal processing technologies for individuals. The most common clinical practice measurements of speech in quiet and speech in noise were used, complemented by tests at lower levels in quiet. The secondary goal was to investigate whether changes in signal preprocessing across multiple sound processor generations have provided improved speech comprehension.
The signal processing changes investigated were (a) increasing the instantaneous input dynamic range by reducing its threshold sound pressure level (T-SPL) for better soft-level speech comprehension, and (b) enabling the new sound processing technology ForwardFocus, which was primarily designed for spatially distributed fluctuating competing signals in noisy environments. Testing was conducted under standard clinical test conditions with speech and noise from the front.
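The effect of lowering T-SPL can be illustrated with a minimal sketch. It assumes, purely for illustration, a linear instantaneous mapping from an acoustic input window [T-SPL, C-SPL] onto the electrical output range; the commercial processors apply more elaborate processing (e.g., ADRO), and the 65 dB upper limit used here is an assumption, not a device specification.

```python
def map_input_level(spl, t_spl=25.0, c_spl=65.0):
    """Map an acoustic input level (dB SPL) to the fraction of the
    electrical range between threshold (T) and comfort (C) level used.

    Hypothetical linear instantaneous mapping for illustration only;
    commercial processors use more elaborate compression (e.g. ADRO),
    and the 65 dB upper limit (C-SPL) is an assumption.
    """
    if spl <= t_spl:
        return 0.0
    if spl >= c_spl:
        return 1.0
    return (spl - t_spl) / (c_spl - t_spl)

# Lowering T-SPL from 25 dB (default) to 15 dB lets a soft 40 dB SPL
# input use a larger share of the electrical output range:
default_share = map_input_level(40.0, t_spl=25.0)   # 0.375
extended_share = map_input_level(40.0, t_spl=15.0)  # 0.5
print(default_share, extended_share)
```

Under this toy mapping, a 40 dBSPL input sits 15 dB above the default threshold but 25 dB above the lowered one, which is the intuition behind testing soft speech with the enlarged dynamic range program.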

MATERIALS AND METHODS

Research participants

Twenty recipients participated in this investigation. It was approved and carried out in accordance with local ethics approval (D 6/18). Subjects gave their informed consent to participate in the investigation. All procedures involving human participants were performed in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Selection criteria of the CI research patients were post‐lingual onset of deafness, implantation with a Nucleus CI24RE or CI500 series cochlear implant (Cochlear Limited, Australia), adult (>18 years), and use of a CI system for more than 5 years. Additionally, participants had to demonstrate scores of 80% or more in the Oldenburg sentence test in quiet at 65 dBSPL. All patients had a fully inserted electrode array with all 22 electrodes activated, except for subjects (#8; 10; 16) who had 21 electrodes activated. Table 1 summarizes biographical details for research participants in this study.
TABLE 1

Recipients biographical data

Patient ear   Age (years)   Usage of CI (years)   Side    Gender   Rate (pps)   Maxima
#1            57.4          13.2                  right   f        720          8
#2            48.1           6.2                  left    f        1200         12
#3            73.1           6.0                  left    m        1200         12
#4            40.8           6.1                  right   f        900          12
#5            56.5           7.1                  right   f        1200         12
#6            32.9          12.0                  right   m        900          10
#7            50.8           6.3                  right   f        1200         12
#8            67.6           6.0                  right   f        1200         12
#9            38.3           8.9                  right   m        1800         10
#10           68.5           8.1                  left    m        1200         12
#11           63.5           6.2                  right   m        1200         12
#12           66.8           9.4                  right   m        1200         12
#13           49.5          15.4                  left    f        1200         12
#14           76.4           9.1                  right   m        500          12
#15           52.3           6.5                  left    m        1200         12
#16           50.3           8.1                  right   m        1200         12
#17           31.5           6.1                  right   f        1200         12
#18           46.0           6.9                  left    m        1200         12
#19           45.8          10.7                  right   f        1200         12
#20           52.8           7.2                  right   f        1200         12

Test procedures

This study used a single-subject design with repeated measures. Testing was conducted across four test sessions spaced 2 to 3 weeks apart to allow for take-home acclimatization to the new sound processor and its signal processing algorithms. All tests were conducted in an audiometric sound-treated test booth via calibrated loudspeakers placed 1.3 m from the patient. Bilateral participants were tested unilaterally with the contralateral sound processor switched off; their contralateral ear was not tested. Of the 11 bilateral patients, 5 fulfilled the inclusion criteria for one ear only. Five patients showed comparable speech comprehension and reached the inclusion criteria on both sides; in these patients the subjectively preferred ear was measured. One patient showed asymmetric speech comprehension; in this case the worse ear was investigated. All speech comprehension tests were presented through a computer-based implementation (Equinox audiometer, Interacoustics, Denmark; evidENT 3 software, Merz Medizintechnik, Germany). Speech in quiet was tested with Freiburg monosyllabic words and Freiburg numbers (20 words per list). Words were presented at fixed levels of 40 and 50 dBSPL (two lists each), and 65 dBSPL. Words as well as numbers within one list were presented in randomized order, and lists were used only once for each patient to minimize any repetitive learning effect. Additionally, the discrimination function in the region of the SRT for Freiburg two-digit numbers (10 words per list) was measured, starting at a fixed level of 40 dBSPL. The percent correct score (greater or smaller than 50%) was used to increase or decrease the presentation level in 5 dB steps until the SRT (interpolated 50% score) could be derived. For speech in noise, a German version of the so-called Matrix test with fixed syntactic structure and the same set of words in all test lists was used: the Oldenburg sentences. Speech and noise signals were presented from the front in all cases.
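The level-adaptive numbers procedure described above can be sketched as a simple up-down rule with linear interpolation at the 50% crossing. This is a hypothetical illustration of the logic only: the `word_recognition` callback and the deterministic psychometric function below are assumptions, not the clinical implementation.

```python
def measure_srt(word_recognition, start_level=40.0, step=5.0):
    """Up-down sketch of the Freiburg numbers SRT procedure.

    `word_recognition(level)` returns the percent-correct score for one
    list presented at `level` dB SPL (hypothetical callback). The level
    is raised after scores below 50% and lowered after scores above 50%;
    once the 50% point is bracketed, the SRT is linearly interpolated
    between the two bracketing (level, score) pairs.
    """
    level = start_level
    results = {}
    while True:
        score = word_recognition(level)
        results[level] = score
        if score == 50.0:
            return level
        nxt = level + step if score < 50.0 else level - step
        if nxt in results:  # the 50% point is now bracketed
            lo, hi = sorted((level, nxt))
            s_lo, s_hi = results[lo], results[hi]
            # linear interpolation to the 50% crossing
            return lo + (50.0 - s_lo) * (hi - lo) / (s_hi - s_lo)
        level = nxt

# Example with an assumed deterministic psychometric function:
psychometric = lambda lvl: max(0.0, min(100.0, (lvl - 30.0) * 4.0))
srt = measure_srt(psychometric)
print(round(srt, 1))  # 42.5
```

With the assumed function, 40 dBSPL scores 40% and 45 dBSPL scores 60%, so the interpolated SRT falls at 42.5 dBSPL.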
Sentences were presented in quasi-stationary noise without strong fluctuations at a presentation level of 65 dBSPL. Each list contained 30 sentences. The SRT was measured adaptively, defined as the SNR yielding a 50% words-correct score. All CI recipients were accustomed to the test procedure, having been previously assessed five or more times as part of our clinical routine in alignment with the training practice. To further ensure sufficient familiarity and to minimize procedural training effects with the Oldenburg sentence test, training was performed with one list prior to each session. Patients' own clinical map parameters, including T-levels and C-levels, were used without change for all programs during this study; only the sound processor generation and the signal preprocessing algorithms described below were changed. Subjects were selected from the clinic's patient pool (of which ~80% meet the sentence-in-quiet inclusion criterion and therefore generally represent the patient performance of our clinic), where routine clinical mapping uses an extensive set of audiometric and electrophysiologic measures and a single-electrode approach. A first baseline audiometric test session was carried out using the patients' own CP810 sound processor (except for patients #13 and #19, who used a CP910 and therefore did not participate in the first session but did participate in all other sessions). All patients completed testing with the new speech processors. The patients' CP810 sound processors had adaptive dynamic range optimization (ADRO) and automatic sensitivity control (ASC) enabled for take-home use and for clinical testing. Three subsequent test sessions were conducted. Prior to each test session, CI patients took home either the CP910 (one take-home period) or the CP1000 sound processor (two take-home periods) in randomized order. For take-home use of the CP910 and CP1000, the sound processor was programmed with two programs, one with the default settings.
The CP910 sound processor had ADRO, ASC, background noise reduction (SNR-NR), and automatic scene classification (SCAN), which chooses a suitable directional microphone technology (Standard, Zoom, and BEAM) depending on the actual acoustic environment. The CP1000 sound processor had ADRO, ASC, SNR-NR, SCAN, and the spatial post-filter technology ForwardFocus (FF). ForwardFocus was activated by the clinician, and patients were counseled on appropriate usage. The second sound processor program provided for both the CP910 and CP1000 used the same settings as above but with the Standard moderately directional microphone enabled rather than SCAN. An enlarged dynamic range program (CP1000DR+), implemented by decreasing the lower limit of the dynamic range (T-SPL) from 25 dB (the default setting for all three processor generations) down to 15 dB, was provided for testing in the CP1000 sound processor. After 2 to 3 weeks of acclimatization to each test sound processor, patients returned for audiometric testing. For audiometric testing, the microphone directionality that the automated scene classifier chooses in quiet or in noise was set in a fixed program, ensuring that adaptation was not a factor in outcomes (Table 2). A ForwardFocus program (CP1000FF) was created and enabled by the audiologist during testing. The technologies used in each test program, for both quiet and noise, are shown in Table 2.
TABLE 2

SmartSound options used in the given test condition

Condition   CP810                   CP910                            CP1000                           CP1000DR+
Quiet       Standard & ADRO & ASC   Standard & SNR-NR & ADRO & ASC   Standard & SNR-NR & ADRO & ASC   Standard & SNR-NR & ADRO & ASC & T-SPL = 15 dB

Condition   CP810                   CP910                            CP1000                           CP1000FF
Noise       Beam & ADRO & ASC       Beam & SNR-NR & ADRO & ASC       Beam & SNR-NR & ADRO & ASC       FF & SNR-NR & ADRO & ASC

Data analysis

To determine the relative performance of the different sound processors, paired comparison analyses were performed, with each participant serving as their own control. In all cases, a significance level of 0.05 was used for two-tailed analyses. Statistical analyses were performed with SPSS (Ver 26; IBM). Data are presented as box plots showing the median (solid mid line), 25th and 75th percentiles (box limits), and 5th and 95th percentiles (whiskers); mean values are also shown as squares. For testing differences between multiple related samples, we used the Friedman test. For post hoc analyses the Wilcoxon test was used, with Bonferroni correction applied for repeated measures. In the text, the Bonferroni-corrected F-values and P-values are shown.
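The same analysis pipeline (Friedman omnibus test followed by Bonferroni-corrected Wilcoxon signed-rank post hoc tests) can be sketched in Python with SciPy. The scores below are invented placeholder data for a handful of patients, not the study's results; the study itself used SPSS.

```python
import numpy as np
from scipy import stats

# Hypothetical word scores (%) for six patients under three processor
# conditions; placeholder data to illustrate the pipeline only.
rng = np.random.default_rng(0)
cp810 = np.array([10.0, 15.0, 5.0, 20.0, 10.0, 15.0])
cp910 = cp810 + rng.normal(5, 2, 6)
cp1000 = cp810 + rng.normal(10, 2, 6)

# Omnibus test for differences between the related samples
chi2, p = stats.friedmanchisquare(cp810, cp910, cp1000)

# Post hoc pairwise Wilcoxon signed-rank tests, Bonferroni-corrected
pairs = [(cp810, cp910), (cp810, cp1000), (cp910, cp1000)]
p_corr = [min(1.0, stats.wilcoxon(a, b).pvalue * len(pairs)) for a, b in pairs]
print(f"Friedman p={p:.4f}, corrected post hoc p={p_corr}")
```

Multiplying each pairwise p-value by the number of comparisons (capped at 1.0) is the standard Bonferroni adjustment used when reporting corrected P-values as in the text.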

RESULTS

Speech comprehension in quiet

Figure 1 shows results for monosyllabic words in quiet at presentation levels from 40 to 65 dBSPL. Figure 2 shows SRTs for Freiburg numbers in quiet. All patients successfully completed all tests with the upgraded speech processors. Freiburg words are routinely assessed in most German clinics at 65 dBSPL; however, testing at levels of 40 and 50 dBSPL is not often conducted in practice or published in research studies. The SRT for Freiburg numbers shows a direct correlation to the 500 Hz hearing threshold with an offset of 18 dB. The Freiburg numbers SRT allows for a quick check of near-threshold mapping of CIs, very similar to the SRT for sentences in quiet.
FIGURE 1

Speech perception for Freiburg monosyllabic words at different test levels with different CI sound processors (N = 20). Reference data for normal hearing subjects at 40, 50, and 65 dBSPL are 83%, 100%, and 100%

FIGURE 2

Speech reception threshold for Freiburg numbers with different CI sound processors (N = 20). The SRT for normal hearing subjects was 18 dB. Lower SRT values (ordinate up) correspond to better speech reception thresholds

Word recognition scores for the four sound processor configurations across the three test levels are shown in Figure 1. The 40 dBSPL presentation yielded a significant main effect across speech processor conditions (Friedman test: F = 15.8; df = 3; P = .001). At 40 dBSPL the median word recognition for the CP810 was 8.75%, and for the latest sound processor configuration (CP1000DR+) the median increased to 25%. Post hoc analyses found a difference between CP810 and CP1000DR+ (P = .001). At 50 dBSPL a significant main effect across speech processor conditions was also found (F = 18.3; df = 3; P < .001). Post hoc analyses yielded an improvement of 9 percentage points between CP810 and CP910 (P = .02), as well as an increase of 14 percentage points between CP810 and CP1000DR+ (P < .001). For the highest presentation level of 65 dBSPL, no difference was found across speech processor conditions, likely due to a dominant ceiling effect that was not present at the other test levels (Figure 1). The median SRTs for Freiburg numbers (Figure 2) improve monotonically with newer sound processor generations, by up to 4 dB for softer sounds. A significant main effect across speech processor conditions was found using a Friedman test (F = 22; df = 3; P < .001). Post hoc analyses yielded an improvement of 4.1 dB from CP810 to CP1000DR+ (P < .001), of 3.1 dB from CP910 to CP1000DR+ (P = .02), and of 2.9 dB from CP1000 to CP1000DR+ (P = .02).

Speech comprehension in noise

Figure 3 presents the results of the SRT measurements in noise. Baseline speech comprehension in noise with the CP810 showed a median SRT of −0.3 dB. In contrast, a median SRT of −4.0 dB was found with the CP1000FF. The Friedman test found a significant main effect (F = 34.1; df = 3; P < .001). Post hoc analysis revealed significant differences: improvements of 1.9 dB SNR and 2.8 dB SNR between the CP810 and the two configurations of the newest sound processor, CP1000 (P = .01) and CP1000FF (P < .001), and an improvement of 1.7 dB SNR between CP910 and CP1000FF (P < .001).
FIGURE 3

Speech reception thresholds for Oldenburg sentences in a stationary speech‐spectrum shaped noise with different sound processors (N = 20; S0N0; noise level fixed at 65 dBSPL; speech level adaptive). The SRT for normal hearing subjects was −7.1 dB SNR. Lower SRT values (ordinate up) correspond to better speech reception thresholds


DISCUSSION

This study investigated speech perception in quiet of CI recipients at a broad range of presentation levels under standard audiometric conditions, with some conditions showing significant improvements. This result indicates that dedicated pre-processing strategies may potentially improve speech perception in quiet, especially at soft presentation levels. It also investigated speech perception in noise and found a continuous improvement of speech perception across three generations of CI sound processors. To place these results and improvements in context, results across all conditions are plotted referenced to normal hearing subjects (Figure 4), following the general approach introduced in a previous study. Here we compare CI patients at different presentation levels and signal-to-noise ratios, depending on speech processor generation, to normal hearing listeners. In this case, normal hearing subjects were tested monaurally (with the other ear occluded) for comparison to the unilateral CI subjects, whereas in everyday listening situations they additionally benefit from binaural cues. Normal hearing performance is denoted as the zero on the y-axis (top of the plot). This comparison (Figure 4A) shows that even though CI recipients approach normal hearing performance in terms of (saturated) scores at high levels of 65 dBSPL, a larger performance gap remains at all lower presentation levels. For low-level speech, significant improvements were found for the CP1000DR+ program for both words and numbers. These results suggest that the low-level performance gap could be decreased by signal processing designed for low-level speech, which applies a lower T-SPL in these conditions. For the noise conditions (Figure 4C), a continual improvement, decreasing the performance gap, can be seen, which illustrates the continuous improvement through signal processing technologies.
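As a concrete illustration of how such gaps are constructed, the 40 dBSPL word-score medians reported in the Results can be referenced to the normal-hearing value from the Figure 1 caption. This is a minimal sketch covering only the 40 dBSPL word condition; the other conditions follow the same pattern.

```python
# NH reference at 40 dBSPL from the Figure 1 caption (83%); CI medians
# are the values reported in the Results section.
nh_ref_40 = 83.0
ci_median_40 = {"CP810": 8.75, "CP1000DR+": 25.0}

# Gap relative to normal hearing: smaller values mean performance
# closer to the normal-hearing reference.
gap_40 = {proc: nh_ref_40 - score for proc, score in ci_median_40.items()}
print(gap_40)  # {'CP810': 74.25, 'CP1000DR+': 58.0}
```

The gap shrinks from 74.25 to 58 percentage points between the oldest and newest configuration, which is the shrinking distance to the upper abscissa visible in Figure 4B.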
FIGURE 4

Speech perception gaps between CI and normal hearing subjects in quiet (A, Speech reception threshold for Freiburg numbers; B, Speech perception for Freiburg monosyllabic words) and in noise (C, Speech reception thresholds for Oldenburg sentences in noise). Median speech test results (N = 20) are plotted relative to normal hearing reference data in the same setting. Results closer to the upper abscissa represent a smaller gap of the CI patients relative to speech perception of normal hearing subjects


Soft speech in quiet

For soft speech levels we found a median improvement of 16 percentage points for monosyllabic words at 40 dBSPL from the CP810 to the CP1000 sound processor. The SRTs for Freiburg numbers were typically measured at presentation levels near 40 dBSPL with older-generation sound processors. With the recent sound processor generation, SRTs for Freiburg numbers as low as 34 dBSPL were found using the CP1000DR+. This improvement of 4 dB in SRT for Freiburg numbers corresponds to an approximate 20-percentage-point increase. A previous study found that shifting the input dynamic range had no influence on speech comprehension in quiet as well as in noise. As shown by Dawson et al for CI22 implant recipients, and in this study as well, the input dynamic range was increased by lowering the T-SPL, resulting in a higher compression of soft signals. Recent studies on T-level fitting and compression/expansion with CI patients do not yet provide a completely consistent picture of the effects of mapping. Some results suggest a trade-off between quiet scores at soft levels and speech-in-noise reception when applying compression or expansion of different degrees to the mapping of CI systems. More recent studies focused on an automated procedure for precise determination of electrical T-levels and found improved speech perception for both quiet scores and SRTs in noise. This is a remarkable result since, on median, the mapping change to precisely determined T-levels, compared with the previous clinical routine, was about 9 CL, corresponding to a compression. Such a compression was found in an earlier study to decrease speech perception in noise, but not by Rader et al, who found no effect for SRTs in noise at a 65 dB noise level and even an improvement at a lower noise level of 50 dB. The different methods in the above studies may explain, at least partially, the differences in results, and may therefore encourage future studies.
Additionally, the first study, as well as this study, did not explicitly investigate the effect of T-level fitting. The T-level in the clinical practice of our CI population is determined manually, followed by validation of the fitting via soft-level speech audiometry. The comparable results in quiet in this study indicate that automated T-level fitting would not substantially change the T-levels used here. However, an automated implementation would save considerable clinical effort, as already highlighted by Plesch et al, and provide a clinically efficient T-level fitting method to realize the low-level performance benefits seen with the CP1000DR+ program.

Speech in noise

One focus in the evolution of CI sound processors during the past decades has been the improvement of speech perception in noise, since this is a well-recognized limitation of CIs. A major step in improving speech in noise for CI systems was the introduction of an adaptive beamformer using a two-microphone array, as available in the Freedom and subsequent generations of sound processors. Fixed directional microphone arrays like Zoom were first implemented in the CP810 sound processor; they generally show less benefit for speech comprehension in noise but are more suitable in certain situations, as they do not require adaptation time. In the CP810 and earlier sound processors, pre-selectable scenes supported the programming of sound processors for specific listening environments. In the CP900 series sound processors, signal processing was supported by the introduction of automatic scene classification, which enabled appropriate pre-processing technologies. Additionally, the introduction of the noise reduction algorithm SNR-NR further improved speech comprehension in noisy environments. BEAM is activated by the SCAN automatic scene classifier in noisy situations with speech present. The progression of performance in Figure 3 illustrates that the technical advancements of each new sound processor and its signal processing have provided significant improvements for speech comprehension in noise. Complementary to the presented results in quiet, a previous study has shown that introducing an enlarged dynamic range does not necessarily compromise speech comprehension in noise. Therefore, such algorithms may be used as an additional feature for improved perception of soft speech in quiet in a future automatic scene classifier system. The relevance of performance for low-level speech in quiet should not be underestimated, given the proportion of daily communication which occurs at these levels.

Scene dependent speech comprehension

Recent studies found that a considerable part of communication takes place in noisy situations. On the other hand, a daily exposure of about 4 to 6 hours was found for speech in quiet, which often occurs at lower intensities. Some algorithms already address speech perception in quiet at low intensities. An established approach was the introduction of a higher signal compression knee-point with the Whisper setting, followed by adaptive dynamic range optimization (ADRO). As indicated by earlier results, an optimized setting of signal preprocessing for quiet has the potential to compromise speech perception in noise, and vice versa. Today's scene classification has the ability to prevent such unwanted effects through further optimization of scene-dependent signal processing in those two clearly distinguishable situations. However, the presented findings suggest that scene analysis needs to evolve toward more complexity. It is reasonable to assume that compression that is beneficial for soft speech levels (eg, ADRO and T-SPL lowering) can be annoying if louder speech in quiet is present. Additionally, an algorithm cannot predict user preferences per se. It could, and probably should, be expected that future technologies beyond currently available commercial CI systems will enable self-learning systems. As ForwardFocus is designed for situations with spatially separated signals but also shows some benefit for co-located speech and noise from the front, automation to detect and enable ForwardFocus in these situations would seem valuable. The subjectively perceived benefit of a dedicated noise suppression algorithm like ForwardFocus may also depend on the patients' movements: it is beneficial in cocktail-party situations but might be less appreciated by a user moving through a crowded shopping mall with family members behind.
Further evolution of scene classification may support users in handling optimized signal processing in complex listening situations. Information on the motion of CI recipients (acceleration detection, already an emerging trend in hearing aids), together with self-learning systems, could further support the automation of these kinds of preprocessing.

CONCLUSION

CI sound processors are able to provide open-set speech comprehension in a great proportion of patients, showing saturated test results for suprathreshold word tests in quiet. Complementary tests at lower presentation levels can provide a fuller characterization of patients' benefit. An increased acoustic input dynamic range can provide better speech comprehension at lower speech levels. ForwardFocus provides an improvement over BEAM for speech comprehension in the standard audiometric noise listening situation with speech and noise from the front. Overall, a continuous improvement of speech perception across three generations of CI sound processors was found.

CONFLICT OF INTERESTS

This study was partly supported by Cochlear Europe Ltd. Thomas Hocke is an employee of Cochlear Deutschland GmbH & Co KG and Stefan Mauger is an employee of Cochlear Limited, Melbourne, Australia. The authors report no other potential or actual conflict of interest. The authors alone are responsible for the content and writing of this paper.
REFERENCES

1.  [A multicentre comparative study of the ESPrit and the Nucleus 22].

Authors:  K Berger; H Bagus; H Michels; J Roth; B Voss; T Klenzner
Journal:  HNO       Date:  2006-05       Impact factor: 1.284

2.  [Noise signal reduction in cochlear implant speech processors].

Authors:  J Müller-Deile
Journal:  HNO       Date:  1995-09       Impact factor: 1.284

3.  Cochlear implant optimized noise reduction.

Authors:  Stefan J Mauger; Komal Arora; Pam W Dawson
Journal:  J Neural Eng       Date:  2012-11-27       Impact factor: 5.379

4.  Investigation of a matrix sentence test in noise: reproducibility and discrimination function in cochlear implant patients.

Authors:  Matthias Hey; Thomas Hocke; Jürgen Hedderich; Joachim Müller-Deile
Journal:  Int J Audiol       Date:  2014-08-20       Impact factor: 2.117

5.  Bimodal benefit for cochlear implant listeners with different grades of hearing loss in the opposite ear.

Authors:  Ulrich Hoppe; Thomas Hocke; Frank Digeser
Journal:  Acta Otolaryngol       Date:  2018-03-19       Impact factor: 1.494

6.  [Multicentric analysis of the use behavior of cochlear implant users].

Authors:  Tobias Oberhoffner; Ulrich Hoppe; Matthias Hey; Dietmar Hecker; Heike Bagus; Peter Voigt; Silvia Schicktanz; Astrid Braun; Thomas Hocke
Journal:  Laryngorhinootologie       Date:  2018-03-13       Impact factor: 1.057

7.  Speech audiometry and data logging in CI patients : Implications for adequate test levels.

Authors:  M Hey; T Hocke; P Ambrosch
Journal:  HNO       Date:  2018-01       Impact factor: 1.284

8.  A clinical assessment of cochlear implant recipient performance: implications for individualized map settings in specific environments.

Authors:  Matthias Hey; Thomas Hocke; Stefan Mauger; Joachim Müller-Deile
Journal:  Eur Arch Otorhinolaryngol       Date:  2016-06-08       Impact factor: 2.503

9.  A psychoacoustic application for the adjustment of electrical hearing thresholds in cochlear implant patients.

Authors:  Johannes Plesch; Benjamin P Ernst; Sebastian Strieth; Tobias Rader
Journal:  PLoS One       Date:  2019-10-11       Impact factor: 3.240

10.  Speech Intelligibility in Various Noise Conditions with the Nucleus® 5 CP810 Sound Processor.

Authors:  Norbert Dillier; Wai Kong Lai
Journal:  Audiol Res       Date:  2015-10-07
