| Literature DB >> 25893012 |
M P Paulraj, Kamalraj Subramaniam, Sazali Bin Yaccob, Abdul H Bin Adom, C R Hema.
Abstract
Hypoacusis is the most prevalent sensory disability in the world and can consequently impede speech development in human beings. One effective approach to tackle this issue is to conduct early hearing screening tests using the electroencephalogram (EEG). EEG-based hearing threshold determination is most suitable for persons who lack verbal communication or a behavioral response to sound stimulation. The auditory evoked potential (AEP) is a type of EEG signal recorded from the scalp in response to an acoustic stimulus. The goal of this review is to assess the current state of knowledge in estimating hearing threshold levels from the AEP response. The AEP response reflects an individual's auditory ability. An intelligent hearing perception level system makes it possible to examine and determine the functional integrity of the auditory system. Systematic evaluation of EEG-based hearing perception level systems for predicting hearing loss in newborns, infants, and persons with multiple disabilities will be a priority for future research.
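The recording principle described above relies on synchronized averaging: the evoked response is far smaller than the ongoing EEG, so fixed-length epochs time-locked to each stimulus onset are averaged to suppress the background activity. A minimal sketch on synthetic data; the sampling rate, epoch length, trial count, and noise level are all illustrative assumptions, not values from any cited study:

```python
import numpy as np

def average_aep(eeg, stim_onsets, epoch_len):
    """Average fixed-length EEG epochs time-locked to each stimulus onset."""
    epochs = np.array([eeg[s:s + epoch_len] for s in stim_onsets
                       if s + epoch_len <= len(eeg)])
    return epochs.mean(axis=0)

# Simulated example: a small evoked wave buried in background EEG noise.
rng = np.random.default_rng(0)
fs, epoch_len, n_trials = 1000, 100, 500           # 1 kHz sampling, 100 ms epochs
t = np.arange(epoch_len) / fs
evoked = 0.5 * np.sin(2 * np.pi * 40 * t)          # toy evoked component
eeg = rng.normal(0, 3, size=n_trials * epoch_len)  # noisy ongoing EEG
onsets = np.arange(0, n_trials * epoch_len, epoch_len)
for s in onsets:
    eeg[s:s + epoch_len] += evoked                 # add the response to each trial
avg = average_aep(eeg, onsets, epoch_len)
# Averaging attenuates the noise by ~sqrt(n_trials), so the wave emerges.
```

Because uncorrelated noise shrinks as the square root of the number of trials, a few hundred stimulus repetitions are typically enough for the averaged waveform to track the underlying evoked component.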
Keywords: Auditory brainstem response; Auditory evoked potential; Electroencephalogram; Hearing loss; Hearing perception; Neural networks
Year: 2015 PMID: 25893012 PMCID: PMC4391208 DOI: 10.2174/1874120701509010017
Source DB: PubMed Journal: Open Biomed Eng J ISSN: 1874-1207
Summary of AEP studies.
| Authors (Year) | Sample size & Stimulus Duration | Stimuli Frequency Range (Hz) | Stimulus Levels | Analysis method/ Feature Extraction | Reports/Results |
|---|---|---|---|---|---|
| T. W. Picton et al. (1973) | N = 12 ears, 1 sec | None | 60 dB, lowered in fixed 5 dB steps | Baseline peak amplitude and latency were determined for the attend and ignore conditions | A significant amplitude difference was detected between the attend and ignore conditions of the stimuli |
| E. Delgado et al. (1994) | N = 24 ears, HL = 11 ears, 10 msec | None | 10-70 dB in increments of 10 dB | The peak identification system combined matched filtering with a rule-based approach | Precision for peak V was 96% for normal subjects compared to 82.3% for hearing-impaired subjects |
| Barrie W. Jervis et al. (1983) | N = 3 ears, 100 msec | 1000 | 40, 70 dB | Pre- and post-stimulus energy levels were compared by means of a paired t-test | The AEP was attributed to phase reordering and contained additive energy in harmonic components |
| Robert Boston et al. (1981) | N = 14 ears, 10 msec | 100, 500, 1000 | 30, 50, 70, 80 dB | Bias, variance, and mean square error were determined from Hamming-window spectral estimates | The peak occurs at the lowest stimulus intensity at which wave V is clearly defined |
| J. Wilson et al. (1999) | N = 240 ears, 0.1 msec | None | 0, 10, 30, 50, 70, 90 dB | Time- and frequency-domain features of the ABR waves were extracted | Both time- and frequency-domain features of the ABR wave reflect changes with subject age and gender |
| J. Wilson et al. (1999) | N = 240 ears, 0.1 msec | None | 90 dB | A Daubechies 5 wavelet was employed for discrete wavelet analysis of the ABR wave | Multiresolution wavelet analysis reflects changes with subject age and gender |
| Ulrich Hoppe (2001) | N = 22 ears, 300 msec | 500, 1000, 2000, 4000 | 20, 40, 60, 80 dB | Wavelet features were extracted and a statistical test was conducted on the classification | The proposed automatic detector identified responses comparably to human experts |
| Robert Boston et al. (1989) | N = 14 ears, HL = 16 ears, 1.5 msec | None | None | A rule-based expert system with heuristic criteria was used to identify peak V | The proposed system identified present responses reliably but was less effective when no response was present |
| Walker et al. (1983) | N = 4 ears, 28 clicks/sec | None | 25, 45, 65 dB | A matched filter was employed to detect peak V | The matched-filter system reduced the time for a hearing test from 30 min to 5-10 min |
| Tapio Grönfors (1993) | N = 44 ears, 0.1 clicks/msec | None | 60, 70, 90 dB | Multifilters and an attributed automaton were used to identify peak V | The proposed method identified 80% of peak V occurrences |
| Sudirman et al. (2009) | N = 4 ears, 10 sec | 40, 500, 5000, 15000 | None | The AEP signal was analyzed by fast Fourier transform | A feed-forward neural network was used to classify hearing level from the brain signals |
| Masumi Kogure et al. (2010) | N = 10 ears, 500 msec | 440 | None | Gradients of the wave characteristics were analyzed to classify target and non-target trials | Hearing perception accuracy was 65.4-76.3% for targeted stimuli and 66.4-70.5% for non-targeted stimuli |
| Maryam Ravan et al. (2011) | 58 newborns, 300 msec | 294, 784 | 70 dB | First- and second-order moment sequences of the wavelet coefficients were used as features | Neurological development of hearing in newborns was not associated with age differences |
| Sriraam (2012) | N = 16 ears, 10 sec | None | None | Two time-frequency-domain features were extracted and classified using a neural network | The applied features achieved a classification accuracy of 65.3-100% |
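Several of the studies summarized above pair spectral (FFT-based) feature extraction with a neural-network or statistical classifier. A minimal, hypothetical sketch of band-power feature extraction of that general kind; the band edges, synthetic signals, and class separation below are illustrative assumptions and are not taken from any of the cited studies:

```python
import numpy as np

def band_power_features(signal, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Return the mean spectral power in each frequency band (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])

# Toy usage: two synthetic "recordings", one with an oscillatory response
# in the 8-13 Hz band and one containing noise only.
rng = np.random.default_rng(1)
fs, n = 256, 512
t = np.arange(n) / fs
resp = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=n)  # response present
no_resp = 0.3 * rng.normal(size=n)                            # noise only
f_resp = band_power_features(resp, fs)
f_none = band_power_features(no_resp, fs)
# The 8-13 Hz band dominates only when the response is present, so these
# features can feed a downstream classifier such as a feed-forward network.
```

Feature vectors like these are what the neural-network studies in the table take as input; the classifier itself is omitted here since its architecture varies from study to study.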