Shuihua Wang1, Ming Yang2, Sidan Du3, Jiquan Yang4, Bin Liu5, Juan M Gorriz6, Javier Ramírez6, Ti-Fei Yuan7, Yudong Zhang8.
Abstract
Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global features from brain images. A directed acyclic graph is employed to endow the support vector machine with the ability to handle multi-class problems. The developed system achieves an overall accuracy of 95.1% for the three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls.
Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases. Increasingly, computer-vision-based methods are being used to detect it automatically.
Materials: We have in total 49 subjects, scanned by a 3.0 T MRI system (Siemens Medical Solutions, Erlangen, Germany). The subjects comprise 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC).
Method: We treat this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) features were extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM).
Keywords: computer aided diagnosis; confusion matrix; directed acyclic graph; sensorineural hearing loss; support vector machine; unilateral hearing loss; wavelet entropy
Year: 2016 PMID: 27807415 PMCID: PMC5069288 DOI: 10.3389/fncom.2016.00106
Source DB: PubMed Journal: Front Comput Neurosci ISSN: 1662-5188 Impact factor: 2.380
Demographic data of all subjects.
| Characteristic | LHL (n = 15) | RHL (n = 14) | HC (n = 20) | F/χ²/t | p |
| Age (year) | 51.7 ± 9.6 | 53.9 ± 7.6 | 53.6 ± 5.4 | 0.305 | 0.739 |
| Gender (m/f) | 8/7 | 6/8 | 8/12 | | |
| Education level (year) | 12.5 ± 1.7 | 12.1 ± 2.4 | 11.5 ± 3.2 | 0.487 | 0.618 |
| Disease duration (year) | 17.6 ± 17.3 | 14.2 ± 14.9 | – | 0.517 | 0.610 |
| PTA of left ear (dB) | 78.1 ± 17.9 | 21.8 ± 3.2 | 22.2 ± 2.1 | 156.427 | 0.00 |
| PTA of right ear (dB) | 20.4 ± 4.2 | 80.9 ± 17.4 | 21.3 ± 2.2 | 167.796 | 0.00 |
Data are mean ± SD. LHL, left-sided hearing loss; RHL, right-sided hearing loss; PTA, pure tone average; m, male; f, female; F/χ²/t denotes the statistic from the F-test, Pearson's chi-squared test, or Student's t-test.
Figure 1Frequency-dependent hearing level of a LHL subject.
Figure 2Brain Extraction Result.
Figure 3Normalization and Gaussian kernel results. (A) Before. (B) After.
Figure 4 Diagram of a 2-level decomposition: (A) original MR brain image; (B) one-level decomposition subband; (C) two-level decomposition subband; (D) wavelet entropy (WE) vector. (B,C) are in the wavelet coefficient domain. L, low-frequency subband; H, high-frequency subband; digits after L/H represent the decomposition level; E, entropy; WE, wavelet entropy.
Pseudocode of wavelet entropy.
| Step 1 Import the brain image |
| Step 2 Choose the wavelet family and the decomposition level n |
| Step 3 Perform the n-level decomposition, generating (3n + 1) subbands |
| Step 4 Calculate the entropy over each subband |
| Step 5 Combine all entropy values into a column vector and output it as the feature |
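The five steps above can be sketched in Python. This is a minimal illustration, not the authors' implementation: it uses a hand-rolled Haar transform for self-containment (the paper uses the bior5.5 wavelet, Figures 5–6), and the histogram-based Shannon entropy estimator is an assumption, since the record does not specify how entropy was computed.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform.
    Returns the approximation (LL) and the three detail subbands (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def shannon_entropy(band, bins=64):
    """Histogram-based Shannon entropy (bits) of one subband."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def wavelet_entropy(img, levels=3):
    """Steps 1-5: decompose, compute entropy per subband, stack into a vector."""
    feats = []
    ll = img.astype(float)
    for _ in range(levels):
        ll, details = haar_dwt2(ll)
        feats.extend(shannon_entropy(b) for b in details)
    feats.append(shannon_entropy(ll))  # final approximation subband
    return np.array(feats)             # length 3*levels + 1
```

With `levels=3` this yields the 10-dimensional feature vector the paper reports for the 3-level decomposition.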
Figure 5Decomposition for bior5.5. (A) Scaling Function. (B) Wavelet Function. (C) Low-pass filter. (D) High-pass filter.
Figure 6Reconstruction for bior5.5. (A) Scaling Function. (B) Wavelet Function. (C) Low-pass filter. (D) High-pass filter.
Figure 7 An example of the DAG technique. The root node and intermediate nodes represent individual classifiers, and the leaf nodes represent the output labels.
Figure 8Diagram of the DAG-SVM for the hearing loss classification. L, LHL; R, RHL; H, HC.
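The DAG routing for the three-class L/R/H problem can be sketched as below. This is an illustrative skeleton, not the paper's code: the pairwise classifiers are placeholder callables (in the paper each node is a binary SVM), and the choice of L-vs-R as the root node is an assumption about the figure's layout.

```python
def dag_predict(x, clf_L_vs_R, clf_L_vs_H, clf_R_vs_H):
    """Route a sample through a 3-class decision DAG.

    Each pairwise classifier is a callable returning one of its two
    labels; every internal node eliminates one class, so after two
    comparisons a single label (leaf) remains.
    """
    if clf_L_vs_R(x) == "L":          # root node: R is eliminated
        return "L" if clf_L_vs_H(x) == "L" else "H"
    else:                             # L is eliminated
        return "R" if clf_R_vs_H(x) == "R" else "H"

# Toy usage with constant "classifiers":
label = dag_predict(None,
                    clf_L_vs_R=lambda x: "R",
                    clf_L_vs_H=lambda x: "L",
                    clf_R_vs_H=lambda x: "H")
# label == "H": the root eliminates L, then the R-vs-H node chooses H
```

For K classes a DAG needs K(K-1)/2 pairwise classifiers but evaluates only K-1 of them per sample; with K = 3 that is three trained nodes and two evaluations.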
Figure 9Decomposition results. (A) Original Image. (B) 1-level decomposition. (C) 2-level decomposition. (D) 3-level decomposition. (E) 4-level decomposition.
Classification accuracy vs. decomposition level.
| Decomposition level | Feature vector length | Accuracy |
| 1 | 4 | 92.24% |
| 2 | 7 | 94.08% |
| 3 | 10 | 95.10% |
| 4 | 13 | 94.29% |
Bold represents the best (3-level decomposition, 95.10%).
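The feature-vector lengths in the table follow directly from the decomposition: each level contributes three detail subbands, and one final approximation subband remains at the end.

```python
def feature_length(levels: int) -> int:
    """Length of the wavelet-entropy feature vector for an n-level
    2-D decomposition: 3 detail subbands (LH, HL, HH) per level
    plus the final approximation subband (LL), i.e. 3n + 1."""
    return 3 * levels + 1

print([feature_length(n) for n in (1, 2, 3, 4)])  # [4, 7, 10, 13]
```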
The experiment results of 3-level decomposition.
| Run | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | Total | Acc. (%) |
| Run 1 | 3 (4) | 5 (5) | 5 (5) | 4 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 47 (49) | 95.92 |
| Run 2 | 3 (4) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 48 (49) | 97.96 |
| Run 3 | 4 (4) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 4 (5) | 48 (49) | 97.96 |
| Run 4 | 4 (4) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 3 (5) | 5 (5) | 5 (5) | 47 (49) | 95.92 |
| Run 5 | 4 (4) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 4 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 48 (49) | 97.96 |
| Run 6 | 2 (4) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 4 (5) | 5 (5) | 5 (5) | 46 (49) | 93.88 |
| Run 7 | 4 (4) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 4 (5) | 5 (5) | 48 (49) | 97.96 |
| Run 8 | 4 (4) | 3 (5) | 5 (5) | 5 (5) | 5 (5) | 4 (5) | 5 (5) | 5 (5) | 5 (5) | 5 (5) | 46 (49) | 93.88 |
| Run 9 | 4 (4) | 5 (5) | 5 (5) | 5 (5) | 4 (5) | 5 (5) | 5 (5) | 5 (5) | 4 (5) | 4 (5) | 46 (49) | 93.88 |
| Run 10 | 4 (4) | 4 (5) | 5 (5) | 4 (5) | 4 (5) | 5 (5) | 4 (5) | 4 (5) | 5 (5) | 3 (5) | 42 (49) | 85.71 |
| Average | 95.10 |
F, Fold; x(y) means our classifier correctly predicts x brains out of y brains, Acc. means the accuracy for every run, Average gives the averaged accuracy over 10 runs.
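The per-run and average accuracies in the table can be re-derived from the correct-prediction totals; this is a quick arithmetic check, not the authors' evaluation code.

```python
# Correct predictions out of 49 subjects in each of the 10 runs (Total column)
correct = [47, 48, 48, 47, 48, 46, 48, 46, 46, 42]
run_acc = [100.0 * c / 49 for c in correct]
average = sum(run_acc) / len(run_acc)
print(f"{average:.2f}")  # 95.10
```

The result matches both the Average row and the overall accuracy quoted in the abstract.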
Confusion Matrix.
| True \ Predicted | HC | LHL | RHL |
| HC | 194 | 4 | 2 |
| LHL | 6 | 141 | 3 |
| RHL | 4 | 5 | 131 |
HC, healthy control; LHL, left-sided hearing loss; RHL, right-sided hearing loss.
Performance over each class.
| Class | Sensitivity | Specificity | Precision | Accuracy |
| HC | 97.00% | 96.55% | 95.10% | 96.73% |
| LHL | 94.00% | 97.35% | 94.00% | 96.33% |
| RHL | 93.57% | 98.57% | 96.32% | 97.14% |
HC, healthy control; LHL, left-sided hearing loss; RHL, right-sided hearing loss.
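The per-class figures above can be reproduced from the confusion matrix with the standard one-vs-rest definitions. The column names (sensitivity, specificity, precision, per-class accuracy) are inferred, since the extracted table lost its header, but the values computed this way match every cell.

```python
import numpy as np

# Rows = true class, columns = predicted class (HC, LHL, RHL),
# pooled over the 10 runs of cross-validation (490 decisions in total).
cm = np.array([[194,   4,   2],
               [  6, 141,   3],
               [  4,   5, 131]])

def per_class_metrics(cm, k):
    """One-vs-rest metrics (in %) for class index k."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    return (100 * tp / (tp + fn),        # sensitivity (recall)
            100 * tn / (tn + fp),        # specificity
            100 * tp / (tp + fp),        # precision
            100 * (tp + tn) / cm.sum())  # per-class accuracy

for name, k in (("HC", 0), ("LHL", 1), ("RHL", 2)):
    print(name, ["%.2f" % m for m in per_class_metrics(cm, k)])
```

For HC, for example: sensitivity 194/200 = 97.00%, specificity 280/290 = 96.55%, precision 194/204 = 95.10%, accuracy 474/490 = 96.73%, agreeing with the first row of the table.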
Comparison of accuracy with manual interpretation.
| O1 | O2 | O3 | Proposed (WE + DAG-SVM) |
| 36.73% | 32.65% | 38.78% | 95.10% |
O, Observer.
Classifier comparison.
| Classifier | Accuracy |
| FNN (Kale et al.) | 94.08% |
| DT (Scherfler et al.) | 91.84% |
| NBC (Vasta et al.) | 91.02% |
| DAG-SVM (Proposed) | 95.10% |