| Literature DB >> 22965654 |
Tim Hahn, Andre F Marquand, Michael M Plichta, Ann-Christine Ehlis, Martin W Schecklmann, Thomas Dresler, Tomasz A Jarczok, Elisa Eirich, Christine Leonhard, Andreas Reif, Klaus-Peter Lesch, Michael J Brammer, Janaina Mourao-Miranda, Andreas J Fallgatter.
Abstract
Pattern recognition approaches to the analysis of neuroimaging data have brought new applications such as the classification of patients and healthy controls within reach. In our view, two key factors limit practical application: the reliance on expensive neuroimaging techniques that are not well tolerated by many patient groups, and the inability of most current biomarker algorithms to accommodate information about prior class frequencies (such as a disorder's prevalence in the general population). To overcome both limitations, we propose a probabilistic pattern recognition approach based on inexpensive and easy-to-use multi-channel functional near-infrared spectroscopy (fNIRS) measurements. First, we show the validity of our method by applying it to data from healthy controls (n = 14), enabling differentiation between the conditions of a visual checkerboard task. Second, we show that high-accuracy single-subject classification of patients with schizophrenia (n = 40) and healthy controls (n = 40) is possible based on temporal patterns of fNIRS data measured during a working memory task. For classification, we integrate spatial and temporal information at each channel to estimate overall classification accuracy. This yields an overall accuracy of 76%, which is comparable to the highest ever achieved in biomarker-based classification of patients with schizophrenia. In summary, the proposed algorithm in combination with fNIRS measurements enables the analysis of sub-second, multivariate temporal patterns of BOLD responses and high-accuracy predictions based on low-cost, easy-to-use fNIRS patterns. In addition, our approach can easily compensate for variable class priors, which is highly advantageous when making predictions in a wide range of clinical neuroimaging applications.
Year: 2012 PMID: 22965654 PMCID: PMC3763208 DOI: 10.1002/hbm.21497
Source DB: PubMed Journal: Hum Brain Mapp ISSN: 1065-9471 Impact factor: 5.038
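The abstract's central methodological claim is that a probabilistic classifier's output can be re-weighted for prior class frequencies such as disease prevalence. This follows from Bayes' rule. A minimal sketch of that re-weighting (our own illustration, not the authors' implementation; the function name is ours):

```python
def adjust_posterior(p, train_prior=0.5, true_prior=0.01):
    """Re-weight a posterior probability p = P(patient | data), produced by
    a model trained with class prior `train_prior`, to reflect the
    deployment-time prevalence `true_prior` (Bayes' rule re-weighting)."""
    num = p * true_prior / train_prior
    den = num + (1 - p) * (1 - true_prior) / (1 - train_prior)
    return num / den

# A confident-looking posterior of 0.8 from a balanced training set
# shrinks sharply once a 1% population prevalence is taken into account.
print(round(adjust_posterior(0.8, train_prior=0.5, true_prior=0.01), 3))  # 0.039
```

When the deployment prior equals the training prior, the re-weighting is the identity, so the adjustment never distorts a correctly specified model.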
Figure 1. Schematic arrangement of the fNIRS probe set (red squares: emitters; blue squares: detectors; numbers: measurement channels). For the visual checkerboard paradigm, a functional localizer was used to position the probe set directly over the primary visual cortex. (a) Figure illustrates the approximate position over the occipital cortex. (b) For the n‐back task, the inferior row of the left probe set was oriented towards T3 and Fp1 (T4 and Fp2 for the right side) according to the international 10–20 system (Jasper, 1958).
Figure 2. Confusion matrix summarizing the errors made by the classifier on the test set.
Performance of a hypothetical classifier (balanced dataset)
| Predicted class | True class: Patient | True class: Control |
|---|---|---|
| Patient | 80 | 20 |
| Control | 20 | 80 |
Performance of a hypothetical classifier (unbalanced dataset)
| Predicted class | True class: Patient | True class: Control |
|---|---|---|
| Patient | 4 | 19 |
| Control | 1 | 76 |
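The two hypothetical confusion matrices above can be reduced to the standard statistics used in the performance tables below. A short sketch (variable and key names are ours) showing why raw accuracy hides the collapse of the positive predictive value on the unbalanced set:

```python
def metrics(tp, fp, fn, tn):
    """Standard confusion-matrix statistics (patient = positive class)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "balanced_acc": 0.5 * (tp / (tp + fn) + tn / (tn + fp)),
    }

balanced = metrics(tp=80, fp=20, fn=20, tn=80)
unbalanced = metrics(tp=4, fp=19, fn=1, tn=76)

# Sensitivity and specificity are 0.8 in both scenarios, and so is raw
# accuracy, but the PPV drops from 0.8 to about 0.17 on the unbalanced set:
# most positive predictions are now false alarms.
print(round(balanced["ppv"], 2), round(unbalanced["ppv"], 3))
```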
Figure 3. Spatial accuracy maps for classification of the 97%‐contrast (left panel), the 40%‐contrast (middle panel), and the 8%‐contrast condition (right panel) versus the no‐contrast condition (P < 0.05, FDR‐corrected). Highest accuracies are consistently found over the primary visual cortex.
Figure 4. (a) Schematic arrangement of the fNIRS probe set (red squares: emitters; blue squares: detectors; numbers: measurement channels). (b) Spatial accuracy maps for the classification of schizophrenic patients and healthy controls (2‐back condition; P < 0.05, FDR‐corrected).
Classifier performance for a balanced training set and unbalanced test set (five controls for every patient)
| | Sensitivity | Specificity | PPV | NPV | Accuracy | Bal. Acc. | OPV |
|---|---|---|---|---|---|---|---|
| Split half | 73.0 (1.0) | 69.0 (0.7) | 70.4 (0.4) | 72.4 (0.8) | 71.0 (0.4) | 71.0 (0.4) | 71.4 (0.4) |
| Not adjusted | 71.5 (1.1) | 73.1 (0.9) | 38.1 (1.1) | 87.6 (2.0) | 73.2 (0.7) | 72.3 (0.5) | 62.8 (1.3) |
| Adjusted | 58.5 (0.6) | 87.1 (0.7) | 55.3 (1.6) | 91.3 (0.1) | 82.3 (0.5) | 72.8 (0.3) | 73.3 (0.8) |
| Chance: n. adj. | 49.9 | 50.0 | 16.6 | 83.3 | 50.0 | 49.9 | 58.3 |
| Chance: adj. | 24.4 | 75.4 | 16.6 | 83.3 | 66.9 | 49.9 | 58.3 |
All statistics are reported as percentages; numbers in parentheses indicate standard errors across 10 random splits of the data. Chance levels on the unbalanced test set are reported for both the adjusted and non‐adjusted classifiers. PPV = positive predictive value, NPV = negative predictive value, OPV = overall predictive value.
Classifier performance for a balanced training set and unbalanced test set (ten controls for every patient)
| | Sensitivity | Specificity | PPV | NPV | Accuracy | Bal. Acc. | OPV |
|---|---|---|---|---|---|---|---|
| Split half | 74.0 (1.0) | 67.5 (0.9) | 69.7 (0.4) | 72.9 (0.9) | 70.8 (0.4) | 70.8 (0.4) | 71.0 (0.5) |
| Not adjusted | 74.0 (1.0) | 68.3 (0.9) | 20.1 (0.6) | 96.2 (0.2) | 68.8 (0.8) | 71.1 (0.5) | 58.1 (0.3) |
| Adjusted | 51.5 (0.7) | 88.5 (0.7) | 40.7 (2.5) | 89.4 (1.4) | 85.4 (0.6) | 70.0 (0.5) | 65.0 (1.6) |
| Chance: n. adj. | 50.1 | 50.0 | 9.1 | 90.1 | 50.0 | 50.0 | 54.6 |
| Chance: adj. | 16.2 | 83.5 | 9.0 | 90.1 | 77.4 | 49.9 | 54.4 |
All statistics are reported as percentages; numbers in parentheses indicate standard errors across 10 random splits of the data. Chance levels on the unbalanced test set are reported for both the adjusted and non‐adjusted classifiers. PPV = positive predictive value, NPV = negative predictive value, OPV = overall predictive value.
Classifier performance for a balanced training set and unbalanced test set (20 controls for every patient)
| | Sensitivity | Specificity | PPV | NPV | Accuracy | Bal. Acc. | OPV |
|---|---|---|---|---|---|---|---|
| Split half | 71.0 (0.7) | 68.5 (1.0) | 70.9 (0.5) | 70.4 (0.5) | 69.8 (0.5) | 69.8 (0.5) | 70.7 (0.4) |
| Not adjusted | 71.0 (0.7) | 69.5 (0.9) | 11.0 (0.2) | 94.9 (1.1) | 69.1 (0.8) | 70.3 (0.4) | 53.0 (0.6) |
| Adjusted | 48.5 (1.1) | 90.0 (0.7) | 29.3 (2.8) | 97.2 (0.0) | 89.8 (0.4) | 69.3 (0.3) | 63.3 (1.4) |
| Chance: n. adj. | 50.4 | 50.0 | 4.8 | 95.3 | 50.0 | 50.1 | 52.4 |
| Chance: adj. | 14.8 | 85.2 | 4.8 | 95.2 | 81.9 | 50.0 | 52.4 |
All statistics are reported as percentages; numbers in parentheses indicate standard errors across 10 random splits of the data. Chance levels on the unbalanced test set are reported for both the adjusted and non‐adjusted classifiers. PPV = positive predictive value, NPV = negative predictive value, OPV = overall predictive value.
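The chance-level PPV rows in the tables above follow directly from Bayes' rule once sensitivity, specificity, and the patient prevalence of the test set are fixed. A sketch of that relationship (our own derivation for illustration, not taken from the paper), using the chance-level values of the last table at a prevalence of one patient per 20 controls:

```python
def ppv_from_prevalence(sens, spec, prevalence):
    """Positive predictive value via Bayes' rule: P(patient | positive
    prediction), given sensitivity, specificity, and the prevalence of
    patients in the tested population."""
    tp_rate = sens * prevalence           # P(positive and patient)
    fp_rate = (1 - spec) * (1 - prevalence)  # P(positive and control)
    return tp_rate / (tp_rate + fp_rate)

# A chance-level classifier (sens ~= 0.504, spec = 0.50) at a prevalence of
# 1/21 yields a PPV of roughly 4.8%, matching the "Chance" rows above.
print(round(ppv_from_prevalence(0.504, 0.50, 1 / 21), 3))  # 0.048
```

This also explains why the prior-adjusted classifiers trade sensitivity for specificity: with a rare positive class, suppressing false positives is the only way to keep the PPV usable.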