Daniela Carfora¹, Suyeon Kim¹, Nesma Houmani¹, Sonia Garcia-Salicetti¹, Anne-Sophie Rigaud²,³
Abstract
This work proposes a decision-aid tool for detecting Alzheimer's disease (AD) at an early stage, based on the Archimedes spiral task executed on a Wacom digitizer. Our work assesses the potential of the task as a dynamic gesture and defines the most pertinent methodology for exploiting transfer learning to compensate for sparse data. We embed kinematic time functions directly in spiral trajectory images and, with transfer learning, perform automatic feature extraction on such images. Experiments on 30 AD patients and 45 healthy controls (HC) show that the extracted features allow a significant improvement in sensitivity and accuracy compared to raw images. We study at which level of the deep network features have the highest discriminant capability; results show that intermediate-level features are the best for our specific task. Decision fusion of experts trained on such descriptors outperforms low-level fusion of hybrid images. When fusing decisions of classifiers trained on the best features from pressure, altitude, and velocity images, we obtain 84% sensitivity and 81.5% accuracy, an absolute improvement of 22% in sensitivity and 7% in accuracy. We demonstrate the potential of the spiral task for AD detection and give a complete methodology based on off-the-shelf features.
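The hybrid-image idea above — rendering the pen trajectory and encoding a kinematic time function (e.g. pressure) as grayscale along it, low values white and high values black — can be sketched as follows. The image size and min–max normalisation are illustrative assumptions, not the paper's exact rendering procedure.

```python
import numpy as np

def hybrid_image(x, y, values, size=224):
    """Render a pen trajectory as a grayscale image, encoding a kinematic
    value (e.g. pressure) at each sample: low -> white, high -> black.
    Sketch only; the paper's exact rendering may differ."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    v = np.asarray(values, float)
    # map coordinates into the image frame
    xi = ((x - x.min()) / (np.ptp(x) + 1e-9) * (size - 1)).astype(int)
    yi = ((y - y.min()) / (np.ptp(y) + 1e-9) * (size - 1)).astype(int)
    # normalise values to [0, 1], then invert: low -> 255 (white), high -> 0 (black)
    vn = (v - v.min()) / (np.ptp(v) + 1e-9)
    img = np.full((size, size), 255, dtype=np.uint8)  # white background
    img[yi, xi] = (255 * (1.0 - vn)).astype(np.uint8)
    return img

# toy usage: an Archimedes-like spiral with a synthetic pressure profile
t = np.linspace(0, 6 * np.pi, 500)
img = hybrid_image(t * np.cos(t), t * np.sin(t), np.linspace(0.1, 1.0, 500))
```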
Keywords: Alzheimer’s disease; automatic feature extraction; classification; online spiral analysis; transfer learning
Year: 2022 PMID: 36004900 PMCID: PMC9404815 DOI: 10.3390/bioengineering9080375
Source DB: PubMed Journal: Bioengineering (Basel) ISSN: 2306-5354
Figure 1. Azimuth and altitude angles captured by the Wacom digitizing tablet.
Figure 2. HC raw (pen-down) spiral trajectory images generated from coordinate sequences.
Figure 3. AD raw (pen-down) spiral trajectory images generated from coordinate sequences.
Figure 4. HC hybrid spiral images embedding pointwise pressure values in grayscale. Low and high pressure values on pen-down trajectories are displayed from white to black, respectively.
Figure 5. AD hybrid spiral images embedding pointwise pressure values in grayscale. Low and high pressure values on pen-down trajectories are displayed from white to black, respectively.
Figure 6. HC hybrid spiral images embedding pointwise velocity values in grayscale. Low and high velocity values on both pen-down and pen-up trajectories are displayed from white to black, respectively.
Figure 7. AD hybrid spiral images embedding pointwise velocity values in grayscale. Low and high velocity values on both pen-down and pen-up trajectories are displayed from white to black, respectively.
Figure 8. Feature extraction at different layers of AlexNet.
Performance (in %) on raw images with descriptors from all layers.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 63.0 ± 9.0 | 69.0 ± 21.7 | 57.0 ± 27.6 |
| Conv2 | 68.5 ± 10.5 | 72.0 ± 12.5 | 65.0 ± 16.3 |
| Conv3 | 65.5 ± 11.5 | 67.0 ± 11.0 | 64.0 ± 23.3 |
| Conv4 | 67.5 ± 8.7 | 65.0 ± 20.6 | 70.0 ± 21.3 |
| Conv5 | 71.0 ± 8.9 | 68.0 ± 17.2 | 74.0 ± 27.6 |
¹ Best case.
Performance (in %) on pressure-based images with descriptors from all layers.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 71.0 ± 4.4 | 48.0 ± 9.8 | 94.0 ± 4.9 |
| Conv2 | 74.0 ± 9.4 | 72.0 ± 14.0 | 76.0 ± 14.3 |
| Conv4 | 77.0 ± 11.0 | 75.0 ± 12.0 | 79.0 ± 13.7 |
| fc7 | 77.5 ± 6.4 | 74.0 ± 12.0 | 81.0 ± 7.0 |
¹ Best case.
Performance (in %) on altitude-based images with descriptors from all layers.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 69.0 ± 9.3 | 67.0 ± 11.9 | 72.0 ± 18.9 |
| Conv2 | 75.5 ± 11.5 | 75.0 ± 15.0 | 76.0 ± 21.5 |
| Conv4 | 74.0 ± 10.7 | 77.0 ± 7.8 | 71.0 ± 20.7 |
| Conv5 | 69.0 ± 9.2 | 70.0 ± 18.4 | 68.0 ± 18.9 |
| fc7 | 71.0 ± 7.0 | 71.0 ± 13.7 | 71.0 ± 18.7 |
¹ Best case.
Performance (in %) on velocity-based images with descriptors from all layers.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 65.0 ± 6.7 | 73.0 ± 17.9 | 57.0 ± 15.5 |
| Conv2 | 71.0 ± 9.4 | 74.0 ± 10.2 | 68.0 ± 14.0 |
| Conv3 | 73.5 ± 5.9 | 76.0 ± 11.1 | 71.0 ± 12.2 |
| Conv4 | 73.0 ± 4.0 | 76.0 ± 13.6 | 70.0 ± 12.6 |
| Conv5 | 75.5 ± 7.6 | 68.0 ± 15.4 | 83.0 ± 15.5 |
¹ Best case.
Performance (in %) on acceleration-based images with descriptors from all layers.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 65.5 ± 12.7 | 47.0 ± 32.3 | 84.0 ± 14.3 |
| Conv2 | 72.5 ± 7.5 | 69.0 ± 13.0 | 76.0 ± 19.1 |
| Conv3 | 72.0 ± 6.8 | 67.0 ± 13.5 | 77.0 ± 14.2 |
| Conv4 | 68.0 ± 8.7 | 70.0 ± 20.5 | 66.0 ± 21.5 |
| fc7 | 70.5 ± 6.9 | 66.0 ± 16.9 | 75.0 ± 14.3 |
¹ Best case.
Fusion of SVM experts’ decisions trained on pressure, altitude, and velocity descriptors extracted from all layers.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 74.5 ± 9.9 | 70.0 ± 11.8 | 79.0 ± 17.0 |
| Conv2 | 79.5 ± 9.6 | 81.0 ± 5.4 | 78.0 ± 16.6 |
| Conv4 | 76.5 ± 6.3 | 77.0 ± 10.0 | 76.0 ± 12.8 |
| Conv5 | 77.0 ± 8.4 | 74.0 ± 12.8 | 80.0 ± 17.0 |
| fc7 | 77.5 ± 5.1 | 74.0 ± 9.2 | 81.0 ± 11.4 |
¹ Best case.
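Decision fusion over per-modality SVM experts, as in the table above, can be sketched with scikit-learn. A simple majority vote over three binary experts is assumed here; this excerpt does not state the authors' exact fusion rule, and the data below are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

def fused_predict(experts, feature_sets):
    """Majority vote over per-modality SVM decisions.
    experts: one fitted SVC per modality; feature_sets: matching matrices."""
    votes = np.stack([clf.predict(X) for clf, X in zip(experts, feature_sets)])
    # with three binary experts (labels 0/1), the majority label wins
    return (votes.mean(axis=0) >= 0.5).astype(int)

# toy usage: three synthetic "modalities" (0 = HC, 1 = AD)
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 60)
mods = [rng.normal(size=(60, 8)) + y[:, None] * 1.5 for _ in range(3)]
experts = [SVC(kernel="rbf").fit(X, y) for X in mods]
pred = fused_predict(experts, mods)
```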
Fusion of SVM experts’ decisions trained on pressure, altitude, and acceleration descriptors extracted from all layers.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 74.0 ± 7.7 | 60.0 ± 17.9 | 88.0 ± 10.8 |
| Conv2 | 79.5 ± 7.6 | 81.0 ± 12.2 | 78.0 ± 16.6 |
| Conv4 | 74.0 ± 9.2 | 74.0 ± 10.2 | 74.0 ± 19.1 |
| Conv5 | 76.0 ± 7.3 | 74.0 ± 13.6 | 78.0 ± 10.8 |
| fc7 | 75.0 ± 4.5 | 73.0 ± 12.7 | 77.0 ± 11.0 |
¹ Best case.
Fusion of experts’ decisions fed with dynamic representations from Conv2, Conv3, and fc7.
| Dynamic Representations | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| P (Pressure) | 77.0 ± 9.5 | 77.0 ± 14.2 | 77.0 ± 14.2 |
| Alt (Altitude) | 77.5 ± 11.0 | 82.0 ± 4.0 | 73.0 ± 22.4 |
| V (Velocity) | 74.5 ± 5.4 | 77.0 ± 9.0 | 71.0 ± 12.2 |
¹ Best case.
Fusion of experts’ decisions fed with dynamic representations from Conv3, Conv5, and fc7.
| Dynamic Representations | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| P (Pressure) | 80.5 ± 8.8 | 77.0 ± 14.9 | 84.0 ± 15.6 |
| Alt (Altitude) | 73.0 ± 10.0 | 74.0 ± 15.0 | 72.0 ± 22.7 |
| V (Velocity) | 75.0 ± 5.9 | 70.0 ± 11.8 | 80.0 ± 13.4 |
¹ Best case.
Classification results (in %) after low-level fusion of pressure, altitude, and velocity hybrid images in AlexNet input channels.
| Layer | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Conv1 | 73.5 ± 11.2 | 68.0 ± 19.9 | 79.0 ± 17.6 |
| Conv2 | 77.0 ± 12.9 | 71.0 ± 19.7 | 83.0 ± 17.3 |
| Conv3 | 76.5 ± 12.5 | 69.0 ± 23.9 | 84.0 ± 15.6 |
| Conv5 | 75.5 ± 7.6 | 66.0 ± 18.5 | 85.0 ± 17.6 |
| fc7 | 75.0 ± 8.7 | 62.0 ± 18.3 | 88.0 ± 8.7 |
¹ Best case.
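The low-level fusion evaluated above feeds the three hybrid images into AlexNet's three input channels. A minimal sketch, assuming the pressure/altitude/velocity channel order and a simple [0, 1] rescaling (both unspecified in this excerpt):

```python
import numpy as np

def low_level_fusion(pressure_img, altitude_img, velocity_img):
    """Stack three grayscale hybrid images (uint8, same size) into the
    3-channel CHW tensor layout expected by AlexNet's input."""
    stacked = np.stack([pressure_img, altitude_img, velocity_img], axis=0)
    return stacked.astype(np.float32) / 255.0  # shape (3, H, W), values in [0, 1]
```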