Condell Eastmond, Aseem Subedi, Suvranu De, Xavier Intes.
Abstract
Significance: Optical neuroimaging has become a well-established clinical and research tool to monitor cortical activations in the human brain. It is notable that outcomes of functional near-infrared spectroscopy (fNIRS) studies depend heavily on the data processing pipeline and classification model employed. Recently, deep learning (DL) methodologies have demonstrated fast and accurate performance in data processing and classification tasks across many biomedical fields. Aim: We aim to review the emerging DL applications in fNIRS studies. Approach: We first introduce some of the commonly used DL techniques. Then, the review summarizes current DL work in some of the most active areas of this field, including brain-computer interface, neuro-impairment diagnosis, and neuroscience discovery.
Keywords: biophotonics; brain–machine interface; data processing; functional near-infrared spectroscopy; real-time imaging
Year: 2022 PMID: 35874933 PMCID: PMC9301871 DOI: 10.1117/1.NPh.9.4.041411
Source DB: PubMed Journal: Neurophotonics ISSN: 2329-423X Impact factor: 4.212
Fig. 1 Illustrations of the three most common classes of network architectures used in the reviewed articles. (a) An MLP with all nodes fully connected. (b) A convolutional NN (CNN) with a given kernel size and subsequent pooling layers. (c) An LSTM architecture in which the final hidden states are used. For illustrative purposes, the output layer is constructed for binary classification problems.
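As a concrete sketch of the MLP in panel (a), the forward pass through one fully connected hidden layer can be written in a few lines of NumPy; the layer sizes and the 16-feature input here are illustrative choices, not taken from any reviewed study:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, W1, b1, W2, b2):
    """One fully connected hidden layer with ReLU, sigmoid output for binary classification."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer activation
    return sigmoid(h @ W2 + b2)       # probability of the positive class

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                          # e.g., 16 features from one trial
W1, b1 = rng.standard_normal((16, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)    # hidden -> output weights
p = mlp_forward(x, W1, b1, W2, b2).item()
print(p)  # a single probability in (0, 1)
```

In practice the weights would be learned by backpropagation; this sketch only shows how the fully connected layers compose.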
Activated outputs of an input $x$ from each of the activation functions are shown in the table, along with the bounds of the activated values.

| Activation function | Output values | Bounds |
|---|---|---|
| Sigmoid | $\sigma(x) = 1/(1 + e^{-x})$ | (0, 1) |
| Softmax | $e^{x_i} / \sum_j e^{x_j}$ | (0, 1) |
| ReLU | $\max(0, x)$ | [0, ∞) |
| Leaky-ReLU | $\max(\alpha x, x)$, $0 < \alpha < 1$ | (−∞, ∞) |
| ELU | $x$ if $x > 0$; $\alpha(e^x - 1)$ otherwise | (−α, ∞) |
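The activation functions in the table can be implemented directly; a minimal NumPy sketch (the α values are common defaults, not prescribed by the review):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))            # output in (0, 1)

def softmax(x):
    e = np.exp(x - np.max(x))                  # shift by max for numerical stability
    return e / e.sum()                         # entries in (0, 1), summing to 1

def relu(x):
    return np.maximum(0.0, x)                  # output in [0, inf)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)       # output in (-inf, inf)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))  # output in (-alpha, inf)

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x), softmax(x), relu(x), leaky_relu(x), elu(x), sep="\n")
```

Softmax is typically applied to the output layer of multiclass models, while the others are used on hidden layers.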
Description of the data collected in each of the papers considered.

| Author | Year | Number of participants | Approximate time recorded per participant | Number of channels | Sampling rate in Hz (resampled rate in parentheses) | Region of brain | Public dataset |
|---|---|---|---|---|---|---|---|
| Mirbagheri et al. | 2020 | 10 | 10 min | 23 | 10 | Prefrontal cortex | N/A |
| Tanveer et al. | 2019 | 13 | 30 min | 28 | 1.81 | Prefrontal and dorsolateral prefrontal cortex | N/A |
| Dolmans et al. | 2021 | 22 | 77 to 345 min | 27 | 10 | Not specified | N/A |
| Saadati et al. | 2019 | 26 and 29 | 60 to 180 min | 24 to 36 | 10 | Not specified | Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset |
| Ortega and Faisal | 2021 | 12 | 13 min | 24 | 12.6 | Sensorimotor cortex | N/A |
| Dargazany et al. | 2019 | 10 | 12.5 min | 80 | 7.8125 | Motor cortex | N/A |
| Nagasawa et al. | 2020 | 9 | 40 min | 41 | 10 | Sensorimotor regions | N/A |
| Ghonchi et al. | 2020 | 29 | <30 min? | 36 | 10 (128) | Not specified | Open access dataset for EEG+NIRS single-trial classification |
| Trakoolwilaiwan et al. | 2017 | 8 | 1000 s | 34 | 25.7 | Motor cortex | N/A |
| Janani et al. | 2020 | 10 | 60 min | 20 | 15.625 | Motor cortex | N/A |
| Yoo et al. | 2021 | 18 | 19 min | 44 | 1.81 | Auditory cortex | N/A |
| Gao et al. | 2020 | 13 | 3 to 10 h | 12 | Not specified | Prefrontal cortex | N/A |
| Ortega et al. | 2021 | 10 | 25 min | 24 | 12.5 | Bilateral sensorimotor cortex | N/A |
| Gao et al. | 2020 | 30 | <25 min | 32 | Not specified | Prefrontal and primary motor cortex and supplementary motor area | N/A |
| Ma et al. | 2021 | 36 | 200 s | 31 | 7.14 | Not specified | N/A |
| Fernandez Rojas et al. | 2021 | 18 | 3 to 5 min? | 24 | 10 | Somatosensory cortex | N/A |
| Wickramaratne and Mahmud | 2021 | 29 | <30 min? | 36 | 13.3 | Not specified | Open access dataset for EEG+NIRS single-trial classification |
| Sirpal et al. | 2019 | 40 | 30 to 180 min | Not specified | 19.5 | Bilateral anterior, middle, and posterior temporal regions and frontopolar, frontocentral, and dorsolateral frontal regions | N/A |
| Xu et al. | 2019 | 47 | 8 min | 44 | 14.29 | Bilateral inferior frontal gyrus and temporal cortex | N/A |
| Yang et al. | 2020 | 24 | 27 min | 48 | 8.138 | Prefrontal cortex | N/A |
| Takagi et al. | 2020 | 15 | 2 min | 22 | 10 | Prefrontal cortex | N/A |
| Lu et al. | 2020 | 8 | 12 to 16 min | 52 | 10 | Prefrontal cortex | Single-trial classification of antagonistic oxyhemoglobin responses during mental arithmetic |
| Saadati et al. | 2019 | 26 | 33 min | 36 | 10.4 | Not specified | Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset |
| Benerradi et al. | 2019 | 11 | 30 min | 16 | 2 | Prefrontal cortex | N/A |
| Liu et al. | 2021 | 18 | 6 min | 8 | 11.8 | Anterior prefrontal cortex | N/A |
| Lee et al. | 2018 | 6 | 30 min | 40 | Not specified | Supplementary motor area and primary motor cortex | N/A |
| Kim et al. | 2022 | 42 | 540 s | 8 | 10 | Bilateral prefrontal areas | N/A |
| Wickramaratne and Mahmud | 2021 | 30 | 50 min | 20 | Not specified | Motor regions | Open-access fNIRS dataset for classification of unilateral finger- and foot-tapping |
| Woo et al. | 2020 | 11 | 7 min | 36 | 11 | Left motor cortex | N/A |
| Hennrich et al. | 2015 | 10 | 37 min | 8 | 10 | Prefrontal cortex | N/A |
| Kwon and Im | 2021 | 18 | 12 min | 16 | 13.3 | Prefrontal cortex | N/A |
| Ho et al. | 2019 | 16 | 90 min | 7 | 18 | Prefrontal cortex | N/A |
| Asgher et al. | 2020 | 15 | 22 min | 12 | 8 | Prefrontal cortex | N/A |
| Naseer et al. | 2016 | 7 | 440 s | 16 | 1.81 | Prefrontal cortex | N/A |
| Hakimi et al. | 2020 | 20 | 10 min | 23 | 10 | Prefrontal cortex | N/A |
| Erdoğan et al. | 2019 | 11 | 10 min | 48 | 3.91 | Frontal cortex, primary motor cortex, and somatosensory motor cortex | N/A |
| Hamid et al. | 2022 | 9 | 6 min | 12 | 1.81 | Left hemisphere of M1 | N/A |
| Khan et al. | 2021 | 28 | 350 s | 48 | 3.9 | Frontal, frontal-central, and central sulcus and central and temporal-parietal lobes | N/A |
| Ortega and Faisal | 2021 | 9 | 5 min | 24 | 12.5 (80) | Bilateral sensorimotor cortex | N/A |
| Zhao | 2019 | 47 | — | 24 | Not specified | Primary motor and prefrontal region | N/A |
| Ghonchi et al. | 2015 | 29 | <30 min? | 36 | 10 (128) | Not specified | Open access dataset for EEG+NIRS single-trial classification |
| Chiarelli et al. | 2018 | 15 | 10 min | 16 | 10 | Sensorimotor regions | N/A |
| Cooney et al. | 2021 | 19 | 2 h | 8 | 10 (250) | Bihemispheric motor regions | N/A |
| Sun et al. | 2020 | 29 | 9 min | 36 | 12.5 | Not specified | Open access dataset for EEG+NIRS single-trial classification |
| Kwak et al. | 2022 | 29 | 9 min | 36 | 12.5 | Not specified | Open access dataset for EEG+NIRS single-trial classification |
| Khalil et al. | 2022 | 26 | 62 s | 36 | 10.4 | Frontal, motor cortex, parietal, and occipital regions | Simultaneous acquisition of EEG and NIRS during cognitive tasks for an open access dataset |
| Xu et al. | 2020 | 47 | 8 min | 52 | 14.3 | Bilateral temporal lobe | N/A |
| Xu et al. | 2020 | 47 | 8 min | 44 | 14.3 | Bilateral temporal lobe | N/A |
| Ma et al. | 2020 | 84 | 2 min | 52 | 10 | Bilateral frontal and temporal cortices | N/A |
| Wang et al. | 2021 | 96 | 150 min | 53 | 100 | Prefrontal cortex | N/A |
| Chao et al. | 2021 | 32 | 15 min | 22 | 7.81 | Prefrontal cortex | N/A |
| Chou et al. | 2021 | 67 | 160 s | 52 | 10 | Bilateral frontotemporal regions | N/A |
| Rosas-Romero et al. | 2019 | 5 | 30 to 180 min | 104 to 146 | 19.5 | Full scalp recording | N/A |
| Yang et al. | 2019 | 24 | 15 min | 48 | 8.138 | Prefrontal cortex | N/A |
| Yang and Hong | 2021 | 24 | 5 min | 48 | 8.138 | Prefrontal cortex | N/A |
| Ho et al. | 2022 | 140 | 30 min | 6 | 8 | Prefrontal cortex | N/A |
| Behboodi et al. | 2019 | 10 | 9.5 min | 52 | 18.51 | Sensorimotor and motor areas | N/A |
| Sirpal et al. | 2021 | 40 | 75 min | 138 | 19.5 | Full scalp recording | N/A |
| Bandara et al. | 2019 | 20 | 17 min | 52 | 10 | Frontal region | N/A |
| Qing et al. | 2020 | 8 | 18 min | 12 | 15.625 | Prefrontal cortex | N/A |
| Hiwa | 2016 | 22 | 6.5 min | 24 | 10 | Left hemisphere | N/A |
| Andreu-Perez et al. | 2021 | 30 | 7.5 min | 16 | Not specified | Prefrontal cortex | N/A |
| Ramirez et al. | 2022 | 5 | 14 min | 16 | Not specified | Left and right frontal lobes | N/A |
Fig. 2 Extraction of samples to be used for training NNs. Time-series data denoting hemodynamic concentration changes are obtained from the raw data after initial analysis and converted to the appropriate format (shown at right as the dimensions of the input data samples), based on the chosen architecture. The dimension symbols denote the number of samples, the number of timepoints, the number of channels, the height and width of spatial map images, and the number of statistical moments/features.
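The epoch-extraction step sketched in Fig. 2 amounts to slicing the continuous multichannel recording into fixed-length windows around trial onsets, yielding a (samples × timepoints × channels) tensor. A minimal illustration, where the sampling rate, window length, and channel count are arbitrary examples:

```python
import numpy as np

def segment(recording, onsets, window):
    """Cut a (timepoints, channels) recording into a (samples, window, channels) tensor."""
    return np.stack([recording[t:t + window] for t in onsets])

fs = 10                                    # sampling rate in Hz (illustrative)
recording = np.random.randn(600, 24)       # 60 s of data, 24 channels
onsets = [0, 150, 300, 450]                # trial onsets in samples
epochs = segment(recording, onsets, window=10 * fs)
print(epochs.shape)                        # samples x timepoints x channels
```

Spatial-map or statistical-feature inputs would then be derived from each epoch, collapsing the timepoint axis as required by the chosen architecture.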
The applications and key findings of each paper considered.

| Author | Year | DL architecture (e.g., CNN, LSTM) | General task (e.g., BCI, diagnosis) | Input (e.g., HbR, HbO) | Output | Validation (e.g., LOSO, LOUO) | Ground truth | Results |
|---|---|---|---|---|---|---|---|---|
| Mirbagheri et al. | 2020 | CNN | BCI | Statistical features | Stress versus relaxation | Fivefold CV | Task labels for each trial | 88.52 ± 0.77% accuracy |
| Tanveer et al. | 2019 | MLP | Cortical analysis | Segmented time series | Extracted features to feed to classifier | 10-fold CV | Drowsiness detected via change in facial expression | 83.3 ± 7.4% accuracy with KNN classifier |
| Dolmans et al. | 2021 | CNN+LSTM+MLP | BCI | Segmented time series | Mental workload level | Fivefold CV | Participant-reported difficulty ratings | 32% accuracy |
| Saadati et al. | 2019 | DNN | BCI | Segmented time series | n-back, WG, DSR, and MI versus relaxation | LOOCV | — | 89% accuracy |
| Ortega and Faisal | 2021 | HEMCNN | BCI | Statistical features | Left-hand gripping or right-hand gripping | Fivefold CV | Task labels for each trial | 78% accuracy |
| Dargazany et al. | 2019 | MLP | BCI | Raw fNIRS data | Right hand, left hand, left leg, right leg, and both hand ME | — | Task labels for each trial | 77% to 80% accuracy |
| Nagasawa et al. | 2020 | WGAN | BCI | Raw fNIRS data | Left-hand ME, right-hand ME, bimanual ME, or rest | 10-fold CV | N/A for data generation/task labels for each trial | 73.3% accuracy for augmented SVM |
| Ghonchi et al. | 2020 | RCNN | BCI | Three-rank tensors of upsampled fNIRS time series | Mental arithmetic or rest/motor imagery or rest | k-fold CV | Task labels for each trial | 99.63% accuracy |
| Trakoolwilaiwan et al. | 2017 | MLP/CNN | BCI | Segmented time series | Left-hand MI, right-hand MI, or rest | 10-fold CV | Task labels for each trial | 89.35% accuracy for MLP and 92.68% accuracy for CNN |
| Janani et al. | 2020 | MLP/CNN | BCI | Spectrograms + sample-point images/segmented time series | Left- or right-hand MI/ME | Fivefold CV | Task labels for each trial | 80.49 ± 6.66% accuracy (MI) and 85.66 ± 8.25% accuracy (ME) |
| Yoo et al. | 2021 | LSTM | BCI | Time series | English, non-English, annoyance, natural sound, music, or gunshot | Sixfold CV | Task labels for each trial | 20.38 ± 4.63% accuracy |
| Gao et al. | 2020 | CNN | Cortical analysis | Statistical features | Expert or novice | LOUO + 10-fold CV | Reported skill level of participants | 91% accuracy, 95% sensitivity, and 67% specificity |
| Ortega et al. | 2021 | CNNATT | BCI | Upsampled fNIRS data | Discrete force profiles | — | Simultaneously recorded grip force | 55.2% FVAF |
| Gao et al. | 2020 | CNN | Preprocessing/augmentation | Simulated/real HRF | Denoised HRF | — | Simulated HRF signals | 3.03 MSE |
| Ma et al. | 2021 | CNN | BCI | Time series | Left-hand MI or right-hand MI | LOSO CV | Task labels for each trial | 98.6% accuracy with FCN and ResNet |
| Fernandez Rojas et al. | 2021 | LSTM | Diagnosis | Raw HbO data | Low-cold, low-heat, high-cold, or high-heat | 10-fold CV | Task labels for each trial | 90.6% accuracy, 84.6% sensitivity, 90.4% specificity |
| Wickramaratne and Mahmud | 2021 | CNN | BCI | Statistical features | MI, MA, or rest | 10-fold CV | Task labels for each trial | 87.14 ± 3.20% accuracy |
| Sirpal et al. | 2019 | LSTM | Diagnosis | Time series | Epileptic seizure versus normal | 10-fold CV | Labeled seizure and nonseizure segments | 98.3 ± 0.4% accuracy, 89.7 ± 0.5% recall, 87.3 ± 0.8% precision |
| Xu et al. | 2019 | CNN+GRU | Diagnosis | Segmented time series | ASD or TD | — | Diagnosis of subject | 92.2% accuracy, 85.0% sensitivity, and 99.4% specificity |
| Yang et al. | 2020 | CNN | Diagnosis | Spatial maps from ΔHbO signals and spatiotemporal maps from statistical features | Cognitive impairment or healthy | Fivefold CV | Diagnosis of subject | 90.37 ± 5.30% accuracy, 86.98 ± 7.25% recall, 82.19 ± 9.93% precision for VFT |
| Takagi et al. | 2020 | CNN | Cortical analysis | Oxy, deoxy, and OD images | Teeth clenching or relaxed | Fivefold CV | Task labels for each trial | 90.3 ± 6.5% accuracy, 88.1 ± 10.8% recall, 92.4 ± 7.8% specificity, 92.5 ± 7.1% precision |
| Lu et al. | 2020 | LSTM+CNN | BCI | Statistical features | Mental arithmetic or rest | Fivefold CV | Task labels for each trial | 95.3% accuracy |
| Saadati et al. | 2019 | CNN | BCI | Topographical activity maps | 0-back, 2-back, or 3-back task | 10-fold CV | Task labels for each trial | 97 ± 1% accuracy |
| Benerradi et al. | 2019 | CNN | BCI | Statistical features | Mental workload level | LOUO CV | Task labels for each trial | 49.53% accuracy for three classes and 72.77% accuracy for two classes |
| Liu et al. | 2021 | ESN/CAE | Preprocessing/augmentation | Segmented time series | 0-back, 1-back, 2-back, or 3-back task | 10×10 CV | Task labels for each trial | 52.45% (ESN) and 47.21% (CAE) accuracy for four classes |
| Lee et al. | 2018 | MLP | Preprocessing/augmentation | Time series | Denoised time series | — | Wavelet denoising methods | CNR of 0.63 |
| Kim et al. | 2022 | CNN | Preprocessing/augmentation | Time series + HRF | Denoised time series | Ablation | Simulated HRF signals | MSE of approx. 0.004 to 0.005 |
| Wickramaratne and Mahmud | 2021 | GAN+CNN | Preprocessing/augmentation | GASF/kernel PCA GASF | GASF/motor task | LOOCV | N/A for data generation/task labels for each trial | 96.67% accuracy and 0.98 AUROC for CNN + 110% generated data |
| Woo et al. | 2020 | DCGAN+CNN | Preprocessing/augmentation | HbO t-maps | ME or rest | — | Task labels for each trial | 92.42% accuracy for unaugmented data and 97.17% for augmented data |
| Hennrich et al. | 2015 | MLP | BCI | Not specified | Mental arithmetic, word generation, mental rotation, or rest | 10-fold CV | Task labels for each trial | 64.1% accuracy |
| Kwon and Im | 2021 | CNN | BCI | Segmented time series | MA versus relaxation | LOSO CV | Task labels for each trial | 71.20 ± 8.74% accuracy |
| Ho et al. | 2019 | DBN/CNN | BCI | Statistical features | Mental workload level | — | Task labels for each trial | 84.26 ± 2.58% accuracy for DBN without PCA, 75.59 ± 3.4% for DBN with PCA inputs, 72.77 ± 1.92% accuracy for CNN without PCA, and 68.12 ± 3.26% with PCA inputs |
| Asgher et al. | 2020 | LSTM | BCI | Statistical features | Mental workload level | 10-fold CV | Task labels for each trial | 89.31 ± 3.95% accuracy, 87.51 ± 3.90% precision, 86.76 ± 4.38% recall |
| Naseer et al. | 2016 | MLP | BCI | Statistical features | MA or rest | Ablation | Task labels for each trial | 96.3 ± 0.3% accuracy for MLP |
| Hakimi et al. | 2020 | CNN | BCI | Statistical features | Stress versus relaxation | Fivefold CV | Task labels for each trial | 98.69 ± 0.45% accuracy for HRF feature set and 88.60 ± 1.15% accuracy for fNIRS feature set |
| Erdoğan et al. | 2019 | MLP | BCI | Statistical features | MI, ME, or rest | Ablation | Task labels for each trial | 96.3 ± 1.3% accuracy for ME versus rest, 95.8 ± 1.2% accuracy for MI versus rest, and 80.1 ± 2.6% accuracy for ME versus MI |
| Hamid et al. | 2022 | CNN/LSTM | BCI | Time series | Motor execution or rest | 10-fold CV | Task labels for each trial | 79.73% accuracy with CNN, 77.21% accuracy with LSTM, and 78.97% accuracy with bi-LSTM |
| Khan et al. | 2021 | MLP | BCI | Statistical features | Specific finger tapping or rest | LOSO CV | Task labels for each trial | 60 ± 2% accuracy |
| Ortega and Faisal | 2021 | CNNATT | BCI | Segmented time series | Force profiles | Fivefold CV | Simultaneously recorded grip force | 55% FVAF |
| Zhao | 2019 | BiLSTM | BCI | Statistical features | Goal execution versus completion | — | Task labels for each trial | 71.70% accuracy |
| Ghonchi et al. | 2015 | LSTM/CNN | BCI | Upsampled fNIRS data | MI classes | 10×5-fold CV | Task labels for each trial | 99.6% accuracy |
| Chiarelli et al. | 2018 | MLP | BCI | Segmented time series | Left-hand MI or right-hand MI | 10-fold CV | Task labels for each trial | 83.28 ± 2.36% accuracy |
| Cooney et al. | 2021 | CNN | BCI | Filtered frequency bands of segmented time series | One of four action words + one of four word combinations | Nested fivefold CV | Task labels for each trial | 46.31% accuracy for overt speech and 34.29% accuracy for imagined speech |
| Sun et al. | 2020 | CNN | BCI | Tensors of fused EEG and fNIRS data | Mental arithmetic/motor imagery or rest | Fivefold CV | Task labels for each trial | 77.53% accuracy for MI and 90.19% accuracy for MA |
| Kwak et al. | 2022 | CNN+attention | BCI | Three-rank tensors of upsampled fNIRS time series | Mental arithmetic/motor imagery or rest | Ablation | Task labels for each trial | 91.96 ± 5.82% accuracy for MA and 78.59 ± 5.82% accuracy for MI |
| Khalil et al. | 2022 | CNN | BCI | Time series | Mental workload level | 10-fold CV | Task labels for each trial | 94.52% accuracy |
| Xu et al. | 2020 | LSTM | Diagnosis | Time series | ASD or TD | 10-fold CV | Diagnosis of subject | 95.7 ± 4.99% accuracy |
| Xu et al. | 2020 | CNN+attention | Diagnosis | Segmented time series | ASD or TD | 10-fold CV | Diagnosis of subject | 93.3% accuracy, 90.6% sensitivity, and 97.5% specificity |
| Ma et al. | 2020 | Attention LSTM+CNN | Diagnosis | Normalized HbO, HbR, and HbT matrices | BD or MDD | k-fold CV | Diagnosis of subject | 96.2% accuracy |
| Wang et al. | 2021 | CNN | Diagnosis | Time series/statistical features | Depressed or nondepressed | Ablation | Diagnosis of subject | 83% accuracy, 79% precision, and 83% recall (manually extracted features); 72% accuracy, 80% precision, and 75% recall (raw data) |
| Chao et al. | 2021 | CFNN/RNN | Diagnosis | Statistical features | Fear stimulus or rest | LOSO CV | Task labels for each trial | 99.94% accuracy with CFNN and 99.94% with RNN |
| Chou et al. | 2021 | MLP | Diagnosis | Statistical features | FES or healthy | Sevenfold CV | Diagnosis of subject | 79.7% accuracy, 88.8% sensitivity, and 74.9% specificity |
| Rosas-Romero et al. | 2019 | CNN | Diagnosis | Three-dimensional tensors of HbO and HbR | Pre-ictal versus inter-ictal | Fivefold CV | Labeled seizure and nonseizure segments | 99.67 ± 0.75% accuracy for CNN |
| Yang et al. | 2019 | CNN | Diagnosis | Activation t-map/channel correlation map | Cognitive impairment or healthy | Sixfold CV | Diagnosis of subject | 90.62% accuracy with t-maps and 85.58% with correlation maps |
| Yang and Hong | 2021 | CNN | Diagnosis | Connectivity map | Mean, STD, and variance of ΔHbO and ΔHbR | — | Diagnosis of subject | 97.01% accuracy |
| Ho et al. | 2022 | LSTM/LSTM+CNN | Diagnosis | Segmented time series | Healthy, asymptomatic, prodromal, or dementia AD | Fivefold CV | Diagnosis of subject | 86.8% accuracy for CNN-LSTM and 84.4% accuracy for LSTM |
| Behboodi et al. | 2019 | MLP/CNN | Cortical analysis | Time series | RSFC estimation | — | Channels anatomically located over the motor and sensorimotor cortex | 0.89 AUC for MLP and 0.92 AUC for CNN |
| Sirpal et al. | 2021 | LSTM AE | Cortical analysis | Full spectrum EEG | ΔHbO concentration | k-fold CV | Real fNIRS data | 6.52 × 10⁻² mean reconstruction error (Euclidean) |
| Bandara et al. | 2019 | CNN+LSTM | Cortical analysis | Segmented time series | High/low valence + high/low arousal or neutral | Fivefold CV | Emotional valence and arousal scores of DEAP dataset | 70.18% accuracy for 1-s windows and 77.29% accuracy for 10-s windows |
| Qing et al. | 2020 | CNN | Cortical analysis | Segmented time series | Like/dislike, like/so-so, or dislike/so-so | Eightfold CV | Subject ratings for each trial | 84.3%, 87.9%, and 86.4% accuracy for 15-, 30-, and 60-s windows, respectively |
| Hiwa | 2016 | CNN | Cortical analysis | Time series | Male or female | LOOCV | Gender of subject | Approx. 60% accuracy in five best channels |
| Andreu-Perez et al. | 2021 | DCAE/MLP | Cortical analysis | Statistical features | Novice, intermediate, or expert | 10 repeated stratified k-fold CV with five splits | Reported skill level of participants | 91.43 ± 6.32% accuracy for DCAE, 91.44 ± 9.97% accuracy for MLP |
| Ramirez et al. | 2022 | CNN | Cortical analysis | fNIRS/fNIRS+EEG images | Discrete preference ratings | — | Subject ratings for each trial | 66.86% accuracy with fNIRS and 91.83% accuracy with fNIRS+EEG |
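Many of the validation schemes listed in the table (LOSO, LOUO, LOOCV) are leave-one-group-out splits, which keep every trial from a held-out subject in the test fold so that accuracy reflects generalization to unseen participants. A minimal leave-one-subject-out split, without external libraries and with illustrative subject labels:

```python
import numpy as np

def loso_splits(subject_ids):
    """Yield (train_idx, test_idx) pairs, holding out one subject per fold."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.where(subject_ids == s)[0]   # all trials of the held-out subject
        train = np.where(subject_ids != s)[0]  # trials of the remaining subjects
        yield train, test

subjects = [1, 1, 2, 2, 3, 3]  # two trials per subject (illustrative)
folds = list(loso_splits(subjects))
print(len(folds))              # one fold per subject
```

Conventional k-fold CV, by contrast, may mix one subject's trials across train and test folds, which tends to inflate reported accuracies.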
Fig. 3 Distribution of the 63 papers reviewed in this article by year, color coded by application field.
Fig. 4 Illustration of the fNIRS data simulation process and the designed DAE model. (a) The green lines are the experimental fNIRS data, including noisy HRF and resting fNIRS data, while the blue and red lines are simulated ones. (b) DAE model: the input of the DAE model is the simulated noisy HRF, and the output is the corresponding clean HRF without noise. The DAE model incorporates nine convolutional layers, followed by max-pooling layers in the first four layers and upsampling layers in the next four layers, with one convolutional layer before the output. The parameters are labeled in parentheses for each convolutional layer, in the order of kernel size, stride, input channel size, and kernel number. (c), (d) Number of residual motion artifacts for the simulated and experimental data sets, respectively. Adapted from Ref. 34.
Fig. 5 Framework of GAN and data augmentation. (a) The generator creates data from random variables, and the critic evaluates the generated and original (measured) data. (b) After the training process, the data generated by the generator (referred to as generated data) are combined with the original fNIRS data as augmented data. (c) Trial-averaged waveforms for the four tasks considered in a CV fold. The red lines denote measured original data and the blue lines denote data generated using WGANs. The shaded area represents 95% confidence intervals. Adapted from Ref. 26.
Fig. 6 A DL model combining LSTM and CNN (LAC) was used to accurately distinguish ASD from TD subjects based on the time-varying behavior of spontaneous hemodynamic fluctuations from fNIRS. Adapted from Ref. 75.
Fig. 7 (a) The cortical areas of hand movement in the sensorimotor and motor cortices. (b) Anatomical areas of the sensorimotor and motor regions related to hand movement monitored by fNIRS, highlighted in green. (c) Group RSFC map derived from CNN-based resting-state connectivity detection. (d) ROC curves for the different methods. Adapted from Ref. 84.
Fig. 8 (a) Schematic depicting the FLS box simulator where trainees perform the bimanual dexterity task. A continuous-wave spectrometer is used to measure functional brain activation via raw fNIRS signals in real time. (b) Examples of acquired time-series hemoglobin concentration data from six PFC locations while the subject is performing the PC surgical task. (c) True versus predicted FLS score plots. Each blue dot represents one sample; the red line is the identity line. The dot-dashed green lines represent the certification pass/fail threshold FLS score value. (d) ROC curves for each model with corresponding AUC values in the legend. Adapted from Refs. 100 and 32.