| Literature DB >> 35722548 |
Jason Smucny, Ge Shi, Ian Davidson.
Abstract
Deep learning (DL) is of great interest in psychiatry due to its potential, yet largely untapped, ability to utilize multidimensional datasets (such as fMRI data) to predict clinical outcomes. Typical DL methods, however, carry strong assumptions, such as the availability of large datasets and tolerance of underlying model opaqueness, that are suitable for natural image prediction problems but not for medical imaging. Here we describe three relatively novel DL approaches that may help accelerate its incorporation into mainstream psychiatry research and ultimately bring it into the clinic as a prognostic tool. We first introduce two methods that can reduce the amount of training data required to develop accurate models. These may prove invaluable for fMRI-based DL given the time and monetary expense required to acquire neuroimaging data. These methods are (1) transfer learning, the ability of deep learners to incorporate knowledge learned from one data source (e.g., fMRI data from one site) and apply it toward learning from a second data source (e.g., data from another site), and (2) data augmentation (via Mixup), a self-supervised learning technique in which "virtual" instances are created. We then discuss explainable artificial intelligence (XAI), i.e., tools that reveal what features (and in what combinations) deep learners use to make decisions. XAI can be used to address the "black box" criticism commonly leveled at DL and to reveal mechanisms that ultimately produce clinical outcomes. We expect these techniques to greatly enhance the applicability of DL in psychiatric research and to help reveal novel mechanisms and potential pathways for therapeutic intervention in mental illness.
Keywords: deep learning; explainable AI; fMRI; mixup data augmentation; transfer learning
Year: 2022 PMID: 35722548 PMCID: PMC9200984 DOI: 10.3389/fpsyt.2022.912600
Source DB: PubMed Journal: Front Psychiatry ISSN: 1664-0640 Impact factor: 5.435
Challenges for deep learning on fMRI data and proposed, emerging solutions.
| | High dimensional data | Small sample sizes | Opaque interpretability |
| --- | --- | --- | --- |
| Transfer learning | X | X | |
| Data augmentation: mixup | | X | |
| Explainable artificial intelligence | | | X |
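The "opaque interpretability" challenge in the table above is the target of XAI. One of the simplest XAI tools is occlusion sensitivity: mask one input feature at a time and record how much the model's prediction changes. The sketch below uses a linear scorer as a hypothetical stand-in for a trained deep network; the weights, feature count, and function names are illustrative assumptions, not from the paper.

```python
import numpy as np


def occlusion_attribution(predict, x, baseline=0.0):
    """Occlusion-based attribution: replace one input feature at a
    time with a baseline value and record how much the model output
    drops. Larger |score| means the feature mattered more."""
    base_out = predict(x)
    scores = np.empty(x.size, dtype=float)
    for i in range(x.size):
        x_occ = x.copy()
        x_occ[i] = baseline
        scores[i] = base_out - predict(x_occ)
    return scores


# Hypothetical "trained model": a linear scorer whose third feature
# dominates, standing in for a deep network's decision function.
w = np.array([0.1, 0.2, 5.0, 0.1])
predict = lambda x: float(x @ w)

x = np.ones(4)
scores = occlusion_attribution(predict, x)
# The attribution correctly singles out the dominant third feature.
```

For a real fMRI classifier, the same loop would occlude voxels or brain regions rather than scalar features, revealing which regions drive a prognostic decision.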
FIGURE 1. Cartoon illustration of transfer learning, where information learned from a source domain is transferred to learning in a target domain. For example, information learned from fMRI data collected during a particular cognitive task or scanning procedure can be transferred to improve learning on data gathered from a different cognitive task or scanning procedure.
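The source-to-target transfer depicted in Figure 1 can be sketched as warm-started training: pretrain a model on abundant "source-site" data, then continue training its weights on a small "target-site" set. The data below are synthetic and the gradient-descent logistic regression is a toy stand-in for a deep network; this is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)


def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Plain gradient-descent logistic regression. Passing an
    existing weight vector w warm-starts training (transfer);
    w=None trains from scratch."""
    n, d = X.shape
    if w is None:
        w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / n
    return w


def accuracy(w, X, y):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))


# Hypothetical source-site data: large, with a shared signal.
true_w = rng.normal(size=20)
X_src = rng.normal(size=(500, 20))
y_src = (X_src @ true_w > 0).astype(float)

# Hypothetical target-site data: small, same underlying signal.
X_tgt = rng.normal(size=(30, 20))
y_tgt = (X_tgt @ true_w > 0).astype(float)
X_test = rng.normal(size=(300, 20))
y_test = (X_test @ true_w > 0).astype(float)

w_src = train_logreg(X_src, y_src)                    # pretrain on source
w_transfer = train_logreg(X_tgt, y_tgt, w=w_src.copy(),
                          epochs=50)                  # fine-tune on target
w_scratch = train_logreg(X_tgt, y_tgt, epochs=50)     # no transfer
```

The analogy to fMRI-based DL: `X_src` plays the role of data from a well-sampled site or task, and `X_tgt` a smaller cohort from another site, where fine-tuning a pretrained model typically outperforms training from scratch.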
FIGURE 2. Example of Mixup as applied to fMRI data. In this example, a 50/50 virtual instance was created by combining task (cognitive control-associated) fMRI data from a recent onset patient with schizophrenia with a good clinical outcome [> 20% improvement in Total Brief Psychiatric Rating score after 1 year of treatment ("Improver")] with that of a patient with a poor clinical outcome ("Non-Improver"). Data were taken from a study by Smucny et al. (7).
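The 50/50 virtual instance in Figure 2 is a special case of Mixup, which convexly combines two real instances and their labels with a mixing weight λ, conventionally drawn from a Beta(α, α) distribution. A minimal NumPy sketch; the two random arrays are hypothetical stand-ins for the patients' fMRI activation maps, not the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)


def mixup(x_i, x_j, y_i, y_j, alpha=0.4, lam=None):
    """Create one 'virtual' training instance by convexly combining
    two real instances. lam may be fixed (e.g., 0.5 for a 50/50 mix)
    or drawn from a Beta(alpha, alpha) distribution."""
    if lam is None:
        lam = rng.beta(alpha, alpha)
    x_virtual = lam * x_i + (1.0 - lam) * x_j
    y_virtual = lam * y_i + (1.0 - lam) * y_j
    return x_virtual, y_virtual, lam


# Hypothetical voxel-wise activation maps: "Improver" labeled 1,
# "Non-Improver" labeled 0.
improver_map = rng.normal(size=(64, 64))
non_improver_map = rng.normal(size=(64, 64))

# 50/50 virtual instance, as in Figure 2.
x_v, y_v, lam = mixup(improver_map, non_improver_map, 1.0, 0.0, lam=0.5)
```

Note that the virtual label (here 0.5) is soft: the deep learner is trained to predict the mixing proportion, which regularizes decision boundaries when real labeled scans are scarce.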