Phenotype discovery from population brain imaging.

Weikang Gong, Christian F Beckmann, Stephen M Smith.

Abstract

Neuroimaging allows for the non-invasive study of the brain in rich detail. Data-driven discovery of patterns of population variability in the brain has the potential to be extremely valuable for early disease diagnosis and understanding the brain. The resulting patterns can be used as imaging-derived phenotypes (IDPs), and may complement existing expert-curated IDPs. However, population datasets, comprising many different structural and functional imaging modalities from thousands of subjects, provide a computational challenge not previously addressed. Here, for the first time, a multimodal independent component analysis approach is presented that is scalable for data fusion of voxel-level neuroimaging data in the full UK Biobank (UKB) dataset, that will soon reach 100,000 imaged subjects. This new computational approach can estimate modes of population variability that enhance the ability to predict thousands of phenotypic and behavioural variables using data from UKB and the Human Connectome Project. A high-dimensional decomposition achieved improved predictive power compared with widely-used analysis strategies, single-modality decompositions and existing IDPs. In UKB data (14,503 subjects with 47 different data modalities), many interpretable associations with non-imaging phenotypes were identified, including multimodal spatial maps related to fluid intelligence, handedness and disease, in some cases where IDP-based approaches failed.
Copyright © 2021. Published by Elsevier B.V.

Keywords:  Behaviour prediction; Multimodal independent component analysis; Neuroimaging; Phenotype discovery; UK Biobank

Year:  2021        PMID: 33905882      PMCID: PMC8850869          DOI: 10.1016/j.media.2021.102050

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


Introduction

Large-scale multimodal brain imaging has enormous potential for boosting epidemiological and neuroscientific studies, generating markers for early disease diagnosis and prediction of disease progression, and deepening our understanding of human cognition, through links to clinical or behavioural variables. Recent major studies have been acquiring brain magnetic resonance imaging (MRI), genetics and demographic/behavioural data from large cohorts; examples are the UK Biobank (UKB) (Miller et al., 2016), the Human Connectome Project (HCP) (Van Essen et al., 2013) and the Adolescent Brain Cognitive Development (ABCD) study (Jernigan et al., 2018). These studies involve multimodal data, meaning that several distinct types of MRI data are acquired, mapping activity, functional networks, structural connectivity, white matter microstructure, and the organisation and volumes of different brain tissues and sub-structures (Miller et al., 2016). However, the multimodal, high-dimensional and noisy nature of such big datasets makes many existing analytical approaches for extracting interpretable information impractical (Smith and Nichols, 2018). Traditionally, large-scale neuroimaging studies first summarize the imaging data into interpretable image-derived phenotypes (IDPs) (Miller et al., 2016; Elliott et al., 2018), which are scalar quantities derived from raw imaging data (e.g., regional volumes from structural MRI, mean task activations from task MRI, resting-state functional connectivities between brain parcels). This knowledge-based approach is simple and efficient, and effectively reduces the high-dimensional data to interpretable, compact, convenient features. 
However, there may well be a large loss of information, because such “expert-hand-designed” features may not capture important sources of subject variability (or may simply lose sensitivity through suboptimal pre-defined spatial sub-areas), and because cross-modality relationships are ignored. Further, such uni-modal, compartmentalised analyses do not exploit the fact that, for many biological effects of interest, we expect biological convergence across data modalities: changes in the underlying biological phenotype likely manifest across multiple quantitative phenotypes, so a joint analysis increases both the power to detect such effects and the interpretability of the findings. In contrast to such uni-modal analyses, data-driven multivariate approaches (i.e., unsupervised machine learning) have been proposed that decompose voxel-level data directly and simultaneously, generally representing the data as a sum of “components” or “modes”. Each mode is formed as the outer product of two vectors: one a vector of subject weights (describing the relative strength of expression of that mode in each subject), the other a vector of voxel weights (in effect a spatial map for each data modality, describing the spatial localisation of the mode). The subject-weight vectors (one per mode) can be treated as “features” (similar to IDPs, but data-driven) for use in further modelling, such as the prediction of non-imaging variables. These approaches are typically based either on eigendecomposition, such as multi-set canonical correlation analysis (mCCA) (Kettenring, 1971; Klami et al., 2015), or on variants of independent component analysis (ICA) (Calhoun et al., 2006; Liu et al., 2009; Beckmann and Smith, 2005; Groves et al., 2011). 
Among these, FMRIB’s Linked ICA (FLICA) (Groves et al., 2011) is an efficient approach that has been successfully applied to identify brain systems involved in lifespan development and disease (Groves et al., 2012; Douaud et al., 2014), attention deficit hyperactivity disorder (Ball et al., 2018), preterm brain development (Ball et al., 2017), and cognition and psychopathology (Alnæs et al., 2018). FLICA has advantages over uni-modal analysis of IDPs: (1) it leverages the cross-modality information in multimodal data, so it can detect patterns that are not discoverable in any single modality; (2) it is a data-driven, objective approach that automatically discovers meaningful patterns in voxel-level multimodal data by searching for spatially non-Gaussian sources, which have been shown to likely reflect real structured features in neuroimaging data (Griffanti et al., 2014). While this approach has been applied successfully to medium-sized cohorts (Groves et al., 2012; Douaud et al., 2014; Ball et al., 2018; Ball et al., 2017; Alnæs et al., 2018), the original algorithms for carrying out FLICA do not scale well with increasing data size, and cannot analyze large datasets such as UKB, where dozens of different modalities are available for tens of thousands of subjects. 
Importantly, because the core FLICA algorithms are multivariate, acting simultaneously and in a complex way across all subjects, modalities and voxels via Variational Bayesian parameter updates, this problem cannot be solved through simple parallelisation or other algorithmically simple schemes for distributing computation across a large cluster, and so cannot be addressed simply by increasing the number of processors or the memory available. To tackle this problem, we propose an approach that embeds advanced data compression techniques, applied across the different data dimensions, into FLICA. We use a multimodal extension of MELODIC’s Incremental Group Principal component analysis (Smith et al., 2014) (mMIGP, applied across modalities) and online dictionary learning (Mairal et al., 2010) (DicL, applied within modalities) to efficiently reduce the size of multimodal neuroimaging data. The reduced data are then characterised through FLICA in terms of underlying modality-specific maps and subject loading vectors. We refer to this combination of techniques as Big-data FLICA, or BigFLICA for short. Two important advantages of the proposed approach are: (1) it preserves the key information in the original data while reducing the effects of stochastic domain-specific noise; (2) it increases the computational efficiency of the FLICA algorithm for extremely large population datasets. BigFLICA can simultaneously analyze all the multimodal data of the full 100,000-subject UKB dataset using only a modest computing cluster (Fig. 1).
Fig. 1

Overview of the proposed approach for jointly analyzing a biobank-scale multimodal neuroimaging dataset. Currently for the UKB dataset (voxel-level data, 14,503 subjects, 47 modalities), the total data size is approximately 800 GB, and if we directly feed these data into FLICA and extract 750 components, we will need approximately 1066 GB CPU memory and 1680 h computation time. Our new approach, BigFLICA, used multimodal MIGP and dictionary learning to preprocess the multimodal data; this is efficient and memory friendly, and much of this preprocessing can be easily parallelized. BigFLICA only used 50 GB memory and 73 h to analyze the same dataset using a 24-core compute server.

We first demonstrate the effectiveness of our approach through extensive simulations. Then, in real data, we quantify performance in terms of the prediction accuracy of non-imaging-derived phenotypes (nIDPs) (Liégeois et al., 2019; Kong et al., 2018), such as health outcome measures. Using voxel-level imaging data of 81 modalities from 1003 subjects in the HCP and 47 modalities from 14,053 subjects in the UKB, we show that BigFLICA performs comparably with the original FLICA (Groves et al., 2011) in terms of prediction accuracy for nIDPs (158 in HCP and 8787 in UKB). Most importantly, we systematically investigated whether there are benefits to jointly fusing multimodal data, instead of analysing modalities separately. We show that significant improvements in the prediction accuracy of nIDPs are found when comparing a high-dimensional BigFLICA with other widely-used data analysis strategies: (1) running single-modality ICA and concatenating the results across modalities, and (2) using existing IDPs (5812 in HCP and 3913 in UKB). 
In particular, the improvements in the prediction of many health-outcome and cognitive variables are large, more than doubling prediction accuracy for some variables. Furthermore, we investigated the relationship between BigFLICA-derived modes and IDPs: although estimated from the same set of voxel-level data, they carry complementary information that can be combined to further increase the prediction accuracy of nIDPs. Finally, we applied BigFLICA to the UKB data and extracted 750 components; existing multimodal ICA approaches cannot estimate this many modes from this many subjects. We found several interpretable associations between BigFLICA modes and nIDPs, including modes relating to fluid intelligence, handedness, age started wearing glasses or contact lenses, and hypertension. In many cases BigFLICA found associations with nIDPs with greater statistical sensitivity than was possible with IDPs. Overall, BigFLICA demonstrates the advantages of data-driven joint multimodal modelling in the analysis of biobank-scale multimodal datasets.

Methods

Brief overview of the proposed approach: BigFLICA

FLICA (Groves et al., 2011) is a Bayesian ICA approach for multimodal data fusion. The input of FLICA is $K$ modalities' data matrices $Y^{(k)} \in \mathbb{R}^{N \times P_k}$, where $P_k$ is the number of features (e.g., voxels) of modality $k$ and $N$ is the number of subjects. FLICA aims to find a joint $L$-dimensional decomposition of all $Y^{(k)}$:

$$Y^{(k)} = H W^{(k)} X^{(k)} + E^{(k)}, \quad k = 1, \dots, K,$$

where $H \in \mathbb{R}^{N \times L}$ is the shared subject mode (mixing matrix) across modalities (a vector of subject weights for each mode), and so acts as a ‘link’ across the different modalities; $W^{(k)} \in \mathbb{R}^{L \times L}$ is a positive diagonal mode-weights matrix (one overall weight per modality per mode); $X^{(k)} \in \mathbb{R}^{L \times P_k}$ contains the independent (spatial) feature maps for the components of modality $k$ (one map per modality per mode); and $E^{(k)}$ is the modality-specific Gaussian noise term (Fig. 1). We propose two efficient approaches, usable separately or combined, that reduce the size of the original data matrices and therefore the computational load of the original FLICA. An overview of BigFLICA is shown in Fig. 1.

The first approach, a multimodal extension of MELODIC's Incremental Group Principal component analysis (Smith et al., 2014) (mMIGP), reduces the subject dimension to a linear combination of the original subjects. mMIGP is a time- and memory-efficient approximation of principal component analysis (PCA) on feature-concatenated multimodal data. To obtain an $M$-dimensional decomposition, we first apply MIGP (Smith et al., 2014) separately within each modality to find an approximate $M_1$-dimensional PCA decomposition of that modality's $Y^{(k)}$; this step can be run in parallel across modalities. Then, we concatenate these per-modality decompositions in the component dimension and apply another MIGP to get $U \in \mathbb{R}^{N \times M}$, an $M$-dimensional approximate PCA decomposition of all modalities together. Finally, we project the original data of each modality onto the PCA-reduced space: $\tilde{Y}^{(k)} = U^{\top} Y^{(k)}$.

If no further reduction (e.g., dictionary learning, below) is applied, the data fed into the core FLICA are the component-by-feature matrices $\tilde{Y}^{(k)} \in \mathbb{R}^{M \times P_k}$, from which FLICA extracts $L$ ($L < M$) components (Methods). This step adds almost no computational cost compared with the original FLICA, because a similar PCA step is needed to initialize the parameters of the original FLICA, but it remains feasible for large numbers of subjects and modalities. Although different modalities usually have different overall signal-to-noise ratios (SNR), which this mMIGP step largely ignores, the subsequent FLICA can account for this through its modality-specific noise terms, and a high-dimensional mMIGP is used so that modes with even small variance within a modality are captured.

Voxels are correlated both locally (spatial autocorrelation) and across brain networks (long-range correlation); hence, effective feature subsampling can hope to capture all important information in the data while reducing the cost of spatial modelling in FLICA (Hoyos-Idrobo et al., 2019). We therefore incorporate a second approach, sparse online Dictionary Learning (Mairal et al., 2010) (DicL), to reduce the dimension of the feature (e.g., voxel) space in a way that captures both local and distant spatial correlation structure. Specifically, for each modality we use DicL to model $Y^{(k)}$ as a sparse linear combination of basis elements:

$$Y^{(k)} \approx A^{(k)} D^{(k)},$$

where $D^{(k)} \in \mathbb{R}^{J \times P_k}$ is the sparse spatial dictionary basis and $A^{(k)} \in \mathbb{R}^{N \times J}$ contains the feature loadings. By minimizing the reconstruction error while enforcing sparsity in the dictionary basis, we aim to achieve an optimal subsampling of the feature space. The inputs to FLICA are then the smaller matrices $A^{(k)}$, of dimension $N \times J$ only, from which FLICA extracts $L$ ($L < J$) components (Methods). Compared with running FLICA on the original large matrices, using the DicL-preprocessed data greatly reduces the computational load. DicL can easily be parallelized across modalities and is memory-friendly, which further increases efficiency (Fig. 1).

FLICA model

The input to FLICA is $K$ modalities' data matrices $Y^{(k)} \in \mathbb{R}^{N \times P_k}$, where $P_k$ is the number of features (e.g., voxels) in modality $k$ and $N$ is the number of subjects. FLICA aims to find a joint $L$-dimensional decomposition of all $Y^{(k)}$:

$$Y^{(k)} = H W^{(k)} X^{(k)} + E^{(k)}, \quad k = 1, \dots, K,$$

where $H \in \mathbb{R}^{N \times L}$ is the shared subject mode (mixing matrix) across modalities, and so is a ‘link’ across the different modalities; $W^{(k)}$ is a positive diagonal mode-weights matrix; $X^{(k)} \in \mathbb{R}^{L \times P_k}$ contains the independent (spatial) feature maps for the components of modality $k$; and $E^{(k)}$ is the Gaussian noise term.

Multimodal extension of MELODIC’s incremental group principal component analysis for subject-space dimension reduction

We propose a multimodal extension of our previous MIGP approach (Smith et al., 2014), termed mMIGP, to reduce the subject dimension of multimodal data. MIGP has been extensively validated in simulations and real neuroimaging data as a time- and memory-efficient way of finding an approximate PCA decomposition (Smith et al., 2014). Suppose that our multimodal data are $K$ matrices $Y^{(k)} \in \mathbb{R}^{N \times P_k}$, where $N$ is the number of subjects and $P_k$ is the number of features (e.g. voxels) in modality $k$. In mMIGP, each feature is first z-score normalized. Then, MIGP is applied to each modality separately to find an $M_1$-dimensional approximate PCA decomposition. Specifically, we want to find an approximation of a singular value decomposition (SVD) of each $Y^{(k)}$:

$$Y^{(k)} \approx U^{(k)} S^{(k)} V^{(k)\top},$$

where $U^{(k)}$ and $V^{(k)}$ are the left and right singular vectors and $S^{(k)}$ contains the singular values. A naive SVD of $Y^{(k)}$ scales quadratically with $N$, which is not efficient when $N$ is large. To find the approximation, MIGP sequentially feeds subsets of $Y^{(k)}$ into an SVD, reducing each subset to a low-dimensional representation; this representation is then concatenated with the next subset and fed into another SVD, and the final SVD approximation is obtained after one pass through all the data. The computational complexity of MIGP scales linearly with $N$. For a detailed description, please see Appendix A of the MIGP paper (Smith et al., 2014). The third step is to concatenate all the per-modality representations $U^{(k)} S^{(k)}$ in the component dimension and apply another MIGP to find an $M$-dimensional approximate PCA decomposition $U \in \mathbb{R}^{N \times M}$, a low-dimensional representation of all the multimodal data in the analysis. Finally, the z-score normalized data of each modality are projected onto $U$ by

$$\tilde{Y}^{(k)} = U^{\top} Y^{(k)},$$

and the $\tilde{Y}^{(k)} \in \mathbb{R}^{M \times P_k}$ are the inputs of the subsequent FLICA algorithm. The total size of the data output by this stage is therefore $M \times \sum_k P_k$, which is smaller than the original input size $N \times \sum_k P_k$. 
The fractional reduced data size is $M/N$, and $M$ can be held fixed as more subjects are introduced, so the approach is scalable for big-data analysis. In practice, we usually choose $M$ based on the percentage of variance explained by the SVD in the third step. If we feed the $\tilde{Y}^{(k)}$ into FLICA to estimate $L$ FLICA modes, the output subject-mode matrix $\tilde{H}$ is of size $M \times L$, so we simply multiply it by $U$ to obtain the final subject-mode matrix: $H = U \tilde{H}$. The mMIGP approach is equivalent to performing an approximate PCA on feature-concatenated data. The advantage is that it does not need to fit all the data into memory, and it can even be parallelized across modalities (Smith et al., 2014). This approach is also equivalent to applying mCCA across all modalities (Parra, 2018).
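The incremental SVD idea behind MIGP can be sketched in a few lines of NumPy. This is an illustrative toy, not the FSL implementation: the helper name, chunking scheme and dimensions are ours. A running low-rank summary is updated one voxel-chunk at a time, so the full matrix never has to be held in memory at once.

```python
import numpy as np

def migp_subject_basis(voxel_chunks, m):
    """Approximate the top-m left singular (subject-space) basis of
    Y = [chunk_1 | chunk_2 | ...] in one pass over column chunks.
    Each SVD is on an n_subjects x (m + chunk_width) matrix, so the
    total cost grows only linearly with the number of voxels."""
    W = None
    for Yc in voxel_chunks:
        stacked = Yc if W is None else np.hstack([W, Yc])
        U, s, _ = np.linalg.svd(stacked, full_matrices=False)
        k = min(m, U.shape[1])
        W = U[:, :k] * s[:k]          # running n_subjects x m summary
    return W

# For exactly low-rank data the one-pass procedure recovers the basis exactly
rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 200))
W = migp_subject_basis([Y[:, i:i + 50] for i in range(0, 200, 50)], m=5)
proj = W @ np.linalg.pinv(W) @ Y      # projection of Y onto the learned basis; proj ≈ Y here
```

The same pattern applied across (rather than within) modalities gives the second-stage reduction described above.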

Sparse dictionary learning for voxel-space dimension reduction

If the resolution of the data is high and the number of modalities is large, applying only the mMIGP reduction still leaves FLICA memory- and computation-expensive. We therefore propose a method that effectively reduces the voxel dimension while preserving the important spatial information for the subsequent FLICA spatial modelling. The most obvious forms of voxel subsampling are regular spatial downsampling (or, similarly, local voxel clustering) and within-modality PCA; however, the former only focuses on local patterns (Hoyos-Idrobo et al., 2019) (and does not adapt the downsampling to local variations in redundant information across voxels), while the latter empirically finds more global and noisy patterns in neuroimaging data, and does not work at all well empirically in this context (see also Allen et al., 2014 and references therein). The method we use here is sparse Dictionary Learning (DicL) (Mairal et al., 2010), which effectively performs ‘voxel grouping’ in both a local and a global fashion. It can be applied directly to each of the original z-score normalized modalities $Y^{(k)}$, or to the mMIGP-reduced data $\tilde{Y}^{(k)}$. Taking the former as an example, the sparse DicL model is

$$Y^{(k)} \approx A^{(k)} D^{(k)},$$

where $D^{(k)} \in \mathbb{R}^{J \times P_k}$ is the sparse spatial dictionary basis and $A^{(k)} \in \mathbb{R}^{N \times J}$ contains the feature loadings, each column representing a linear combination of information from a group of voxels, which may be either a local cluster or a spatially distributed network. By minimizing an $\ell_1$-regularized sparse-coding objective function, a locally optimal solution can be obtained:

$$\min_{A^{(k)},\, D^{(k)}} \;\sum_{i=1}^{P_k} \left( \frac{1}{2} \left\| Y^{(k)}_{\cdot i} - A^{(k)} D^{(k)}_{\cdot i} \right\|_2^2 + \lambda \left\| D^{(k)}_{\cdot i} \right\|_1 \right),$$

where the subscript $\cdot i$ denotes the $i$th column of the corresponding matrix and $\lambda$ is a regularization parameter. The $\ell_1$ term enforces sparsity of the learned spatial loadings. The objective function can be efficiently optimized by a block-coordinate descent optimizer with warm restarts, as implemented in the SPAMS package (http://spams-devel.gforge.inria.fr/). 
Compared with simply using PCA in this step, sparse DicL has three advantages: (1) the spatial loading matrix $D^{(k)}$ is sparse, so a smaller number of voxels contributes to each dictionary element; (2) the dictionary columns need not be orthogonal to each other, which is more flexible; (3) an “overcomplete” dictionary is allowed, i.e., the number of dictionary basis vectors $J$ can exceed $\min(N, P_k)$, which further increases flexibility. After the above modality-wise DicL, the final inputs to FLICA are the matrices $A^{(k)}$, of size $N \times J$ if we use $Y^{(k)}$ or $M \times J$ if we use $\tilde{Y}^{(k)}$. Note that (unlike the typical approach of feeding spatial PCA eigenvectors into ICA) we feed not the spatial dictionary basis $D^{(k)}$ but the feature loadings $A^{(k)}$ into the FLICA core modelling. To obtain the spatial loading matrices from FLICA, we run a voxel-wise multiple regression in which the target variable is a voxel's data and the design matrix is the FLICA subject mode. The order of the two reductions could be swapped, applying DicL first and then mMIGP, but empirically this has lower computational efficiency.

Evaluation of BigFLICA in simulations

We simulated 500 subjects, each with two modalities, both 2D images. We first simulated ground-truth (independent) spatial maps $X$, each formed as a weighted sum of two images: a Gaussian white noise image with weight 0.05, and a cube randomly located within the image with weight 0.95. Then, random positive component weights $W$, Gaussian random subject loadings $H$ and Gaussian white noise terms $E$ were simulated. Finally, after vectorizing each spatial map and noise term, the data for a single modality were generated as

$$Y = H W X + \sigma E,$$

where $\sigma$ is a parameter controlling the signal-to-noise ratio (SNR), defined here as the ratio of the variance of the signal $HWX$ to the variance of the noise $\sigma E$. A small amount of spatial smoothing with a Gaussian kernel was applied to the spatial maps and noise terms to mimic real image data. Each of the two modalities also had 5 unique spatial maps not shared with the other. The voxels were z-score normalized before being fed into the subsequent FLICA analysis.

Performance evaluation: When FLICA was applied to the simulated data, the number of independent components was always set to the ground truth. Performance was measured by the similarity between the estimated subject-mode matrix $\hat{H}$ and the ground truth $H$: components were greedily matched on maximum correlation, and the mean correlation across matched components was reported.

Evaluation of mMIGP for subject-space dimension reduction: After generating the simulated data, we reduced them to varying dimensions using mMIGP, and then fed the reduced data into FLICA; this was compared with the original FLICA. The number of ground-truth components was set to 25, 35 or 45, and the SNR to 4, 1, 0.25 or 0.04. All simulations were repeated 50 times. 
Evaluation of DicL for voxel-space dimension reduction: To evaluate the influence of the DicL parameters on the subsequent FLICA results, we performed DicL on the simulated data over a range of parameter combinations ($\lambda$ from 0.1 to 16, dictionary dimension from 100 to 3000), followed by FLICA with the number of components set to the ground truth; this was compared with the original FLICA. The SNR was set to 4, 1, 0.25 or 0.04, and the number of DicL iterations to 50, because we empirically find this number of iterations sufficient for DicL to converge to a stable result in both simulated and real data. All simulations were repeated 50 times.
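A minimal NumPy sketch of this generative process and the greedy matching score follows. The image side length, weight range for $W$, noise level and function names are illustrative choices (and the spatial smoothing step is omitted for brevity), not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_ic, side = 500, 25, 64               # image side length illustrative
n_vox = side * side

# Ground-truth maps: weight-0.05 white noise plus a weight-0.95 random square
X = np.zeros((n_ic, n_vox))
for i in range(n_ic):
    img = 0.05 * rng.standard_normal((side, side))
    r, c = rng.integers(0, side - 10, size=2)
    img[r:r + 10, c:c + 10] += 0.95
    X[i] = img.ravel()

W = np.diag(rng.uniform(0.5, 1.5, n_ic))      # random positive component weights
H = rng.standard_normal((n_sub, n_ic))        # Gaussian subject loadings
sigma = 0.5                                   # controls the SNR
Y = H @ W @ X + sigma * rng.standard_normal((n_sub, n_vox))

def match_score(H_est, H_true):
    """Greedily match components on max |correlation|; return mean matched r."""
    L = H_est.shape[1]
    C = np.abs(np.corrcoef(H_est.T, H_true.T)[:L, L:])
    scores = []
    for _ in range(L):
        i, j = np.unravel_index(np.argmax(C), C.shape)
        scores.append(C[i, j])
        C[i, :] = -1.0                        # remove matched row/column
        C[:, j] = -1.0
    return float(np.mean(scores))
```

`match_score(H_hat, H)` returns approximately 1 for a perfect estimate and decays towards chance-level correlation as estimation degrades.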

HCP and UK Biobank data

The voxel/vertex-wise neuroimaging data of 81 different modalities from 1003 subjects in the HCP S1200 data release were used in this paper (Van Essen et al., 2013). The preprocessing was conducted by the HCP team using an optimized pipeline (Glasser et al., 2013). The 81 modalities comprised: (1) 25 resting-state ICA dual-regression spatial maps (z-score normalized); (2) 47 unique task contrast maps as z-statistics from 7 different fMRI tasks; (3) 3 T1-image derived modalities (grey matter volume, surface area, surface thickness); (4) 6 Tract-Based Spatial Statistics (TBSS) features from diffusion MRI (FA, L1, L2, L3, MD, MO) (Smith et al., 2006). In addition, 158 nIDPs were used, the same set as in our previous study (Smith et al., 2015); names of the nIDPs are in Supplementary File 1. The UK Biobank imaging data were mainly preprocessed by FSL (Smith et al., 2004; Jenkinson et al., 2012) and FreeSurfer (Fischl, 2012) following an optimized pipeline (Alfaro-Almagro et al., 2018) (https://www.fmrib.ox.ac.uk/ukbiobank/). 
The voxel-wise neuroimaging data of 47 modalities from 14,053 subjects were used in this paper, including: (1) 25 “modalities” from the resting-state fMRI ICA dual-regression spatial maps (z-score normalized); (2) 6 modalities from the emotion task fMRI: 3 contrasts (shapes, faces, faces>shapes) as z-statistics and the same 3 contrasts as parameter estimate maps; (3) 10 diffusion MRI derived modalities (9 TBSS features, namely FA, MD, MO, L1, L2, L3, OD, ICVF and ISOVF (Smith et al., 2006; Zhang et al., 2012), and a summed tractography map of 27 tracts from AutoPtx in FSL); (4) 4 T1-MRI derived modalities (grey matter volume and the Jacobian map (which shows expansion/contraction generated by the nonlinear warp to standard space, and hence reflects local volume) in volumetric space, plus cortical area and thickness in FreeSurfer’s fsaverage surface space); (5) 1 susceptibility-weighted MRI map (T2-star); (6) 1 T2-FLAIR MRI derived modality (white matter hyperintensity map estimated by BIANCA; Griffanti et al., 2016). A detailed description is in Table A.6. In addition, 8787 nIDPs were considered, of which we retained the 7245 having at least 1000 non-missing values (subjects); names of the nIDPs are in the Supplementary Files, as are the group-level resting-state independent component spatial maps and task activation z-statistic maps. When carrying out nIDP prediction, a total of 13 and 54 confounding variables were regressed out of the nIDPs using linear regression in the HCP and UKB datasets respectively (Supplementary Materials). For subjects with a missing modality, values were imputed as the mean over all other subjects. We did not impute missing nIDPs.

Comparing BigFLICA with the original FLICA on real data

On real data we do not know the ground-truth components, and the data may not follow the assumptions of ICA. We therefore rely on the accuracy of predicting nIDPs as a surrogate criterion for evaluating the different methods. We applied the proposed mMIGP approach to the HCP data and to a subset of 1036 UKB subjects (so that the original FLICA remains computationally tractable). Elastic-net regression, from the glmnet package (Zou and Hastie, 2005), was used to predict the nIDPs using FLICA's subject modes as model regressors (features); this approach is widely used and has been shown to achieve robust, state-of-the-art performance in many neuroimaging studies (Cui and Gong, 2018; Jollans et al., 2019). To evaluate model performance, for each nIDP we used 5-fold cross-validation, quantifying prediction accuracy as the Pearson correlation between the predicted and true values of the nIDP across the 5 test sets. As the elastic net has tuning parameters, within each training set we performed a nested 5-fold cross-validation to tune them, and used the best model selected there for prediction in the test set. When comparing any two approaches, the same training-validation-test split was used. To evaluate mMIGP preprocessing, we first reduced the dimension to varying values of $M$ (from 100 to 500) using mMIGP and then used FLICA to extract 50 components; the original FLICA was also applied to extract 50 components. To evaluate DicL preprocessing, we used DicL (dictionary dimension $J$ = 2000) to reduce the data dimension of each modality, followed by FLICA extracting varying numbers of components (nIC); the original FLICA was also applied to extract the same numbers of components. 
In each case the prediction accuracy of BigFLICA was compared with that of the original FLICA applied to the non-reduced data.
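The nested cross-validation scheme can be sketched with scikit-learn in place of glmnet (the function name, parameter grid and toy data below are ours, not the paper's settings):

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def predict_nidp(X, y, seed=0):
    """Outer 5-fold CV produces test-set predictions; an inner 5-fold CV
    (via GridSearchCV) tunes the elastic-net penalties on training data only."""
    y_pred = np.empty_like(y, dtype=float)
    for tr, te in KFold(5, shuffle=True, random_state=seed).split(X):
        grid = GridSearchCV(
            make_pipeline(StandardScaler(), ElasticNet(max_iter=5000)),
            param_grid={"elasticnet__alpha": [0.01, 0.1, 1.0],
                        "elasticnet__l1_ratio": [0.1, 0.5, 0.9]},
            cv=KFold(5, shuffle=True, random_state=seed))
        grid.fit(X[tr], y[tr])
        y_pred[te] = grid.predict(X[te])
    # accuracy = Pearson correlation between predicted and true values
    return float(np.corrcoef(y_pred, y)[0, 1])

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))                     # subject modes as features
y = X @ rng.standard_normal(20) + 0.5 * rng.standard_normal(200)
r = predict_nidp(X, y)
```

Because the hyper-parameters are chosen inside each training fold, the outer-fold correlation is an unbiased estimate of out-of-sample accuracy.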

Statistical significance of difference of prediction accuracy between two approaches

To compare the overall prediction accuracy of two approaches (e.g., BigFLICA with mMIGP preprocessing vs. the original FLICA), we estimate the statistical significance of the difference between the prediction correlations across nIDPs. Starting from the full set of nIDPs, we first exclude those for which both methods have low prediction accuracy (as these would likely just add noise to the comparison), leaving $n$ nIDPs. We then test whether the overall prediction accuracy across these $n$ nIDPs is significantly higher for one method than for the other. A naive approach would be a simple paired t-test, but the correlation structure among nIDPs makes the samples mutually dependent, so the p-value of a t-test is not valid. Note that the paired t-test can also be formulated as a linear regression model, and within the linear regression framework sample correlation can be taken into account by generalized least squares. Specifically, we assume

$$d = \beta \mathbf{1} + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2 \Sigma),$$

where $d$ is the vector of differences in prediction accuracy between the two methods across the $n$ nIDPs and $\mathbf{1}$ is a column of ones. The t-statistic for the coefficient $\beta$ can be calculated as (Kariya and Kurata, 2004)

$$\hat{\beta} = \frac{\mathbf{1}^{\top} \Sigma^{-1} d}{\mathbf{1}^{\top} \Sigma^{-1} \mathbf{1}}, \qquad t = \frac{\hat{\beta}}{\sqrt{\hat{\sigma}^2 \,(\mathbf{1}^{\top} \Sigma^{-1} \mathbf{1})^{-1}}},$$

where $\Sigma$ is the sample covariance matrix among nIDPs and $\hat{\sigma}^2$ is the GLS residual variance estimate. Note that when $\Sigma = I$ the model is equivalent to a paired t-test. In general we do not know $\Sigma$, but a good estimate can be obtained by calculating the covariance of the nIDPs from the nIDPs-by-subjects matrix. We used the lscov function in Matlab to perform this estimation.
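A NumPy equivalent of this Matlab lscov computation can be written in a few lines (the helper name is ours; this is a sketch, not the authors' code):

```python
import numpy as np
from scipy.stats import ttest_1samp

def gls_paired_t(d, Sigma):
    """GLS t-statistic for mean(d) != 0 when the entries of d are correlated:
    d = beta * 1 + eps, eps ~ N(0, sigma^2 * Sigma).
    With Sigma = I this reduces to the ordinary one-sample/paired t-test."""
    n = len(d)
    one = np.ones(n)
    Si = np.linalg.pinv(Sigma)               # pseudo-inverse for stability
    info = one @ Si @ one                    # 1' Sigma^-1 1
    beta = (one @ Si @ d) / info             # GLS estimate of the mean difference
    resid = d - beta * one
    sigma2 = (resid @ Si @ resid) / (n - 1)  # residual variance estimate
    return beta / np.sqrt(sigma2 / info)

rng = np.random.default_rng(1)
d = rng.normal(0.2, 1.0, size=50)            # toy accuracy differences
t_gls = gls_paired_t(d, np.eye(50))          # identity covariance case
t_ref = ttest_1samp(d, 0.0).statistic        # matches the ordinary t-test
```

Plugging in the estimated nIDP covariance instead of the identity matrix accounts for the dependence between nIDPs when computing the p-value.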

Parameter settings of running BigFLICA in the full HCP and UKB datasets

We applied the BigFLICA approach to extract a varying number of target components in the two datasets. In HCP, we used FLICA with DicL preprocessing only (dictionary dimension 2000). In UKB, we used FLICA with both mMIGP and DicL preprocessing (dictionary dimension 5000, mMIGP dimension 1000, corresponding to 95% explained variance). The number of FLICA VB iterations was set to 1000.

Comparing BigFLICA with multiple independent single-modality ICA decomposition

ICA is a widely-used approach for decomposing single-modality neuroimaging data, not only functional MRI (Smith et al., 2015) but also structural MRI (Zeighami et al., 2015) and diffusion MRI (Li et al., 2012). A natural question is whether BigFLICA is able to combine multimodal information more effectively than single-modality approaches such as ICA (we used the fastICA algorithm (Hyvarinen, 1999)), which ignore inter-modality relationships. We first performed ICA on each modality of the HCP and UKB data separately to extract 25, 100, 250, 500 and 750 components. For a given component number, we built a prediction model using the concatenated ICA subject modes (across modalities) to predict each of the nIDPs. For a fair comparison, for BigFLICA we extracted the same number of ICs to build the prediction model. For example, in the UKB data with a 25-dimensional decomposition, the predictor is a Subject × (25 × 47) matrix for single-modality ICA, where 25 is the number of components in each modality and 47 is the total number of modalities, while for BigFLICA the predictor is a Subject × 25 matrix. This is arguably a fair comparison because each of the BigFLICA modes potentially contains information from all modalities. The method for building and evaluating the predictive model is the same as above, except that when we used the concatenated ICA subject modes, we added a univariate screening step in the training set to select the top 300 most informative features according to their correlation with an nIDP in the training set. This step generally boosts predictive accuracy because the dimensionality of the concatenated ICA modes is usually very high, so that many of the modes are pure noise with respect to any given nIDP; univariate screening therefore helps the elastic-net regression to filter out noisy features effectively. We did not perform univariate screening when using the BigFLICA subject modes to predict nIDPs.
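The univariate screening step can be sketched as follows; the function name is our own, assuming features in the columns of a subjects-by-features matrix:

```python
import numpy as np

def screen_top_k(X_train, y_train, k=300):
    """Univariate screening: rank features by the magnitude of their
    Pearson correlation with the nIDP, computed on the training fold
    only (to avoid leaking test information), and keep the top k."""
    Xc = X_train - X_train.mean(axis=0)
    yc = y_train - y_train.mean()
    # per-feature correlation with y: (X_c' y_c) / (||x_j|| ||y||)
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    keep = np.argsort(-np.abs(r))[:min(k, X_train.shape[1])]
    return keep
```

The returned column indices would then be used to subset both the training and test folds before fitting the elastic net.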

Comparing BigFLICA with hand-curated imaging-derived phenotypes

A popular data analysis strategy is to extract imaging features based on expert knowledge (e.g., regional volumes and thicknesses, and resting-state functional connectivities between brain regions), often referred to as IDPs (Miller et al., 2016). Brain IDPs have been shown to genetically correlate with many SNPs in our previous genome-wide association study (GWAS) in UK Biobank (Elliott et al., 2018), and they have been shown to change in many psychiatric diseases (Kelly et al., 2017; Van Rooij et al., 2017; Hibar et al., 2018). We extracted 5812 IDPs from the HCP, including (1) 199 structural MRI features from Freesurfer as provided by the HCP; (2) 4700 regional mean task activations from 47 independent task contrasts using a 100-dimensional parcellation atlas (Schaefer et al., 2017); (3) 625 functional connectivities (FCs) based on a 25-dimensional ICA parcellation, with partial correlation used to estimate the FCs; and (4) 288 regional mean TBSS features (FA, L1, L2, L3, MD, MO) using the Johns Hopkins University tract atlas. The names of these IDPs are given in Supplementary File 3. We used 3913 IDPs from UKB, including global and local features from the 6 imaging modalities (T1, T2-FLAIR, swMRI, tfMRI, rfMRI, and dMRI) (Smith et al., 2020). The names of these IDPs are given in Supplementary File 4. We built prediction models that use IDPs or BigFLICA modes to predict each of the nIDPs using the same strategy as above. The FLICA dimension was set to 25, 100, 250, 500 or 750. In addition, we also concatenated the IDPs and each of the BigFLICA subject modes together to predict the nIDPs, and compared the performance with using IDPs alone.
We used a univariate screening step to select the top 300 (HCP) / 500 (UKB) most informative IDPs according to their correlation with an nIDP in the inner fold (i.e., training set). Finally, we also built models that use IDPs to predict each of the FLICA subject modes and vice versa, aiming to evaluate the shared variance between the features extracted by these two different approaches from the same data.

Results

We first applied BigFLICA to simulated data to evaluate the performance of mMIGP and DicL as data preprocessing approaches under different parameter settings and data signal-to-noise ratios. The mean correlation of extracted components with the simulated ground truth was compared with the corresponding result from the original FLICA (Methods Section 2.5). For mMIGP, Fig. 2a shows that, in most situations, BigFLICA with mMIGP preprocessing gave similar results to the original FLICA, and both accurately recover the underlying ground truth in most cases. This is in agreement with the simulations in the MIGP paper (Smith et al., 2014), which showed that MIGP can accurately approximate a full-data PCA in different situations. The optimal dimension of mMIGP differs among simulations; sometimes a relatively low dimension achieves an accurate estimation of components (e.g., Fig. 2a, first three columns), while in other cases a high dimension is needed (e.g., Fig. 2a, fourth column).
Fig. 2

Evaluation of multimodal extension of MIGP (mMIGP) and dictionary learning (DicL) as the data preprocessing steps for the FLICA using simulations. BigFLICA achieves similar performance as compared with original FLICA that uses the full data. a, Evaluation of mMIGP preprocessing. We compared the correlations (Z-transformed) of extracted components with ground truth across 50 simulations using the original FLICA (the left column of each figure) and the mMIGP preprocessed FLICA (other columns). The mMIGP dimensions vary between 50 and 400; the SNRs are between 4 and 0.04 (left to right), and the number of FLICA and ground truth components are 25, 35, 45 (top to bottom). As there are 500 subjects, the reduction factor is from 10 to 1.25. b, Evaluation of DicL preprocessing. We compared the correlations of extracted components with ground truth using the original FLICA (FLICA results given in the titles of each figure) and the DicL preprocessed FLICA with different sparsity parameters and dictionary dimensions (cells of the heatmaps). The SNRs are between 4 and 0.04 (left to right), and the number of FLICA and ground truth components are 25, 50, 100 (top to bottom). As there are 27,000 original features per modality, the reduction factor is from 270 to 9.

For DicL, Fig. 2b shows that in almost all circumstances: (1) increasing the dictionary dimension boosts the performance of the subsequent FLICA analysis; (2) the optimal sparsity parameters are usually no larger than 2, and values in this range have similar performance; (3) in most cases the optimal performance given by DicL matches that of the non-reduced analysis (noted in the figure legends). Therefore, in the real data analysis, when using the DicL approach, we always use a very high-dimensional DicL decomposition with a fixed sparsity parameter.

Computation time comparison

Table 1 shows the comparison of the computation time and memory requirements of BigFLICA and the original FLICA on the UKB dataset. All code was implemented in Python 2.7, and both BigFLICA and FLICA were run using 24 cores on a single compute node with an Intel Xeon E7-8857 v2 @ 3.00 GHz CPU and 2048 GB RAM. The computation time includes: (1) preprocessing of the data using mMIGP and DicL (BigFLICA only); (2) initialization of FLICA parameters; (3) FLICA VB parameter updates. For the 100,000-subject data, BigFLICA greatly decreases the computation time and memory usage from an unrealistic amount to a modest configuration for a modern HPC cluster, which therefore makes data-driven population phenotype discovery possible.
Table 1

Comparison of computation time and amount of RAM usage of BigFLICA with the original FLICA in the UKB dataset (14,503 subjects, 47 different modalities). BigFLICA greatly increases computational efficiency in different settings. Both BigFLICA and FLICA were run on the same computer using all 24 cores in all computation stages with Intel Xeon CPU E7-8857 v2 @ 3.00 GHz and 2 TB RAM.

| Metric | Approach | nIC = 25 | nIC = 100 | nIC = 250 | nIC = 500 | nIC = 750 | 100 K subjects, 750 components (estimated) |
|---|---|---|---|---|---|---|---|
| Computation time (h) | The original FLICA | 160 | 300 | 580 | 1020 | 1680 | 12,000 |
| | BigFLICA (mMIGP preprocessing) | 23 | 54 | 135 | 315 | 565 | 630 |
| | BigFLICA (mMIGP + DicL preprocessing) | 52 | 53 | 58 | 65 | 73 | 120 |
| Peak RAM (GB) | The original FLICA | 801 | 821 | 879 | 963 | 1066 | 6000 |
| | BigFLICA (mMIGP preprocessing) | 66 | 88 | 136 | 215 | 297 | 297 |
| | BigFLICA (mMIGP + DicL preprocessing) | 50 | 50 | 50 | 50 | 50 | 50 |

Real data: comparing BigFLICA with the original FLICA based on the prediction accuracy of nIDPs

As there is no ground truth available, we tested whether the modes of BigFLICA have similar prediction accuracy for nIDPs compared with the original FLICA, using data from the HCP and a subset of 1036 subjects from the UKB (Methods Section 2.7). Elastic-net regression with nested 5-fold cross-validation was used to predict each of the nIDPs; this approach is widely used and has been shown to achieve robust, state-of-the-art performance in many neuroimaging studies (Cui and Gong, 2018; Jollans et al., 2019). The Pearson correlation between the predicted and true values of each nIDP in the outer test folds is used to quantify accuracy. The statistical significance of differences in prediction accuracy between two approaches is estimated by the weighted paired t-test approach (Methods Section 2.8). Fig. 3 shows Bland–Altman plots comparing the prediction accuracy of nIDPs between the original FLICA and BigFLICA with mMIGP preprocessing only (Fig. 3a), with DicL preprocessing only (Fig. 3b), and with both data reduction approaches (Fig. 3c), in the UKB and HCP datasets. In these comparisons, mMIGP reduced the data to approximately 1/10 to 1/2 of the original data size, and DicL reduced the data to approximately 1/75 of the original data size. Overall, BigFLICA estimates similar sets of modes with comparable prediction accuracy on real multimodal neuroimaging data, i.e., the difference in correlation between the two methods is centered around zero across a wide range of mean correlation values (also reflected in the non-significant p-values of the weighted paired t-test), which demonstrates that the mMIGP and DicL approaches effectively reduce the data while preserving its key information.
Fig. 3

Comparison of prediction accuracy of nIDPs between BigFLICA and the original FLICA. Overall, for most of the comparisons, the differences in prediction accuracy are not significant. In each of the Bland–Altman plots, each point represents the prediction of one nIDP, where the x-axis is the average prediction correlation of the two approaches and the y-axis is the difference, i.e., BigFLICA - FLICA. The z- and p-values in the titles reflect the statistical significance of the differences. The Bonferroni-corrected 0.05 threshold corresponds to a raw p-value of 1.7e-3. a, Comparing FLICA with mMIGP preprocessing against the original FLICA. We used a subset of 1036 subjects in the UKB dataset (top) and the HCP (bottom). The number of estimated FLICA components is set to 50, and mMIGP dimensions are set from 100 to 500. b, Comparing FLICA with DicL preprocessing against the original FLICA. We used a subset of 1036 subjects in the UKB dataset (top) and the HCP (bottom). The dictionary dimension is set to a high value of 2000, and the sparsity parameter is fixed for all modalities. The numbers of estimated FLICA components are set from 25 to 300. c, Comparing FLICA with both mMIGP and DicL preprocessing combined against the original FLICA. The mMIGP dimension is set to 500, and other settings are the same as in b. We use only a subset of UKB here so that running the original FLICA is computationally feasible. The lighter the blue, the higher the density of points.

Finally, we tested whether the set of nIDPs that were predicted better with BigFLICA than with the original FLICA is relatively consistent across different nIC. We calculated the differences in prediction accuracy between BigFLICA and the original FLICA, and then tested whether these differences are correlated across different nICs (Fig. A.6). We did not observe a significant correlation.
This might mean that the nIDP-related information in the extracted imaging features differs across scales (ICA dimensionalities), but as the space spanned by the imaging features grows with increasing dimensionality, it seems more likely that the reduction in prediction of some nIDPs with increasing ICA dimensionality is a result of overfitting. We also compared BigFLICA outputs against features pooled across separate ICA decompositions of each modality (Methods Section 2.10). Fig. 4a shows that BigFLICA has worse prediction performance than running ICA separately on each modality when the dimensionality is low. This is because, at low dimensionality, single-modality ICA is more efficient: the degrees-of-freedom constraints implied by the FLICA model are insufficient to capture the important data variation in joint components. However, when the number of components becomes large, the prediction accuracy becomes better than that of single-modality ICA (e.g., in UKB). This is because, at high dimensionality, BigFLICA effectively combines multimodal information by considering cross-modal correlation in the data decomposition stage. Although cross-modal correlation is considered in the final prediction stage when using single-modality ICA, the fact that BigFLICA identifies and takes advantage of correlated information between modalities at an earlier stage, during feature generation, helps improve the prediction performance.
Fig. 4

Comparison of prediction accuracy of nIDPs for BigFLICA against single-modality ICA and the IDPs. Overall, for high-dimensional BigFLICA decompositions in the UKB dataset, BigFLICA achieved statistically significant increases in prediction accuracy of nIDPs compared with single-modality ICA and IDPs. Combining BigFLICA and IDPs together further improves prediction compared with IDPs alone. In each of the Bland–Altman plots, each point represents the prediction of an nIDP, where the x-axis is the average prediction correlation of the two approaches and the y-axis is the difference. The z- and p-values in the titles reflect the statistical significance of the differences. The Bonferroni-corrected 0.05 threshold corresponds to a raw p-value of 1.7e-3. a, Comparing BigFLICA with the concatenation of single-modality ICA outputs. Top: UKB; Bottom: HCP. The number of FLICA components is set from 25 to 750. b, Comparing BigFLICA with IDPs. Top: UKB; Bottom: HCP. The number of IDPs is 3913 in UKB and 5812 in the HCP. c, Comparing the concatenation of BigFLICA and IDPs against IDPs only. Top: UKB; Bottom: HCP. The lighter the blue, the higher the density of points.

In Fig. A.7, we also compared, in the UKB data, the 750-dimensional BigFLICA decomposition with the 25-dimensional ICA decompositions concatenated across modalities, i.e., 25 × 47 = 1175 features for the single-modality ICA. In this comparison, the numbers of features for the two methods are almost the same, but BigFLICA clearly outperforms the single-modality ICA.

Comparing BigFLICA with hand-curated imaging-derived phenotypes

We compared the predictive performance of BigFLICA with that of IDPs in both the HCP and UKB datasets (Methods Section 2.11). Fig. 4b shows that, in the UKB data, when the number of modes is low, BigFLICA has worse predictive power than the joint performance of the 3913 IDPs, for the same insufficient degrees-of-freedom reason as above. However, when the dimensionality becomes higher, BigFLICA clearly outperforms the IDPs, owing to jointly fusing multimodal voxelwise data by considering cross-modality correlation. In the HCP data, the performance is overall similar. These results indicate that BigFLICA can potentially explain more phenotypic and behavioural variance than IDPs. In more detail, Table A.2 shows that, in the UKB dataset, the high-dimensional BigFLICA (nIC = 750) has improved prediction accuracy, compared with IDPs, for many nIDPs relating to cognitive phenotypes and health outcomes. These tables do not include nIDPs where both methods have low predictive power. In the HCP dataset (Table A.3), BigFLICA (nIC = 100) also shows improved prediction accuracy for many cognitive and health outcome variables compared with using IDPs. Further, when we concatenated the modes of BigFLICA and the IDPs together to predict nIDPs, as shown in Fig. 4c, the combined feature set gave a significant improvement in prediction accuracy over the IDPs alone in the UKB data; there was almost no difference for the same comparison in the HCP data. This suggests that BigFLICA and IDPs may contain complementary information about nIDPs. To investigate the relationships between BigFLICA and IDPs further, we built prediction models that used modes of BigFLICA to predict each of the IDPs, to further characterise information overlap and complementarity between the two approaches. As shown in Fig.
A.8a and b, different types of IDPs are predicted with differing accuracy, and the resting-state functional connectivities always had the worst accuracy in both the HCP and the UKB datasets, because they are (relatively) noisy. However, when using BigFLICA modes to predict 6 summary features of the connectivity matrices (derived by applying ICA to the matrix of subjects by network-matrix edges) (Elliott et al., 2018), the accuracy is very high (correlations range from 0.85 to 0.89 for a 100-dimensional BigFLICA decomposition). In addition, when we used IDPs to predict modes of BigFLICA, as shown in Fig. A.8c and d, the prediction correlations showed an almost bimodal distribution, meaning that some of the FLICA modes can be predicted well by the IDPs while others cannot. These results further demonstrate that BigFLICA and IDPs span significantly complementary variance.

Examples of BigFLICA modes in the 14k UKB dataset

We now give four examples of significant associations between BigFLICA modes and nIDPs, namely Fluid intelligence, Age started wearing glasses or contact lenses, Handedness and hypertension. In Fig. 5, for each FLICA mode that correlates with a given nIDP, we show the four most strongly associated modalities. Fig. A.16 shows the population cross-subject mean maps for several task and rest fMRI modalities fed into FLICA. This gives interpretive context for the FLICA mode maps, which depict subject variability in activity/connectivity relative to these group mean maps.
Fig. 5

Examples of BigFLICA modes in the 14k UKB dataset. For each subfigure, each row shows one IC (BigFLICA mode, or independent component) with its top 4 most strongly associated modalities. a, Two BigFLICA modes (IC25, IC57) that significantly correlate with fluid intelligence. b, Two BigFLICA modes (IC164, IC13) that significantly correlate with Age started wearing glasses or contact lenses. c, Three BigFLICA modes (IC235, IC569, IC232) that significantly correlate with handedness. d, Three BigFLICA modes (IC259, IC13, IC319) that significantly correlate with hypertension. The Bonferroni-corrected 0.05 threshold corresponds to an uncorrected p-value threshold corrected for the number of components (750) and the number of nIDPs (7245). All of the above correlations passed the Bonferroni threshold except for IC232.

For Fluid intelligence, using all modes (ICs) from the 750-dimensional BigFLICA decomposition as features (predictors) in multivariate elastic-net prediction achieved a significant cross-validated prediction correlation. When we correlated each of the BigFLICA modes and IDPs with the fluid intelligence score in the UKB, we found that several task-fMRI-related BigFLICA modes have the strongest associations (Fig. 5a). The first (IC 25) involves the task contrasts “faces” and “faces shapes”, and the second (IC 57) involves the contrasts “shapes” and “face” (see Table A.6 for the full list of these modalities). As the correlation of mode IC 25 (i.e., its subject-weights vector) with fluid intelligence is negative, the negative-weight voxels (such as in the anterior insula) are positively correlated with intelligence. The fMRI task (Hariri faces-shapes matching; Hariri et al., 2002) has, as expected, the greatest population-average activation in sensory-motor areas (plus some amygdala involvement due to the emotionally negative nature of the faces), as seen in Fig. A.16.
However, the main brain areas involved in these modes are distinct, including anterior cingulate cortex, frontal pole, inferior frontal gyrus, and anterior insula; it is therefore interesting that the areas found by BigFLICA to be modulated in these components (and found to associate with intelligence) are more “frontal, cognitive” areas than the sensory-motor areas primarily activated on average. The top associations between fluid intelligence and IDPs also involve task-fMRI IDPs (Table A.4), but these were a factor of two weaker than the associations with BigFLICA modes. For Age started wearing glasses or contact lenses, BigFLICA also achieved a significant prediction correlation. Several resting-state connectivity and task modalities showed associations in primary visual areas (Fig. 5b), consistent with this being a vision-related health variable. A lower age of first wearing glasses is correlated with stronger activity in primary visual areas, and also with the strength of resting-fMRI connectivity (or functional coherence) within the relevant areas of group-average connectivity; interestingly, in nearby distinct (but still primary visual) areas, there is a reduction of correlation (blue voxels), suggesting greater differentiation of primary visual areas. For Handedness, BigFLICA likewise achieved a significant prediction correlation, identifying several multimodal, lateralized (or laterally asymmetric) modes, involving resting-state mode 14 (left-lateralized language network), task, surface area and white matter tract modalities (Fig. 5c). Several resting-state connectivity-related IDPs correlated with handedness (Table A.4), consistent with a recent study (Wiberg et al., 2019) that also used UKB IDPs, while no IDPs relating to other modalities were found significant; the maximum IDP correlation was only about half the strongest association with BigFLICA modes. For the health variable hypertension (Fig.
5d), BigFLICA also achieved a significant prediction correlation. Several TBSS-related modalities showed consistent associations in the external capsule tracts. Meanwhile, white matter hyperintensity (T2 lesion volume) in the corresponding areas is also higher in people with hypertension. Several consistent findings have been reported in the literature (Moon et al., 2005; Allen et al., 2016; Hannawi et al., 2018).

BigFLICA comparison with mCCA and reproducibility

We tested whether BigFLICA (independent-components-based spatial modelling) was better than mCCA (eigendecomposition-based modelling, which can be considered similar to the output of BigFLICA without running the final core FLICA unmixing; note that enabling mCCA to run requires the same mMIGP initial processing that we have added in this work) in three ways. The number of extracted components was matched when performing this comparison. First, for the prediction accuracy of nIDPs, Fig. A.9 shows that, in the UKB data, BigFLICA has a (very slightly) improved prediction accuracy compared with mCCA. Second, we hypothesized that modes of BigFLICA are more parsimonious features of nIDPs than mCCA modes; in other words, a smaller number of BigFLICA modes can predict the nIDPs. The results shown in Fig. A.10 support this hypothesis: for a given number of components and a given nIDP, BigFLICA modes have, on average, a higher proportion of zero weights in the elastic-net predictions than mCCA modes. The advantage is that a more parsimonious representation usually has better biological interpretability. Finally, we estimated and compared the split-half reproducibility of BigFLICA and mCCA. As shown in Fig. A.11 (right), BigFLICA has a much higher between-subject reproducibility than mCCA (median BigFLICA correlation greater than 0.9 in all cases, while many mCCA dimensionalities have median correlation less than 0.5).
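The parsimony index above (proportion of zero weights in the elastic-net fit) can be sketched by counting exactly-zero coefficients; this is an illustrative stand-in, and the penalty values here are assumptions rather than the tuned parameters used in the paper:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def zero_weight_fraction(X, y, alpha=0.5, l1_ratio=0.5):
    """Parsimony index for a feature set: the proportion of
    exactly-zero coefficients in a single elastic-net fit.  A higher
    fraction means fewer modes are needed to predict the nIDP."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10000)
    model.fit(X, y)
    return float(np.mean(model.coef_ == 0.0))
```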

Reproducibility of BigFLICA

To test whether BigFLICA’s spatial independent components are estimated reliably, the whole UKB dataset was divided into two parts: the first containing 7000 subjects and the second the remaining 7503 subjects. We applied BigFLICA to the two parts separately. After estimating the subject modes, we reconstructed the z-score-normalized (voxel-wise) spatial maps of each modality by regressing the subject modes against the mMIGP-reduced data. The spatial independent components of each modality were concatenated spatially and greedily paired based on the absolute correlation between the two runs. When computing the correlations, only voxels whose absolute z-scores were larger than 3 in both runs were retained (to reduce noise, given that in general there are huge numbers of empty voxels across all modalities for a given FLICA component; this does not bias the reproducibility metric towards finding common similar patterns). Fig. A.11 (left) shows that the FLICA components have very high reproducibility in the split-half test across a varying number of components. We further tested the reproducibility of BigFLICA’s prediction accuracy for nIDPs. We ran BigFLICA separately on the two halves of the dataset as above, and then predicted each of the nIDPs using elastic-net regression in each half. Fig. A.12 shows the prediction accuracy of all nIDPs: the accuracies across the two halves are highly correlated, especially for nIDPs that have high prediction accuracy. Finally, we investigated the influence of sample size on reproducibility. We performed the same split-half reproducibility test on two random subsets of 1500 and 3500 subjects. Fig. A.13 shows that, across different numbers of components, BigFLICA has a reproducibility of 0.6 to 0.7, which, as expected, is lower than in the 7000-subject case.
We can also see a slight increase in reproducibility as the number of components increases. This different behaviour compared with the 7000-subject case may be because many of the components are empty when the number of components is larger than 250 (i.e., there is not enough data to support the ICA dimensionality), so the reproducibility index can only be computed from the non-empty components.
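The greedy pairing procedure described above (matching components across the two split-half runs by absolute spatial correlation over suprathreshold voxels) can be sketched as follows; the function name, component counts and synthetic maps are illustrative assumptions, not the paper's code:

```python
import numpy as np

def split_half_reproducibility(maps_a, maps_b, z_thresh=3.0):
    """Greedily pair components from two split-half runs by absolute
    spatial correlation, computed only over voxels with |z| > z_thresh
    in both runs. maps_a, maps_b: (n_components, n_voxels) z-scored maps.
    Returns the matched absolute correlations."""
    n = maps_a.shape[0]
    corr = np.full((n, n), -1.0)
    for i in range(n):
        for j in range(n):
            keep = (np.abs(maps_a[i]) > z_thresh) & (np.abs(maps_b[j]) > z_thresh)
            if keep.sum() > 2:
                corr[i, j] = abs(np.corrcoef(maps_a[i, keep], maps_b[j, keep])[0, 1])
    matched = []
    for _ in range(n):
        # best remaining pair; remove its row and column once matched
        i, j = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[i, j] < 0:
            break
        matched.append(corr[i, j])
        corr[i, :] = -1.0
        corr[:, j] = -1.0
    return np.array(matched)

# sanity check: a run paired against a shuffled copy of itself matches perfectly
rng = np.random.default_rng(0)
maps_a = 4.0 * rng.standard_normal((3, 500))
maps_b = maps_a[[2, 0, 1]]
print(split_half_reproducibility(maps_a, maps_b))
```

The suprathreshold mask is applied per pair, so empty or near-empty components simply never enter the matching, consistent with the noise-reduction rationale above.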

Stability of BigFLICA prediction

We evaluated the stability of BigFLICA's prediction accuracy against different train-test subject splits. We estimated the prediction correlation of each nIDP using 5 different train-test splits, and calculated the mean prediction correlation for each nIDP. We then computed the difference between this mean and the prediction correlation from one of the five random splits. As shown in Fig. A.14, the differences are centred around zero with extremely small spread, demonstrating the stability of BigFLICA against different random train-test splits. Among the 25 resting-state dual-regression spatial maps included in the analysis, four had originally been identified as non-neural components (Miller et al., 2016). Non-neural components likely reflect non-neuronal physiology and may therefore help prediction, particularly for nIDPs that relate to basic physiology (e.g., blood pressure). We therefore tested the prediction accuracy of nIDPs when these four non-neural modalities (rfMRI components) were excluded. Fig. A.15 shows that, compared with using all 47 modalities, prediction accuracy increased for some nIDPs in the 25-dimensional decomposition, was similar for the 100- and 250-dimensional decompositions, and decreased for the 500- and 750-dimensional decompositions. The nIDPs that are better predicted by inclusion of the four non-neural components at high dimensions are those relating to physical measures. Researchers may choose to include artefactual rfMRI components (e.g., where this helps maximise nIDP associations), or exclude them (e.g., to maximise interpretability of associations).
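The split-stability check amounts to comparing one split's prediction correlations against the per-nIDP mean over all splits. A minimal numpy sketch with purely illustrative random accuracies (the shapes mirror the analysis, the values do not come from the paper):

```python
import numpy as np

# hypothetical prediction correlations: one row per nIDP,
# one column per random train-test split (illustrative values only)
rng = np.random.default_rng(42)
acc = rng.uniform(0.0, 0.4, size=(1000, 5))

mean_acc = acc.mean(axis=1)    # mean prediction correlation per nIDP
diff = acc[:, 0] - mean_acc    # deviation of one split from the per-nIDP mean

# in Fig. A.14 these differences centre tightly around zero;
# here we just verify the computation's shape and centring
print(diff.shape)
print(abs(float(diff.mean())) < 0.05)
```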

Contribution of different modalities in a BigFLICA decomposition

Besides using BigFLICA for exploring the relationships between imaging and non-imaging phenotypic and behavioural data, we can also use it to investigate the relationships between different modalities. For each mode, BigFLICA estimates a vector of positive numbers reflecting the contributions of the different modalities (the higher the number, the more important that modality is to the mode). We concatenated these vectors across all modes into a modality-by-mode matrix and normalized each column (mode) to sum to one. Six examples of such matrices, with different numbers of estimated modes in the UKB dataset, are shown in Fig. A.17. We then calculated each row's sum (across modes), reflecting the overall contribution of each modality in the BigFLICA decomposition. As shown in Fig. A.18, across all FLICA dimensionalities (numbers of estimated modes), each of the 25 resting-state fMRI dual-regression spatial maps usually has a low overall contribution, followed by the task fMRI maps, while modalities reflecting more about the structure of the brain (e.g., structural MRI and diffusion MRI) generally have high overall contributions. The relative differences in contribution between functional MRI-related and structural/diffusion MRI-related modalities become larger with an increasing number of estimated modes. We further estimated the total shared variance between lower and higher dimensional BigFLICA decompositions. Table A.5 shows that a higher dimensional decomposition explains almost all the variance of a lower dimensional decomposition (upper triangle of the table), while a lower dimensional decomposition can explain a large proportion of the variance of a higher dimensional decomposition.
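The contribution summary can be sketched in a few lines of numpy; the matrix layout (rows index modalities, columns index modes) and the random values are assumptions for illustration, not the paper's estimates:

```python
import numpy as np

# hypothetical FLICA modality-weight matrix: 47 modalities (rows, as in UKB)
# by 250 modes (columns); entries are positive per-mode modality weights
rng = np.random.default_rng(1)
H = rng.random((47, 250))

H_norm = H / H.sum(axis=0, keepdims=True)  # each mode (column) sums to one
overall = H_norm.sum(axis=1)               # overall contribution per modality

print(overall.shape)  # (47,)
```

Because every column sums to one, the `overall` vector sums to the number of modes, so modality contributions are directly comparable across decompositions of different dimensionality after dividing by the mode count.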

Relationship between different modalities in a BigFLICA decomposition

We calculated the cosine similarity between the contribution vectors of different modalities (using the 750-dimensional BigFLICA decomposition), to measure the similarity of different modalities in terms of their contribution to the decomposition, i.e., the more similar the information two modalities carry, the more likely they are to contribute similarly to a mode. Fig. A.19a shows that the modality relationship matrix clearly groups into three large clusters: the first contains all resting-state modalities, the second the task fMRI maps, and the third the diffusion MRI, structural MRI-related modalities and swMRI. The white matter hyper-intensity map (T2 lesions) forms a cluster of its own. As a comparison, we also performed a 50-dimensional ICA decomposition within each modality, and calculated the shared variance between every pair of 50 ICs in two modalities using a simple multivariate regression model. As shown in Fig. A.19b, we observed a pattern similar to Fig. A.19a. The main difference is that Fig. A.19a shows relatively stronger correlations within resting-state modalities and between resting-state and other modalities, but weaker correlations between task modalities and structure-related modalities. These results reflect the fact that BigFLICA's multimodal modelling learns different inter-modality relationships than single-modality ICA.
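A sketch of the modality-similarity computation, assuming each modality's contribution profile is a row vector across modes (the function name, shapes and random values are illustrative):

```python
import numpy as np

def modality_similarity(H):
    """Pairwise cosine similarity between the rows of H, where each row
    is one modality's contribution profile across all modes."""
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    U = H / np.clip(norms, 1e-12, None)  # unit-normalise each row
    return U @ U.T

# hypothetical 47 modalities by 750 modes, as in the UKB analysis
rng = np.random.default_rng(2)
H = rng.random((47, 750))
S = modality_similarity(H)
```

The resulting 47-by-47 matrix is symmetric with unit diagonal, and can then be reordered or clustered (as in Fig. A.19a) to reveal modality groupings.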

Discussion

In this paper, we presented BigFLICA, a multimodal data-fusion approach that is scalable and tuneable enough to analyze the full UK Biobank neuroimaging dataset, and other large-scale multimodal imaging studies. To the best of our knowledge, this is the first approach for data-driven (unsupervised) multimodal analysis in a brain imaging dataset of this size and complexity. Building on top of the powerful FLICA model, we proposed a two-stage dimension-reduction approach that combines an incremental group-PCA (mMIGP) with dictionary learning (DicL) to effectively preprocess the multimodal dataset and reduce the computational load of the final FLICA, while maintaining or even improving performance, with as much as a 150-fold “intelligent” reduction in data size. We provide effective ways of choosing the hyper-parameters of BigFLICA, so that it is free of tuning except for choosing the final number of estimated components. Although this approach is motivated by the need to analyze extremely large neuroimaging datasets, it is also applicable to other kinds of data such as genetics and behavioural measures. An easy-to-use version of this software will be integrated into an upcoming version of the FSL software package (Smith et al., 2004; Jenkinson et al., 2012). BigFLICA results on UKB will also be released via the UKB database as new data-driven IDPs (image features), for further epidemiological and neuroscientific research.
A strength of our work is that, unlike previous work limited to more moderately sized datasets and a few phenotypic and behavioural variables (Calhoun et al., 2006; Liu et al., 2009; Beckmann and Smith, 2005; Groves et al., 2011; Sui et al., 2012), we used two of the largest high-quality multimodal neuroimaging datasets, together with thousands of phenotypic and behavioural variables, to validate the proposed approach. We demonstrated that BigFLICA is not only much faster than the original FLICA (and can be run on very large data that is simply not analysable with FLICA or other existing methods), but also estimates similar modes with comparable performance for predicting non-imaging-derived phenotypes in real data (when tested on a large data subset that is just small enough to allow comparison against FLICA). We provide quantitative insights into the advantages of data-driven multimodal fusion in big datasets (Calhoun and Sui, 2016; Uludağ and Roebroeck, 2014). First, when comparing BigFLICA with simpler IDP-based approaches (and also single-modality ICA approaches), we demonstrated that a high-dimensional BigFLICA has improved predictive power overall, demonstrating the value of multimodal fusion over analyzing each modality separately. Second, when combining high-dimensional BigFLICA-derived features with IDPs, the predictive power increased further compared with using either method alone. In addition, when we used BigFLICA-derived features and manually created (expert-knowledge) IDPs to predict each other, neither could predict the other perfectly (although both are derived from the same imaging data). This indicates that BigFLICA-derived features and IDPs are complementary, both therefore providing potentially important imaging biomarkers that capture different signal in the imaging data.
An interesting finding is that although a high-dimensional BigFLICA has much higher predictive power than a low-dimensional decomposition, the low-dimensional decomposition can still explain more than 80% of the total variance of the high-dimensional one. This suggests that some of the phenotypic and behavioural variables are explained by only small proportions of the variance of the imaging data. Third, in addition to the value of using BigFLICA-derived features for relating imaging to non-imaging data, BigFLICA components (particularly at lower dimensionalities) may allow us to learn more about how the different brain imaging modalities (and hence different spatial and biological aspects of the brain's structure and function) relate to each other. Finally, when new primary data become available from new subjects, new IDPs would need to be calculated (at the subject level) and combined with existing IDPs from previous subjects for a complete between-subject analysis. The approach presented here, while not avoiding any of the necessary new computations, will make them efficient. Further, note that it is alternatively possible to apply an existing decomposition to new subjects by projecting them onto the existing spatial bases, generating new subject weights (loadings) against the ICA features. We see opportunities to improve the current approach. First, BigFLICA is limited to linear feature estimation, while the "ideal, true" information in imaging data may be highly nonlinear. A nonlinear extension of BigFLICA, perhaps via kernel methods or deep neural networks, is therefore an important area of further research. Second, BigFLICA is an unsupervised dimension-reduction and feature-generation approach; integrating some supervision, i.e., a target variable (such as disease outcome), into the dimension reduction may boost the performance of the algorithm.
Additionally, because BigFLICA generates data-driven features, as opposed to expert-created IDPs, the biological or anatomical interpretation of features is often likely not to be immediately obvious, requiring potentially intensive expert study. Future work could attempt to automate this interpretation process, for example by relating features to existing anatomical templates and atlases, and even by mining imaging literature. Finally, BigFLICA, or extensions, may be an effective way of discovering imaging confound factors (Li et al., 2020) that cannot be found by traditional approaches.

Data availability

BigFLICA-derived features will be available from the UK Biobank database. For UK Biobank, all source data (including raw and processed brain imaging data, derived IDPs, and non-imaging measures) are available from UK Biobank via their standard data access procedure (see http://www.ukbiobank.ac.uk/register-apply). For HCP, data can be downloaded via the website (http://humanconnectome.org/data) and ConnectomeDB.

Code availability

BigFLICA code is available at https://github.com/weikanggong/BigFLICA, and will also be released as part of an upcoming version of FSL. Matlab software for performing prediction using elastic-net regression is available at https://github.com/vidaurre/NetsPredict.

CRediT authorship contribution statement

Weikang Gong: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing - original draft, Writing - review & editing. Christian F. Beckmann: Conceptualization, Methodology, Writing - original draft, Writing - review & editing, Supervision, Project administration, Funding acquisition. Stephen M. Smith: Conceptualization, Methodology, Writing - original draft, Writing - review & editing, Supervision, Project administration, Funding acquisition.

Declaration of Competing Interest

The authors declare that they have no conflict of interest.
References (48 in total; first 10 shown)

1.  The amygdala response to emotional stimuli: a comparison of faces and scenes.

Authors:  Ahmad R Hariri; Alessandro Tessitore; Venkata S Mattay; Francesco Fera; Daniel R Weinberger
Journal:  Neuroimage       Date:  2002-09       Impact factor: 6.556

2.  NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain.

Authors:  Hui Zhang; Torben Schneider; Claudia A Wheeler-Kingshott; Daniel C Alexander
Journal:  Neuroimage       Date:  2012-03-30       Impact factor: 6.556

3.  Benefits of multi-modal fusion analysis on a large-scale dataset: life-span patterns of inter-subject variability in cortical morphometry and white matter microstructure.

Authors:  Adrian R Groves; Stephen M Smith; Anders M Fjell; Christian K Tamnes; Kristine B Walhovd; Gwenaëlle Douaud; Mark W Woolrich; Lars T Westlye
Journal:  Neuroimage       Date:  2012-06-29       Impact factor: 6.556

4.  Tensorial extensions of independent component analysis for multisubject FMRI analysis.

Authors:  C F Beckmann; S M Smith
Journal:  Neuroimage       Date:  2005-01-08       Impact factor: 6.556

5.  Recursive Nearest Agglomeration (ReNA): Fast Clustering for Approximation of Structured Signals.

Authors:  Andres Hoyos-Idrobo; Gael Varoquaux; Jonas Kahn; Bertrand Thirion
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2018-03-13       Impact factor: 6.226

6.  Association of Heritable Cognitive Ability and Psychopathology With White Matter Properties in Children and Adolescents.

Authors:  Dag Alnæs; Tobias Kaufmann; Nhat Trung Doan; Aldo Córdova-Palomera; Yunpeng Wang; Francesco Bettella; Torgeir Moberget; Ole A Andreassen; Lars T Westlye
Journal:  JAMA Psychiatry       Date:  2018-03-01       Impact factor: 21.596

Review 7.  A review of multivariate methods for multimodal fusion of brain imaging data.

Authors:  Jing Sui; Tülay Adali; Qingbao Yu; Jiayu Chen; Vince D Calhoun
Journal:  J Neurosci Methods       Date:  2011-11-11       Impact factor: 2.390

8.  Combining fMRI and SNP data to investigate connections between brain function and genetics using parallel ICA.

Authors:  Jingyu Liu; Godfrey Pearlson; Andreas Windemuth; Gualberto Ruano; Nora I Perrone-Bizzozero; Vince Calhoun
Journal:  Hum Brain Mapp       Date:  2009-01       Impact factor: 5.038

9.  Resting brain dynamics at different timescales capture distinct aspects of human behavior.

Authors:  Raphaël Liégeois; Jingwei Li; Ru Kong; Csaba Orban; Dimitri Van De Ville; Tian Ge; Mert R Sabuncu; B T Thomas Yeo
Journal:  Nat Commun       Date:  2019-05-24       Impact factor: 14.919

10.  Brain aging comprises many modes of structural and functional change with distinct genetic and biophysical associations.

Authors:  Stephen M Smith; Lloyd T Elliott; Fidel Alfaro-Almagro; Paul McCarthy; Thomas E Nichols; Gwenaëlle Douaud; Karla L Miller
Journal:  Elife       Date:  2020-03-05       Impact factor: 8.140

