
fMRIPrep: a robust preprocessing pipeline for functional MRI.

Russell A Poldrack1, Krzysztof J Gorgolewski2, Oscar Esteban3, Christopher J Markiewicz1, Ross W Blair1, Craig A Moodie1, A Ilkay Isik4, Asier Erramuzpe5, James D Kent6, Mathias Goncalves7, Elizabeth DuPre8, Madeleine Snyder9, Hiroyuki Oya10, Satrajit S Ghosh7,11, Jessey Wright1, Joke Durnez1.   

Abstract

Preprocessing of functional magnetic resonance imaging (fMRI) involves numerous steps to clean and standardize the data before statistical analysis. Generally, researchers create ad hoc preprocessing workflows for each dataset, building upon a large inventory of available tools. The complexity of these workflows has snowballed with rapid advances in acquisition and processing. We introduce fMRIPrep, an analysis-agnostic tool that addresses the challenge of robust and reproducible preprocessing for fMRI data. fMRIPrep automatically adapts a best-in-breed workflow to the idiosyncrasies of virtually any dataset, ensuring high-quality preprocessing without manual intervention. By introducing visual assessment checkpoints into an iterative integration framework for software testing, we show that fMRIPrep robustly produces high-quality results on a diverse fMRI data collection. Additionally, fMRIPrep introduces less uncontrolled spatial smoothness than observed with commonly used preprocessing tools. fMRIPrep equips neuroscientists with an easy-to-use and transparent preprocessing workflow, which can help ensure the validity of inference and the interpretability of results.


Year: 2018 | PMID: 30532080 | PMCID: PMC6319393 | DOI: 10.1038/s41592-018-0235-4

Source DB: PubMed | Journal: Nat Methods | ISSN: 1548-7091 | Impact factor: 28.547


INTRODUCTION

Functional magnetic resonance imaging (fMRI) is a commonly used technique to map human brain activity[1]. However, the blood-oxygen-level-dependent (BOLD) signal measured by fMRI is typically mixed with non-neural sources of variability[2]. Preprocessing identifies these nuisance sources and reduces their effect on the data[3,4], and further addresses particular imaging artifacts and the anatomical localization of signals[5]. For instance, slice-timing correction (STC)[6], head-motion correction (HMC), and susceptibility distortion correction (SDC) address particular artifacts, while co-registration and spatial normalization are concerned with signal localization (Supplementary Note 1).

Extracting a signal that is most faithful to the underlying neural activity is crucial to ensure the validity of inference and the interpretability of results[7]. Thus, a primary goal of preprocessing is to reduce sources of false positive errors without inducing excessive false negative errors. An illustration of false positive errors familiar to most researchers is finding activation outside of the brain due to faulty spatial normalization. As a more practical example, Power et al. demonstrated that unaccounted-for head motion in resting-state fMRI generated systematic correlations that could be misinterpreted as functional connectivity[8]. Conversely, false negatives can result from a number of preprocessing failures, such as anatomical misregistration across individuals, which reduces statistical power.

Workflows for preprocessing fMRI produce two broad classes of outputs. First, preprocessed time-series derive from the original data after the application of retrospective signal corrections, temporal/spatial filtering, and resampling onto a target space appropriate for analysis (e.g. a standardized anatomical reference). Second, experimental confounds are additional time-series, such as physiological recordings and estimated noise sources, that are useful for analysis (e.g. to be modeled as nuisance regressors). Commonly used confounds include motion parameters, framewise displacement[9] (FD), the spatial standard deviation of the data after temporal differencing (DVARS[8]), and global signals. Preprocessing may include further steps for denoising and estimation of confounds; for instance, dimensionality reduction methods based on principal component analysis (PCA) or independent component analysis (ICA) underlie component-based noise correction (CompCor[10]) and automatic removal of motion artifacts (ICA-AROMA[11]).

The neuroimaging community is well equipped with tools that implement the majority of the individual preprocessing steps described so far (Table 1). These tools are readily available within software packages including AFNI[12], ANTs[13], FreeSurfer[14], FSL[15], Nilearn[16], and SPM[17]. Despite the wealth of accessible software and multiple attempts to outline best practices for preprocessing[2,5,7,18], the large variety of data acquisition protocols has led to the use of ad hoc pipelines customized for nearly every study[19]. In practice, the neuroimaging community lacks a preprocessing workflow that reliably provides high-quality and consistent results on arbitrary datasets.
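As an illustration of two of these confounds, FD and DVARS can be sketched in a few lines of NumPy. This is a minimal sketch, not fMRIPrep's implementation: the six-column parameter ordering (three translations in mm, then three rotations in radians) and the 50 mm head-radius default are assumptions following Power et al.'s formulation.

```python
import numpy as np

def framewise_displacement(motion, radius=50.0):
    """Framewise displacement (FD, Power et al.): sum of absolute backward
    differences of the six rigid-body realignment parameters, converting
    rotations (radians) to arc length on a sphere of `radius` mm.

    `motion` has shape (timepoints, 6): translations (mm), then rotations."""
    motion = np.asarray(motion, dtype=float)
    diffs = np.abs(np.diff(motion, axis=0))
    diffs[:, 3:] *= radius                    # radians -> displacement in mm
    # FD is undefined for the first timepoint; conventionally set to zero.
    return np.concatenate([[0.0], diffs.sum(axis=1)])

def dvars(bold):
    """DVARS: spatial root-mean-square of the temporal difference of the
    BOLD signal. `bold` has shape (timepoints, voxels)."""
    d = np.diff(np.asarray(bold, dtype=float), axis=0)
    return np.concatenate([[0.0], np.sqrt((d ** 2).mean(axis=1))])
```

For example, a 1 mm translation step between two frames yields FD = 1.0 for that frame, and a 0.02 rad rotation contributes 0.02 × 50 = 1.0 mm.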
Table 1.

FMRIPrep integrates best-in-breed tools for each of the preprocessing tasks that its workflow covers, except for steps implemented as part of the development of fMRIPrep (in-house implementations). Tasks listed in the first column are described in detail in Supplementary Note 1.

Preprocessing task | fMRIPrep includes | Alternatives (not included within fMRIPrep)

Anatomical T1w brain-extraction | antsBrainExtraction.sh (ANTs) | bet (FSL), 3dSkullstrip (AFNI), MRTOOL (SPM Plug-in)
Anatomical surface reconstruction | recon-all (FreeSurfer) | CIVET, BrainSuite, Computational Anatomy (SPM Plug-in)
Head-motion estimation (and correction) | mcflirt (FSL) | 3dvolreg (AFNI), spm_realign (SPM), cross_realign_4dfp (4dfp), antsBrainRegistration (ANTs)
Susceptibility-derived distortion estimation (and unwarping) | 3dqwarp (AFNI) | fugue and topup (FSL), FieldMap and HySCO (SPM Plug-ins)
Slice-timing correction | 3dTshift (AFNI) | slicetimer (FSL), spm_slice_timing (SPM), interp_4dfp (4dfp)
Intra-subject registration | bbregister (FreeSurfer), flirt (FSL) | 3dvolreg (AFNI), antsRegistration (ANTs), Coregister (SPM GUI)
Spatial normalization (inter-subject co-registration) | antsRegistration (ANTs) | @auto_tlrc (AFNI), fnirt (FSL), Normalize (SPM GUI)
Surface sampling | mri_vol2surf (FreeSurfer) | SUMA (AFNI), MNE, Nilearn
Subspace projection denoising (ICA, PCA, etc.) | melodic (FSL), ICA-AROMA | Nilearn, LMGS (SPM Plug-in)
Confounds | in-house implementation | fsl_motion_outliers (FSL), TAPAS PhysIO (SPM Plug-in)
Detection of nonsteady-states | in-house implementation | Ad-hoc implementations, manual setting

RESULTS

FMRIPrep is a robust and convenient tool for researchers and clinicians to prepare both task-based and resting-state fMRI data for analysis. Its outputs enable a broad range of applications, including within-subject analysis using functional localizers, voxel-based analysis, surface-based analysis, task-based group analysis, resting-state connectivity analysis, and many others.

A modular design alongside BIDS allows for a flexible, adaptive workflow

FMRIPrep is composed of sub-workflows that are dynamically assembled into different configurations depending on the input data. These building blocks combine tools from widely used, open-source neuroimaging packages (Table 1). The workflow engine Nipype[20] is used to stage the workflows and to deal with execution details (such as resource management). As presented in Figure 1, the workflow comprises two major blocks, separated into anatomical and functional MRI processing streams. The Brain Imaging Data Structure[21] (BIDS, Supplementary Note 2) allows fMRIPrep to precisely identify the structure of the input data and to gather all the available metadata (e.g. imaging parameters) with no manual intervention. Through a set of heuristics, fMRIPrep reliably self-adapts to dataset irregularities such as missing acquisitions or runs.
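The self-adapting assembly can be pictured as metadata-driven branching. The following is a hypothetical sketch of that logic, not fMRIPrep's actual code: the `meta` keys and step names are illustrative stand-ins for its internal sub-workflows.

```python
# Hypothetical sketch: BIDS-derived metadata decides which sub-workflows are
# assembled for the functional stream. Keys and step names are illustrative.

def assemble_bold_workflow(meta):
    steps = []
    if meta.get("slice_timing"):       # SliceTiming found in the sidecar JSON
        steps.append("slice_timing_correction")
    steps.append("head_motion_correction")
    if meta.get("fieldmaps"):          # fmap/ acquisitions are available
        steps.append("sdc_fieldmap")
    else:                              # fall back to "fieldmap-less" SDC
        steps.append("sdc_fieldmap_less")
    steps += ["bold_to_t1w_registration", "spatial_normalization"]
    return steps
```

With an empty metadata dictionary, the heuristics skip slice-timing correction and fall back to fieldmap-less distortion correction; when both slice timings and field maps are present, the corresponding steps are enabled instead.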
Figure 1.

FMRIPrep is an fMRI preprocessing tool that adapts to the input dataset.

Leveraging the Brain Imaging Data Structure (BIDS[21]), the software self-adjusts automatically, configuring the optimal workflow for the given input dataset. Thus, no manual intervention is required to locate the required inputs (one T1-weighted image and one BOLD series), read acquisition parameters (such as the repetition time –TR– and the slice acquisition times) or find additional acquisitions intended for specific preprocessing steps (such as field maps and other alternatives for the estimation of susceptibility distortion).

Visual reports ease quality control and maximize transparency

Users can assess the quality of preprocessing with an individual report generated per participant (see Supplementary Figure 1). Reports contain dynamic and static mosaic views of images at different quality control points along the preprocessing pipeline. Written in hypertext markup language (HTML), reports can be opened with any web browser, are amenable to integration within online science services (e.g. OpenNeuro or CodeOcean[22]), and maximize shareability between peers. These reports effectively minimize the amount of time required to assess the quality of the results. As an additional transparency enhancement, reports include a citation boilerplate that follows the guidelines by Poldrack et al.[23] and gives due credit to the authors of all of the individual pieces of software used within fMRIPrep.
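The mechanism behind such a report is simple: per-checkpoint mosaic images stitched into a static HTML page. Below is a minimal stdlib sketch, assuming hypothetical image filenames; it is not fMRIPrep's report generator (which uses templating and embedded SVG).

```python
# Sketch of assembling a self-contained HTML quality-control report from
# per-step mosaic images. Filenames and layout are hypothetical.
import html

def build_report(subject, sections):
    """`sections` maps a quality-control checkpoint title to an image path."""
    parts = [f"<html><body><h1>Report: {html.escape(subject)}</h1>"]
    for title, image in sections.items():
        parts.append(f"<h2>{html.escape(title)}</h2>")
        parts.append(f'<img src="{html.escape(image)}" alt="{html.escape(title)}">')
    parts.append("</body></html>")
    return "\n".join(parts)
```

The resulting file can be opened in any browser or served from an online platform, which is what makes this style of report easy to share and review.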

Highlights of fMRIPrep within the neuroimaging context

FMRIPrep is agnostic to currently available analysis choices, as it supports a wide range of higher-level analysis and modeling options. Alternative workflows such as afni_proc.py (AFNI[12]), feat (FSL[15]), C-PAC[24] (configurable pipeline for the analysis of connectomes), the Human Connectome Project (HCP[25]) Pipelines[26], or the Batch Editor of SPM are not agnostic, because they prescribe particular methodologies for analyzing the preprocessed data. Important limitations to compatibility with downstream analysis derive from the coordinate space of the outputs and the regular (volume) vs. irregular (surface) sampling of the BOLD signal. For example, HCP Pipelines supports surface-based analyses in subject or template space, whereas C-PAC and feat are volume-based only. Although afni_proc.py is volume-based by default, pre-reconstructed surfaces can be manually set for sampling the BOLD signal prior to analysis. FMRIPrep allows a multiplicity of output spaces, including subject space and atlases, for both volume-based and surface-based analyses. While fMRIPrep avoids including processing steps that may limit further analysis (e.g. spatial smoothing), other tools are designed to perform preprocessing that supports specific analysis pipelines. For instance, C-PAC performs several processing steps towards the connectivity analysis of resting-state fMRI. Further advantages of fMRIPrep are described in Online Methods, and include the “fieldmap-less” susceptibility distortion correction (SDC), the community-driven development and high standards of software engineering, and the focus on reproducibility.

FMRIPrep yields high-quality results on diverse data

We iteratively maximized the robustness and overall quality of the results generated by fMRIPrep using the two-stage validation framework shown in Supplementary Figure 2. In Phase I, aimed at fault discovery, we tested fMRIPrep on a set of 30 datasets from OpenfMRI (see Table 2). Because substandard data quality is known to degrade the outcomes of image processing, we used MRIQC[27] to select the set of test images. Phase I concluded with the release of fMRIPrep version 1.0 on December 6, 2017. Phase II addressed quality assurance and reliability validation. Figure 2 illustrates how the quality of results improved during Phase II. After Phase II, 50 of the total 54 datasets were rated above the “acceptable” average quality level. The remaining 4 datasets were all above the “poor” level and at or near the “acceptable” rating. Correspondingly, Supplementary Figure 3 shows the individual evolution of every dataset at each of the seven quality control points. Phase II concluded with the release of fMRIPrep version 1.0.8 on February 22, 2018. Supplementary Results 1 presents some examples of issues resolved during validation.
Table 2.

S: number of sessions; T: number of tasks; R: number of BOLD runs; Modalities: number of runs for each modality, per subject (FM indicates acquisitions for susceptibility distortion correction); Part. IDs (phase): participant identifiers included in testing phase; N: total of unique participants; TR: repetition time (s); #TR: length of time-series (volumes); Resolution: voxel size of BOLD series (mm).

DS000XXX | Scanner | S | T | R | Modalities | Part. IDs (Phase I) | Part. IDs (Phase II) | N | TR | #TR | Resolution

001[54]SIEMENS11211 T1w, 3 BOLD02, 03, 09, 1501, 02, 07, 0872.063003.12×3.12×4.00
002[55]SIEMENS13481 T1w, 6 BOLD01, 11, 14, 1502, 03, 04, 1082.095103.12×3.12×5.00
003[56]SIEMENS1161 T1w, 1 BOLD03, 07, 09, 1102, 09, 10, 1162.09563.12×3.12×4.00
005[57]SIEMENS11211 T1w, 3 BOLD01, 03, 06, 1401, 04, 05, 1572.050403.12×3.12×4.00
007[58]SIEMENS13461 T1w, 5 BOLD09, 11, 18, 2003, 04, 08, 1282.082053.12×3.12×4.00
008[59]SIEMENS12381 T1w, 5 BOLD04, 09, 12, 1410, 12, 13, 1572.068083.12×3.12×4.39
009SIEMENS14481 T1w, 6 BOLD01, 03, 09, 1017, 18, 21, 2382.0105283.00×3.00×4.00
011[60]SIEMENS14411 T1w, 5 BOLD01, 03, 06, 0803, 09, 11, 1472.080413.12×3.12×5.00
017SIEMENS22484 T1w, 9 BOLD2, 4, 7, 82, 5, 7, 852.087363.12×3.12×4.00
030[34,61]SIEMENS18301 T1w, 7 BOLD10[440,638,668,855]42.262543.00×3.00×4.00
031[62]SIEMENS107919129 T1w, 18 T2w, 46 FM, 191 BOLD0111.2790172.55×2.55×2.54
051[63]SIEMENS11542 T1w, 7 BOLD03, 04, 05, 1302, 04, 06, 0972.0108003.12×3.12×6.00
052[64]SIEMENS12282 T1w, 4 BOLD06, 08, 12, 1405, 10, 12, 1372.063003.12×3.12×6.00
053SIEMENS13321 T1w, 8 BOLD002, 003, 005, 00641.2107122.40×2.40×2.40
101SIEMENS11161 T1w, 2 BOLD06, 08, 16, 1905, 11, 17, 2082.024163.00×3.00×4.00
102[6567]SIEMENS11161 T1w, 2 BOLD05, 19, 22, 2308, 10, 16, 2082.023363.00×3.00×4.00
105[68,69]GE11711 T1w, 11 BOLD1, 2, 3, 61, 4, 5, 662.585913.50×3.75×3.75
107[70]SIEMENS11141 T1w, 2 BOLD02, 05, 20, 2905, 36, 39, 4773.023153.00×3.00×3.00
108[71]GE11411 T1w, 5 BOLD01, 03, 07, 1703, 10, 24, 2672.078603.44×3.44×4.50
109[72]SIEMENS11121 T1w, 2 BOLD02, 10, 39, 4702, 11, 15, 3962.021483.00×3.00×3.54
110[73]GE11801 T1w, 10 BOLD07, 09, 17, 1801, 02, 03, 0682.0148803.44×3.44×4.01
114[74]GE25702 T1w, 10 BOLD01, 05, 07, 0802, 03, 04, 0775.0106264.00×4.00×4.00
115[75,76]SIEMENS13241 T1w, 3 BOLD31, 68, 77, 7804, 33, 67, 7982.532884.00×4.00×4.00
116[7780]PHILIPS12361 T1w, 6 BOLD02, 08, 10, 1508, 12, 15, 1762.061203.00×3.00×4.00
119[81]SIEMENS11311 T1w, 3 BOLD10, 51, 59, 7411, 26, 56, 5881.575643.12×3.12×4.00
120[82]SIEMENS11111 T1w, 2 BOLD04, 05, 08, 2441.523763.12×3.12×4.00
121[83]SIEMENS11281 T1w, 4 BOLD01, 04, 05, 2001, 18, 22, 2671.556563.12×3.12×4.00
133[84]PHILIPS21242 T1w, 6 BOLD06, 21, 22, 2341.734804.00×4.00×4.00
140[85]PHILIPS11361 T1w, 9 BOLD05, 27, 32, 3342.073802.80×2.80×3.00
148GE11121 T1w, 1 T2w, 3 BOLD09, 26, 28, 3341.831623.00×3.00×3.00
157[86]PHILIPS1141 T1w, 1 BOLD04, 21, 23, 2841.614854.00×4.00×3.99
158[87]SIEMENS1141 T1w, 1 BOLD064, 081, 122, 14942.012403.00×3.00×3.30
164[88]SIEMENS1141 T1w, 1 BOLD006, 012, 019, 02741.514803.50×3.50×3.50
168[89]SIEMENS1141 T1w, 1 BOLD08, 27, 30, 4942.521123.00×3.00×3.00
170[9092]GE14481 T1w, 12 BOLD1700, 1708, 1710, 171343.021603.44×3.44×3.40
171[93]SIEMENS12201 T1w, 5 BOLDcontrol0[4,8,14], mdd0343.020662.90×2.90×3.00
177[94]SIEMENS1141 T1w, 1 BOLD04, 07, 10, 1143.09203.00×3.00×3.00
200[95]SIEMENS1141 T1w, 1 BOLD2004, 2011, 2012, 201442.54803.28×3.28×4.29
205[96]SIEMENS12121 T1w, 3 BOLD01, 05, 06, 0742.241033.00×3.00×3.00
208[97]SIEMENS1141 T1w, 1 BOLD27, 45, 56, 6942.512003.44×3.44×3.00
212[98,99]SIEMENS12401 T1w, 10 BOLD07, 13, 20, 2943.058083.12×3.12×4.00
213[100]SIEMENS1141 T1w, 1 BOLD06, 10, 12, 1342.011203.00×3.00×3.99
214[101]SIEMENS1141 T1w, 1 BOLDEESS0[06,31,33,34]41.613643.44×3.44×5.00
216[102]GE11161 T1w, 4 BOLD (ME)01, 02, 03, 0443.526883.00×3.00×3.00
218[103]PHILIPS11121 T1w, 3 BOLD02, 07, 12, 1741.567092.88×3.00×2.88
219[103]PHILIPS11141 T1w, 3 BOLD04, 09, 10, 1241.578072.88×3.00×2.88
220[104]PHILIPS, SIEMENS31123 T1w, 3 BOLDtbi[03,05,06,10]42.017283.00×3.00×4.00
221SIEMENS21151 MP2RAGE, 9 FM, 3 BOLD010[016,064,125,251]42.598552.30×2.30×2.30
224[105]SIEMENS1263994 T1w, 4 T2w, 10 FM, 79 BOLDMSC[05,06,08,09]MSC[05,08,09,10]52.2885284.00×4.00×4.00
228SIEMENS1141 T1w, 1 BOLDpixar[001,017,103,132]42.06723.06×3.06×3.29
229[106]SIEMENS11121 T1w, 3 BOLD02, 05, 07, 1042.046803.44×3.44×3.00
231[107]SIEMENS11121 T1w, 3 BOLD01, 02, 03, 0942.045482.02×2.02×2.00
233[108]PHILIPS12802 T1w, 10 BOLDrid0000[12,24,36,41]rid0000[01,17,31,32]82.0156803.00×3.00×3.00
237[109]SIEMENS11411 T1w, 5 BOLD03, 08, 11, 1201, 03, 04, 0671.0198443.00×3.00×3.00
243[9]SIEMENS11131 T1w, 1 BOLD012, 032, 042, 071023, 066, 089, 09482.528844.00×4.00×4.00

Total2176120325304551769
Figure 2.

Integrating visual assessment into the software testing framework effectively increases the quality of results.

In an early assessment of quality using fMRIPrep version 1.0.0, the overall rating of two datasets was below the “poor” category and four below the “acceptable” level (left column of colored circles). After addressing some outstanding issues detected by the early assessment, the overall quality of processing is substantially improved (right column of circles), and no datasets are below the “poor” quality level. Only two datasets are rated below the “acceptable” level in the second assessment (using fMRIPrep version 1.0.7).

FMRIPrep prevents loss of spatial accuracy via smoothing

We demonstrate that the focus on robustness against data irregularity does not come at a cost in the quality of preprocessing outputs. In fact, as shown in Figure 3A, the preprocessing outcomes of FSL feat are smoother than those of fMRIPrep. Although preprocessed data were resampled to an isotropic voxel size of 2.0×2.0×2.0 [mm], the estimated smoothness (before the prescribed smoothing step) for fMRIPrep was below 4.0 mm, very close to the original resolution of 3.0×3.0×4.0 [mm] of these data. We calculated standard deviation maps in MNI space[28] for the temporal average map derived from preprocessing with both alternatives. Visual inspection of these variability maps (Figure 3B) reveals a higher anatomical accuracy of fMRIPrep over feat, likely reflecting the combined effects of a more precise spatial normalization scheme and the application of “fieldmap-less” SDC. FMRIPrep outcomes are particularly better aligned with the underlying anatomy in regions typically warped by susceptibility distortions, such as the orbitofrontal lobe, as demonstrated by close-ups in Supplementary Figure 4.
We also compared preprocessing done with fMRIPrep and FSL’s feat in two common fMRI analyses. First, we performed within-subject statistical analysis using feat (the same tool provides both preprocessing and first-level analysis) on both sets of preprocessed data. Second, we performed a group statistical analysis using ordinary least-squares (OLS) mixed modeling (flame[29], FSL). In both experiments, we applied identical analysis workflows and settings to both preprocessing alternatives. The first-level analysis showed that the thresholded activation count maps for the go vs. successful stop contrast in the “stopsignal” task were very similar (Figure 4): both pipelines identified activation in the same regions. However, since data preprocessed with feat are smoother, the results from fMRIPrep are more local and better aligned with the cortical sheet. The overlap of statistical maps, as well as their Pearson’s correlation, was tightly related to the smoothness of the input data. In the group analysis, fMRIPrep and feat perform equivalently (see Supplementary Results 2).
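Spatial smoothness of the kind compared here is typically estimated from the ratio between the variance of a signal's spatial derivative and its variance, the approach behind tools such as AFNI's 3dFWHMx. A one-dimensional sketch of that estimator, assuming a Gaussian autocorrelation function (not the exact implementation used in this comparison):

```python
import numpy as np

def estimate_fwhm(series):
    """Estimate the smoothness (FWHM, in voxel units) of a 1-D signal from
    the variance of its first differences relative to its variance, assuming
    a Gaussian autocorrelation function (the classic derivative-variance
    estimator behind tools such as AFNI's 3dFWHMx)."""
    series = np.asarray(series, dtype=float)
    arg = 1.0 - np.diff(series).var() / (2.0 * series.var())
    if arg <= 0.0:            # unsmoothed white noise: no measurable smoothness
        return 0.0
    return float(np.sqrt(-2.0 * np.log(2.0) / np.log(arg)))
```

Smoothing white noise with a Gaussian kernel of standard deviation σ voxels should yield an estimate close to the kernel's FWHM, σ·√(8 ln 2) ≈ 2.355σ, which is why smoothness can be reported in millimeters once the voxel size is known.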
Figure 3.

FMRIPrep affords the researcher finer control over the smoothness of their analysis.

A | Estimating the spatial smoothness of data before and after the initial smoothing step of the analysis workflow confirmed that results of preprocessing with feat are intrinsically smoother. B | Mapping the standard deviation of averaged BOLD time-series displayed greater variability around the brain outline (represented with a black contour) for data preprocessed with feat. This effect is generally associated with a lower performance of spatial normalization[28]. Reference contours correspond to the brain tissue segmentation of the MNI atlas.

Figure 4.

The activation count maps from fMRIPrep are better aligned with the underlying anatomy.

The mosaics show thresholded activation count maps for the go vs. successful stop contrast in the “stopsignal” task after preprocessing using either fMRIPrep (top row) or FSL’s feat (bottom row), with identical single subject statistical modeling. Both tools obtained similar activation maps, with fMRIPrep results being slightly better aligned with the underlying anatomy.

DISCUSSION

FMRIPrep is an fMRI preprocessing workflow developed to excel at four aspects of scientific software: robustness to data idiosyncrasies, high quality and consistency of results, maximal transparency, and ease of use. We describe how using the Brain Imaging Data Structure (BIDS[21]) along with a flexible design allows the workflow to self-adapt to the idiosyncrasies of inputs (sec. A modular design alongside BIDS allows for a flexible, adaptive workflow). The workflow (briefly summarized in Figure 1) integrates state-of-the-art tools from widely used neuroimaging software packages at each preprocessing step (see Table 1). Some other relevant facets of fMRIPrep, and how they relate to existing alternative pipelines, are presented in sec. Highlights of fMRIPrep within the neuroimaging context. We stress that fMRIPrep is developed with the best software engineering principles, which are fundamental to ensure software reliability. The pipeline is easy to use for researchers and clinicians without extensive computer engineering experience, and produces comprehensive visual reports (Supplementary Figure 1). In sec. FMRIPrep yields high-quality results on diverse data, we demonstrate the robustness of fMRIPrep on a representative collection of data from datasets associated with different studies (Table 2). We then interrogate the quality of those results through the individual inspection of the corresponding visual reports by experts (sec. Visual reports ease quality control and maximize transparency, and the corresponding summary in Figure 2). A comparison to FSL’s feat (sec. FMRIPrep prevents loss of spatial accuracy via smoothing) demonstrates that fMRIPrep achieves higher spatial accuracy and introduces less uncontrolled smoothness (Figures 3, 4). Group statistical maps differed only in their smoothness (sharper for fMRIPrep).
The fact that first-level and second-level analyses resulted in small differences between fMRIPrep and our ad-hoc implementation of a feat-based workflow indicates that the individual preprocessing steps perform similarly when they are fine-tuned to the input data. That justifies the need for fMRIPrep, which autonomously adapts the workflow to the data without error-prone manual intervention. To a limited extent, that also mitigates some concerns and theoretical risks that arise from the analytical degrees of freedom[19] available to researchers. FMRIPrep stands out amongst pipelines because it automates the adaptation to the input dataset without compromising the quality of results. One limitation of this work is the use of visual (the reports) and semi-visual (e.g. Figures 3, 4) assessments of the quality of preprocessing outcomes. Although some frameworks have been proposed for the quantitative evaluation of preprocessing on task-based (such as NPAIRS[30]) and resting-state[31] fMRI, they impose a set of assumptions on the test data and the workflow being assessed that severely limits their suitability in general. The modular design of fMRIPrep defines an interface to each processing step, which will permit the programmatic evaluation of the many possible combinations of software tools and processing steps. That will also enable the use of quantitative testing frameworks to pursue the minimization of Type I errors without the cost of increasing Type II errors. The range of possible applications for fMRIPrep also presents some boundaries. For instance, very narrow field-of-view (FoV) images often do not contain enough information for standard image registration methods to work correctly. Reduced-FoV datasets from OpenfMRI were excluded from the evaluation, since they are not yet fully supported by fMRIPrep. Extending fMRIPrep’s support for these particular images is already on the development road-map.
FMRIPrep may also under-perform for particular populations (e.g. infants) or when brains show nonstandard structures, such as tumors, resected regions, or lesions. Despite these challenges, fMRIPrep performed robustly on data from a simultaneous MRI/electrocorticography study, which is extremely challenging to analyze due to the massive BOLD signal drop-out near the implanted cortical electrodes (see Supplementary Figure 5). In addition, fMRIPrep’s modular architecture makes it straightforward to extend the tool to support specific populations or new species by providing appropriate atlases of those brains. This future line of work would be particularly interesting in order to adapt the workflow to data collected from rodents and nonhuman primates.
Approximately 80% of the analysis pipelines investigated by Carp[19] were implemented using either AFNI[12], FSL[15], or SPM[17]. Ad-hoc pipelines adapt the basic workflows provided by these tools to the particular dataset at hand. Although workflow frameworks like Nipype[20] ease the integration of tools from different packages, these pipelines are typically restricted to just one of these alternatives (AFNI, FSL, or SPM). Otherwise, scientists can adopt the acquisition protocols and associated preprocessing software of large consortia[26,32] such as the Human Connectome Project (HCP) or the UK Biobank[33]. The off-the-shelf applicability of these workflows is contravened by important limitations on the experimental design. Therefore, researchers typically opt to recode their custom preprocessing workflows with nearly every new study[19]. That practice entails a “pipeline debt”, which requires investment in proper software engineering to ensure acceptable correctness and stability of the results (e.g. continuous integration testing) and reproducibility (e.g. versioning, packaging, containerization, etc.). A trivial example of this risk would be the leakage of magic numbers hard-coded in the source (i.e. a crucial imaging parameter that inadvertently changed from one study to the next). Until fMRIPrep, an analysis-agnostic approach that builds upon existing software instruments and optimizes preprocessing for robustness to data idiosyncrasies, quality of outcomes, ease of use, and transparency was lacking.
The rapid increase in volume and diversity of data, as well as the evolution of available techniques for processing and analysis, presents an opportunity for significantly advancing research in neuroscience. The drawback resides in the need for progressively complex analysis workflows that rely on decreasingly interpretable models of the data. Such a context encourages “black-box” solutions that efficiently perform a valuable service but do not provide insights into how the tool has transformed the data into the expected outputs. Black boxes obscure important steps in the inductive process mediating between experimental measurements and reported findings. This way of moving forward risks producing a future generation of cognitive neuroscientists who have become experts in using sophisticated computational methods, but have little to no working knowledge of how data were transformed through processing. Transparency is often identified as a treatment for these problems. FMRIPrep ascribes to “glass-box” principles, which are defined in opposition to the many different facets or levels at which black-box solutions are opaque. The visual reports that fMRIPrep generates are a crucial aspect of the glass-box approach. Their quality control checkpoints represent the logical flow of preprocessing, allowing scientists to critically inspect and better understand the underlying mechanisms of the workflow. A second transparency element is the citation boilerplate that formalizes all details of the workflow and provides the versions of all involved tools, along with references to the corresponding scientific literature.
A third asset for transparency is the thorough documentation, which delivers additional details on each of the building blocks that are represented in the visual reports and described in the boilerplate. Further, fMRIPrep has been open-source since its inception: users have access to all the incremental additions to the tool through the history of the version-control system. The use of GitHub (https://github.com/poldracklab/fmriprep) grants access to the discussions held during development, allowing the retrieval of how and why the main design decisions were made. GitHub also provides an excellent platform to foster the community with useful tools such as source browsing, code review, bug tracking and reporting, and the submission of new features and bug fixes through pull requests. The modular design of fMRIPrep enhances its flexibility and improves transparency, as the main features of the software are more easily accessible to potential collaborators. In combination with coding style and contribution guidelines, this modularity has enabled multiple contributions by peers and the creation of a rapidly growing community that would be difficult to nurture behind closed doors. A number of existing tools have implemented elements of the “glass-box” philosophy (for example, visual reports in feat, documentation in C-PAC, the open-source community of Nilearn), but the complete package (visual reports, educational documentation, reporting templates, collaborative open-source community) is still rare among scientific software. FMRIPrep’s transparent and accessible development and reporting aim to better equip fMRI practitioners to perform reliable, reproducible statistical analyses with a high-standard, consistent, and adaptive preprocessing instrument.

DATA

Data used in the validation of fMRIPrep

Participants were drawn from a multiplicity of studies available in OpenfMRI, accessed on September 30, 2017. Studies were sampled uniformly (four participants each), except for DS000031, which consists of only one participant. Data selection criteria are described below. Magnetic resonance imaging (MRI) data were acquired at multiple scanning centers, with the following frequencies of vendors: ∼70% SIEMENS, ∼14% PHILIPS, ∼14% GE. Data were acquired by 1.5T and 3T systems running varying software versions. Acquisition protocols, as well as the particular acquisition parameters (including relevant BOLD settings such as the repetition time −TR−, the echo time −TE−, the number of TRs, and the resolution), also varied with each study. However, only datasets including at least one T1-weighted (T1w) image and one BOLD run per subject were included. Datasets containing BIDS errors (DS000210) or degenerate data (many T1w images of DS000223 are skull-stripped) at the time of access were discarded. Similarly, very-narrow-FoV BOLD datasets (DS000172, DS000217, and DS000232) were also excluded. In total, 54 datasets (46 single-session, 8 multi-session) were included in this assessment. Table 2 overviews the particular properties of each dataset, summarizing the large heterogeneity of the resource. This evaluation covered 54 of the 58 studies in OpenfMRI that included the two required imaging modalities (T1w and BOLD). Therefore, by covering 93% of the studies in OpenfMRI, we ensured the large heterogeneity in acquisition protocols, settings, instruments, and parameters that is necessary to demonstrate the robustness of fMRIPrep against variability in input data features.

Data used in the comparison to FSL feat

We reused the UCLA Consortium for Neuropsychiatric Phenomics LA5c Study[34], a dataset that is publicly available on OpenfMRI under accession DS000030. During the experiment, subjects performed six tasks and a block of rest, and underwent two anatomical scans. The study includes imaging data from a large group of healthy individuals from the community, as well as samples of individuals diagnosed with schizophrenia, bipolar disorder, and attention-deficit/hyperactivity disorder. As described in the data descriptor[34], MRI data were acquired on one of two 3T Siemens Trio scanners, located at the Ahmanson-Lovelace Brain Mapping Center (syngo MR B15) and the Staglin Center for Cognitive Neuroscience (syngo MR B17). FMRI data were collected using an echo-planar imaging (EPI) sequence (slice thickness=4mm, 34 slices, TR=2s, TE=30ms, flip angle=90deg, matrix 64×64, FoV=192mm, oblique slice orientation). Additionally, a T1w image is available for each participant (MPRAGE, TR=1.9s, TE=2.26ms, FoV=250mm, matrix=256×256, sagittal plane, slice thickness=1mm, 176 slices). For this experiment, only images including both the T1w scan and the functional scans corresponding to the Stop Signal task (referred to as “stopsignal”) were included (totaling N=257 participants).

Stop Signal task.

Participants were instructed to respond quickly to a “go” stimulus. During some of the trials, at unpredictable times, a stop signal appeared after the go stimulus was presented. During those trials, the subject had to inhibit any planned response. In this experiment, we specifically examined the difference in brain activation between a successful stop trial and a go trial (contrast: Go - StopSuccess). Thus, we expect to see brain regions responsible for response inhibition (negative) and for the motor response (positive). Further details on the task are available with the dataset descriptor[34].

THE FMRIPREP WORKFLOW

Preprocessing anatomical images

The T1w image is corrected for intensity non-uniformity using N4BiasFieldCorrection[35] (ANTs), and skull-stripped using antsBrainExtraction.sh (ANTs). Skull-stripping is performed through coregistration to a template, with two options available: the OASIS template[36] (default) or the NKI template[37]. Using visual inspection, we have found that this approach outperforms other common approaches, which is consistent with previous reports[26]. When several T1w volumes are found, the intensity non-uniformity-corrected versions are first fused into a reference T1w map of the subject with mri_robust_template[38] (FreeSurfer). Brain surfaces are reconstructed from the subject’s T1w reference (and T2w images if available) using recon-all[39] (FreeSurfer). The brain mask estimated previously is refined with a custom variation of a method (originally introduced in Mindboggle[40]) to reconcile ANTs-derived and FreeSurfer-derived segmentations of the cortical gray matter (GM). Both surface reconstruction and subsequent mask refinement are optional and can be disabled to save run time when surface-based analysis is not needed. Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template[41] (version 2009c) is performed through nonlinear registration with antsRegistration[42] (ANTs), using brain-extracted versions of both the T1w reference and the standard template. ANTs was selected due to its superior performance in terms of volumetric group-level overlap[43]. Brain tissues (cerebrospinal fluid, CSF; white matter, WM; and GM) are segmented from the reference, brain-extracted T1w image using fast[44] (FSL).

Preprocessing functional runs

For every BOLD run found in the dataset, a reference volume and its skull-stripped version are generated using an in-house methodology (described in Supplementary Note 3). Then, head-motion parameters (volume-to-reference transform matrices, and corresponding rotation and translation parameters) are estimated using mcflirt[45] (FSL). Among several alternatives (see Table 1), mcflirt is used because its results are comparable to those of other tools[46] and it stores the estimated parameters in a format that facilitates the composition of spatial transforms to achieve one-step interpolation (see below). If slice-timing information is available, BOLD runs are (optionally) slice-time corrected using 3dTshift (AFNI[12]). When field map information is available, or the experimental “fieldmap-less” correction is requested (see below), SDC is performed using the appropriate methods (see Supplementary Figure 6). This is followed by co-registration to the corresponding T1w reference using boundary-based registration[47] with nine degrees of freedom (to minimize remaining distortions). If surface reconstruction is selected, fMRIPrep uses bbregister (FreeSurfer); otherwise, the boundary-based coregistration implemented in flirt (FSL) is applied. In our experience, bbregister yields better results[47] owing to the high resolution and the topological correctness of the GM/WM surfaces driving the registration. To support a large variety of output spaces for the results (e.g. the native space of BOLD runs, the corresponding T1w space, FreeSurfer’s fsaverage spaces, the template used as target in the spatial normalization step, etc.), the transformations between spaces can be combined. For example, to generate preprocessed BOLD runs in template space (e.g. MNI), the following transforms are concatenated: head-motion parameters, the warping to reverse susceptibility distortions (if calculated), BOLD-to-T1w, and T1w-to-template mappings.
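The rationale for concatenating transforms before resampling can be sketched with plain homogeneous matrices. This is a minimal 2D illustration with made-up transform values, not fMRIPrep’s actual implementation (which composes affine and nonlinear warps through ANTs): composing all mappings into one operator means the image only needs to be interpolated once.

```python
import numpy as np

def make_affine(rotation_deg=0.0, translation=(0.0, 0.0)):
    """Build a 2D homogeneous affine (3x3) from a rotation and a translation."""
    th = np.deg2rad(rotation_deg)
    return np.array([[np.cos(th), -np.sin(th), translation[0]],
                     [np.sin(th),  np.cos(th), translation[1]],
                     [0.0,         0.0,        1.0]])

# Hypothetical per-stage transforms (values are illustrative only).
hmc         = make_affine(rotation_deg=1.0,  translation=(0.5, -0.2))   # head motion
bold_to_t1w = make_affine(rotation_deg=-2.0, translation=(3.0, 1.0))
t1w_to_tpl  = make_affine(rotation_deg=0.5,  translation=(-10.0, 4.0))

# One-step resampling: compose all mappings into a single matrix first,
# so the image is interpolated once instead of once per stage.
single_step = t1w_to_tpl @ bold_to_t1w @ hmc

voxel = np.array([10.0, 20.0, 1.0])
# Applying the stages sequentially or the composed map gives the same point.
seq = t1w_to_tpl @ (bold_to_t1w @ (hmc @ voxel))
assert np.allclose(single_step @ voxel, seq)
```

The same idea underlies fMRIPrep’s use of a single antsApplyTransforms call with a list of transforms, rather than resampling after every correction.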
The BOLD signal is also sampled onto the corresponding participant’s surfaces using mri_vol2surf (FreeSurfer), when surface reconstruction is being performed. Thus, these sampled surfaces can easily be transformed onto different output spaces available by concatenating transforms calculated throughout fMRIPrep and internal mappings between spaces calculated with recon-all. The composition of transforms allows for a single-interpolation resampling of volumes using antsApplyTransforms (ANTs). Lanczos interpolation is applied to minimize the smoothing effects of linear or Gaussian kernels[48]. Optionally, ICA-AROMA can be performed and corresponding “non-aggressively” denoised runs are then produced. When ICA-AROMA is enabled, the time-series are first smoothed and then denoised, following the description of the original method[11].
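The benefit of Lanczos (windowed-sinc) interpolation can be sketched with a minimal 1D resampler. This is an illustrative toy, not the ANTs implementation fMRIPrep calls; the edge-clamping strategy and normalization are assumptions made here for brevity.

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Windowed sinc kernel with `a` taps on each side (np.sinc is sin(pi x)/(pi x))."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, out, 0.0)

def lanczos_resample_1d(signal, positions, a=3):
    """Sample `signal` (defined at integer indices) at fractional `positions`."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty(len(positions))
    for i, p in enumerate(positions):
        lo = int(np.floor(p)) - a + 1
        idx = np.arange(lo, lo + 2 * a)
        idx_c = np.clip(idx, 0, len(signal) - 1)    # clamp at the edges
        w = lanczos_kernel(p - idx, a)
        out[i] = np.dot(w, signal[idx_c]) / w.sum() # normalized weighted sum
    return out

# At integer positions the kernel reduces to the identity (no smoothing),
# unlike a Gaussian kernel, which blurs even at grid-aligned samples.
sig = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0])
assert np.allclose(lanczos_resample_1d(sig, [2.0, 4.0]), [4.0, 16.0])
```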

Extraction of nuisance time-series

To avoid restricting fMRIPrep’s outputs to particular analysis types, the tool does not perform any temporal denoising by default. Nonetheless, it provides researchers with a diverse set of confound estimates that could be used for explicit nuisance regression or as part of higher-level models. This lends itself to decoupling preprocessing from behavioral modeling, as well as to evaluating the robustness of final results across different denoising schemes. A set of physiological noise regressors are extracted for the purpose of performing component-based noise correction (CompCor[10]). Principal components are estimated after high-pass filtering the BOLD time-series (using a discrete cosine filter with a 128 s cut-off) for the two CompCor variants: temporal (tCompCor) and anatomical (aCompCor). Six tCompCor components are then calculated from the top 5% most variable voxels within a mask covering the subcortical regions. This subcortical mask is obtained by heavily eroding the brain mask, which ensures it does not include cortical GM regions. For aCompCor, six components are calculated within the intersection of the aforementioned mask and the union of the CSF and WM masks calculated in T1w space, after their projection to the native space of each functional run (using the inverse BOLD-to-T1w transformation). Framewise displacement (FD) and DVARS are calculated for each functional run, both using their implementations in Nipype (following the definitions by Power et al.[8]). Three global signals are extracted within the CSF, the WM, and the whole-brain masks using Nilearn[16]. If ICA-AROMA[11] is requested, the “aggressive” noise regressors are collected and placed within the corresponding confounds files. Since the non-aggressive cleaning with ICA-AROMA is performed after the extraction of the other nuisance signals, the “aggressive” regressors can be used to orthogonalize those other nuisance signals, avoiding the risk of re-introducing nuisance signal during regression.
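The aCompCor recipe (discrete cosine high-pass filtering followed by principal-component extraction within a noise mask) can be sketched on synthetic data. The DCT-order formula and the use of SVD below are illustrative assumptions, not fMRIPrep’s exact code, and the random matrix stands in for real WM/CSF voxel time-series:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_vox, tr = 200, 50, 2.0   # timepoints, noise-ROI voxels, TR in seconds

# Synthetic "noise ROI" data (timepoints x voxels); real code would extract
# these from the eroded WM/CSF mask in the BOLD run's native space.
data = rng.standard_normal((n_t, n_vox))

# Discrete cosine high-pass filter with a 128 s cut-off: regress out the slow
# DCT-II components whose period exceeds the cut-off.
order = int(np.floor(2 * n_t * tr / 128.0))
dct = np.array([np.cos(np.pi * k * (2 * np.arange(n_t) + 1) / (2 * n_t))
                for k in range(1, order + 1)]).T        # (n_t, order)
demeaned = data - data.mean(axis=0)
beta, *_ = np.linalg.lstsq(dct, demeaned, rcond=None)
filtered = demeaned - dct @ beta                        # residual = high-passed

# aCompCor-style regressors: top principal components of the filtered ROI data.
u, s, _ = np.linalg.svd(filtered, full_matrices=False)
compcor = u[:, :6] * s[:6]      # six component time-series, (n_t, 6)

assert compcor.shape == (n_t, 6)
```

These component time-series would then be written out alongside FD, DVARS, and the global signals as columns of the confounds file.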
In addition, a “non-aggressive” version of preprocessed data is also provided since this variant of ICA-AROMA denoising cannot be performed using only nuisance regressors.
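The two motion summaries mentioned above can be written compactly. The following is a hedged sketch of the Power et al. definitions (rotations converted to arc length on a 50 mm sphere), not the Nipype implementation itself:

```python
import numpy as np

def framewise_displacement(motion_params, radius=50.0):
    """FD per Power et al.: sum of absolute backward differences of the six
    rigid-body parameters; rotations (radians) become mm via a 50 mm sphere."""
    mp = np.asarray(motion_params, dtype=float).copy()  # (T, 6): 3 trans (mm), 3 rot (rad)
    mp[:, 3:] *= radius                                  # radians -> arc length in mm
    fd = np.abs(np.diff(mp, axis=0)).sum(axis=1)
    return np.concatenate([[0.0], fd])                   # FD of the first frame is 0

def dvars(bold):
    """DVARS: root-mean-square of the temporal derivative across voxels."""
    diff = np.diff(np.asarray(bold, dtype=float), axis=0)
    return np.sqrt((diff ** 2).mean(axis=1))

# Tiny example: a 0.1 mm shift plus a 0.002 rad rotation at frame 2
motion = np.zeros((4, 6))
motion[2] = [0.1, 0, 0, 0.002, 0, 0]
fd = framewise_displacement(motion)
# frame 2 differs from frame 1 by 0.1 mm + 0.002 * 50 mm = 0.2 mm
assert np.isclose(fd[2], 0.2)
```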

“Fieldmap-less” susceptibility distortion correction

Many legacy and current human fMRI protocols lack the MR field maps necessary to perform standard methods for SDC. As described in Supplementary Figure 6, the BIDS dataset is queried to discover whether extra acquisitions containing field map information are available. When no fieldmap information is found, fMRIPrep adapts the “fieldmap-less” correction for diffusion EPI images introduced by Wang et al.[49]. They propose using the same-subject T1w reference as the undistorted target in a nonlinear registration scheme. To maximize the similarity between the T2* contrast of the EPI scan and the reference T1w, the intensities of the latter are inverted. To regularize the optimization of the deformation field, only displacements along the phase-encoding direction are allowed, and the magnitude of the displacements is modulated using priors. To our knowledge, no other existing pipeline applies “fieldmap-less” SDC to the BOLD images. Further details on the integration of the different SDC techniques and particularly this “fieldmap-less” option are found in Supplementary Note 3.

FMRIPrep is thoroughly documented, community-driven, and developed with high-standards of software engineering

Preprocessing pipelines are generally well documented; however, the extreme flexibility of fMRIPrep makes its proper documentation substantially more challenging. As in other large scientific-software communities, fMRIPrep contributors pledge to keep the documentation thorough and updated across coding iterations. Packages also differ in the involvement of the community: while fMRIPrep includes researchers in the decision-making process and invites their suggestions and contributions, other packages have a more closed model where the feedback from users is more limited (e.g. a mailing list). In contrast to other pipelines, fMRIPrep is community-driven. This paradigm allows the fast adoption of cutting-edge advances in fMRI preprocessing, which tend to render existing workflows (including fMRIPrep) obsolete. For example, while fMRIPrep initially performed STC before HMC, we adapted the tool to the recent recommendations of Power et al.[18] upon a user’s request[ii]. This model has allowed the user base to grow rapidly and enabled substantial third-party contributions to be included in the software, such as support for processing multi-echo datasets. The open-source nature of fMRIPrep has permitted frequent code reviews, which are effective in enhancing the software’s quality and reliability[50]. Supplementary Note 4 describes how the community interacts, discusses the code-review process, and underscores how the modular design of fMRIPrep successfully facilitates contributions from peers. Finally, fMRIPrep undergoes continuous integration testing (see Supplementary Fig. SN4.1), a technique that has recently been proposed as a means to ensure reproducibility of analyses in the computational sciences[51,52]. Additional comparison points, such as the graphical user interfaces of several preprocessing workflows, are given in Supplementary Note 5.

Ensuring reproducibility with strict versioning and containers

For enhanced reproducibility, fMRIPrep fully supports execution via the Docker (https://docker.com) and Singularity[53] container platforms. Container images are generated and uploaded to a public repository for each new version of fMRIPrep. These containers are released with a fixed set of software versions for fMRIPrep and all its dependencies, making run-to-run reproducibility straightforward to achieve. This helps address the widespread lack of reporting of specific software versions and the large variability of software versions, both of which threaten the reproducibility of fMRI analyses[19]. Except for C-PAC, alternative pipelines do not provide official support for containers. The adoption of the BIDS-Apps[51] container model makes fMRIPrep amenable to a multiplicity of infrastructures and platforms: PC, high-performance computing, Cloud, etc.
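As a sketch of the BIDS-Apps invocation model (the host paths and version tag below are illustrative, not prescribed by the paper): the app receives a BIDS directory, an output directory, and an analysis level as positional arguments, so pinning the container tag pins the entire software stack.

```shell
# Hypothetical paths; the BIDS-Apps convention is
#   <app> <bids_dir> <output_dir> <analysis_level> [options]
docker run --rm -it \
    -v /data/bids_dataset:/data:ro \
    -v /data/derivatives:/out \
    poldracklab/fmriprep:1.0.8 \
    /data /out participant
```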

VALIDATION OF FMRIPREP ON DIVERSE DATA

The general validation framework presented in Supplementary Figure 2 implements a testing plan elaborated prior to the release of version 1.0 of the software. The plan is divided into two validation phases, in which different data samples and validation procedures are applied. Table 2 describes the data samples used in each phase. In Phase I, we ran fMRIPrep on a manually selected sample of participants that are potentially challenging to the tool’s robustness, exercising its adaptiveness to the input data. Phase II focused on the visual assessment of the quality of preprocessing results on a large and heterogeneous sample.

Methodology and test plan

To ensure that fMRIPrep fulfills its specifications on reliability and scientific-software standards, the tool underwent a thorough acceptance testing plan. The plan was structured in three phases: the first aimed at the discovery of faults, the second at the evaluation of robustness, and the final phase at the full coverage of OpenfMRI. Of note, an early test phase (Phase 0) was conducted as a proof of concept for the tool.

Validation Phase I – Fault-discovery testing.

During Phase I, a total of 120 subjects from 30 different datasets (see Table 2) were manually identified as low-quality using MRIQC[27]. Data showing substandard quality are known to degrade the outcomes of image processing, and therefore they are helpful for testing software reliability. This sub-sample of OpenfMRI underwent preprocessing on the Stampede2 supercomputer of the Texas Advanced Computing Center (TACC), Austin, TX. Results were visually inspected and failures were reported in the GitHub repository. Once software faults were fixed, fMRIPrep 1.0.0 was released and Phase II of validation was launched.

Validation Phase II – Quality assurance and reliability testing.

In this second phase, the coverage of OpenfMRI was extended to 54 available datasets (Table 2), randomly selecting four participants per dataset (with replacement of participants covered in Phase I). A total of 325 participants[iii] were preprocessed on the Sherlock cluster at Stanford University, Stanford, CA. Validation Phase II integrated a protocol for the screening of results into the software testing (Supplementary Figure 2). Three raters evaluated each participant’s report following the protocol described below. Their ratings are made available with the corresponding reports for scrutiny.

Protocol for manual assessment.

Each visual report generated in Phase II was inspected by one expert (selected randomly among authors CJM, KJG and OE) at seven quality checkpoints: i) overall performance; ii) surface reconstruction from anatomical MRI; iii) T1w brain mask and tissue segmentation; iv) spatial normalization; v) brain mask and regions-of-interest (ROIs) for CompCor application in native BOLD space (“BOLD ROIs”); vi) intra-subject BOLD-to-T1w co-registration; and vii) SDC. Experts were instructed to assign a score on a scale from 1 (poor) to 3 (excellent) at each quality control point. A special rating score of 0 (unusable) was assigned to tasks that failed in a critical way, hampering further preprocessing. Poor (1) was assigned when fMRIPrep did not critically fail at the task, but the outcome would likely affect downstream analysis negatively. Acceptable (2) was assigned when minor defects remained; for example, when the “fieldmap-less” correction unwarped in the expected direction, although some distorted areas remained (or were overcorrected). Finally, excellent (3) was assigned when the expert did not notice any substantial defect that would indicate a lower rating. Supplementary Figure 3 shows the evolution of the quality ratings at the seven checkpoints at the beginning and completion of Phase II (indicated by versions 1.0.0 and 1.0.7, respectively).

COMPARISON TO AN ALTERNATIVE PREPROCESSING TOOL

For comparison, data were preprocessed with two alternative pipelines: fMRIPrep 1.0.8 and FSL’s feat 5.0.10. We then performed identical analyses on the dataset preprocessed with either pipeline. In the first-level analysis, we calculated a t-statistic map per participant for the task under analysis (N=257). Second-level analyses were performed under a specific resampling scheme to allow a statistical comparison between the pipelines: two random (non-overlapping) subsets of n participants were repeatedly entered into a group-level analysis. The first step is the experimental manipulation resulting in two conditions: (1) the data are preprocessed with fMRIPrep, and (2) the data are preprocessed using feat. The next two steps are identical for both conditions.

Preprocessing

Preprocessing with fMRIPrep is described using the corresponding citation boilerplate (Supplementary Box SN3.1). We configured feat using its graphical user interface (GUI) and generated a template.fsf file, which can be found on GitHub[iv]. We extended execution to all participants in our sample by creating the script fsl_feat_wrapper.py, which accompanies the template.fsf file on GitHub. As can be seen in the template.fsf file, we disabled band-pass filtering and spatial smoothing to make the results of preprocessing comparable. Both processing steps (temporal filtering and spatial smoothing) were instead implemented in a common, subsequent analysis workflow described below. Additionally, we manually configured the ICBM 152 Nonlinear Asymmetrical template[41] (version 2009c) as the target for spatial normalization. Finally, we manually resampled the preprocessed BOLD files into template space using FSL’s flirt.

Mapping the BOLD variability on standard space.

To investigate the spatial consistency of the average BOLD across participants, we calculated standard deviation maps in MNI space for the temporal average map[28] derived from preprocessing with both alternatives.

Smoothness.

We used AFNI’s 3dFWHMx to estimate the (average) smoothness of the data at two checkpoints: i) before the first-level analysis workflow, and ii) after applying 5.0mm full-width half-maximum (FWHM) spatial smoothing, which was the first step of the analysis workflow described below.
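The classic smoothness estimator behind tools like 3dFWHMx can be sketched in 1D: the FWHM of an equivalent Gaussian smoothing kernel is inferred from the variance of spatial derivatives relative to the signal variance. This is a simplified illustration under Gaussian-random-field assumptions, not AFNI’s code:

```python
import numpy as np

def fwhm_1d(x):
    """Estimate smoothness (FWHM, in voxel units) of a 1D signal from the
    ratio var(diff(x)) / (2 * var(x)), assuming a Gaussian autocorrelation."""
    x = np.asarray(x, dtype=float)
    ratio = np.diff(x).var() / (2.0 * x.var())
    if ratio >= 1.0:
        return 0.0   # no detectable spatial correlation (pure white noise)
    return np.sqrt(-2.0 * np.log(2.0) / np.log(1.0 - ratio))

rng = np.random.default_rng(0)
noise = rng.standard_normal(20000)
# A 5-point moving average stands in for spatial smoothing of the data.
smoothed = np.convolve(noise, np.ones(5) / 5.0, mode="valid")

# Smoothing must increase the estimated FWHM.
assert fwhm_1d(smoothed) > fwhm_1d(noise)
```

Comparing this estimate before and after preprocessing is how uncontrolled smoothness introduced by a pipeline can be quantified.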

First-level statistical analysis

We analyzed the “stopsignal” task data using FSL and AFNI tools, integrated in a workflow built with Nipype. Spatial smoothing was applied using AFNI’s 3dBlurInMask with a Gaussian kernel of FWHM=5mm. Activity was estimated using a general linear model (GLM) with FSL’s feat. For the one condition under comparison (Go - StopSuccess), one task regressor was included with a fixed duration of 1.5s. An extra regressor was added with equal amplitude, but with a duration equal to the reaction time. These regressors were orthogonalized with respect to the fixed-duration regressor of the same condition. Predictors were convolved with a double-gamma canonical hemodynamic response function. Temporal derivatives were added to all task regressors to compensate for variability in the hemodynamic response function. Furthermore, the six rigid-motion parameters (translation in three directions, rotation in three directions) were added as regressors to avoid confounding effects of head-motion. We included a high-pass filter (100 s cut-off) in FSL’s feat.
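The construction of one convolved task regressor can be sketched as follows. The HRF parameters are the common SPM-style defaults and the onsets are hypothetical; this is an illustration of the technique, not necessarily the exact feat settings:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, shape):
    """Gamma density with unit scale, normalized by Gamma(shape)."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = t[pos] ** (shape - 1) * np.exp(-t[pos]) / gamma_fn(shape)
    return out

def double_gamma_hrf(t):
    """Canonical double-gamma HRF: a peak around 5 s minus a later undershoot
    (shape parameters 6 and 16, undershoot ratio 1/6), scaled to max 1."""
    h = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6.0
    return h / h.max()

# Hypothetical task timing: onsets in seconds, fixed 1.5 s duration, TR = 2 s.
tr, n_scans, dt = 2.0, 100, 0.1
onsets, duration = [10.0, 60.0, 110.0], 1.5

# Boxcar at high temporal resolution, convolved with the HRF, then
# downsampled to the scan grid (one value per TR).
n_hr = int(n_scans * tr / dt)
boxcar = np.zeros(n_hr)
for onset in onsets:
    boxcar[int(onset / dt):int((onset + duration) / dt)] = 1.0
hrf = double_gamma_hrf(np.arange(0.0, 32.0, dt))
regressor = np.convolve(boxcar, hrf)[:n_hr][::int(tr / dt)]

assert regressor.shape == (n_scans,)
```

The temporal derivative mentioned in the text would be a second column obtained by differentiating this regressor.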

Activation-count maps.

The statistical map for each participant was binarized at z=±1.65 (which corresponds to a two-sided test at p<0.1). Then, the average of these maps was computed across participants. The average negative map (the percentage of subjects showing a negative effect with z<−1.65) is subtracted from the average positive map to indicate the direction of effects. High absolute values indicate a good overlap of activation (or deactivation) across subjects.
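A minimal sketch of the activation-count computation, on synthetic z-maps (the array shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
z_maps = rng.standard_normal((257, 1000))   # hypothetical: participants x voxels

pos = (z_maps > 1.65).mean(axis=0)    # fraction of subjects with z > 1.65
neg = (z_maps < -1.65).mean(axis=0)   # fraction of subjects with z < -1.65
activation_count = pos - neg          # signed overlap map, values in [-1, 1]

assert activation_count.shape == (1000,)
```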

Second-level statistical analysis

Subsequent to the single-subject analyses, two random, non-overlapping subsamples of n subjects were drawn and entered into a second-level analysis. We varied the sample size n of the groups between 10 and 120. We ran the group-level analyses based on two variants of the first level: with a prescribed smoothing of 5.0mm FWHM, and without such a smoothing step. The resampling process was repeated 200 times per group sample size and smoothing condition. To investigate the implications of either pipeline for the group-analysis use case, we ran the same OLS mixed modeling using FSL’s flame on the two disjoint subsets of randomly selected subjects at each resampling repetition. We calculated several metrics of spatial agreement on the resulting maps of (uncorrected) p-values. We also calculated the spatial agreement of the thresholded statistical maps, binarized with a threshold chosen to control the false-discovery rate (FDR) at 5% (using FSL’s fdr command).
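One common agreement metric for the binarized, FDR-thresholded maps is the Dice coefficient; the choice of Dice here is an assumption for illustration (the paper computes several agreement metrics):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary maps (1.0 = perfect agreement)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical thresholded group maps from the two disjoint subsamples.
map_a = np.array([1, 1, 0, 0, 1, 0], bool)
map_b = np.array([1, 0, 0, 1, 1, 0], bool)
assert np.isclose(dice(map_a, map_b), 2 * 2 / (3 + 3))
```

Repeating this over the 200 resampling repetitions per sample size yields a distribution of agreement values per pipeline.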