Jason R Taylor, Nitin Williams, Rhodri Cusack, Tibor Auer, Meredith A Shafto, Marie Dixon, Lorraine K Tyler, Richard N Henson.
Abstract
This paper describes the data repository for the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) initial study cohort. The Cam-CAN Stage 2 repository contains multi-modal (MRI, MEG, and cognitive-behavioural) data from a large (approximately N=700), cross-sectional adult lifespan (18-87 years old) population-based sample. The study is designed to characterise age-related changes in cognition and brain structure and function, and to uncover the neurocognitive mechanisms that support healthy cognitive ageing. The database contains raw and preprocessed structural MRI, functional MRI (active tasks and resting state), and MEG data (active tasks and resting state), as well as derived scores from cognitive behavioural experiments spanning five broad domains (attention, emotion, action, language, and memory), and demographic and neuropsychological data. The dataset thus provides a depth of neurocognitive phenotyping that is currently unparalleled, enabling integrative analyses of age-related changes in brain structure, brain function, and cognition, and providing a testbed for novel analyses of multi-modal neuroimaging data.
Keywords: Ageing; Brain imaging; Cognition; Data repository; Magnetic resonance imaging; Magnetoencephalography
Year: 2015 PMID: 26375206 PMCID: PMC5182075 DOI: 10.1016/j.neuroimage.2015.09.018
Source DB: PubMed Journal: Neuroimage ISSN: 1053-8119 Impact factor: 6.556
Structural MRI scans collected in Stage 2.
| Scan type | Sequence | TR (ms) | TE (ms) | Flip angle (°) | FOV (mm) | Voxel size (mm) | Other |
|---|---|---|---|---|---|---|---|
| T1-weighted | MPRAGE | 2250 | 2.99 | 9 | 256 × 240 × 192 | 1 × 1 × 1 | GRAPPA: 2; TI: 900 ms |
| T2-weighted | SPACE | 2800 | 408 | 9 | 256 × 256 × 192 | 1 × 1 × 1 | GRAPPA: 2 |
| Diffusion-weighted | | | | | | | |
| b = 1000 | Twice-refocused SE | 9100 | 104 | | 192 × 192 | 2 × 2 × 2 | directions: 30; slices: 66 (axial); averages: 1 |
| b = 2000 | Twice-refocused SE | 9100 | 104 | | 192 × 192 | 2 × 2 × 2 | directions: 30; slices: 66 (axial); averages: 1 |
| b = 0 | Twice-refocused SE | 9100 | 104 | | 192 × 192 | 2 × 2 × 2 | slices: 66 (axial); images: 3 |
| Magnetisation transfer | | | | | | | |
| Baseline | MT-prepared SPGR | 30 | 5 | | 192 × 192 | 1.5 × 1.5 × 1.5 | bandwidth: 190 Hz/px |
| MT | MT-prepared SPGR | 30 | 5 | | 192 × 192 | 1.5 × 1.5 × 1.5 | bandwidth: 190 Hz/px; RF pulse applied |
Notes: TR = repetition time; TE = echo time; TI = inversion time; FOV = field of view; MPRAGE = magnetisation-prepared rapid gradient echo; SPACE = spatially-selective single-slab 3D turbo-spin-echo (Mugler and Brookeman, 2004); SE = spin echo; MT = magnetisation transfer; SPGR = spoiled gradient echo.
TR = 50 ms was used if SAR limits were exceeded.
RF pulse: Gaussian RF pulse, 1950 Hz (bandwidth = 375 Hz, flip angle = 500°, duration = 9984 μs).
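The paired baseline and MT-saturated scans above support a voxel-wise magnetisation transfer ratio (MTR) map, MTR = 100 × (S_baseline − S_MT) / S_baseline. A minimal NumPy sketch (function name and toy values are illustrative, not from the Cam-CAN pipeline):

```python
import numpy as np

def mt_ratio(baseline, mt, eps=1e-6):
    """Voxel-wise magnetisation transfer ratio in percent units:
    MTR = 100 * (S_baseline - S_MT) / S_baseline.
    `eps` guards against division by zero in background voxels."""
    baseline = np.asarray(baseline, dtype=float)
    mt = np.asarray(mt, dtype=float)
    return 100.0 * (baseline - mt) / np.maximum(baseline, eps)

# Toy example: MT saturation reduces signal, giving a positive MTR.
baseline = np.array([1000.0, 800.0])
mt = np.array([600.0, 600.0])
print(mt_ratio(baseline, mt))  # [40. 25.]
```

The same formula applies unchanged to full 3D image arrays, since the operations broadcast element-wise.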
Functional MRI scans collected in Stage 2.
| Scan type | Sequence | TR (ms) | TE (ms) | Flip angle (°) | FOV | Voxel size (mm) | Volumes (N) | Slices (N) | Slice thickness (mm) | Gap (%) | Order | Task |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Resting state | EPI | 1970 | 30 | 78 | 192 × 192 | 3 × 3 × 4.44 | 261 | 32 | 3.7 | 20 | Descending | Rest with eyes closed |
| Movie watching | multi-echo EPI | 2470 | 5 echoes | 78 | 192 × 192 | 3 × 3 × 4.44 | 5x | 32 | 3.7 | 20 | Descending | Watch and listen to movie |
| Sensorimotor task | EPI | 1970 | 30 | 78 | 192 × 192 | 3 × 3 × 4.44 | 261 | 32 | 3.7 | 20 | Descending | Audio-visual stimuli and manual response |
| Field map | ||||||||||||
| Magnitude | PE-GRE | 400 | 2 echoes | 60 | 192 × 192 | 3 × 3 × 4.44 | 1 | 32 | 3.7 | 20 | Descending | None |
| Phase | PE-GRE | 400 | 2 echoes | 60 | 192 × 192 | 3 × 3 × 4.44 | 1 | 32 | 3.7 | 20 | Descending | None |
Notes. TR = repetition time; TE = echo time; FOV = field of view; EPI = T2*-weighted gradient-echo echo-planar image; PE-GRE = phase-encoded gradient echo.
Task: see text for details.
Multi-echo EPI TEs: 9.4, 21.2, 33, 45, 57 ms.
PE-GRE TEs: 5.19, 7.65 ms.
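The dual-echo PE-GRE scans (ΔTE = 7.65 − 5.19 = 2.46 ms) yield a B0 field map from the phase difference between echoes, f = Δφ / (2π·ΔTE). A minimal sketch of that conversion (the function name is illustrative; dedicated tools such as FSL handle unwrapping and masking in practice):

```python
import numpy as np

TE1, TE2 = 5.19e-3, 7.65e-3   # echo times in seconds (from the table)
DELTA_TE = TE2 - TE1          # 2.46 ms

def fieldmap_hz(phase1, phase2):
    """Off-resonance field map in Hz from a dual-echo GRE phase pair.

    The phase difference is wrapped into (-pi, pi] before conversion;
    true off-resonance beyond +/- 1/(2*DELTA_TE) ~ +/- 203 Hz aliases.
    """
    dphi = np.angle(np.exp(1j * (np.asarray(phase2) - np.asarray(phase1))))
    return dphi / (2 * np.pi * DELTA_TE)

# A phase advance of pi/2 rad over 2.46 ms corresponds to ~101.6 Hz.
print(fieldmap_hz(0.0, np.pi / 2))
```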
MEG data collected in Stage 2.
| Recording type | Sampling rate (Hz) | Duration (min:s) | Task |
|---|---|---|---|
| Resting state | 1000 | 08:40 | Rest with eyes closed |
| Sensorimotor task | 1000 | 08:40 | Audio-visual stimuli and manual response |
| Audio-visual task | 1000 | 02:00 | Separate auditory and visual stimuli, no manual response |
Task: See text for details.
Cognitive behavioural tasks used in Stage 2.
| Task name | Brief description | Key variables | References |
|---|---|---|---|
| Emotion expression recognition | View face and label emotion expressed (happy, sad, anger, fear, disgust, surprise) where faces are morphs along axes between emotional expressions. | Acc, RT for each emotion | |
| Emotional memory | Study: View (positive, neutral, or negative) background image, then object image superimposed, and imagine a ‘story’ linking the two; Test (incidental): View and identify degraded image of (studied, new) object, then judge memory and confidence for visually intact image of same object, then recall valence and any details of background image from study phase. | For each valence: Priming (Acc for studied vs. new degraded objects); familiarity (Acc for item memory); recollection (Acc for background memory) | |
| Emotional reactivity and regulation | View (positive, neutral, negative) film clips under instructions to simply ‘watch’ or ‘reappraise’ (attempt to reduce emotional impact by reinterpreting its meaning; for some negative films only), then rate emotional impact (how negative, positive they felt during clip) and the degree to which they successfully reappraised. | Reactivity (ratings for ‘watch’ trials: positive vs. neutral; negative vs. neutral); regulation (ratings for ‘reappraise’ negative vs. ‘watch’ negative) | |
| Face recognition: familiar faces | View faces of famous people (and some unknown foils), judge whether each is familiar, and if so, what is known about the person (occupation, nationality, origin of fame, etc.), then attempt to provide person's name. | Acc (identifying information or full name given) as a proportion of number of faces recognised as familiar, subtracting false alarms (unknown faces given ‘familiar’ response) | |
| Face recognition: unfamiliar faces | Given a target image of a face, identify same individual in an array of 6 face images (with possible changes in head orientation and lighting between target and same face in the test array) | Acc | |
| Fluid intelligence | Complete nonverbal puzzles involving series completion, classification, matrices, and conditions. | Acc on each of 4 subtests | |
| Force matching | Match mechanical force applied to left index finger by using right index finger either directly, pressing a lever which transmits force to left index finger, or indirectly, by moving a slider which adjusts the force transmitted to the left index finger. | Average difference between target force and matched force applied by participant via (direct, indirect) means | |
| Hotel task | Perform tasks in role of hotel manager: write customer bills, sort money, proofread advert, sort playing cards, alphabetise list of names. Total time must be allocated equally between tasks; there is not enough time to complete any one task. | Number of tasks attempted, deviation from optimal time allocation | |
| Motor learning | Time-pressured movement of a cursor to a target by moving an (occluded) stylus under veridical, perturbed (30°), and reset (veridical again) mappings between visual and real space. | RT (movement time to hit target), trajectory error (angle) across phases | |
| Picture-picture priming | Name the pictured object presented alone (baseline), then when preceded by a prime object that is phonologically related (one, two initial phonemes), semantically related (low, high relatedness), or unrelated. | Acc, RT, priming effects (RT of each condition vs. baseline) | |
| Proverb comprehension | Read and interpret three English proverbs. | Sum of response ratings (1 = incorrect or “don't know”, 2 = partly correct but literal, 3 = correct and abstract) | |
| Sentence comprehension | Listen to and judge grammatical acceptability of partial sentences, beginning with an (ambiguous, unambiguous) sentence stem (e.g., “Tom noticed that landing planes…”) followed by a disambiguating continuation word (e.g., “are”) in a different voice. Ambiguity is either semantic or syntactic, with empirically determined dominant and subordinate interpretations. | RT, proportion of “unacceptable” responses in each condition | |
| Tip-of-the-tongue task | View faces of famous people (actors, musicians, politicians, etc.) and respond with the person's name, or “don't know” if they do not know the person's name (even if familiar), or “TOT” if they know the person's name but are (temporarily) unable to retrieve it. | Proportion of responses of each type; incorrect “Know” responses; partial information responses (e.g., occupation) | |
| Visual short-term memory | View (1–4) coloured discs briefly presented on a computer screen, then after a delay, attempt to remember the colour of the disc that was at a cued location, with response indicated by selecting the colour on a colour wheel (touchscreen input). | Parameters of model fitted to error distribution: VSTM capacity (k), precision, probability of reporting an un-cued item | |
Notes. Acc = accuracy; RT = response time.
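The visual short-term memory fits described above follow the general mixture-model approach of decomposing the response-error distribution into a precise (von Mises) component and a uniform guessing component. The sketch below, under those assumptions, fits a simplified two-component version by maximum likelihood (the swap component for reporting an un-cued item is omitted for brevity; parameter names `p_guess` and `kappa` are illustrative, not the repository's variable names):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import i0

def mixture_nll(params, errors):
    """Negative log-likelihood of a two-component mixture:
    a von Mises centred on the target colour (precision kappa)
    plus a uniform guess distribution with weight p_guess."""
    p_guess, kappa = params
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
    lik = (1 - p_guess) * vm + p_guess / (2 * np.pi)
    return -np.sum(np.log(lik))

def fit_mixture(errors):
    res = minimize(mixture_nll, x0=[0.2, 5.0], args=(errors,),
                   bounds=[(1e-3, 1 - 1e-3), (0.1, 100.0)])
    return res.x  # (p_guess, kappa)

# Simulate: 70% precise responses (kappa = 8), 30% uniform guesses.
rng = np.random.default_rng(0)
n = 2000
in_memory = rng.random(n) < 0.7
errors = np.where(in_memory,
                  rng.vonmises(0.0, 8.0, n),
                  rng.uniform(-np.pi, np.pi, n))
p_guess, kappa = fit_mixture(errors)
print(round(p_guess, 2), round(kappa, 1))  # recovers roughly 0.3 and 8
```

From such fits, a capacity estimate can be derived by scaling (1 − p_guess) by set size, which is one common way a k-like parameter is obtained.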
Fig. 1. Schematic illustration of MRI processing pipelines. Coloured columns indicate processing stream (see corresponding labels); shaded rows indicate stage of processing (see corresponding labels). Blue text indicates a data type; red text indicates a processing step; dashed lines and boxes emphasise important and unique steps in the pipelines (coregistration of all images to T1; normalisation to MNI by applying flow field parameters computed during DARTEL processing); dotted lines and boxes illustrate planned analyses. See text for a complete description. Notes: Abbreviations as in footnote 1 and text; Mb = magnetisation transfer baseline; ∑ indicates weighted sum.
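The weighted sum (∑) of multi-echo EPI volumes in the pipeline above is commonly implemented as a per-echo weighting before summation; one widespread scheme weights each echo by TE·exp(−TE/T2*). A minimal sketch under that assumption (the exact weighting used in the Cam-CAN pipeline may differ):

```python
import numpy as np

TES = np.array([9.4, 21.2, 33.0, 45.0, 57.0])  # multi-echo TEs in ms (from the fMRI table)

def combine_echoes(echo_data, t2star=30.0):
    """Weighted sum of multi-echo EPI data.

    Weights follow the common T2*-weighted scheme
    w_i proportional to TE_i * exp(-TE_i / T2*), normalised to sum to 1,
    which is an assumption here, not the documented Cam-CAN choice.
    `echo_data` has echoes along the first axis.
    """
    w = TES * np.exp(-TES / t2star)
    w = w / w.sum()
    return np.tensordot(w, np.asarray(echo_data, dtype=float), axes=1)

# Toy mono-exponential decay across echoes; the combined value lies
# within the range of the individual echo signals.
signal = 100.0 * np.exp(-TES / 30.0)
print(combine_echoes(signal))
```

Because the weights are normalised, combining identical echo images returns the image unchanged, which is a useful sanity check.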
Fig. 2. Schematic illustration of MEG processing pipelines. Coloured columns indicate sensor-space and source-space streams (see corresponding labels); shaded rows indicate stage of processing (see corresponding labels). Blue text indicates a data type; red text indicates a processing step; dotted lines and boxes illustrate planned analyses. See text for a complete description. Notes: Abbreviations as in footnote 1 and text; tSSS = temporal extension of signal space separation.