Fredrik Öhman, Jason Hassenstab, David Berron, Michael Schöll, Kathryn V. Papp.
Abstract
There is a pressing need to capture and track subtle cognitive change at the preclinical stage of Alzheimer's disease (AD) rapidly, cost-effectively, and with high sensitivity. Concurrently, the landscape of digital cognitive assessment is rapidly evolving as technology advances, technology adoption among older adults increases, and external events (i.e., COVID-19) necessitate remote digital assessment. Here, we provide a snapshot review of the current state of digital cognitive assessment for preclinical AD, including different device platforms/assessment approaches, levels of validation, and implementation challenges. We focus on articles, grants, and recent conference proceedings specifically querying the relationship between digital cognitive assessments and established biomarkers for preclinical AD (e.g., amyloid beta and tau) in clinically normal (CN) individuals. Several digital assessments were identified across platforms (e.g., digital pens, smartphones). Digital assessments varied by intended setting (e.g., remote vs. in-clinic), level of supervision (e.g., self vs. supervised), and device origin (personal vs. study-provided). At least 11 publications characterize digital cognitive assessment against AD biomarkers among CN individuals. First available data demonstrate promising validity of this approach against both conventional assessment methods (moderate to large effect sizes) and relevant biomarkers (predominantly weak to moderate effect sizes). We discuss levels of validation and issues relating to usability, data quality, data protection, and attrition. While still in its infancy, digital cognitive assessment, especially when administered remotely, will undoubtedly play a major future role in screening for and tracking preclinical AD.
Keywords: clinical assessment; clinical trials; cognition; computerized assessment; digital cognitive biomarkers; home‐based assessment; preclinical Alzheimer's disease; smartphone‐based assessment
Year: 2021 PMID: 34295959 PMCID: PMC8290833 DOI: 10.1002/dad2.12217
Source DB: PubMed Journal: Alzheimers Dement (Amst) ISSN: 2352-8729
Validation of primarily in‐clinic computerized and tablet‐based cognitive assessment
| Type of validation | Instrument | Authors | Longitudinal/cross‐sectional | Platform | Validation | Effect size | Biomarker | Participants (n) |
|---|---|---|---|---|---|---|---|---|
| Biomarker validation | Cogstate C3 | Papp et al. (2020) | Cross‐sectional | Tablet | Lower scores were associated with Aβ | Small effect (d = 0.11) | [18F]florbetapir‐PET | 4486 |
| | NIHTB‐CB | Snitz et al. (2020) | Cross‐sectional | Tablet | Lower scores were associated with tau in higher Braak regions | Small/moderate effect | [11C]PiB‐PET, [18F]AV1451‐PET | 118 |
| | Cogstate CPAL | Baker et al. (2019) | Longitudinal (36 months) | Personal computer | Lower training effect in Aβ+ CN | Small effect (d = 0.25 to 0.30) | [11C]PiB‐PET | 356 |
| | Cogstate CBB | Mielke et al. (2016) | Longitudinal (30 months) | Personal computer | No significant correlation between Aβ and cognition | No effect | [11C]PiB‐PET | 464 |
| | CANTAB | Bischof et al. (2016) | Cross‐sectional | Personal computer | Lower memory scores were associated with higher Aβ | Moderate effect (r = 0.47 to 0.48) | [18F]florbetapir‐PET | 147 |
| | Cogstate CBB | Lim et al. (2015) | Longitudinal (36 months) | Personal computer | Decline in memory was greater in Aβ+ CN | Small/moderate effect (d = 0.39 to 0.59) | [11C]PiB‐PET | 178 |
| Paper/pencil validation | NIHTB‐CB and Cogstate C3 | Buckley et al. (2017) | Cross‐sectional | Tablet | Memory tasks were associated with the PACC | Moderate/large effect (ρ = 0.49 to 0.58) | N/A | 50 |
Abbreviations: Aβ, amyloid beta; β, standardized β coefficient; CANTAB, Cambridge Neuropsychological Test Automated Battery; CN, clinically normal; Cogstate CBB, Cogstate Brief Battery; Cogstate CPAL, Cogstate Continuous Paired Associate Learning; d, Cohen's d; NIHTB‐CB, National Institutes of Health Toolbox Cognition Battery; PACC, Preclinical Alzheimer Cognitive Composite; PET, positron emission tomography; PiB, Pittsburgh compound B; ρ, Spearman's rank correlation; r, Pearson correlation coefficient.
Note: Only published articles are included in this table.
FIGURE 1. A, Cogstate One Back tests. Copyright © 2020 Cogstate. All rights reserved. Used with Cogstate's permission. B, CANTAB Spatial Span and Paired Associates Learning. Copyright Cambridge Cognition. All rights reserved. C, NIH Toolbox Pattern Comparison Processing Speed Test Age 7+ v2.1. Used with permission of NIH Toolbox, © 2020 National Institutes of Health and Northwestern University.
FIGURE 2. A, Ambulatory Research in Cognition (ARC) Symbols Test, Grids Test, and Prices Test. Used with permission from J. Hassenstab. B, neotiv Objects‐in‐Rooms Recall test. Used with permission from neotiv GmbH. C, Boston Remote Assessment for Neurocognitive Health (BRANCH). Used with permission from K. V. Papp.
Validation of remotely administered tablet‐ and smartphone‐based cognitive assessment and other novel types of cognitive assessment
| Type of validation | Instrument | Authors | Longitudinal/cross‐sectional | Platform | Validation | Effect size | Biomarker | Participants (n) |
|---|---|---|---|---|---|---|---|---|
| Biomarker validation | FNAME | Samaroo et al. (2020) | Longitudinal | iPad | Diminished learning was associated with greater amyloid and tau PET burden | Moderate effect (d = 0.60) | [11C]PiB‐PET, [18F]flortaucipir‐PET | 94 |
| | ORCA‐LLT | Lim et al. (2020) | Longitudinal | Any platform using a web browser | Lower learning curves were seen in Aβ+ CN | Large effect (d = 2.22) | [11C]PiB‐PET, [18F]florbetapir‐PET, or [18F]flutemetamol‐PET | 80 |
| | Sea Hero Quest | Coughlan et al. (2019) | Cross‐sectional | Smartphone | Wayfinding discriminated between carriers and non‐carriers | Acceptable discrimination (AUC = 0.71) | | 60 |
| | Speech analysis | Verfaillie et al. (2019) | Cross‐sectional | Audio recorder | Fewer specific words were associated with Aβ burden | Moderate effect (β = 0.48 to 0.69) | CSF Aβ42, [18F]florbetapir‐PET | 63 |
| | Spatial navigation task | Allison et al. (2016) | Cross‐sectional | Computer | Lower wayfinding scores were seen in Aβ+ CN | Moderate effect (d = 0.76) | CSF Aβ42 | 71 |
| Paper/pencil validation | VPC task using eye‐tracking devices | Bott et al. (2018) | Cross‐sectional | Eye‐tracking camera and device‐embedded camera | Eye movement was associated with the PACC and NIHTB‐CB | Moderate effect (ρ = 0.35 to 0.39) | N/A | 49 |
Abbreviations: Aβ, amyloid beta; AUC, area under the curve; β, beta interaction effect; CN, clinically normal; CSF, cerebrospinal fluid; d, Cohen's d; NIHTB‐CB, National Institutes of Health Toolbox Cognition Battery; ORCA‐LLT, Online Repeatable Cognitive Assessment‐Language Learning Task; PACC, Preclinical Alzheimer Cognitive Composite; PET, positron emission tomography; PiB, Pittsburgh compound B; ρ, Spearman's rank correlation; VPC, visual paired comparison.
Note: Only published articles are included in this table.
FIGURE 3. A, Sea Hero Quest Wayfinding and Path Integration. Used with permission from M. Hornberger. B, Digital Maze Test from a survey perspective and landmarks from a first‐person perspective. Used with permission from D. Head. C, Data and analysis process for the digital Clock Drawing Test (dCDT), from data collection, through the artificial intelligence (AI) analysis steps, to the machine learning (ML) analysis and reporting. Used with permission from Digital Cognition Technologies.
FIGURE 4. Overview of cognitive tests and their platforms. ARC, Ambulatory Research in Cognition; BRANCH, Boston Remote Assessment for Neurocognitive Health; CANTAB, Cambridge Neuropsychological Test Automated Battery; dCDT, digital Clock Drawing Test; M2C2, Monitoring of Cognitive Change; NIH‐TB, National Institutes of Health Toolbox; ORCA‐LLT, Online Repeatable Cognitive Assessment‐Language Learning Task. *Available for use through a web browser.