Zihan Ding, Tsz-Lok Lee, Agnes S. Chan.
Abstract
The dementia population is increasing as the world's population grows older. The current systematic review aims to identify digital cognitive biomarkers from computerized tests for detecting dementia and its risk state, mild cognitive impairment (MCI), and to evaluate their diagnostic performance. A literature search was performed in three databases and supplemented by a Google search for the names of previously identified computerized tests. Computerized tests were categorized into five types: memory tests, test batteries, other single/multiple cognitive tests, handwriting/drawing tests, and daily living tasks and serious games. Seventy-eight studies were eligible, and around 90% of them were rated as high quality based on the Newcastle-Ottawa Scale (NOS). Most of the digital cognitive biomarkers achieved diagnostic performance comparable to, or even better than, traditional paper-and-pencil tests. Moderate to large group differences were consistently observed in cognitive outcomes related to memory and executive functions, as well as in some novel outcomes measured by handwriting/drawing tests, daily living tasks, and serious games. These outcomes have the potential to be sensitive digital cognitive biomarkers for MCI and dementia. Therefore, digital cognitive biomarkers can be a sensitive and promising clinical tool for detecting MCI and dementia.
Keywords: computerized test; dementia; digital biomarker; digital cognitive biomarker; digital cognitive test; mild cognitive impairment; systematic review
Year: 2022 PMID: 35887956 PMCID: PMC9320101 DOI: 10.3390/jcm11144191
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.964
Summary of the effect sizes of group differences in various outcomes of computerized tests.

| Test Type | Dementia vs. Control | | MCI vs. Control | | CI vs. Control | |
|---|---|---|---|---|---|---|
| | Effect Size | Most Sensitive Digital Biomarkers | Effect Size | Most Sensitive Digital Biomarkers | Effect Size | Most Sensitive Digital Biomarkers |
| Memory tests | 0.7–1.6 | MITSI-L (PAL correct pairs A2 and B2) | 2.5–4.9 | CANTAB-PAL total errors adjusted | - | - |
| Test batteries | 0.1–2.9 | CANTAB (paired associate learning, rapid visual processing, spatial recognition memory) | 0.3–7.2 | CANTAB (paired associate learning) | 0.2–2.1 | BHA ("Favorite (memory)" total correct) |
| Handwriting/drawing tests | 0.1–2.1 | dCDT (time on surface, time in air, total time) | 0.2–2.4 | Chinese Handwriting task (stroke position control, pause time per stroke, stroke orientation) | - | - |
| Daily living tasks and serious games | 0.1–1.3 | Computer-based Klondike Solitaire (average think time, average accuracy, total time SD) | 0.8–2.9 | SIMBAC (total accuracy, total completion time); Computerized Touch-Panel Games ("arranging pictures (processing and remote memory)" completion time; "beating devils (judgment)" accuracy; "flipping cards (recent memory)" completion time; "finding mistakes (attention and discrimination)" completion time) | 0.7–1.2 | CFSAT accuracy ("Ugreens website", "Internet banking", "medication management", "ATM task", "Ticket task") |
| Other single/multiple cognitive tests | 0.7–1.2 | Correct cancellations in e-CT | 0.9–3.1 | Computerized SRT task and FRT score adjusted for age and education | 1.0–2.0 | TMT (total score, total completion time, total response time) |
Abbreviations. BHA: Brain Health Assessment; BoCA: Boston Cognitive Assessment; CANTAB: Cambridge Neuropsychological Test Automated Battery; CDST: Computerized Dementia Screening Test; CFSAT: Computer-based functional skills assessment and training; CoCoSc: Computerized Cognitive Screen; CompBased-CAT: CompBased administered by Computerized Adaptive Testing; dCDT: digital Clock Drawing Test; dTMT-B&W-A: digital Trail-Making Test—Black and White—Part A; e-CT: electronic version of Cancellation Test; FRT: Flanker Reaction Time; Inbrain CST: Inbrain Cognitive Screening Test; MITSI-L: The Miami Test of Semantic Interference; PAL: Paired Associate Learning; SIMBAC: SIMulation-Based Assessment of Cognition; SRT: Simple Reaction Time; TMT: Trail-Making Test; MoCA-CC: Computerized Tool for Beijing version of The Montreal Cognitive Assessment (MoCA).
Summary of the diagnostic performance of computerized tests and the comparison with paper-and-pencil tests.

| Test Type | Sen (%) | Spec (%) | AUC | Computerized Test vs. Paper-and-Pencil Test | Whether Computerized Test Is Better |
|---|---|---|---|---|---|
| **MCI vs. Control** | | | | | |
| Memory tests | 42.0–85.8 | 66.0–93.3 | 0.53–0.93 | CANTAB-PAL vs. CERAD wordlist learning delayed recall | inferior |
| | | | | MemTrax vs. MoCA-BJ | better |
| | | | | Digital VSM vs. Cube-copying test | much better |
| | | | | Digital TPT vs. paper-and-pencil TPT | better |
| Test batteries | 41.4–100.0 | 64.0–100.0 | 0.65–0.97 | CANS-MCI vs. MoCA, ACE-R | comparable |
| | | | | Subsets of NeuroTrax MindStreams vs. subsets of WMS-III, RAVLT, CDT, TMT-A, Boston Naming Test, COWA | comparable; some subsets even better |
| | | | | Memory factor in tablet-based cognitive assessments vs. MMSE | inferior |
| | | | | BHA vs. MoCA | better |
| | | | | CAMCI vs. MMSE | better |
| | | | | COMCOG-CAT vs. CAMCOG | comparable |
| Handwriting/drawing tests | 71.4–100.0 | 56.0–100.0 | 0.77–0.89 | Machine learning on dCDT features vs. CERAD | comparable |
| Daily living tasks and serious games | 76.9–84.4 | 58.0–88.9 | 0.77–0.90 | SASG vs. MoCA | comparable |
| | | | | SIMBAC vs. MMSE, composite score of RAVLT delayed recall, Boston Naming Test, Digit Span, Digit Symbol Coding, and TMT-B | comparable |
| Other single/multiple cognitive tests | 56.3–84.7 | 53.6–90.5 | 0.67–0.91 | e-CT vs. K-T CT | comparable |
| **Dementia vs. Control** | | | | | |
| Memory tests | 88.9 | 92.9 | - | Digital TPT vs. paper-and-pencil TPT | comparable |
| Test batteries | 52.9–100.0 | 56.0–100.0 | 0.54–0.99 | CST vs. MMSE | better |
| | | | | CCS vs. MoCA | inferior |
| | | | | BHA vs. MoCA | comparable |
| Handwriting/drawing tests | 82.0–97.7 | 71.4–86.0 | 0.90–0.92 | dCDT parameters vs. CERAD | comparable |
| Daily living tasks and serious games | 86.0 | 75.0 | 0.97 | SIMBAC vs. MMSE, composite score of RAVLT delayed recall, Boston Naming Test, Digit Span, Digit Symbol Coding, and TMT-B | comparable |
| Other single/multiple cognitive tests | 62.7–86.1 | 75.0–95.3 | 0.76–0.95 | e-CT vs. K-T CT | comparable |
| **CI vs. Control** | | | | | |
| Memory tests | 91.8 | 72.0 | 0.89 | - | - |
| Test batteries | 70.7–91.0 | 69.0–94.2 | 0.78–0.95 | BHA vs. MoCA | better |
| | | | | eSAGE vs. paper version of SAGE | better |
| Handwriting/drawing tests | 74.0–89.7 | 70.0–100.0 | 0.84–0.92 | Machine learning on dCDT vs. MMSE | better |
| Daily living tasks and serious games | 70.0 | 82.0 | 0.84 | - | - |
| Other single/multiple cognitive tests | 77.0–97.0 | 80.6–92.6 | 0.77–0.97 | TMT vs. MMSE | comparable |
| | | | | e-CT vs. K-T CT | comparable |
Abbreviations. ACE-R: Addenbrooke’s Cognitive Examination-Revised; BHA: Brain Health Assessment; CANTAB: Cambridge Neuropsychological Test Automated Battery; CAMCI: Computer Assessment of Memory and Cognitive Impairment; CDT: Clock Drawing Test; CERAD: The Consortium to Establish a Registry for Alzheimer’s Disease; COMCOG: Computer-assisted Cognitive Rehabilitation; COMCOG-CAT: Computer-assisted Cognitive Rehabilitation administered by Computerized Adaptive Testing; COWA: Controlled Oral Word Association Test; e-CT: electronic version of Cancellation Test; e-SAGE: electronic version of Self-Administered Gerocognitive Examination; MMSE: Mini-Mental State Examination; RAVLT: Rey Auditory Verbal Learning Test; PAL: Paired Associate Learning; SAGE: Self-Administered Gerocognitive Examination; SASG: Smart Aging Smart Game; SIMBAC: SIMulation-Based Assessment of Cognition; TMT-A: Trail-Making Test—Part A; TMT-B: Trail-Making Test—Part B; MoCA: The Montreal Cognitive Assessment (MoCA); MoCA-BJ: Beijing version of The Montreal Cognitive Assessment (MoCA); VSM: Visuo-spatial Memory task; WMS-III: Wechsler Memory Scale, 3rd edition.
Figure 1. PRISMA flow diagram of the study selection process.