Literature DB >> 32864411

cCOG: A web-based cognitive test tool for detecting neurodegenerative disorders.

Hanneke F M Rhodius-Meester1,2, Teemu Paajanen3, Juha Koikkalainen4, Shadi Mahdiani5, Marie Bruun6, Marta Baroni7, Afina W Lemstra1, Philip Scheltens1, Sanna-Kaisa Herukka8,9, Maria Pikkarainen8, Anette Hall8, Tuomo Hänninen9, Tiia Ngandu10,11, Miia Kivipelto8,10,11, Mark van Gils5, Steen Gregers Hasselbalch6, Patrizia Mecocci7, Anne Remes12, Hilkka Soininen8,9, Wiesje M van der Flier1,13, Jyrki Lötjönen4.   

Abstract

INTRODUCTION: Web-based cognitive tests have potential for standardized screening in neurodegenerative disorders. We examined accuracy and consistency of cCOG, a computerized cognitive tool, in detecting mild cognitive impairment (MCI) and dementia.
METHODS: Clinical data of 306 cognitively normal, 120 mild cognitive impairment (MCI), and 69 dementia subjects from three European cohorts were analyzed. Global cognitive score was defined from standard neuropsychological tests and compared to the corresponding estimated score from the cCOG tool containing seven subtasks. The consistency of cCOG was assessed comparing measurements administered in clinical settings and in the home environment.
RESULTS: cCOG produced accuracies (receiver operating characteristic-area under the curve [ROC-AUC]) between 0.71 and 0.84 in detecting MCI and between 0.86 and 0.94 in detecting dementia when administered at the clinic and at home. The accuracy was comparable to that of standard neuropsychological tests (AUC 0.69-0.77 for MCI, 0.91-0.92 for dementia).
DISCUSSION: cCOG provides a promising tool for detecting MCI and dementia, with potential for a cost-effective approach including home-based cognitive assessments.
© 2020 The Authors. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring published by Wiley Periodicals, Inc. on behalf of Alzheimer's Association.

Keywords:  Alzheimer's disease; clinical decision support; cognition; computerized cognitive test; dementia; memory; mild cognitive impairment; neuropsychology; web‐based cognitive test

Year:  2020        PMID: 32864411      PMCID: PMC7446945          DOI: 10.1002/dad2.12083

Source DB:  PubMed          Journal:  Alzheimers Dement (Amst)        ISSN: 2352-8729


BACKGROUND

Despite the great progress in diagnostic biomarkers for Alzheimer's disease (AD) and other types of dementia, only 20% to 50% of dementia cases are recognized and documented. This indicates a need for simple and efficient tools, as well as clinical procedures, for timely detection of neurodegenerative disorders. Although no cure for major neurodegenerative disorders such as AD is available, early diagnosis, combined with adequate management, can affect cognition, delay institutionalization, and lead to socioeconomic benefits. Early detection and treatment of patients with cognitive impairment is estimated to be cost effective even when the increased assessment costs are taken into account. In clinical practice, elderly persons with suspected cognitive impairment are typically first evaluated with simple cognitive tests, such as the Mini‐Mental State Examination (MMSE), the Montreal Cognitive Assessment, the Clock Drawing Test, or the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) battery. Further examinations with comprehensive neuropsychological assessments are ordered according to the clinical symptoms and the cognitive screening results already obtained. Neuropsychological examinations are, however, time consuming and require a specialist psychologist. Web‐based tests may increase the availability of testing (in the clinic, and perhaps even at home) while helping to reduce costs. In addition, their results can be more easily integrated into electronic patient data platforms. Web‐based cognitive tests have shown promise in measuring cognition at the population level and in detecting mild cognitive impairment (MCI) and dementia. However, test performance can vary depending on the test device, and the retest reliability of self‐administered cognitive tests, as well as their correlation with traditional neuropsychological tests, varies. Despite their clear benefits and potential, web‐based tests remain clearly underused in clinical settings.
In this study, we present a web‐based, self‐administrable cognitive test tool, cCOG, designed for early detection of neurodegenerative disorders. cCOG tasks were developed based on traditional cognitive tests to maintain internal validity and to help clinicians interpret the results. In validation, the correlation with standard neuropsychological tests (representing the gold standard of cognition) was studied first. Then, the accuracy of the tool in detecting MCI and dementia was evaluated and compared to standard neuropsychological tests. Finally, the internal consistency of cCOG was studied, with special interest in comparing measurements administered in clinical settings and in the home environment.

METHODS

Subjects

Three study cohorts including cCOG measurements were used. The PredictND (Predict Neurodegenerative Disorders) cohort included patients with MMSE ≥ 25 and contained data from 323 cognitively normal, MCI, and dementia patients from four European memory clinics; the data were acquired during 2015 to 2016. The VPH‐DARE@IT (Virtual Physiological Human: Dementia Research Enabled by Information Technology) cohort (VPHDARE, www.vph-dare.eu) contained data from 80 cognitively normal, MCI, and dementia patients from one memory clinic in Finland, acquired during 2015 to 2016. The Finnish Geriatric Intervention Study cohort (FINGER) contained data from 92 subjects who had overall cognitive performance at the mean level, or slightly lower than expected, for their age according to Finnish population norms, did not have diagnosed MCI or dementia, but were at higher risk of developing dementia; the data were acquired during 2013 to 2014. All patients provided written informed consent for their clinical data to be used for research purposes. Demographic and clinical group characteristics of the cohorts are summarized in Table 1.
TABLE 1

Characteristics of subjects (mean ± standard deviation) included in the three study cohorts PredictND, VPHDARE, and FINGER, and in the ADC reference cohort

|                    | PredictND CN (n = 195) | PredictND MCI (n = 83) | PredictND DEM (n = 45) | Longitudinal CN (n = 94) | Longitudinal MCI (n = 31) | Longitudinal DEM (n = 9) |
| Female (%)         | 66 | 37 | 47 | 66 | 42 | 33 |
| Age (years)        | 64 ± 9 | 71 ± 7 | 71 ± 10 | 63 ± 8 | 70 ± 8 | 71 ± 12 |
| Education (years)  | 14 ± 4 | 12 ± 4 | 13 ± 4 | 14 ± 3 | 13 ± 4 | 12 ± 5 |
| Neuropsychology    |  |  |  |  |  |  |
| MMSE               | 29.3 ± 1.0 | 27.9 ± 1.6 | 27.2 ± 1.9 | 29.4 ± 1.0 | 27.9 ± 1.6 | 27.6 ± 2.1 |
| Memory, learning b | 43 ± 10 | 37 ± 15 | 25 ± 16 | 43 ± 11 | 42 ± 18 | 26 ± 14 |
| Memory, recall b   | 10 ± 3 | 6 ± 5 | 2 ± 4 | 9 ± 3 | 8 ± 5 | 3 ± 5 |
| TMT‐A (s)          | 37 ± 16 | 47 ± 17 | 61 ± 49 | 34 ± 9 | 41 ± 12 | 71 ± 72 |
| TMT‐B (s)          | 84 ± 46 | 131 ± 60 | 172 ± 82 | 73 ± 34 | 112 ± 55 | 158 ± 81 |
| Category fluency   | 24 ± 7 | 20 ± 6 | 15 ± 5 | 25 ± 7 | 21 ± 6 | 15 ± 8 |

“PredictND – longitudinal” is a subcohort for which cCOG was measured at all four time points.

Abbreviations: cCOG, computerized cognitive test; CERAD, Consortium to Establish a Registry for Alzheimer's Disease; CN, cognitively normal; DEM, dementia; MCI, mild cognitive impairment; MMSE, Mini‐Mental State Examination; RAVLT, Rey Auditory Verbal Learning Test; TMT, Trail Making Test.

Verhage rating scale for education

Converted from CERAD scores to RAVLT scores using z‐score comparison

In addition, the Amsterdam Dementia Cohort, composed of data from memory clinic patients assessed between 2004 and 2014, was used as a separate reference data cohort for developing a composite cognitive score from standard neuropsychological tests (see below). Data from 138 cognitively normal individuals and 470 dementia patients were used.

Clinical assessment

The participants in all the cohorts received a clinical work‐up including medical history, physical assessment, traditional neuropsychological assessments, and laboratory tests. Subjects were diagnosed as cognitively normal when the cognitive complaints could not be confirmed by cognitive testing and criteria for MCI or dementia were not met. The Petersen criteria were used to define MCI. Patients were diagnosed with dementia according to the criteria for the specific underlying neurodegenerative disorder.

Global cognitive score based on standard neuropsychological tests

A global cognitive score composed of several standard neuropsychological tests was developed to serve as a gold standard for the overall status of cognition. This score was developed to optimally separate cognitively normal cases from cases with cognitive impairment. To construct a global cognitive score, we selected a subset of tests that was available in all the cohorts: MMSE was selected as a measure for global cognition, learning and delayed recall scores of Rey Auditory Verbal Learning Task (RAVLT) or CERAD word list memory task for episodic memory, Trail Making Test A and B conditions (TMT‐A, TMT‐B) for mental processing speed and executive function, categorical (animals) verbal fluency for language and executive function, and digit span test (forward and backward) for working memory/attention and executive functioning. To bridge differences between cohorts, Z‐scores for RAVLT and CERAD were used. The independent Amsterdam Dementia Cohort was used for setting the parameters of the global cognitive score, which was defined as an index computed by feeding the abovementioned measures to the disease‐state index (DSI) classifier. DSI is a supervised learning method that processes heterogeneous patient data to derive a numeric index value between zero and one denoting the disease status of a patient. In this study, a global cognitive score value of zero means a high similarity to subjects with dementia (worse cognitive performance) while the value of one means a high similarity to cognitively normal subjects (better cognitive performance). Finally, a global cognitive score was computed for all subjects of the PredictND, VPHDARE, and FINGER cohorts. The supporting information appendix gives more details about the method.
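The z‐score bridging of the two memory tests can be sketched as follows. This is a minimal illustration, not the authors' implementation: the normative means and standard deviations below are hypothetical placeholders, since the paper does not report the reference statistics it used.

```python
# Hypothetical normative statistics (NOT from the paper): mean and SD of the
# learning score in a reference population for each word-list test.
CERAD_MEAN, CERAD_SD = 20.0, 4.5   # CERAD word-list learning, max 30
RAVLT_MEAN, RAVLT_SD = 40.0, 9.0   # RAVLT learning, trials 1-5, max 75

def cerad_to_ravlt(cerad_score: float) -> float:
    """Map a CERAD learning score onto the RAVLT scale via its z-score."""
    z = (cerad_score - CERAD_MEAN) / CERAD_SD   # standardize within CERAD norms
    return RAVLT_MEAN + z * RAVLT_SD            # re-express in RAVLT units
```

The same mapping works in either direction, so cohorts using different word-list tests can contribute to one composite score.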

RESEARCH IN CONTEXT

Systematic review: We reviewed the scientific literature regarding the early detection of neurodegenerative disorders using computerized cognitive screening tests.

Interpretation: The results of this study indicate that our newly developed web‐based cognitive test tool, cCOG, is comparable to traditional paper‐and‐pencil neuropsychological tests in detecting mild cognitive impairment (MCI) and dementia disorders. The parameters of the new test tool had strong correlations with traditional neuropsychological tests. In addition, the consistency of self‐administered home assessments and superintended assessments conducted in the clinic was high, especially for the total cCOG score.

Future directions: This article proposes that the new web‐based cognitive test tool is accurate in discriminating MCI and dementia from elderly people with normal cognition. More research is needed to confirm its properties in detecting cognitive change over time. Future studies focusing on the use of cCOG in a stepwise diagnostic approach could also be beneficial.

Computerized cognitive test tool (cCOG)

In PredictND and VPHDARE, patients performed the computerized test tool (cCOG) as part of these studies' aim to develop computer tools for dementia diagnostics. In FINGER, cCOG was performed as an exploratory measure after the completion of an interventional study assessing the efficacy of a 2‐year lifestyle intervention on cognition. In all three studies, patients performed the web‐based test tool, cCOG, superintended at baseline at the clinical sites. In PredictND, participants were asked to repeat the test battery four times to evaluate performance in the clinic and at home: at baseline and 12 months superintended at the memory clinics, and at 6 months and 18 months independently at home (for which an online reminder, including a direct link to cCOG, was sent twice).

The computerized test battery is based on three classical cognitive tasks: a modification of the word list test, a simple reaction task, and the Trail Making Test. It is divided into seven tasks, taking approximately 20 minutes to complete. A keyboard and mouse or a touchscreen device were used. The test battery is currently available in five languages: English, Finnish, Danish, Dutch, and Italian.

Task 1 (Episodic memory test: learning task) is a classical memory test in which the user is asked to remember 12 words shown one by one. Memory encoding is supported by a simultaneously presented visual image of the target word; that is, the word “CAR” is presented with a picture of a car. After the word/picture combinations have been presented, the subject is asked to type as many words as she/he can remember. The same list is shown three times, each followed by an immediate recall. The order of the words varies between the rehearsal rounds.

Tasks 2–3 (Reaction tests) measure attention and reaction speed. Stimuli are letters shown on the screen indicating the direction (right or left) to which the user should react by pressing the corresponding arrow key as quickly as possible. In Task 2, the user should hit the right arrow key “→” whenever “R” is displayed. In Task 3, both “R” and “L” letters are displayed, and the user should hit the right arrow “→” for “R” and the left arrow “←” for “L.”

Tasks 4–5 (modified Trail Making Tests) measure visuomotor speed, attention, and executive function. In Task 4, numbers from 1 to 24, each inside a square, are shown at random locations on the screen, and the user is asked to select them in ascending order as quickly as possible. In Task 5, the user must again click the numbers in order; however, this time each number from 1 to 24 is presented both inside a circle and inside a square, giving 48 stimuli altogether. The user is asked to select the numbers in ascending order while alternating between circles and squares (1 inside a circle, 2 inside a square, 3 inside a circle, etc.).

In Task 6 (Episodic memory test: recall task), the user is asked to recall and type the words from Task 1. In Task 7 (Episodic memory test: recognition task), the user is shown altogether 24 word/picture images and asked to recognize whether each word was shown in Task 1.

cCOG tasks were quantified as follows: Task 1, the total number of correct words recalled in the immediate trials; Tasks 2 and 3, the average reaction time over correct clicks; Tasks 4 and 5, the duration for selecting the numbers in ascending order from 1 to 24; Task 6, the number of correct words in delayed recall; and Task 7, the duration from the beginning to the end of the recognition task. Thereafter, a linear regression model was developed using PredictND data for estimating the global cognitive score (dependent variable) from the abovementioned seven features. This estimated score, the cCOG score, was then computed for all subjects of the PredictND, VPHDARE, and FINGER cohorts.
Finally, MMSE, global cognitive score, and cCOG scores were normalized for age, sex, and education years.
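As an illustration of the estimation step, the sketch below fits a linear model mapping seven task features to a global score and applies it to a new subject. It uses synthetic data, since the PredictND measurements are not public, and the feature weights and noise level are arbitrary assumptions, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the seven cCOG features (learning words, two
# reaction times, two trail-making durations, recall words, recognition
# duration). The "true" weights below are arbitrary for demonstration.
X_train = rng.normal(size=(50, 7))
true_w = np.array([0.3, -0.2, -0.1, -0.15, -0.25, 0.3, -0.2])
y_train = X_train @ true_w + 0.5 + rng.normal(scale=0.05, size=50)

# Ordinary least squares fit of an intercept plus 7 coefficients.
A = np.column_stack([np.ones(len(X_train)), X_train])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def ccog_score(features: np.ndarray) -> float:
    """Apply the fitted model to one subject's seven task features."""
    return float(coef[0] + features @ coef[1:])
```

The fitted model can then be applied unchanged to subjects from other cohorts, which mirrors how the PredictND‐trained model was applied to VPHDARE and FINGER.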

Data analysis

The Spearman correlation coefficient was computed between the cognitive features of cCOG and the different clinical cognitive test results for the PredictND, VPHDARE, and FINGER data. The correlations are rated as follows: 0–0.39 weak, 0.40–0.59 moderate, and 0.60–1.0 strong. The accuracy of the global scores (MMSE, global cognitive score, and cCOG) and of the individual cCOG subtasks in classifying patients into different diagnostic groups was studied. Two classifiers were developed: one for detecting MCI patients and one for detecting dementia patients. First, the PredictND data were divided randomly into a training set (75% of cases) and a test set (25% of cases). Then, the median value of the score (MMSE, global cognitive score, and cCOG) was computed for both diagnostic groups in the training set, and the cut‐off value was chosen as the midpoint between the median values. Finally, the test set was classified using the cut‐off value. A set of statistical performance measures was computed: area under the curve (AUC), sensitivity, specificity, and balanced accuracy (BACC, defined as the average of sensitivity and specificity). The whole process was repeated 1000 times and the average accuracy was calculated. In addition to cross‐validation using the PredictND data, classification performance was evaluated using the independent VPHDARE cohort. Because the FINGER cohort does not contain MCI and dementia patients, these data were not used for this part of the study. Finally, the consistency of cCOG measurements at the clinics and at home in PredictND was studied by calculating the Pearson correlation coefficient between different time points and by comparing classification performance across the four time points. Matlab R2017a (The MathWorks Inc., Natick, Massachusetts, USA) was used for all data analyses.
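A compact sketch of this cut‐off procedure, under the simplifying assumption of a single train/test split (the paper repeats the random split 1000 times and averages the results):

```python
import numpy as np

def midpoint_cutoff(train_scores, train_labels):
    """Cut-off = midpoint between the median scores of the two diagnostic
    groups in the training set (label 0 = cognitively normal, 1 = MCI/DEM)."""
    s = np.asarray(train_scores, dtype=float)
    lab = np.asarray(train_labels)
    med_cn = np.median(s[lab == 0])
    med_imp = np.median(s[lab == 1])
    return (med_cn + med_imp) / 2.0

def classify(scores, cutoff):
    """Lower scores indicate worse cognition, so score < cutoff -> impaired."""
    return (np.asarray(scores, dtype=float) < cutoff).astype(int)

def balanced_accuracy(y_true, y_pred):
    """BACC = average of sensitivity and specificity, as defined in the paper."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sensitivity = np.mean(y_pred[y_true == 1] == 1)
    specificity = np.mean(y_pred[y_true == 0] == 0)
    return (sensitivity + specificity) / 2.0
```

In the full procedure, these three steps would be wrapped in a loop over 1000 random 75/25 splits and the resulting BACC values averaged.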

RESULTS

Correlation with standard neuropsychological tests

Table 2 presents the correlation coefficients of MMSE, global cognitive score, cCOG, and the individual cCOG tasks, with results from standard neuropsychological tests. The correlation coefficients between global cognitive score and cCOG for PredictND, VPHDARE, and FINGER cohorts were 0.78, 0.81, and 0.63, respectively. When correlations between individual tasks and single cognitive tests were studied, the highest correlations were found between episodic memory learning (Task 1) and RAVLT/CERAD learning, r = 0.44–0.64, episodic memory recall (Task 6) and RAVLT/CERAD recall, r = 0.47–0.54, and modified trail making (B) (Task 5) and TMT‐B, r = 0.62–0.80.
TABLE 2

Spearman correlation coefficients between cCOG tasks and standard neuropsychological tests

Cohorts: PDN = PredictND (N = 328), VPH = VPH‐DARE@IT (N = 80), FNG = FINGER (N = 92).

| Test | Cohort | MMSE | Global cognitive score | RAVLT/CERAD Learning | RAVLT/CERAD Recall | Fluency Animal | TMT‐A | TMT‐B | DigitSpan Forward | DigitSpan Backward |
| MMSE | PDN | – | 0.61 | 0.42 | 0.46 | 0.40 | −0.31 | −0.40 | 0.22 | 0.19 |
| MMSE | VPH | – | 0.82 | 0.67 | 0.58 | 0.61 | −0.49 | −0.64 | 0.41 | 0.55 |
| MMSE | FNG | – | 0.56 | 0.26 | 0.28 | 0.35 | −0.07 | −0.37 | 0.18 | 0.20 |
| Global cognitive score | PDN | 0.61 | – | 0.79 | 0.78 | 0.70 | −0.61 | −0.73 | 0.39 | 0.51 |
| Global cognitive score | VPH | 0.82 | – | 0.86 | 0.77 | 0.85 | −0.74 | −0.85 | 0.47 | 0.67 |
| Global cognitive score | FNG | 0.56 | – | 0.68 | 0.67 | 0.65 | −0.52 | −0.71 | 0.46 | 0.56 |
| cCOG | PDN | 0.54 | 0.78 | 0.51 | 0.50 | 0.59 | −0.61 | −0.71 | 0.27 | 0.36 |
| cCOG | VPH | 0.68 | 0.81 | 0.64 | 0.51 | 0.64 | −0.65 | −0.78 | 0.38 | 0.45 |
| cCOG | FNG | 0.17 | 0.63 | 0.39 | 0.46 | 0.50 | −0.37 | −0.53 | 0.38 | 0.32 |
| Task 1: Episodic memory learning | PDN | 0.52 | 0.71 | 0.52 | 0.50 | 0.55 | −0.47 | −0.58 | 0.21 | 0.30 |
| Task 1: Episodic memory learning | VPH | 0.56 | 0.67 | 0.64 | 0.53 | 0.55 | −0.52 | −0.57 | 0.25 | 0.34 |
| Task 1: Episodic memory learning | FNG | 0.12 | 0.54 | 0.44 | 0.49 | 0.39 | −0.20 | −0.40 | 0.46 | 0.31 |
| Task 2: Simple reaction | PDN | −0.34 | −0.36 | −0.23 | −0.22 | −0.40 | 0.30 | 0.33 | −0.05 | −0.08 |
| Task 2: Simple reaction | VPH | −0.48 | −0.62 | −0.52 | −0.42 | −0.60 | 0.44 | 0.63 | −0.38 | −0.45 |
| Task 2: Simple reaction | FNG | −0.01 | −0.42 | −0.24 | −0.37 | −0.26 | 0.25 | 0.30 | −0.19 | −0.16 |
| Task 3: Choice reaction | PDN | −0.29 | −0.42 | −0.33 | −0.29 | −0.37 | 0.33 | 0.36 | −0.12 | −0.20 |
| Task 3: Choice reaction | VPH | −0.43 | −0.56 | −0.39 | −0.26 | −0.54 | 0.47 | 0.57 | −0.34 | −0.45 |
| Task 3: Choice reaction | FNG | −0.11 | −0.11 | −0.01 | −0.13 | −0.11 | 0.14 | 0.16 | 0.13 | 0.13 |
| Task 4: Modified Trail Making (A) | PDN | −0.27 | −0.49 | −0.17 | −0.18 | −0.35 | 0.60 | 0.57 | −0.24 | −0.31 |
| Task 4: Modified Trail Making (A) | VPH | −0.44 | −0.58 | −0.38 | −0.30 | −0.46 | 0.55 | 0.66 | −0.44 | −0.49 |
| Task 4: Modified Trail Making (A) | FNG | −0.28 | −0.55 | −0.31 | −0.28 | −0.39 | 0.31 | 0.55 | −0.31 | −0.31 |
| Task 5: Modified Trail Making (B) | PDN | −0.32 | −0.60 | −0.30 | −0.29 | −0.42 | 0.65 | 0.70 | −0.28 | −0.37 |
| Task 5: Modified Trail Making (B) | VPH | −0.65 | −0.74 | −0.51 | −0.43 | −0.57 | 0.58 | 0.80 | −0.41 | −0.51 |
| Task 5: Modified Trail Making (B) | FNG | −0.20 | −0.58 | −0.25 | −0.28 | −0.46 | 0.44 | 0.62 | −0.21 | −0.28 |
| Task 6: Episodic memory recall | PDN | 0.48 | 0.66 | 0.49 | 0.54 | 0.49 | −0.42 | −0.50 | 0.17 | 0.26 |
| Task 6: Episodic memory recall | VPH | 0.53 | 0.59 | 0.52 | 0.53 | 0.48 | −0.43 | −0.51 | 0.27 | 0.22 |
| Task 6: Episodic memory recall | FNG | 0.08 | 0.47 | 0.38 | 0.47 | 0.35 | −0.21 | −0.31 | 0.29 | 0.23 |
| Task 7: Episodic memory recognition | PDN | −0.40 | −0.60 | −0.35 | −0.35 | −0.43 | 0.52 | 0.59 | −0.23 | −0.28 |
| Task 7: Episodic memory recognition | VPH | −0.67 | −0.75 | −0.55 | −0.45 | −0.60 | 0.63 | 0.75 | −0.38 | −0.49 |
| Task 7: Episodic memory recognition | FNG | −0.15 | −0.39 | −0.12 | −0.19 | −0.42 | 0.30 | 0.40 | −0.32 | −0.26 |

Abbreviations: cCOG, computerized cognitive test; CERAD, Consortium to Establish a Registry for Alzheimer's Disease; CN, cognitively normal; DEM, dementia; MCI, mild cognitive impairment; MMSE, Mini‐Mental State Examination; RAVLT, Rey Auditory Verbal Learning Test; TMT, Trail Making Test.

Notes: Color scaling dependent on absolute values of correlation: no color for very weak or weak correlation (0–0.39), light red for moderate correlation (0.40–0.59), red for strong or very strong correlation (0.60–1.00).

Correlation coefficients between the global scores and age were −0.15 for MMSE, −0.16 for the global cognitive score, and −0.26 for cCOG; between the composite scores and education years, they were 0.09 for MMSE, 0.12 for the global cognitive score, and 0.17 for cCOG.

Classification accuracy in diagnostics

Figure 1 shows the distributions of MMSE, global cognitive score, and cCOG for the different diagnostic groups in all three study cohorts. Table 3 presents the classification performance of MMSE, global cognitive score, and cCOG in detecting MCI and dementia patients in PredictND. The results indicate that classification accuracy is comparable between the global cognitive score and cCOG. Table A.1 in the supporting information appendix also shows classification performance for the individual cCOG tasks. The highest values are observed for the memory tasks (learning and recall), both in detecting MCI and dementia, while the reaction time tasks clearly have the lowest values.
FIGURE 1

Distributions of Mini‐Mental State Examination, global cognitive score, and computerized cognitive tool (cCOG) global score shown for different diagnostic groups of the PredictND cohort (blue), VPHDARE cohort (orange), and FINGER cohort (gray) using boxplots. For each boxplot, the line and cross indicate the median and mean values, respectively, and the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, and the outliers are plotted individually

TABLE 3

Classification performance in the PredictND cohort (mean; 95% confidence interval)

CN (N = 195) vs MCI (N = 83):

|             | MMSE | Global cognitive score | cCOG |
| AUC         | 0.75 (0.63–0.86) | 0.77 (0.66–0.88) | 0.84 (0.75–0.92) |
| BACC        | 0.68 (0.58–0.78) | 0.71 (0.61–0.80) | 0.77 (0.67–0.86) |
| Sensitivity | 0.66 (0.45–0.85) | 0.71 (0.50–0.90) | 0.77 (0.57–0.95) |
| Specificity | 0.71 (0.57–0.84) | 0.71 (0.58–0.84) | 0.77 (0.67–0.88) |

CN (N = 195) vs DEM (N = 45):

|             | MMSE | Global cognitive score | cCOG |
| AUC         | 0.84 (0.71–0.94) | 0.91 (0.77–0.99) | 0.92 (0.83–0.98) |
| BACC        | 0.78 (0.66–0.89) | 0.88 (0.76–0.96) | 0.83 (0.71–0.93) |
| Sensitivity | 0.72 (0.45–0.92) | 0.86 (0.64–1.00) | 0.75 (0.50–1.00) |
| Specificity | 0.83 (0.71–0.94) | 0.89 (0.81–0.96) | 0.91 (0.83–0.98) |

Abbreviations: AUC, area under the curve; BACC; balanced accuracy; cCOG, computerized cognitive test; CN, cognitively normal; DEM, dementia; MCI, mild cognitive impairment; MMSE, Mini‐Mental State Examination.

The cut‐off values defined from the PredictND data were 28.3 (MMSE), 0.60 (global cognitive score), and 0.60 (cCOG) for detecting MCI patients, and 28.2 (MMSE), 0.49 (global cognitive score), and 0.52 (cCOG) for detecting dementia patients. When these cut‐offs were applied to the VPHDARE data, the following balanced accuracies were obtained: 0.63 (MMSE), 0.71 (global cognitive score), and 0.67 (cCOG) in detecting MCI patients, and 0.71 (MMSE), 0.79 (global cognitive score), and 0.78 (cCOG) in detecting dementia patients.

Consistency of cCOG at clinic and at home

Consistency of the cCOG results between clinic and home‐based assessments was analyzed using the PredictND data. Of the 323 participants, 25 performed cCOG only at baseline, 94 performed it twice, and 69 performed it three times. Only 134 participants (94 cognitively normal, 31 MCI, and 9 dementia) performed cCOG at all four time points. Of these time points, the baseline and 12‐month visits were administered superintended at the memory clinics, and the 6‐ and 18‐month visits at home using participants' own computers. Assistance (mostly in typing) was reported in 21% of cases over all testing sessions: 31% of testing sessions at the first clinical visit and 17% of testing sessions after the first visit (both at home and at the clinic). Furthermore, 79% of testing sessions in the clinic were done using a touchscreen versus only 14% at home. Figure 2 presents the distribution of cCOG results for the different diagnostic groups at the four measurement points. AUC values were 0.79 (month 0), 0.72 (month 6), 0.76 (month 12), and 0.76 (month 18) in detecting MCI patients, and 0.94 (month 0), 0.93 (month 6), 0.91 (month 12), and 0.87 (month 18) in detecting dementia, showing decent consistency between the clinic and home measurements. By comparison, the AUC was 0.69 in detecting MCI and 0.91 in detecting dementia when using the global score from standard neuropsychological tests at baseline.
FIGURE 2

Computerized cognitive tool (cCOG) retest distributions in follow‐up setting. Distributions of cCOG for the cognitively normal (CN), mild cognitive impairment (MCI), and dementia (DEM) groups at baseline (first dark blue, clinic), month 6 (first light blue, home), month 12 (second dark blue, clinic), and month 18 (second light blue, home). The distributions have been computed only from the subjects having all four time points available (n = 136)

Table 4 presents the Pearson correlation coefficients for test‐retest consistency between two cCOG measurements at the clinic, at home, or between clinic and home.
TABLE 4

Pearson correlation coefficients between different time points for the cCOG tasks and the global score at clinic, at home, and between clinic and home

| Interval | Task 1: Episodic memory learning | Task 2: Simple reaction | Task 3: Choice reaction | Task 4: Modified Trail Making (A) | Task 5: Modified Trail Making (B) | Task 6: Episodic memory recall | Task 7: Episodic memory recognition | cCOG |
| Clinic (N = 288), M0‐M12 | 0.75 | 0.48 | 0.42 | 0.69 | 0.69 | 0.72 | 0.64 | 0.82 |
| Home (N = 134), M6‐M18 | 0.69 | 0.48 | 0.54 | 0.63 | 0.44 | 0.59 | 0.78 | 0.77 |
| Clinic‐Home (N = 186), M0‐M6 | 0.54 | 0.42 | 0.24 | 0.63 | 0.63 | 0.57 | 0.59 | 0.67 |
| Clinic‐Home (N = 177), M6‐M12 | 0.64 | 0.34 | 0.35 | 0.62 | 0.62 | 0.67 | 0.55 | 0.72 |
| Clinic‐Home (N = 160), M12‐M18 | 0.69 | 0.34 | 0.48 | 0.64 | 0.61 | 0.65 | 0.66 | 0.74 |

Abbreviations: cCOG, computerized cognitive test; M0, baseline visit at memory clinic; M12, 12 months visit at memory clinic; M18, 18 months visit at home; M6, 6 months visit at home.


DISCUSSION

This study validated a self‐administrable web‐based cognitive test tool for the early detection of neurodegenerative disorders. The tasks were designed to resemble standard neuropsychological tests, making interpretation easier for clinicians. Classification accuracy was high in detecting both MCI and dementia patients, and comparable to the global cognitive score derived from standard neuropsychological tests. Furthermore, accuracy was relatively consistent over time and between testing at home and in the clinic.

A recent systematic review reported accuracies for 11 computerized tools in detecting either MCI or early dementia based on 14 studies. The performance was reported either for the overall output of the tool, for the subtasks of the tool, or both. The median AUC was 0.85 and the median balanced accuracy 0.77 in detecting MCI; the corresponding values in detecting early dementia were 0.82 and 0.85. These results are comparable to the performance reported in this work. The time needed for testing is also an important factor when considering feasibility. Testing time for those 11 tools varied between 10 and 45 minutes, which is comparable to the roughly 20 minutes used for cCOG. Nevertheless, to date, very few automated computerized tools are used in clinical practice. Our study adds to this work by developing a web‐based cognitive test tool that uses tests that are easy to interpret and use in clinical practice. Furthermore, the developed test showed consistent performance when used at home or in the clinic. In addition, computerized test batteries can be beneficial in emergency situations (such as the current COVID‐19 pandemic) in which remote assessments are needed.

Several factors can affect test accuracy. First, it is well known that diagnostic accuracy improves at later stages of neurodegenerative diseases. MMSE scores of 20 to 24 are considered to suggest mild dementia.
In PredictND, the average MMSE score for dementia patients was 27.2, indicating that these patients received their diagnosis at a very early phase. The cCOG assessment was performed in PredictND only in memory clinic patients with MMSE ≥ 25, explaining the near‐normal MMSE values. The accuracy obtained in this cohort is thus fairly high given that these patients were all very mild dementia cases, in whom diagnosis is more challenging. Interestingly, the cognitively normal at‐risk subjects in FINGER had the same average MMSE score (27.9) as MCI patients in PredictND. These results demonstrate possible differences in clinical populations, but potentially also in how diagnostic criteria are applied. Second, the number of subjects in the VPHDARE cohort was relatively low, with only 19 cognitively normal subjects. This means that the impact of a single subject on accuracy is considerable, and random effects may explain differences. For these reasons, the accuracy values reported should be considered only indicative.

cCOG had high correlations with the global cognitive score from standard neuropsychological tests in the three cohorts studied, r = 0.63–0.81. When correlations between individual cCOG tasks and single cognitive tests were studied, the highest correlations were found for the memory and executive functioning domains. For comparison, Mielke et al. reported correlations between the CogState computerized tests and standard neuropsychological tests in a non‐demented elderly cohort including both cognitively normal and MCI subjects; the correlations were r = 0.13–0.34 for delayed recall and r = 0.24–0.47 for TMT‐B.
In some studies, correlations between computerized tests and paper‐and‐pencil tests have been very weak (r = 0.09–0.26 for immediate recall, r = 0.09–0.23 for delayed recall, and r = 0.02–0.28 for TMT‐B), whereas in other studies computerized cognitive batteries have yielded moderate to high correlations (r = 0.47–0.71) with traditional tests, also in healthy populations. In general, correlations between cCOG and traditional neuropsychological tests were good compared to previously developed web‐based test batteries. The cCOG memory subscore correlations with traditional memory tests were also comparable to those recently reported among MCI and healthy elderly subjects.

Consistency over time was studied using data from four time points, two measured at the clinic and two at home. cCOG showed very strong correlation both in the clinic (r = 0.82) and at home (r = 0.77). For comparison, Hammers et al. studied test‐retest reliability using the CogState test battery and reported correlations for different tasks of r = 0.23–0.79 in healthy controls, r = 0.33–0.75 in MCI subjects, and r = 0.59–0.80 in AD. Cacciamani et al. reported test‐retest reliability for the CANTAB (Cambridge Neuropsychological Test Automated Battery) in MCI subjects over three time points; the Paired Associates Learning test (total errors) provided the highest correlation over the three measurements, r = 0.74–0.85. The same test gave a correlation of r = 0.68 in an older elderly cohort without neuropsychiatric diagnosis in Goncalves et al., who reported the highest test‐retest performance for a reaction time test (RTI five‐choice movement time), r = 0.86, while the reaction time tests overall produced variable results, r = 0.03–0.82. In cCOG, correlations for the reaction time tests were only r = 0.42–0.54. In Maljkovic et al., CANTAB was administered at home, and the highest test‐retest correlations in a combined healthy control, MCI, and dementia cohort were obtained for memory tests, intraclass correlation > 0.71.
In our study, a single composite score was developed both for the standard neuropsychological tests and for the cCOG test tool. Recent research implies that composite scores of memory and global cognition can be more sensitive than single test scores in detecting cognitive impairment in prodromal AD, supporting this approach. In addition, a single index score is easier to interpret for screening purposes than a battery comprising several separate scores. Finally, automatic adjustment for demographic variables is also straightforward in computerized tests.

Regarding the feasibility of cCOG, assistance was requested in 21% of testing sessions, in most cases for typing. To alleviate this challenge and improve usability, we have updated cCOG so that the user needs to type only the first three letters of a word; if they are correct, the word is completed automatically. Voice recognition could also be an option for future versions.

The main strength of this study was the use of three different cohorts for validation and comparison to standard neuropsychological tests. The main limitation, however, was a relatively limited sample size: fewer than half of the PredictND subjects completed cCOG at all four time points. Another limitation was that our study design was not optimal for defining test‐retest reliability. Instead of two measurements made within a short period of time, the interval was 6 or even 12 months; yet this can also be considered an advantage, as it prevents learning effects. In addition, testing at home was not controlled in any way, as patients used their own computers. Only 14% of home testing sessions were done using a touchscreen, compared with 79% at the clinics. Although no large difference was observed between the clinic and home measurements, standardizing the hardware would potentially improve reproducibility. A systematic study of the impact of the hardware should be performed in the future.
Finally, our analysis of the role of individual tasks remained limited. Because the number of dementia cases was relatively small, we could not evaluate how tasks reflecting different domains of cognition perform in separating different dementia etiologies.

In conclusion, the web‐based cCOG test tool demonstrated accuracy in detecting MCI and dementia comparable to that of a composite score derived from standard neuropsychological tests. In addition, cCOG results showed high consistency between measurements administered at home and in the clinic. These results support cCOG as a potentially useful and cost‐efficient tool for the early assessment of neurodegenerative diseases. Tools like this can even be administered at home and can pave the way for a stepwise diagnostic approach in dementia.

CONFLICTS OF INTEREST

Hanneke FM Rhodius‐Meester performs contract research for Combinostics, all funding is paid to her institution. Teemu Paajanen reports no disclosures. Juha Koikkalainen and Jyrki Lötjönen report that Combinostics owns the following IPR related to the article: 1. J. Koikkalainen and J. Lötjönen. A method for inferring the state of a system, US 7,840,510 B2. 2. J. Lötjönen, J. Koikkalainen, and J. Mattila. State Inference in a heterogeneous system, US 10,372,786 B2. Koikkalainen and Lötjönen are shareholders in Combinostics. Shadi Mahdiani reports no disclosures. Marie Bruun reports no disclosures. Marta Baroni reports no disclosures. Afina W. Lemstra reports no disclosures. Philip Scheltens has received consultancy/speaker fees (paid to the institution) from Biogen, Novartis Cardiology, Genentech, AC Immune. He is PI of studies with Vivoryon, EIP Pharma, IONIS, CogRx, AC Immune, and FUJI‐film/Toyama. Sanna‐Kaisa Herukka reports no disclosures. Maria Pikkarainen reports no disclosures. Anette Hall reports no disclosures. Tuomo Hänninen reports no disclosures. Tiia Ngandu reports no disclosures. Miia Kivipelto has received research support from the Academy of Finland, Swedish Research Council, Joint Program of Neurodegenerative Disorders, Knut and Alice Wallenberg Foundation, Center for Innovative Medicine (CIMED) Stiftelsen Stockholms sjukhem, Konung Gustaf Vs och Drottning Victorias Frimurarstiftelse, Alzheimerfonden, Hjärnfonden, Region Stockholm (ALF and NSV grants). She takes part in the WHO guidelines development group, is a governance committee member of the Global Council on Brain Health, and is on the advisory board of Combinostics and Roche. Mark van Gils reports no disclosures. Steen Gregers Hasselbalch reports no disclosures. Patrizia Mecocci reports no disclosures. Anne Remes reports no disclosures. Hilkka Soininen has received fees as a member of advisory board of ACImmune, MERCK, and Novo Nordisk outside this work. 
Wiesje M van der Flier performs contract research for Biogen. Research programs of Wiesje van der Flier have been funded by ZonMW, NWO, EU‐FP7, Alzheimer Nederland, CardioVascular Onderzoek Nederland, Gieskes‐Strijbis fonds, Pasman stichting, Boehringer Ingelheim, Piramal Neuroimaging, Combinostics, Roche BV, AVID. She has been an invited speaker at Boehringer Ingelheim and Biogen. All funding is paid to her institution.
