Avital Sternin, Alistair Burns, Adrian M. Owen.
Abstract
Over the past 35 years, the proliferation of technology and the advent of the internet have produced many reliable and easy-to-administer batteries for assessing cognitive function. These approaches have great potential to change how the health care system monitors and screens for cognitive changes in the aging population. Here, we review these new technologies with a specific emphasis on what they offer over and above traditional 'paper-and-pencil' approaches to assessing cognitive function. Key advantages include fully automated administration and scoring, the interpretation of individual scores within the context of thousands of normative data points, the inclusion of 'meaningful change' and 'validity' indices based on these large norms, more efficient testing, increased sensitivity, and the possibility of characterising cognition in samples drawn from the general population that may contain hundreds of thousands of test scores. The relationship between these new computerized platforms and existing (and commonly used) paper-and-pencil tests is explored, with a particular emphasis on why computerized tests are particularly advantageous for assessing the cognitive changes associated with aging.
Keywords: aging; computerized cognitive assessment; dementia; executive function; memory
Year: 2019 PMID: 31489940 PMCID: PMC6787729 DOI: 10.3390/diagnostics9030114
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. (A) Average standardized scores on the 12 Cambridge Brain Sciences (CBS) tasks taken at home and in the lab by 19 healthy young adult controls. The results showed no significant effect of at-home versus in-laboratory testing (F = 1.71, p = 0.2). (B) Average raw scores on 4 CBS tasks, as well as simple and choice reaction time tasks, taken at home and in the lab by 27 patients with Parkinson's Disease. Again, there was no significant effect of at-home versus in-lab testing (p > 0.1), and the tasks showed reliable correlations across the two testing environments (p < 0.05).
Figure 2. Average scores on 3 CBS tasks (Digit Span, Spatial Span, and Token Search) taken at home and in the lab by more than 100 young adult controls. The results showed no significant effect of at-home versus in-laboratory testing [35]. In the case of Token Search (lower panel), the overlap in performance between participants tested at home using Amazon's MTurk and those tested in the laboratory persisted even after several weeks of intensive training on the task [35].
Figure 3. The CBS composite score was highly correlated with Montreal Cognitive Assessment (MoCA) scores and better differentiated impaired and unimpaired individuals. The border colour of each data point indicates the categorization of individuals based on MoCA scores alone. The fill colour indicates the group to which borderline participants are assigned when the composite score of 3 CBS tests is used.