Alexandra S Atkins1, Michael S Kraus1, Matthew Welch1, Zhenhua Yuan1, Heather Stevens1, Kathleen A Welsh-Bohmer1,2, Richard S E Keefe1,2.
Abstract
Cognitive impairment is a common and pervasive feature of etiologically diverse disorders of the central nervous system, and a target indication for a growing number of symptomatic and disease modifying drugs. Remotely acquired digital endpoints have been recognized for their potential in providing frequent, real-time monitoring of cognition, but their ultimate value will be determined by the reliability and sensitivity of measurement in the populations of interest. To this end, we describe initial validation of remote self-administration of cognitive tests within a regulatorily compliant tablet-based platform. Participants were 61 older adults (age 55+), including 20 individuals with subjective cognitive decline (SCD). To allow comparison between remote (in-home) and site-based testing, participants completed 2 testing sessions 1 week apart. Results for three of four cognitive domains assessed demonstrated equivalence between remote and site-based tests, with high cross-modality ICCs (absolute agreement) for Symbol Coding (ICC = 0.75), Visuospatial Working Memory (ICC = 0.70) and Verbal Fluency (ICC > 0.73). Group differences in these domains were significant and reflected sensitivity to objective cognitive impairment in the SCD group for both remote and site-based testing (p < 0.05). In contrast, performance on tests of verbal episodic memory suggested inflated performance during unmonitored testing, indicating that reliable use of remote cognitive assessments may depend on the construct, as well as on the population, being tested.
Keywords: cognition; digital biomarkers; digital endpoints; remote assessment; subjective cognitive decline
Year: 2022 PMID: 36090378 PMCID: PMC9448897 DOI: 10.3389/fpsyt.2022.910896
Source DB: PubMed Journal: Front Psychiatry ISSN: 1664-0640 Impact factor: 5.435
Participant characteristics (HC: n = 41; SCD: n = 20).

| Characteristic | HC Mean | HC SD | SCD Mean | SCD SD | t / χ² | p |
|---|---|---|---|---|---|---|
| Age (years) | 67.02 | 7.71 | 70.30 | 9.76 | –1.43 | ns |
| Education (years) | 16.02 | 2.60 | 16.05 | 2.30 | –0.04 | ns |
| MMSE | 28.17 | 1.59 | 27.20 | 1.37 | 2.35 | <0.05 |
| CFI* | 1.51 | 1.11 | 6.48 | 2.51 | –8.44 | <0.001 |
| ADCS-ADL-PI* | 42.88 | 2.51 | 38.05 | 5.07 | 4.04 | <0.001 |
| Sex, n (%) | | | | | 0.44 | ns |
| Male | 18 (43.9%) | | 7 (35.0%) | | | |
| Female | 23 (56.1%) | | 13 (65.0%) | | | |
| Race, n (%) | | | | | 0.90 | ns |
| White | 31 (75.6%) | | 14 (70.0%) | | | |
| African American | 9 (22.0%) | | 6 (30.0%) | | | |
| Other | 1 (2.4%) | | 0 (0.0%) | | | |

*Self-reported measure.
Brief Assessment of Cognition (BAC) self-administered digital cognitive tests.
| Domain | Test name | Description |
|---|---|---|
| Episodic verbal memory | 3-trial verbal memory (Learning) | Subject hears 15 unrelated words and is asked to recall as many as possible. This procedure is repeated 3 times. |
| | Delayed Recall | Following a standard delay, subject is asked to recall as many words as possible from the previous list. |
| Working memory | Visuospatial working memory–sequences | Subject is presented with progressively longer series of objects placed within a grid. Memory for the location of each object is queried in sequence. |
| Verbal fluency | Animal fluency | Subject is given 60 s to name as many animals as possible. |
| Speed of processing | Symbol Coding | Subject is provided a key and asked to fill in the corresponding numbers beneath a series of symbols as quickly as possible within 90 s. |
Self-administered BAC tests represent modified, abbreviated versions of standard rater-administered assessments. Each test can be completed individually or as part of a battery that includes additional performance-based assessments, patient-reported outcomes (PROs), or ecological momentary assessments.
ICC absolute agreement between remote and site-based measures. Values are ICC (95% confidence interval).

| Test | Total sample | HC | SCD |
|---|---|---|---|
| Symbol Coding | 0.747 (0.610, 0.841) | 0.714 (0.521, 0.838) | 0.780 (0.522, 0.907) |
| Visuospatial WM | 0.733 (0.583, 0.833) | 0.673 (0.459, 0.814) | 0.786 (0.542, 0.909) |
| Verbal fluency | 0.748 (0.610, 0.842) | 0.750 (0.574, 0.860) | 0.733 (0.436, 0.885) |
| Verbal memory–Total learning | 0.478 (0.248, 0.658) | 0.408 (0.110, 0.643) | 0.548 (0.115, 0.804) |
| Verbal memory–Total learning, trimmed* | 0.579 (0.371, 0.733) | 0.560 (0.290, 0.750) | n/a |
| Delayed Free Recall | 0.247 (–0.010, 0.477) | 0.264 (–0.055, 0.542) | 0.154 (–0.309, 0.559) |
| Delayed Free Recall, trimmed* | 0.490 (0.246, 0.676) | 0.544 (0.241, 0.751) | 0.385 (–0.102, 0.718) |
ICCs reflect use of alternate forms for Symbol Coding, Visuospatial WM and Verbal memory.
*“Trimmed” values reflect ICCs following removal of extreme outliers.
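The cross-modality coefficients in the table above are absolute-agreement ICCs, i.e., ICC(2,1) in the Shrout and Fleiss taxonomy (ICC(A,1) in McGraw and Wong's notation), which penalize systematic score differences between settings rather than only rank inconsistency. As a rough illustration only (this is not the authors' analysis code, and the function name and sample data are hypothetical), such a coefficient can be computed from paired site-based and remote scores via a two-way ANOVA decomposition:

```python
import numpy as np

def icc_absolute_agreement(scores):
    """ICC(A,1): two-way random-effects, absolute agreement, single measure.

    scores: (n_subjects, k_conditions) array-like, e.g. column 0 = site-based
    score and column 1 = remote score for each participant.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-condition (site/remote) means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between conditions
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
    # McGraw & Wong ICC(A,1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired scores: identical site and remote performance
paired = np.array([[10.0, 10.0], [12.0, 12.0], [15.0, 15.0]])
print(round(icc_absolute_agreement(paired), 3))  # → 1.0
```

Because this is the absolute-agreement form, a constant offset between remote and site-based scores (such as the inflated unmonitored verbal-memory performance reported here) lowers the ICC even when the rank ordering of participants is preserved.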
FIGURE 1 Performance on site-based vs. remote BAC cognitive tests (Mean ± SEM). Performance on Symbol Coding (A), Verbal fluency (B), and Visuospatial WM (C) was similar for site-based and remote testing. Performance on measures of episodic verbal memory (D,E) was higher during remote testing, suggesting inflated performance during unmonitored administration.
FIGURE 2 Group differences in cognitive test performance during on-site and remote testing sessions (Mean ± SEM). Self-administered remote and site-based assessments of processing speed [Symbol Coding; (A)] and Visuospatial WM (C) were equally sensitive to objective cognitive declines in participants with SCD. Group differences in Verbal Fluency (B) were similar for site-based and remote tasks, but did not reach statistical significance. *p < 0.05 for between-group comparison.
Participant feedback on Brief Assessment of Cognition (BAC) self-administration.
| Item | HC Mean | HC SD | SCD Mean | SCD SD | t | p |
|---|---|---|---|---|---|---|
| 1. See text and objects clearly | 4.585 | 0.499 | 4.529 | 0.514 | 0.385 | ns |
| 2. Hear instructions clearly | 4.585 | 0.547 | 4.588 | 0.507 | –0.019 | ns |
| 3. Understand instructions easily | 4.537 | 0.505 | 4.529 | 0.514 | 0.049 | ns |
| 4. Overall Experience (1–10) | 9.195 | 1.054 | 8.412 | 1.583 | 2.210 | |
*Responses to items 1–3 are coded 1–5 (strongly disagree – strongly agree). Responses to item 4 reflect participant ratings on a scale of 1–10 (extremely difficult–extremely easy).