Literature DB >> 34934639

Validity of remote administration of the MATRICS Consensus Cognitive Battery for individuals with severe mental illness.

Madisen T Russell1, Kensie M Funsch1, Cassi R Springfield1, Robert A Ackerman1, Colin A Depp2, Philip D Harvey3,4, Raeanne C Moore2, Amy E Pinkham1.   

Abstract

The MATRICS Consensus Cognitive Battery (MCCB) is a gold-standard tool for assessing cognitive functioning in individuals with severe mental illness. This study is an initial examination of the validity of remote administration of 4 MCCB tests measuring processing speed (Trail Making Test: Part A, Animal Fluency), working memory (Letter-Number Span), and verbal learning and memory (Hopkins Verbal Learning Test-Revised). We conducted analyses on individuals with bipolar disorder (BD) and schizophrenia-spectrum disorders (SCZ), as well as healthy volunteers, who were assessed in-person (BD = 80, SCZ = 116, HV = 14) vs. remotely (BD = 93, SCZ = 43, HV = 30) to determine if there were significant differences in performance based on administration format. Additional analyses tested whether remote and in-person assessment performance was similarly correlated with symptom severity, cognitive and social cognitive performance, and functional outcomes. Individuals with BD performed significantly better than those with SCZ on all MCCB subtests across administration format. Animal Fluency did not differ by administration format, but remote participants performed significantly worse on Trail Making and HVLT-R. On the Letter-Number Span task, individuals with bipolar disorder performed significantly better when participating remotely. Finally, patterns of correlations with related constructs were largely similar between administration formats. Thus, results suggest that remote administration of some of the MCCB subtests may be a valid alternative to in-person testing, but more research is necessary to determine why some tasks were affected by administration format.
© 2021 The Authors.


Keywords:  Bipolar disorder; Cognitive functioning; Remote assessment; Schizophrenia; Telehealth

Year:  2021        PMID: 34934639      PMCID: PMC8655110          DOI: 10.1016/j.scog.2021.100226

Source DB:  PubMed          Journal:  Schizophr Res Cogn        ISSN: 2215-0013


Introduction

Cognitive impairment is a central feature of schizophrenia spectrum disorders (Bowie and Harvey, 2006). Individuals with schizophrenia demonstrate cognitive deficits across numerous domains, including attention, verbal learning and memory, processing speed, working memory, and executive functioning (e.g., Nuechterlein et al., 2004; Kahn and Keefe, 2013; Bowie and Harvey, 2006; Bora et al., 2009), with average weighted effect sizes (Hedges' g) ranging from 0.43 to 1.55 (Schaefer et al., 2013). Most patients with schizophrenia demonstrate cognitive impairment to some extent, though the breadth and severity of cognitive dysfunction varies (Bowie and Harvey, 2006). Within an individual, the level of cognitive impairment appears to be stable across time and fluctuations in clinical status (Harvey et al., 1999; Gold, 2004). Similarly, bipolar disorder is associated with cognitive deficits across the same domains (e.g., Cardenas et al., 2016; Bora and Ozerdem, 2017), with average weighted effect sizes (Hedges' g) ranging from 0.42 to 0.96 (Torres et al., 2007; Mann-Wrobel et al., 2011). Though cognitive impairment in bipolar disorder is relatively less severe than that observed in schizophrenia, the cognitive profiles of the two disorders are very similar (e.g., Bortolato et al., 2015; Krabbendam et al., 2005; Lynham et al., 2018; Bora and Pantelis, 2015; Reichenberg et al., 2008). Recent work examining cognitive impairment across the bipolar-schizophrenia spectrum suggests that cognitive dysfunction increases in severity from bipolar disorder, to schizoaffective disorder bipolar type, to schizophrenia and schizoaffective disorder depressive type, with no difference in severity of cognitive impairment between schizophrenia and schizoaffective disorder depressive type (Lynham et al., 2018). 
Overall, the trajectory of cognitive dysfunction in bipolar disorder appears somewhat similar to that in schizophrenia, with impairment beginning early in both disorders and remaining relatively stable over time after diagnosis (e.g., Bora and Ozerdem, 2017; Bora and Pantelis, 2015). Importantly, cognitive dysfunction contributes significantly to functional disability in both schizophrenia and bipolar disorder. In schizophrenia, cognitive impairment is associated with poorer community living skills, deficits in problem-solving, and difficulty maintaining employment (Bryson and Bell, 2003; Green et al., 2000). Estimates derived from reviews of the literature suggest that neurocognitive dysfunction explains between 20% and 60% of the variance in functional outcomes of individuals with schizophrenia (Green et al., 2000; Fett et al., 2011). Similarly, cognitive disability accounts for a significant proportion of variation in functioning in bipolar disorder, with estimates consistent with those identified in schizophrenia (Depp et al., 2012). Although there is greater functional disability in schizophrenia relative to bipolar disorder, neurocognitive dysfunction still predicts poorer work skills, poorer community living skills, and difficulties in interpersonal behavior in bipolar disorder, with evidence suggesting that the structure of the correlational relationships is essentially identical (Bowie et al., 2010; Mausbach et al., 2010). Given the cognitive deficits and associated functional outcomes seen in both bipolar disorder and schizophrenia spectrum disorders, it is important to have a standardized way to assess cognition in these groups. The MATRICS Consensus Cognitive Battery (MCCB; Nuechterlein et al., 2008) was developed to standardize assessment of cognitive impairment in schizophrenia and is typically considered the gold standard in the field. 
It contains ten tasks covering seven cognitive domains: processing speed, verbal learning, working memory, visual learning, reasoning/problem solving, social cognition, and attention. Although there have been suggestions regarding modifying the MCCB for bipolar disorder (Yatham et al., 2010), studies demonstrate that the MCCB is sensitive to cognitive impairment in both schizophrenia and bipolar disorder (Bo et al., 2017; Burdick et al., 2011; Kern et al., 2011; Lystad et al., 2014). Abbreviated forms of the MCCB (e.g., Pinkham et al., 2018) and similar neurocognitive assessments (Keefe et al., 2006) capture schizophrenia-related impairment while still showing expected correlations with functional outcomes. In situations where in-person testing may not be feasible (e.g., due to health concerns, lack of transportation, limited funds, etc.), it is crucial to have a reliable, remote battery to assess cognitive functioning in individuals with severe mental illness. To our knowledge, no remote version of the MCCB has yet been developed, but web-based and smartphone app assessments designed to mirror widely used assessments like the MCCB show strong correlations with in-person measures in schizophrenia spectrum, bipolar, and healthy control populations (Biagianti et al., 2019; Domen et al., 2019; Miskowiak et al., 2021). Though they still require validation for at-home administration, these assessments appear to be comparable, albeit not identical, alternatives to traditional assessment. However, limited technology literacy and lack of access to technology and internet could hinder implementation of internet- and app-based assessments in certain populations. Telephone-based assessment offers a potentially more accessible solution but has received less attention within psychiatric populations. Notably, Berns et al. (2004) compared performance on an in-person and telephone-based cognitive battery in outpatients with schizophrenia. 
They found no difference between administration modes on tasks that were conceptually simple or that gradually increased in complexity, such as Letter-Number Span (Gold, 1997). Tasks that were complex and demanding from the outset, however, such as the California Verbal Learning Test (CVLT; Delis et al., 1987), showed poorer performance when administered by phone. At-home telephone assessment has been more thoroughly studied in non-psychiatric cognitively impaired populations. Telephone versions of two MCCB tasks, the Hopkins Verbal Learning Test-Revised (Brandt and Benedict, 2001) and a category fluency task, demonstrated strong correlations with in-person assessments and good discrimination between cognitively impaired and healthy participants (e.g., Bunker et al., 2016; Lachman et al., 2014). Other tasks not contained within the MCCB, but which assess overlapping cognitive domains (e.g., verbal learning, memory, processing speed), have similarly shown good agreement between telephone and in-person administration (Jagtap et al., 2021; Lachman et al., 2014; Rapp et al., 2012). Given these data suggesting that remote administration of cognitive assessments like the MCCB may be a feasible and valid alternative to in-person testing, the current paper presents results from an initial assessment of the validity of telephone-based administration of select MCCB subtests in individuals with schizophrenia/schizoaffective disorder and bipolar disorder. Task performance was compared between individuals who completed in-person assessments vs. those who completed them remotely, as well as between diagnoses. Correlations between task performance and symptoms and functional outcomes were also compared between administration formats. Based on previous findings, we anticipated that individuals with bipolar disorder would perform better than individuals with schizophrenia spectrum disorders. 
Similarly, given previous findings that telephone-based assessments are comparable to in-person assessments, we also predicted minimal effects of administration format on task performance and similar patterns of correlations between most MCCB tasks and related constructs regardless of administration format. For the HVLT-R, however, based on Berns et al. (2004), we anticipated that performance could be poorer under remote administration in the SCZ group.

Methods

Participants

Participants were adults between the ages of 18 and 60 with schizophrenia/schizoaffective disorder (SCZ), bipolar disorder I or II (BD), or non-psychiatric healthy volunteers (HV). Psychiatric diagnoses were confirmed via the Mini International Neuropsychiatric Interview (MINI; Sheehan et al., 1998) and the Structured Clinical Interview for DSM Disorders: Psychosis Module (SCID; First et al., 2015). Individuals were required to be proficient in English, as well as to have had no psychiatric hospitalizations for at least 6 weeks, no significant medication regimen changes for a minimum of 6 weeks, and no dose changes >20% for a minimum of 2 weeks. Additionally, participants could not have (1) presence or history of medical or neurological disorders that may affect brain function (e.g., stroke, epilepsy), (2) presence or history of neurodegenerative disorder (e.g., dementia, Parkinson's Disease), (3) history of unconsciousness for a period greater than 15 min, (4) significant impairment of visual (e.g., blindness, glaucoma, vision uncorrectable to 20/40) or hearing (e.g., hearing loss) abilities, (5) presence or history of pervasive developmental disorder (e.g., autism) or intellectual disability (defined as IQ <70), or (6) current diagnosis of substance use disorder. Data were collected across three sites between December 2018 and June 2021: The University of Texas at Dallas, University of Miami, and University of California, San Diego, resulting in a total of 376 participants. Data were collected as part of a larger study, during which COVID-19 related restrictions on in-person data collection necessitated a transition to remote assessment. Participants were separated into groups based on MCCB administration format and mental health diagnosis: 166 completed the MCCB subtests remotely (93 BD, 43 SCZ, 30 HV) and 210 completed the MCCB subtests in-person (80 BD, 116 SCZ, 14 HV).

Measures

MATRICS Consensus Cognitive Battery (MCCB; Nuechterlein et al., 2008)

Participants completed four tests from the MCCB: two measuring processing speed (Trail Making Test: Part A, Category Fluency: Animal Naming), one assessing verbal working memory (Letter-Number Span), and one assessing verbal learning and memory (Hopkins Verbal Learning Test-Revised; HVLT-R). The Trail Making Test (TMT): Part A (range 0–300 s) is a timed paper-and-pencil task in which participants draw a single line to consecutively connect numbered circles placed irregularly on a sheet of paper. The Category Fluency: Animal Naming test is an oral test in which participants name as many animals as they can in a one-minute period. The Letter-Number Span test (range 0–24) is an orally administered test in which the tester reads a string of numbers and letters, and the participant mentally reorders them (numbers consecutively, then letters alphabetically) and repeats them back. The HVLT-R (range 0–36) is an orally administered test in which the researcher reads aloud a list of 12 words from three different categories and the participant is asked to recall as many words as possible after each of three learning trials.

Social & cognitive measures

Measures assessing global cognitive, social cognitive, and real-world functioning were completed separately by an informant and the research coordinator. Informants were high-contact individuals who knew the participant well and who themselves did not have any psychiatric diagnoses (e.g., first-degree relative, significant other, close friend). All informant reports were collected via telephone. Research coordinators generated ratings using an "all-sources" approach consistent with Harvey et al. (2019) that integrated information gathered from interviews with the patients, informants, and their own experiences with the participants. The Specific Levels of Functioning Scale (SLOF; Schneider and Struening, 1983) is a 30-item survey assessing participants' functioning and behavior across 4 domains: interpersonal relationships, social acceptability, activities of community living, and work skills. Informants responded to items using a 5-point Likert scale, with higher mean values representing better functioning in each domain. The Observable Social Cognition: A Rating Scale (OSCARS; Healey et al., 2015) is an 8-item self-report or interviewer assessment of ability across social cognitive domains (i.e., theory of mind, emotion perception, cognitive rigidity, jumping to conclusions, attributional style), yielding a total score ranging from 8 to 56, with higher scores indicating greater impairment. The Cognitive Assessment Interview (CAI; Ventura et al., 2010) assesses subjective cognitive functioning across 6 domains (10 items): (1) working memory, (2) attention/vigilance, (3) verbal learning and memory, (4) reasoning and problem-solving, (5) speed of processing, and (6) social cognition. The CAI was administered to informants as an oral semi-structured interview, and ratings were made by the researcher according to participant/informant responses. 
In addition to a total score comprised of the sum of all items (range 7–70), a global assessment of function score is also given (range 0–100). Both indices were used here, with higher scores indicating worse cognitive functioning on the summed score and better cognitive functioning on the global assessment of function score.

Symptom assessments

Severity of positive, negative, and general symptoms was assessed with the Positive and Negative Syndrome Scale (PANSS; Kay et al., 1987). Mood symptoms were further assessed with the Montgomery-Asberg Depression Rating Scale (MADRS; Montgomery and Asberg, 1979) and the Young Mania Rating Scale (YMRS; Young et al., 1978). For all measures, higher scores indicate greater severity.

Premorbid intellectual functioning

Estimated premorbid IQ was assessed using the Reading subtest of the Wide Range Achievement Test-III (WRAT-III; Snelbaker et al., 2001).

Procedures

All participants provided documented informed consent, and IRBs at the University of Texas at Dallas, University of California San Diego, and University of Miami approved the study. In-person visits took place in labs on campus, while remote visits were conducted via telephone. Research staff had a bachelor's degree or higher and were trained over the course of several weeks, within and across sites, to administer and score all assessments both in-person and remotely. After establishing reliability (ICCs > 0.80), regular consensus meetings were held to maintain acceptable reliability between raters over time.

Remote visit task modifications

All tasks and interviews were completed primarily via telephone and required minimal modification. Those tasks that are typically administered orally (i.e., Animal Fluency, Letter-Number Span, HVLT-R) were implemented as is. The forms needed to complete the MCCB Trail Making Test were mailed to participants in advance of their appointments in separate, sealed envelopes with instructions to only open these materials when prompted and observed by the examiner. A supplemental video call via smartphone or tablet was used during the Trail Making Test so that researchers could accurately gauge participants' time to completion. Participants were texted or emailed a link to view the WRAT-III stimuli. PANSS ratings for 4 items that required prolonged visual behavioral observations (i.e., Blunted Affect, Tension, Mannerisms and Posturing, and Motor Retardation) were omitted from both in-person and remote participants' total scores. Prior to task administration, participants were instructed to move to a quiet environment without distractions (e.g., away from other individuals, silencing/powering down extraneous devices) and researchers ensured that the participant could hear them well. Participants were also asked to refrain from using any performance aids, such as writing down stimulus items, or seeking help from others.

Statistical analyses

Groups were first split by diagnosis (BD, SCZ, HV), and demographic differences between administration format groups (remote vs. in-person administration) were assessed using independent-samples t-tests or chi-square (χ²) tests as appropriate. Extreme outliers (±3 SDs) on each task were excluded from analyses task-by-task, resulting in slight N differences between tasks (see Table 2). Because Trail Making scores are completion times, they were more likely than scores from other tasks to be outliers and thus to be excluded. The numbers of participants performing at levels consistent with floor/ceiling effects on each of the MCCB subtests were also assessed to evaluate score distributions in each administration format.
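The task-by-task exclusion rule described above can be sketched as follows; a minimal illustration with hypothetical completion times (the function name and data are mine, not the authors'):

```python
import numpy as np

def drop_extreme_outliers(scores, n_sd=3.0):
    """Drop scores more than n_sd standard deviations from the task mean,
    mirroring the task-by-task +/- 3 SD exclusion described above."""
    scores = np.asarray(scores, dtype=float)
    mean, sd = scores.mean(), scores.std(ddof=1)
    keep = np.abs(scores - mean) <= n_sd * sd
    return scores[keep]

# Hypothetical Trail Making completion times (seconds)
times = list(range(25, 35)) * 3   # 30 unremarkable scores
times[-1] = 300                   # one extreme completion time
cleaned = drop_extreme_outliers(times)
print(len(times) - len(cleaned))  # 1: only the 300 s score is dropped
```

Note that with very small samples a single extreme value inflates the sample SD enough that the ±3 SD rule cannot fire, which is why the rule is applied within each full task distribution rather than within tiny subgroups.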
Table 2

In-person and remote MCCB test scores and effect sizes.


Values are N, M (SD); Cohen's d compares in-person vs. remote within each diagnostic group.

TMT-A
  BD:  in-person 79, 28.86 (10.98); remote 88, 33.70 (14.90); d = 0.37
  SCZ: in-person 114, 36.07 (16.41); remote 35, 46.33 (28.09); d = 0.45⁎⁎
  HV:  in-person 13, 30.50 (10.59); remote 29, 38.20 (13.62); d = 0.63
LNS
  BD:  in-person 80, 12.93 (3.63); remote 93, 14.99 (3.75); d = 0.56⁎⁎
  SCZ: in-person 116, 11.80 (4.02); remote 43, 11.86 (4.36); d = 0.01
  HV:  in-person 14, 15.00 (4.06); remote 30, 15.93 (2.85); d = 0.27
Animal fluency
  BD:  in-person 80, 23.45 (5.84); remote 92, 24.32 (6.33); d = 0.14
  SCZ: in-person 115, 19.93 (5.79); remote 43, 20.81 (6.43); d = 0.14
  HV:  in-person 14, 25.57 (5.61); remote 30, 22.87 (4.53); d = 0.53
HVLT-R
  BD:  in-person 79, 23.99 (6.17); remote 91, 22.79 (5.35); d = 0.21
  SCZ: in-person 114, 20.19 (5.61); remote 43, 19.00 (6.51); d = 0.20
  HV:  in-person 14, 27.86 (4.09); remote 30, 24.87 (4.85); d = 0.67

Note. TMT-A = Trail Making Test- Part A (completion times; higher scores indicate worse performance); LNS = Letter-Number Span (range 0–24; higher scores indicate better performance); Animal Fluency (range 0–48; higher scores indicate better performance); HVLT-R = Hopkins Verbal Learning Test- Revised (range 0–36; higher scores indicate better performance).

Cohen's d = (M2 − M1) / SDpooled, where SDpooled = √((SD1² + SD2²) / 2); small effect, 0.20; medium effect, 0.50; large effect, 0.80.

Six scores were excluded from TMT-A (1 BD in-person, 1 BD remote, 2 SCZ in-person, 1 SCZ remote, 1 HV in-person) and 2 scores were excluded from HVLT-R (1 BD in-person, 1 SCZ in-person).

Due to various extraneous circumstances, twelve individuals did not complete the TMT-A (4 BD remote, 7 SCZ remote, 1 HV remote), two did not complete Animal Fluency (1 BD remote, 1 SCZ in-person), and one individual did not complete the HVLT-R (1 SCZ in-person).

⁎ p < .05.

⁎⁎ p < .01.
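The effect sizes in Table 2 use the equal-weight pooled-SD form of Cohen's d given in the table note; as a quick check, it can be computed directly (a minimal sketch; the function name is mine):

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d = (M2 - M1) / SDpooled, with SDpooled = sqrt((SD1^2 + SD2^2) / 2)."""
    sd_pooled = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m2 - m1) / sd_pooled

# SCZ Trail Making completion times from Table 2: in-person vs. remote
print(round(cohens_d(36.07, 16.41, 46.33, 28.09), 2))  # 0.45, matching the table
```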

Separate two-way analyses of covariance (ANCOVAs) were then conducted to identify statistically significant effects of diagnosis (BD, SCZ) and administration format on MCCB test performance, controlling for PANSS symptom ratings (positive, negative). An additional independent-samples t-test examined the effect of administration format among healthy volunteers. PANSS ratings were converted to per-item averages for each participant to account for the difference in the number of items rated between administration formats. To determine whether the strength of associations with related constructs differed as a function of administration format, Pearson's r correlations were compared by applying Fisher's r-to-z transformation and then calculating observed z values (z_observed).
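The correlation comparison described above can be sketched as follows; a minimal implementation of Fisher's r-to-z test for two independent correlations (the r and n values are illustrative, not taken from the study):

```python
import math

def compare_correlations(r1, n1, r2, n2):
    """Fisher r-to-z test for two independent Pearson correlations.
    Returns the observed z statistic and a two-tailed p value."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher transform of each r
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # SE of the difference
    z_obs = (z1 - z2) / se
    # Two-tailed p from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z_obs) / math.sqrt(2))))
    return z_obs, p

# Illustrative: in-person r = .50 (n = 100) vs. remote r = .20 (n = 80)
z_obs, p = compare_correlations(0.50, 100, 0.20, 80)
print(round(z_obs, 2), round(p, 3))  # 2.27 0.023
```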

Results

Demographics

All groups were similar on age, gender, race, ethnicity, years of education, and estimated IQ (see Table 1). Compared to remote participants, in-person BD and SCZ groups had significantly higher ratings of positive symptoms (BD: t(171) = 3.946, p < .001; SCZ: t(157) = 3.324, p = .001), and in-person individuals with BD also had higher ratings of negative symptoms (t(171) = 2.945, p = .004).
Table 1

Participant demographics.


BD (n = 173): in-person n = 80, remote n = 93. SCZ (n = 159): in-person n = 116, remote n = 43. HV (n = 44): in-person n = 14, remote n = 30.
Values are N (%) or M (SD), reported as in-person vs. remote; p values are from t or χ² tests comparing administration formats within each diagnostic group.

Gender
 Female: BD 52 (65%) vs. 64 (68.8%), p = .594; SCZ 64 (55.2%) vs. 17 (39.5%), p = .069; HV 9 (64.3%) vs. 15 (50%), p = .322
 Male: BD 28 (35%) vs. 29 (31.2%); SCZ 52 (44.8%) vs. 25 (58.1%); HV 5 (35.7%) vs. 15 (50%)
 Other: BD 0 vs. 0; SCZ 0 vs. 1 (2.3%); HV 0 vs. 0
Age: BD 40.5 (11.7) vs. 37.8 (11.0), p = .119; SCZ 41.7 (10.6) vs. 40.4 (12.0), p = .494; HV 35.9 (9.7) vs. 37.0 (13.0), p = .761
Race
 Caucasian: BD 44 (55%) vs. 60 (64.5%), p = .265; SCZ 38 (32.8%) vs. 17 (39.5%), p = .521; HV 10 (71.4%) vs. 15 (50%), p = .239
 African American: BD 21 (26.3%) vs. 15 (16.1%); SCZ 61 (52.6%) vs. 23 (53.5%); HV 2 (14.3%) vs. 11 (36.7%)
 American Indian/Alaskan Native: BD 1 (1.3%) vs. 0; SCZ 1 (0.9%) vs. 0; HV 0 vs. 1 (3.3%)
 Asian: BD 4 (5%) vs. 9 (9.7%); SCZ 2 (1.7%) vs. 1 (2.3%); HV 1 (7.1%) vs. 3 (10%)
 Native Hawaiian/Other Pacific Islander: BD 1 (1.3%) vs. 0; SCZ 1 (0.9%) vs. 1 (2.3%); HV 0 vs. 0
 Other: BD 9 (11.3%) vs. 9 (9.7%); SCZ 13 (11.2%) vs. 1 (2.3%); HV 1 (7.1%) vs. 0
Years of education: BD 14.3 (2.6) vs. 14.7 (2.2), p = .303; SCZ 12.6 (2.3) vs. 12.9 (2.4), p = .352; HV 15.0 (1.1) vs. 13.5 (1.8), p = .199
IQ (WRAT-3): BD 101.4 (11.8) vs. 103.8 (11.0), p = .170; SCZ 96.2 (12.3) vs. 96.3 (12.3), p = .973; HV 104.1 (8.4) vs. 101.1 (11.5), p = .471
Ethnicity
 Hispanic: BD 22 (27.5%) vs. 25 (27.2%), p = .962; SCZ 30 (25.9%) vs. 6 (14%), p = .111; HV 3 (21.4%) vs. 8 (26.7%), p = .752
 Non-Hispanic: BD 58 (72.5%) vs. 67 (72.8%); SCZ 86 (74.1%) vs. 37 (86%); HV 11 (78.6%) vs. 22 (73.3%)
Psychopathology
 PANSS positive (mean): BD 1.9 (0.75) vs. 1.5 (0.55), p < .001⁎⁎; SCZ 2.4 (0.65) vs. 2.0 (0.66), p = .001⁎⁎
 PANSS negative (mean): BD 1.6 (0.45) vs. 1.4 (0.46), p = .004⁎⁎; SCZ 2.0 (0.59) vs. 1.8 (0.69), p = .206
 PANSS general (mean): BD 2.1 (0.54) vs. 1.9 (0.41), p = .056; SCZ 2.0 (0.50) vs. 2.0 (0.38), p = .923
 MADRS: BD 13.1 (10.8) vs. 13.0 (10.3), p = .939; SCZ 9.7 (10.7) vs. 11.7 (10.2), p = .288
 YMRS: BD 3.3 (4.6) vs. 2.9 (4.3), p = .545; SCZ 0.9 (3.1) vs. 1.8 (4.6), p = .186
Site
 UT Dallas: BD 27 (33.8%) vs. 46 (49.5%); SCZ 45 (38.8%) vs. 24 (55.8%); HV 14 (100%) vs. 30 (100%)
 U Miami: BD 14 (17.5%) vs. 16 (17.2%); SCZ 43 (37.1%) vs. 9 (20.9%); HV 0 vs. 0
 UCSD: BD 39 (48.8%) vs. 31 (33.3%); SCZ 28 (24.1%) vs. 10 (23.3%); HV 0 vs. 0

Note. PANSS = Positive and Negative Syndrome Scale; MADRS = Montgomery-Asberg Depression Rating Scale; YMRS = Young Mania Rating Scale.

⁎ p < .05.

⁎⁎ p < .01.


Score distributions

Outliers represented less than 5% of the overall sample and were distributed relatively evenly across groups (see Table 2). No participants in either administration format performed at ceiling or floor on any of the tasks.

Diagnosis × format results

ANCOVAs were conducted separately for each MCCB subtest: (1) Trail Making, (2) Letter-Number Span, (3) Animal Fluency, and (4) HVLT-R. Descriptive statistics for performance are provided in Table 2. As anticipated, there was a significant main effect of group (BD vs. SCZ) on all MCCB tests: Trail Making (F(1,310) = 6.544, p = .011), Letter-Number Span (F(1, 326) = 6.665, p = .003), Animal Fluency (F(1, 324) = 12.985, p < .001), HVLT-R (F(1,321) = 12.056, p < .001), showing that individuals with bipolar disorder performed significantly better than individuals with schizophrenia/schizoaffective disorders across all tasks. There was a significant main effect of administration format on Trail Making (F(1, 310) = 22.393, p < .001) and HVLT-R (F(1, 321) = 6.499, p = .007), with remote participants performing worse on both tasks. No other tasks showed significant main effects of format. The only task that had a significant interaction between diagnosis and administration format was Letter-Number Span (F(1, 326) = 4.487, p = .05), indicating that individuals with bipolar disorder performed significantly better on this task when it was administered remotely. There were no statistically significant differences in MCCB task performance by administration format in the healthy volunteer group (all p values > .05).

Relationship to symptom severity

Across diagnostic groups and administration formats, higher severity of negative symptoms significantly correlated with poorer performance on several MCCB tests. In BD, positive symptoms, depression (MADRS), and mania (YMRS) did not correlate with performance on any MCCB tests (Table 3). In the SCZ group, increased positive symptoms significantly correlated with higher scores on Animal Fluency, regardless of administration format, and increased depressive symptoms were positively correlated with HVLT-R performance in the remote group (Table 4).
Table 3

Task correlations for in-person and remote administration in BD.


                        TMT-A               LNS                 Animal fluency      HVLT-R
                        In-pers   Remote    In-pers   Remote    In-pers   Remote    In-pers   Remote
Age                     0.36⁎⁎    0.20      −0.31⁎⁎   −0.09     −0.13     −0.11     −0.35⁎⁎   −0.29⁎⁎
YOE                     0.09      −0.19     0.23      0.17      0.35⁎⁎    0.27      0.37⁎⁎    0.10
PANSS positive          0.12      −0.04     −0.01     −0.12     −0.04     0.05      −0.17     −0.06
PANSS negative          0.07      0.31⁎⁎    −0.29⁎⁎   −0.10     −0.37⁎⁎   −0.27     −0.39⁎⁎   −0.12
PANSS general           0.14      −0.10     −0.14     −0.08     −0.26     −0.06     −0.16     −0.04
PANSS total             0.15      0.10      −0.17     −0.15     −0.26     −0.13     −0.30⁎⁎   −0.11
MADRS                   0.08      0.03      0.04      0.02      −0.16     −0.06     −0.05     0.13
YMRS                    0.12      −0.14     −0.08     0.01      −0.02     0.11      −0.10     −0.05
LNS                     −0.19     −0.09
Animal fluency          −0.27     −0.35⁎⁎   0.47⁎⁎    0.36⁎⁎
HVLT-R                  −0.24     −0.17     0.44⁎⁎    0.58⁎⁎    0.57⁎⁎    0.32⁎⁎
SLOF- RA                −0.20     −0.24     0.23      0.08      0.29⁎⁎    0.08      0.30⁎⁎    0.08
OSCARS- RA              0.13      0.26      −0.23     −0.24     −0.14     −0.12     −0.26     −0.33⁎⁎
CAI overall- RA         0.23      0.25      −0.16     −0.38⁎⁎   −0.25     −0.21     −0.25     −0.32⁎⁎
CAI GAF- RA             −0.11     −0.23     0.22      0.40⁎⁎    0.25      0.11      0.30⁎⁎    0.38⁎⁎
SLOF- informant         −0.17     −0.36⁎⁎   −0.13     0.11      0.08      0.09      −0.06     0.08
OSCARS- informant       0.12      0.37⁎⁎    −0.05     −0.16     −0.07     −0.22     −0.04     −0.23
CAI overall- informant  0.08      0.41⁎⁎    0.08      −0.16     −0.02     −0.20     0.01      −0.14
CAI GAF- informant      −0.03     −0.28     −0.12     0.16      0.01      0.19      0.002     0.16

Note. Bold text indicates correlations that differ significantly in strength between in-person and remote formats. TMT-A = Trail Making Test- Part A; LNS = Letter-Number Span; HVLT-R = Hopkins Verbal Learning Test- Revised; YOE = years of education; PANSS = Positive and Negative Syndrome Scale; MADRS = Montgomery-Asberg Depression Rating Scale; YMRS = Young Mania Rating Scale; SLOF = Specific Levels of Functioning; OSCARS = Observable Social Cognition: A Rating Scale; CAI = Cognitive Assessment Interview; GAF = Global Assessment of Functioning.

⁎ p < .05.

⁎⁎ p < .01.

Table 4

Task correlations for in-person and remote administration in SCZ.


                        TMT-A               LNS                 Animal fluency      HVLT-R
                        In-pers   Remote    In-pers   Remote    In-pers   Remote    In-pers   Remote
Age                     0.14      −0.01     −0.16     −0.07     −0.03     −0.34     −0.05     −0.03
YOE                     −0.14     −0.11     0.25⁎⁎    0.31      0.16      0.16      0.37⁎⁎    0.17
PANSS positive          0.18      −0.11     −0.14     0.23      0.24      0.31      −0.09     0.22
PANSS negative          0.25⁎⁎    0.57⁎⁎    −0.25⁎⁎   −0.55⁎⁎   −0.36⁎⁎   −0.52⁎⁎   −0.09     −0.50⁎⁎
PANSS general           0.01      0.09      −0.19     −0.04     0.17      −0.04     −0.17     −0.02
PANSS total             0.22      0.38      −0.28⁎⁎   −0.23     0.02      −0.16     0.01      −0.20
MADRS                   −0.06     −0.28     −0.02     0.19      0.07      0.22      0.03      0.30
YMRS                    −0.04     0.03      0.01      −0.02     0.04      0.15      0.04      0.28
LNS                     −0.33⁎⁎   −0.55⁎⁎
Animal fluency          −0.35⁎⁎   −0.35     0.29⁎⁎    0.49⁎⁎
HVLT-R                  −0.41⁎⁎   −0.41     0.51⁎⁎    0.47⁎⁎    0.39⁎⁎    0.56⁎⁎
SLOF- RA                −0.19     −0.04     0.28⁎⁎    0.23      0.09      0.09      0.16      0.26
OSCARS- RA              0.25⁎⁎    −0.01     −0.29⁎⁎   −0.06     −0.09     0.04      −0.16     −0.22
CAI overall- RA         0.30⁎⁎    0.40      −0.21     −0.49⁎⁎   −0.23     −0.31     −0.16     −0.37
CAI GAF- RA             −0.28⁎⁎   −0.37     0.27⁎⁎    0.41⁎⁎    0.23      0.18      0.15      0.30
SLOF- informant         −0.15     −0.10     0.25      0.05      0.25      0.001     0.001     0.07
OSCARS- informant       0.09      0.34      −0.20     0.03      −0.24     0.09      0.03      −0.25
CAI overall- informant  0.23      0.06      −0.08     −0.45     −0.24     −0.11     0.06      −0.16
CAI GAF- informant      −0.22     −0.15     0.08      0.33      0.29⁎⁎    0.05      −0.16     0.29

Note. Bold text indicates correlations that differ significantly in strength between in-person and remote formats. TMT-A = Trail Making Test- Part A; LNS = Letter-Number Span; HVLT-R = Hopkins Verbal Learning Test- Revised; YOE = years of education; PANSS = Positive and Negative Syndrome Scale; MADRS = Montgomery-Asberg Depression Rating Scale; YMRS = Young Mania Rating Scale; SLOF = Specific Levels of Functioning; OSCARS = Observable Social Cognition: A Rating Scale; CAI = Cognitive Assessment Interview; GAF = Global Assessment of Functioning.

⁎ p < .05.

⁎⁎ p < .01.


Relationship to functional outcomes

Across administration types and diagnostic groups, both informant and researcher (RA) ratings on the SLOF, OSCARS, and CAI were significantly associated with performance on several MCCB tests, with varying correlation strengths. As expected, correlations with MCCB tasks were strongest for the CAI, which assesses cognitive functioning (Table 3, Table 4).

Differences in strengths of associations

There were relatively few differences in correlation strengths across administration formats (see Supplemental Table 1). For the BD group, Trail Making showed the highest number of discrepancies based on remote vs. in-person administration, with 4 pairs of correlations (out of 19) showing significant differences. Letter-Number Span had no differences, whereas Animal Fluency and HVLT-R each had one (see Table 3). For the SCZ group, Letter-Number Span and HVLT-R had 4 and 3 discrepancies, respectively, between administration formats. Trail Making and Animal Fluency each had only one (see Table 4). Discrepancies were not concentrated in any particular domains; however, within the SCZ group, most discrepancies occurred for positive and negative symptoms, with stronger correlations in the remote group as compared to in-person.
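Supplemental Table 1 reports z-observed values for these comparisons. For two independent samples, the standard way to test whether correlation strengths differ is Fisher's r-to-z transformation; the sketch below is a generic illustration of that test, not the authors' code. The correlation values are hypothetical; the sample sizes mirror the BD in-person (n = 80) and remote (n = 93) groups from the abstract.

```python
import math

def fisher_z_compare(r1, n1, r2, n2):
    """Compare two independent Pearson correlations via Fisher's r-to-z.

    z_observed = (atanh(r1) - atanh(r2)) / sqrt(1/(n1-3) + 1/(n2-3))
    Returns (z_observed, two-tailed p-value).
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)           # r-to-z transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))   # SE of the difference
    z_obs = (z1 - z2) / se
    # Two-tailed p from the standard normal CDF, computed with erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z_obs) / math.sqrt(2.0))))
    return z_obs, p

# Hypothetical correlations; ns match the BD in-person/remote group sizes
z_obs, p = fisher_z_compare(r1=0.50, n1=80, r2=0.20, n2=93)
print(f"z_observed = {z_obs:.2f}, p = {p:.3f}")
```

A |z_observed| above 1.96 corresponds to a significant difference at the .05 level, which is the criterion implied by the bolded entries in Tables 3 and 4.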

Discussion

With advances in technology and potential limitations to in-person testing, adaptation of cognitive functioning assessments for remote administration may be a viable option for assessing cognitive abilities in individuals with severe mental illness. This study provides an initial assessment of the validity of remote administration of select MCCB subtests (Trail Making, Letter-Number Span, Animal Fluency, and HVLT-R) in individuals with schizophrenia-spectrum disorders and bipolar disorder. As anticipated, the bipolar group performed significantly better than the schizophrenia-spectrum group on all MCCB tasks, regardless of administration format, supporting previous findings that individuals with bipolar disorder have higher levels of cognitive functioning than individuals with schizophrenia-spectrum disorders (Bortolato et al., 2015; Krabbendam et al., 2005; Lynham et al., 2018; Bora and Pantelis, 2015). Additionally, this finding provides some validation for remote telephone administration of the MCCB, given that group differences were evident in both formats. Further supporting the validity of remote assessment, we found that, across diagnostic groups and administration formats, MCCB task performance was significantly correlated with symptom severity (especially negative symptoms), social functioning, and overall cognitive functioning, in line with previous research (e.g., August et al., 2012; Harvey et al., 2006). While the strength of some correlations varied between administration formats, only 8.89% of these differences reached significance in the BD group and 11.11% in the SCZ group, suggesting comparable patterns of correlation for the majority of tasks.
While it is not possible to draw definitive conclusions from the current data, having a social versus non-social environment during testing, as well as symptom severity and task attention/engagement, may explain some of the differences in correlation strength. Future research should examine factors that may moderate how remote versus in-person cognitive performance relates to symptom severity and to social and non-social functioning. In terms of specific subtests, administration format did not appear to affect performance on the Animal Fluency task. This test has relatively short and simple instructions and does not require any back-and-forth between the administrator and participant, which may explain why performance did not differ significantly between administration types. Thus, Animal Fluency can be validly administered via telephone. However, performance on both the HVLT-R and Trail Making was worse when administered remotely. As noted previously, tasks that are complex and demanding from the outset, like the HVLT-R, may be more difficult when administered by phone versus in person, and similar findings have been reported for the CVLT (Berns et al., 2004). Slower completion of remotely administered Trail Making may be related to reduced control over participants' testing environments, as well as technological difficulties (e.g., difficulty setting up devices for the video call, lag that prolonged the correction process, and the assessor being unable to point directly at the participant's paper when correcting mistakes). Future studies administering these two tasks might conduct thorough prescreening to ensure strong audio and video connections, or add practice trials to ensure participant understanding. Individuals with bipolar disorder performed better on the Letter-Number Span task when it was administered remotely versus in person, but the reasons for this are unclear.
We did not see the same pattern in individuals with schizophrenia, consistent with Berns et al.'s (2004) findings. While premorbid IQ was slightly higher in the remote BD group, accounting for IQ-related variability had only a minimal effect on the results, increasing the p-value slightly (p = .057); IQ differences are therefore unlikely to account for the interaction effect. Because administrators were not on video with participants during this task, participants could conceivably have cheated; however, because remote BD participants performed worse on the HVLT-R, for which they could also have written down the items, this seems unlikely.

Some limitations require consideration. First, only between-subject comparisons were assessed; definitive examination of the validity of remote assessment would require within-person comparisons between formats. Second, our sample of healthy individuals was relatively small, as was the SCZ remote group. Third, while the current analyses addressed sensitivity to group differences, relationships to functional outcomes, and floor/ceiling effects, a full psychometric analysis that allows examination of test-retest reliability and utility as a repeated measure is still needed. Fourth, the COVID-19 pandemic provided the impetus for adapting our measures to remote administration, and remote data collection occurred exclusively during the pandemic; differences between administration formats may therefore be confounded with the presence of the pandemic. Finally, while remote assessment has many potential benefits, this format may also be less accessible than in-person testing for some demographic groups (e.g., those with reduced access to video calling).
Overall, while not definitive, our results suggest remote telephone-based administration of some MCCB tests may be a feasible and valid method for assessing cognitive functioning in individuals with bipolar disorder or schizophrenia spectrum disorders. The following is the supplementary data related to this article.

Supplemental Table 1

Correlation coefficient comparisons for remote vs. in-person administration (zobserved).

Funding source

This work was supported by the National Institute of Mental Health (grant number R01 MH112620 to A.E.P.).

CRediT authorship contribution statement

Madisen T. Russell: Formal analysis; Investigation; Resources; Data curation; Writing - original draft; Writing - review & editing; Visualization. Kensie M. Funsch: Investigation; Resources; Writing - original draft. Cassi R. Springfield: Resources; Writing - original draft. Robert A. Ackerman: Formal analysis; Writing - review & editing. Colin A. Depp: Conceptualization; Methodology; Resources; Writing - review & editing; Supervision; Project administration. Philip D. Harvey: Conceptualization; Methodology; Resources; Writing - review & editing; Supervision; Project administration. Raeanne C. Moore: Conceptualization; Methodology; Writing - review & editing; Supervision. Amy E. Pinkham: Conceptualization; Methodology; Resources; Formal analysis; Writing - review & editing; Supervision; Funding acquisition; Project administration.

Declaration of competing interest

Dr. Harvey has received consulting fees or travel reimbursements from Alkermes, Bio Excel, Boehringer Ingelheim, Karuna Pharma, Minerva Pharma, SK Pharma, and Sunovion Pharma during the past year. He receives royalties from the Brief Assessment of Cognition in Schizophrenia (owned by VeraSci, Inc. and contained in the MCCB). He is chief scientific officer of i-Function, Inc. Dr. Moore is a co-founder of KeyWise, Inc. and a consultant for NeuroUX. All other authors report no conflicts of interest.