Literature DB >> 33057404

Using a patient-reported outcome to improve detection of cognitive impairment and dementia: The patient version of the Quick Dementia Rating System (QDRS).

James E Galvin1, Magdalena I Tolea1, Stephanie Chrisphonte1.   

Abstract

INTRODUCTION: Community detection of mild cognitive impairment (MCI) and Alzheimer's disease and related disorders (ADRD) is a challenge. While Gold Standard assessments are commonly used in research centers, these methods are time consuming, require extensive training, and are not practical in most clinical settings or in community-based research projects. Many of these methods require an informant (e.g., spouse, adult child) to provide ratings of the patients' cognitive and functional abilities. A patient-reported outcome that captures the presence of cognitive impairment and corresponds to Gold Standard assessments could improve case ascertainment, clinical care, and recruitment into clinical research. We tested the patient version of the Quick Dementia Rating System (QDRS) as a patient-reported outcome to detect MCI and ADRD.
METHODS: The patient QDRS was validated in a sample of 261 consecutive patient-caregiver dyads compared with the informant version of the QDRS, the Clinical Dementia Rating (CDR), neuropsychological tests, and Gold Standard measures of function, behavior, and mood. Psychometric properties including item variability, floor and ceiling effects, construct, concurrent, and known-groups validity, and internal consistency were determined.
RESULTS: The patient QDRS strongly correlated with Gold Standard measures of cognition, function, mood, behavior, and global staging methods (p-values < .001) and had strong psychometric properties with excellent data quality and internal consistency (Cronbach alpha = 0.923, 95% CI: 0.91-0.94). The patient QDRS had excellent agreement with the informant QDRS, the CDR, and its sum of boxes (intraclass correlation coefficients: 0.781-0.876). Receiver operator characteristic curves showed excellent discrimination of normal controls from CDR 0.5 (AUC: 0.820; 95% CI: 0.74-0.90) and of normal controls from any cognitive impairment (AUC: 0.885; 95% CI: 0.83-0.94).
DISCUSSION: The patient QDRS validly and reliably differentiates individuals with and without cognitive impairment and can be completed by patients through all stages of dementia. The patient QDRS is highly correlated with Gold Standard measures of cognition, function, behavior, and global staging. The patient QDRS provides a rapid method to screen patients for MCI and ADRD in clinical practice, determine study eligibility, and improve case ascertainment in community studies.


Year:  2020        PMID: 33057404      PMCID: PMC7561106          DOI: 10.1371/journal.pone.0240422

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Background

Alzheimer’s disease and related dementias (ADRD) currently affect over 5.7 million Americans and over 35 million people worldwide [1]. The number of ADRD cases is expected to increase as the number of people over age 65 grows by 62% and the number over age 85 grows by 84% [1-3]. More than one in eight adults over age 65 has dementia, and current projections indicate a three-fold increase by 2050 [1]. Community detection of mild cognitive impairment (MCI) [4] and early Alzheimer’s disease (AD) [5] and related disorders may be limited by the lack of screening tests that characterize the earliest signs of impairment, monitor response to interventions, and correspond to biomarkers [6, 7], and by the potential benefits versus harms of screening [3]. The inability to detect MCI and ADRD may affect eligibility determination for care and services and impede case ascertainment and recruitment into clinical research. Primary care providers are often responsible for the detection, diagnosis, and treatment of ADRD because the number of dementia specialists (neurologists, psychiatrists, and geriatricians) and specialty centers is not sufficient to meet the growing demand [2]. Gold Standard evaluations such as the Clinical Dementia Rating (CDR) [8] are used in many research projects but require a trained clinician to administer, interpret, and score, and require an extended period of time with both an informant and the patient. While feasible in a research setting such as a clinical trial or longitudinal observational study, the CDR is not practical in primary care settings or for use in epidemiologic case-ascertainment projects. Briefer evaluation tools are often used in these settings.
These briefer tools can be grouped into performance-based assessments, including the Mini Mental State Exam [9] or Mini-Cog [10], and interview-based assessments, usually completed with an informant, such as the AD8 [11, 12], Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) [13], or Quick Dementia Rating System (QDRS) [14]. Both brief approaches have limitations. Performance measures can be biased by education, language, and culture, which can lower their accuracy in underrepresented groups. Brief tests may not provide a sense of change or functional impairment if prior testing has not been done [2]. It has been reported that up to 38% of patients refuse cognitive screening tests in primary care offices [15, 16]. Furthermore, ADRD can be insidious in its onset, with symptoms fluctuating over time [17]. Informant-based measures depend on the patient being able to identify a reliable, observant informant, and in many cases patients may not be accompanied by an informant in a clinical setting. Patient Reported Outcome (PRO) approaches may be able to overcome these barriers to early detection of ADRD in primary care practices [18, 19]. While performance tests have biases associated with age, education, race, language, and culture, PROs are intraindividual assessments based on what the patient believes is occurring over a defined time period, or in comparison to a prior time [2]. PROs may provide valid information on patient functional status and well-being, can be used to enhance care quality, and are proposed for use in assessing performance. They could be beneficial in the detection of cognitive impairment if they adequately capture cognitive symptoms and functional impairment and correlate with the Gold Standard assessments commonly used to establish diagnoses [2].
The QDRS was initially tested as an informant rating; here we examined its utility as a self-rated PRO scale to detect MCI and ADRD compared with the informant version of the QDRS, the CDR, and neuropsychological testing.

Methods

Study participants

This study was conducted in 270 consecutive patient-caregiver dyads attending our center for clinical care or for participation in cognitive aging research. During the visit, the patient and caregiver underwent a comprehensive evaluation including the Clinical Dementia Rating (CDR) and its sum of boxes (CDR-SB) [8], the Global Deterioration Scale (GDS) [20], mood assessment, neuropsychological testing, caregiver ratings of patient behavior and function, and a caregiver psychosocial and needs assessment. All components of the assessment are part of standard of care at our center [21], and protocols in the clinic and research projects are identical. A waiver of consent was obtained for clinic patients, and research participants provided written informed consent. This study was approved by the University of Miami Institutional Review Board.

Administration of QDRS

Prior to the in-person visit, a welcome packet was mailed to the patient and caregiver to collect demographics and medical history and included both the caregiver [21] and patient versions of the QDRS. The respondents were given directions to complete the questionnaires independently of each other. The packets including the QDRS were returned prior to the appointment. The QDRS was not considered in the clinical evaluation, staging or diagnosis of the patient.

Clinical assessment

The in-person clinical assessments are modelled on the Uniform Data Set (UDS) 3.0 from the NIA Alzheimer Disease Center program [22, 23]. The CDR [8] was used to determine the presence or absence of dementia and to stage its severity: a global CDR 0 indicates no dementia; CDR 0.5 represents MCI or very mild dementia; and CDR 1, 2, or 3 correspond to mild, moderate, or severe dementia. The CDR-SB was calculated by adding up the individual CDR categories, giving a score from 0–18, with higher scores indicating more severe stages. The GDS [20] was determined to provide a global cognitive and functional stage: a GDS 1 indicates no cognitive impairment; GDS 2 indicates subjective cognitive impairment; GDS 3 corresponds to mild cognitive impairment; and GDS 4–7 correspond to mild, moderate, moderately severe, or severe dementia [20]. Diagnoses were determined using standard criteria for MCI [4], AD [5], dementia with Lewy bodies (DLB) [24], vascular dementia (VaD) [25], and frontotemporal degeneration (FTD) [26]. Extrapyramidal features were assessed with the Movement Disorders Society-Unified Parkinson’s Disease Rating Scale, motor subscale part III (UPDRS) [27]. The Charlson Comorbidity Index [28] was used to measure overall health and medical comorbidities. The risk of vascular contributions to dementia was assessed with the modified Hachinski scale [29]. The presence of physical frailty was assessed with the Fried Frailty Scale [30].
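The staging conventions above can be sketched as a small helper. This is a minimal illustration of the scoring arithmetic only; the stage labels follow the text, and the toy box scores are invented, not study data.

```python
# Sketch of the CDR staging and sum-of-boxes conventions described above.
CDR_STAGE = {
    0.0: "no dementia",
    0.5: "MCI or very mild dementia",
    1.0: "mild dementia",
    2.0: "moderate dementia",
    3.0: "severe dementia",
}

def cdr_sum_of_boxes(boxes):
    """CDR-SB: sum of the six CDR domain scores, giving a 0-18 range."""
    if len(boxes) != 6:
        raise ValueError("the CDR has six domains")
    return sum(boxes)

# Toy ratings for the six domains (memory, orientation, judgment,
# community affairs, home and hobbies, personal care)
boxes = [0.5, 0.5, 1.0, 0.5, 0.5, 0.0]
print(cdr_sum_of_boxes(boxes))   # -> 3.0
print(CDR_STAGE[0.5])            # -> MCI or very mild dementia
```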

Cognitive assessment

Each patient was administered a 30-minute test battery at the time of the office visit to assess their cognitive status. The psychometrician was unaware of the diagnosis, CDR global score, or QDRS scores. The Montreal Cognitive Assessment [31] was used for a global screen. The rest of the battery was modeled after the UDS battery used in the NIA Alzheimer Disease Centers [23] supplemented with additional measures: 15-item Multilingual Naming Test (naming) [23]; Animal naming and Letter fluency (verbal fluency) [23]; Hopkins Verbal Learning Task (episodic memory for word lists–immediate, delayed, and cued recall) [32]; Number forward/backward and Months backwards tests (working memory) [23]; Trailmaking A and B (processing and visuospatial abilities) [33]; and a novel Number-Symbol Coding Test (executive function). Mood was assessed with the Hospital Anxiety Depression Scale [34] providing subscale scores for depression (HADS-D) and anxiety (HADS-A).

Caregiver ratings of patient cognition, function, and behavior

Standardized scales were administered to the caregivers to provide ratings of cognition, function, and behavior. In addition to the caregiver version of the QDRS, activities of daily living were captured with the Functional Activities Questionnaire (FAQ) [35]. Dementia-related behaviors and psychological features were measured with the Neuropsychiatric Inventory (NPI) [36]. Patient daytime sleepiness was assessed with the Epworth Sleepiness Scale (ESS) [37], while daytime alertness was rated on a 0–10 Likert scale (“Rate the patient’s general level of alertness for the past 3 weeks on a scale from 0 to 10”) anchored by “Fully and normally awake” (scored 10) and “Sleep all day” (scored 0) [38]. Caregiver burden was captured with the 12-item Zarit Burden Inventory [39].

Statistical analyses

Analyses were conducted with IBM SPSS Statistics v26 (Armonk, NY). Descriptive statistics were used to examine patient and caregiver demographic characteristics, informant rating scales, dementia staging, and neuropsychological testing. One-way analysis of variance (ANOVA) with Tukey-Kramer post-hoc tests was used for continuous data, and Chi-square analyses were used for categorical data. Data completeness was assessed by calculating the rate of missing data for each QDRS item. To assess item variability, the item frequency distributions, range, and standard deviations were calculated. Patient and informant QDRS and CDR-SB scores were examined for floor and ceiling effects. Factor analysis using principal components with a Varimax rotation was performed, revealing a one-factor solution. Total QDRS scores and individual items were examined for their psychometric properties and compared with patient and caregiver characteristics, rating scales, and neuropsychological test performance. QDRS-derived CDR and CDR-SB scores were computed using the first six QDRS domains. The Toileting and Personal Hygiene QDRS domain has a 0.5 category that the CDR Personal Care domain does not; to compare these domains, Toileting and Personal Hygiene scores of 0 or 0.5 are recoded as 0 [14]. Concurrent (criterion) validity was assessed by comparing mean performance on each Gold Standard measure of cognition (e.g., CDR, GDS, neuropsychological testing), function (i.e., FAQ), behavior (e.g., NPI, HADS), and caregiver ratings (e.g., ESS, ZBI) with the patient version of the QDRS using Pearson correlation coefficients [14, 40, 41]. Internal consistency was examined as the proportion of the variability in the responses that results from differences between respondents, reported as the Cronbach alpha reliability coefficient. Coefficients greater than 0.7 indicate good internal consistency [14, 40, 41].
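Two of the computations described above can be sketched in pure Python: Cronbach's alpha and the QDRS-to-CDR-SB recoding rule for the Toileting and Personal Hygiene domain. The item columns and domain names below are toy values for illustration, not study data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha. items: list of item-score columns,
    one inner list per item, respondents in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

def qdrs_derived_cdr_sb(domains):
    """Sum the first six QDRS domains into a CDR-SB-style score, recoding
    a Toileting and Personal Hygiene score of 0.5 to 0 as described above."""
    d = dict(domains)
    if d.get("toileting_personal_hygiene") == 0.5:
        d["toileting_personal_hygiene"] = 0.0
    return sum(d.values())

# Toy example: three items rated by five respondents on the 0-3 QDRS scale
items = [
    [0, 0.5, 1, 2, 3],
    [0, 0.5, 1, 2, 2],
    [0, 1, 1, 2, 3],
]
print(round(cronbach_alpha(items), 3))  # -> 0.978

domains = {"memory": 1.0, "orientation": 0.5, "decision_making": 0.5,
           "activities_outside_home": 0.5, "function_at_home": 0.5,
           "toileting_personal_hygiene": 0.5}
print(qdrs_derived_cdr_sb(domains))     # 0.5 recoded to 0 -> 3.0
```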
The intraclass correlation coefficient (ICC) assessed inter-scale reliability, comparing the patient and informant versions of the QDRS (individual questions, total score, and QDRS-derived CDR and CDR-SB) with the independently determined CDR global score and CDR-SB while correcting for chance agreement [14, 40, 41]. Simple agreement (i.e., the proportion of responses on which two observations agree, such as a Pearson or Spearman correlation coefficient) is strongly influenced by the distribution of positive and negative responses and by agreement arising from chance alone. The ICC instead examines the proportion of responses in agreement in relation to the agreement expected by chance [14, 40, 41]. An ICC between 0.55 and 0.75 is considered good agreement, whereas an ICC greater than 0.76 is considered excellent [42]. Receiver operator characteristic (ROC) curves were used to assess discrimination between CDR stages by the patient QDRS. Three analyses were performed. The first discriminated CDR 0 vs CDR 0.5, which is generally the most difficult staging distinction. The second discriminated CDR 0 from CDR >0. The third examined the discrimination properties of the patient QDRS, the MoCA, and their combination. Results are reported as area under the curve (AUC) with 95% confidence intervals (CIs). Known-groups validity was assessed by examining QDRS scores by CDR staging and dementia etiology [14, 40, 41]. Multiple comparisons were addressed using the Bonferroni correction.
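The agreement and discrimination statistics above can also be sketched directly. The paper does not state which ICC form SPSS was run with; the sketch below assumes the common two-way random-effects, single-measure ICC(2,1), and computes AUC via its pairwise (Mann-Whitney) interpretation, both on toy ratings rather than study data.

```python
def icc_2_1(ratings):
    """Two-way random-effects, single-measure ICC(2,1) for an
    n-subjects x k-raters table (e.g., patient vs informant QDRS)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    rater_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ms_subj = ss_subj / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_err = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (
        ms_subj + (k - 1) * ms_err + k * (ms_rater - ms_err) / n)

def roc_auc(impaired, controls):
    """AUC as the probability that a randomly chosen impaired score
    exceeds a randomly chosen control score (ties count one half)."""
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in impaired for q in controls)
    return wins / (len(impaired) * len(controls))

# Identical patient/informant ratings give perfect agreement
print(icc_2_1([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]))  # -> 1.0
# Toy QDRS totals for an impaired group vs a control group
print(round(roc_auc([2, 3, 4], [0, 1, 2]), 3))            # -> 0.944
```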

Results

Sample characteristics

The mean age of the patients was 75.7±8.9 years, with 15.4±2.7 years of education. The mean age of the caregivers was 55.5±15.1 years, with 16.0±2.6 years of education. The sample was 96.5% White, 3.1% African American, and 0.4% Asian, with 10.4% reporting Hispanic ethnicity. The cognitively impaired group (CDR >0) had a higher proportion of White patients than the CDR 0 healthy controls (94.9% vs 81.0%), while the healthy control group had a higher proportion of African American patients (11.9% vs 1.4%, χ2 = 14.1, p = .001). The patients had a mean CDR-SB of 4.5±4.7, a mean informant QDRS score of 6.4±6.3, a mean patient QDRS score of 4.5±4.9, and a mean MoCA score of 18.5±7.1. Caregivers were mostly spouses (65.2%), adult children (21.0%), or other individuals (13.8%), with 69.1% reporting living with the patient and having daily contact. The sample covered a range of healthy controls (CDR 0 = 41), MCI or very mild dementia (CDR 0.5 = 119), mild dementia (CDR 1 = 59), moderate dementia (CDR 2 = 35), and severe dementia (CDR 3 = 17). Consensus clinical diagnoses included: 41 Healthy Controls, 88 MCI, 42 AD, 71 DLB, 18 VaD, 9 FTD, and 1 Undefined dementia. All CDR 0 patients were able to complete the patient QDRS, while 9 individuals with cognitive impairment could not (one CDR 0.5, two CDR 1, two CDR 2, and four CDR 3), leaving a total of 261 patients who completed the patient QDRS. Diagnoses of the patients who could not complete the patient QDRS included 2 AD, 3 DLB, 1 VaD, and 3 FTD. Mean performances on all patient and caregiver rating scales used in this study, by CDR staging, are listed in the accompanying table; both the informant and patient versions of the QDRS increase in score across CDR stages. A second table demonstrates the strength of association between the patient version of the QDRS and other indices of cognition, behavior, and function: the patient QDRS was strongly correlated with all rating scales and neuropsychological tests. Mean (SD).
KEY: FAQ = Functional Activities Questionnaire; NPI = Neuropsychiatric Inventory; HADS-A = Hospital Anxiety and Depression Scale-Anxiety Subscale; HADS-D = Hospital Anxiety and Depression Scale-Depression Subscale; MoCA = Montreal Cognitive Assessment; GDS = Global Deterioration Scale; CDR = Clinical Dementia Rating; CDR-SB = Clinical Dementia Rating Sum of Boxes; QDRS-Inf = Quick Dementia Rating System-Informant Version; QDRS-Pt = Quick Dementia Rating System-Patient Version; UPDRS = Unified Parkinson’s Disease Rating Scale; HVLT = Hopkins Verbal Learning Test; MINT = Multilingual Naming Test; MFQ = Mayo Fluctuations Questionnaire.

QDRS data quality

All items of the QDRS exhibited the full range of possible responses across the five QDRS response options, with few missing items (range 0–0.7%), even in individuals with moderate to severe dementia. The item-level floor effects range from 40.2% (Memory and Recall) to 79.8% (Toileting and Personal Hygiene). The item-level ceiling effects range from 0.4% (Activities Outside the Home) to 5.7% (Function at Home and Hobby Activities). The standard deviation was similar for all items, ranging from 0.5 to 0.8. Thus, data quality for the QDRS was good to excellent.

Reliability and scale score feature of the patient QDRS

The internal consistency of the patient QDRS, a measure based on the correlation between the different QDRS questions, was assessed with the Cronbach alpha coefficient. The internal consistency was excellent at 0.92, comparable to the informant QDRS (0.95) and the CDR-SB (0.96). The patient QDRS covered the range of possible scores, and the mean, median, and standard deviation demonstrated a sufficient dispersion of scores for assessing the patient's self-rating of their cognitive status, with a low percentage of missing data. There was a modest floor effect (18.4%) and no ceiling effect (0%); these ranges were similar to the informant QDRS and the CDR-SB. The patient QDRS was strongly correlated with both the informant QDRS and the CDR-SB. Note: % Floor is the percentage who reported the lowest (best) possible score. % Ceiling is the percentage who reported the highest (worst) possible score. KEY: QDRS-Pt = Quick Dementia Rating System-Patient Version; QDRS-Inf = Quick Dementia Rating System-Informant Version; CDR-SB = Clinical Dementia Rating Sum of Boxes.

Construct (inter-scale) validity of the patient QDRS

The informant and patient versions of the QDRS were compared to each other and to the CDR global score using intraclass correlation coefficients (ICCs). ICCs between the patient and informant QDRS, between the patient QDRS and the CDR global score, and between the informant QDRS and the CDR are excellent for individual items, total QDRS scores, the QDRS-derived CDR global score, and the CDR-SB. The lowest ICC is for memory (ICC = 0.69) between the patient QDRS and the CDR global score. These analyses demonstrate that the patient QDRS has high rates of agreement with both the informant QDRS and the Gold Standard CDR global score. Intraclass correlation coefficient (95% Confidence Intervals). KEY: QDRS-Pt = Quick Dementia Rating System-Patient Version; QDRS-Inf = Quick Dementia Rating System-Informant Version; CDR-SB = Clinical Dementia Rating Sum of Boxes. Both the patient QDRS and the CDR-SB demonstrate a range of scores within each global CDR stage, reflecting the range of symptoms self-reported by the patient (QDRS) or determined by the clinician (CDR-SB). To aid in interpreting QDRS scores, we performed ROC analyses to derive cut-off scores that can assist clinicians and researchers. For discriminating CDR 0 normal controls (with and without subjective complaints) from CDR 0.5 very mild impairment (which includes MCI and very mild dementia), a cut-off score of 1.5 provides the best sensitivity and specificity (AUC 0.823; 95% CI 0.74–0.90, p < .001) and is identical to the cut-off for the informant QDRS [14]. As the patient QDRS may be used in clinical practices and research projects to screen for cognitive impairment, we repeated the ROC analyses discriminating CDR 0 from any non-0 CDR stage.
A cut-off of 1.5 again provides the best combination of sensitivity and specificity (AUC 0.888; 95% CI 0.84–0.94, p < .001), demonstrating excellent ability to discriminate normal controls from individuals with any form of cognitive impairment. We repeated these analyses using consensus diagnoses instead of CDR global scores. For discriminating healthy controls from MCI, a cut-off score of 1.5 provides the best sensitivity and specificity (AUC 0.821; 95% CI 0.73–0.89, p < .001). Discriminating healthy controls from individuals with any form of cognitive impairment had an AUC of 0.889 (95% CI 0.84–0.94, p < .001). Means (SD). KEY: QDRS-Pt = Quick Dementia Rating System-Patient Version; CDR-SB = Clinical Dementia Rating Sum of Boxes. We then examined whether combining the patient QDRS with a brief performance test, the MoCA, could improve the detection of cognitive impairment more than either alone. For discriminating CDR 0 normal controls (with and without subjective complaints) from CDR 0.5 very mild impairment (which includes MCI and very mild dementia), the QDRS provided an AUC of 0.820 (0.74–0.90) and the MoCA provided an AUC of 0.888 (0.87–0.95). Combining the patient QDRS with the MoCA provided excellent discrimination, with an AUC of 0.928 (0.89–0.97). We repeated the ROC analyses discriminating CDR 0 from any non-0 CDR stage: the QDRS provided an AUC of 0.885 (0.83–0.94) and the MoCA provided an AUC of 0.932 (0.89–0.97). Combining the patient QDRS with the MoCA again provided excellent discrimination, with an AUC of 0.962 (0.94–0.98).
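The paper does not state how the QDRS and MoCA scores were combined for the joint ROC analysis. One common approach, sketched below under that assumption with made-up feature scaling and toy data, is to fit a logistic model and use its predicted probability as the combined score.

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient logistic regression (bias plus one
    weight per feature); features assumed pre-scaled to roughly 0-1."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

def auc(scores, labels):
    """Pairwise (Mann-Whitney) AUC; ties count one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Toy, pre-scaled rows of (QDRS/30, MoCA/30); label 1 = cognitively impaired
X = [[0.02, 0.93], [0.03, 0.90],
     [0.13, 0.67], [0.20, 0.57], [0.27, 0.47], [0.33, 0.37]]
y = [0, 0, 1, 1, 1, 1]
w = fit_logistic(X, y)
combined = [predict(w, xi) for xi in X]
print(round(auc(combined, y), 2))  # cleanly separable toy data -> 1.0
```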

Known-groups validity of the patient QDRS

The performance of the QDRS questions, total QDRS, and QDRS-derived CDR and CDR-SB scores by dementia etiology is shown in the accompanying table. In general, QDRS questions perform similarly across dementia etiologies; however, several questions appear helpful for differential diagnosis following post-hoc analyses. QDRS question 3 (Decision Making) is more frequently endorsed by individuals with VaD. QDRS question 4 (Activities Outside the Home) is most frequently endorsed by DLB patients and least endorsed by FTD patients. QDRS question 5 (Function at Home and Hobbies) is least frequently endorsed by FTD patients. Questions 8 (Language and Communication) and 10 (Attention and Concentration) are more frequently endorsed by DLB and FTD patients. DLB patients are more likely to endorse problems with behavior (QDRS question 7) and mood (QDRS question 9) and to have higher total QDRS scores. Interestingly, although not reaching statistical significance, AD patients tended to report the lowest scores, suggesting that impaired insight may be a more significant issue in AD than in the other dementias. Means (SD) or %. KEY: AD = Alzheimer’s Disease; DLB = Dementia with Lewy Bodies; VaD = Vascular Dementia; FTD = Frontotemporal Degeneration; FAQ = Functional Activities Questionnaire; NPI = Neuropsychiatric Inventory; MoCA = Montreal Cognitive Assessment; CDR-SB = Clinical Dementia Rating Sum of Boxes; QDRS = Quick Dementia Rating System; QDRS-Pt = Quick Dementia Rating System-Patient Version. *Signifies post-hoc differences between dementias (p < .05). Note: Controls (n = 44), MCI (n = 88), and Undefined dementia (n = 1) are not included in this table. Bold signifies differences after correction for multiple comparisons (corrected p < .005).

Discussion

The patient version of the QDRS is a brief dementia detection tool that validly and reliably differentiates individuals with normal cognition from those with MCI and dementia. The patient version of the QDRS strongly correlated with Gold Standard assessments of cognition (e.g., CDR, neuropsychological testing), function (i.e., FAQ), and behavior (i.e., NPI) and showed strong psychometric properties and excellent data quality. The patient QDRS ratings had excellent agreement with independently obtained informant versions of the QDRS and with the CDR global score and its sum of boxes. Discrimination of healthy controls from CDR 0.5 and from CDR >0 by the patient QDRS had cut-off scores identical to those of the informant version. Finally, combining the patient QDRS with a brief performance measure such as the MoCA further increased the accuracy of dementia detection in a valid and reliable fashion. Evaluation of dementia typically consists of objective testing of the patient and, when available, questioning of a reliable informant [2]. While informant interviews provide a more reliable way to determine cognitive and functional change in dementia patients, informants are not always present. Brief office visits such as annual check-ups, often without an informant present, may not uncover very mild symptoms of dementia. In a recent report, the Alzheimer's Association conducted surveys with 1000 primary health care providers and 1954 older adults regarding expectations, benefits, and practices around dementia screening [2, 3]. While 94% of patients saw their providers in the last year, only 47% discussed memory and only 28% received a memory assessment. This contrasts with 95% of older adults wanting to know about their memory and 51% reporting changes. Although 50% of providers reported they assess cognition as part of their evaluation, only 40% were familiar with the toolkits available to them.
Additionally, many patients refuse cognitive testing for a variety of reasons, particularly if it is “sprung” upon them in the midst of a routine office visit. A PRO approach may provide a means of capturing cognitive impairment in an unaccompanied patient presenting to the office [43-45] and could provide an “opening” for providers to discuss the issue of memory loss. PROs can create efficient and cost-effective clinical encounters with providers while also empowering patients and family caregivers to engage in early detection of ADRD [46-48]. Completion of the patient QDRS prior to the physician visit can offer several advantages above and beyond what is captured through medical records review and simple questions, including (a) capture of non-memory symptoms (e.g., orientation, problem-solving, daily functioning) that are both disturbing to patients and families and more likely to be accepted as a change requiring medical attention; (b) information about the patient's perception of their real-world functioning; (c) information at baseline visits where prior testing may not be available; (d) capture of progression over time; and (e) staging of ADRD in a brief, valid, and time- and cost-effective manner [14, 49]. This is an important point, as in the era of COVID-19 nearly all evaluations are done remotely. We recently completed a study of 288 individuals with community-based assessments by non-physician clinicians and remote follow-up calls [3] and found a willingness to have memory evaluated, complete the measures, and complete the phone follow-up, without evidence of harm. To date, self-rating scales for dementia have not gained common use, perhaps due to the general perception that dementia patients lack insight and deny cognitive decline, even in mild forms of dementia [50, 51].
However, awareness of deficits varies greatly between individuals, and patients can offer reliable accounts of cognitive change, whether or not they perceive the change as a problem [2, 50]. The AD8 has demonstrated validity as a PRO [46], as has the Healthy Aging Brain Care Monitor [47]. Large multisite studies such as the Alzheimer's Disease Cooperative Study and the Alzheimer's Disease Neuroimaging Initiative ask participants to provide self-ratings of cognitive complaints using the Cognitive Function Instrument [52] or the Cognitive Change Index [53]. The Self-Administered Gerocognitive Examination [54] has been used to identify individuals with MCI and early-stage ADRD by testing orientation, language, cognition, visuospatial-construction, executive, and memory domains without any staff supervision. Additionally, patients with cognitive impairment are asked to self-rate a number of physical, psychological, and social symptoms including mood [55] and quality of life [56]. In this study, even patients with severe dementia (CDR 3) were able to complete the QDRS with little missing data. There are several limitations to this study. The patient QDRS was validated in the context of an academic research setting where the prevalence of MCI and dementia is high, and the patients tend to be highly educated and predominantly White. Validation of the patient QDRS in other settings where dementia prevalence is lower (i.e., community samples) and the sample is more diverse is needed. There is the potential for recall bias, as patients may choose to tell the physician what they think the physician wants to hear, or may not recall accurately. We tested for this by comparing the patient QDRS to an independently collected caregiver QDRS and to the physician-directed Gold Standard evaluation. As this is a cross-sectional study, the longitudinal properties of the patient QDRS still need to be elucidated. The majority of cases consisted of MCI, AD, and DLB.
There were fewer VaD cases and only progressive aphasic forms of FTD; other dementia types need to be studied. The patient QDRS was completed prior to the in-person evaluation. While instructions were provided to complete the QDRS independently, we cannot be sure that patients did not ask others for help answering the questions. Finally, AD patients endorsed the fewest self-reported symptoms. Although the QDRS scores for AD patients performed well compared with neuropsychological testing and the CDR global score and its sum of boxes, denial or anosognosia [50, 51] in AD patients may limit reliability in the more advanced stages of disease. Strengths of this study include the use of a comprehensive evaluation that is part of standard of care, with measurement of multiple patient and caregiver constructs using Gold Standard instruments. Another advantage of the QDRS is its brevity: its 10 questions fit on one piece of paper or a single screen, maximizing its clinical and research utility, and can be answered by patients even in the severe stages of dementia. Although not designed as a differential diagnostic tool, the QDRS as a PRO may assist clinicians in diagnosis during the initial visit, as patients with different dementia etiologies self-reported symptoms differently. The patient QDRS may serve as an effective clinical tool for dementia screening, for case ascertainment in epidemiological studies, in busy primary care settings, and in instances where an informant is not available. Combining the QDRS with a brief performance measure may provide excellent power to detect cognitive impairment. The patient QDRS performed reliably and validly in comparison to the standardized scales of a comprehensive cognitive neurology evaluation, but in a brief fashion that could facilitate its use in clinical care and research.
1 Sep 2020

PONE-D-20-22622

Using a Patient-Reported Outcome to Improve Detection of Cognitive Impairment and Dementia: The Patient Version of the Quick Dementia Rating System (QDRS)

PLOS ONE

Dear Dr. Galvin,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Oct 16 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future.
For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Simone Reppermund, PhD
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following in the Competing Interests section: "I have read the journal's policy and the authors of this manuscript have the following competing interests: JEG is the creator of the QDRS and holds the copyright with NYU School of Medicine. He receives royalties from licensing agreements. MIT and SC have no competing interests."

Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared.

Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency.
PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

4. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The main critique of this paper is that the sample comprised 85 individuals with Dementia with Lewy Bodies, 41 with Alzheimer's dementia, 12 with Vascular Dementia and 6 with FTD. Therefore it is difficult to generalize to the broader population, where Alzheimer's disease is by far the most common form of dementia.
For this reason I find it hard to see how the evidence supports the conclusion that "The patient QDRS provides a rapid method to screen patients for MCI and ADRD in clinical practice". The fact that the sample isn't representative is made even more important by the fact that those with Alzheimer's were deemed to have poorer insight than those with DLB, and a more representative sample may therefore have yielded completely different results. Numbers with moderate dementia (n=34) and severe dementia (n=17) were low. The total number of patients with dementia outlined above (n=144) does not correspond to the number deemed to have dementia on the CDR (n=112). This may be because the functioning of some participants did not meet the threshold for dementia, but maybe needs clarification. While participants were posted out the QDRS patient and informant versions and advised to do them separately, they may not have done so in practice and this is a limitation. For example, it would seem to be difficult for a patient to rate the changes in their own personality without asking others. Rather than test the accuracy, sensitivity, and specificity of the patient version of the QDRS against the CDR, would it not have been preferable to compare them against the gold standard final diagnosis of the patient (i.e. MCI or dementia) that was arrived at after all the neuropsychological and functional testing was done and a clinical diagnosis was made?

Reviewer #2: In general, I think this is well done. I have two major concerns: the design of the study is unclear (see full comments) and tables need to include labels that identify the statistics shown. Please see the attachment for detailed comments.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Submitted filename: PLoSOne_20200828.docx

7 Sep 2020

RESPONSE TO REVIEWERS

PONE-D-20-22622

Reviewer #1:

1. The main critique of this paper is that the sample comprised 85 individuals with Dementia with Lewy Bodies, 41 with Alzheimer's dementia, 12 with Vascular Dementia and 6 with FTD. Therefore, it is difficult to generalize to the broader population, where Alzheimer's disease is by far the most common form of dementia. For this reason, I find it hard to see how the evidence supports the conclusion that "The patient QDRS provides a rapid method to screen patients for MCI and ADRD in clinical practice".
The fact that the sample isn't representative is made even more important by the fact that those with Alzheimer's were deemed to have poorer insight than those with DLB, and a more representative sample may therefore have yielded completely different results.

RESPONSE: We considered this point carefully and respectfully disagree with the reviewer. Our research projects and clinic focus on healthy aging, MCI, and early-stage ADRD, so we see a variety of cases, and we demonstrate that the QDRS can provide valid and descriptive data for each form of ADRD and may be particularly useful to discriminate between controls, MCI, and very mild ADRD (due to any etiology). This is a diagnostic challenge for clinicians and researchers alike, and it is the goal of screening. There is much less of a diagnostic challenge in determining moderate to severe stages of ADRD. The fact that we have included a diversity of dementia etiologies as well as a sample of MCI cases enhances, rather than diminishes, the usefulness of the QDRS, particularly in community practice where diagnostic expertise in various ADRD etiologies may not be as precise. As for the poorer insight in AD, there are a number of papers that describe the anosognosia seen in AD that is not as pronounced in the non-AD dementias (for example, References #50 and 51).

2. Numbers with moderate dementia (n=34) and severe dementia (n=17) were low.

RESPONSE: The answer to this is two-fold. First, our research projects and clinic focus on healthy aging, MCI, and early-stage ADRD, so fewer moderate to severe patients are seen by us. Second, individuals at the moderate and especially the severe stage of ADRD find it hard to complete many parts of the assessment: they were more likely, for example, to have trouble with Trailmaking B and self-rating scales. This is a limitation of any PRO.
We added this information to the revised manuscript (lines 203-205): "All CDR 0 patients were able to complete the patient QDRS, while 0.8% CDR 0.5, 6.8% CDR 1, 2.9% CDR 2, and 23.3% CDR 3 were unable to provide QDRS ratings".

3. The total number of patients with dementia outlined above (n=144) does not correspond to the number deemed to have dementia on the CDR (n=112). This may be because the functioning of some participants did not meet the threshold for dementia, but maybe needs clarification.

RESPONSE: We apologize for this math challenge. In the revised manuscript, we went back to clarify the number of cases between the CDR and diagnoses to make sure that the columns added up. One additional case included in the analysis was adjudicated so that this individual's CDR and diagnosis are counted in the columns, for a sample total of 270 patient-caregiver dyads with 261 patients able to complete the QDRS. We re-checked all the numbers several times to make sure they all add up. This is described in the revised manuscript in lines 200-206. Again, we apologize for the addition error.

4. While participants were posted out the QDRS patient and informant versions and advised to do them separately, they may not have done so in practice and this is a limitation. For example, it would seem to be difficult for a patient to rate the changes in their own personality without asking others.

RESPONSE: We thank the reviewer for mentioning this. We give rather detailed instructions, but it is true that it is impossible to be assured that patients did not ask for help; this would be true for any patient survey or measure not collected in a face-to-face fashion. We have done a number of studies doing remote evaluations and assessments and comparing those to in-person Gold Standard instruments. We are confident that the instruments were completed independently, as the scores have good inter-rater reliabilities but are not identical.
However, we cannot ignore the excellent point brought up by the reviewer and have included this as a limitation (lines 377-379).

5. Rather than test the accuracy, sensitivity, and specificity of the patient version of the QDRS against the CDR, would it not have been preferable to compare them against the gold standard final diagnosis of the patient (i.e. MCI or dementia) that was arrived at after all the neuropsychological and functional testing was done and a clinical diagnosis was made?

RESPONSE: We appreciate this great idea. We chose the CDR since it is a Gold Standard, but repeated the analyses using consensus diagnoses: healthy controls vs. MCI, and then healthy controls vs. MCI/ADRD. This is added in the text (lines 278-281) and in Table 6.

Reviewer #2:

Abstract:

1. Please define the statistic "Ps."

RESPONSE: We apologize that this was unclear. We are referring to p-values. This is clarified in the revised manuscript.

Introduction:

1. The authors provide a paragraph describing the limitations of performance measures and brief tests. They then state that PROs can overcome those limitations but do not explain how, for example, PROs could improve upon the potential for bias by education, language, and culture in performance measures.

RESPONSE: This is an excellent point and we appreciate the reviewer bringing this up. Performance tests give a snapshot of the cognitive ability of the patient but have well-described inherent biases associated with age, education, race, language, and culture. PROs, on the other hand, are intra-individual assessments based on what the patient (or sometimes the caregiver) is observing. Because most PROs ask the patient to note symptoms over a defined time period, or to compare themselves to a prior time, there are little to no age, education, race, language, or culture biases. There is the potential for recall bias, as patients may choose to tell the physician what they think the physician wants to hear, or may not recall.
In this paper, we tested for this by comparing the patient QDRS to an independently collected caregiver QDRS and the physician-directed gold standard evaluation. We added this information to the Introduction (lines 93-100) and Discussion (lines 334-345) sections of the revised manuscript.

2. Similarly, is there literature suggesting that patients at risk for cognitive impairment would be more willing to take a self-administered assessment for dementia at home vs. in office?

RESPONSE: We are not aware of any specific studies that compare the willingness of self-assessment between home and the office. In the era of COVID-19, nearly all evaluations are being done remotely and they appear to work as well, but we think this is an area that deserves further study. We recently completed a study of 288 individuals (a different cohort than the current study) with community-based assessments by non-physician clinicians with remote follow-up calls (Galvin et al PLoS One 2020;15:e0235534, Reference #3) and found a willingness to have their memory evaluated, complete the measures, and complete the phone follow-up, without evidence of harm. We added a point about this in the Discussion section (lines 351-355).

Methods:

1. Please provide the specific name of your center.

RESPONSE: We provide the name of our Center in our byline (Comprehensive Center for Brain Health, Department of Neurology, University of Miami Miller School of Medicine). However, the principal investigators and entire research team moved institutions and took the Center name, projects, and data with us, and obtained IRB approval to continue the projects. We would prefer not to include the name in the body of the text.

2. I'm confused about the study design. The authors state that the study population consisted of 269 patient-caregiver dyads to their center but that welcome packets with the QDRS were mailed prior to the office visit. How can the test be administered before determination of eligibility?
RESPONSE: We apologize if this was unclear. We have IRB approval for retrospective analyses of patient charts and prospective analyses of research charts. The data collection platforms are purposely designed to be identical to allow combined analyses (now included in lines 111-112). In both cases, the QDRS was collected prior to the visit at which the rest of the data were collected. In the case of patients, there are no eligibility criteria: the clinic only sees cognitive patients; no general neurology cases are seen. In the case of the research projects, the design mimics as much as possible a "real-world" sample, and we have few inclusion/exclusion criteria other than active cancer, Axis I psychiatric conditions, or metal-implanted devices that would preclude neuroimaging. These criteria were peer reviewed as part of our NIH grants (R01 AG040211-A1 and R01 NS101483-01A1).

3. If welcome packets were mailed to additional patients, how many chose to respond? How did these patients differ from those included in the study in terms of demographics, diagnosis, and disease severity?

RESPONSE: All patients and research participants complete the same packets. There is no non-response rate. Patients are not seen without completion of the data packets. This assures that all patients have the same data. However, there were 9 patients who were evaluated who could not complete the QDRS due to cognitive impairment. This is in lines 203-206 of the revised manuscript.

4. I suggest calling the "CDR" the "CDR Global Score" to describe the specific summary measure of the CDR.

RESPONSE: We have made the suggested changes throughout the manuscript.

5. Are all of the tests and assessments described in the section under the title "Caregiver ratings of patient cognition and behavior" provided by the caregiver and not the patient?

RESPONSE: The reviewer is correct. This was clarified in the revised manuscript (lines 147-148).

Results:

1.
In Table 1, consider providing, either in the table or the text, the possible ranges for each of the assessments.

RESPONSE: We inserted the possible range of scores for each of the scales in the table as requested.

2. Given the small numbers in the tables, I don't think the authors should provide a global p-value for race. Perhaps one for CDR 0 vs. CDR>0 would be more informative?

RESPONSE: We agree with the reviewer and have removed this from the tables. Instead we describe the race and ethnicity characteristics in the text (lines 193-196). The sample was 96.5% White, 3.1% African American, and 0.4% Asian, with 10.4% reporting Hispanic ethnicity. The cognitively impaired group (CDR>0) had a higher proportion of White patients than the CDR 0 healthy controls (94.9% vs 81.0%), while Black patients made up a higher proportion of the healthy controls (11.9% vs 1.4%, χ2=14.1, p=.001).

3. Is "Response Counts" showing the levels of the CDR Global Score? Please specify.

RESPONSE: We apologize that this was unclear. The "Response Counts" with levels 0-3 in Table 3 refer to the QDRS score options. This is now clarified both in the text (lines 228-229) and in Table 3.

4. In the Reliability and Scale Score Feature of the Patient QDRS, the authors mention "random error." I don't think that's an accurate description of the goal of analyzing internal consistency.

RESPONSE: While aspects of testing random error are part of internal consistency, we agree with the reviewer that this can be unclear. We therefore modified the description of internal consistency in the Results section (lines 239-240) to now read "The internal consistency of the patient QDRS, a measure based on the correlation between the different QDRS questions, was assessed with Cronbach alpha (Table 4)."

5. Tables 5 and 6 need more labels to describe coefficients, effects, 95% CIs, etc.
RESPONSE: Table 5 now includes the label "Intraclass correlation coefficient (95% Confidence Intervals)" and Table 6 now includes the label "Means (SD)".

6. I'm not following why you would combine the QDRS with the MoCA if the goal is to have an instrument that can be self-administered. I would recommend cutting this section unless the authors can provide substantial motivation.

RESPONSE: This is an interesting point. A long-standing research and clinical interest of ours is to explore ways to improve dementia detection in the community setting. Along this line, we have developed screening tests, conducted studies of screening paradigms, and surveyed the general population and health care providers. Most of these studies have been published. With this as background, we have found that most providers rely heavily on cognitive tests (such as the MoCA) to detect impairment; however, without patient or family complaints, the evidence suggests that primary care providers (and this is probably true for specialists as well) often do not perform tests of cognition as part of their evaluation (see the Special Report from the Alzheimer's Association, and Galvin et al PLoS One 2020;15:e0235534, Reference #3). The use of a PRO that could be completed prior to the physician seeing the patient (e.g., at home, in the waiting room) could prompt the provider to ask more questions and perform a brief cognitive measure. We thus combined the two to demonstrate that the QDRS+MoCA would be an effective and time-efficient way to evaluate older adults. This is expanded upon in the Discussion (lines 334-345).

7. I think the ROC figures can be cut; providing the AUC is sufficient.

RESPONSE: We appreciate the reviewer's comments. Whenever possible, we believe that the ROC figures add to the understanding of the data, but we will abide by the reviewer's suggestion to remove the figure.

8. In Table 7, specify what statistics are in the cells. Mean (sd)?

RESPONSE: We apologize if this was unclear.
Table 7 shows means with standard deviations for continuous variables and percentages for categorical variables. This is clarified in the revised manuscript.

9. Are the p-values in Table 7 adjusted for multiple comparisons? Why formally test instead of providing CIs?

RESPONSE: We had not intended to control for multiple comparisons, as the goal was to show the pattern of how patients with different dementia etiologies respond to the different QDRS questions and the domains they cover. In the revised manuscript, we used the Bonferroni correction for multiple comparisons. This is reflected in the Methods section (lines 187-188) and in Tab

Submitted filename: PLoSOne_20200828_Response to Reviewers.docx

28 Sep 2020

Using a Patient-Reported Outcome to Improve Detection of Cognitive Impairment and Dementia: The Patient Version of the Quick Dementia Rating System (QDRS)

PONE-D-20-22622R1

Dear Dr. Galvin,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact.
If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Simone Reppermund, PhD
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: One of my main criticisms of this paper was the fact that the sample of patients with MCI and dementia was not representative of the general population, where Alzheimer's disease rather than DLB is the most common form of dementia. The authors have addressed this issue and given a counter-argument, and I feel that as long as it is clear within the paper who the sample were, then readers can make their own judgment on this issue. A sample attending a clinic such as this is unlikely to be representative of the general population in any case, and this is acknowledged by the authors in their limitations section. I believe that my other comments in relation to the "gold standard" criteria for MCI and dementia have been adequately addressed.

Reviewer #2: Thank you for your thoughtful responses and changes to the manuscript. I have no follow-up questions.

**********

7.
PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

6 Oct 2020

PONE-D-20-22622R1

Using a Patient-Reported Outcome to Improve Detection of Cognitive Impairment and Dementia: The Patient Version of the Quick Dementia Rating System (QDRS)

Dear Dr. Galvin:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Simone Reppermund
Academic Editor
PLOS ONE
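The internal-consistency result discussed in the review exchange above (Cronbach alpha = 0.923) compares the variance of the summed scale score with the variances of its individual items. A minimal Python sketch of the computation, using hypothetical item ratings rather than the study data:

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per scale item, all over the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    n = len(items[0])
    # Total score for each respondent across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Hypothetical 0-3 ratings on three QDRS-style items for six respondents;
# illustrative only, not the study data.
items = [
    [0, 1, 2, 3, 2, 1],
    [0, 1, 2, 3, 3, 1],
    [0, 0, 2, 3, 2, 2],
]
alpha = cronbach_alpha(items)
```

When the items move together across respondents, the variance of the totals dominates the summed item variances and alpha approaches 1, which is the pattern reported for the patient QDRS.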
Table 1

Sample characteristics.

| Variable | CDR 0 | CDR 0.5 | CDR 1 | CDR 2 | CDR 3 | p-value |
|---|---|---|---|---|---|---|
| Age, y | 65.3 (9.9) | 74.2 (8.6) | 76.7 (8.5) | 80.5 (6.8) | 80.8 (6.8) | < .001 |
| Education, y | 15.9 (2.1) | 15.8 (8.6) | 14.8 (2.6) | 15.0 (2.7) | 13.9 (2.9) | .009 |
| Sex, % Female | 75.0 | 46.2 | 45.2 | 47.1 | 42.1 | .02 |
| Charlson (Range: 0–37) | 1.1 (1.5) | 2.6 (1.8) | 2.5 (1.4) | 2.8 (1.3) | 3.2 (1.9) | < .001 |
| FAQ (Range: 0–30) | 0.1 (0.2) | 3.5 (5.2) | 12.7 (7.4) | 20.4 (6.6) | 26.3 (8.1) | < .001 |
| NPI (Range: 0–36) | 1.5 (1.2) | 4.6 (4.3) | 7.4 (5.3) | 9.3 (5.8) | 9.6 (7.2) | < .001 |
| Hachinski (Range: 0–12) | 0.4 (0.5) | 1.1 (1.5) | 0.9 (1.3) | 1.4 (1.6) | 1.3 (1.7) | .02 |
| Fried Frailty (Range: 0–5) | 0.9 (0.9) | 2.3 (1.3) | 2.9 (1.2) | 3.6 (1.0) | 3.8 (0.7) | < .001 |
| HADS-A (Range: 0–21) | 4.9 (3.6) | 6.1 (3.8) | 6.2 (3.4) | 6.2 (4.0) | 6.1 (3.1) | .40 |
| HADS-D (Range: 0–21) | 4.3 (3.4) | 5.9 (3.7) | 7.0 (3.8) | 7.3 (4.0) | 5.8 (3.7) | .003 |
| MoCA (Range: 0–30) | 26.7 (2.4) | 21.5 (3.6) | 15.9 (4.6) | 10.6 (5.4) | 6.1 (3.7) | < .001 |
| GDS (Range: 1–7) | 1.6 (0.7) | 3.2 (0.6) | 4.2 (0.6) | 5.6 (0.5) | 6.1 (0.5) | < .001 |
| CDR-SB (Range: 0–18) | 0.1 (0.2) | 1.9 (1.2) | 5.4 (1.5) | 10.8 (2.1) | 16.3 (2.0) | < .001 |
| QDRS-Inf (Range: 0–30) | 0.7 (1.1) | 3.3 (3.1) | 7.5 (3.6) | 12.2 (4.9) | 18.5 (6.4) | < .001 |
| QDRS-Pt (Range: 0–30) | 0.6 (1.2) | 2.8 (2.8) | 6.1 (4.4) | 9.3 (5.1) | 13.5 (6.4) | < .001 |

Mean (SD).

KEY: FAQ = Functional Activities Questionnaire; NPI = Neuropsychiatric Inventory; HADS-A = Hospital Anxiety and Depression Scale-Anxiety Subscale; HADS-D = Hospital Anxiety and Depression Scale-Depression Subscale; MoCA = Montreal Cognitive Assessment; GDS = Global Deterioration Scale; CDR = Clinical Dementia Rating; CDR-SB = Clinical Dementia Rating Sum of Boxes; QDRS-Inf = Quick Dementia Rating System-Informant Version; QDRS-Pt = Quick Dementia Rating System-Patient Version.

Table 2

Concurrent validity with patient QDRS.

| Variable | r | p-value | Covariance |
|---|---|---|---|
| QDRS-Inf | .776 | < .001 | 20.4 |
| CDR | .671 | < .001 | 2.6 |
| CDR-SB | .730 | < .001 | 16.1 |
| GDS | .658 | < .001 | 4.3 |
| FAQ | .726 | < .001 | 33.5 |
| NPI | .415 | < .001 | 10.7 |
| Charlson | .203 | .001 | 1.8 |
| Hachinski | .203 | .001 | 1.4 |
| Fried Frailty | .491 | < .001 | 3.4 |
| UPDRS | .457 | < .001 | 3.4 |
| AD8 (patient version) | .504 | < .001 | 5.5 |
| MoCA | -.562 | < .001 | -18.2 |
| Number Span Forward | -.274 | < .001 | -1.8 |
| Number Span Backward | -.328 | < .001 | -2.5 |
| HVLT–immediate | -.506 | < .001 | -15.3 |
| HVLT–delay | -.406 | < .001 | -7.5 |
| HVLT–recognition | -.475 | < .001 | -6.5 |
| Trails A | .431 | < .001 | 76.2 |
| Trails B | .358 | < .001 | 61.2 |
| Number-Symbol Coding | -.487 | < .001 | -28.9 |
| Animal Naming | -.516 | < .001 | -15.6 |
| Letter Fluency | -.384 | < .001 | -0.9 |
| MINT | -.314 | < .001 | -3.8 |
| HADS-A | .235 | < .001 | 3.9 |
| HADS-D | .406 | < .001 | 7.1 |
| MFQ | .495 | < .001 | 3.4 |
| Epworth | .243 | < .001 | 6.5 |
| Alertness | -.420 | < .001 | -4.3 |
| Caregiver Burden | .377 | < .001 | 17.8 |

KEY: FAQ = Functional Activities Questionnaire; NPI = Neuropsychiatric Inventory; HADS-A = Hospital Anxiety and Depression Scale-Anxiety Subscale; HADS-D = Hospital Anxiety and Depression Scale-Depression Subscale; MoCA = Montreal Cognitive Assessment; GDS = Global Deterioration Scale; CDR = Clinical Dementia Rating; CDR-SB = Clinical Dementia Rating Sum of Boxes; QDRS-Inf = Quick Dementia Rating System-Informant Version; UPDRS = Unified Parkinson’s Disease Rating Scale; HVLT = Hopkins Verbal Learning Test; MINT = Multilingual Naming Test; MFQ = Mayo Fluctuations Questionnaire.
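The r and covariance columns in Table 2 are related quantities: Pearson r is the covariance rescaled by the product of the two standard deviations. A minimal sketch of that relationship, using invented paired scores rather than study data:

```python
# Illustrative sketch (invented numbers, not study data): how Pearson r
# and sample covariance, the two statistics reported per row of Table 2,
# relate to each other.

def covariance(xs, ys):
    # Sample covariance with the n - 1 denominator
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def pearson_r(xs, ys):
    # r = cov(x, y) / (sd(x) * sd(y)); sd is the square root of cov(x, x)
    sx = covariance(xs, xs) ** 0.5
    sy = covariance(ys, ys) ** 0.5
    return covariance(xs, ys) / (sx * sy)

# Hypothetical paired totals (e.g., QDRS-Pt vs. another measure)
qdrs_pt = [0, 2, 5, 9, 13]
other = [1, 3, 6, 8, 14]
r = pearson_r(qdrs_pt, other)
```

A negative covariance (as for MoCA and the other cognitive tests in Table 2) simply means higher QDRS-Pt scores accompany lower test scores.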

Table 3

Item distributions, missing rates, factor loading, item-total, and inter-item correlations.

Response-count columns give the percentage of patients at each QDRS score option (0, 0.5, 1, 2, or 3) or with the item missing.

| Item | Mean | SD | 0 | 0.5 | 1 | 2 | 3 | Miss | Factor Loading | Item-Total Pearson r |
|---|---|---|---|---|---|---|---|---|---|---|
| Memory and Recall (M/R) | 0.6 | 0.7 | 40.2 | 32.7 | 14.3 | 10.2 | 2.6 | 0.0 | .750 | .770 |
| Orientation (O) | 0.4 | 0.6 | 45.9 | 33.8 | 14.7 | 4.1 | 1.5 | 0.0 | .827 | .823 |
| Decision Making and Problem Solving (DM) | 0.6 | 0.7 | 41.5 | 26.8 | 23.8 | 4.2 | 3.8 | 0.3 | .823 | .828 |
| Activities Outside the Home (AOH) | 0.5 | 0.7 | 49.1 | 24.2 | 13.2 | 13.2 | 0.4 | 0.3 | .835 | .844 |
| Function at Home and Hobby Activities (FHH) | 0.5 | 0.8 | 51.1 | 22.7 | 15.5 | 4.9 | 5.7 | 0.7 | .870 | .870 |
| Toileting and Personal Hygiene (TPH) | 0.3 | 0.7 | 79.8 | 7.1 | 4.1 | 6.7 | 2.2 | 0.0 | .741 | .754 |
| Behavior and Personality Changes (B/P) | 0.3 | 0.6 | 64.2 | 17.0 | 11.7 | 6.4 | 0.8 | 0.3 | .757 | .762 |
| Language and Communication (L/C) | 0.4 | 0.5 | 49.4 | 34.0 | 13.2 | 2.6 | 0.8 | 0.3 | .744 | .724 |
| Mood (M) | 0.5 | 0.6 | 42.4 | 39.4 | 9.1 | 8.0 | 1.1 | 0.7 | .596 | .622 |
| Attention and Concentration (A/C) | 0.4 | 0.5 | 50.6 | 28.3 | 17.0 | 3.8 | 0.4 | 0.3 | .780 | .761 |
Inter-Item Correlation Matrix

| | M/R | O | DM | AOH | FHH | TPH | B/P | L/C | M | A/C |
|---|---|---|---|---|---|---|---|---|---|---|
| Memory and Recall (M/R) | 1 | | | | | | | | | |
| Orientation (O) | .612 | 1 | | | | | | | | |
| Decision Making and Problem Solving (DM) | .656 | .770 | 1 | | | | | | | |
| Activities Outside the Home (AOH) | .573 | .676 | .706 | 1 | | | | | | |
| Function at Home and Hobby Activities (FHH) | .629 | .703 | .703 | .775 | 1 | | | | | |
| Toileting and Personal Hygiene (TPH) | .479 | .588 | .556 | .583 | .718 | 1 | | | | |
| Behavior and Personality Changes (B/P) | .445 | .527 | .519 | .531 | .548 | .543 | 1 | | | |
| Language and Communication (L/C) | .531 | .542 | .515 | .492 | .556 | .441 | .599 | 1 | | |
| Mood (M) | .392 | .349 | .312 | .460 | .449 | .297 | .578 | .447 | 1 | |
| Attention and Concentration (A/C) | .478 | .555 | .545 | .607 | .587 | .493 | .626 | .669 | .494 | 1 |
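The item-total correlations in Table 3 pair each QDRS item's scores with the scale total across respondents. A minimal sketch with invented responses (the three-item example below is hypothetical, not study data; a "corrected" variant that drops the item from the total is also shown, since some validation studies report that form):

```python
# Illustrative sketch (invented responses, not study data): item-total
# correlation of one item against the scale total. QDRS items are scored
# 0, 0.5, 1, 2, or 3.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def item_total_r(rows, item, corrected=False):
    # rows: one list of item scores per respondent; item: column index.
    # If corrected, exclude the item itself from the total to avoid
    # inflating the correlation.
    xs = [r[item] for r in rows]
    totals = [sum(r) - (r[item] if corrected else 0) for r in rows]
    return pearson_r(xs, totals)

# Four hypothetical respondents answering a three-item scale
rows = [[0, 0, 0], [0.5, 1, 0], [2, 2, 1], [3, 3, 2]]
```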
Table 4

QDRS scale score features: Internal-consistency reliability, score distributions, and inter-scale correlations.

The last three columns give inter-scale Spearman correlations.

| Domain | Items | Cronbach alpha (95% CI) | Range | Mean | Median | SD | % Floor | % Ceiling | QDRS-Pt | QDRS-Inf | CDR-SB |
|---|---|---|---|---|---|---|---|---|---|---|---|
| QDRS-Pt | 10 | .923 (.91–.94) | 0–30 | 4.5 | 3.0 | 4.9 | 18.4 | 0.0 | 1 | | |
| QDRS-Inf | 10 | .949 (.94–.96) | 0–30 | 6.4 | 4.5 | 6.3 | 13.7 | 0.0 | .770 | 1 | |
| CDR-SB | 6 | .965 (.96–.97) | 0–18 | 4.5 | 3.0 | 4.7 | 11.7 | 2.8 | .733 | .850 | 1 |

Note: % Floor is the percentage who reported the lowest (best) possible score.

% Ceiling is the percentage who reported the highest (worst) possible score.

KEY: QDRS-Pt = Quick Dementia Rating System-Patient Version; QDRS-Inf = Quick Dementia Rating System-Informant Version; CDR-SB = Clinical Dementia Rating Sum of Boxes.
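The Cronbach alpha values in Table 4 summarize internal consistency: alpha compares the sum of the individual item variances with the variance of the total score. A minimal sketch of the standard formula, using invented item scores rather than study data:

```python
# Illustrative sketch (invented scores, not study data): Cronbach's alpha
# for a k-item scale, alpha = k/(k-1) * (1 - sum(item variances) / var(total)).

def cronbach_alpha(rows):
    # rows: one list of item scores per respondent
    k = len(rows[0])                      # number of items
    cols = list(zip(*rows))               # per-item score columns

    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(var(c) for c in cols)
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)
```

When items move together perfectly, alpha reaches 1; uncorrelated items pull it toward 0, which is why the .92-.97 values in Table 4 indicate strong internal consistency.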

Table 5

Construct reliability (by ICC) between QDRS versions and CDR.

| QDRS Item | Pt QDRS vs. Inf QDRS | Inf QDRS vs. CDR | Pt QDRS vs. CDR |
|---|---|---|---|
| Memory | .768 (.704–.818) | .780 (.720–.827) | .689 (.602–.756) |
| Orientation | .793 (.736–.838) | .807 (.755–.848) | .722 (.645–.782) |
| Decision making | .763 (.697–.814) | .794 (.739–.838) | .769 (.705–.819) |
| Activities outside home | .803 (.749–.846) | .887 (.856–.911) | .805 (.751–.847) |
| Activities inside home | .792 (.735–.837) | .878 (.846–.904) | .769 (.705–.819) |
| Personal hygiene | .903 (.876–.924) | .911 (.885–.931) | .828 (.778–.866) |
| Behavior | .706 (.625–.770) | – | – |
| Language | .808 (.755–.850) | – | – |
| Mood | .703 (.620–.768) | – | – |
| Attention | .763 (.697–.815) | – | – |
| Total QDRS | .871 (.835–.898) | – | – |
| QDRS-derived CDR-SB | .876 (.842–.902) | .927 (.907–.942) | .845 (.803–.878) |
| QDRS-derived CDR | .764 (.691–.820) | .842 (.795–.878) | .781 (.714–.833) |

Intraclass correlation coefficient (95% Confidence Intervals).

KEY: QDRS-Pt = Quick Dementia Rating System-Patient Version; QDRS-Inf = Quick Dementia Rating System-Informant Version; CDR-SB = Clinical Dementia Rating Sum of Boxes.
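Table 5 reports intraclass correlation coefficients (ICCs) for agreement between paired ratings. As one hedged illustration, a one-way random-effects ICC(1,1) can be computed from between-subject and within-subject mean squares; the published analysis may have used a different ICC model, and the paired ratings below are invented:

```python
# Illustrative sketch (invented ratings, not study data): one-way
# random-effects ICC(1,1) for two ratings per subject, e.g. patient vs.
# informant QDRS. ICC = (MSB - MSW) / (MSB + (k - 1) * MSW).

def icc_oneway(pairs):
    # pairs: one (rating_1, rating_2) tuple per subject
    n, k = len(pairs), 2
    grand = sum(sum(p) for p in pairs) / (n * k)
    subj_means = [sum(p) / k for p in pairs]
    # Between-subject mean square
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    # Within-subject mean square
    msw = sum((x - m) ** 2 for p, m in zip(pairs, subj_means) for x in p) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect rater agreement gives an ICC of 1; the .70-.90 values in Table 5 therefore indicate substantial patient-informant and patient-CDR agreement.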

Table 6

Discriminant properties of the patient QDRS.

| CDR Global Score | QDRS-Pt Total | Range | CDR-SB | Range |
|---|---|---|---|---|
| 0 | 0.6 (1.2) | 0–5 | 0.1 (0.2) | 0–1 |
| 0.5 | 2.8 (2.8) | 0–14 | 1.9 (1.2) | 1–8 |
| 1 | 6.1 (4.4) | 0–24 | 5.4 (1.5) | 2–9 |
| 2 | 9.3 (5.1) | 0–19 | 10.8 (2.1) | 7–15 |
| 3 | 13.5 (6.4) | 5–24 | 16.3 (2.0) | 12–18 |

| Comparison | Cut-off | Sensitivity | Specificity | AUC (95% CI) |
|---|---|---|---|---|
| 0 vs. 0.5 | 1.5 | .72 | .82 | .823 (.74–.90) |
| 0 vs. non-0 | 1.5 | .85 | .75 | .888 (.84–.94) |
| Controls vs. MCI | 1.5 | .88 | .57 | .821 (.73–.89) |
| Controls vs. MCI/ADRD | 1.5 | .88 | .76 | .889 (.84–.94) |

Means (SD).

KEY: QDRS-Pt = Quick Dementia Rating System-Patient Version; CDR-SB = Clinical Dementia Rating Sum of Boxes.
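The cut-off, sensitivity, specificity, and AUC statistics in Table 6 can be illustrated with a short sketch. The scores and labels below are invented, and the AUC here is the rank-based (Mann-Whitney) estimate, which may differ from the ROC software the authors used:

```python
# Illustrative sketch (invented scores, not study data): applying a
# QDRS-Pt cut-off such as the 1.5 in Table 6, where higher scores
# indicate more impairment, and estimating AUC by the rank method.
# Assumes both impaired and unimpaired cases are present.

def sens_spec(scores, impaired, cutoff=1.5):
    tp = sum(1 for s, y in zip(scores, impaired) if y and s > cutoff)
    fn = sum(1 for s, y in zip(scores, impaired) if y and s <= cutoff)
    tn = sum(1 for s, y in zip(scores, impaired) if not y and s <= cutoff)
    fp = sum(1 for s, y in zip(scores, impaired) if not y and s > cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores, impaired):
    # Probability a random impaired case scores above a random control,
    # counting ties as 0.5 (Mann-Whitney estimate of the ROC area)
    pos = [s for s, y in zip(scores, impaired) if y]
    neg = [s for s, y in zip(scores, impaired) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical QDRS-Pt totals and impairment labels
scores = [0.5, 1.0, 1.0, 3.5, 2.0, 6.0]
impaired = [False, False, False, True, True, True]
sens, spec = sens_spec(scores, impaired, cutoff=1.5)
```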

Table 7

Performance of patient QDRS across different dementia etiologies.

| Variable | AD (N = 39) | DLB (N = 68) | VaD (N = 17) | FTD (N = 6) | p-value |
|---|---|---|---|---|---|
| Patient Characteristics | | | | | |
| Age, y | 81.2 (8.4) | 77.6 (6.8) | 80.5 (5.9) | 73.8 (8.0) | .02 |
| Education, y | 14.5 (2.5) | 14.9 (2.7) | 14.4 (2.9) | 15.7 (4.0) | .69 |
| Sex, % Female | 58.3 | 31.3 | 75.0 | 33.3 | .004 |
| FAQ | 11.3 (8.4) | 17.4 (8.0) | 10.7 (12.2) | 8.7 (13.7) | .03 |
| NPI | 5.4 (3.7) | 8.9 (5.9) | 5.1 (4.3) | 7.4 (3.6) | .003 |
| MoCA | 13.5 (5.5) | 14.1 (5.7) | 13.8 (6.7) | 15.0 (5.1) | .91 |
| CDR | 1.2 (0.7) | 1.5 (0.8) | 1.3 (0.9) | 0.7 (0.3) | .05 |
| CDR-SB | 6.3 (4.2) | 8.3 (4.6) | 7.2 (5.3) | 4.2 (2.4) | .03 |
| QDRS-Informant | 7.3 (4.5) | 10.5 (6.4) | 8.4 (7.0) | 5.7 (3.7) | .02 |
| Patient QDRS Responses | | | | | |
| QDRS-Pt memory and recall | 0.9 (0.8) | 0.8 (0.7) | 1.4 (1.0) | 1.2 (0.7) | .15 |
| QDRS-Pt orientation | 0.6 (0.6) | 0.8 (0.6) | 0.8 (0.9) | 0.3 (0.3) | .23 |
| QDRS-Pt decision making and problem solving | 0.7 (0.8) | 0.9 (0.6) | 1.4 (0.3)* | 0.8 (0.3) | .04 |
| QDRS-Pt activities outside the home | 0.7 (0.6) | 1.0 (0.8)* | 0.6 (0.6) | 0.2 (0.3)* | .01 |
| QDRS-Pt function at home and hobby activities | 0.7 (0.9) | 1.0 (0.9) | 0.8 (1.0) | 0.2 (0.3)* | .13 |
| QDRS-Pt toileting and personal hygiene | 0.3 (0.6) | 0.6 (0.9) | 0.6 (1.1) | 0.0 (0.0) | .16 |
| QDRS-Pt behavior and personality changes | 0.2 (0.3) | 0.7 (0.7)* | 0.5 (0.8) | 0.3 (0.6) | .002 |
| QDRS-Pt language and communication | 0.3 (0.5) | 0.6 (0.5)* | 0.4 (0.6) | 1.3 (1.4)* | .004 |
| QDRS-Pt mood | 0.4 (0.4) | 0.7 (0.8)* | 0.2 (0.3) | 0.3 (0.3) | .01 |
| QDRS-Pt attention | 0.3 (0.3) | 0.8 (0.6)* | 0.3 (0.3) | 0.7 (1.1)* | < .001 |
| QDRS-Pt Total | 4.9 (4.4) | 8.1 (5.3)* | 7.0 (6.7) | 5.3 (4.5) | .03 |

Means (SD) or %.

KEY: AD = Alzheimer’s Disease; DLB = Dementia with Lewy Bodies; VaD = Vascular Dementia; FTD = Frontotemporal Degeneration; FAQ = Functional Activities Questionnaire; NPI = Neuropsychiatric Inventory; MoCA = Montreal Cognitive Assessment; CDR-SB = Clinical Dementia Rating Sum of Boxes; QDRS = Quick Dementia Rating System; QDRS-Pt = Quick Dementia Rating System-Patient Version.

*Signifies post-hoc differences between dementias (p < .05).

Note: Controls (n = 44), MCI (n = 88), and Undefined dementia (n = 1) are not included in this table.

Bold signifies differences after correction for multiple comparisons (corrected p < .005).

