
A systematic review of the diagnostic accuracy of automated tests for cognitive impairment.

Rabeea'h W Aslam1, Vickie Bates1, Yenal Dundar2, Juliet Hounsome1, Marty Richardson1, Ashma Krishan1, Rumona Dickson1, Angela Boland1, Joanne Fisher1, Louise Robinson3, Sudip Sikdar2.   

Abstract

OBJECTIVE: The aim of this review is to determine whether automated computerised tests accurately identify patients with progressive cognitive impairment and, if so, to investigate their role in monitoring disease progression and/or response to treatment.
METHODS: Six electronic databases (Medline, Embase, Cochrane, Institute for Scientific Information, PsycINFO, and ProQuest) were searched from January 2005 to August 2015 to identify papers for inclusion. Studies assessing the diagnostic accuracy of automated computerised tests for mild cognitive impairment (MCI) and early dementia against a reference standard were included. Where possible, sensitivity, specificity, positive predictive value, negative predictive value, and likelihood ratios were calculated. The Quality Assessment of Diagnostic Accuracy Studies tool was used to assess risk of bias.
RESULTS: Sixteen studies assessing 11 diagnostic tools for MCI and early dementia were included. No studies were eligible for inclusion in the review of tools for monitoring progressive disease and response to treatment. The overall quality of the studies was good. However, the wide range of tests assessed and the non-standardised reporting of diagnostic accuracy outcomes meant that statistical analysis was not possible.
CONCLUSION: Some tests have shown promising results for identifying MCI and early dementia. However, concerns over small sample sizes, lack of replicability of studies, and lack of evidence available make it difficult to make recommendations on the clinical use of the computerised tests for diagnosing, monitoring progression, and treatment response for MCI and early dementia. Research is required to establish stable cut-off points for automated computerised tests used to diagnose patients with MCI or early dementia.
© 2018 The Authors. International Journal of Geriatric Psychiatry Published by John Wiley & Sons Ltd.

Entities:  

Keywords:  Alzheimer disease; MCI; ageing; automated tests; computerised tests; dementia; diagnosis; monitoring

Mesh:

Year:  2018        PMID: 29356098      PMCID: PMC5887872          DOI: 10.1002/gps.4852

Source DB:  PubMed          Journal:  Int J Geriatr Psychiatry        ISSN: 0885-6230            Impact factor:   3.485


INTRODUCTION

Cognitive impairment in dementia is a growing public health concern.1 It is a distinctive characteristic of all dementias, and its timely assessment is a crucial element in the diagnosis of dementia.2 This is because some causes of dementia are treatable and fully or partially reversible, including dementias caused by vitamin B12 deficiency,3 side effects of medications,4 metabolic abnormality, and certain brain tumours.5 There is evidence from the United States that early recognition and treatment of dementia may delay the subsequent need for nursing home care, reduce the risk of misdiagnosis and inappropriate management, and ease the burden on carers.6 Obtaining accurate incidence and prevalence figures for mild cognitive impairment (MCI) is difficult because people with cognitive impairment may go undiagnosed, and estimates vary significantly depending on the definitions used in different studies. For example, a large population‐based study of older‐aged individuals in the United Kingdom7 reported that prevalence estimates varied widely across current MCI definitions (range, 2.5‐41.0%).
In addition, the rates of progression from MCI to dementia varied from 3.7% to 30.0%.7 Evidence from neuropathological and neuroimaging studies suggests that biological changes associated with dementia occur long before the onset of symptoms.8 This has given rise to the concept of mild cognitive impairment (MCI), which is the state between the cognitive changes of normal ageing and early dementia.9, 10, 11 Mild cognitive impairment refers to the clinical condition used to describe people whose cognitive function is below that of the normal population for their educational level and age but who do not have any loss of functional abilities or skills.11, 12, 13, 14 It is a heterogeneous state, with possible trajectories including Alzheimer disease (AD), Lewy body dementias, and even reversion to normal cognitive functioning.15 The difference between MCI and early dementia is based on the level of cognitive decline and the pattern of change in mood and behaviour. Individuals diagnosed with early dementia present with multiple cognitive deficits, and their memory loss is sufficient to impact everyday social and occupational functioning. The 4 most common medical conditions causing dementia are AD, vascular conditions, frontotemporal atrophy, and Lewy body disease.
Irrespective of the primary cause, the cognitive prognosis for people with most types of dementia is usually poor.16, 17 A number of pen‐and‐paper tools are available for screening people for cognitive impairment, for example, the General Practitioner Assessment of Cognition, the 6‐item Cognitive Impairment Test, and the Mini‐cog assessment instrument.18, 19 Different pen‐and‐paper tests are used by specialists to aid the diagnosis of MCI and early dementia, for example, the Dementia Toolkit for Effective Communication,20 the Montreal Cognitive Assessment,21 and the Saint Louis University Mental Status examination.22 However, these specialist tests can be expensive and time‐consuming.23 More recently, several automated tests have been developed,24, 25 which may be uniquely suited to the early detection of changes in cognition: for example, they can cover a wider range of ability and record the accuracy and speed of responses with a level of sensitivity not possible in standard administration.23 The rationale for this review is to determine whether automated computerised tests for cognitive impairment have the potential to contribute to early diagnosis and to simplify the current method of monitoring progression and treatment response compared with standard clinical practice.

Key points: Timely diagnosis of MCI and early dementia is important for good prognosis and effective management. A number of automated tests for diagnosing and monitoring progression of cognitive impairment have been developed, which need to be used in conjunction with clinical assessment. The overall quality and quantity of the available evidence are insufficient to make recommendations on the clinical use of these automated computerised tests. Further research is required to examine the cut‐off points for different populations in automated tests for diagnosing and monitoring progression and treatment response of MCI and early dementia.

METHODS

A systematic review was performed to describe the diagnostic accuracy of automated tests to detect MCI and early dementia as well as investigate their role in monitoring disease progression and response to treatment. The methodology and reporting of this review followed the guidance set out by the Cochrane Handbook for Diagnostic Test Accuracy Reviews.26 See Appendix S1 found in the Supporting Information for an abbreviation list.

Criteria for considering studies for this review

Any study assessing the diagnostic accuracy of automated computerised tests to diagnose or monitor MCI or early dementia against a reference standard was considered for inclusion. Case studies and qualitative studies were excluded. Studies or diagnostic tools published in a non‐English language were also excluded.

Participants

Participants were people with MCI or early dementia diagnosed by any recognised diagnostic standard.

Index tests

The index tests considered for inclusion were automated computerised tests of cognitive impairment, which can either be self‐administered or interviewer administered.

Reference standard

The reference standard for this review is the clinical diagnosis of MCI and early dementia using diagnostic criteria, for example, the International Classification of Diseases, edition 10, and the Diagnostic and Statistical Manual of Mental Disorders editions 4 and 5 (DSM‐IV and DSM‐V, respectively).27 It is recognised that clinical diagnosis itself has a degree of variability, but this is not unique to dementia studies and does not invalidate the basic diagnostic test accuracy approach.

Search methods for identification of studies

The following electronic databases were searched from January 2005 to August 2015 to identify studies for inclusion: Medline, Embase, Cochrane database, Institute for Scientific Information, PsycINFO, and ProQuest for dissertations and theses (see Appendix S2 found in the Supporting Information for the search strategy in Medline). Through citation tracking, one study from 2001 was included because it reported on a computerised test currently in use in clinical practice. The number of references retrieved from each database is provided in Appendix S3 found in the Supporting Information; references were managed in EndNote X7.

Selection of studies

Two reviewers independently screened all relevant titles and abstracts and full‐text articles for inclusion. Any disagreements were resolved by discussion with a third reviewer.

Data extraction and management

Data extraction forms were developed and piloted in an Excel spreadsheet by using 2 of the included studies. Data on study design, population characteristics, and outcomes were extracted by one reviewer and independently checked for accuracy by a second reviewer, with disagreements resolved through discussion with a third reviewer when necessary. The extracted data included information on the reference standard, index test, cut‐off points, and the measures of diagnostic test accuracy including sensitivity, specificity, receiver operating characteristic curve, and the area under the curve (AUC) for discriminating amongst MCI, early dementia, and cognitively healthy individuals.

Assessment of methodological quality

The methodological quality of the included studies was assessed by one reviewer and independently checked for accuracy by a second reviewer using the Quality Assessment of Diagnostic Accuracy Studies tool,28 which is recommended by the Cochrane Diagnostic Test Accuracy Reviews Guidelines.29 This tool is designed to evaluate the risk of bias and applicability of primary diagnostic accuracy studies using signalling questions in 4 domains: patient selection, index test, reference standard, and flow and timing.

Statistical analysis and data synthesis

An Excel spreadsheet was used to construct 2 × 2 tables of index test performance, recording the numbers of true positives, true negatives, false positives, and false negatives, together with the sensitivity and specificity values for MCI and early dementia. Sensitivity and specificity values with 95% confidence intervals, positive and negative predictive values (PPV and NPV, respectively), and positive and negative likelihood ratios (LR+ and LR−, respectively) were calculated when not reported in the studies. The authors of all included studies were approached with a request for specific sensitivity and specificity data, but only 2 provided these data. It was not possible to perform a meta‐analysis because of noncomparable data: the study designs varied, the cut‐off points for the primary outcome measure were heterogeneous, and the summary statistics were often inconsistently reported. A narrative synthesis of the results of the included studies was conducted.
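All of these summary measures follow directly from the 2 × 2 counts. As an illustrative sketch only (not the review team's actual spreadsheet; the function and variable names are our own), the calculations, including a Wilson score 95% confidence interval for sensitivity and specificity, can be written as:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a proportion k/n (used here for sensitivity and specificity)."""
    centre = (k + z * z / 2) / (n + z * z)
    half = z * sqrt(k * (n - k) / n + z * z / 4) / (n + z * z)
    return centre - half, centre + half

def diagnostic_accuracy(tp, fn, tn, fp):
    """Standard diagnostic test accuracy measures from 2 x 2 counts."""
    sens = tp / (tp + fn)                     # proportion of diseased who test positive
    spec = tn / (tn + fp)                     # proportion of healthy who test negative
    return {
        "sensitivity": sens,
        "sensitivity_95ci": wilson_ci(tp, tp + fn),
        "specificity": spec,
        "specificity_95ci": wilson_ci(tn, tn + fp),
        "PPV": tp / (tp + fp),                # P(disease | positive test)
        "NPV": tn / (tn + fn),                # P(no disease | negative test)
        "LR+": sens / (1 - spec) if spec < 1 else float("inf"),
        "LR-": (1 - sens) / spec,
    }
```

For example, the CANTAB‐PAL counts reported for combined MCI/early dementia in Table 3 (TP = 31, FN = 1, TN = 21, FP = 5) reproduce the tabulated sensitivity of 96.9% and specificity of 80.8%.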

Patient and public involvement

An advisory group comprising clinicians and service users guided the team during the review. A call for participation was sent through frontline groups, for example, Alzheimer's Society and Dementia UK, to identify people interested in giving feedback on the results of the review and on the final report. The review team took guidance from these agencies and INVOLVE30 for planning and facilitating the meetings.

RESULTS

Results of the search

The electronic search was conducted in August 2015, and 18 796 records were retrieved, of which 399 articles were shortlisted for full‐text assessment (Figure 1). The comprehensive search strategy was necessary because indexing of diagnostic accuracy studies is poor. In total, 16 studies met the inclusion criteria for detecting MCI and early dementia. No studies met the review inclusion criteria for monitoring progression or treatment response in MCI or early dementia, and therefore, there is no further mention of monitoring disease progression in the results section.
Figure 1

Preferred Reporting Items for Systematic Reviews and Meta‐Analyses flow diagram

In addition to the 16 included studies, 4 trials were identified during hand searching (Appendix S4 found in the Supporting Information). The authors of these studies were approached by email and telephone for results, but no responses were received. A summary of the 16 included studies is presented in Table 1; there were 7 cohort studies, 7 case‐control studies, and 2 cross‐sectional studies.40, 43 Seven of the 16 included studies evaluated the use of automated computerised tests to detect MCI alone, 2 studies reported results for early dementia, 6 studies reported results for combined MCI/early dementia, and 1 study reported on cognitive impairment with a co‐morbidity, namely, human immunodeficiency virus (HIV)–associated neurocognitive disorders (HANDs).43 Two different reference standards were used for MCI in these studies: 9 studies used the Petersen criteria, and 4 studies used clinical diagnosis with a battery of neurocognitive tests. The reference standard for early dementia varied across studies: 2 studies used the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association criteria,42, 46 2 studies used DSM‐IV,33, 34 1 study used the DSM‐V criteria,39 2 studies used clinical diagnosis with neurocognitive tests,36, 46 and 1 study used the Clinical Dementia Rating score.41
Table 1

Study and participant characteristics

Study | Condition | Country, setting | N | Mean age, y (SD, range) | Male, % | Mean education, y (SD, range) | Index test | Reference standard
Ahmed et al31 | MCI | United Kingdom, primary care (Oxford OPTIMA study)a | 35 (control: 20, MCI: 15) | Control: 77.4 (4); MCI: 80.9 (7.2) | Control: 55.0; MCI: 33.3 | Control: 14.7 (2.9); MCI: 13.1 (3) | CANS‐MCI | Clinical diagnosis using the Petersen criteria
De Jager et al32 | MCI | United Kingdom, community | 119 (control: 98, MCI: 21) | Control: 77.18 (5.9); MCI: 81.95 (5.4) | NR | Unclear | CogState | Clinical diagnosis using battery of neurocognitive tests
Doniger et al33 | MCI; MCI/mild dementia | United States, tertiary care memory clinic | 161 (control: 71, MCI: 58, mild AD: 32) | Entire group: 76.0 (8.2) | Entire group: 37.5 | Entire group: 13.3 (3.6) | Mindstreams (abridged) | Clinical diagnosis using the Petersen criteria for MCI and DSM‐IV for dementia
Dwolatzky et al34 | MCI; mild AD | Canada/Israel, 2 tertiary care memory clinics | 98 (control: 39, MCI: 30, mild AD: 29) | Control: 73.41 (8.00); MCI: 77.15 (6.43); mild AD: 80.55 (4.91) | Control: 33.3; MCI: 56.7; mild AD: 44.8 | Control: 14.95 (3.5); MCI: 13.07 (2.86); mild AD: 11.31 (2.85) | Mindstreams | Clinical diagnosis using the Petersen criteria for MCI and DSM‐IV for mild AD
Juncos‐Rabadan et al35 | aMCI | Spain, primary care | 162 (control: 85, mda‐MCI: 29, sda‐MCI: 48) | Control: 62.25 (8.26, 50‐82); mda‐MCI: 71.68 (7.74, 54‐87); sda‐MCI: 68.02 (9.04, 50‐84) | All participants: 36.4 | Control: 10.83 (5, 2‐21); mda‐MCI: 10.06 (3.99, 3‐20); sda‐MCI: 9.83 (3.96, 2‐20) | CANTAB‐R (PRM, DMS, and PAL) | Clinical diagnosis using neurocognitive tests and the Albert and Petersen criteria for aMCI
Junkkila et al36 | aMCI; mild/probable dementia | Finland, hospital | 58 (control: 22, aMCI: 17, AD: 19) | Control: 70 (4.48, 65‐80); aMCI: 73 (6.3, 61‐83); AD: 73 (6.76, 61‐83) | Control: 36.36; aMCI: 64.7; AD: 26.35 | Control: 10 (3.25); aMCI: 8 (3); AD: 8 (2.88) | CANTAB‐PAL | Clinical diagnosis using the Petersen criteria and neurocognitive tests
Kingsbury et al37 | MCI | Australia, community and memory clinic | 140 (control: 95, MCI: 30, depressed: 15) | Control: 68.85 (7.96, 53‐89); MCI: 77.62 (7.45, 51‐87) | Control: 37; MCI: 43 | Controls: 4.93 (1.71); MCI: 3.07 (1.71) (unclear what is measured) | CogniScreen | Clinical diagnosis using the Petersen criteria
Kluger et al38 | MCI; early dementia | United States, memory clinic | 101 (control: 39, MCI: 19, probable AD: 17, no diagnosis: 25) | Control: 64 (11); MCI: 72 (10); probable AD: 78 (9) | NR | NR | Computerised test (no name) | Diagnosed by a consensus of at least 2 clinicians
Lichtenberg et al39 | MCI/early dementia | United States, specialised geriatric clinic | 102 (control: 55, MCI: 11, mild dementia: 36) | All participants: 79.3 (6.6) | All participants: 46.1 | All participants: 13.5 (2.9) | CST | Clinical diagnosis using the Petersen criteria; clinical diagnosis of dementia using DSM‐V
Maruff et al40 | MCI | Australia, primary care | 766 (control: 659, aMCI: 107) | Control: 69.5 (6.6); MCI: 75.7 (7.5) | Control: 42.2; MCI: 49.5 | Control: 12a (9‐15); MCI: 12a (9‐15) | CBB | Clinical diagnosis using the Petersen criteria
Mundt et al41 | Dementia | United States, specialised geriatric clinic | 116 (control: 74, mild dementia: 42) | All participants: 76.7 (7.0, 56‐93) | All participants: 36.7 | All participants: 13.3 (3, 6‐22) | Computer‐automated telephone screening | Clinical diagnosis using CDR score
O'Connell et al42 | Probable AD | Ireland, memory clinic | 50 (control: 16, probable AD: 34) | Control: 72.6 (7.7); probable AD: 73 (5.9) | Control: 12.5; probable AD: 32.4 | NR | CANTAB‐PAL | Clinical diagnosis using the NINCDS‐ADRDA criteria
Rosenthal et al43 | HAND | United States, general clinical research clinic | 55 (HIV+ controls: 16, HAD: 39) | HIV+ controls: 45.4 (6); HAD: 48.3 (6.3) | HIV+ controls: 75.0; HAD: 71.8 | HIV+ controls: 12.3 (1.8); HAD: 12.6 (2.1) | CAMCI modified | HAND category using the Frascati criteria
Saxton et al44 | MCI | United States, primary care and community | 524 (control: 296, MCI: 228) | Control: 71.84 (5.95); MCI: 75.18 (6.76) | Control: 32.8; MCI: 37.7 | Control: 13.74 (2.69); MCI: 13.10 (2.61) | CAMCI | Clinical diagnosis by consensus using battery of neurocognitive tests and functional and medical information
Tierney et al45 | MCI | Canada, tertiary care | 263 (breakdown NR) | Completed without assistance: 78.7 (6.9); completed with assistance: 81.8 (6.5) | All participants: 41.4 | Completed without assistance: 15.2 (3.2); completed with assistance: 13.9 (4.0) | CAMCI | Clinical diagnosis using battery of neurocognitive tests
Vacante et al46 | MCI; early dementia | United Kingdom, primary care (Oxford OPTIMA study)a | 78 (control: 40, MCI: 20, early AD: 18) | Traditional version: control 74.7 (7.78), MCI 78.3 (8.4), early AD 73.67 (6.28); novel version: control 73.67 (7.14), MCI 79.7 (6.07), early AD 77.22 (4.94) | Traditional version: control 50, MCI 60, early AD 66.7; novel version: control 45, MCI 60, early AD 77.8 | Traditional version: control 15.85 (3.36), MCI 15.9 (3.32), early AD 15 (3.04); novel version: control 16.35 (3.18), MCI 15 (2.66), early AD 16.11 (2.97) | TPT | Clinical diagnosis using the Petersen criteria

Abbreviations: AD, Alzheimer disease; aMCI, amnestic mild cognitive impairment; CAMCI, Computer Assessment of Mild Cognitive Impairment; CANS‐MCI, Computer‐Administered Neuropsychological Screen for Mild Cognitive Impairment; CANTAB, Cambridge Neuropsychological Test Automated Battery; CANTAB‐PAL, Cambridge Neuropsychological Test Automated Battery Paired Associated Learning; CBB, CogState Brief Battery; CDR, Clinical Dementia Rating Scale; CST, Computerised Self‐Test; DMS, Delayed Matching to Sample; DSM‐IV, Diagnostic and Statistical Manual of Mental Disorders edition 4; HAD, HIV‐associated dementia; HAND, HIV‐associated neurocognitive disorder; HIV+, human immunodeficiency virus; NR, not reported; MCI, mild cognitive impairment; mda‐MCI, multiple‐domain amnestic mild cognitive impairment; NINCDS‐ADRDA, National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association; OPTIMA, Oxford Project to Investigate Memory and Ageing; PAL, Paired Associated Learning; PRM, Pattern Recognition Memory; sda‐MCI, single‐domain amnestic mild cognitive impairment; TPT, The Placing Test.

It is unclear whether these cohorts were independent of each other.

Median.


Findings

The diagnostic accuracy of 11 automated computerised tests for the detection of MCI and/or early dementia was evaluated in 16 studies: 15 studies assessed populations without co‐morbidities, and 1 study assessed cognitive impairment with a co‐morbidity.43 The details of the index tests are summarised in Table 2. Pooling of data from these 16 studies was considered inappropriate because few studies evaluated the same index test in the same population, and 2 × 2 data could be extracted from only 5 of the 16 studies.
Table 2

Index test details

Study | Test name | Cognitive domains tested | Test platform | Time (min) | Method of administration
Ahmed et al31 | CANS‐MCI | Memory, language, visuospatial, executive function | Desktop computer, touch screen system with both oral (loudspeaker) and on‐screen instructions | 30 | Self‐administered; researcher in room
De Jager et al32 | CogState | Memory, executive function, attention, processing speed | Internet | Approximately 20 | Self‐administered; practice session with a psychologist
Doniger et al33 | Mindstreams (abridged) | Memory, executive function, visuospatial, motor skills | Computer and mouse | 30 | Self‐administered; practice session
Dwolatzky et al34 | Mindstreams | Memory, executive function, visuospatial, verbal, attention, information processing, motor skills | Designed for use with older people; mouse with the number pad on the keyboard (similar to a telephone keypad) | 45 | Self‐administered; practice session with feedback prior to testing; research assistant present
Juncos‐Rabadan et al35 | CANTAB‐R (PRM, DMS, and PAL) | Memory | Touch screen computer | NR | Self‐administered; researcher present
Junkkila et al36 | CANTAB‐PAL | Memory | Touch screen computer | NR | Self‐administered
Kingsbury et al37 | CogniScreen | Memory | Laptop, headset with microphone | 20‐40 | Self‐administered; experimenter in room
Kluger et al38 | Computerised test (no name) | Memory, praxis, naming, executive function | Laptop | 12‐15 | Self‐administered; screening test for computer competency
Lichtenberg et al39 | CST | Learning, memory, executive function | Internet based, interface with both written and oral instructions | 15 | Self‐administered; keyboard proficiency test; administered by graduate psychology students
Maruff et al40 | CBB | Memory | Desktop computer, yes/no button attached through USB port | 10 | Self‐administered; verbal instructions by supervisor; practice session
Mundt et al41 | Computer‐automated telephone screening | Memory, spatial (auditory), executive function, orientation, language | Standard touch‐tone telephones | 11‐15 | Self‐administered; researcher provided assistance in dialling the number
O'Connell et al42 | CANTAB‐PAL | Memory | Touch screen computer | 10 | NR
Rosenthal et al43 | CAMCI modified | Memory, attention, executive function, processing speed | Tablet with stylus | 25 | Self‐administered
Saxton et al44 | CAMCI | Memory, attention, executive function, processing speed | Desktop computer | Approximately 20 | Self‐administered
Tierney et al45 | CAMCI | Memory, attention, executive function, processing speed | Tablet computer | 30 | Self‐administered, some required researcher assistance
Vacante et al46 | TPT | Memory | Computer | 20 | Self‐administered; including practice pages

Abbreviations: CAMCI, Computer Assessment of Mild Cognitive Impairment; CANS‐MCI, Computer‐Administered Neuropsychological Screen for Mild Cognitive Impairment; CANTAB, Cambridge Neuropsychological Test Automated Battery; CANTAB‐PAL, Cambridge Neuropsychological Test Automated Battery Paired Associated Learning; CBB, CogState Brief Battery; CST, Computerised Self‐Test; DMS, Delayed Matching to Sample; NR, not reported; PAL, Paired Associated Learning; PRM, Pattern Recognition Memory; TPT, The Placing Test.


Studies reporting on diagnostic accuracy outcomes with a 2 × 2 table

Five studies reported diagnostic accuracy outcomes in a 2 × 2 table, as described in Table 3: 2 studies reported outcomes for MCI, 3 studies reported outcomes for early dementia, and 1 of the early‐dementia studies also reported combined outcomes for both MCI and early dementia.
Table 3

Diagnostic accuracy outcomes with 2 × 2 table

Study | Index test | Cut‐off | Sensitivity, % | Specificity, % | AUC | TP | FN | TN | FP | PPV, % | NPV, % | LR+ | LR−
MCI
Juncos‐Rabadan et al35 | CANTAB, overalla | NR | 79.7 | 76.3 | NR | 55 | 14 | 71 | 22 | 71.4 | 83.3 | 3.4 | 0.3
Juncos‐Rabadan et al35 | CANTAB, PRM | 1.5 SD below controls | 45.5b | 92.9b | 0.704b | 35 | 42 | 79 | 6 | 85.4b | 65.3b | 6.44b | 0.59b
Juncos‐Rabadan et al35 | CANTAB, DMS | 1.5 SD below controls | 23.4b | 97.6b | 0.623b | 18 | 59 | 83 | 2 | 90.0b | 58.5b | 9.94b | 0.78b
Juncos‐Rabadan et al35 | CANTAB, PAL | 1.5 SD below controls | 58.4b | 89.4b | 0.747b | 45 | 32 | 76 | 9 | 83.3b | 70.4b | 5.52b | 0.46b
Saxton et al44 | CAMCI | Final tree model | 86 | 94 | 0.91b | 201 | 27 | 277 | 19 | 91.4b | 91.1b | 13.7b | 0.127b
Early dementia
Junkkila et al36 | CANTAB‐PAL | NR | 81.8b | 97.2b | 0.914b | 18 | 4 | 35 | 1 | 94.7b | 89.7b | 5.35b | 0.0.3b
Mundt et al41 | Computer‐automated telephone system | A derived scoring algorithm | 79.17b | 83.8b | 0.819b | 38 | 10 | 62 | 12 | 76.0b | 86.1b | 4.88b | 0.249b
O'Connell et al42 | CANTAB‐PAL | 32 errors | 67.6 | 100 | 0.780 | 23 | 11 | 16 | 0 | 100 | 59.3b | NA | 0.324
MCI/early dementia
Junkkila et al36 | CANTAB‐PAL | NR | 96.9b | 80.8b | 0.897b | 31 | 1 | 21 | 5 | 86.1b | 95.5b | 5.04b | 0.04b

Abbreviations: AUC, area under curve; CAMCI, Computer Assessment of Mild Cognitive Impairment; CANTAB, Cambridge Neuropsychological Test Automated Battery; CANTAB‐PAL, Cambridge Neuropsychological Test Automated Battery Paired Associated Learning; DMS, Delayed Matching to Sample; FN, false negative; FP, false positive; LR−, negative likelihood ratio; LR+, positive likelihood ratio; MCI, mild cognitive impairment; NPV, negative predictive value; NR, not reported; PAL, Paired Associated Learning; PPV, positive predictive value; PRM, Pattern Recognition Memory; TN, true negative; TP, true positive.

a The study details were provided by the primary author.

b Calculated by the research team.


Mild cognitive impairment

Juncos‐Rabadan et al35 evaluated 3 different visual episodic memory tests included in the Cambridge Neuropsychological Test Automated Battery (CANTAB): Pattern Recognition Memory, Delayed Matching to Sample, and Paired Associated Learning. The overall sensitivity and specificity for the 3 visual episodic memory tests were moderate at 79.7% and 76.3%, respectively. An overall AUC was not reported; the AUCs for the individual tests ranged from 0.623 (Delayed Matching to Sample) to 0.747 (Paired Associated Learning), showing poor ability to discriminate between the MCI group and the non‐MCI group. The test had an overall PPV of 71.4%, meaning that 71.4% of the people who tested positive for MCI with the index test actually had MCI according to the reference standard. Similarly, the overall NPV was 83.3%, meaning that 83.3% of people who tested negative for MCI on the index test did not have MCI. The overall LR+ was low at 3.4, indicating that a positive result only modestly increased the likelihood of disease, and the overall LR− of 0.3 indicated that a negative result only moderately decreased it. The study by Saxton et al44 evaluated the Computer Assessment of Mild Cognitive Impairment (CAMCI) and reported good sensitivity (86%) and exceptional specificity (94%). The reported AUC (0.91) was also very high.
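As a worked check (our own arithmetic, not part of the published analysis), the overall CANTAB values can be reproduced from the counts reported in Table 3; note that the NPV computes to 83.5%, marginally different from the tabulated 83.3%, presumably a rounding artefact in the source data:

```python
# Overall CANTAB 2 x 2 counts reported by Juncos-Rabadan et al (Table 3)
tp, fn, tn, fp = 55, 14, 71, 22

sensitivity = tp / (tp + fn)               # 55/69, approx. 79.7%
specificity = tn / (tn + fp)               # 71/93, approx. 76.3%
ppv = tp / (tp + fp)                       # 55/77, approx. 71.4%
npv = tn / (tn + fn)                       # 71/85, approx. 83.5% (tabulated as 83.3%)
lr_pos = sensitivity / (1 - specificity)   # approx. 3.4
lr_neg = (1 - sensitivity) / specificity   # approx. 0.3
```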

Early dementia

The CANTAB Paired Associated Learning (CANTAB‐PAL) test was evaluated in 2 of the studies. Junkkila et al36 reported high sensitivity (81.8%) and specificity (97.2%) and an AUC showing exceptional discrimination (0.914) for early dementia. The study by O'Connell et al42 reported poor sensitivity (67.6%), high specificity (100%), and an AUC showing moderate discrimination (0.780) between the early‐dementia group and the non–early‐dementia group. Mundt et al41 assessed the computer‐automated telephone system and reported moderate sensitivity (79.17%) and high specificity (83.8%) for this test.

MCI/early dementia

One study (Junkkila et al36) evaluated CANTAB‐PAL. The authors reported high sensitivity (96.9%) and high specificity (80.8%) with an AUC showing good discrimination (0.897) between the MCI/early‐dementia group and the non‐MCI/early‐dementia group.

Studies reporting on diagnostic accuracy outcomes without a 2 × 2 table

Eleven studies reported diagnostic accuracy outcomes for 9 different index tests without 2 × 2 data, as tabulated in Table 4. Instead, the authors calculated optimal sensitivity and specificity values using receiver operating characteristic curve analysis.
Table 4

Diagnostic accuracy outcomes without 2 × 2 table

Study | Index test | Cut‐off | Sensitivity, % | Specificity, % | AUC (95% CI) | PPV, % | NPV, % | LR+ | LR−
MCI
Ahmed et al31 | CANS‐MCI | 0.5 | 89.0 | 73.0 | 0.867 (0.743‐0.990) | 60 | 84 | NR | NR
De Jager et al32 | CogState, accuracy | 82.6 | 78.0 | 90.0 | 0.86 (NR) | NR | NR | NR | NR
De Jager et al32 | CogState, accuracy speed ratio | 3.54 | 76.0 | 79.0 | 0.84 (NR) | NR | NR | NR | NR
Dwolatzky et al34 | Mindstreams computerised cognitive testing | NA for AUC | NR | NR | 0.84 (NR) | NR | NR | NR | NR
Kingsbury et al37 | CogniScreen, pair recognition | 0.47 | 76.0 | 60.0 | 0.72 (0.62‐0.83) | NR | NR | NR | NR
Kingsbury et al37 | CogniScreen, cued recall | 0.305 | 82.1 | 76.7 | 0.87 (0.80‐0.95) | NR | NR | NR | NR
Kingsbury et al37 | CogniScreen, immediate and delayed serial recall | 0.385 | 92.6 | 80.0 | 0.89 (0.81‐0.97) | NR | NR | NR | NR
Kluger et al38 | Computerised test (no name) | NR | NR | NR | 0.89 | NR | NR | NR | NR
Maruff et al40 | CBB, psychomotor/attention | 90 | 41.1 | 85.7 | 0.67 (0.6‐0.73) | NR | NR | NR | NR
Maruff et al40 | CBB, learning/working memory | 90 | 80.4 | 84.7 | 0.91 (0.87‐0.94) | NR | NR | NR | NR
Tierney et al45 | CAMCI | 2 | 80.0 | 74.0 | NR | NR | NR | NR | NR
Vacante et al46 | TPT, computerised total (novel and traditional) | 19.5 | 70.0 | 76.2 | NR | NR | NR | NR | NR
Vacante et al46 | TPT, computerised objects and faces (novel and traditional) | 12.5 | 50 | 64.3 | NR | NR | NR | NR | NR
Vacante et al46 | TPT, computerised objects and faces (novel and traditional) | 13.5 | 75 | 52.4 | NR | NR | NR | NR | NR
Early dementia
Doniger et al33 | Mindstreams (abridged), overall | NA | NR | NR | 0.886 | NR | NR | NR | NR
Doniger et al33 | Memory: verbal memory | NA | NR | NR | 0.830 (0.762‐0.898) | NR | NR | NR | NR
Doniger et al33 | Memory: nonverbal memory | NA | NR | NR | 0.825 (0.756‐0.893) | NR | NR | NR | NR
Doniger et al33 | Executive function: Go–No Go | NA | NR | NR | 0.733 (0.640‐0.826) | NR | NR | NR | NR
Doniger et al33 | Executive function: Stroop interference | NA | NR | NR | 0.790 (0.690‐0.890) | NR | NR | NR | NR
Doniger et al33 | Executive function: catch game | NA | NR | NR | 0.748 (0.670‐0.827) | NR | NR | NR | NR
Doniger et al33 | Visual spatial: visual spatial imagery | NA | NR | NR | 0.678 (0.567‐0.789) | NR | NR | NR | NR
Dwolatzky et al34 | Mindstreams computerised cognitive testing | NR | NR | NR | NR | NR | NR | NR | NR
Kluger et al38 | Computerised test (no name) | NR | NR | NR | 0.97 | NR | NR | NR | NR
Vacante et al46 | TPT, computerised total (novel and traditional) | 15.5 | 88.9 | 92.9 | NR | NR | NR | NR | NR
Vacante et al46 | TPT, computerised objects and faces (novel and traditional) | 11.5 | 94.4 | 78.6 | NR | NR | NR | NR | NR
Vacante et al46 | TPT, computerised objects and faces (novel and traditional) | 13.5 | 94.4 | 52.4 | NR | NR | NR | NR | NR
MCI/early dementia
Doniger et al33 | Mindstreams (abridged), overall | NA for AUC | NR | NR | 0.823 (0.757‐0.888) | NR | NR | NR | NR
Doniger et al33 | Memory: verbal memory | NA | NR | NR | 0.773 (0.697‐0.849) | NR | NR | NR | NR
Doniger et al33 | Memory: nonverbal memory | NA | NR | NR | 0.767 (0.690‐0.844) | NR | NR | NR | NR
Doniger et al33 | Executive function: Go–No Go | NA | NR | NR | 0.719 (0.639‐0.800) | NR | NR | NR | NR
Doniger et al33 | Executive function: Stroop interference | NA | NR | NR | 0.671 (0.575‐0.766) | NR | NR | NR | NR
Doniger et al33 | Executive function: catch game | NA | NR | NR | 0.685 (0.595‐0.776) | NR | NR | NR | NR
Doniger et al33 | Visual spatial: visual spatial imagery | NA | NR | NR | 0.721 (0.638‐0.803) | NR | NR | NR | NR
Lichtenberg et al39 | CST | 1.5 | 80.0 | 87.0 | NR | 88.0 | 79.0 | NR | NR

Abbreviations: AUC, area under curve; CAMCI, Computer Assessment of Mild Cognitive Impairment; CANS‐MCI, Computer‐Administered Neuropsychological Screen for Mild Cognitive Impairment; CBB, CogState Brief Battery; CST, Computerised Self‐Test; LR−, negative likelihood ratio; LR+, positive likelihood ratio; MCI, mild cognitive impairment; NA, not applicable; NPV, negative predictive value; NR, not reported; PPV, positive predictive value; TPT, The Placing Test.

Eight studies reported diagnostic accuracy outcomes for MCI. Ahmed et al31 evaluated the Computer‐Administered Neuropsychological Screen for Mild Cognitive Impairment and reported high sensitivity (89.0%) and moderate specificity (73.0%), with an AUC of 0.867, which shows a good ability to discriminate between the MCI group and the non‐MCI group. Tierney et al45 evaluated the CAMCI test and reported a high sensitivity (80.0%) and a moderate specificity (74.0%); the authors did not report AUC values. Maruff et al40 evaluated the CogState Brief Battery (CBB), which has 2 composite scores across 4 tasks: psychomotor function, attention function, learning memory, and working memory. The psychomotor/attention composite had poor discrimination (AUC 0.67), with poor sensitivity (41.1%) but high specificity (85.7%). The learning/working memory composite had an AUC of 0.91, which shows an exceptional ability to discriminate between the MCI group and the non‐MCI group, with high sensitivity (80.4%) and high specificity (84.7%). The overall sensitivity, specificity, and AUC were not reported.

Dwolatzky et al34 and Doniger et al33 both assessed Mindstreams computerised cognitive testing. Only Doniger et al33 reported results relating to early dementia: they evaluated an abridged version of Mindstreams with an overall AUC of 0.886, which showed a good ability to discriminate between the early‐dementia group and the non–early‐dementia group.
Kluger et al38 evaluated an automated computerised test that did not have a specific name. The authors reported an AUC of 0.97, which shows an exceptional ability to discriminate between patients with early dementia and healthy controls. Doniger et al33 reported an overall AUC of 0.823, which showed a good ability to discriminate between the cognitively healthy group and the cognitively unhealthy group; the AUC values for individual subtests ranged from 0.671 to 0.773. Lichtenberg et al39 reported sensitivity and specificity values (80.0% and 87.0%, respectively), a PPV of 88.0%, and an NPV of 79.0%.
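The discrimination descriptors used throughout this section (poor, moderate, good, exceptional) appear to follow conventional AUC bands; the mapping below is inferred from the reported values and is not taken from the review itself:

```python
# AUC bands inferred from the descriptors used in this section; an
# assumption for illustration, not the review authors' definition.

def discrimination(auc):
    """Map an AUC value to the descriptor apparently used in the text."""
    if auc >= 0.9:
        return "exceptional"
    if auc >= 0.8:
        return "good"
    if auc >= 0.7:
        return "moderate"
    return "poor"

for auc in (0.67, 0.780, 0.867, 0.97):
    print(auc, discrimination(auc))
```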

HIV‐associated neurocognitive disorders

One study43 evaluated the diagnostic accuracy of an automated computerised test in people with cognitive impairment and co‐morbidities. This study examined HIV‐associated neurocognitive disorders (HAND) using the automated test CAMCI, which assessed multiple domains with different tasks. The study examined a range of diagnostic accuracy outcomes but did not report values for all of them.

Methodological quality

The methodological quality of the included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool as summarised in Figure 2.
Figure 2

Risk of bias and applicability concerns summary [Colour figure can be viewed at wileyonlinelibrary.com]

The risk‐of‐bias criterion for patient selection was high for 7 studies because a case‐control study design had not been avoided (see Appendix S6 in the Supporting Information). Seven studies were judged to be at unclear risk of bias in the index test domain, since the threshold values for the index tests were not prespecified. There was high concern regarding the applicability of the index test for all of the studies because the interpretation of the index test differed from the review question: it is not possible to establish a diagnosis of MCI or early dementia using automated computerised tests in isolation, and specialist expertise is necessary to establish a diagnosis. The reference standard domain for risk of bias was unclear in 8 studies, since it was not possible to ascertain whether reference standard results were interpreted without knowledge of the results of the index tests. All but one study38 were judged to have low concern for applicability regarding the reference standard; the exception used a consensus of 2 clinicians' opinions as the reference standard. In the flow and timing domain, a judgement of unclear risk of bias was given to 2 studies35, 43 since attrition or timing was not described in the papers; the remaining 14 studies were assessed as being at low risk because all patients had received the same reference standard and all patients were included in the analysis. There was high concern in the applicability domains for all 16 studies. Only 1 study was judged to be at low risk of bias across the 4 domains examined39; despite this, the overall quality of the included studies was considered to be good.

Data from the included studies were presented and discussed with a service user. The structure of the meeting is described in Appendix S5 in the Supporting Information.
The service user thought that all of the index test domains needed to be tested to enable a comprehensive overview of any suspected cognitive impairment. His view was that more information on key domains would help clinicians and patients address the challenges faced by patients with MCI or early dementia. The service user raised concerns about the age of the study participants, since no tests assessed cognitive impairment in people over the age of 90 years. Another concern was the effect of little or no education on the ability to perform well on the tests. The importance of the index tests being user‐friendly and acceptable to patients was also highlighted. He stated a preference for desktop computers over touch‐screen tests, in case a patient had tremors, and highlighted the importance of ensuring that the colour palette in visual components of the tests had sharp contrast, because older people are likely to have problems with their eyesight. He also noted that some people might become frustrated with tests lasting longer than 40 minutes.

DISCUSSION

In assessing the diagnostic accuracy of a test, an index test with high specificity is preferable for diagnosis, and one with high sensitivity is preferred for screening.47 When patients are diagnosed with MCI or early dementia, an index test with both high sensitivity and high specificity is needed to appreciate the distinctive pattern of cognitive impairment in MCI and early dementia, which distinguishes it from cognitive impairment caused by another disease process, eg, the cognitive impairment that presents in depression or HIV. A number of studies included in this review were not conducted in samples representative of the usual clinical population in which these tests might be used (eg, patients visiting memory clinics with a mix of MCI and dementia of various aetiologies, the “worried well”, and depressed patients) but in convenience samples of patients with limited diagnoses (mostly MCI and AD). This, along with the lack of reliable evidence to support one test over another, makes it difficult to draw a clear picture of the diagnostic accuracy of the index tests in this review. There was some disparity in how the studies were reported; for example, all of the index tests except 4 were used as screening tests, yet the authors reported outcomes for diagnostic accuracy. It is also not clear from the included studies whether these computerised tests ought to be used in primary or secondary care. In the United Kingdom, some primary care practices take part in “case finding” for dementia, for example, targeting “high‐risk” groups (eg, older adults or patients with high vascular risk, learning disability, or Parkinson disease), and hospital staff undertake brief cognitive assessments during all acute admissions for older adults.
The pen‐and‐paper tests currently used in clinical practice not only help clinicians differentiate between normal cognition, MCI, and dementia20, 21, 22 but also assist in staging severity of illness. The CANTAB test was the only automated test that could stage severity.35, 36, 42 However, 2 of the 3 CANTAB‐PAL studies36, 42 had very small sample sizes (58 and 50, respectively), and the slightly larger study35 only tested the domain of visual episodic memory. The time taken to complete these computerised tests is not clear in the case of CANTAB‐PAL and, depending on the version, ranged from 30 to 45 minutes for Mindstreams.33 In contrast, the paper‐based tests take 7 to 10 minutes to administer.20, 21, 22 Concern about the time taken to complete the tests was raised in the service user feedback; the user pointed out that people might become frustrated with tests lasting more than 40 minutes, especially if they are not familiar with technology. The included papers also did not describe the time needed to train the assessor or the need for a specialist for scoring. An important point to consider is that the current diagnosis of patients with MCI and early dementia is based on clinical judgement and medical history as well as on the results of paper‐based cognitive tests. Automated tests cannot be used in isolation or substituted for clinical judgement; even with prespecified cut‐off values for a particular population, any cognitive testing measure alone is insufficient to render a diagnostic classification. None of the previously conducted relevant reviews in this area was a diagnostic accuracy review.23, 48, 49 They were narrative reviews that summarised the batteries of tests used and rated the evidence on validity and reliability, comprehensiveness, and usability. This review focused on computerised tests that were self‐administered and required a minimum level of involvement from professionals.
In line with the findings of this review, the authors of the other reviews concluded that there are significant differences among automated computerised tests and, hence, that they must be judged on a case‐by‐case basis.23 More research is required to establish stable cut‐off points for each automated test used to diagnose patients with MCI or early dementia. An important consideration is testing the cut‐off points in specific patient populations, for example, in patients of different age groups or education levels and from different geographical regions. Another area for future research is providing more information on the costs of automated tests, including the time needed for training, administration, and scoring of the different tests, as these are important factors for their use in routine clinical practice. This information is currently absent from the published studies describing automated tests used to diagnose or monitor people with MCI or early dementia. No studies reporting on outcomes relating to monitoring progression of disease could be identified, which highlights the difficulty of evaluating these tests for monitoring progression and treatment response against standard clinical practice.

Strengths of this review

The search strategy for this review was extensive. The methodological rigour of the review process was enhanced by the use of 2 assessors to perform citation screening, quality assessment, and data extraction/checking. All of the primary study authors were contacted and asked to fill in the contingency tables. A patient and public involvement exercise was also conducted.

Weaknesses of the review

This review is limited in part by the small number of included studies evaluating the same automated computerised test. Because of noncomparable data relating to the index tests, it was not appropriate to pool the data. Another limitation of the studies is the lack of comparative results across the different domains examined.

CONCLUSIONS

In this review, it is difficult to draw a clear picture of the diagnostic accuracy of automated computerised tests for establishing a diagnosis of MCI or early dementia, because there is currently insufficient evidence to support the use of one test over another. Further research is required to examine the cut‐off points for the diagnosis of MCI and early dementia when using automated tests. These test scores do not always correlate with medical history or, more importantly, with functioning. The suitability of these tests also depends on their cost, the time needed for training the assessor, the time needed for administration of the test, and the need for a specialist for scoring.

CONFLICT OF INTEREST

None declared.

FUNDING

The NIHR Health Technology Assessment Programme commissioned this report with project reference number 15/67/01.