Noor Habib1, Joshua Stott1. 1Department of Clinical, Educational and Health Psychology, University College London, London, United Kingdom.
Abstract
OBJECTIVE: This systematic review evaluates the evidence for the diagnostic accuracy of the non-English updated versions of Addenbrooke's Cognitive Examination (ACE) - the ACE-Revised (ACE-R) and the ACE-III - in the diagnosis of dementia. METHODS: A systematic search identified 16 eligible studies evaluating the diagnostic accuracy of the ACE-R and ACE-III in ten different languages. Most studies were assessed as of medium to low quality against Standards for Reporting of Diagnostic Accuracy (STARD) guidance. RESULTS: While studies generally reported excellent diagnostic accuracy across and within different languages, these findings are compromised by the methodological limitations of the studies, and optimal cut-offs varied, even within particular language versions. CONCLUSION: Future research should address these limitations through adherence to STARD guidelines. The ACE-III is particularly under-evaluated and should be a focus of future research. The variance in optimal cut-offs within language versions compromises clinical utility and could be addressed in future work through the use of a priori defined thresholds.