Literature DB >> 34237113

Converting from the Montreal Cognitive Assessment to the Mini-Mental State Examination-2.

Hwabeen Yang1, Daehyuk Yim1, Moon Ho Park1.   

Abstract

OBJECTIVE: The Montreal Cognitive Assessment (MoCA) and Mini-Mental State Examination-2 (MMSE-2) are useful psychometric tests for cognitive screening. Many clinicians want to predict the MMSE-2 score based on the MoCA score. To facilitate the transition from the MoCA to the MMSE-2, this study developed a conversion method.
METHODS: This study retrospectively examined the relationship between the MoCA and MMSE-2. Overall, 303 participants were evaluated. We produced a conversion table using the equipercentile equating method with log-linear smoothing. Then, we evaluated the reliability and accuracy of this algorithm to convert the MoCA to the MMSE-2.
RESULTS: MoCA scores were converted to MMSE-2 scores according to a conversion table that achieved a reliability of 0.961 (intraclass correlation). The accuracy of this algorithm was 84.5% within 3 points difference from the raw score.
CONCLUSIONS: This study reports a reliable and easy conversion algorithm for transforming MoCA scores into converted MMSE-2 scores. This method will greatly enhance the utility of existing cognitive data in clinical and research settings.

Year:  2021        PMID: 34237113      PMCID: PMC8266092          DOI: 10.1371/journal.pone.0254055

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

The Mini-Mental State Examination (MMSE) is the most widely used cognitive screening test. In 2010, the MMSE-Second Edition, a revised version of the MMSE, was introduced. Importantly, MMSE-Second Edition: Standard Version (MMSE-2) scores are interchangeable with MMSE scores without any score conversion [1]. Most clinical trials of dementia medications have used the MMSE score, or changes in this score, to determine the severity of dementia or the effectiveness of treatments [2, 3]. Verifying MMSE scores in clinical practice is very important because many dementia practice guidelines refer to the MMSE as a standardized tool for cognitive testing [4, 5]. However, owing to copyright restrictions, the MMSE can no longer be used freely, and the MMSE-2 must be purchased, potentially limiting its routine use in clinical and research settings [6]. The Montreal Cognitive Assessment (MoCA) is another cognitive screening test, with higher sensitivity than the MMSE for detecting early-stage cognitive decline [7]. Use of the MoCA for cognitive screening is increasing because it is available free of charge and has favorable clinical characteristics. Unfortunately, clinical trials vary in their use of these two scales, which makes comparisons between studies and meta-analyses difficult because direct comparison of scores between the MMSE and MoCA is complicated. In addition, many clinicians want to predict the MMSE score based on the MoCA score. Previous studies have attempted to develop MoCA-to-MMSE conversion algorithms or equivalency tables [8-14], and these conversion methods have been validated in other languages [15, 16]. However, to the best of our knowledge, no study has converted the MoCA to the MMSE-2. Therefore, this study attempted to derive a conversion table from the MoCA to the MMSE-2, to develop a simple and reliable conversion algorithm for the two scales, and to compare the new conversion algorithm with those described in previous studies.

Methods

Participants

This was a retrospective observational study of subjects who visited a memory clinic at a university hospital in the Republic of Korea and who were referred for neuropsychological screening. Overall, 303 participants who had visited the hospital from April 2020 to February 2021 were evaluated. The participants included 95 subjects with dementia, 172 subjects with mild cognitive impairment (MCI), and 36 cognitively unimpaired (CU) subjects. MMSE data were used as a reference in this study because there is little information on the relationship between the MoCA and MMSE-2. A previous study reported that converted MMSE scores and raw (original, actual, observed) MMSE scores had an intraclass correlation coefficient (ICC) of 0.85 [17]. A required sample size of 31 subjects was calculated using Walter's method with a confidence level of 0.95, a confidence interval width within ±0.1, two raters per subject, and a given ICC of 0.85 [18]. The current study met and exceeded this minimum sample size in each cognitive subgroup. Subjects with dementia met the criteria for a major neurocognitive disorder proposed by the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), American Psychiatric Association [19]. Subjects with MCI were diagnosed according to the criteria proposed by the International Working Group on MCI [20]. CU subjects did not meet the criteria for MCI or dementia but were recruited and assessed in a manner identical to that used for those with MCI and dementia [21]; CU subjects were functionally independent. Demographic data, including age and sex, and information regarding years of education were collected. All participants underwent a comprehensive evaluation consisting of a detailed medical history, neurological examinations, and a neuropsychological evaluation. In addition, laboratory tests were used to confirm that there were no other causes of dementia or cognitive impairment.
Magnetic resonance imaging was performed on all patients. All participants underwent the MMSE-2 followed by the MoCA on the same day. The results of the MoCA and MMSE-2 were not available during the consensus diagnosis. The study protocol was reviewed and approved by the Institutional Review Board of Korea University Ansan Hospital (2021AS0066), and informed consent was not necessary because of the retrospective design of the study and the de-identified nature of the data.

MoCA and MMSE-2

The MoCA was developed as a brief screening test for MCI and the early stages of dementia and is now widely used for this purpose [7, 22]. This test evaluates visuospatial (5 points), naming (3 points), attention (6 points), language (3 points), abstraction (2 points), memory (5 points), and orientation (6 points) abilities. Possible scores range from 0 to 30 points, where higher scores indicate better cognitive function. This study used the Korean version of the MoCA [22]. The MMSE-2 is among the most extensively used tests for screening cognitive impairment in clinical and research settings due to its practicality [1]. MMSE-2 scores also range from 0 to 30 points, where higher scores indicate better cognitive function. The MMSE-2, similar to the MMSE, examines the following six cognitive domains: orientation in time (5 points), orientation in place (5 points), memory registration (3 points), memory recall (3 points), attention and calculation (5 points), and language and other functions (8 points). This study used the Korean version of the MMSE-2: Standard Version, Blue Form [23].

Statistical analysis

Data are expressed as means (standard deviations) for continuous variables and as percentages for categorical variables. Demographic and clinical characteristics were evaluated with chi-squared tests for differences between proportions, and the Kruskal–Wallis test was used to test for differences between continuous variables after performing Levene's test for equality of variance. Bonferroni correction was used for post-hoc comparisons. The overall agreement between the MoCA and MMSE-2 was assessed using Pearson's correlation coefficient (r). For comparisons among correlations for cognitive subgroups, coefficients were converted and compared with Fisher's Z-transformation. In addition, the concordance correlation coefficient (CCC) was calculated. The CCC is a more conservative measure of agreement that avoids the issue of linear versus nonlinear association; it assesses how well the relationship between the measurements is represented by a line through the origin at an angle of 45 degrees. The CCC values were interpreted as poor (CCC < 0.90), moderate (CCC = 0.90–0.95), substantial (CCC = 0.95–0.99), or almost perfect (CCC > 0.99) [24]. The equipercentile equating method with log-linear smoothing was used to estimate scores from the MoCA to the MMSE-2. This equating method matched the MoCA score and the raw MMSE-2 score based on their respective percentile ranks after smoothing the corresponding distributions. A comprehensive explanation of equipercentile equating has been published previously [25]; in summary, scores from two different measures are considered equivalent if their corresponding percentile ranks are equal. The strength of this method is that the equated scores always fall within the range of possible scores; a limitation is that it can lead to an irregular distribution of scores.
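As an illustration of the CCC described above, Lin's coefficient can be computed directly from paired scores. This is a minimal sketch, not the authors' code; the function name is ours:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient for paired scores x, y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    # CCC = 2*cov / (vx + vy + (mx - my)^2); equals Pearson's r only when
    # the two score distributions share the same mean and variance
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson's r, the CCC is penalized by any location or scale shift between the two scales, which is why the same data can show strong correlation but poor concordance.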
A log-linear transformation of the raw value of each score before equipercentile equating is required to smooth the raw scores and to create a normal distribution without irregularities attributable to sampling; this transformation enhances equating accuracy. Based on the results of the equipercentile equating analysis, converted MMSE-2 scores were generated. Although MoCA and MMSE-2 scores are continuous, they are integers without decimal points; thus, all estimated scores were rounded to the nearest integer, which restricted the range of scores to between 0 and 30. We evaluated the conversion method using the ICC to measure the agreement between the raw and converted MMSE-2 scores according to cognitive subgroups. The ICC values were interpreted as poor (ICC < 0.40), fair (ICC = 0.40–0.59), good (ICC = 0.60–0.74), or excellent (ICC = 0.75–1.0) [26]. Moreover, the agreement between the raw and converted MMSE-2 scores was examined with a Bland-Altman plot [27], with limits of agreement (LOAs) of ±1.96 standard deviations from the mean difference. The 95% LOA between the raw MMSE-2 score and the converted score expressed the degree of error proportional to the mean of the measurement units; if the measurements tended to agree, the differences were close to zero. These plots show the difference between each pair of measurements on the y-axis against the mean of each pair of measurements on the x-axis. Bias was assessed using a linear regression analysis. Finally, we evaluated the accuracy of the conversion algorithm as the percentage of converted scores within 0, ±1–2, and ±3 points of error, where an error was defined as the difference between the raw MMSE-2 score and the converted score.
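The equating step can be sketched as follows: a simplified equipercentile mapping that omits the log-linear smoothing the authors applied before equating. Names are illustrative, not the authors' code:

```python
import numpy as np

def equipercentile_map(src_scores, tgt_scores, max_score=30):
    """Map each possible source score (0..max_score) to the target score
    with the closest mid-percentile rank (no log-linear smoothing)."""
    src = np.sort(np.asarray(src_scores))
    tgt = np.sort(np.asarray(tgt_scores))

    def prank(sorted_arr, s):
        # mid-percentile rank: fraction below + half the fraction equal
        below = np.searchsorted(sorted_arr, s, side="left")
        equal = np.searchsorted(sorted_arr, s, side="right") - below
        return (below + 0.5 * equal) / len(sorted_arr)

    table = {}
    for s in range(max_score + 1):
        p = prank(src, s)
        # pick the target score whose percentile rank is nearest
        table[s] = min(range(max_score + 1),
                       key=lambda t: abs(prank(tgt, t) - p))
    return table
```

In the actual analysis, a log-linear model is first fitted to each score distribution to smooth sampling irregularities before the percentile ranks are matched, and the resulting equivalents are rounded to integers within 0–30.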
In addition, the accuracy of previous methods [8-10] for converting MoCA to MMSE scores was evaluated and compared with that of converting MoCA to MMSE-2 scores, because it is possible to switch from the MMSE to the MMSE-2 without any change in scores [1]. Analyses were performed using SPSS for Windows, version 20.0 (IBM Corporation, Armonk, NY, USA) and R 4.0.2 software with its appropriate packages (The R Foundation for Statistical Computing, Vienna, Austria). Statistical tests were two-tailed, and α was set at <0.05.

Results

The demographic and clinical data of the study population are summarized in Table 1. The mean age of all participants was 70.52 (10.74) years, and the mean duration of education was 7.78 (4.86) years. Approximately 57% of participants were women. There were statistically significant differences in age and education, but not sex, among the cognitive subgroups. In addition, the mean raw MMSE-2 scores were higher than the mean MoCA scores. As expected, the mean raw MMSE-2 and MoCA scores each differed significantly among the cognitive subgroups.
Table 1

Demographic data and MoCA and MMSE-2 scores and their correlations.

                             Total            Dementia         MCI              CU               P*
                             (n = 303)        (n = 95)         (n = 172)        (n = 36)
Demographics
    Age, years               70.52 (10.74)    75.25 (9.31)     69.08 (10.40)    64.94 (11.49)    <0.001a
    Sex, female              173 (57.1%)      52 (54.7%)       100 (58.1%)      21 (58.3%)       0.854b
    Education, years         7.78 (4.86)      6.74 (4.98)      7.90 (4.76)      9.96 (4.31)      0.002a
Cognitive screening test
    MMSE-2                   21.37 (7.02)     14.08 (6.39)     23.96 (4.16)     28.19 (2.35)     <0.001a
                             [23 (17–27)]     [14 (9–19)]      [25 (22.25–27)]  [29 (27.25–30)]
    MoCA                     14.91 (7.41)     7.62 (5.49)      17.10 (5.25)     23.67 (3.41)     <0.001a
                             [16 (10–21)]     [7 (3–11)]       [18 (14–21)]     [24 (22–25)]
MoCA–MMSE-2 correlation
    Pearson’s r              0.916**          0.864**          0.849**          0.681**
    CCC                      0.652            0.535            0.402            0.286
                             [0.607–0.693]    [0.441–0.618]    [0.568–0.688]    [0.144–0.416]

Note. Values are presented as means (standard deviations) or numbers (%). The MMSE-2 and MoCA scores are additionally presented as medians (interquartile ranges) in square brackets. The correlation coefficients are presented as coefficient values [95% confidence intervals]. P values were compared among cognitive subgroups.

aKruskal-Wallis test.

bChi-squared test.

**P < 0.01.

Abbreviations: MCI, mild cognitive impairment; CU, cognitively unimpaired; MMSE, Mini-Mental State Examination; MoCA, Montreal Cognitive Assessment; CCC, concordance correlation coefficient.


Agreement of the two scales (MoCA and MMSE-2)

For all participants, Pearson’s correlation coefficient (r) for the raw MMSE-2 score and MoCA score was 0.916 (P<0.01) (Table 1), which indicated strong agreement [28]. Among the cognitive subgroups, there were no statistical differences in Pearson’s correlation coefficient for the MMSE-2 and MoCA between subjects with dementia and MCI (z = 0.435), between those with dementia and CU individuals (z = 2.355), and between those with MCI and CU individuals (z = 2.215) (all P>0.05 after Bonferroni correction). In addition, the CCC between the raw MMSE-2 score and the MoCA score was 0.652 (95% confidence interval [CI], 0.607–0.693) for all participants, indicating poor agreement [24]. Table 1 shows each CCC according to cognitive subgroup.

Conversion table

The plot of equipercentile equivalents of MoCA and MMSE-2 scores is presented in Fig 1. For example, an MoCA score of 12 is equivalent to an MMSE-2 score of 20, with both of these scores falling at approximately the same percentile rank. Table 2 shows MoCA scores and their respective equivalents on the MMSE-2.
Fig 1

Equipercentile equating of the MoCA and MMSE-2.

Equipercentile equating of the MoCA (black color) and MMSE-2 (gray color) corresponding to test scores and percentiles allows conversion of MoCA scores to MMSE-2 scores.

Table 2

Equipercentile equating table for potential conversion of MoCA scores to MMSE-2 scores.

MoCA score    Equivalent MMSE-2 score
0             0
1             2
2             4
3             7
4             10
5             13
6             14
7             16
8             17
9             18
10            19
11            20
12            20
13            21
14            22
15            22
16            23
17            24
18            25
19            26
20            26
21            27
22            27
23            28
24            28
25            29
26            29
27            30
28            30
29            30
30            30

Abbreviations: MoCA, Montreal Cognitive Assessment; MMSE, Mini-Mental State Examination.
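For practical use, Table 2 can be applied as a direct lookup. The sketch below transcribes the published equivalents; the function name is ours, not the authors' code:

```python
# MoCA score -> equivalent MMSE-2 score, transcribed from Table 2.
MOCA_TO_MMSE2 = {
    0: 0, 1: 2, 2: 4, 3: 7, 4: 10, 5: 13, 6: 14, 7: 16, 8: 17, 9: 18,
    10: 19, 11: 20, 12: 20, 13: 21, 14: 22, 15: 22, 16: 23, 17: 24,
    18: 25, 19: 26, 20: 26, 21: 27, 22: 27, 23: 28, 24: 28, 25: 29,
    26: 29, 27: 30, 28: 30, 29: 30, 30: 30,
}

def convert_moca_to_mmse2(moca: int) -> int:
    """Convert a raw MoCA score (0-30) to its equivalent MMSE-2 score."""
    if not 0 <= moca <= 30:
        raise ValueError("MoCA scores range from 0 to 30")
    return MOCA_TO_MMSE2[moca]
```

For example, convert_moca_to_mmse2(12) returns 20, matching the worked example in the text.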


Reliability of raw and converted MMSE-2 scores

For all participants, the analysis of the reliability of the raw and converted MMSE-2 scores showed excellent intrarater reliability with an ICC(2,1) = 0.961 (Table 3). Moreover, according to the cognitive subgroups, the ICC(2,1) between these scores also had excellent reliability.
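The ICC(2,1) reported here (two-way random effects, absolute agreement, single measurement) can be computed from two-way ANOVA mean squares. A sketch under that assumption, not the authors' code:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1) for an (n_subjects x k_raters) score matrix:
    two-way random effects, absolute agreement, single measurement."""
    Y = np.asarray(scores, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-"rater" means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                # between-subjects mean square
    msc = ss_cols / (k - 1)                # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))     # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Here the two "raters" would be each participant's raw and converted MMSE-2 scores.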
Table 3

Analysis of reliability between the converted and raw MMSE-2 scores according to cognitive subgroups.

              Total            Dementia         MCI              CU
              (n = 303)        (n = 95)         (n = 172)        (n = 36)
ICC(2,1)      0.961            0.924            0.919            0.849
              (0.952–0.969)    (0.886–0.946)    (0.890–0.940)    (0.703–0.923)

Note. Values are presented as intraclass correlation coefficients (95% confidence intervals).

Abbreviations: MCI, mild cognitive impairment; CU, cognitively unimpaired; MMSE, Mini-Mental State Examination.

A Bland-Altman plot showed that the mean difference between the raw and converted MMSE-2 scores was 0.277, with an upper and a lower LOA of 5.776 and −5.221, respectively. The scores of 286 participants (94.4%) fell within or on the 95% LOA, with a reasonably even distribution across the mean scores. However, there was an indication of bias according to the regression coefficient (y = −0.073×x+1.8, P<0.05) (Fig 2).
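The Bland-Altman quantities above (mean difference and 95% limits of agreement) can be computed as follows; a minimal sketch with an illustrative function name:

```python
import numpy as np

def bland_altman_limits(raw, converted):
    """Mean difference (bias) and 95% limits of agreement between paired scores."""
    d = np.asarray(raw, dtype=float) - np.asarray(converted, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)   # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```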
Fig 2

A Bland-Altman plot of the difference in the raw and converted MMSE-2 scores.

The solid line indicates the reference (no mean difference), the middle-dotted line is the mean difference, and the upper- and lower-dotted lines are the limits of agreement representing ±1.96 standard deviations (SDs) from the mean difference within which 95% of the differences between the two scores are expected to fall. The gray solid lines are the fitted regression lines, indicating a significant linear trend.


Accuracy of the converted MMSE-2

This study showed that 70.0% of the converted MMSE-2 scores were within ±1–2 points of the raw MMSE-2 scores, and 84.5% were within ±3 points (Table 4).
Table 4

Accuracy of the converted MMSE-2 scores against the clinically administered raw MMSE-2 scores.

Methods*                    Difference between the raw and converted MMSE-2 scores
                            (0)         (±1–2)      (±3)
Current study               16.8%       70.0%       84.5%
Roalf et al [8]             19.1%       68.6%       83.5%
van Steenoven et al [9]     15.2%       59.1%       72.6%
Trzepacz et al [10]         17.8%       66.3%       82.5%

Note: *The accuracies of all four methods were evaluated using the participants of this study.

Abbreviations: MMSE, Mini-Mental State Examination.
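The accuracy measure in Table 4 (percentage of converted scores within ±k points of the raw score) can be computed as, for example (names are ours):

```python
import numpy as np

def accuracy_within(raw, converted, k):
    """Percentage of converted scores within +/- k points of the raw scores."""
    diffs = np.abs(np.asarray(raw, dtype=float) - np.asarray(converted, dtype=float))
    return 100.0 * float(np.mean(diffs <= k))
```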


Discussion

The purpose of this study was to facilitate the transition from the MoCA to the MMSE-2. This study reports a simple and reliable method for converting the MoCA to the MMSE-2. Using a conversion table, MoCA scores can be expressed in MMSE-2 terms. This conversion algorithm provides a straightforward way of comparing the MoCA with the MMSE-2, allowing for continuity of cognitive tracking in clinical settings and comparability of data between longitudinal studies. To the best of our knowledge, this is the first study to examine conversion from the MoCA to the MMSE-2. To construct a reliable algorithm to convert scores between the MoCA and MMSE-2, we initially evaluated the relationship between these two scales. It is often assumed that, because the MoCA and MMSE-2 measure the same general construct of cognition, they are easily interchangeable. However, regarding agreement properties, Pearson's correlation coefficient for the MoCA and MMSE-2 indicated strong agreement in this study, whereas the CCC indicated poor agreement, presumably because the latter also evaluates the degree of variation and of location or scale shift [24]. Regarding psychometric properties, the MoCA and MMSE-2 emphasize different aspects of cognition: the MoCA examines more cognitive domains than the MMSE-2, including executive functions and attention [7]. Based on these agreement and psychometric properties, this study evaluated the relationship between the MoCA and MMSE-2 to construct a conversion table using an equipercentile equating method rather than simple linear analyses. In addition, this study chose a conversion table for its reliability and its intuitive, convenient usage in clinical practice. Equipercentile equating methods have been widely used in previous studies [8-13] because they enable direct and easy comparison of scores.
Previous studies did not evaluate the entire range of MoCA scores (0–30 points) [11, 12]; this study, in contrast, introduced a conversion table with MMSE-2 scores corresponding to all possible MoCA scores. Reliability and agreement are other considerations for this conversion algorithm. This study showed an excellent ICC between the raw and converted MMSE-2 scores. Furthermore, this excellent ICC was maintained among the cognitive subgroups as well as among all subjects. Our conversion rule compared very favorably with those described in previous studies, demonstrating improved agreement. However, the Bland-Altman plot showed systematic bias in the agreement between the raw and converted scores of the two tests: there was a negative correlation between the mean and the difference. Therefore, considering these characteristics, converted scores should be interpreted carefully when using this algorithm. To facilitate the transition from the MoCA to the MMSE-2, performance on the MoCA must be translated into the MMSE-2 within an acceptable margin of error. Thus, this study evaluated the accuracy of the algorithm used to convert MMSE-2 scores. The Bland-Altman analyses showed that the upper and lower LOAs were within approximately ±5 points of difference between the raw and converted MMSE-2 scores; however, this study chose ±0–3 points of difference. In previous studies, the reliable change score, an individual's change in test performance that reflects real changes in underlying cognitive abilities rather than chance trends, was reported to be 3 points [23] or 4 points [1] for the MMSE-2 and 2–4 points [29], 3–4 points [30], or 3.3 points [31] for the MMSE. With reference to these reliable change scores, this study evaluated the accuracy of the converted MMSE-2 score as the difference from the raw score within 0, ±1–2, and ±3 points.
The conversion algorithm used in this study had an accuracy of 16.8% with no difference (perfectly matched scores) and an accuracy of 84.5% within 3 points of difference between the raw and converted MMSE-2 scores. Using the previously suggested methods [8-10], we obtained accuracies of 72.6–83.5% within 3 points of difference between the raw and converted MMSE-2 scores. This suggests that the conversion algorithm used in this study is comparable with the previously suggested methods. This study had some limitations. First, it was subject to all of the limitations inherent to a retrospective study design, including some degree of possible selection bias; a prospective study is therefore warranted to validate our results. Second, the data in this study were collected from patients in the Korean population, and the generalizability of the score mapping to other conditions such as Parkinson's disease or stroke remains to be tested; the relationship between MoCA and MMSE-2 scores may differ across other demographic or clinical conditions. Participants with subjective cognitive decline might have been recruited as CU subjects, and no dementia or MCI subtypes were specifically examined. Third, most participants with higher MoCA scores (26 or above) had near-maximum MMSE-2 scores, and most participants with lower MMSE-2 scores (5 or below) had near-minimum MoCA scores, because the MoCA is generally more difficult than the MMSE-2. Conversions for participants with very low or very high MoCA scores require further validation because the conversion scales were derived from a narrow distribution of MoCA or MMSE-2 scores at these extremes. Fourth, the order of administration of the MMSE-2 and MoCA was not randomized, so the effect of learning one test prior to taking the other could not be minimized.
Furthermore, because the tests were administered in a specific language, the generalizability of the score conversion to other languages needs to be explored further. Fifth, this study evaluated only basic information, including age, sex, and education level. However, other comorbidities and laboratory parameters can affect cognitive function [32]; a more detailed evaluation using these variables should be performed. In conclusion, this study validated an algorithm to convert MoCA scores to MMSE-2 scores to allow comparison of data from these two cognitive screening tests. The findings of this study should serve as a useful reference for clinicians to continue clinical care using the MMSE-2 in subjects who were previously administered the MoCA. This will greatly enhance the utility of existing research data and facilitate greater collaboration and shared analyses, leading to more robust research findings.
When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Antony Bayer Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. 
The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. Thank you for including your ethics statement: "The study protocol was reviewed and approved by the Institutional Review Board of Korea University Ansan Hospital (2021AS0066) and informed consent was not necessary because of the study’s retrospective design and the de-identified nature of the data." a) Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information. Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”). For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research. 3. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? 
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: I Don't Know Reviewer #2: Yes Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: No Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Simple methods to convert test scores from commonly administered cognitive screening instruments (CSIs) to approximate MMSE scores are of recognised clinical utility. There are various methods for doing this, including calculation of linear regression equations and, as in this paper, deriving a conversion table of equivalent scores from equipercentile equating with log-linear smoothing. A potential problem with the latter is that it includes all those MMSE items which are recognized to be easy and which are of little value in patient assessment.

Queries to address:

Introduction P3: “Previous studies have attempted to develop MoCA to MMSE conversion algorithms or equivalent tables”. The authors might also include here Int J Geriatr Psychiatry 2017;32:351-2.

Methods P4-5: 303 study participants who attended a clinic. Was this all who attended, or were some patients unable/unwilling to complete both MoCA and MMSE-2, or other aspects of the diagnostic assessment?

P4: “36 subjects with being cognitively unimpaired (CU).”. Were these healthy controls, in which case this is an experimental study, or patients referred to the clinic with subjective memory complaints, in which case you have a pragmatic study? This sentence in the Discussion (P16) “Participants with subjective cognitive decline might have been recruited as subjects with CU.” suggests the former. This point has important implications for the potential generalisability or transferability of the equipercentile equating table.

P5: “All participants underwent the MoCA and MMSE-2 examinations on the same day”. Were these performed in a counterbalanced order to avoid bias? Answered in the Discussion at P16.

Results P13: Figure 2 legend implies 3 solid and 2 dotted lines, whereas Figure 2 as presented on P29 has 2 solid and 3 dotted lines. Interpretation not clear.
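For readers unfamiliar with the method the reviewer references, equipercentile equating maps each MoCA score to the MMSE-2 score that occupies the same percentile rank in the sample. A minimal Python sketch (function name hypothetical; it omits the log-linear presmoothing step that the paper applies before equating):

```python
import numpy as np

def equipercentile_convert(x_scores, y_scores, x_max=30, y_max=30):
    """Map each possible score on test X to the score on test Y that sits
    at the same percentile rank (midpoint convention for ties).

    Simplified sketch: real applications first smooth both score
    distributions with a log-linear model, as the paper does.
    """
    x = np.asarray(x_scores)
    y = np.asarray(y_scores)
    # cumulative percentile rank of every possible Y score
    y_rank = np.array([(np.sum(y < s) + 0.5 * np.sum(y == s)) / y.size
                       for s in range(y_max + 1)])
    table = {}
    for s in range(x_max + 1):
        # percentile rank of score s in the X distribution
        p = (np.sum(x < s) + 0.5 * np.sum(x == s)) / x.size
        table[s] = int(np.argmin(np.abs(y_rank - p)))  # nearest-rank Y score
    return table
```

In practice the presmoothing matters most at the scale extremes, where few participants score, which is exactly the instability the reviewers discuss below.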
Reviewer #2: PONE-D-21-11110 Converting from Montreal Cognitive Assessment to Mini-Mental State Examination-2

The authors add to a growing literature of cross-walks between cognitive screening tests in aging/dementia. The authors implement previously validated methods to compare the MoCA and MMSE-2 in a moderately sized aging sample that included cognitively normal as well as MCI and AD patients. The authors report that the MoCA and MMSE-2 can reliably be converted and, in general, show that the conversion is similar to previous work converting MoCA to MMSE. The authors should address a few concerns to improve the current state of the manuscript:

General: The manuscript would benefit from thorough copy-editing for grammar.

Methods/Results:

- The authors should state whether the order of tests was consistent. Was the MMSE-2 always given before the MoCA?

- In the description of the MoCA and MMSE-2 the authors state the MoCA is ‘the most widely used screening test for cognitive dysfunction” and then state that the MMSE-2 is ‘the most commonly used test for the screening of cognitive impairment’: I find it rather difficult to discern the difference between these two claims. Can the authors provide more detail about how these differ?

- In my opinion there is no need to include the explanation of the meanings of commonly used stats in terms of relative strength (e.g. Pearson r).

- The authors need to provide the robust range of MMSE-2 and MoCA scores in each of the samples. That is, what are the minimum and maximum scores? This is relevant for understanding equipercentile equating and the LOA. It is likely that most of the lower end of the scales (particularly for the MMSE-2) is not observed, thus making scores at the lower range less stable because more interpolation is needed in the equating approach. There is not much to overcome this, but it should be discussed as a likely contributor to less concordant scores at the lower end of the scales.
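The limits of agreement (LOA) mentioned above come from a Bland-Altman analysis of the paired scores (the paper's Figure 2). A minimal sketch, assuming the conventional mean ± 1.96 SD definition (function name hypothetical):

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between paired scores.

    Returns (mean difference, lower LOA, upper LOA); these correspond to
    the middle, lower, and upper dotted lines of a Bland-Altman plot.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    md = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return md, md - 1.96 * sd, md + 1.96 * sd
```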
- What is the benefit or the point in comparing the Pearson r-values between MMSE-2 and MoCA to other studies? Other important factors such as sample size, age, education, etc. cannot be accounted for and could explain these differences. This does not seem to be thoroughly discussed, and as such makes me question why this analysis was performed and what additional information is gained from it.

- The authors should provide more clarity on Table 4 as the methods are not clear. I believe that the authors are using previously published conversion tables on their sample. It should be made clear that other cross-walks are being applied to their sample and that the authors did not recalculate data from the original sources.

Reviewer #3: In this manuscript, the authors present a conversion table from the MoCA to the MMSE-2 using the equipercentile equating method with log-linear smoothing. The manuscript is well-written and the statistical analyses have been properly performed. I have some minor comments:

- The sample size is relatively small, especially regarding patients with dementia (n=94). The authors present the conversion table for all MoCA scores (including MoCA = 0). It would be important to mention how many patients had a MoCA of 0-5, 5-10, etc. in the sample (which seems low based on Figure 2) and to discuss whether the conversion is valid at these low scores given the low sample size.

- A mean MoCA of 17 seems low for patients with MCI. I would expect that some patients with MoCA < 15 in this group probably have a functional impairment suggestive of dementia.

- The authors could provide more detail regarding the cause of cognitive impairment of patients included in the sample. For instance, it is known that the conversion between two cognitive tests is slightly different in patients with vascular dementia (dysexecutive profile) compared with patients with Alzheimer's disease (amnestic profile).
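For context on the agreement statistics debated in this review round, a short sketch contrasting Pearson's r with Lin's concordance correlation coefficient (CCC), the two overall-agreement measures the manuscript reports (function name hypothetical; CCC per Lin's 1989 definition). The point the authors later make is visible here: r measures linear association only, while CCC also penalizes shifts in mean or spread between the two scales.

```python
import numpy as np

def pearson_and_ccc(x, y):
    """Pearson's r (linear association) and Lin's concordance correlation
    coefficient (agreement with the identity line) for paired scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    sx2, sy2 = x.var(), y.var()  # population variances, per Lin (1989)
    ccc = 2 * r * np.sqrt(sx2 * sy2) / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)
    return r, ccc
```

For example, scores that are perfectly correlated but offset by a constant give r = 1 while CCC is well below 1, which is why a high r alone does not justify a simple linear conversion.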
- In Table 1, I would refer to the MMSE-2 and MoCA as cognitive screening tests rather than neuropsychological tests.

Altogether, my recommendation is to accept the manuscript pending minor revisions.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Andrew J Larner
Reviewer #2: Yes: David R. Roalf
Reviewer #3: Yes: David Bergeron

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

3 Jun 2021

Reviewer #1: Simple methods to convert test scores from commonly administered cognitive screening instruments (CSIs) to approximate MMSE scores are of recognised clinical utility.
There are various methods for doing this, including calculation of linear regression equations and, as in this paper, deriving a conversion table of equivalent scores from equipercentile equating with log-linear smoothing. A potential problem with the latter is that it includes all those MMSE items which are recognized to be easy and which are of little value in patient assessment.

Queries to address:

Introduction P3: “Previous studies have attempted to develop MoCA to MMSE conversion algorithms or equivalent tables”. The authors might also include here Int J Geriatr Psychiatry 2017;32:351-2.

→ Thank you for this comment. As the reviewer suggests, we added the reference: Int J Geriatr Psychiatry 2017;32:351-2. 14. Larner AJ. Converting cognitive screening instrument test scores to MMSE scores: regression equations. Int J Geriatr Psychiatry 2017;32(3):351-2. doi: 10.1002/gps.4622 PMID: 28170137

Methods P4-5: 303 study participants who attended a clinic. Was this all who attended, or were some patients unable/unwilling to complete both MoCA and MMSE-2, or other aspects of the diagnostic assessment?

→ We are grateful for this valuable comment. The basic screening tests for all participants visiting the memory clinic are the MoCA and the MMSE-2. Both tests were administered to all subjects (n=303) who visited the memory clinic during the study period.

P4: “36 subjects with being cognitively unimpaired (CU).”. Were these healthy controls, in which case this is an experimental study, or patients referred to the clinic with subjective memory complaints, in which case you have a pragmatic study? This sentence in the Discussion (P16) “Participants with subjective cognitive decline might have been recruited as subjects with CU.” suggests the former. This point has important implications for the potential generalisability or transferability of the equipercentile equating table.

→ We are grateful for this insightful comment.
A variety of participants/patients are referred to the memory clinic. Patients with MCI or dementia are commonly referred, but the clinic also sees subjects who want only a cognitive screening test, subjects with subjective memory complaints, subjects at high risk of cognitive decline who want a pre-evaluation, etc. Many of them currently have normal cognitive function, but we fully agree that they are clinically heterogeneous. However, in this study, since there were not many participants with normal cognitive function, these subjects were combined as the CU (cognitively unimpaired) group. We fully agree with the weakness pointed out, and have described these limitations in the discussion section.

P5: “All participants underwent the MoCA and MMSE-2 examinations on the same day”. Were these performed in a counterbalanced order to avoid bias? Answered in the Discussion at P16.

→ We are grateful for this kind comment. Unfortunately, this is a retrospective analysis and we did not control the order in which the two tests were performed. As the reviewer notes, the lack of a counterbalanced test order is a potential source of bias. Thus, we mentioned it in the limitations part of the discussion section at P17.

Results P13: Figure 2 legend implies 3 solid and 2 dotted lines, whereas Figure 2 as presented on P29 has 2 solid and 3 dotted lines. Interpretation not clear.

→ We are grateful for this kind comment. We are very sorry that there was a mistake in the Figure 2 legend. We revised and re-wrote the Figure 2 legend as the reviewer suggested, as below: “The solid line indicates the reference (no mean difference), the middle-dotted line is the mean difference, and the upper- and lower-dotted lines are the limits of agreement…”

Reviewer #2: PONE-D-21-11110 Converting from Montreal Cognitive Assessment to Mini-Mental State Examination-2

The authors add to a growing literature of cross-walks between cognitive screening tests in aging/dementia.
The authors implement previously validated methods to compare the MoCA and MMSE-2 in a moderately sized aging sample that included cognitively normal as well as MCI and AD patients. The authors report that the MoCA and MMSE-2 can reliably be converted and, in general, show that the conversion is similar to previous work converting MoCA to MMSE. The authors should address a few concerns to improve the current state of the manuscript:

General: The manuscript would benefit from thorough copy-editing for grammar.

→ We are grateful for this valuable comment. We are very sorry for the poor English grammar. The revised manuscript was proofread by a native speaker. In the revised manuscript, this proofreading is described in the acknowledgements: “The authors appreciate Essay Review (https://essayreview.co.kr) for the English language editing.”

Methods/Results:

- The authors should state whether the order of tests was consistent. Was the MMSE-2 always given before the MoCA?

→ We are grateful for this insightful comment. As the reviewer notes, the order of the tests (MoCA and MMSE-2) is important. Unfortunately, this study is a retrospective analysis and the tests were not performed in a counterbalanced order to avoid bias. In our memory clinic, we administered the MMSE-2 followed by the MoCA. We revised and re-wrote the methods section [P5, L10-11] as below: “All participants underwent the MMSE-2 followed by the MoCA on the same day.”

- In the description of the MoCA and MMSE-2 the authors state the MoCA is ‘the most widely used screening test for cognitive dysfunction” and then state that the MMSE-2 is ‘the most commonly used test for the screening of cognitive impairment’: I find it rather difficult to discern the difference between these two claims. Can the authors provide more detail about how these differ?

→ We are grateful for this insightful comment. In the initially submitted manuscript, we described the two tests only briefly because they are well known.
However, as the reviewer suggested, we revised and rewrote the methods section with a more detailed description of the two tests, as below [P5, L18-21 ~ P6, L2-8]: “The MoCA is the most widely used screening test and was developed as a brief screening test for MCI and the early stages of dementia. This test evaluates visuospatial (5 points), naming (3 points), attention (6 points), language (3 points), abstract (2 points), memory (5 points), and orientation (6 points) abilities... The MMSE-2 is the most commonly used test to screen for cognitive impairment and has been the most extensively used in clinical and research settings due to its practicality… The MMSE-2, similar to the MMSE, examines the following six cognitive domains: orientation in time (5 points), orientation in place (5 points), memory registration (3 points), memory recall (3 points), attention and calculation (5 points), and language and other functions (8 points)…”

- In my opinion there is no need to include the explanation of the meanings of commonly used stats in terms of relative strength (e.g. Pearson r).

→ We are grateful for this kind comment. In the initially submitted manuscript, we described Pearson’s r with its interpretation because the concordance correlation coefficient (CCC) and the intra-class correlation were also presented with their interpretations. However, as the reviewer notes, well-known statistics do not need a detailed description. Thus, we deleted the explanation of the relative strength of Pearson’s r, as below [P6, L17-18]: “The overall agreement between the MoCA and MMSE-2 was assessed using Pearson’s correlation coefficient (r). For comparisons among correlations for cognitive subgroups…”

- The authors need to provide the robust range of MMSE-2 and MoCA scores in each of the samples. That is, what are the minimum and maximum scores? This is relevant for understanding equipercentile equating and the LOA.
It is likely that most of the lower end of the scales (particularly for the MMSE-2) is not observed, thus making scores at the lower range less stable because more interpolation is needed in the equating approach. There is not much to overcome this, but it should be discussed as a likely contributor to less concordant scores at the lower end of the scales.

→ We are grateful for this insightful comment. With reference to the reviewer’s comment, we added the mean and IQR of the two tests’ scores (the minimum and maximum scores of both tests were 0 and 30 in this study). Equating the lower or upper ends of the scales is difficult because the level of difficulty and the cognitive domains of the two scales are different. To address this difficulty, we evaluated the reliability (ICC) between the converted score and the raw score of the MMSE-2 according to cognitive subgroups (dementia, MCI, and CU). Fortunately, in this study, the reliabilities of all subgroups can be interpreted as excellent (>0.75 in Table 3). The dementia subgroup has scores at the relatively lower end of the scale and the CU subgroup at the upper end. Indirectly, we believe, the excellent reliabilities of both subgroups mitigate the difficulty pointed out by the reviewer. We added these limitations of reliabilities for cognitive subgroups in the discussion section (P16-17) and Table 3: “Third, most participants with higher MoCA scores (26 or higher MoCA scores) had near the maximum MMSE-2 scores, and most participants with lower MMSE-2 scores (5 or lower MMSE-2 scores) had near the minimum MoCA scores because the MoCA is generally more difficult than the MMSE-2. Participants with lower or higher MoCA scores require further validation because the conversion scales utilized a narrow distribution of MoCA or MMSE-2 scores.”

- What is the benefit or the point in comparing the Pearson r-values between MMSE-2 and MoCA to other studies?
Other important factors such as sample size, age, education, etc. cannot be accounted for and could explain these differences. This does not seem to be thoroughly discussed, and as such makes me question why this analysis was performed and what additional information is gained from it.

→ We are grateful for this valuable comment. As the reviewer notes, we agree that there is no benefit in comparing the Pearson’s r-value to other studies. Thus, we deleted that comparison sentence. For the overall agreement, since Pearson’s r-value is the most widely used measure and has been evaluated in many previous studies, we also evaluated it for intuitive comparison. To convert from the MoCA to the MMSE-2, we believed that a reasonable degree of overall agreement should be guaranteed. So we tested it with Pearson’s r-value and the concordance correlation coefficient (CCC), which both evaluate agreement, but in somewhat different respects. In this study, the Pearson’s r-value was high, but the CCC was relatively low. The possible cause of this incongruity of the two values (psychometric properties, etc.) was mentioned in the discussion section, and we described that the equipercentile method would be more useful than a simple linear conversion because of this incongruity.

- The authors should provide more clarity on Table 4 as the methods are not clear. I believe that the authors are using previously published conversion tables on their sample. It should be made clear that other cross-walks are being applied to their sample and that the authors did not recalculate data from the original sources.

→ We are grateful for this kind comment. In Table 4, all accuracies were evaluated with this study population to compare the results from four methods (current study + three previous methods).
We revised and added a description of the method in the table legend, as below [P14, L5-6]: “*These differences (accuracies) in four methods were evaluated with this study participants.”

Reviewer #3: In this manuscript, the authors present a conversion table from the MoCA to the MMSE-2 using the equipercentile equating method with log-linear smoothing. The manuscript is well-written and the statistical analyses have been properly performed. I have some minor comments:

- The sample size is relatively small, especially regarding patients with dementia (n=94). The authors present the conversion table for all MoCA scores (including MoCA = 0). It would be important to mention how many patients had a MoCA of 0-5, 5-10, etc. in the sample (which seems low based on Figure 2) and to discuss whether the conversion is valid at these low scores given the low sample size.

→ We are grateful for this kind comment. In this study, the number of participants with MoCA scores of 0-4 is 42, with scores of 5-10 is 46, and with scores of 26-30 is 10. We absolutely agree that the sample size is relatively small. Fortunately, the ICC between the converted MMSE-2 and the raw MMSE-2 showed excellent reliability not only in the dementia group but also in the CU group (Table 3). As the reviewer suggested, we revised and added this limitation in the discussion section [P17, L4-6] as below: “Participants with lower or higher MoCA scores require further validation because the conversion scales utilized a narrow distribution of MoCA or MMSE-2 scores.”

- A mean MoCA of 17 seems low for patients with MCI. I would expect that some patients with MoCA < 15 in this group probably have a functional impairment suggestive of dementia.

→ We are grateful for this valuable comment. We agree that the MoCA score of the MCI group is relatively low. We believe that this lower score is due to the demographic characteristics of the study participants: Korean people, lower education, and older age.
In the validation study of the K-MoCA (Ref 22. Kang et al. Korean J Clinical Psychology 2009), the mean MoCA score is 18.39 (4.42) in all MCI subjects and 18.02 (5.62) in those 65-79 years old (in this study, the mean age of the MCI group is about 70 years). Thus, we noted that this limitation of selection bias should be addressed by re-testing with other demographics [P16, L20-21]: “...thus, the relationship between MoCA and MMSE-2 scores may differ between other demographic or clinical conditions.”

- The authors could provide more detail regarding the cause of cognitive impairment of patients included in the sample. For instance, it is known that the conversion between two cognitive tests is slightly different in patients with vascular dementia (dysexecutive profile) compared with patients with Alzheimer's disease (amnestic profile).

→ We are grateful for this insightful comment. We absolutely agree that participants with vascular dementia and participants with Alzheimer’s disease have different cognitive profiles and different score distributions on the two scales. Because of the small sample size, the study protocol and IRB permission, and the relatively short MMSE-2 experience (in South Korea, the MMSE-2 was officially published in April 2020), this study could not evaluate the conversion table for the various types of dementia (Alzheimer’s disease, vascular dementia, Parkinson’s disease dementia, etc.). Further study should evaluate various types of cognitive dysfunction. We noted this limitation in the discussion section [P16, L22-23 ~ P17, L1] as below: “No dementia subtypes or MCI subtypes were specifically examined.”

- In Table 1, I would refer to the MMSE-2 and MoCA as cognitive screening tests rather than neuropsychological tests.

→ We are grateful for this insightful comment. As the reviewer suggested, we revised the sub-title (neuropsychological tests -> cognitive screening tests) in Table 1.

Altogether, my recommendation is to accept the manuscript pending minor revisions.
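The Table 4 comparison described in the responses above applies each cross-walk to this sample and scores the percent of converted MMSE-2 values falling within 3 points of the raw score. A minimal sketch of that accuracy criterion (function name and conversion-table values below are hypothetical, not the paper's published table):

```python
import numpy as np

def within_k_accuracy(moca_scores, mmse2_raw, table, k=3):
    """Percent of participants whose table-converted MMSE-2 score falls
    within k points of the raw MMSE-2 score.

    `table` is a dict mapping each MoCA score (0-30) to a converted
    MMSE-2 score, e.g. one produced by equipercentile equating.
    """
    conv = np.array([table[s] for s in moca_scores])
    raw = np.asarray(mmse2_raw)
    return 100.0 * np.mean(np.abs(conv - raw) <= k)
```

Under this criterion the paper reports 84.5% of participants within 3 points; the same function applied with the three previously published MoCA-to-MMSE tables yields the comparison in Table 4.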
Submitted filename: 20210603.Response Reviewers_PLOS ONE.docx

21 Jun 2021

Converting from the Montreal Cognitive Assessment to the Mini-Mental State Examination-2 PONE-D-21-11110R1

Dear Dr. PARK,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Antony Bayer
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)
Reviewer #2: All initial reviews addressed. I have no additional comments. This article should be accepted for publication.
Reviewer #3: My comments have been appropriately addressed. I have no further comment. I feel that the manuscript is ready for publication.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: AJ Larner
Reviewer #2: Yes: David Roalf
Reviewer #3: Yes: David Bergeron

28 Jun 2021

PONE-D-21-11110R1 Converting from the Montreal Cognitive Assessment to the Mini-Mental State Examination-2

Dear Dr. PARK:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact.
If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Professor Antony Bayer
Academic Editor
PLOS ONE
References (25 in total)

1.  Copyright and open access at the bedside.

Authors:  John C Newman; Robin Feldman
Journal:  N Engl J Med       Date:  2011-12-29       Impact factor: 91.245

2.  Conversion between mini-mental state examination, montreal cognitive assessment, and dementia rating scale-2 scores in Parkinson's disease.

Authors:  Inger van Steenoven; Dag Aarsland; Howard Hurtig; Alice Chen-Plotkin; John E Duda; Jacqueline Rick; Lama M Chahine; Nabila Dahodwala; John Q Trojanowski; David R Roalf; Paul J Moberg; Daniel Weintraub
Journal:  Mov Disord       Date:  2014-11-07       Impact factor: 10.338

3.  EFNS-ENS Guidelines on the diagnosis and management of disorders associated with dementia.

Authors:  S Sorbi; J Hort; T Erkinjuntti; T Fladby; G Gainotti; H Gurvit; B Nacmias; F Pasquier; B O Popescu; I Rektorova; D Religa; R Rusina; M Rossor; R Schmidt; E Stefanova; J D Warren; P Scheltens
Journal:  Eur J Neurol       Date:  2012-09       Impact factor: 6.089

4.  A 24-week, double-blind, placebo-controlled trial of donepezil in patients with Alzheimer's disease. Donepezil Study Group.

Authors:  S L Rogers; M R Farlow; R S Doody; R Mohs; L T Friedhoff
Journal:  Neurology       Date:  1998-01       Impact factor: 9.910

5.  Test-retest reliable coefficients and 5-year change scores for the MMSE and 3MS.

Authors:  Tom N Tombaugh
Journal:  Arch Clin Neuropsychol       Date:  2005-06       Impact factor: 2.813

6.  Multicenter Validation of an MMSE-MoCA Conversion Table.

Authors:  David Bergeron; Kelsey Flynn; Louis Verret; Stéphane Poulin; Rémi W Bouchard; Christian Bocti; Tamàs Fülöp; Guy Lacombe; Serge Gauthier; Ziad Nasreddine; Robert Jr Laforce
Journal:  J Am Geriatr Soc       Date:  2017-02-15       Impact factor: 5.562

7.  Mild cognitive impairment--beyond controversies, towards a consensus: report of the International Working Group on Mild Cognitive Impairment. (Review)

Authors:  B Winblad; K Palmer; M Kivipelto; V Jelic; L Fratiglioni; L-O Wahlund; A Nordberg; L Bäckman; M Albert; O Almkvist; H Arai; H Basun; K Blennow; M de Leon; C DeCarli; T Erkinjuntti; E Giacobini; C Graff; J Hardy; C Jack; A Jorm; K Ritchie; C van Duijn; P Visser; R C Petersen
Journal:  J Intern Med       Date:  2004-09       Impact factor: 8.989

8.  Comparative accuracies of two common screening instruments for classification of Alzheimer's disease, mild cognitive impairment, and healthy aging.

Authors:  David R Roalf; Paul J Moberg; Sharon X Xie; David A Wolk; Stephen T Moelter; Steven E Arnold
Journal:  Alzheimers Dement       Date:  2012-12-21       Impact factor: 21.566

9.  Conversion of MoCA to MMSE scores.

Authors:  Jed A Falkowski; Linda S Hynan; Kamini Krishnan; Kirstine Carter; Laura Lacritz; Myron Weiner; Heidi Rossetti; C Munro Cullum
Journal:  Alzheimers Dement (Amst)       Date:  2015-03-29

10.  Relationship between the Montreal Cognitive Assessment and Mini-mental State Examination for assessment of mild cognitive impairment in older adults.

Authors:  Paula T Trzepacz; Helen Hochstetler; Shufang Wang; Brett Walker; Andrew J Saykin
Journal:  BMC Geriatr       Date:  2015-09-07       Impact factor: 3.921

(Remaining 15 references not shown.)
Cited by (2 in total)

1.  Validation of Four Methods for Converting Scores on the Montreal Cognitive Assessment to Scores on the Mini-Mental State Examination-2.

Authors:  Sung Hoon Kang; Moon Ho Park
Journal:  Dement Neurocogn Disord       Date:  2021-09-27

2.  Practice effect and test-retest reliability of the Mini-Mental State Examination-2 in people with dementia.

Authors:  Ya-Chen Lee; Shu-Chun Lee; En-Chi Chiu
Journal:  BMC Geriatr       Date:  2022-01-21       Impact factor: 3.921

