Literature DB >> 33911772

Pervasive Misclassification and Misconception of Study Designs in Asian Dermatology Journals Listed in Science Citation Index-Expanded.

Sungjun Choi1, Hyun-Sun Yoon2.   

Abstract

BACKGROUND: Misclassification of study designs can hinder readers from assessing the strengths and weaknesses of a study and from evaluating its applicability in the real-world setting. Nevertheless, authors commonly neglect to classify their study designs.
OBJECTIVE: We aimed to evaluate the accuracy of the classification of study designs and examine the common errors.
METHODS: This descriptive study analyzed four Asian dermatology journals listed in the Science Citation Index Expanded from January 2018 to December 2018. We investigated discrepancies between the author-reported and actual study designs. The Design Algorithm for Medical Literature on Intervention (DAMI) was used to determine the actual study design.
RESULTS: Of the 177 papers analyzed, only 72 articles (40.7%) revealed their study design, and among them, 23 articles (31.9%) showed discrepancies between the author-reported and the actual study designs. Case-control studies were the designs most commonly misclassified by authors.
CONCLUSION: There were considerable differences between the author-reported and the actual study designs in Asian dermatology journals. Proper classification of study designs by authors is essential to strengthen evidence-based medicine.
Copyright © 2020 The Korean Dermatological Association and The Korean Society for Investigative Dermatology.

Keywords:  Classification; Evidence-based medicine; Observational study; Research design

Year:  2020        PMID: 33911772      PMCID: PMC7992578          DOI: 10.5021/ad.2020.32.5.383

Source DB:  PubMed          Journal:  Ann Dermatol        ISSN: 1013-9087            Impact factor:   1.444


INTRODUCTION

Physicians practice evidence-based medicine (EBM) by building knowledge from clinical experience, academic exchange, and reading journals. Assessing the reliability of published scientific findings, recognizing the outcomes of clinical studies, and deciding how to apply the results to daily practice are the key steps in EBM [1]. Journal articles can be of high quality only when the research follows an appropriate study design. Researchers inform readers of the purpose of the study and the methods used to derive the results [2,3]. Readers evaluate the applicability of a study in the real-world setting based on the study method, level of evidence, and the pros and cons of the study design [2,3]. Each study design has its own unique strengths and weaknesses [2,4]. If the study design reported by the authors differs from the actual study design, there is a high chance that the advantages of that design were not properly exploited in the research. Additionally, since the study design is directly related to the level of evidence, reporting a low-level study design as a higher-level one can mislead readers [5]. For instance, if a cross-sectional study is misreported as a case-control study, readers may believe that the exposures preceded the outcomes [1,5]. According to a prior study of American and English dermatology journals, the discrepancy between the study design claimed by the authors and the actual study design was significant [6]. Almost half of the author-reported prospective cohort, retrospective cohort, or case-control studies were miscategorized. Moreover, most authors tended to classify their study design at a higher level of evidence than the actual design. Although authors commonly neglect to classify the study design, clarifying the design and conducting the research accordingly is fundamental to contributing to the basis of EBM [4,6]. 
Here, among the Asian dermatology journals listed in the Science Citation Index Expanded (SCIE), we evaluated the accuracy of the classification of study designs and examined the common errors.

MATERIALS AND METHODS

This study covered the original articles analyzing the relation between exposures and outcomes in Asian dermatology journals listed in SCIE from January 2018 to December 2018. Annals of Dermatology (AD), Indian Journal of Dermatology (IJD), Indian Journal of Dermatology Venereology and Leprology (IJDVL), and Journal of Dermatology (JD) were included. Reviews, systematic reviews, laboratory studies, and diagnostic studies were excluded. As determined by the Institutional Review Board of the SMG-SNU Boramae Medical Center, institutional ethics approval was not required for this study because no individual-level data were used. The study design categorized by the authors and the actual study design were analyzed by two authors (S.C., H.S.Y.). We applied the same research methods as the previous study [6]. We determined the author-reported study design by referring to the full contents of each study. We used the Design Algorithm for Medical Literature on Intervention (DAMI) to categorize the actual study design; it was originally developed to precisely categorize study designs for conducting systematic reviews (Fig. 1) [7]. The accuracy and consistency of DAMI in classifying study designs in medical articles have been demonstrated [6,8].
Fig. 1

Modified version of the Design Algorithm for Medical Literature on Intervention (DAMI) for classifying dermatology journal articles. Although the original DAMI includes 13 study designs, only the nine designs most commonly used in the dermatology literature are shown. Exposures here mean risk factors, protective factors, or interventions. Data from the article of Seo et al. J Clin Epidemiol 2016;70:200-205 [7].
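As an illustration only, the kind of branching a DAMI-style algorithm performs can be sketched in a few lines; this simplified decision flow and its yes/no question names are our own paraphrase of the design distinctions above, not the published algorithm of Seo et al.:

```python
def classify_design(*, randomized, investigator_assigned_exposure,
                    comparison_group, pre_post_same_group,
                    sampled_by_outcome, exposure_precedes_outcome,
                    follow_up_direction):
    """Hypothetical DAMI-style sketch: map yes/no study features to a design label."""
    if investigator_assigned_exposure:
        # Experimental studies: the investigator allocates the exposure.
        return "RCT" if randomized else "Non-RCT"
    # Observational studies follow.
    if not comparison_group:
        # A single group measured twice is before-after; otherwise noncomparative
        # (single-arm study, case series, patient group study).
        return "Before-after" if pre_post_same_group else "Noncomparative"
    if sampled_by_outcome:
        # Groups are defined by the outcome, looking back at exposures.
        return "Case-control"
    if not exposure_precedes_outcome:
        # Exposure and outcome ascertained at one point; temporality unclear.
        return "Cross-sectional"
    # Groups defined by exposure, followed to outcomes.
    return ("Prospective cohort" if follow_up_direction == "forward"
            else "Retrospective cohort")

example = classify_design(randomized=False, investigator_assigned_exposure=False,
                          comparison_group=True, pre_post_same_group=False,
                          sampled_by_outcome=False, exposure_precedes_outcome=False,
                          follow_up_direction="none")
print(example)  # prints "Cross-sectional"
```

The ordering of the questions mirrors the point made in the discussion: experimental allocation is checked first, then the presence of a comparison group, and only then the temporal relation between exposure and outcome.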

Only descriptive statistics were used; no inferential statistics were analyzed in this article.

RESULTS

Overview of analyzed articles

Of the 208 articles in the four dermatology journals, 17 laboratory studies, eight diagnostic studies, and six systematic reviews were excluded. The remaining 177 papers were analyzed: AD (n=36), IJD (n=44), IJDVL (n=24), and JD (n=73). The majority were observational studies (n=145). More than half of the studies (n=105) did not mention the study design; only 72 (40.7%) did. A total of 13 research designs are defined in the DAMI classification, but only eight were used in the articles. Re-classification by DAMI showed that the noncomparative study (n=65) was the most common design, followed by the cross-sectional study (n=63), the randomized controlled trial (RCT) (n=15), and the retrospective cohort study (n=15) (Table 1).
Table 1

Revised classification of study designs by DAMI (n=177)

Study design           Revised classification by DAMI
Noncomparative         65 (36.7)
Before-after           1 (0.6)
Cross-sectional        63 (35.6)
Case-control           11 (6.2)
Retrospective cohort   15 (8.5)
Prospective cohort     3 (1.7)
Non-RCT                4 (2.3)
RCT                    15 (8.5)

Values are presented as number (%). DAMI: Design Algorithm for Medical Literature on Intervention, RCT: randomized controlled trial.

Analysis of articles in which the authors clarified the study design

In the analysis of the 72 studies in which the authors explicitly mentioned the study design, the most common author-reported design was the case-control study (n=23), followed by the cross-sectional study (n=19) and the RCT (n=14). After re-categorization by DAMI, the cross-sectional study (n=28) was the most common, followed by the RCT (n=14) and the noncomparative study (n=13) (Fig. 2).
Fig. 2

Comparison between the original classification by the author(s) and the revised classification by the Design Algorithm for Medical Literature on Intervention (DAMI). Only articles in which the author(s) clearly stated the study design are included. RCT: randomized controlled trial.

Authors most frequently misclassified their research as case-control studies (Table 2). Of the 23 author-reported case-control studies, 14 were misclassified (AD: 1/5, JD: 2/3, IJDVL: 2/5, and IJD: 9/10), and most of them were actually cross-sectional studies (n=11). Retrospective cohort studies were next: four of nine showed inconsistency (AD: 1/1 and JD: 3/8), and the misclassified studies were mostly cross-sectional (n=3). Of the 19 author-reported cross-sectional studies, six were miscategorized (AD: 2/2, JD: 1/4, IJD: 1/8, and IJDVL: 2/5), most of them noncomparative studies (n=5). Meanwhile, all 14 RCTs were categorized correctly.
Table 2

Evaluation of the accuracy of author-reported study designs in the articles in which the authors revealed the study design

Author-reported study design   Reclassification by DAMI   AD (n=14)   IJD (n=19)   IJDVL (n=15)   JD (n=24)   Total (n=72)
Noncomparative                 Concordant                 3           0            0              2           5
                               Discordant                 0           0            0              0           0
Before-after                   Concordant                 1           0            0              0           1
                               Discordant                 0           0            0              0           0
Cross-sectional                Concordant                 1           7            3              3           14
                               Discordant                 1           1            2              1           5*
Case-control                   Concordant                 4           1            3              1           9
                               Discordant                 1           9            2              2           14†
Retrospective cohort           Concordant                 0           0            0              5           5
                               Discordant                 1           0            0              3           4‡
Prospective cohort             Concordant                 0           0            1              0           1
                               Discordant                 0           0            0              0           0
RCT                            Concordant                 2           4            1              7           14
                               Discordant                 0           0            0              0           0

Values are presented as number only. DAMI: Design Algorithm for Medical Literature on Intervention, AD: Annals of Dermatology, IJD: Indian Journal of Dermatology, IJDVL: Indian Journal of Dermatology Venereology and Leprology, JD: Journal of Dermatology, RCT: randomized controlled trial. *All except one article in AD were reclassified as noncomparative study. †All except two articles in JD were reclassified as cross-sectional study. ‡All except one article in JD were reclassified as cross-sectional study.

Among the 72 articles, 21/52 (40.4%) observational studies and only 2/20 (10.0%) experimental studies were misclassified, revealing that observational studies accounted for most of the misclassified articles.

Analysis of articles in which the authors did not clarify the study design

Using the number of articles of each study design as classified by DAMI as the denominator, we obtained the proportion of studies whose design was not clarified in the article by the authors (Table 3). Noncomparative studies (n=52, 80.0%) most often lacked a stated design, followed by cross-sectional studies (n=35, 55.6%) and retrospective cohort studies (n=8, 53.3%). The design was left unstated more often in observational studies (n=93, 64.1%) than in experimental studies (n=12, 37.5%).
Table 3

Analysis of articles which did not report their study design in the text


DISCUSSION

Among the original articles published in the four Asian dermatology journals listed in SCIE, 60% did not reveal their study design in the text, and 32% of the articles that did clarify a study design showed discrepancies between the author-reported and the actual study designs. Among the misclassified study designs, observational studies accounted for a large proportion, and case-control studies were the designs most commonly misclassified by authors. These results are consistent with the prior study [6]. Most author-reported case-control studies and one-third of the author-reported retrospective cohort studies were identified as cross-sectional studies. This reflects that researchers often confuse cross-sectional, case-control, and cohort studies. The main difference between cross-sectional studies and the other two designs is the temporal relation between exposures and outcomes. When the data for exposures and outcomes are ascertained at the same point in time and the temporal relation between the two is unclear, the study should be classified as cross-sectional, regardless of the characteristics of the comparison groups [2]. Case-control and cohort studies show a clear temporal relation between exposures and outcomes [2]: exposures precede outcomes. Case-control studies start with an outcome (such as disease) and retrospectively compare exposures that might have caused it. Thinking backwards is not intuitive for clinicians, and hence case-control studies are widely misunderstood [9]. In contrast, cohort studies first identify groups with or without an exposure of interest and then follow the two groups over time to determine the outcomes; exposure is identified at baseline, before the occurrence of outcomes [2,9]. In this study, approximately 20% of author-reported cross-sectional studies were actually noncomparative studies. Cross-sectional studies should have two or more groups, like case-control or cohort studies [2]. 
When studies follow a single group without comparators over time, they should be classified as noncomparative studies (single-arm study, case series, or patient group study) rather than cross-sectional studies [7]. We found that 60% of studies did not state their study design in the text. This is partly because research has become much more sophisticated, and studies use various approaches to produce results, making it hard to define their study design [6]. Noncomparative studies, in particular, were commonly left unclassified; since they have no particular clinical research design or high level of evidence, the authors likely did not find it necessary to reveal the design in the text. Although our study analyzed only a limited number of dermatology journals, and more than half of the articles did not clarify their study design in the text, there were considerable differences between the author-reported and the actual study designs in Asian dermatology journals. Misclassification of study designs can hinder readers from assessing the strengths and weaknesses of a study and from applying its results properly. Proper classification and appropriate reporting of study designs by authors is the first step toward strengthening EBM and helping readers understand study results correctly.
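The distinctions drawn in this discussion for observational studies (presence of a comparison group, how the groups are defined, and the temporal relation between exposure and outcome) can be condensed into a short rule sketch. The feature names and the example vignette below are invented purely for illustration; the rules paraphrase the text, not a validated instrument:

```python
def observational_design(study):
    """Apply the discussion's rules, in order, to a dict of study features."""
    if not study["comparison_group"]:
        # Single group without comparators: single-arm study or case series.
        return "noncomparative"
    if study["groups_defined_by"] == "outcome":
        # Starts from the outcome and looks back at exposures.
        return "case-control"
    if study["exposure_precedes_outcome"]:
        # Groups defined by exposure at baseline, followed to outcomes.
        return "cohort"
    # Exposure and outcome ascertained at one point; temporality unclear.
    return "cross-sectional"

# A hypothetical same-time-point survey comparing two exposure groups:
survey = {"comparison_group": True, "groups_defined_by": "exposure",
          "exposure_precedes_outcome": False}
print(observational_design(survey))  # prints "cross-sectional"
```

Note that the rules are checked in a fixed order, so a study lacking a comparison group is labeled noncomparative even if exposures and outcomes were measured at one point, matching the paper's point that such studies should not be called cross-sectional.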
REFERENCES (9 in total)

1.  An overview of clinical research: the lay of the land.

Authors:  David A Grimes; Kenneth F Schulz
Journal:  Lancet       Date:  2002-01-05       Impact factor: 79.321

2.  Users' guide to the surgical literature. How to assess a randomized controlled trial in surgery.

Authors:  Achilleas Thoma; Forough Farrokhyar; Mohit Bhandari; Ved Tandan
Journal:  Can J Surg       Date:  2004-06       Impact factor: 2.089

3.  A newly developed tool for classifying study designs in systematic reviews of interventions and exposures showed substantial reliability and validity.

Authors:  Hyun-Ju Seo; Soo Young Kim; Yoon Jae Lee; Bo-Hyoung Jang; Ji-Eun Park; Seung-Soo Sheen; Seo Kyung Hahn
Journal:  J Clin Epidemiol       Date:  2015-09-25       Impact factor: 6.437

4.  Study designs in dermatology: Practical applications of study designs and their statistics in dermatology.

Authors:  Jonathan I Silverberg
Journal:  J Am Acad Dermatol       Date:  2015-11       Impact factor: 11.527

5.  Study designs in dermatology: A review for the clinical dermatologist.

Authors:  Jonathan I Silverberg
Journal:  J Am Acad Dermatol       Date:  2015-11       Impact factor: 11.527

6.  Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration.

Authors:  Jan P Vandenbroucke; Erik von Elm; Douglas G Altman; Peter C Gøtzsche; Cynthia D Mulrow; Stuart J Pocock; Charles Poole; James J Schlesselman; Matthias Egger
Journal:  Epidemiology       Date:  2007-11       Impact factor: 4.822

7.  Progress in evidence-based medicine: a quarter century on.

Authors:  Benjamin Djulbegovic; Gordon H Guyatt
Journal:  Lancet       Date:  2017-02-17       Impact factor: 79.321

8.  Misclassification of study designs in the dermatology literature.

Authors:  Jungyoon Ohn; Sang Jun Eun; Do-Yeop Kim; Hyun-Sun Park; Soyun Cho; Hyun-Sun Yoon
Journal:  J Am Acad Dermatol       Date:  2017-11-08       Impact factor: 11.527

9.  Qualitative Assessment and Reporting Quality of Intracranial Vessel Wall MR Imaging Studies: A Systematic Review.

Authors:  J W Song; S C Guiry; H Shou; S Wang; W R Witschey; S R Messé; S E Kasner; L A Loevner
Journal:  AJNR Am J Neuroradiol       Date:  2019-11-14       Impact factor: 3.825

