Literature DB >> 35245326

The incremental value of the contribution of a biostatistician to the reporting quality in health research - A retrospective, single center, observational cohort study.

Ulrike Held1, Klaus Steigmiller1, Michael Hediger2, Victoria L Cammann3, Alexandru Garaiman4, Sascha Halvachizadeh5, Sylvain Losdat6, Erin Ashley West7, Martina Gosteli8, Kelly A Reeve1, Stefanie von Felten1, Eva Furrer1.   

Abstract

BACKGROUND: The reporting quality in medical research has recently been critically discussed. While reporting guidelines aim to maximize the value of funded research, and initiatives such as the EQUATOR network have been introduced to advance high-quality reporting, the uptake of these guidelines by researchers could be improved. The aim of this study was to assess the contribution of a biostatistician to the reporting and methodological quality of health research, and to identify methodological knowledge gaps.
METHODS: In a retrospective, single center, observational cohort study, two groups of publications were compared. The group of exposed publications had an academic biostatistician on the author list, whereas the group of non-exposed publications did not include any biostatistician from the evaluated group. Rating of reporting quality was done in a blinded fashion and in duplicate. The primary outcome was a sum score based on six dimensions, ranging between 0 (worst) and 11 (best). The study protocol was reviewed and approved as a registered report.
RESULTS: There were 131 publications in the exposed group published between 2017 and 2018. Of these, 95 were RCTs, observational studies, or prediction / prognostic studies. Corresponding matches in the group of non-exposed publications were identified in a reproducible manner. Comparison of overall reporting quality revealed a 1.60-unit higher score (95% CI from 0.92 to 2.28, p < 0.0001) for exposed publications. A subgroup analysis showed higher reporting quality for exposed publications across all three study types.
CONCLUSION: Our study is the first to report an association of higher reporting quality and methodological strength in health research publications with having a biostatistician on the author list. The higher reporting quality persisted through subgroups of study types and dimensions. Methodological knowledge gaps were identified for prediction / prognostic studies, and for the reporting of statistical methods in general and of missing values specifically.

Entities:  

Mesh:

Year:  2022        PMID: 35245326      PMCID: PMC8896706          DOI: 10.1371/journal.pone.0264819

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Despite measures to increase the reporting quality in the field of health research, for example by introducing reporting guidelines and by including such guidelines in many publishers' recommendations for authors, quality standards are still often not met. Recent evaluations of the literature showed that for observational studies, the corresponding STROBE guideline was not used by nearly 18% of authors because they had not heard of the guideline before; an additional 19% of authors had heard of it but still did not use it [1]. Journals obviously play an important role, and a systematic evaluation showed that journal endorsement of the STROBE guidelines stands at only around 50% [2]. For the reporting of randomized trials, Dechartres et al. [3] systematically evaluated the reporting of more than 20,000 trials included in Cochrane reviews. They concluded that poor reporting has decreased over time, but that lower impact factor journals in particular show room for improvement. The reporting quality of clinical prediction models has recently been evaluated systematically in the context of research on Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [4]. The authors concluded that almost all published models for predicting mortality were poorly reported, and that the corresponding Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD [5]) guideline was largely ignored.

In Switzerland, the government paid 22.9 billion Swiss francs for research and development in 2019, representing more than 3% of the gross domestic product. Publications in the field of "clinical medicine" represent 25% of all publications [6], and given these large amounts of resources, the value from research and publications should be maximized.

The objectives of the current study were, first, to assess the contribution of a biostatistician as co-author to the quality of reporting and methodological strength in health research publications; second, to identify dimensions of reporting quality and study types with methodological knowledge gaps; and third, to promote awareness of the importance of good reporting among clinical researchers and biostatisticians.

Materials and methods

Study design

The study is a retrospective, single-center observational cohort study, conducted at the University of Zurich (UZH) and its University Hospital (USZ).

Selection of exposed and non-exposed publications

In this study, two groups of publications were compared. The group of "exposed" publications was defined by having one or more of a set of 13 academic biostatisticians from the Epidemiology, Biostatistics and Prevention Institute and the Institute of Mathematics, both located at the University of Zurich, as co-authors. This group is referred to as the biostatisticians in the following. The exposed publications were published between 2017 and 2018 and were retrieved in a PubMed search on Dec 9, 2019, with the search string specified in S1 Appendix. Methodological publications as well as non-English language publications were excluded. To define the group of "non-exposed" publications for comparison, all medical research publications found in PubMed between 2017 and 2018 with the affiliation UZH or USZ or any of the affiliated university hospitals for the first and / or second author were extracted on Dec 16, 2019. Details on the search string can again be found in S1 Appendix. The non-exposed publications have none of the defined set of biostatisticians on the author list; it cannot be excluded, however, that a biostatistician from outside of the group was on the author list. The full list of affiliations considered can be found in S2 Appendix. Based on the full list, a list of affiliations relevant for this study was created, in which, for example, typographic errors were corrected. The large number of non-exposed publications resulting from the affiliation list was processed in a random but replicable order, aiming to remove potential chronological or other systematic ordering while adhering to high standards of reproducibility.
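As an illustration, a random but replicable ordering can be obtained in R with a fixed random seed; the input file name and the seed value below are hypothetical, not the authors' actual choices (their code is available via the OSF repository).

```r
# Minimal sketch: permute candidate non-exposed publications in a random but
# fully reproducible order. A fixed seed makes the permutation identical on
# every re-run, removing any chronological or other systematic ordering.
pmids <- readLines("nonexposed_pmids.txt")  # hypothetical input: one PMID per line
set.seed(20191216)                          # hypothetical seed; any fixed value works
screening_order <- sample(pmids)            # random permutation, replicable
```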

Categorization into study types

For each of the exposed publications, the study type was determined, and the subset of all RCTs, observational studies, and prediction / prognostic studies was evaluated further. Categorization into study types was performed by the set of biostatisticians: for most publications, the co-authoring biostatisticians themselves determined the study type. For some publications, the biostatistician co-author had left the department, and the study type was therefore categorized independently and in duplicate by two authors (UH, EF). After consensus on study type was reached, the record count for each study type and publication year was obtained. The three study types RCT, observational study, and prediction / prognostic study were the most frequent types; other types (e.g. systematic reviews) had been excluded a priori. Because the number of non-exposed publications was much larger than the number of exposed publications, the categorization of the non-exposed publications into RCTs, observational studies, and prediction / prognostic studies was performed in the random but replicable order until the numbers of non-exposed publications of these study types matched the corresponding numbers of exposed publications per year. Categorization was performed independently and in duplicate by the authors UH and MH (for papers published in 2017) and by EF and MH (for 2018). Any discrepancies were resolved by discussion and third-party arbitration (KS). The final set of publications was considered the non-exposed group of publications.

Selection of items from reporting guideline

For each study type, a set of six items measuring reporting quality was identified by group consensus among the set of biostatisticians. The quality criteria were based on the reporting guidelines CONSORT [7], STROBE, and TRIPOD, and they reflect characteristics of a publication that are especially important for judging the validity of the results and the methodological strength.

Specification of the reporting quality items

The reporting guideline items chosen for the ratings represented the following general dimensions for all three study types: 1. variable specification; 2. how the study size was arrived at; 3. missing data; 4. statistical methods; 5. precision of results; and 6. whether the corresponding reporting guideline was mentioned. The rating of publications regarding these six items was operationalized and piloted, such that the items could be used efficiently and robustly to rate each publication consistently. Each dimension had different possible answer categories, also depending on study type, resulting in a rating from 0 (lowest) to 2 (highest) for dimensions 1 to 5, plus an additional point for mentioning the corresponding reporting guideline. Details of the operationalization can be found in S3 Appendix. The total score therefore ranged from 0 (lowest) to 11 (highest).
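As a worked illustration of the score construction (the item values below are invented for the example, not data from the study):

```r
# Five dimensions scored 0 (lowest) to 2 (highest), plus one point if the
# corresponding reporting guideline is mentioned; total ranges from 0 to 11.
dims <- c(variables = 2, study_size = 1, missing_data = 0,
          statistical_methods = 2, precision = 1)
guideline_mentioned <- 1                    # 0 or 1
total <- sum(dims) + guideline_mentioned    # here: 7
stopifnot(total >= 0, total <= 11)
```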

Outcomes

The primary outcome of this study was the sum score of reporting quality and methodological strength in exposed and non-exposed publications, with respect to the six dimensions. The primary outcome was assessed in a blinded fashion and in duplicate by two independent raters, recruited from outside of the departments. Blinding to whether a publication belonged to the exposed or non-exposed group was guaranteed by removing author names, affiliation lists, journal name, corresponding author name, author contributions, date, acknowledgements, references, and DOI from every publication's PDF. Discrepancies in the ratings between the two raters were resolved by a third rating and discussion until consensus was reached. The secondary outcome of this study was the number of citations of the exposed and non-exposed publications at a fixed date (July 20, 2021).

Outcome rating and rater training

The outcome rating and its operationalization were developed by four authors (UH, KS, MH, EF). After the operationalization was finalized, the resulting questions for each study type were programmed for evaluation through an R Shiny app [8], which underwent quality review and a testing period. The questionnaire can be found in S3 Appendix. To find raters outside of the core study team and outside of the departments, PhD programs in health research across Switzerland, as well as groups of researchers interested in Research on Research, were contacted. Each candidate rater could choose a study type and received written instructions for the rating task. The candidate raters were instructed and trained by rating vignette publications for calibration. These vignette publications of all study types were similar to those under study, but they were published in 2019 and had been rated with scrutiny by the study authors, including detailed explanations. Only upon successful completion of the test ratings did the raters receive sets of 11–12 papers of the same study type for rating. The raters were obliged to rate the reporting quality based on the blinded PDFs alone and not to use additional information from the internet while doing so. Ratings were performed in a blinded fashion, meaning that the raters were unaware of the classification of publications as exposed or non-exposed, and of the authors of the publications. The ratings were performed in duplicate, and any discrepancies were resolved by a third independent rating. The raters were reimbursed with vouchers for every set of 11–12 publications. Additionally, raters were offered co-authorship after completion of 33 or more ratings. In total, 15 raters were recruited. The ratings were done between May and July 2021.

Sample size considerations

The sample size was justified a priori, based on the consideration that with 95 publications in the exposed group and 95 publications in the non-exposed group, at a significance level of 5% and with a power of 80%, an effect size of 0.41 (Cohen's d) could be detected using a two-sided, two-sample t-test with an equal variance assumption. This would be considered a medium effect size. The number of 95 publications corresponded to all publications of the included study types in the exposed group in the years 2017 and 2018.
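This calculation can be reproduced with base R's power.t.test, solving for the detectable effect size (a sketch; with the default sd of 1, delta equals Cohen's d):

```r
# With n = 95 per group, alpha = 0.05 (two-sided) and 80% power, solve for
# the detectable difference; with sd = 1 (the default), delta is Cohen's d.
power.t.test(n = 95, sig.level = 0.05, power = 0.80,
             type = "two.sample", alternative = "two.sided")
# delta comes out at about 0.41, the medium effect size quoted above
```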

Data management

Data collection in the context of this study had to cover two different aspects. First, the categorization of the exposed and non-exposed publications into the three study types was performed with the help of a specifically programmed R Shiny app, in which the title and abstract as well as the link to the full text were provided, such that the categorization could be performed independently and in duplicate, and such that any discrepancies could be detected and resolved by discussion. Second, the reporting quality rating was performed using another R Shiny app, implementing the operationalized quality dimensions. The electronic records of the two independent ratings and of the consensus rating were saved. The use of R Shiny apps in this research guaranteed highly reliable data entry.
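The authors' apps are not reproduced in this record; the following is a minimal sketch of what such a rating app could look like in R Shiny, with hypothetical field names and a CSV file as an assumed storage backend.

```r
library(shiny)

dims <- c("Variables", "Study size", "Missing data",
          "Statistical methods", "Precision of results")

ui <- fluidPage(
  titlePanel("Reporting quality rating (sketch)"),
  textInput("pub_id", "Publication ID"),
  # One 0/1/2 item per scored dimension, as described above
  lapply(seq_along(dims), function(i)
    radioButtons(paste0("dim", i), dims[i], choices = 0:2, inline = TRUE)),
  checkboxInput("guideline", "Reporting guideline mentioned (+1 point)"),
  actionButton("save", "Save rating")
)

server <- function(input, output, session) {
  observeEvent(input$save, {
    scores <- vapply(1:5, function(i) as.integer(input[[paste0("dim", i)]]), 0L)
    row <- data.frame(pub_id = input$pub_id, t(scores),
                      guideline = input$guideline,
                      total = sum(scores) + input$guideline,  # sum score, 0 to 11
                      time = Sys.time())
    f <- "ratings.csv"                       # hypothetical storage backend
    write.table(row, f, append = file.exists(f), sep = ",",
                col.names = !file.exists(f), row.names = FALSE)
  })
}

shinyApp(ui, server)
```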

Risk of bias

The study was designed to compensate for the following biases a priori. The risk of detection bias was addressed with blinded and duplicate outcome ratings by researchers not otherwise involved in the study. The risk of selection bias was addressed by considering all publications within two years for the exposed group and by reproducible random subsampling of PubMed publications from medical publications with UZH / USZ affiliation for the group of non-exposed publications. The results of the study could be subject to confounding by indication if more complex research projects were brought to the group of biostatisticians' attention whereas less complex projects were addressed by the clinicians without asking for help from an academic biostatistician. This bias was partially addressed by comparing the number of citations of exposed and non-exposed publications, under the hypothesis that equal citation numbers would indicate that less confounding by indication was present.

Statistical methods and programming

For assessing the level of agreement of reporting quality between the two independent ratings, squared-weighted Cohen's κ values were estimated and reported with 95% confidence intervals based on 1000 bootstrap samples. These analyses of agreement were reported overall and in subgroups of study type. Interpretation of the κ values was based on the categorization suggested by Altman [9].

Statistical methods for the primary outcome included visualization of the results with dot-plots (lollipop plots), in which the means of the outcome in the exposed and non-exposed publications are shown, overall (score 0 to 11) and in subgroups of study type (score 0 to 11) and reporting quality dimension (score 0 to 2). In addition, the estimated between-group differences, overall and in subgroups of study type, were reported with 95% confidence intervals (CI). The two-sided, two-sample t-test under the assumption of equal variances was used to test the hypothesis of no difference in reporting quality between exposed and non-exposed publications. The corresponding Cohen's d was calculated using the pooled standard deviation, assuming equal variances.

The number of citations was reported overall and in subgroups of study type, with medians and interquartile ranges, as the distribution was right-skewed. The non-parametric exact Wilcoxon-Mann-Whitney method was used to test the hypothesis of no difference in the number of citations between exposed and non-exposed publications and to estimate a confidence interval. The between-group difference in location was estimated and reported with a 95% CI, based on rank statistics. The software used for statistical analysis was extracted across all publications and reported as number and percentage of the total; in many publications more than one software package was used, and for that reason the percentages exceed 100%. All analyses, including the subgroup analyses described above, were pre-specified in the registered report study protocol [10]. The unit of analysis was the individual publication or the reporting quality dimension.

Statistical programming was performed with R 4.1.1 [11], in combination with dynamic reporting. It included downloading all potential non-exposed publications, random reordering, development of an R Shiny app for the categorization of the publications, development of an R Shiny app for the recording of reporting quality ratings, as well as programming of the methods for data analysis and visualization. Results of the study were reported according to the STROBE guidelines [12]. All anonymized data were uploaded to an OSF repository.
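A sketch of the agreement and primary analyses as described, in R; the data frames and column names are assumptions, the irr package is one possible implementation of weighted kappa, and the bootstrap CI type (percentile) is not stated in the text.

```r
library(irr)    # kappa2() for squared-weighted Cohen's kappa (assumed package)
library(boot)

# Agreement: squared-weighted kappa with a 95% CI from 1000 bootstrap samples.
# `ratings` is assumed to hold one row per publication, columns rater1, rater2.
kap <- function(d, i) kappa2(d[i, c("rater1", "rater2")], weight = "squared")$value
bt  <- boot(ratings, kap, R = 1000)
boot.ci(bt, type = "perc")

# Primary outcome: two-sided, two-sample t-test with equal variances.
# `dat` is assumed to hold one row per publication: sum score and group.
t.test(score ~ group, data = dat, var.equal = TRUE)

# Cohen's d from the pooled standard deviation (group level names assumed).
m  <- tapply(dat$score, dat$group, mean)
v  <- tapply(dat$score, dat$group, var)
n  <- table(dat$group)
sp <- sqrt(((n[1] - 1) * v[1] + (n[2] - 1) * v[2]) / (sum(n) - 2))
unname((m["exposed"] - m["non-exposed"]) / sp)
```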

Results

In total, there were 131 exposed papers published in 2017 and 2018. Of these, 95 publications were of the study types RCT, observational study, or prediction / prognostic study: six RCTs, 77 observational studies, and 12 prediction / prognostic studies. The literature search for non-exposed publications with first and / or second author with suitable affiliation and year resulted in a total of 3420 publications. Of these, 400 publications were categorized, in the random order, into one of the three study types RCT, observational study, or prediction / prognostic study, until the case numbers of the exposed papers could be frequency-matched, separately for 2017 and 2018. The corresponding flow chart is shown in Fig 1. All data were made available on OSF [13].
Fig 1

Flow chart.

Selection process for the exposed publications (left) and the non-exposed publications (right), including screening of affiliation lists of first and second author.

Ten of the exposed publications and two of the non-exposed publications mentioned the corresponding reporting guideline. In 48 of the exposed publications, and in 14 of the non-exposed publications, the programming language R was used for the statistical analysis. All descriptive results can be found in Table 1. There were no missing values in the data throughout.
Table 1

Descriptive statistics.

                         Exposed      Non-exposed
n                        95           95
Study type (%)
  Randomized Studies     6 (6.3)      6 (6.3)
  Observational Studies  77 (81.1)    77 (81.1)
  Prediction Studies     12 (12.6)    12 (12.6)
Software used (%)
  Excel                  2 (2.1)      3 (3.2)
  Graph Pad Prism        2 (2.1)      2 (2.1)
  Matlab                 0 (0.0)      6 (6.3)
  Python                 0 (0.0)      1 (1.1)
  R                      48 (50.5)    14 (14.7)
  SAS                    1 (1.1)      5 (5.3)
  SPSS                   38 (40.0)    40 (42.1)
  STATA                  15 (15.8)    12 (12.6)
  Other software         2 (2.1)      5 (5.3)
  Not mentioned          11 (11.6)    21 (22.1)
Year (%)
  2017                   54 (56.8)    54 (56.8)
  2018                   41 (43.2)    41 (43.2)
Guideline mentioned (%)  10 (10.5)    2 (2.1)

Agreement

The agreement between the two ratings of each publication was 0.52 (95% CI from 0.46 to 0.57) overall, indicating moderate agreement according to Altman [9]. Across the three study types, however, the agreement varied: it was 0.31 (95% CI from 0.05 to 0.52) for RCTs, 0.52 (95% CI from 0.46 to 0.59) for observational studies, and 0.53 (95% CI from 0.35 to 0.68) for prediction studies. To reach consensus for all ratings with discrepancies, a third blinded rater was involved.

Primary outcome

The estimated between-group difference for the primary outcome was 1.60 (95% CI from 0.92 to 2.28, p < 0.0001) in favor of the exposed publications. This result corresponds to a Cohen's d of 0.67 (95% CI from 0.38 to 0.97). In the pre-specified subgroups of study type, the estimated between-group difference was 3.33 (95% CI from -0.84 to 7.51), 1.39 (95% CI from 0.68 to 2.09), and 2.08 (95% CI from 0.12 to 4.04) for randomized, observational, and prediction / prognostic studies, respectively (Fig 2), showing higher reporting quality for exposed publications across all study types. In addition to the estimation of the between-group difference, the representation of each subgroup's mean values shows that reporting quality was generally higher for RCTs than for observational and prediction / prognostic studies.
Fig 2

Estimated between-group difference with 95%CI in the pre-specified subgroups of study type (left); raw means in exposed and non-exposed publications (right).

Unit of analysis is publication.


Dimension-specific score values

For each of the five scored reporting dimensions, the between-group difference was estimated; the corresponding range of values was between 0 (worst) and 2 (best). Again, the results are shown in a graphical representation (Fig 3). The dimension "Variables" had a smaller between-group difference and a higher reporting quality overall, whereas the dimensions "Missing data" and "Statistical methods" were generally reported with less detail. The mean reporting quality of the exposed publications was consistently higher than that of the non-exposed publications.
Fig 3

Estimated between-group difference per dimension with 95%CI (left); raw means in exposed and non-exposed publications (right).

Unit of analysis is dimension.


Number of citations

The number of citations, extracted on July 20, 2021, had a non-normal, right-skewed distribution, and for that reason the non-parametric exact Wilcoxon-Mann-Whitney method was used for the estimation of the between-group difference and its confidence interval. The estimate was -2 (95% CI from -4 to 0, p = 0.07), indicating weak evidence for higher citation numbers for the non-exposed publications. All descriptive statistics for the number of citations can be found in Table 2. The number of citations was relatively balanced for observational studies and prediction / prognostic studies, whereas for RCTs the number of citations was much larger in the non-exposed group than in the exposed group.
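A sketch of this comparison with base R (variable names assumed): stats::wilcox.test returns the Hodges-Lehmann estimate of the location shift with a confidence interval; with tied citation counts, exact inference may instead require a package such as coin.

```r
# Exact Wilcoxon-Mann-Whitney test with Hodges-Lehmann location-shift
# estimate and 95% CI; `dat` with columns citations and group is assumed.
wilcox.test(citations ~ group, data = dat, conf.int = TRUE, exact = TRUE)
```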
Table 2

Descriptive statistics for number of citations.

Estimates show the median [IQR].

                         Exposed            Non-exposed
Overall                  8.0 [3.0, 14.5]    9.0 [6.0, 18.5]
Randomized Studies       6.5 [2.0, 10.2]    35.5 [12.8, 417.5]
Observational Studies    8.0 [3.0, 15.0]    8.0 [6.0, 17.0]
Prediction Studies       8.5 [6.0, 12.5]    11.5 [8.0, 17.0]


Discussion

Summary

Our study demonstrates that having an academic biostatistician as co-author is associated with higher reporting quality and methodological strength in health research publications, overall and in subgroups of study types. In addition, the subgroup analyses demonstrated evidence for a higher reporting quality in the exposed publications for observational studies and for prediction / prognostic studies. The CONSORT statement seems to have been taken up well, because reporting quality was generally highest for RCTs, in both exposed and non-exposed publications.

Citation numbers were comparable for exposed and non-exposed publications among observational and prediction / prognostic studies, but the median number of citations for RCTs was higher in the non-exposed group of publications. The number of citations was evaluated to address the potential bias of confounding by indication. Our findings for observational and prediction / prognostic studies were reassuring, since balanced citation numbers showed no evidence for confounding by indication. The imbalance in citation numbers for RCTs is not necessarily concerning, since RCTs may in any case be considered a special case: they are heavily regulated, CONSORT is generally well enforced by journals, they are expensive studies usually focused on "important" research questions, and they are often multi-center studies and hence likely to include statisticians from other centers. RCTs are also more frequently published in high-ranked medical journals and may therefore have higher citation numbers automatically. Together with the fact that only a small number of RCTs was assessed in this study, we believe that there is a low risk of confounding by indication also in the case of RCTs.

Methodological knowledge gaps seemed to be most prominent in the areas of statistical methods and missing values. Nevertheless, the mean reporting quality was higher in the exposed publications throughout all subgroups. While it seems reasonable to assume that in the exposed papers the biostatisticians knew the methods well, the reporting of these methods was still suboptimal. The rating of reporting quality was performed in duplicate, and the agreement between first and second ratings was moderate to good overall; the difficulties in the rating task were themselves an indicator of suboptimal reporting. Our study is, to our knowledge, the first to develop and use a rating score that is usable across study types and that allows the comparison of reporting quality across them. The low citation numbers of the corresponding reporting guidelines in both the exposed and the non-exposed publications may indicate a lack of awareness among study authors.

Results in the light of the literature

Our findings are in line with research-on-research studies evaluating the reporting quality of RCTs, observational studies, and prediction / prognostic studies. While the development of reporting guidelines has been ongoing over the last 20 years, and the use of CONSORT is well established for RCTs, there remain areas in which good reporting is less frequently observed. The Cochrane collaboration has initiated the "Prognosis Methods Group" to encourage and facilitate the systematic review and meta-analysis of prognostic models in clinical research. Similar to the many systematic evaluations of research questions addressed with RCTs in Cochrane, the field of prediction / prognostic research will benefit, and reporting as well as methodological quality will likely increase. Currently, 12 prognostic model reviews are being undertaken in different fields of clinical research, of which one has been published [14]. Observational studies were the most frequent study type in the sample at hand, and reporting as well as methodological quality was only moderately higher in the exposed publications than in the non-exposed publications. Although the STROBE reporting guidelines were published in 2004 and have been taken up by many journals, study authors need continuing reminders, as a recent publication in JAMA Surgery by Brooke et al. showed [15].

Limitations and strengths

Our study has several limitations. The sum score to assess reporting quality and methodological strength was derived and used for the first time in the context of this study. We took multiple measures to arrive at a consistent and valid sum score, by using items from the corresponding reporting guidelines CONSORT, STROBE, and TRIPOD directly, and through thorough piloting and testing. The sum score addressed the reporting quality in dimensions in which biostatisticians play a relatively prominent role; methodological strength, however, could not be rated explicitly. In our view, the assessment of methodological quality is hampered if the reporting quality is low, making the assessment and improvement of reporting a first important milestone in improving the quality of health research in general. In the selection process, reporting items were chosen that would partially allow the assessment of methodological quality. For example, in the dimension "Study size", post-hoc power calculations for observational studies were assigned zero points, and in "Statistical methods" for prediction / prognostic studies, zero points were assigned if model performance measures such as discrimination or calibration were not reported. Questions across study types addressed "Missing values" and explicitly asked for the methods used to address them, if present. Risk of bias assessment would be facilitated if reporting quality were generally higher. Another limitation of our study was the low agreement between ratings for the RCTs, which turned out to be only fair. An explanation could be the small number of RCTs, which were rated by only two different raters. Both raters were internally consistent in their ratings, one being somewhat strict and the other relatively lenient. These discrepancies meant that many questions had to be rated by a third rater to reach a consensus.

Our study also has several strengths. First, we had written a clear study protocol, which received external review as a registered report; upon review of the protocol, the study design and operationalization could be revised and improved. Second, several measures were taken to compensate for different sources of bias, as our study was observational and retrospective. These included duplicate ratings of reporting quality, unbiased assessment of reporting quality through blinded PDFs, and highly reliable data entry through the specifically designed R Shiny apps.

Implications

Our study has several implications for future research. First of all, the study design can be applied repeatedly for future assessments of reporting quality, in our group or at other academic centers, over time. The continuing discussions about the assessment have already had an impact on the awareness of the topic among the people involved. In addition, the setup can be generalized to address other documents, e.g., systematic reviews (based on PRISMA [16]), statistical analysis plans [17], or research proposals (SPIRIT [18]). Academic biostatisticians should take more responsibility in the review of final manuscript versions and verify adherence to established reporting guidelines. Enough room should be left in publications for the reporting of statistical methods and for the reporting of results with measures of precision. More emphasis should be put on adequate methods to deal with missing values and on the reporting thereof.

Conclusions

Our study is the first to systematically assess the association of a biostatistician as co-author with reporting quality and methodological strength in health research. The higher reporting quality persisted through subgroups of study types and dimensions. The operationalization of the quality assessment allows direct comparison across study types and dimensions. Methodological knowledge gaps were identified for prediction / prognostic studies, and for the reporting of statistical methods and missing values.

Supporting information

S1 Appendix. Search string for the identification of potential control publications. (PDF)

S2 Appendix. Affiliation list for control publications. (PDF)

S3 Appendix. Questionnaires for the assessment of reporting quality. (PDF)
PONE-D-21-37986
The incremental value of the contribution of a biostatistician to the reporting quality in health research - a retrospective, single center, observational cohort study
PLOS ONE Dear Dr. Held, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Feb 24 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Dylan A Mordaunt, MB ChB, FRACP, FAIDH Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. 3. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section. 4. Thank you for stating the following in the Competing Interests section: "I have read the journal's policy and SL has the following competing interests: SL is employed by CTU Bern, University of Bern, which has a staff policy of not accepting honoraria or consultancy fees. However, CTU Bern is involved in design, conduct, or analysis of clinical studies funded by not-for-profit and for-profit organizations. In particular, pharmaceutical and medical device companies provide direct funding to some of these studies. For an up-to-date list of CTU Bern’s conflicts of interest: http://www.ctu.unibe.ch/research/declaration of interest/index eng.html. 
UH, KS, MH, VLC, AG, SH, EAW, MG, KAR, SVF, and EF declare to have no conflict of interest." Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to  PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests).  If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf. Additional Editor Comments: Thank you for your submission. With regards to the criteria for publication: 1. The study appears to present the results of original research. 2. Results reported don't appear to have been published elsewhere. 3. Experiments, statistics, and other analyses are performed to a high technical standard and are described in sufficient detail. Minor suggestions are made by the reviewers. 4. Conclusions are presented in an appropriate fashion and are supported by the data. Minor suggestions are made. 5. The article is presented in an intelligible fashion and is written in standard English. 6. The research meets all applicable standards for the ethics of experimentation and research integrity. 7. The article adheres to appropriate reporting guidelines and community standards for data availability. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Does the manuscript adhere to the experimental procedures and analyses described in the Registered Report Protocol? If the manuscript reports any deviations from the planned experimental procedures and analyses, those must be reasonable and adequately justified. Reviewer #1: Yes Reviewer #2: Yes ********** 2. If the manuscript reports exploratory analyses or experimental procedures not outlined in the original Registered Report Protocol, are these reasonable, justified and methodologically sound? A Registered Report may include valid exploratory analyses not previously outlined in the Registered Report Protocol, as long as they are described as such. Reviewer #1: Yes Reviewer #2: Yes ********** 3. Are the conclusions supported by the data and do they address the research question presented in the Registered Report Protocol? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. The conclusions must be drawn appropriately based on the research question(s) outlined in the Registered Report Protocol and on the data presented. Reviewer #1: Partly Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. 
participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: 1. Abstract, Background: “...the contribution of a biostatistician to the reporting and methodological quality..” This idea that the study can determine the biostatistician’s contribution or impact on reporting quality is probably incorrect. This is making a causal assumption where there is only an association. If papers by study teams that include a biostatistician differ on any outcome from papers without a biostatistician among the authors, this can occur for a variety of reasons. One possibility is that the biostatistician made suggestions or wrote sections of the study plan and manuscript, that led to differences in reporting or conduct of the study. There are many other possible explanations, involving differences on the study teams with biostat that led them to include a biostatistician on the team. In other words, background covariates may differ between groups, and these are not measured or controlled for. This causal phrasing is pervasive throughout the manuscript, and I believe it should be removed. For example the first sentence in Discussion (“Our study demonstrates that academic biostatisticians as co-authors have a positive impact on reporting quality and methodological strength in health research publications.. “) should probably be re-written. 2. The difference in number of references in RCTs between groups is not discussed, but seems important. Obviously, study teams without a biostatistician doing RCTs seem to be doing something right. (Note: I notice that the authors are not drawing the conclusion that biostatisticians make RCTs less influential. This would obviously be a wrong interpretation of this finding, since associations cannot be interpreted causally – as discussed in point #1 above). 3. Table 1: I’m curious why the authors have a category for software use, and why R is the only option. I would be interested to know the numbers using SAS as well. 4. Using different pairs of raters for each type of study (RCT, observational, prediction) makes it hard to compare study types on their average compliance with reporting guidelines, since we don’t know if the raters of different study types had the same strictness. So it is not clear if any difference between study types is due to study characteristics or rater characteristics. For example, did RCTs have higher mean scores than observational studies because one of the RCT raters was extremely lenient (rating everything as ‘present’)? 5. Supplemental Table S3 is very helpful for showing how the scores were calculated! Reviewer #2: See attached files with notes taken during the reads, review and analyses. In general, this work is important to the fields and actions related to meta-analyses, and improving data quality and exploration. 
The variations seen between different studies of the same topic(s), as a central data resource worker for these research group, offers me the advantage of seeing or predicting the better approaches, and seeing their performance in terms of findings and publication. Applying that experience to this review, I notice I have less understanding of data at the ends of the facilities and researchers engaged with this particular study, and so evaluating people and their performance in the way these authors are doing this, requires more information about the process in general for this research. The purpose of this kind of work should be to encourage others to follow through with the same, and take the same paths, for which reason a little more information on this process is requested for the reader, since the reader can be too self limiting to learn the authors' new method without being able to understand it completely enough. So, in the criticisms I request a few additions, but focus on how to allow future readers to make the best use of understanding the author's thinking and processes for engaging in their analyses. For all of my corporation related work, I do the same myself; yet, it is never required. It is perhaps a little too old fashioned now to, as a statistician, wish to keep your secrets on how you accomplished something to yourself. Your logic should be: for everything I "discover", I have to realize and wonder how many else are out there right now, discovering the same thing. So we, as discoverers, publish our discoveries with the hopes of being recognized as one of those first discoverers, if that is what we are searching for out of our work. The exploratory nature--liking to combine the different methods to produce an more effective, clearer reporting, better research method, is always a better way to explore, than to just reiterate something discovered, without advancing its uses further. And every time a major step is taken, my hope is these researchers are already working on the next generation of this discovery of theirs. In general, working internally, I have found its feels "safest" releasing your novel products when you are two or more levels of advancement, above what you are ready to release. I focus a lot on little things, in relation to overall use, and may be overreacting to the lack of certain items in small sections of the writing, or recommendations when it comes to additions for appendix, or the inclusion of more figures. What tends to be lacking in this field of analytics is spatial thinking and representation, so in many writings, I am frustrates when there are just lists of numbers and flat values, especially if I can't visually analyze them. But unfortunately, the publisher limits their writer's ability to provide their readers with possible multidimensional reviews of what is being discovered. Thus the final product becomes limiting, it seems, and thus people like me asking for more visualization, so that me and others who looks at numbers this way benefit just as much as the table viewer and "memorizer." That being said, this starts up a new line of reasoning, and recommends are more active way of integrating new ideas be taken to produce more helpful and thorough final results in medical research. The authors state, this is a part of the goal of their research--to make better use of the resources that in an ideal setting should already be there. 
In the US, the biostatistician is mostly hired as a part timer, on a contract basis--industry loves to hire many analysts, who repeat findings and regurgitate the same results again and again. For every several dozen analysts (or more) there are in the health care industry, there is one true tester and discoverer of all possible relationships that may be there. The more we integrate our numbers, the more we see that most others cannot see, and most likely may never see. In health care, there are many things biostatisticians can find, see and display, that analysts will never think of searching for. I just don't get how it is they don't know these things, that they should know. That certainly slows down our numbers related technological advancements, and the improvements of the health care system in general. So that defines the value of these researchers' work. One day, maybe health will stop being a reactionary, retrospective research field, and engage in true spatial prediction modeling. Just like biostatisticians can tell you more, spatial biostatisticians tell you even more. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Brian L Altonen [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Submitted filename: PONEreview_notes.docx Click here for additional data file. Submitted filename: PONE-D-21-37986_reviewer.pdf Click here for additional data file. 11 Feb 2022 Point to point reply for manuscript PONE-D-21-37986 The incremental value of the contribution of a biostatistician to the reporting quality in health research - a retrospective, single center, observational cohort study February 11, 2022 Reply to Reviewer # 1 1. Abstract, Background: “...the contribution of a biostatistician to the reporting and methodological quality..” This idea that the study can determine the biostatistician’s contribution or impact on reporting quality is probably incorrect. This is making a causal assumption where there is only an association. If papers by study teams that include a biostatistician differ on any outcome from papers without a biostatistician among the authors, this can occur for a variety of reasons. One possibility is that the biostatistician made suggestions or wrote sections of the study plan and manuscript, that led to differences in reporting or conduct of the study. There are many other possible explanations, involving differences on the study teams with biostat that led them to include a biostatistician on the team. In other words, background covariates may differ between groups, and these are not measured or controlled for. This causal phrasing is pervasive throughout the manuscript, and I believe it should be removed. For example the first sentence in Discussion (“Our study demonstrates that academic biostatisticians as co-authors have a positive impact on reporting quality and methodological strength in health research publications.. “) should probably be re-written. Reply: Thank you for the suggestion. We have reworded the manuscript throughout, to avoid a causal interpretation of the findings, and replaced it by terms pointing into the direction of an association. 2. The difference in number of references in RCTs between groups is not discussed, but seems important. Obviously, study teams without a biostatistician doing RCTs seem to be doing something right. (Note: I notice that the authors are not drawing the conclusion that biostatisticians make RCTs less influential. This would obviously be a wrong interpretation of this finding, since associations cannot be interpreted causally – as discussed in point #1 above). Reply: We agree with the Reviewer that a thorough discussion of the number of citations of RCTs with and without biostatistician was not included. We would like to mention that the citation numbers in our paper were used specifically to address the potential risk of bias due to confounding by indication. We extended the discussion section as requested. 3. Table 1: I’m curious why the authors have a category for software use, and why R is the only option. I would be interested to know the numbers using SAS as well. Reply: Thank you for bringing this topic up. Our results showed that in many publications, more than one software package was being used, e.g. R and Stata, or SPSS and Stata. Therefore, no simple categories of different software packages that would add up to 100% could be provided. We revised the analysis according to your suggestion and made new categories, showing which software was used (R, Stata, SAS, SPSS, other), potentially in combination with other software. The manuscript was revised to explain the new definition of categories. 4. 
Using different pairs of raters for each type of study (RCT, observational, prediction) makes it hard to compare study types on their average compliance with reporting guidelines, since we don’t know if the raters of different study types had the same strictness. So it is not clear if any difference between study types is due to study characteristics or rater characteristics. For example, did RCTs have higher mean scores than observational studies because one of the RCT raters was extremely lenient (rating everything as ‘present’)? Reply: Thank you for this comment. We do not believe that this issue is a concern because we had a large number of different raters across study types, and except for the very small number of RCTs there were no “pairs” of raters across observational studies and prediction / prognostic studies. Please let us clarify that the training to prepare raters to individual study types was very time consuming for the raters and it was not reimbursed by vouchers, so it was a strategic decision to train raters primarily for a specific study type. 5. Supplemental Table S3 is very helpful for showing how the scores were calculated! Reply: Thank you for this comment. Reply to Reviewer # 2 1. In general, this work is important to the fields and actions related to meta-analyses, and improving data quality and exploration. The variations seen between different studies of the same topic(s), as a central data resource worker for these research group, offers me the advantage of seeing or predicting the better approaches, and seeing their performance in terms of findings and publication. Applying that experience to this review, I notice I have less understanding of data at the ends of the facilities and researchers engaged with this particular study, and so evaluating people and their performance in the way these authors are doing this, requires more information about the process in general for this research. The purpose of this kind of work should be to encourage others to follow through with the same, and take the same paths, for which reason a little more information on this process is requested for the reader, since the reader can be too self limiting to learn the authors' new method without being able to understand it completely enough. So, in the criticisms I request a few additions, but focus on how to allow future readers to make the best use of understanding the author's thinking and processes for engaging in their analyses. For all of my corporation related work, I do the same myself; yet, it is never required. It is perhaps a little too old fashioned now to, as a statistician, wish to keep your secrets on how you accomplished something to yourself. Your logic should be: for everything I "discover", I have to realize and wonder how many else are out there right now, discovering the same thing. So we, as discoverers, publish our discoveries with the hopes of being recognized as one of those first discoverers, if that is what we are searching for out of our work. The exploratory nature--liking to combine the different methods to produce an more effective, clearer reporting, better research method, is always a better way to explore, than to just reiterate something discovered, without advancing its uses further. And every time a major step is taken, my hope is these researchers are already working on the next generation of this discovery of theirs. 
In general, working internally, I have found its feels "safest" releasing your novel products when you are two or more levels of advancement, above what you are ready to release. I focus a lot on little things, in relation to overall use, and may be overreacting to the lack of certain items in small sections of the writing, or recommendations when it comes to additions for appendix, or the inclusion of more figures. What tends to be lacking in this field of analytics is spatial thinking and representation, so in many writings, I am frustrates when there are just lists of numbers and flat values, especially if I can't visually analyze them. But unfortunately, the publisher limits their writer's ability to provide their readers with possible multidimensional reviews of what is being discovered. Thus the final product becomes limiting, it seems, and thus people like me asking for more visualization, so that me and others who looks at numbers this way benefit just as much as the table viewer and "memorizer." That being said, this starts up a new line of reasoning, and recommends are more active way of integrating new ideas be taken to produce more helpful and thorough final results in medical research. The authors state, this is a part of the goal of their research--to make better use of the resources that in an ideal setting should already be there. In the US, the biostatistician is mostly hired as a part timer, on a contract basis--industry loves to hire many analysts, who repeat findings and regurgitate the same results again and again. For every several dozen analysts (or more) there are in the health care industry, there is one true tester and discoverer of all possible relationships that may be there. The more we integrate our numbers, the more we see that most others cannot see, and most likely may never see. In health care, there are many things biostatisticians can find, see and display, that analysts will never think of searching for. I just don't get how it is they don't know these things, that they should know. That certainly slows down our numbers related technological advancements, and the improvements of the health care system in general. So that defines the value of these researchers' work. One day, maybe health will stop being a reactionary, retrospective research field, and engage in true spatial prediction modeling. Just like biostatisticians can tell you more, spatial biostatisticians tell you even more. Reply: We thank the reviewer for these general comments regarding the role of a biostatistician, in general and specifically. We would like to point to the fact that the level of evidence of “discovery” in our study is high due to the thorough and rigorous writing up of the study protocol and review thereof as registered report on PLOS ONE. While the study was conducted and analysed, we fully adhered to the study protocol. Publication recommended, with: 2. expansion of or expounding/clarification of bias sections notes [see lines 145-156] Reply: Thank you for your comment on risk of bias. The section was revised to comment on biases that could have resulted from our way of selection publications in this study. 3. R Studio resource link: i.e. https://shiny.rstudio.com/ or better—[add to test or appendix] Reply: Thank you for bringing this point to our attention. The reference for the shiny package was added to the references of our paper. 4. 
4. Provide a "visualization" example, flowchart, something for exemplification [see lines 163-165].

Reply: We would like to point out that our study includes a flow chart, which was uploaded as Figure 1. Additionally, the lollipop plots are provided as Figures 2 and 3 (a generic sketch of such a plot follows the correspondence below).

5. "Table 1. Descriptive Statistics": a footnote detailing the "other" software products might be worth considering (although not required) [after line 203].

Reply: We would like to thank the reviewer for this comment; this issue was also raised by Reviewer #1. We added categories of other software packages to Table 1, including Stata, SPSS, SAS, etc.

Optionals (for readers):

--Discussion section: further description, definition, and examples of "suboptimal reporting" [254-256]; are there qualifiers defined for this? [add to appendix]

Reply: The qualifiers for reporting quality are the corresponding items of the reporting guidelines of the selected study types. We used the original reporting guideline items to operationalize the questionnaire of this study. This is described in the methods section, and also in S3 Appendix, in which the detailed questionnaire is described.

--Summary of all qualifiers/quantifiers for the 0, 1, 2 scoring, as used by raters [add to appendix].

Reply: See above; the information is provided in S3 Appendix.

--Rater/review sheet, etc. [preferred, although optional(?)]: an actual or idealized example of a result, in the format used to keep the notes and observations, whether on screen, in an (Excel-like) table, on paper, in a notebook, etc. [add to appendix]

Reply: The review was performed with an R shiny app, so no paper sheets were used for the rating task (a minimal sketch of such an app follows below). The data were made available on OSF.

--Data-descriptive metadata file [add to appendix].

Reply: The data were made available on OSF. We give the link in the manuscript.

Submitted filename: Point to point reply BIQMR_final.docx
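As referenced in the reply above, the rating task was carried out in an R shiny app rather than on paper sheets. The following is a minimal sketch of what such a rating interface could look like, not the authors' actual app: the item texts, the mapping of the 0/1/2 scores to "absent"/"partial"/"present" labels, and the output file ratings.csv are hypothetical placeholders introduced for illustration.

    library(shiny)

    # Hypothetical guideline items; the study's real questionnaire is in S3 Appendix.
    items <- c(
      q1 = "Eligibility criteria reported",
      q2 = "Handling of missing values reported",
      q3 = "Statistical methods fully described"
    )

    ui <- fluidPage(
      titlePanel("Reporting-quality rating (sketch)"),
      textInput("pub_id", "Publication ID"),
      lapply(names(items), function(id)
        radioButtons(id, items[[id]],
                     choices = c("absent" = 0, "partial" = 1, "present" = 2),
                     inline = TRUE)),
      actionButton("save", "Save rating"),
      textOutput("status")
    )

    server <- function(input, output, session) {
      status <- reactiveVal("")
      observeEvent(input$save, {
        # One row per item; radioButtons returns the selected value as character.
        rating <- data.frame(
          pub_id = input$pub_id,
          item   = names(items),
          score  = vapply(names(items), function(id) as.integer(input[[id]]),
                          integer(1))
        )
        first <- !file.exists("ratings.csv")   # hypothetical output file
        write.table(rating, "ratings.csv", sep = ",", append = !first,
                    col.names = first, row.names = FALSE)
        status(sprintf("Saved %d item scores for '%s'.", nrow(rating), input$pub_id))
      })
      output$status <- renderText(status())
    }

    shinyApp(ui, server)

Storing one row per item, as sketched here, keeps the raw 0/1/2 ratings available so that dimension and sum scores can be computed afterwards in a reproducible analysis script rather than inside the rating tool.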
18 Feb 2022

The incremental value of the contribution of a biostatistician to the reporting quality in health research - a retrospective, single center, observational cohort study
PONE-D-21-37986R1

Dear Dr. Held,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Dylan A Mordaunt, MB ChB, MPH, MHLM, FRACP, FAIDH
Academic Editor
PLOS ONE

Additional Editor Comments (optional): Thank you for your resubmission. This now meets the criteria for publication.

Reviewers' comments:

24 Feb 2022

PONE-D-21-37986R1
The incremental value of the contribution of a biostatistician to the reporting quality in health research - a retrospective, single center, observational cohort study

Dear Dr. Held:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Dylan A Mordaunt
Academic Editor
PLOS ONE
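As noted in the reply to Reviewer #2's point 4 above, lollipop plots were used for Figures 2 and 3. The following is a minimal, self-contained sketch of how such a plot can be drawn with ggplot2; the dimension names, groups, and scores are invented for illustration and are not the study's data.

    library(ggplot2)

    # Invented example: mean reporting-quality score per dimension, by group
    # (biostatistician co-author vs. not); placeholder values only.
    df <- data.frame(
      dimension = rep(c("Design", "Statistical methods", "Missing values"), times = 2),
      group     = rep(c("exposed", "non-exposed"), each = 3),
      score     = c(1.8, 1.6, 1.2, 1.4, 1.0, 0.6)
    )

    ggplot(df, aes(x = score, y = dimension, colour = group)) +
      geom_segment(aes(x = 0, xend = score, yend = dimension)) +  # the "stick"
      geom_point(size = 3) +                                      # the "head"
      labs(x = "Mean score", y = NULL, colour = "Group") +
      theme_minimal()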
References: 13 in total (first 10 shown)

1.  Using the STROBE statement: survey findings emphasized the role of journals in enforcing reporting guidelines.

Authors:  Melissa K Sharp; Lorenzo Bertizzolo; Roser Rius; Elizabeth Wager; Guadalupe Gómez; Darko Hren
Journal:  J Clin Epidemiol       Date:  2019-08-06       Impact factor: 6.437

2.  The PRISMA 2020 statement: An updated guideline for reporting systematic reviews.

Authors:  Matthew J Page; Joanne E McKenzie; Patrick M Bossuyt; Isabelle Boutron; Tammy C Hoffmann; Cynthia D Mulrow; Larissa Shamseer; Jennifer M Tetzlaff; Elie A Akl; Sue E Brennan; Roger Chou; Julie Glanville; Jeremy M Grimshaw; Asbjørn Hróbjartsson; Manoj M Lalu; Tianjing Li; Elizabeth W Loder; Evan Mayo-Wilson; Steve McDonald; Luke A McGuinness; Lesley A Stewart; James Thomas; Andrea C Tricco; Vivian A Welch; Penny Whiting; David Moher
Journal:  Int J Surg       Date:  2021-03-29       Impact factor: 6.071

3.  A cross-sectional bibliometric study showed suboptimal journal endorsement rates of STROBE and its extensions.

Authors:  Melissa K Sharp; Ružica Tokalić; Guadalupe Gómez; Elizabeth Wager; Douglas G Altman; Darko Hren
Journal:  J Clin Epidemiol       Date:  2018-11-10       Impact factor: 6.437

4.  Guidelines for the Content of Statistical Analysis Plans in Clinical Trials.

Authors:  Carrol Gamble; Ashma Krishan; Deborah Stocken; Steff Lewis; Edmund Juszczak; Caroline Doré; Paula R Williamson; Douglas G Altman; Alan Montgomery; Pilar Lim; Jesse Berlin; Stephen Senn; Simon Day; Yolanda Barbachano; Elizabeth Loder
Journal:  JAMA       Date:  2017-12-19       Impact factor: 56.272

5.  SPIRIT 2013 statement: defining standard protocol items for clinical trials.

Authors:  An-Wen Chan; Jennifer M Tetzlaff; Douglas G Altman; Andreas Laupacis; Peter C Gøtzsche; Karmela Krleža-Jerić; Asbjørn Hróbjartsson; Howard Mann; Kay Dickersin; Jesse A Berlin; Caroline J Doré; Wendy R Parulekar; William S M Summerskill; Trish Groves; Kenneth F Schulz; Harold C Sox; Frank W Rockhold; Drummond Rennie; David Moher
Journal:  Ann Intern Med       Date:  2013-02-05       Impact factor: 25.391

6.  Effective Use of Reporting Guidelines to Improve the Quality of Surgical Research.

Authors:  Benjamin S Brooke; Amir A Ghaferi; Melina R Kibbe
Journal:  JAMA Surg       Date:  2021-06-01       Impact factor: 14.766

7.  Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

Authors:  Gary S Collins; Johannes B Reitsma; Douglas G Altman; Karel G M Moons
Journal:  Ann Intern Med       Date:  2015-01-06       Impact factor: 25.391

8.  Prognostic models for newly-diagnosed chronic lymphocytic leukaemia in adults: a systematic review and meta-analysis.

Authors:  Nina Kreuzberger; Johanna A A G Damen; Marialena Trivella; Lise J Estcourt; Angela Aldin; Lisa Umlauff; Maria D L A Vazquez-Montes; Robert Wolff; Karel G M Moons; Ina Monsef; Farid Foroutan; Karl-Anton Kreuzer; Nicole Skoetz
Journal:  Cochrane Database Syst Rev       Date:  2020-07-31

9.  Is reporting quality in medical publications associated with biostatisticians as co-authors? A registered report protocol.

Authors:  Ulrike Held; Klaus Steigmiller; Michael Hediger; Martina Gosteli; Kelly A Reeve; Stefanie von Felten; Eva Furrer
Journal:  PLoS One       Date:  2020-11-06       Impact factor: 3.240

10.  Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal.

Authors:  Laure Wynants; Ben Van Calster; Gary S Collins; Richard D Riley; Georg Heinze; Ewoud Schuit; Marc M J Bonten; Darren L Dahly; Johanna A A Damen; Thomas P A Debray; Valentijn M T de Jong; Maarten De Vos; Paul Dhiman; Maria C Haller; Michael O Harhay; Liesbet Henckaerts; Pauline Heus; Michael Kammer; Nina Kreuzberger; Anna Lohmann; Kim Luijken; Jie Ma; Glen P Martin; David J McLernon; Constanza L Andaur Navarro; Johannes B Reitsma; Jamie C Sergeant; Chunhu Shi; Nicole Skoetz; Luc J M Smits; Kym I E Snell; Matthew Sperrin; René Spijker; Ewout W Steyerberg; Toshihiko Takada; Ioanna Tzoulaki; Sander M J van Kuijk; Bas van Bussel; Iwan C C van der Horst; Florien S van Royen; Jan Y Verbakel; Christine Wallisch; Jack Wilkinson; Robert Wolff; Lotty Hooft; Karel G M Moons; Maarten van Smeden
Journal:  BMJ       Date:  2020-04-07
