
Publication rates from biomedical and behavioral and social science R01s funded by the National Institutes of Health.

William T Riley1, Katrina Bibb2, Sara Hargrave1, Paula Fearon2.   

Abstract

Prior research has shown a serious lack of research transparency resulting from the failure to publish study results in a timely manner. The National Institutes of Health (NIH) has increased its use of publication rate and time to publication as metrics for grant productivity. In this study, we analyzed the publications associated with all R01 and U01 grants funded from 2008 through 2014, a window that provides sufficient time for these grants to publish their findings, and identified predictors of time to publication from a number of variables, including whether a grant was coded as behavioral and social sciences research (BSSR). Overall, 2.4% of the 27,016 R01 and U01 grants had no publication associated with the grant within 60 months of the project start date, and this rate of zero publications was higher for BSSR grants (4.6%) than for non-BSSR grants (1.9%). Mean time to first publication was 15.2 months, and was longer for BSSR grants (22.4 months) than for non-BSSR grants (13.6 months). Survival curves showed that the risk of not publishing declined more rapidly for non-BSSR than for BSSR grants. Cox regression models showed that human research (vs. animal, neither, or both) and clinical trials research (vs. not) were the strongest predictors of time to publication and failure to publish, but even after accounting for these and other predictors, BSSR grants continued to show longer times to first publication and a greater risk of having no publications than non-BSSR grants. These findings indicate that, even with a liberal criterion for publication (any publication associated with a grant), a small percentage of R01 and U01 grantees fail to publish in a timely manner, and that a number of factors, including human research, clinical trial research, child research, not being an early stage investigator, and conducting behavioral and social sciences research, increase the time to first publication.


Year:  2020        PMID: 33186405      PMCID: PMC7665634          DOI: 10.1371/journal.pone.0242271

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

There has been an increasing emphasis on replication, openness, and transparency across the sciences, including the health sciences. Many aspects of research transparency have been pursued, including study registration [1] and data sharing [2], but study reporting remains a critical component of research transparency [3]. The National Institutes of Health (NIH) clinical trials policies require registration and reporting of results from all experimental studies involving humans to encourage greater research transparency and minimize publication bias [4]. One impetus for these policies was research showing that some studies never produce published results. Lack of timely publication and publication bias, particularly in industry-supported trials, have been well documented [5-7]. Among clinical trials funded by the NIH and registered in ClinicalTrials.gov, only 46% were published within 30 months of trial completion, and a third remained unpublished after an average of 51 months following trial completion [8]. A subsequent analysis of clinical trials funded by the National Heart, Lung, and Blood Institute found that only 64% of these trials had published their primary results 30 months after trial completion [9]. Even among larger clinical trials (500 or more participants), 29% remained unpublished, either in the literature or on ClinicalTrials.gov, after an average of 60 months from trial completion [10]. This failure to publish in a timely manner is not unique to clinical trials: among observational studies evaluating the safety of interventions registered in ClinicalTrials.gov, only 39% had published results within 30 months of study completion [11]. Although there is considerable literature on publication bias in behavioral and social sciences research [e.g., 12], we were unable to identify any studies that specifically evaluated the timeliness of publication in behavioral and social science studies.
Some of the clinical trial publication rates reported above include behavioral interventions, and to the degree that behavioral interventions tend to use surrogate endpoints (e.g., smoking, weight, blood pressure, cholesterol), these reports suggest that behavioral intervention trials may be less timely in publishing results than trials with clinical endpoints (e.g., morbidity or mortality). While timely and unbiased publication of clinical trials research with potentially immediate impacts on clinical practice is a clear public health need, it is valuable to the scientific enterprise that all types of studies, including basic research, publish in a timely manner. The NIH has increasingly focused on zero publications as an indicator of productivity, or the lack thereof, for the investments it makes in biomedical and behavioral research. Recently, the NIH began an extensive continuous quality improvement effort for its Center for Scientific Review (CSR) study sections, called ENQUIRE [13]. Among the variables considered in determining the scientific productivity of the grant applications reviewed by study sections is the percentage of funded grants with no publications associated with the grant via a PubMed ID. Further, some preliminary analyses of this criterion suggested that study sections reviewing a higher proportion of behavioral and social sciences research (BSSR) grants had higher rates of zero-publication grants. The purpose of this study was to assess the extent of this problem of zero publications, particularly for behavioral and social science grants funded by the NIH, and to identify study characteristics associated with R01 and U01 grants that fail to publish anything related to the grant within a reasonable period from the project start date.

Methods

Data used in these analyses were obtained from the NIH Information for Management, Planning, Analysis, and Coordination II (IMPAC II) system, selecting NIH-awarded R01 and U01 type 1 grants awarded from 2008 to 2014. Although time to publication is of interest for other grant mechanisms, R01s and U01s are the primary mechanisms expected to produce publications, and the 2008 to 2014 timeframe provides adequate time for grants awarded in 2014 to produce a publication during the typical five-year project period of an R01 or U01. From the set of awarded grants, we extracted and cleaned the following potential predictors of time to publication.

Behavioral and Social Sciences Research (BSSR) vs. not: RCDC coding [14] of BSSR; all other grants coded as not BSSR. This RCDC coding is based on the NIH definition of BSSR—the systematic study of behavioral and social phenomena relevant to health.

Basic vs. applied BSSR: RCDC coding of basic BSSR (bBSSR); all other BSSR grants coded as applied; all non-BSSR grants coded as neither.

Human vs. animal subjects: Determined by IMPAC II flag; coded as human, animal, both, or neither.

Clinical trial vs. not: Determined from the clinical trial flag as checked on the grant application. (Note that these grants predate the current NIH clinical trials definition and policy; this applicant-determined flag therefore likely indicates the traditional efficacy/effectiveness clinical trial, not the broader definition that includes any experimental manipulation.)

Child vs. adult subjects: Determined from study code 2A (children only, scientifically acceptable) vs. study code 3A (no children included, scientifically acceptable). (Note that NIH defined children as those under 21 years of age during the time period accessed.) Grants were coded as child, adult, or neither.

Early Stage Investigator (ESI) vs. not: For PIs with type 1 grants awarded from 2010 to 2014, the eRA Commons ESI flag was used. For 2008 and 2009, ESI status was manually coded based on (a) no prior substantial award and (b) being within 10 years of the terminal degree.

Time from terminal degree: Calculated in years as the fiscal year of the grant award minus the contact PI's latest degree year, and dichotomized at the median as < 17 years vs. ≥ 17 years.

Multiple PI vs. not: Whether the grant award had more than one PI listed as an applicant.

Highest degree received: Obtained for the contact PI and merged into the following categories: Masters, Medical, Doctorate, or Other.

Carnegie classification: Institutions receiving the grant award were classified per the Carnegie classification of institutions of higher learning, extracted from the 2015 definitions and updated 2018 public file [15], and merged into the following categories: R1 (highest research activity doctoral); R2 (higher research activity doctoral); M1, M2, M3 (moderate research doctoral/masters universities or other academia); and special focus (medical schools and centers and other research centers, with non-Carnegie medical centers and hospitals included in this category).

Dependent variables were the time to publication and the proportion of grants with zero publications during the 60 months from the project start date. PubMed was the source for publications; it includes all MEDLINE journal articles plus non-MEDLINE journal articles deposited in PubMed Central, the repository in which all NIH-funded research is required to be deposited. Project start dates and publication dates were exported, and time to publication was calculated by subtracting the project start date from the publication date of the earliest publication linked to that grant (in days).
Data removed from analysis included PubMed IDs (PMIDs) with negative time-to-first-publication values (coded as n/a; n = 15,983 of 456,401 total publications) and grants with no publications (coded as n/p; n = 655). A ratio of 30.44 days per month was used to convert days to months. Grants were counted as having zero publications if no publications linked to the grant award were published during the 60 months from the project start date, the typical duration of an R01 or U01 project period. Survival analyses were carried out in R version 3.6.0, primarily using the survival and survminer packages. Kaplan-Meier plots were created to visualize survival curves, and log-rank tests were used to compare the survival curves of different groups. Cox proportional hazards regression was used to estimate the effect of predictor variables on time to publication. P-values were computed, and p < .05 was considered statistically significant, although the size of the sample resulted in small absolute differences being statistically significant. A deidentified, public dataset is available at https://figshare.com/s/ef60aad738fcb5e2e273 for those who wish to replicate the findings or conduct additional analyses.
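The cleaning and time-to-publication rules described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' actual pipeline; the function name and the use of None to encode a zero-publication grant are assumptions based on the Methods text:

```python
from datetime import date

DAYS_PER_MONTH = 30.44  # the paper's days-to-months ratio (~365.25 / 12)

def months_to_first_publication(project_start, pub_dates):
    """Months from project start to the earliest linked publication.

    Mirrors the cleaning rules in the Methods: publications dated before
    the project start (negative times) are dropped, and a grant with no
    publication within 60 months counts as zero-publication (None here).
    """
    day_offsets = [(d - project_start).days for d in pub_dates]
    valid = [days for days in day_offsets if days >= 0]  # drop negative times
    if not valid:
        return None  # no usable publication linked to the grant
    months = min(valid) / DAYS_PER_MONTH
    return months if months <= 60 else None  # zero publications within 60 months

# A grant awarded 2010-01-01 whose first valid publication appears 2011-04-02;
# the earlier 2009 publication is excluded as preceding the project start
elapsed = months_to_first_publication(date(2010, 1, 1),
                                      [date(2009, 6, 1), date(2011, 4, 2)])
# elapsed ≈ 15.0 months (456 days / 30.44)
```

Note that 30.44 is simply the mean number of days per month (365.25 / 12), so the conversion is a uniform rescaling rather than a calendar-aware calculation.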

Results

Of the 27,016 R01 and U01 grants awarded by the NIH from 2008 to 2014, 655 grants (2.4%) had zero publications linked to them in the 60 months from the project start date. The mean number of publications per grant was 17; the distribution was positively skewed, with a range of 0 to 436 and nearly all grants (99.8%) within a range of 0 to 150. The mean time to first publication was 15.22 months, with a range of 1 to 128 months. The mean time to first publication for non-BSSR grants was 13.6 months (SD = 13.76), and for BSSR grants 22.44 months (SD = 19.52). Within BSSR, the mean time to first publication was 19.51 months (SD = 17.57) for basic BSSR and 23.85 months (SD = 20.24) for applied BSSR. For non-BSSR grants, 421 of 21,986 grants (1.9%) had zero publications, whereas for BSSR grants, 234 of 5,030 (4.6%) had zero publications in the 60 months from the project start date. Since the higher proportions of human subjects research (75% for BSSR vs. 32% for non-BSSR) and clinical trials research (28% vs. 7%) may partly explain the differences between non-BSSR and BSSR time to publication, Table 1 shows a breakdown of mean time in months to first publication for non-BSSR vs. BSSR (and by basic vs. applied BSSR) by clinical trial and human research code. As shown below, time to publication for clinical trial grants is 7 to 8 months longer on average than for non-clinical-trial grants, and BSSR clinical trials, regardless of basic vs. applied subtype, have longer times to publication than non-BSSR clinical trials. Time to publication for human research is also longer, by about 7 months on average, than for other types of research (animal, neither, both), and BSSR grants involving human research have about a 6-month longer time to first publication than non-BSSR grants.
Table 1

Time to first publication (months) by BSSR or not and by CT and human subject codes.

Variable        Strata    Not BSSR   BSSR   Basic BSSR   Applied BSSR
Clinical Trial  Not CT    13.1       20.2   18.9         21.0
                CT        20.8       28.4   25.9         28.7
Human           Neither   12.4       18.7   19.3         18.3
                Animal    12.0       13.3   12.9         13.4
                Human     19.2       25.2   23.3         26.1
                Both      11.0       15.4   11.5         17.3
Kaplan-Meier (KM) survival curves were computed for each of the categorical predictors. Fig 1 shows the KM plot for non-BSSR (red) and BSSR (green) grants. As shown in the figure, BSSR grants have a significantly higher probability of remaining unpublished than non-BSSR grants, especially in the first one to two years of the project, although this risk differential narrows by the 60-month point.
Fig 1

KM survival curve of time to first publication for BSSR and non-BSSR grants.

KM plots for the other categorical covariates were also computed. Fig 2 shows the KM plot for the human vs. animal research codes, which evidenced the largest survival curve difference in the risk of not publishing. As shown in Fig 2, grants coded as human research (compared to animal, neither, or both) had a substantially greater risk of not publishing throughout the 60-month period from the time of award, with the largest differences in risk occurring in the 12- to 48-month period, though human research grants remained substantially more at risk even at the 60-month mark. KM plots for the other predictor variables showed a greater risk of not publishing for applied vs. basic BSSR, for clinical trials vs. not, and for child vs. adult research.
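The curves in Figs 1 and 2 come from the Kaplan-Meier product-limit estimator, which can be sketched in a few lines. The code below is a generic illustration with toy data, not the paper's survival/survminer analysis; here an "event" is a grant's first publication, and grants that never publish are censored at 60 months:

```python
def kaplan_meier(durations, events):
    """Product-limit estimate of S(t): the probability that a grant is
    still unpublished just after time t. `durations` are months to first
    publication (or to censoring); `events` are 1 = published, 0 = censored."""
    event_times = sorted({t for t, e in zip(durations, events) if e == 1})
    s, curve = 1.0, []
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)  # still in the risk set at t
        d_t = sum(1 for d, e in zip(durations, events) if d == t and e == 1)
        s *= 1 - d_t / at_risk                         # product-limit step
        curve.append((t, s))
    return curve

# Toy data: three grants publish at 5, 10, and 20 months; one is censored at 60
curve = kaplan_meier([5, 10, 20, 60], [1, 1, 1, 0])
# S(t) steps: 0.75 after month 5, 0.50 after month 10, 0.25 after month 20
```

A log-rank test, as used in the paper, then compares two such curves by contrasting observed and expected event counts at each event time.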
Fig 2

KM survival curve of time to first publication for human and animal research codes.

Among PI and institutional predictors, while being an ESI or being less than 17 years from the terminal degree significantly reduced the risk of not publishing, the absolute differences in the curves were small. The ESI KM plot is shown in Fig 3.
Fig 3

KM survival curve of time to first publication for ESI (1) vs. not-ESI (0) investigators.

To assess the relative predictive value of these variables while controlling for the others, we performed a Cox regression analysis. Table 2 shows the hazard ratios of the full Cox model relative to each reference category. All of the predictors in the model were significant except time from terminal degree, which is highly related to ESI status. Accounting for the effects attributable to the other predictors, human research had among the lowest hazard ratios for publication (0.65) relative to the neither reference category (with animal and both slightly higher), indicating a greater risk of not publishing. Even after accounting for human vs. animal participants and clinical trial status, both basic and applied BSSR had lower hazards of publishing (0.78 and 0.77, respectively) relative to non-BSSR grants. Being an ESI, having a medical or doctorate degree, and submitting a multiple-PI grant all increased the likelihood of publishing in a timely manner.
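The hazard ratios in Table 2 are simply the exponentiated Cox coefficients, Exp(β), and the Wald confidence limits can be recovered from the reported β and SE. A minimal sketch (the 1.96 multiplier for a 95% interval is an assumption on my part; small rounding differences from the table are expected):

```python
import math

def hazard_ratio(beta, se, z=1.96):
    """HR = exp(beta); Wald confidence interval = exp(beta ± z·SE)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Human vs. "neither" row of Table 2: beta = -0.43, SE = 0.02
hr, lo, hi = hazard_ratio(-0.43, 0.02)  # hr ≈ 0.65, CI ≈ (0.63, 0.68)
```

An HR below 1 here means a lower instantaneous rate of publishing, i.e., a longer expected time to first publication for that group.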
Table 2

Covariate and hazard ratios for full Cox model.

Covariate      Strata                     Exp(β)  95% CI      p-value        β coef  SE
BasicApplied   1 = Basic                  0.78    0.74–0.83   < 2e-16 ***    -0.24   0.03
               2 = Applied                0.77    0.73–0.80   < 2e-16 ***    -0.27   0.02
HumanAnimal    1 = Human                  0.65    0.62–0.68   < 2e-16 ***    -0.43   0.02
               2 = Animal                 1.05    1.02–1.10   0.00159 **      0.06   0.02
               3 = Both                   1.15    1.08–1.21   1.12e-06 ***    0.14   0.03
CT             1 = yes                    0.79    0.76–0.83   < 2e-16 ***    -0.23   0.02
ChildAdult     1 = Child                  0.93    0.86–1.00   0.06           -0.08   0.04
               2 = Adult                  1.06    1.01–1.11   0.01162 *       0.06   0.02
TFTD           n/a                        0.99    0.99–0.99   < 2e-16 ***    -0.01   0.00
ESI            1 = yes                    1.05    1.01–1.09   0.00805 **      0.05   0.02
MultiplePIs    1 = yes                    1.14    1.10–1.18   3.44e-13 ***    0.13   0.02
Degree         1 = Medical                1.18    1.10–1.25   3.41e-07 ***    0.16   0.03
               2 = Doctorate              1.12    1.06–1.19   4.77e-05 ***    0.12   0.03
               3 = Other                  1.02    0.86–1.20   0.83            0.02   0.08
Carnegie       1 = R2                     0.98    0.93–1.03   0.43           -0.02   0.03
               2 = Other academia         0.96    0.90–1.03   0.29           -0.04   0.04
               3 = Medical school/center  0.97    0.94–1.00   0.07           -0.03   0.02
               4 = Research/Other         0.73    0.70–0.76   < 2e-16 ***    -0.31   0.02

BSSR removed from the full model due to its inclusion in the BasicApplied neither level. Exp(β): hazard ratio (HR) per predictor after accounting for all other predictors in the model; CI: confidence interval (± for Exp(β)); β: regression coefficient; SE: standard error. Significance codes: *** p ≤ 0.001, ** p ≤ 0.01, * p ≤ 0.05.

A comparison of simple and multiple Cox models, along with separate Schoenfeld residual p-values based on unstratified covariates, revealed that the hazards of the strata were not proportional to one another. Therefore, stratified models were computed (e.g., Cox regressions for BSSR grants only) and a dichotomized time from terminal degree was used. These stratified models revealed some differences from the full model in the absolute hazard ratios, but the pattern of predictors across most stratified models was consistent with the full model. Table 3 below shows the Cox model stratified by BSSR vs. non-BSSR. For both groups, human research substantially increases the risk of not publishing, as does conducting a clinical trial. Being an ESI or having less than 17 years since the terminal degree reduces the risk of not publishing.
Table 3

Full Cox regression models stratified by BSSR or not-BSSR.

                                          BSSR only (n = 4927)              Non-BSSR (n = 21566)
Covariate      Strata                     Exp(β)  95% CI     p-value        Exp(β)  95% CI     p-value
BasicApplied^  2 = Applied                0.99    0.93–1.06  0.77           -       -          -
HumanAnimal    1 = Human                  0.73    0.64–0.84  3.44e-06 ***   0.66    0.63–0.70  < 2e-16 ***
               2 = Animal                 1.50    1.30–1.72  1.08e-08 ***   1.03    1.00–1.07  0.07
               3 = Both                   1.40    1.04–1.86  0.03 *         1.14    1.07–1.21  6.46e-06 ***
CT             1 = yes                    0.77    0.71–0.83  5.05e-12 ***   0.81    0.76–0.86  5.97e-11 ***
ChildAdult     1 = Child                  1.07    0.95–1.20  0.26           0.83    0.75–0.93  0.000873 ***
               2 = Adult                  1.06    0.97–1.15  0.20           1.05    1.00–1.10  0.07
TFTD2          1: ≥ 17 years              0.89    0.84–0.96  0.003 **       0.89    0.89–0.92  3.68e-12 ***
ESI            1 = yes                    1.08    0.99–1.17  0.06           1.07    1.03–1.11  0.000714 ***
MultiplePIs    1 = yes                    1.06    0.99–1.15  0.14           1.15    1.11–1.20  5.76e-13 ***
Degree         1 = Medical                1.15    1.01–1.31  0.03 *         1.16    1.08–1.25  3.17e-05 ***
               2 = Doctorate              1.03    0.93–1.14  0.57           1.13    1.06–1.22  0.000201 ***
               3 = Other                  0.95    0.72–1.26  0.72           1.04    0.85–1.28  0.70
Carnegie       1 = R2                     1.10    0.97–1.25  0.12           0.96    0.90–1.01  0.12
               2 = Other academia         0.99    0.82–1.19  0.93           0.97    0.89–1.04  0.38
               3 = Medical school/center  0.98    0.90–1.06  0.55           0.97    0.93–1.00  0.06
               4 = Research/Other         0.80    0.73–0.88  9.96e-06 ***   0.72    0.69–0.76  < 2e-16 ***

-: not in model. Exp(β): hazard ratio (HR) per predictor after accounting for all other predictors in the model; CI: confidence interval. Significance codes: *** p ≤ 0.001, ** p ≤ 0.01, * p ≤ 0.05.


Discussion

These data from over 27,000 NIH R01 and U01 grants awarded between 2008 and 2014 show that 2.4% of the awards had zero publications associated with the grant in the 60 months from the start of the project period. This rate of zero publications is much lower than that found in prior studies. Most prior studies, however, focused on publication of the primary outcome results from clinical trials. In contrast, this study included all NIH R01 and U01 grants, only a small proportion of which were conducting clinical trials, and used a much more liberal outcome criterion: any PubMed publication associated with the grant, not the primary outcome results of the project. This approach may underestimate the true rate of zero publications resulting from a specific grant, since investigators can associate a grant with a publication regardless of how tangential that publication may be to the grant. Consistent with the possibility that grants are associated with publications to which they did not contribute in any substantive way, we identified during data cleaning nearly 16,000 publications (about 3.5% of total publications) that preceded the project start date; these were excluded from the analyses because they would otherwise have produced negative times to publication. Given that linking a grant number to a publication is all that is required to meet the criterion of a publication resulting from a grant, the fact that even 2.4% of R01 and U01 grants (655 NIH-funded R01s or U01s) failed to be associated with even one publication in the 60 months following the project start date is potentially problematic for research transparency. The mean time to first publication was approximately 15 months from the project start date, and this time varies substantially with the type of research being conducted.
Clinical trials add, on average, approximately 9 months to the time to first publication, and research with humans adds approximately 7 months. These are not unexpected findings: human subjects research requires Institutional Review Board (IRB) approvals and informed consent procedures; long-term longitudinal studies take more time to conduct than cross-sectional studies; and recruitment and retention of human subjects is a significant challenge that often delays study timelines. One purpose of this study was to examine further some preliminary NIH findings suggesting that behavioral and social sciences research grants and study sections may not be as productive as non-BSSR or biomedical research study sections. Evaluations of study sections using the Relative Citation Ratio (RCR) [16] had shown that the RCRs from some BSSR-focused study sections may not be as high as those from some more biomedically focused study sections. Although the RCR normalizes for the citation behavior of a research community, it is a publication-level metric that does not normalize for the publication behavior of a research community when aggregated across investigators, study sections, or institutes. Therefore, when the CSR ENQUIRE effort began, a simpler index, the proportion of grants with zero publications, was considered. This study shows that, even after accounting for the disproportionate rates of human subjects research and clinical trials among BSSR grants, BSSR grants still have a higher risk of not publishing within five years of their project start date than their non-BSSR biomedical counterparts. Without controlling for other variables, the average time to first publication is 14 months for a non-BSSR grant and 22 months for a BSSR grant. Within clinical trial grants, BSSR is slower to first publication than non-BSSR, and the same holds true for human subjects research.
The survival curves for the risk of not publishing drop substantially faster for non-BSSR than for BSSR grants. After accounting for all other variables in the model, the hazard of publishing is 22 to 23 percent lower for BSSR than for non-BSSR research. It is possible that BSSR inherently takes longer to conduct, even after accounting for other variables, and without publication data beyond 60 months we do not know whether, given more time, the non-BSSR and BSSR curves would continue toward near-zero risk of not publishing. However, the data from this study and from others [17] suggest that the likelihood of publishing decreases over time. Although the focus of this study was primarily on how the types of studies proposed in grants (BSSR or not, clinical trial or not, human research or not, child research or not) affect the time to publication and the risk of not publishing, we were also able to assess certain awardee characteristics (both PI and institution) associated with the risk of not publishing. Although there was little differentiation among institutions based on Carnegie classification, grants to “research/other” institutions had a higher risk of not publishing than grants to R1 and R2 research institutions. “Research/other” is a heterogeneous group that includes medical centers unaffiliated with academic institutions, private research institutions, and institutions without a Carnegie category. These institutions may have fewer resources to assist investigators in conducting studies and publishing in a timely manner. Although being an ESI or being less than 17 years from the terminal degree was not associated with a substantial reduction in the risk of not publishing, the reduction was significant compared to non-ESIs and those 17 or more years from the terminal degree. These findings add further support to the NIH’s Next Generation Researchers Initiative [18]. Being on a multiple-PI grant also reduced the risk of not publishing.
This may simply be the result of having more PI leadership for publishing results, and/or it may be related to the transdisciplinary nature of some multi-PI grants. Prior research has shown that, while slower to publish initially, transdisciplinary teams eventually publish more than non-transdisciplinary projects [19]. This study has a number of limitations to consider when interpreting the results. As noted previously, the association of a PMID with a grant is a liberal criterion for a publication resulting from a research grant; given the large sample size, manually identifying the primary or clearly relevant publications from each grant was not feasible. We identified publications for up to 60 months from the project start date, and for some larger longitudinal studies the first publication may occur beyond that period. The predictors were selected based on their availability in the NIH IMPAC II grant database; other predictors of the risk of not publishing may be important, but we were not able to consider them given the limited number of relevant variables in this database. The criteria for categorizing these predictors were also based on the tools available (e.g., RCDC for BSSR, the investigator checking “clinical trial” on an application), which may produce categorization errors. Within these limitations, the findings from this study show that while most NIH R01 and U01 grants produce at least one publication within five years of the project start date, a small percentage do not.
There are legitimate reasons for a research grant to produce no publications (e.g., the study proved unfeasible to conduct, design flaws limited internal validity and reproducibility, or negative results proved difficult to publish), but for R01 and U01 grants, which require sufficient preliminary data to demonstrate that the proposed research is feasible and likely to produce meaningful results that advance the science, it is also reasonable for taxpayers to question their investment in research grants that produce no publications. Based on these analyses, there appears to be a higher risk of not publishing in a timely manner if the research involves humans, clinical trials, children, or behavioral and social sciences research. Since these types of research increase the risk of not publishing in a timely manner, increased monitoring of progress, such as that provided under NIH’s clinical trials monitoring policies, appears appropriate. As noted previously, the Center for Scientific Review (CSR) considers zero-publication rates from grants as a criterion in its ENQUIRE process for evaluating study sections. The publication productivity of grantees is typically considered in type 2 applications, and there are examples, such as the National Institute of Child Health and Human Development (NICHD) Neonatal Research Network, in which investigators were restricted from proposing new grants until the manuscripts from their prior work were completed [20]. Giving increased weight in review and funding decisions to the quality and timeliness of publications from the research team’s prior grant awards has the potential to increase the productivity and transparency of NIH-funded research. Further research on the factors associated with the risk of not publishing could identify grants that require additional monitoring, support, and incentives to publish, so that taxpayer funding of NIH-supported research results in transparent findings that advance the science.
11 Sep 2020 PONE-D-20-26044 Rates of Zero Publications from Biomedical and Behavioral and Social Science R01s funded by the National Institutes of Health PLOS ONE Dear Dr. Riley, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised by an  expert reviewer and additional editor comments during the review process. Please submit your revised manuscript by Oct 26 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. 
For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Dr. Sakamuri V. Reddy Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. 
We will update your Data Availability statement on your behalf to reflect the information you provide. 3. Thank you for stating the following in the Financial Disclosure section: "The authors received no specific funding for this work." We note that one or more of the authors are employed by a commercial company: "Lexical Intelligence" a) Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form. Please also include the following statement within your amended Funding Statement. “The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.” If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement. b) Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. 
Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests Additional Editor Comments (if provided): The manuscript focuses on NIH-supported R01 and U01 grants and publication lag. This study identifies type of research and investigator status as predictors of publication lag. However, to further improve the manuscript, please clarify at the end of the Methods section the statistical analysis of the data and the significance level considered. 
Although the authors have reported time to first publication, they should clarify in the discussion the reasons for variability in publication time between clinical trial grants and other R01 grants, and between studies involving human subjects and those involving animal subjects. They should also address the variability in publication time between early stage and established investigators, and may discuss the difficulty of identifying or analyzing abstracts of studies presented at annual scientific society meetings. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: No ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? 
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Reviewer’s Preface There may be a dearth of literature evaluating the timeliness of publications in behavioral and social science studies as the authors point out, but there are many good peer-reviewed studies on publication rates for human clinical studies. Of the latter, publication rates are most often based upon the study completion date, and publication in an indexed, peer-reviewed journal. Since this manuscript is likely to be read by healthcare professionals familiar with previous methodologies for publication rate studies, the authors should explain why theirs differ. The following review offers some specific observations and suggestions. Methods and Results: BSSR vs. non-BSSR Grants: The authors assume the reader is familiar with BSSR grants. Could the authors explain the difference between BSSR and non-BSSR grants in a few sentences? Term of the Study: Many previous publication rate reports are based on post-study completion, making a direct comparison to this study difficult. Miller (2015) evaluated publication rates based upon Section 801 of the Food and Drug Administration Amendments Act of 2007 (FDAAA) requiring posting of study results no later than one year after the date of completion or early termination. Prenner (2011), Chen (2016) and Archer (2016) evaluated publication rates 24 months post study completion. 
Gordon (2014) chose to evaluate publication rates 30 months post study completion. Post-study completion takes into account the duration of the study. Post-grant award does not. Can the authors explain their rationale for using “60 months from grant award” as the term of this study? Publication Rates: A zero publication rate of 2.4% seems low and may be the result of the type of study (BSSR), term of the study (60 months from project implementation) and the authors’ liberal criteria for publication. Gordon (2014) reported a publication rate of 64% 30 months post-study, Archer (2016) 64% after 2 years, Miller (2015) 67% after 1 year, Prenner (2011) 54% after 2 years, Chen (2016) 36% after 24 months. Can the authors provide further clarification for the difference in their publication rates? Definition of Publication: Authors refer to their “liberal criteria for publication” without clearly defining what constitutes a publication. Were these citable, peer-reviewed publications that appeared in indexed journals? Were they non-peer-reviewed communication activities such as a posting on a website, or on social media? Publications per Grant: The range of publications per grant was 0 to 436 (pg. 7). The latter number is large and may confound the mean. Was the Standard Deviation calculated? Could a handful of grants skew the data? If so, should they be identified and removed from the calculation to provide an accurate picture of the range of publications per grant? Publications Identified: The authors note in two locations that 16,000 pre-project publications were excluded from the analysis (pp 7, 14), but the total number of publications that met their criteria for inclusion does not appear in the manuscript. We know that 27,016 grants were included (pp 2, 7). We know the average number of publications per grant was 17 (pg. 7). Is the reader to assume that 459,272 publications were identified (17 x 27,016)? 
Size of Grant: Did the size of the study grant have any bearing on the rate of publication? Human Studies: Studies on human subjects require review by an Institutional Review Board and often the informed consent of the subject as well. Could this account for the 7-8 month delay in publication (pg. 8)? Publishing Incentives: Could the incentives for publishing at “research other” institutions (pg. 16) relate to the fact that smaller institutions do not have the resources of an academic center (e.g., post-docs, students, and professional medical writers who can assist with reconciling case report forms, collating data, preparing tables, and drafting and submitting study manuscripts)? Study Limitations: Mostly addressed above, with one addition. Does the sheer size of the data set and the comparison of disparate subgroups preclude the nuanced analysis necessary to recommend and implement specific policy changes? Discussion: Zero Risk of Not Publishing: Is the achievement of zero rates of non-publication (pg. 15) a practical goal? Many studies fail to achieve their objectives. Knowing how many studies failed, and the reasons behind the study’s failure, is useful for future researchers as they design their protocols. However, there is little interest on the part of investigators and journal editors in spending time on negative or irreproducible results. Could this be a reason why some grants will never have an associated publication? Do the authors know how many grants had negative or irreproducible results? Additional Recommendations Needed: The authors recommend further support for the NIH’s Next Generation Researchers Initiative (pg. 16). Could additional successful examples be included, such as the National Institute of Child Health and Human Development Neonatal Research Network (Archer 2016), which restricted investigators with unfinished manuscripts from proposing new grant requests? 
Would a policy similar to that of the NRN help to improve BSSR publication rates by holding investigators accountable for publication? Some Editorial Comments Manuscript Title: Would on-line searchability improve if the title were simplified as follows: “Publication Rates from Biomedical and Social Science R01s by the National Institutes of Health.” Will the word “zero” in the title serve to confound a future literature search? Typo: Pg. 12, second sentence. Should the word be “predictive” instead of “protective”? Reviewer Citations Archer SW, Carlo WA, Truog WE, et al. Improving publication rates in a collaborative clinical trials research network. Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network. Semin Perinatol. 2016 Oct;40(6):410-417. Chen R, Desai NR, Ross JS, et al. Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ. 2016 Feb 17;352:i637. Food and Drug Administration Amendments Act of 2007 (FDAAA), Section 801. Gordon D, Taddei-Peters W, Mascette A, et al. Publication of trials funded by the National Heart, Lung, and Blood Institute. N Engl J Med. 2013 Nov 14;369(20):1926-34. Miller JE, Korn D, Ross JS. Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012. BMJ Open. 2015 Nov 12;5(11). Prenner JL, Driscoll SJ, Fine HF, Salz DA, Roth DB. Publication rates of registered clinical trials in macular degeneration. Retina. 2011 Feb;31(2):401-4. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? 
For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 28 Oct 2020 October 26, 2020 Dr. Sakamuri V. Reddy Academic Editor PLOS ONE Re: PONE-D-20-26044, “Rates of Zero Publications from Biomedical and Behavioral and Social Science R01s funded by the National Institutes of Health” PLOS ONE Dr. Reddy, Thank you and the reviewer for the comments on this manuscript. We have addressed each point raised in this response and submitted a marked-up manuscript based on these review points and our responses as well as a clean version of the revised manuscript. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. We have rechecked the manuscript to ensure that it adheres to the PLOS ONE style requirements. 2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. 
a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. Although the data for this study were extracted from NIH’s IMPACII database, access to which is restricted to protect grantees and their proprietary information, we have created a deidentified dataset (i.e., no grant number, appl ID, PI name, or PI institution) that can be made available for those who desire to reproduce our results. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We have created a deidentified dataset that is posted on figshare.com for others to access for replication or further analyses. 3. Thank you for stating the following in the Financial Disclosure section: "The authors received no specific funding for this work." We note that one or more of the authors are employed by a commercial company: "Lexical Intelligence" a) Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. 
If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form. We have updated Author Contributions to indicate that Lexical Intelligence was contracted by the NIH to perform these analyses. We have amended the funding statement to reflect this. We amended the statement above slightly because the funder, NIH, was involved in the data collection as part of its administrative responsibilities to monitor grants and their associated publications, but NIH did not play any of the other roles described above. Please also include the following statement within your amended Funding Statement. “The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.” We have added this statement. If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement. Lexical Intelligence was contracted by the NIH to perform these analyses and does not have any conflict of interest regarding the results of the study. Indeed, having an independent contractor perform these analyses eliminates any appearance of conflict the NIH might have in publishing results on the productivity of the grants it awards. 
b) Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. We have updated the Competing Interests Statement declaring the commercial affiliation of the co-authors. Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests) . If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. We have confirmed that this commercial affiliation does not alter adherence to all PLOS ONE policies. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. We have done so in our cover letter. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. 
Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests. We appreciate the competing interest efforts of PLOS ONE, but in this case, the commercial entity has less competing interest in a full and objective presentation of the results than does the NIH, which could be perceived as wanting its grant awards to appear productive regarding publication rates; this is why we contracted for an independent analysis of these data. Additional Editor Comments (if provided): The manuscript focuses on NIH-supported R01 and U01 grants and publication lag. This study identifies type of research and investigator status as predictors of publication lag. However, to further improve the manuscript, please clarify at the end of the Methods section the statistical analysis of the data and the significance level considered. Although the authors have reported time to first publication, they should clarify in the discussion the reasons for variability in publication time between clinical trial grants and other R01 grants, and between studies involving human subjects and those involving animal subjects. They should also address the variability in publication time between early stage and established investigators, and may discuss the difficulty of identifying or analyzing abstracts of studies presented at annual scientific society meetings. We have clarified in the methods section the statistical analyses and significance considered. We also have further clarified in the discussion some of the potential reasons for the differences noted in publication lags between types of grants. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author Reviewer #1: Reviewer’s Preface There may be a dearth of literature evaluating the timeliness of publications in behavioral and social science studies as the authors point out, but there are many good peer-reviewed studies on publication rates for human clinical studies. 
Of the latter, publication rates are most often based upon the study completion date, and publication in an indexed, peer-reviewed journal. Since this manuscript is likely to be read by healthcare professionals familiar with previous methodologies for publication rate studies, the authors should explain why theirs differ. The following review offers some specific observations and suggestions. Methods and Results: BSSR vs. non-BSSR Grants: The authors assume the reader is familiar with BSSR grants. Could the authors explain the difference between BSSR and non-BSSR grants in a few sentences? We have included the NIH definition of BSSR that is the basis for RCDC coding of grants as BSSR. Term of the Study: Many previous publication rate reports are based on post-study completion, making a direct comparison to this study difficult. Miller (2015) evaluated publication rates based upon Section 801 of the Food and Drug Administration Amendments Act of 2007 (FDAAA) requiring posting of study results no later than one year after the date of completion or early termination. Prenner (2011), Chen (2016) and Archer (2016) evaluated publication rates 24 months post study completion. Gordon (2014) chose to evaluate publication rates 30 months post study completion. Post-study completion takes into account the duration of the study. Post-grant award does not. Can the authors explain their rationale for using “60 months from grant award” as the term of this study? We have explained this further in the revised manuscript. The source of our data is different from those from clinical trials studies in that our source includes all research, not just clinical trials research. As a result, we are only able to specify the project start date, not the study completion date. Therefore, we chose 60 months from grant award to include the full 5-year period of most U01 and R01 awards. 
We considered a longer time period to account for no-cost extensions, but the longer the window needed to provide sufficient time for publication, the more dated the data become. We believe 60 months is a reasonable compromise, providing sufficient time for most grants to publish something during their project period while not pushing back further the inclusion criteria for when grants were initially awarded. Publication Rates: A zero publication rate of 2.4% seems low and may be the result of the type of study (BSSR), term of the study (60 months from project implementation) and the authors’ liberal criteria for publication. Gordon (2014) reported a publication rate of 64% 30 months post-study, Archer (2016) 64% after 2 years, Miller (2015) 67% after 1 year, Prenner (2011) 54% after 2 years, Chen (2016) 36% after 24 months. Can the authors provide further clarification for the difference in their publication rates? We discussed this in the original manuscript but have further elaborated on these differences in this revision. Most prior research on publication lags has focused on clinical trials and assessed the time to the publication of the primary results. That more stringent criterion for publication is not possible or appropriate within the dataset we have analyzed. Instead, this data set only allows us to identify any and all publications that are associated with a given grant in PubMed Central. Therefore, any publication associated with a grant, not the primary outcome publication from a clinical trial, is included. Further, as indicated by our results, clinical trials have longer lag times to any publication than do grants that are not clinical trials. Definition of Publication: Authors refer to their “liberal criteria for publication” without clearly defining what constitutes a publication. Were these citable, peer-reviewed publications that appeared in indexed journals? 
Were they non-peer-reviewed communication activities such as a posting on a website, or on social media? We have clarified what we mean by “liberal criteria”: any publication associated with a grant in PubMed Central. Most of these publications are peer-reviewed journal publications, although PubMed Central does sometimes include proceedings from meetings if published in a journal. It does not, however, include postings to websites, social media, etc. By “liberal” criteria, we mean that all that is required is for an investigator to log in to PubMed Central and associate a grant, or multiple grants, with a publication. Publications per Grant: The range of publications per grant was 0 to 436 (pg. 7). The latter number is large and may confound the mean. Was the Standard Deviation calculated? Could a handful of grants skew the data? If so, should they be identified and removed from the calculation to provide an accurate picture of the range of publications per grant? The reviewer is correct that this distribution is positively skewed, but there are no obvious outliers in this distribution. We have included further information on the distribution of publications per grant in the revised manuscript, but it is important to note that we provide the number of publications per grant in this manuscript only for descriptive purposes. One reason we focused our analyses on time to publication and presence or absence of any publication is that the number of publications is so positively skewed. Publications Identified: The authors note in two locations that 16,000 pre-project publications were excluded from the analysis (pp 7, 14), but the total number of publications that met their criteria for inclusion does not appear in the manuscript. We know that 27,016 grants were included (pp 2, 7). We know the average number of publications per grant was 17 (pg. 7). Is the reader to assume that 459,272 publications were identified (17 x 27,016)? 
We have included the total number of publications associated with grants (456,401) and the number excluded because they were associated before the project start date. Size of Grant: Did the size of the study grant have any bearing on the rate of publication? We restricted our analyses to R01s and U01s in part to assess the publication rates of grants of comparable funding size. One might expect differences in publication rates for smaller (R03, R21) or larger (P30, P50) grants, but for R01s and U01s, the grant funding sizes are quite similar. Human Studies: Studies on human subjects require review by an Institutional Review Board and often the informed consent of the subject as well. Could this account for the 7-8 month delay in publication (pg. 8)? This is a good point, and we have added it to the potential reasons for the delay in conducting and publishing studies involving humans. Publishing Incentives: Could the incentives for publishing at “research other” institutions (pg. 16) relate to the fact that smaller institutions do not have the resources of an academic center (e.g., post-docs, students, and professional medical writers who can assist with reconciling case report forms, collating data, preparing tables, and drafting and submitting study manuscripts)? We agree that this is a potential cause of the difference, which is why we considered Carnegie categories as a predictor variable. We have made this point more explicit in the revised manuscript. Study Limitations: Mostly addressed above, with one addition. Does the sheer size of the data set and the comparison of disparate subgroups preclude the nuanced analysis necessary to recommend and implement specific policy changes? 
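The discrepancy between the reviewer's back-calculated figure (459,272) and the reported total (456,401) is explained by rounding: the mean of 17 publications per grant quoted in the manuscript is a rounded value. A minimal arithmetic check, using only the figures quoted in this exchange:

```python
# Figures quoted in the review exchange (descriptive check only).
n_grants = 27_016         # R01/U01 grants included in the analysis
reported_total = 456_401  # total publications associated with grants

# The reviewer's estimate multiplies the grant count by the rounded mean of 17.
reviewer_estimate = 17 * n_grants
print(reviewer_estimate)        # 459272

# The reported total implies an unrounded mean just under 17.
actual_mean = reported_total / n_grants
print(round(actual_mean, 2))    # 16.89
```

The ~2,900-publication gap thus reflects rounding of the reported per-grant mean, not missing data.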
More nuanced analyses could reveal potential policy changes that would encourage publication, and as a program evaluation effort, NIH regularly assesses publication rates of specific types of grants and considers policy changes; for the purposes of this manuscript, however, we wanted to document the overall rates of publication, publication time-lags, and predictors of those lags in the most common form of grant that the NIH awards. By making a deidentified public dataset available, we hope others can pursue more nuanced questions as well. Discussion: Zero Risk of Not Publishing: Is the achievement of zero rates of non-publication (pg. 15) a practical goal? Many studies fail to achieve their objectives. Knowing how many studies failed, and the reasons behind the study’s failure, is useful for future researchers as they design their protocols. However, there is little interest on the part of investigators and journal editors in spending time on negative or irreproducible results. Could this be a reason why some grants will never have an associated publication? Do the authors know how many grants had negative or irreproducible results? This is an excellent point that we have attempted to make more explicit in the discussion section. We do not know how many grants had negative or irreproducible results. It is true that some studies turn out to be unfeasible to carry out and others are carried out, produce negative results, and are unable to publish in light of publication bias. We do not believe the optimal number of grants with no publications is zero, but given the $3 to $4 million investment of a typical R01 or U01, some published contribution from nearly all grants seems reasonable. Additional Recommendations Needed: The authors recommend further support for the NIH’s Next Generation Researchers Initiative (pg. 16). 
Could additional successful examples be included such as the National Institute of Child Health and Human Development Neonatal Research Network (Archer 2016) that restricted investigators with unfinished manuscripts from proposing new grant requests? Would a policy similar to that of the NRN help to improve BSSR publication rates by holding investigators accountable for publication? This is an excellent point and we have added this example as well as other potential strategies to improve timely publication. Some Editorial Comments Manuscript Title: Would on-line searchability improve if the title was simplified as follows: “Publication Rates from Biomedical and Social Science R01s by the National Institutes of Health.” Will the word “zero” in the title serve to confound a future literature search? Thank you for this title revision. We have modified the title accordingly. Typo: Pg. 12, second sentence. Should the word be “predictive” instead of “protective”? Yes, this has been corrected. Reviewer Citations Archer SW, Carlo WA, Truog WE, et. al. Improving publication rates in a collaborative clinical trials research network. Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network. Semin Perinatol. 2016 Oct;40(6):410-417. Chen R, Desai NR, Ross JS, et. al. Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ. 2016 Feb 17;352:i637. Food and Drug Administration Amendments Act of 2007 (FDAAA), Section 801 Gordon D, Taddei-Peters W, Mascette A, et. al. Publication of trials funded by the National Heart, Lung, and Blood Institute. N Engl J Med. 2013 Nov 14;369(20):1926-34. Miller JE, Korn D, Ross JS. Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012. BMJ Open. 2015 Nov 12;5(11) Prenner JL, Driscoll SJ, Fine HF, Salz DA, Roth DB. 
Publication rates of registered clinical trials in macular degeneration. Retina. 2011 Feb;31(2):401-4. We appreciate these additional reference list. We are aware of these publications and have added the Archer neonatal network example. Thank you again for the opportunity to revise and resubmit this manuscript. 30 Oct 2020 Publication Rates from Biomedical and Behavioral and Social Science R01s funded by the National Institutes of Health PONE-D-20-26044R1 Dear Dr. Riley, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Dr. Sakamuri V. 
Reddy Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: 6 Nov 2020 PONE-D-20-26044R1 Publication Rates from Biomedical and Behavioral and Social Science R01s funded by the National Institutes of Health Dear Dr. Riley: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Sakamuri V. Reddy Academic Editor PLOS ONE
References: 15 in total

1.  Time to publication among completed clinical trials.

Authors:  Joseph S Ross; Marian Mocanu; Julianna F Lampropulos; Tony Tse; Harlan M Krumholz
Journal:  JAMA Intern Med       Date:  2013-05-13       Impact factor: 21.873

2.  Compliance with results reporting at ClinicalTrials.gov.

Authors:  Monique L Anderson; Karen Chiswell; Eric D Peterson; Asba Tasneem; James Topping; Robert M Califf
Journal:  N Engl J Med       Date:  2015-03-12       Impact factor: 91.245

3.  Improving publication rates in a collaborative clinical trials research network.

Authors:  Stephanie Wilson Archer; Waldemar A Carlo; William E Truog; David K Stevenson; Krisa P Van Meurs; Pablo J Sánchez; Abhik Das; Uday Devaskar; Leif D Nelin; Carolyn M Petrie Huitema; Margaret M Crawford; Rosemary D Higgins
Journal:  Semin Perinatol       Date:  2016-08-08       Impact factor: 3.300

4.  Publication of trials funded by the National Heart, Lung, and Blood Institute.

Authors:  David Gordon; Wendy Taddei-Peters; Alice Mascette; Melissa Antman; Peter G Kaufmann; Michael S Lauer
Journal:  N Engl J Med       Date:  2013-11-14       Impact factor: 91.245

5.  Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis.

Authors:  Joseph S Ross; Tony Tse; Deborah A Zarin; Hui Xu; Lei Zhou; Harlan M Krumholz
Journal:  BMJ       Date:  2012-01-03

6.  Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level.

Authors:  B Ian Hutchins; Xin Yuan; James M Anderson; George M Santangelo
Journal:  PLoS Biol       Date:  2016-09-06       Impact factor: 8.029

7.  Registration of published randomized trials: a systematic review and meta-analysis.

Authors:  Ludovic Trinquart; Adam G Dunn; Florence T Bourgeois
Journal:  BMC Med       Date:  2018-10-16       Impact factor: 8.775

8.  Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation.

Authors:  Kristin Rising; Peter Bacchetti; Lisa Bero
Journal:  PLoS Med       Date:  2008-11-25       Impact factor: 11.069

9.  Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review.

Authors:  Kerry Dwan; Carrol Gamble; Paula R Williamson; Jamie J Kirkham
Journal:  PLoS One       Date:  2013-07-05       Impact factor: 3.240

10.  Public availability of results of observational studies evaluating an intervention registered at ClinicalTrials.gov.

Authors:  Marie Baudart; Philippe Ravaud; Gabriel Baron; Agnes Dechartres; Romana Haneef; Isabelle Boutron
Journal:  BMC Med       Date:  2016-01-28       Impact factor: 8.775

