| Literature DB >> 26039440 |
Mark D Lindner, Richard K Nakamura.
Abstract
The predictive validity of peer review at the National Institutes of Health (NIH) has not yet been demonstrated empirically. It might be assumed that the most efficient and expedient test of the predictive validity of NIH peer review would be an examination of the correlation between percentile scores from peer review and bibliometric indices of the publications produced from funded projects. The present study used a large dataset to examine the rationale for such a study, to determine if it would satisfy the requirements for a test of predictive validity. The results show significant restriction of range in the applications selected for funding. Furthermore, those few applications that are funded with slightly worse peer review scores are not selected at random or representative of other applications in the same range. The funding institutes also negotiate with applicants to address issues identified during peer review. Therefore, the peer review scores assigned to the submitted applications, especially for those few funded applications with slightly worse peer review scores, do not reflect the changed and improved projects that are eventually funded. In addition, citation metrics by themselves are not valid or appropriate measures of scientific impact. The use of bibliometric indices on their own to measure scientific impact would likely increase the inefficiencies and problems with replicability already largely attributed to the current over-emphasis on bibliometric indices. Therefore, retrospective analyses of the correlation between percentile scores from peer review and bibliometric indices of the publications resulting from funded grant applications are not valid tests of the predictive validity of peer review at the NIH.Entities:
Year: 2015 PMID: 26039440 PMCID: PMC4454673 DOI: 10.1371/journal.pone.0126938
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. [A] Percent of applications funded decreases as peer review percentile scores increase. Approximately 95% of all applications with peer review percentile scores in the 0.1–10.0 range are funded, but only 3.2% of the applications with peer review scores in the 35.1–40 percentile range are funded. [B] Cumulative percentage of all funded applications with increasing peer review percentile scores. Almost 50% of all funded applications have peer review percentile scores in the 0.1–10 range, and 97% of all funded applications have peer review percentile scores equal to or less than 30. [C] Number of applications reviewed for each application funded increases as peer review percentiles increase. Almost every application reviewed with a peer review percentile score in the 0.1–10.0 range is funded, but only one of every 6 applications reviewed with peer review percentile scores in the 25.1–30.0 range is funded.
New R01s by Funding Institute.
| Institute | Institute Size (% of NIH Budget) | Award Rate, 2007–2008 | % of Awards with Percentile Scores Above the Award Rate |
|---|---|---|---|
| NCI | 16.4% | 18% | 13% |
| NIAID | 15.1% | 16% | 12% |
| NHLBI | 10.0% | 19% | 10% |
| NIGMS | 6.6% | 22% | 17% |
| NIDDK | 6.4% | 19% | 18% |
| NINDS | 5.3% | 18% | 18% |
| NIMH | 4.8% | 19% | 15% |
| NICHD | 4.3% | 16% | 10% |
| NCRR | 3.9% | 18% | 24% |
| NIA | 3.6% | 19% | 3% |
| NIDA | 3.4% | 21% | 14% |
| NIEHS | 2.5% | 16% | 30% |
| NEI | 2.3% | 24% | 22% |
| NIAMS | 1.7% | 18% | 12% |
| NHGRI | 1.7% | 28% | 12% |
| NIAAA | 1.5% | 24% | 8% |
| NIDCD | 1.3% | 25% | 21% |
| NIDCR | 1.3% | 21% | 15% |
| NLM | 1.1% | 21% | 20% |
| NIBIB | 1.0% | 19% | 9% |
| NINR | 0.5% | 22% | 30% |
| NCCAM | 0.4% | 10% | 42% |
| FIC | 0.2% | 31% | 21% |
Fig 2. Awarded applications are classified by the range of percentile scores: 0.1–10 (N = 3,169), 10.1–20 (N = 2,486), 20.1–30 (N = 961), 30.1–40 (N = 178), and 40.1–60 (N = 36).
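The per-bin counts given for Fig 2 are sufficient to reproduce the cumulative percentages quoted in the Fig 1 [B] caption; a quick sketch:

```python
# Recomputing the Fig 1 [B] cumulative percentages from the per-bin
# counts of funded applications reported in Fig 2.
counts = {
    "0.1–10":  3169,
    "10.1–20": 2486,
    "20.1–30":  961,
    "30.1–40":  178,
    "40.1–60":   36,
}
total = sum(counts.values())

cumulative = 0
for bin_range, n in counts.items():
    cumulative += n
    pct = 100 * n / total
    cum_pct = 100 * cumulative / total
    print(f"{bin_range:>8}: {pct:5.1f}%  (cumulative {cum_pct:5.1f}%)")
```

The first bin works out to about 46.4% of the 6,830 funded applications ("almost 50%"), and the cumulative share at or below a percentile score of 30 is about 96.9% (the "97%" in the caption).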