Jonathan Shepherd, Geoff K Frampton, Karen Pickett, Jeremy C Wyatt.
Abstract
OBJECTIVE: To investigate methods and processes for timely, efficient and good quality peer review of research funding proposals in health.
Year: 2018 | PMID: 29750807 | PMCID: PMC5947897 | DOI: 10.1371/journal.pone.0196914
Source DB: PubMed | Journal: PLoS One | ISSN: 1932-6203 | Impact factor: 3.240
Fig 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart.
Overview of peer review (PR) studies included in the systematic review.
| Study ID, country | Innovation intervention(s) | Comparator(s) | Efficiency outcome(s) | Effectiveness outcome(s) | Process measures | Study design |
|---|---|---|---|---|---|---|
| Barnett et al. 2015 | Short proposal with simplified scoring and accelerated PR | None | Proposal preparation time; PR time | Funding outcome | Applicants’ views (summary only) | Observational; part of a quality improvement evaluation |
| Fleurence et al. 2014 | Process to engage patients and other stakeholders in PR | None | Reviewer agreement | Funding outcome | Reviewers’ views | Analysis of PCORI inaugural funding round (unclear whether retrospective) |
| Gallo et al. 2013 | Teleconference PR panels; videoconferencing panels (pilot test) | Face-to-face PR panels | PR time; Reviewer agreement | Funding outcome (assumption-based) | Reviewers’ views | Retrospective analysis |
| Herbert et al. 2015 | 2 simplified face-to-face assessments: (1) 7-reviewer panel assessed 9-page proposal + applicant track record; (2) 2-reviewer panel assessed 9-page proposal only | Standard face-to-face assessment: 12-reviewer panel assessed longer proposals (around 100 pages) | Costs of PR; PR time | Funding outcome | None | Prospective parallel group study |
| Holliday and Robotin 2010 | Delphi process for ranking proposals | None | Reviewer agreement | None | Reviewers’ views | Prospective single group study |
| Mayo et al. 2006 | 2-reviewer ‘CLASSIC’ critique method | All panel members’ independent ‘RANKING’ method | Reviewer agreement; Optimal number of reviewers | Funding outcome | None | Prospective parallel group study |
| Sattler et al. 2015 | 11-minute PR training video to improve reviewer reliability | No-training group (included basic video) | Accuracy of rating scale selection; PR time; Reviewer agreement | None | None | Randomised controlled trial |
| Vo et al. 2015 | Virtual PR | Face-to-face PR | Cost per reviewer; PR time | None | Reviewers’ views | Retrospective comparison of several virtual and face-to-face meetings conducted in the same year |
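Several of the studies above report “reviewer agreement” as an efficiency or effectiveness outcome. The review does not state which agreement statistic each study used; for two reviewers, a common chance-corrected choice is Cohen’s kappa. The sketch below is illustrative only, using hypothetical fund/reject decisions rather than data from any included study.

```python
# Illustrative sketch: Cohen's kappa as one common way to quantify
# reviewer agreement. The decisions below are hypothetical, not data
# from any study included in the review.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items with identical ratings.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category counts.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two reviewers' fund/reject decisions on ten hypothetical proposals.
reviewer_1 = ["fund", "fund", "reject", "fund", "reject",
              "reject", "fund", "reject", "fund", "reject"]
reviewer_2 = ["fund", "reject", "reject", "fund", "reject",
              "reject", "fund", "fund", "fund", "reject"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")  # kappa = 0.60
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; panel-level studies with more than two reviewers typically use generalisations such as Fleiss’ kappa or the intraclass correlation coefficient.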
Factors potentially affecting internal validity of the studies, and key uncertainties identified.
| Study | Factors supporting internal validity | Factors potentially reducing internal validity | Key uncertainties |
|---|---|---|---|
| Barnett et al. 2015 | Prospective; replicated (4 funding rounds over 2 years) | Single-group cross-sectional type study; descriptive analysis with no quantitative testing; continuous quality improvement assessment but without a defined baseline condition against which to assess changes | Unclear whether data collection instruments were validated; unclear whether applicant views were reported selectively; unclear whether reviewers were aware they were in a research study (unclear performance bias risk) |
| Fleurence et al. 2014 | Focus groups (providing some stakeholder views) were based on randomly-selected stakeholders | Single-group, cross-sectional/before-after type study | Unclear whether efficiency and effectiveness assessments were prospective or retrospective; unclear whether web survey instrument was validated; unclear timing of focus groups & web survey (unclear recall bias risk); unclear analysis method for focus group & web survey results; unclear whether reviewers were aware they were in a research study (unclear performance bias risk) |
| Gallo et al. 2013 | Replicated (2 funding rounds over 2 years); reviewers unaware they were in a research study | Retrospective; case-control type study | Unclear whether the questionnaire was validated; unclear whether reviewer views were reported selectively; not fully clear who consumer reviewers were; reviewer opinions sought by questionnaire but limited description given |
| Herbert et al. 2015 | Prospective; parallel 3-group study | Non-randomised; the number of reviewers, duration of review, length of proposals & the scoring approaches differed between simplified and standard peer review approaches and therefore effects of these are not separable | Unclear when data on costs and timing were collected in relation to the timing of peer review (unclear recall bias risk); unclear whether the sample of grant proposals was representative, as it was a convenience sample acquired through existing contacts; unclear whether reviewers were aware they were in a research study (unclear performance bias risk) |
| Holliday and Robotin 2010 | Prospective | Single-group study; small set of questions may have limited the views that reviewers could provide | Unclear whether data collection instruments were validated; unclear how reviewers’ views were sought and analysed; unclear why views were not reported for all reviewers (unclear risk of selective reporting bias); timing of data collection unclear (but appears to have been within a four-week period); unclear whether reviewers were aware they were in a research study (unclear performance bias risk) |
| Mayo et al. 2006 | Prospective; parallel 2-group study | Non-randomised; one member of each two-reviewer group was also a committee member (i.e. non-independence of innovation and comparator); provision of ranking criteria differed between two-reviewer and committee reviewer groups and therefore effects of these are not separable | Unclear what the 5-point rating scale for proposals was; unclear whether the scoring and ranking sheets were validated; unclear whether reviewers were aware they were in a research study (unclear performance bias risk) |
| Sattler et al. 2015 | Prospective; randomised; parallel 2-group study; reported that there were no missing data (low attrition bias risk) | | Unclear methods of randomisation and whether allocation was concealed (risk of selection bias unclear); unclear whether reviewers were aware they were involved in a research study (blinding not reported; unclear performance bias risk); unclear how time taken to read grant criteria information was measured; unclear how long after the innovation or comparator the reviewer questionnaire was administered (unclear recall bias risk); unclear whether the intervention and comparator were run at the same time (unclear contamination bias risk) |
| Vo et al. 2015 | Replicated (6 parallel innovation sessions within 1 month) | Retrospective; case-control type study; no details of the comparator face-to-face meetings reported, so unclear whether they were reflective of usual Agency for Healthcare Research and Quality (AHRQ) face-to-face sessions and how different they were from the innovation | Only limited details of the peer review process reported; unclear how many proposals each reviewer was required to read, how many reviewers were required to read each proposal, or whether this differed between sessions; unclear process for scoring proposals; unclear interval between peer review and questionnaire (unclear recall bias risk); unclear whether questionnaire was tested or validated; low questionnaire response rate, so unclear representativeness of results; uncertainty around the cost and time savings since similarity of innovation and comparator sessions unclear |
Factors potentially influencing generalisability of the studies.
| Characteristic | Barnett et al. 2015 | Fleurence et al. 2014 | Gallo et al. 2013 | Herbert et al. 2015 | Holliday and Robotin 2010 | Mayo et al. 2006 | Sattler et al. 2015 | Vo et al. 2015 |
|---|---|---|---|---|---|---|---|---|
| Setting | Implemented in review sessions of a regional funder (AusHSI) | Implemented in review sessions of a national funder (PCORI) | Implemented in review sessions of a national funder (AIBS) | Implemented in review sessions of a national funder (NHMRC) | Implemented in review sessions of a national funder (CCNSW) | Implemented in review sessions of a local university pilot project (MUHCRI) | Study focusing specifically on reliability of scoring in an ‘artificial’ experimental setting | Implemented in review sessions of a national funder (AHRQ) |
| Research topics | Broad range of applied health services research topics (examples reported) | Comparative effectiveness research, but health topics not specified | Broad range of biomedical & health projects (examples reported) | Basic science & public health, but topics not specified | Pancreatic cancer | Broad range including clinical, epidemiological, health services | Not applicable (experimental study) | Not reported, but funder has broad health topic remit |
| Funding amount | Australian $80,000 per 12-month project | US $1,500,000 in direct costs over 3 years | US $725,000–1,000,000 in direct costs per 3-year project | Not reported | Australian $100,000 per 12-month project | Not reported | Not applicable (experimental study) | Not reported |
| Number of proposals | 31 to 89 per review session (4 sessions) | 480 | 1,600 over 4-year period (291 to 347 per year) | 72 (voluntary sample of submissions) | 10 | 32 | Not applicable (experimental study) | 198 reviewed (6 to 59 per session), of which 128 discussed (6 to 34 per session) (6 sessions) |
| Proposal length | 1,200-word limit | Not reported | Not reported | 9 pages (comparator circa 100 pages) | 6 pages | Maximum of 5 pages | Not applicable (experimental study) | Not reported |
| Review criteria | Applicants’ partnership, research question, method, budget, and expected improvements to health services | 8 PCORI Merit criteria (relating to scientific rigour, patient-centeredness, engagement of patients and stakeholders) | Scientific merit | Different sections of the full NHMRC proposal form | Scientific merit, innovativeness & level of risk | Innovation: no criteria used. Comparator: research question, background, population characteristics, methods, measures & data analysis (5-point scale) | Not applicable (experimental study) | Not reported |
| Number of reviewers | 9 (specialisms reported) | Phase 1: 363 scientists; Phase 2: 111 (59 scientists, 21 patients & 31 stakeholders) | 7–12 subject experts + ‘in recent years’ ≥ 1 consumer reviewer per panel | 2 (‘journal’ panel), 7 (simplified panel), or 12 (comparator) | 5 | 11 (innovation); 2 (comparator) | 75 randomly assigned to training and no training; numbers per group not reported | 110 (7 to 24 per session) |
| Type of reviewers | Members of AusHSI scientific review committee (specialisms reported) | Scientists, patients & ‘stakeholders’ (caregivers, including nurses & physicians) | Scientists & ‘consumer’ reviewers (had ‘direct experience’ with relevant diseases) | Senior academic researchers (qualifications & experience given) | Non-conflicted independent holders of overseas (US) pancreatic cancer grants | Members of a university health centre research institute (committee members and experienced researchers) | Public health professors from research universities across the US | Members of study sections or special emphasis panels (no further details) |
| Reviewer recruitment | Not reported (presumed to follow standard AusHSI process) | Open calls for reviewers & automated search using “Reviewer Finder” | Not reported (presumed to follow standard AIBS process) | Sourced from existing contacts, not selected randomly | Not reported | Selected on content, methodology and statistical expertise (process not specified) | Identified from web-based search for public health programmes | Not reported (presumed to follow standard AHRQ process) |
| Reviewer training | None reported | Training (mandatory) given on PCORI review process in webinars & 1-day face-to-face meeting | Received online and face-to-face ‘orientations’ on the process (when applicable) | None reported | None reported | Reviewers were provided with instructions about the processes (no further details) | The innovation was itself a training programme to improve scoring | 30 min of basic training in WebEx software use |
| Duration of peer review | 1.5 to 2 months (submission to notification) per funding round (mean review circa 46 min per proposal) | Not reported | Teleconference: mean 19 to 22 min per proposal; face-to-face: mean 23 to 29 min per proposal | 1.5 days (innovation); 1 week (comparator) | 3 Delphi rounds; total 16 days | Not reported | Not applicable (experimental study) | Virtual review: mean 7.2 hours per session (20 min per proposal); comparator: 9.8 hours per session (26 min per proposal) |
| Electronic tools | Secure web-based portal | Not reported | Bespoke online system for submitting confidential electronic score sheets | Not reported | Scoring sheet (not described) to submit scores online & funder to collate them | Not reported | Not applicable (experimental study) | WebEx software platform |
| Feedback to applicants | Comprehensive, from detailed transcription of discussions | None reported | None reported | None reported | None reported | Explanation of comparator (CLASSIC) scores was provided | Not applicable (experimental study) | None reported |
| Other contextual factors | Criteria for AusHSI funding require partnership between healthcare professional and researcher; AusHSI was a new initiative; process included interviews for shortlisted applicants | After this research study PCORI changed their 2-phase peer review to a 1-phase process | Anonymised written critiques and summary statements were edited by funder’s staff for accuracy and consistency; ad hoc (i.e. not standing) review panels; about 50% of members were new each year | Authors stated the innovation appeared to attract higher-quality proposals than the standard process | | | Reviewers did not assess any research proposals; the training innovation focused specifically on the accuracy of interpreting and applying scoring criteria, and did not address all potential areas of training or sources of ‘noise’ | Ad hoc (unplanned) peer review sessions |