Mark Skopec, Hamdi Issa, Julie Reed, Matthew Harris.
Abstract
BACKGROUND: Descriptive studies examining publication rates and citation counts demonstrate a geographic skew toward high-income countries (HICs), and research from low- and middle-income countries (LMICs) is generally underrepresented. This has been attributed in part to reviewers' and editors' preference for HIC sources; however, in the absence of controlled studies, it is impossible to determine whether this reflects bias or whether variations in the quality or relevance of the articles under review explain the geographic divide. This study synthesizes the evidence from randomized and controlled studies that explore geographic bias in the peer review process.
Keywords: Geographic bias; Narrative synthesis; Randomized controlled trials; Systematic review
Year: 2020 PMID: 31956434 PMCID: PMC6961296 DOI: 10.1186/s41073-019-0088-0
Source DB: PubMed Journal: Res Integr Peer Rev ISSN: 2058-8615
Combination of keywords and MeSH terms used to search the databases
MeSH terms are followed by “/”. Keywords are in quotation marks. Asterisks (*) denote truncation of keywords
Fig. 1 Flowchart detailing the study selection process
Summary characteristics of included studies
| Title | Author and year | Journal | Study question(s) | Sample size | Study design | Intervention | Outcome measures | Results |
|---|---|---|---|---|---|---|---|---|
| Do physicians judge a study by its cover? An investigation of journal attribution bias | Christakis, 2000 | Journal of Clinical Epidemiology | Does attribution of an article to a "high-prestige" journal versus a "low-prestige" journal affect readers' impressions of the quality of the article, and does formal training in epidemiology and biostatistics mitigate these effects? | 264 physicians who listed internal medicine as their primary specialty, recruited from the American Medical Association's master list of licensed physicians. | Randomized, single-blind. It is unclear from the article how randomization was achieved. | Participants were asked to read an article and abstract from either the Southern Medical Journal (SMJ) or the New England Journal of Medicine (NEJM), presented either attributed or unattributed. After each article or abstract, respondents rated the quality of the study, the appropriateness of the methodology employed, the significance of the findings, and their likely effect on practice. Ratings were on a Likert scale, and responses were aggregated into an 'Impression Score' ranging from 5–25. | Difference in 'Impression Score' between reviewers who read correctly attributed abstracts or articles and those who read unattributed abstracts or articles. | Mean differences in impression scores associated with attribution of an article or an abstract to the NEJM were 0.71 [95% CI (−0.44–1.87)] and 0.50 [95% CI (−0.87–1.87)], respectively. Mean differences associated with attribution of an article or an abstract to the SMJ were −0.12 [95% CI (−1.53–1.30)] and −0.95 [95% CI (−2.41–0.52)], respectively. |
| Explicit bias toward high-income country research: a randomized, blinded, crossover experiment of English clinicians | Harris, 2017 | Health Affairs | Assessed the within-individual change in evaluation of research abstracts when the source is experimentally altered, in this case between high- and low-income countries. | 347 clinicians, of any specialty, living and practicing in England. | Randomized, controlled, blinded crossover experiment. The survey platform carried out simple randomization in real time as respondents entered the survey. | Participants rated the same abstracts on two separate occasions, one month apart, with the source of the abstracts changing, without their knowledge, between high- and low-income countries. Participants rated the abstracts on strength of evidence, relevance to their practice, and likelihood of recommending the paper to a colleague, assigning scores in each category on a scale of 0–100. | Difference in review scores between the two rounds of reviewing, thereby comparing review scores for HIC abstracts with review scores for LIC abstracts. | Overall mean difference in rating of strength between abstracts from HIC and LIC sources was 1.35 [95% CI (−0.06–2.76)]. Overall mean differences in ratings of relevance and of likelihood of recommendation to a peer between abstracts from HIC and LIC sources were 4.50 [95% CI (3.16–5.83)] and 3.05 [95% CI (1.77–4.33)], respectively. |
| Reviewer bias in single- versus double-blind peer review | Tomkins, 2017 | Proceedings of the National Academy of Sciences | Investigated bias resulting from the fame or quality of the authors' institution(s). | 1,957 review committee members at the Web Search and Data Mining (WSDM 2017) conference. | Randomized, double- and single-blind. The authors do not specify how reviewers were randomized into their respective groups. | Four committee members reviewed each paper. Two of the four reviewers were given access to author information (single-blind); the other two were not (double-blind). Reviewer behavior was studied in two settings: reviewing papers, and a preliminary "bidding" stage in which reviewers express interest in papers to review. | A "blinded paper quality score" (bpqs, the average quality score of the double-blind reviews for a paper) is used as a proxy measure for the intrinsic quality of that paper. This is used to calculate the odds of acceptance among single- versus double-blind reviewers. | The predicted odds for review score prediction for "Top universities" are 1.58 [95% CI (1.09–2.29)]. The predicted odds for "Paper from the U.S." are 1.01 [95% CI (0.66–1.55)]. The predicted odds for "Same country as reviewer" are 1.15 [95% CI (0.71–1.86)]. |
Fig. 2 Results from Harris et al. [23]. Dotted line at 0 represents no difference in review scores. Overall mean difference in rating of strength between abstracts from HIC and LIC sources was 1.35 [95% CI (−0.06–2.76)]. Overall mean differences in ratings of relevance and of likelihood of recommendation to a peer between abstracts from HIC and LIC sources were 4.50 [95% CI (3.16–5.83)] and 3.05 [95% CI (1.77–4.33)], respectively
Fig. 3 Results from Tomkins et al. [24]. Dotted line at 1 represents no difference in odds of acceptance or rejection. The predicted odds for review score prediction for "Top universities" are 1.58 [95% CI (1.09–2.29)]. The predicted odds for "Paper from the U.S." are 1.01 [95% CI (0.66–1.55)]. The predicted odds for "Same country as reviewer" are 1.15 [95% CI (0.71–1.86)]
Fig. 4 Results from Christakis et al. [25]. Dotted line at 0 represents no difference in impression scores. Mean differences in impression scores associated with attribution of an article or an abstract to the NEJM were 0.71 [95% CI (−0.44–1.87)] and 0.50 [95% CI (−0.87–1.87)], respectively. Mean differences in impression scores associated with attribution of an article or an abstract to the SMJ were −0.12 [95% CI (−1.53–1.30)] and −0.95 [95% CI (−2.41–0.52)], respectively
Fig. 5 Risk of bias assessment. Risk of bias in each included study was assessed using the Cochrane Collaboration's Risk of Bias Assessment tool [26]. Green indicates a low risk, yellow a medium risk, and red a high risk of bias. A more detailed discussion can be found in Additional file 1
Fig. 6 Heuristic framework. Reviewers may see "Harvard University" and, through a series of reasonable assumptions, arrive at the conclusion that Harvard produces high-quality research (blue arrows). The heuristic occurs when reviewers see "Harvard University" and assume, without further scrutiny, that the research is of high quality, when this may not be the case