Andrew Tomkins, Min Zhang, William D. Heavlin.
Abstract
Peer review may be "single-blind," in which reviewers are aware of the names and affiliations of paper authors, or "double-blind," in which this information is hidden. Noting that computer science research often appears first or exclusively in peer-reviewed conferences rather than journals, we study these two reviewing models in the context of the 10th Association for Computing Machinery International Conference on Web Search and Data Mining, a highly selective venue (15.6% acceptance rate) in which expert committee members review full-length submissions for acceptance. We present a controlled experiment in which four committee members review each paper. Two of these four reviewers are drawn from a pool of committee members with access to author information; the other two are drawn from a disjoint pool without such access. This information asymmetry persists through the process of bidding for papers, reviewing papers, and entering scores. Reviewers in the single-blind condition typically bid for 22% fewer papers and preferentially bid for papers from top universities and companies. Once papers are allocated to reviewers, single-blind reviewers are significantly more likely than their double-blind counterparts to recommend for acceptance papers from famous authors, top universities, and top companies. The estimated odds multipliers are tangible, at 1.63, 1.58, and 2.10, respectively.
Keywords: double-blind; peer review; scientific method
Year: 2017 PMID: 29138317 PMCID: PMC5715744 DOI: 10.1073/pnas.1707323114
Source DB: PubMed Journal: Proc Natl Acad Sci U S A ISSN: 0027-8424 Impact factor: 11.205
Fig. 1. Cumulative distribution function of number of bids for single- and double-blind reviewers.
Summary of features and prevalence

| Factor | Feature name | No. of papers | Fraction of papers, % |
|---|---|---|---|
| Paper from United States | United States | 176 | 35 |
| Same country as reviewer | Same | 146 | 29 |
| Female author | Wom | 219 | 44 |
| Famous author | Fam | 81 | 16 |
| Academic | Aca | 370 | 74 |
| Top university | Uni | 135 | 27 |
| Top company | Com | 90 | 18 |
Learned coefficients and significance for review score prediction

| Name | Coefficient | SE | Confidence interval | P value | Odds multiplier | bpqs equivalent |
|---|---|---|---|---|---|---|
| Const | −1.83 | 0.24 | [−2.31, −1.36] | 0.000 | 0.16 | — |
| bpqs | 0.80 | 0.08 | [0.64, 0.97] | 0.000 | 2.23 | 1.00 |
| Com | 0.74 | 0.24 | [0.27, 1.21] | 0.002 | 2.10 | 0.92 |
| Fam | 0.49 | 0.22 | [0.05, 0.93] | 0.027 | 1.63 | 0.61 |
| Uni | 0.46 | 0.18 | [0.09, 0.83] | 0.012 | 1.58 | 0.57 |
| Wom | −0.25 | 0.18 | [−0.60, 0.10] | 0.160 | 0.78 | −0.31 |
| Same | 0.14 | 0.24 | [−0.34, 0.62] | 0.564 | 1.15 | 0.17 |
| Aca | 0.06 | 0.22 | [−0.38, 0.51] | 0.775 | 1.07 | 0.08 |
| United States | 0.01 | 0.21 | [−0.42, 0.44] | 0.964 | 1.01 | 0.01 |
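The odds multipliers in the table above follow directly from the logistic-regression coefficients: each one is exp(coefficient), the multiplicative change in the odds of an acceptance recommendation associated with that feature. A minimal check in Python, using coefficient values taken from the table (the fitted model itself is not reproduced in this record):

```python
import math

# Coefficients from the review-score model table above.
coefficients = {"Com": 0.74, "Fam": 0.49, "Uni": 0.46, "Wom": -0.25}

# In logistic regression, exp(b) is the odds multiplier for the feature.
odds_multipliers = {name: math.exp(b) for name, b in coefficients.items()}

for name, om in odds_multipliers.items():
    print(f"{name}: {om:.2f}")
# → Com: 2.10, Fam: 1.63, Uni: 1.58, Wom: 0.78
```

The printed values match the "Odds multiplier" column, confirming how that column was derived.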
Learned coefficients and significance for bid prediction

| Name | Coefficient | SE | Confidence interval | P value | Odds multiplier |
|---|---|---|---|---|---|
| Const | −4.87 | 0.08 | [−5.04, −4.71] | 0.000 | 0.01 |
| bbr | 0.05 | 0.00 | [0.04, 0.05] | 0.000 | 1.05 |
| bbp | 0.08 | 0.00 | [0.07, 0.09] | 0.000 | 1.09 |
| Com | 0.16 | 0.06 | [0.04, 0.28] | 0.010 | 1.17 |
| Uni | 0.12 | 0.05 | [0.03, 0.22] | 0.011 | 1.13 |
| Fam | 0.07 | 0.06 | [−0.06, 0.19] | 0.287 | 1.07 |
| Wom | 0.05 | 0.04 | [−0.04, 0.14] | 0.268 | 1.05 |
| United States | 0.02 | 0.05 | [−0.07, 0.11] | 0.681 | 1.02 |
| Aca | 0.01 | 0.06 | [−0.10, 0.12] | 0.881 | 1.01 |
Aggregate comparison of review statistics

| Measure | Single-blind average | Double-blind average | Mann–Whitney P value |
|---|---|---|---|
| Review length | 2,073 | 2,061 | 0.81 |
| Reviewer score | −2.07 | −1.90 | 0.51 |
| Reviewer rank | 1.89 | 1.87 | 0.52 |
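The Mann–Whitney comparison above tests whether the single- and double-blind review statistics differ in distribution without assuming normality. A self-contained sketch of the test using the normal approximation, on hypothetical review-length samples (the per-review values behind the table are not part of this record):

```python
import math

def mann_whitney_p(xs, ys):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction; an illustration, not a full implementation)."""
    # U counts, over all pairs, how often an x outranks a y (ties count 0.5).
    u = sum((x > y) + 0.5 * (x == y) for x in xs for y in ys)
    n1, n2 = len(xs), len(ys)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical review lengths for the two conditions.
single = [2100, 1900, 2300, 2050, 2000]
double = [2080, 1950, 2150, 2010, 2120]
print(round(mann_whitney_p(single, double), 2))
```

A large p-value, as in the table's length and score rows, indicates no detectable distributional difference between the two reviewer pools.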