Federico Bianchi, Francisco Grimaldo, Giangiacomo Bravo, Flaminio Squazzoni.
Abstract
This paper analyses peer review as a cooperation dilemma through a game-theoretic framework. We built an agent-based model to estimate how much the quality of peer review is influenced by the resource-allocation strategies of scientists dealing with multiple tasks, i.e., publishing and reviewing. We assumed that scientists were sensitive to the acceptance or rejection of their manuscripts and to the fairness of the peer review to which they had been exposed before reviewing. We also assumed that they could be either realistic or over-confident about the quality of their manuscripts when reviewing. Furthermore, we assumed they could be sensitive to competitive pressures from the institutional context in which they were embedded. Results showed that the bias and quality of publications depend greatly on reviewer motivations, but also that contextual pressures can have a negative effect. However, while excessive competition can undermine efforts to minimise publication bias, a certain level of competition is instrumental in ensuring high publication quality, especially when scientists accept to review for reciprocity motives.
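To make the modelling approach concrete, the sketch below implements a toy agent-based peer-review loop in the spirit of the abstract: agents split resources between writing and reviewing, referee quality degrades evaluations, and a reciprocity rule conditions reviewing effort on how fairly an agent was last treated. This is an illustrative assumption-laden sketch, not the authors' actual model (the PRG); all class names, behavioural rules, and numeric values except the uniform resource distribution and the 500-step horizon (both listed in the record below) are invented here, and the population and horizon are scaled down.

```python
import random

random.seed(42)

# Toy sketch of a peer-review agent-based model, loosely inspired by the
# abstract above. All behavioural rules and numbers here are illustrative
# assumptions, not the authors' actual PRG specification.

N_SCIENTISTS = 50  # illustrative; scaled down from the record's table values
N_STEPS = 100      # illustrative; the paper's table lists 500 steps


class Scientist:
    def __init__(self):
        self.resources = random.uniform(0.0, 1.0)  # uniform, as in the paper
        self.review_effort = 0.5                   # share spent on reviewing
        self.last_treated_fairly = True

    def write(self):
        # Manuscript quality grows with the resources kept for writing.
        return self.resources * (1.0 - self.review_effort) + random.uniform(0, 0.2)

    def review(self, true_quality):
        # Low reviewing effort -> noisier, more biased evaluation.
        noise = random.uniform(-1.0, 1.0) * (1.0 - self.review_effort)
        return true_quality + noise

    def adapt(self):
        # Reciprocity rule (assumption): review carefully only if the
        # last review received was judged fair.
        self.review_effort = 0.8 if self.last_treated_fairly else 0.2


def run():
    agents = [Scientist() for _ in range(N_SCIENTISTS)]
    bias_history = []
    for _ in range(N_STEPS):
        random.shuffle(agents)
        step_bias = []
        # Pair each author with the next agent in the shuffle as referee,
        # so every agent plays both roles each round:
        for author, referee in zip(agents, agents[1:] + agents[:1]):
            quality = author.write()
            evaluation = referee.review(quality)
            author.last_treated_fairly = abs(evaluation - quality) < 0.25
            step_bias.append(abs(evaluation - quality))
            author.adapt()
        bias_history.append(sum(step_bias) / len(step_bias))
    return bias_history


if __name__ == "__main__":
    history = run()
    print(f"mean evaluation bias, first 10 steps: {sum(history[:10]) / 10:.3f}")
    print(f"mean evaluation bias, last 10 steps:  {sum(history[-10:]) / 10:.3f}")
```

The per-step average of |evaluation − true quality| plays the role of the "evaluation bias" tracked in the figures below, under the stated toy assumptions.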
Keywords: Agent-based model; Cooperation; Game theory; Peer review; Scientist strategies
Year: 2018 PMID: 30147203 PMCID: PMC6096663 DOI: 10.1007/s11192-018-2825-4
Source DB: PubMed Journal: Scientometrics ISSN: 0138-9130 Impact factor: 3.238
Authors’ payoffs in the simplified PRG
Payoffs are expressed as probability of acceptance of the manuscript
Combined payoff matrix in the PRG
In each round, players play as both authors and referees
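Since each player acts as both author and referee in a round, a combined payoff for a strategy pair can be read off the author-side matrix twice, once per direction. The record above does not include the actual payoff values, so the numbers below are placeholders; only the interpretation of payoffs as acceptance probabilities comes from the captions above.

```python
# Illustrative sketch of a combined payoff matrix for a round in which each
# player acts both as author and as referee. The payoff numbers below are
# placeholders (the record above omits the actual values); payoffs are read
# as probabilities of manuscript acceptance, per the caption above.

from itertools import product

# Each player splits effort toward "publish" (favour writing) or
# "review" (favour refereeing). Assumed acceptance probabilities:
AUTHOR_PAYOFF = {
    # (own strategy, counterpart's strategy) -> P(own manuscript accepted)
    ("publish", "publish"): 0.5,  # strong paper, careless review
    ("publish", "review"):  0.8,  # strong paper, careful review
    ("review", "publish"):  0.2,  # weak paper, careless review
    ("review", "review"):   0.4,  # weak paper, careful review
}


def combined_payoff(a, b):
    """Payoffs when both roles are played in the same round:
    player A authors a paper refereed by B, and vice versa."""
    return AUTHOR_PAYOFF[(a, b)], AUTHOR_PAYOFF[(b, a)]


for a, b in product(("publish", "review"), repeat=2):
    pa, pb = combined_payoff(a, b)
    print(f"A={a:7s} B={b:7s} -> payoffs ({pa}, {pb})")
```

With these placeholder values the pair ("publish", "publish") would dominate individually while mutual careful reviewing is collectively better, which is the cooperation-dilemma structure the abstract describes.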
Simulation parameters
| Parameter | Value |
|---|---|
| (name lost in extraction) | 500 |
| Steps | 500 |
| Distribution of (symbol lost in extraction) | Uniform |
| Distribution of (symbol lost in extraction) | Uniform |
| (name lost in extraction) | 6 |
| (name lost in extraction) | 0.05 |
| (name lost in extraction) | 0.25 |
Fig. 1 Evolution of evaluation bias over time for each institutional setting (results averaged over 100 repetitions per scenario). a No comparison, b strive for publication—objective self-evaluation, c strive for publication—overconfident self-evaluation, d strive for excellence—objective self-evaluation, e strive for excellence—overconfident self-evaluation
Average evaluation bias (%) in all simulation scenarios

| Behavioural strategy | No comparison | Strive for publication: objective | Strive for publication: overconfident | Strive for excellence: objective | Strive for excellence: overconfident |
|---|---|---|---|---|---|
| (label lost in extraction) | 57.61 | — | — | — | — |
| (label lost in extraction) | 32.71 | 40.56 | 29.47 | 62.79 | 58.01 |
| (label lost in extraction) | 66.91 | 27.86 | 28.05 | 30.66 | 27.04 |
Average publication quality in all simulation scenarios (values normalized to the range 0–1)

| Behavioural strategy | No comparison | Strive for publication: objective | Strive for publication: overconfident | Strive for excellence: objective | Strive for excellence: overconfident |
|---|---|---|---|---|---|
| (label lost in extraction) | 0.60 | — | — | — | — |
| (label lost in extraction) | 0.98 | 0.71 | 0.85 | 0.44 | 0.49 |
| (label lost in extraction) | 0.41 | 0.00 | 0.01 | 1.00 | 0.36 |
Average publication quality of the top 10 published papers in all simulation scenarios (values normalized to the range 0–1)

| Behavioural strategy | No comparison | Strive for publication: objective | Strive for publication: overconfident | Strive for excellence: objective | Strive for excellence: overconfident |
|---|---|---|---|---|---|
| (label lost in extraction) | 0.51 | — | — | — | — |
| (label lost in extraction) | 0.91 | 0.94 | 1.00 | 0.75 | 0.83 |
| (label lost in extraction) | 0.36 | 0.01 | 0.00 | 0.93 | 0.34 |