Estimating the deep replicability of scientific findings using human and artificial intelligence.

Yang Yang, Wu Youyou, Brian Uzzi

Abstract

Replicability tests of scientific papers show that the majority of papers fail replication. Moreover, failed papers circulate through the literature as quickly as replicating papers. This dynamic weakens the literature, raises research costs, and demonstrates the need for new approaches for estimating a study's replicability. Here, we trained an artificial intelligence model to estimate a paper's replicability using ground truth data on studies that had passed or failed manual replication tests, and then tested the model's generalizability on an extensive set of out-of-sample studies. The model predicts replicability better than the base rate of reviewers and comparably as well as prediction markets, the best present-day method for predicting replicability. In out-of-sample tests on manually replicated papers from diverse disciplines and methods, the model had strong accuracy levels of 0.65 to 0.78. Exploring the reasons behind the model's predictions, we found no evidence for bias based on topics, journals, disciplines, base rates of failure, persuasion words, or novelty words like "remarkable" or "unexpected." We did find that the model's accuracy is higher when trained on a paper's text rather than its reported statistics and that n-grams, higher order word combinations that humans have difficulty processing, correlate with replication. We discuss how combining human and machine intelligence can raise confidence in research, provide research self-assessment techniques, and create methods that are scalable and efficient enough to review the ever-growing numbers of publications, a task that entails extensive human resources to accomplish with prediction markets and manual replication alone.

Keywords:  computational social science; machine learning; replicability

Year:  2020        PMID: 32366645      PMCID: PMC7245108          DOI: 10.1073/pnas.1909046117

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


References (22 in total)

1.  Drug development: Raise standards for preclinical cancer research.

Authors:  C Glenn Begley; Lee M Ellis
Journal:  Nature       Date:  2012-03-28       Impact factor: 49.962

2.  P-curve: a key to the file-drawer.

Authors:  Uri Simonsohn; Leif D Nelson; Joseph P Simmons
Journal:  J Exp Psychol Gen       Date:  2013-07-15

3.  Variation in journal peer review systems. Possible causes and consequences.

Authors:  L L Hargens
Journal:  JAMA       Date:  1990-03-09       Impact factor: 56.272

4.  Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015.

Authors:  Colin F Camerer; Anna Dreber; Felix Holzmeister; Teck-Hua Ho; Jürgen Huber; Magnus Johannesson; Michael Kirchler; Gideon Nave; Brian A Nosek; Thomas Pfeiffer; Adam Altmejd; Nick Buttrick; Taizan Chan; Yiling Chen; Eskil Forsell; Anup Gampa; Emma Heikensten; Lily Hummer; Taisuke Imai; Siri Isaksson; Dylan Manfredi; Julia Rose; Eric-Jan Wagenmakers; Hang Wu
Journal:  Nat Hum Behav       Date:  2018-08-27

5.  Semantics derived automatically from language corpora contain human-like biases.

Authors:  Aylin Caliskan; Joanna J Bryson; Arvind Narayanan
Journal:  Science       Date:  2017-04-14       Impact factor: 47.728

6.  Association between contextual dependence and replicability in psychology may be spurious.

Authors:  Yoel Inbar
Journal:  Proc Natl Acad Sci U S A       Date:  2016-08-10       Impact factor: 11.205

7.  Science of science. (Review)

Authors:  Santo Fortunato; Carl T Bergstrom; Katy Börner; James A Evans; Dirk Helbing; Staša Milojević; Alexander M Petersen; Filippo Radicchi; Roberta Sinatra; Brian Uzzi; Alessandro Vespignani; Ludo Waltman; Dashun Wang; Albert-László Barabási
Journal:  Science       Date:  2018-03-02       Impact factor: 47.728

8.  Is there gender bias in JAMA's peer review process?

Authors:  J R Gilbert; E S Williams; G D Lundberg
Journal:  JAMA       Date:  1994-07-13       Impact factor: 56.272

9.  Reviewer bias in single- versus double-blind peer review.

Authors:  Andrew Tomkins; Min Zhang; William D Heavlin
Journal:  Proc Natl Acad Sci U S A       Date:  2017-11-14       Impact factor: 11.205

10.  Estimating the reproducibility of psychological science.

Authors:  Open Science Collaboration
Journal:  Science       Date:  2015-08-28       Impact factor: 47.728

Cited by (5 in total)

1.  Nonreplicable publications are cited more than replicable ones.

Authors:  Marta Serra-Garcia; Uri Gneezy
Journal:  Sci Adv       Date:  2021-05-21       Impact factor: 14.136

2.  Predicting replicability-Analysis of survey and prediction market data from large-scale forecasting projects.

Authors:  Michael Gordon; Domenico Viganola; Anna Dreber; Magnus Johannesson; Thomas Pfeiffer
Journal:  PLoS One       Date:  2021-04-14       Impact factor: 3.240

3.  Testing the reproducibility and robustness of the cancer biology literature by robot.

Authors:  Katherine Roper; A Abdel-Rehim; Sonya Hubbard; Martin Carpenter; Andrey Rzhetsky; Larisa Soldatova; Ross D King
Journal:  J R Soc Interface       Date:  2022-04-06       Impact factor: 4.118

4.  How failure to falsify in high-volume science contributes to the replication crisis.

Authors:  Sarah M Rajtmajer; Timothy M Errington; Frank G Hillary
Journal:  Elife       Date:  2022-08-08       Impact factor: 8.713

5.  Digital Technologies and Data Science as Health Enablers: An Outline of Appealing Promises and Compelling Ethical, Legal, and Social Challenges. (Review)

Authors:  João V Cordeiro
Journal:  Front Med (Lausanne)       Date:  2021-07-08
