| Literature DB >> 33906604 |
Anna H Noel-Storr, Patrick Redmond, Guillaume Lamé, Elisa Liberati, Sarah Kelly, Lucy Miller, Gordon Dooley, Andy Paterson, Jenni Burt.
Abstract
BACKGROUND: Crowdsourcing engages the help of large numbers of people in tasks, activities or projects, usually via the internet. One application of crowdsourcing is the screening of citations for inclusion in a systematic review. There is evidence that a 'Crowd' of non-specialists can reliably identify quantitative studies, such as randomized controlled trials, through the assessment of study titles and abstracts. In this feasibility study, we investigated crowd performance of an online, topic-based citation-screening task, assessing titles and abstracts for inclusion in a single mixed-studies systematic review.
Entities:
Keywords: Citations; Citizen science; Crowdsourcing; Evidence synthesis; Information retrieval; Systematic review
Mesh:
Year: 2021 PMID: 33906604 PMCID: PMC8077753 DOI: 10.1186/s12874-021-01271-4
Source DB: PubMed Journal: BMC Med Res Methodol ISSN: 1471-2288 Impact factor: 4.615
Fig. 1 Screenshot from the task hosted on the Cochrane Crowd platform
The agreement algorithm used for the Crowd task. A break in the consecutive chain of decisions, or any 'unsure' classification, sends the record to resolvers to make the final decision; a minimal sketch of this rule follows the table
| Decision 1 | Decision 2 | Decision 3 | Final decision |
|---|---|---|---|
| Potentially relevant | Potentially relevant | Potentially relevant | Potentially relevant |
| Not relevant | Not relevant | Not relevant | Not relevant |
| Potentially relevant | Potentially relevant | Not relevant | Resolver decision |
| Potentially relevant | Not relevant | Not applicable | Resolver decision |
| Not relevant | Not relevant | Potentially relevant | Resolver decision |
| Not relevant | Potentially relevant | Not applicable | Resolver decision |
| Unsure | Not applicable | Not applicable | Resolver decision |
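The decision rule summarised in the table can be written out as a short function. The sketch below only illustrates the logic as described; it is not the actual Cochrane Crowd platform code, and the function name, the string labels, and the "Pending" state for records with fewer than three decisions are assumptions.

```python
def final_decision(classifications):
    """Return the final screening decision for one record.

    `classifications` is the ordered list of crowd decisions, each one of
    "Potentially relevant", "Not relevant", or "Unsure". Three consecutive
    matching relevance decisions settle the record; any break in the chain,
    or any "Unsure", sends the record to a resolver.
    """
    previous = None
    for count, decision in enumerate(classifications, start=1):
        if decision == "Unsure":
            return "Resolver decision"   # unsure always escalates
        if previous is not None and decision != previous:
            return "Resolver decision"   # broken chain of agreement
        previous = decision
        if count == 3:
            return decision              # three matching decisions
    return "Pending"                     # fewer than three decisions so far
```

For example, `final_decision(["Potentially relevant", "Not relevant"])` returns `"Resolver decision"`, matching the fourth row of the table, while three matching classifications return that classification as the final decision.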
Outcome variables assessed
| Outcome variable | Definition |
|---|---|
| Final sensitivity | The number of citations deemed relevant by the research team (included in the final set of studies for the review after both screening and full-text review) that were correctly identified by the crowd (true positives), divided by the number of true positives plus the number of citations included in the final set of studies by the research team that were not included by the crowd (false negatives). |
| Screening specificity | The number of citations excluded by the crowd that were also excluded from the final set of studies by the research team (true negatives), divided by the number of true negatives plus the number of citations included by the crowd that were not deemed relevant by the research team after both screening and full-text review (false positives). |
| Efficiency | Total time taken for the crowd versus the research team to complete the screening task. |
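Written as formulas, using the true-positive (TP), false-negative (FN), true-negative (TN) and false-positive (FP) counts defined in the table above, the two accuracy outcomes reduce to the standard forms:

```latex
\mathrm{Sensitivity} = \frac{TP}{TP + FN},
\qquad
\mathrm{Specificity} = \frac{TN}{TN + FP}
```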
Fig. 2 Citation screening decisions made by the review team and the crowd. ¹Sensitivity and specificity compared to the core author team as the reference standard
Fig. 3 Clustered bar chart showing crowd contributor backgrounds for the original and replication tasks. 63 out of 78 (81%) participants completed the survey for the original task; 64 out of 85 (75%) participants completed the survey for the replication task