| Literature DB >> 28960834 |
Young Ji Lee, Janet A Arida, Heidi S Donovan.
Abstract
Crowdsourcing is "the practice of obtaining participants, services, ideas, or content by soliciting contributions from a large group of people, especially via the Internet" (Ranard et al., J Gen Intern Med 29:187, 2014). Although crowdsourcing has been adopted in healthcare research and its potential for analyzing large datasets and obtaining rapid feedback has recently been recognized, no systematic reviews of crowdsourcing in cancer research have been conducted. Therefore, we sought to identify applications of, and explore potential uses for, crowdsourcing in cancer research. We conducted a systematic review of articles published between January 2005 and June 2016 on crowdsourcing in cancer research, using PubMed, CINAHL, Scopus, PsycINFO, and Embase. Data from the 12 identified articles were summarized but not combined statistically. The studies addressed a range of cancers (e.g., breast, skin, gynecologic, colorectal, prostate). Eleven studies collected data on the Internet using web-based platforms; one recruited participants in a shopping mall using paper-and-pen data collection. Four studies used Amazon Mechanical Turk for recruiting and/or data collection. Study objectives comprised categorizing biopsy images (n = 6), assessing cancer knowledge (n = 3), refining a decision support system (n = 1), standardizing survivorship care planning (n = 1), and designing a clinical trial (n = 1). Although one study demonstrated that "the wisdom of the crowd" could not replace trained experts, five studies suggested that distributed human intelligence could approximate or support the work of trained experts. Despite its limitations, crowdsourcing has the potential to improve the quality and speed of research while reducing costs. Longitudinal studies should confirm and refine these findings.
Entities:
Keywords: Cancer/neoplasm; citizen science; citizen scientists; crowdsourced; crowdsourcing; diffusion of innovation
Mesh:
Year: 2017 PMID: 28960834 PMCID: PMC5673951 DOI: 10.1002/cam4.1165
Source DB: PubMed Journal: Cancer Med ISSN: 2045-7634 Impact factor: 4.452
Extracted data from articles reviewed
| Primary author, year, country | Cancer type | Study objective | Study outcome | Potential limitations | Dataset | Crowd size, length of time, recruitment platform, monetary incentive |
|---|---|---|---|---|---|---|
| Candido dos Reis (2015), UK, Brazil, The Netherlands, Spain, Australia, Germany | Breast | To evaluate citizen scientists' estrogen receptor (ER) classification and the association between ER status and prognosis by comparing their test performance against that of trained pathologists | Citizen scientists classified ER expression in breast tumors with accuracy similar to that of trained pathologists (area under ROC curve for cancer cell identification: 0.95, 95% CI 0.94-0.96; area under ROC curve for ER status: 0.97, 95% CI 0.96-0.97) | N/A | 12,326 tissue microarrays from samples from 6,378 patients in 10 studies | 98,293 participants; 10/2012-6/2014 (20 months); web-based (Cell Slider), recruited via media/news articles (Facebook, Reddit, UK television channel); no incentive |
| Carter (2014), USA | Ovarian | To examine public awareness and knowledge about ovarian cancer as compared with breast cancer by assessing a reasonable proxy of the US population through crowdsourcing on Amazon Mechanical Turk (AMT) | Survey respondents consistently demonstrated a lack of awareness of ovarian cancer's impact and significance | Respondents limited to US citizens with valid Social Security numbers, potentially over-representing Internet-savvy and/or younger participants | N/A | 202 participants (of 232 eligible); 3/17/2013-3/25/2013 (8 days); Amazon Mechanical Turk; $0.40 per completed survey |
| Eickhoff (2014), Switzerland | Breast | To explicitly compare the crowd-powered expert with the individual performance of the crowd or the expert alone, using a crowd of untrained workers to support medical experts | The crowd was unable to outperform trained medical personnel in any of the investigated settings when used as a replacement for trained experts; however, untrained workers could support the work of experts, making it more efficient and less costly | N/A | 569 biopsy images | 389 individual workers in total (crowd size varied); 1/2014-2/2014; Amazon Mechanical Turk and CrowdFlower; $0.05 per image |
| Ewing (2015), USA, Australia, Canada | Unspecified | To identify the most accurate methods for calling somatic mutations in cancer genomes through crowdsourced evaluation of different approaches | Teams routinely improved overall performance, especially in precision and especially when given initial performance estimates, suggesting that studies may benefit from a multistep procedure | Significant computational demands involved in aggregating multiple algorithms to enhance the quality of mutation calls | 248 analyses of 3 in silico tumors | 21 teams (team sizes unclear); 157 days; recruitment specifics not provided in the article; no incentive |
| Good (2014), USA | Breast | To test the hypothesis that knowledge linking expression patterns of specific genes to breast cancer outcomes could be captured from players of an open, web-based game | Player-generated gene sets performed comparably to gene sets generated using other methods, including those used in commercial tests | Tasks presented in The Cure were knowledge intensive, requiring significant preexisting expertise or a substantial commitment to learning prior to playing the game | 25 different genes | 1,077 players; 9/2012-9/2013; The Cure (web-based game); incentive not specified |
| King (2013), USA | Skin | To explore the potential of crowdsourcing as a component of a more comprehensive skin cancer prevention effort by evaluating whether collective effort outperforms individual effort in the visual identification of atypical nevi | Collective effort overcame the limitations of individual effort and exhibited superior sensitivity (0.90) | N/A | 40 nevi images | 500 participants; duration not specified; recruited from a shopping mall (paper-and-pen data collection); $15 per participant |
| Leiter (2014), USA | Prostate | To evaluate the feasibility and utility of using an Internet-based crowdsourcing platform to inform the design of a clinical trial exploring the use of an antidiabetic drug, metformin, in prostate cancer | Four major and five minor protocol modifications were made, including modifications to eligibility criteria and study procedures | Tech-savvy crowd may not be representative | N/A | 60 physicians/researchers and 42 patients/advocates; 6 weeks; secure web-based platform (Transparency Life Sciences) enabling closed- and open-ended input on key design elements of a planned clinical trial; no incentive mentioned |
| Margolin (2013), USA, UK, Norway | Breast | To assess whether a crowdsourced community challenge would generate models of breast cancer prognosis commensurate with or exceeding current best-in-class approaches | Models submitted by challenge participants quickly exceeded the performance of a baseline model and steadily improved over time, though the improvement over the baseline was modest | Intentional simplification of model and analysis strategies to focus only on the concordance index would need refinement to yield more nuanced and meaningful data in future studies | 1,981 breast cancer samples | 354 registered participants from more than 35 countries; 7/2012-10/2012, in three phases (orientation, training, and validation); the Sage Bionetworks-DREAM Breast Cancer Prognosis Challenge (recruitment strategies not specified in the article); no incentive mentioned |
| McKenna (2012), USA | Colorectal | To investigate human performance in classifying polyp candidates under different presentation strategies | Distributed human intelligence improved significantly with the additional information provided about the candidate polyp compared with a single image | Only one radiologist served as the expert | 600 polyp candidates from 50 patients | 160 independent knowledge workers; duration not specified; Amazon Mechanical Turk; $0.01 per human intelligence task plus a $5 bonus for the best knowledge workers |
| Parry (2015), USA | Survivorship care planning | To increase the use of publicly available shared measures to enable comparability across studies and to facilitate identification of strategies for implementing care planning for cancer survivors (or barriers to that planning) | Provided a space to connect researchers and practitioners in ways usually not possible, but demonstrated that barriers to data harmonization cannot be overcome through social media alone | Small crowd | 7 domains comprising 51 constructs, for which there were 124 measures | 79 unique users; 2/2012-8/2012 (6 months); National Cancer Institute (NCI) Grid-Enabled Measures database; no incentive |
| Santiago-Rivas (2015), USA | Skin | To determine whether people could be differentiated on the basis of their sun protection belief profiles and individual characteristics | Identified three distinct clusters of sun protection barriers and three distinct clusters of sun protection facilitators | Potential bias in the interpretation required for identifying subgroups (replication across two samples and use of variables not included in the clustering process were used to limit bias) | 40 sun protection belief questions | 461 participants; one day in July 2014; Amazon Mechanical Turk; $0.40 per survey |
| Wagholikar (2013), USA | Cervical | To report the methodology used to evaluate and improve a clinical decision support system (CDSS) with the participation of multiple users and experts before clinical deployment | Mismatch between provider and CDSS recommendations in 75 of 169 cases | Expert review only for cases of mismatch; reviewers were not blinded to the source of recommendations (provider vs. CDSS), though they were blinded to the identity of providers | 175 test cases from patients at Mayo Clinic | 25 potential users of the CDSS; 4/12/2012-5/4/2012; web-based application deployed on the institution's internal network; no incentive |
Figure 1. Flow diagram of the literature search.
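The image-classification studies in the table (e.g., Candido dos Reis 2015; Eickhoff 2014; McKenna 2012) share one aggregation idea: many noisy individual judgments are combined, typically by voting, and the aggregate is scored against expert labels with ROC/AUC. The sketch below illustrates that pipeline on simulated data; it is not taken from any reviewed study, and N_IMAGES, N_WORKERS, and WORKER_ACCURACY are hypothetical parameters chosen for illustration.

```python
# Illustrative sketch only: majority-vote aggregation of simulated crowd
# labels, scored against simulated expert labels with AUC. All data and
# parameters are hypothetical, not drawn from the reviewed studies.
import random

random.seed(0)

N_IMAGES, N_WORKERS, WORKER_ACCURACY = 200, 25, 0.7

# Simulated expert ("ground truth") labels: True = positive finding.
truth = [random.random() < 0.5 for _ in range(N_IMAGES)]

def worker_vote(label: bool) -> bool:
    """One worker's judgment: correct with probability WORKER_ACCURACY."""
    return label if random.random() < WORKER_ACCURACY else not label

# Crowd score per image: fraction of workers voting "positive".
scores = [
    sum(worker_vote(t) for _ in range(N_WORKERS)) / N_WORKERS
    for t in truth
]

def auc(y_true, y_score):
    """AUC via the rank-sum (Mann-Whitney U) formulation."""
    order = sorted(range(len(y_score)), key=lambda i: y_score[i])
    ranks = [0.0] * len(y_score)
    i = 0
    while i < len(order):
        # Assign tied scores their average (1-based) rank.
        j = i
        while j + 1 < len(order) and y_score[order[j + 1]] == y_score[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [ranks[i] for i, t in enumerate(y_true) if t]
    n_pos, n_neg = len(pos_ranks), len(y_true) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(f"crowd-vs-expert AUC: {auc(truth, scores):.3f}")
```

With 25 workers who are each only 70% accurate, the aggregated score typically discriminates far better than any single worker, which is the mechanism behind the review's finding that distributed human intelligence can approximate, though not necessarily replace, trained experts.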