
Making inference with messy (citizen science) data: when are data accurate enough and how can they be improved?

John D J Clare, Philip A Townsend, Christine Anhalt-Depies, Christina Locke, Jennifer L Stenglein, Susan Frett, Karl J Martin, Aditya Singh, Timothy R Van Deelen, Benjamin Zuckerberg

Abstract

Measurement or observation error is common in ecological data: as citizen scientists and automated algorithms play larger roles in processing growing volumes of data to address problems at large scales, concerns about data quality and strategies for improving it have received greater focus. However, practical guidance on fundamental data-quality questions for data users and managers (how accurate do data need to be, and what is the best or most efficient way to improve them?) remains limited. We present a generalizable framework for evaluating data quality and identifying remediation practices, and demonstrate the framework using trail camera images classified using crowdsourcing to determine acceptable rates of misclassification and identify optimal remediation strategies for analysis using occupancy models. We used expert validation to estimate baseline classification accuracy and simulation to determine the sensitivity of two occupancy estimators (standard and false-positive extensions) to different empirical misclassification rates. We used regression techniques to identify important predictors of misclassification and prioritize remediation strategies. More than 93% of images were accurately classified, but simulation results suggested that most species were not identified accurately enough to permit distribution estimation at our predefined threshold for accuracy (<5% absolute bias). A model developed to screen incorrect classifications predicted misclassified images with >97% accuracy: enough to meet our accuracy threshold. Occupancy models that accounted for false-positive error provided more accurate inference, even at high misclassification rates (30%). As simulation suggested occupancy models were less sensitive to additional false-negative error, screening models or fitting occupancy models accounting for false-positive error emerged as efficient data remediation solutions.
Combining simulation-based sensitivity analysis with empirical estimation of baseline error and its variability allows users and managers of potentially error-prone data to identify and fix problematic data more efficiently. It may be particularly helpful for "big data" efforts dependent upon citizen scientists or automated classification algorithms with many downstream users, but given the ubiquity of observation or measurement error, even conventional studies may benefit from focusing more attention upon data quality.
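The sensitivity analysis the abstract describes can be sketched in miniature. The snippet below is a hypothetical illustration, not the authors' code: all function names and parameter values (occupancy probability 0.4, detection probability 0.5, a 10% false-positive rate) are assumptions. It simulates detection histories in which unoccupied sites can still produce detections, then fits the standard occupancy model (which assumes no false positives) by grid-search maximum likelihood, illustrating why that estimator is biased when misclassification is present.

```python
import math
import random
from collections import Counter

def simulate_histories(n_sites, n_visits, psi, p_det, p_fp, rng):
    """Simulate detection histories; unoccupied sites yield false positives at rate p_fp."""
    hist = []
    for _ in range(n_sites):
        occupied = rng.random() < psi
        p = p_det if occupied else p_fp
        hist.append([1 if rng.random() < p else 0 for _ in range(n_visits)])
    return hist

def naive_occupancy_mle(hist, grid=101):
    """Grid-search MLE for the standard occupancy model, which assumes no false positives."""
    n_visits = len(hist[0])
    counts = Counter(sum(h) for h in hist)  # number of sites per detection count
    best_ll, best = -math.inf, (None, None)
    for i in range(1, grid - 1):
        psi = i / (grid - 1)
        for j in range(1, grid - 1):
            p = j / (grid - 1)
            ll = 0.0
            for d, n in counts.items():
                lik = psi * p**d * (1 - p)**(n_visits - d)
                if d == 0:
                    lik += 1 - psi  # a site with no detections may be truly unoccupied
                ll += n * math.log(lik)
            if ll > best_ll:
                best_ll, best = ll, (psi, p)
    return best

rng = random.Random(42)
hist = simulate_histories(500, 4, psi=0.4, p_det=0.5, p_fp=0.10, rng=rng)
psi_hat, p_hat = naive_occupancy_mle(hist)
# With a 10% false-positive rate, psi_hat typically lands well above the true 0.40,
# mirroring the bias that motivates the false-positive occupancy extension.
```

Rerunning with `p_fp=0.0` shrinks the gap between `psi_hat` and the true occupancy, which is the kind of comparison a simulation-based sensitivity analysis repeats across a range of misclassification rates.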
© 2019 by the Ecological Society of America.

Keywords:  automated classification; citizen science; crowdsourcing; false-positive error; misclassification; remote camera; species distribution model

Year:  2019        PMID: 30656779     DOI: 10.1002/eap.1849

Source DB:  PubMed          Journal:  Ecol Appl        ISSN: 1051-0761            Impact factor:   4.657


  2 in total

1.  The importance of evaluating standard monitoring methods: Observer bias and detection probabilities for moose pellet group surveys.

Authors:  Anne Loosen; Olivier Devineau; Barbara Zimmermann; Karen Marie Mathisen
Journal:  PLoS One       Date:  2022-07-27       Impact factor: 3.752

2.  Snapshot Wisconsin: networking community scientists and remote sensing to improve ecological monitoring and management.

Authors:  Philip A Townsend; John D J Clare; Nanfeng Liu; Jennifer L Stenglein; Christine Anhalt-Depies; Timothy R Van Deelen; Neil A Gilbert; Aditya Singh; Karl J Martin; Benjamin Zuckerberg
Journal:  Ecol Appl       Date:  2021-09-12       Impact factor: 6.105

