Nirwan Sharma, Laura Colucci-Gray, Advaith Siddharthan, Richard Comont, René van der Wal.
Abstract
In recent years, the number and scale of environmental citizen science programmes that involve lay people in scientific research have increased rapidly. Many of these initiatives are concerned with the recording and identification of species, processes that are increasingly mediated through digital interfaces. Here, we address the growing need to understand the role of digital identification tools, both in generating scientific data and in supporting learning by lay people engaged in citizen science activities within biological recording communities. Starting from two well-known identification tools, namely identification keys and field guides, this study examines the decision-making and learning processes underlying species identification tasks by comparing three digital interfaces designed to identify bumblebee species. The three interfaces varied with respect to whether species were compared directly or filtered by matching on visual features, and whether the order of filters was directed by the interface or left to the user as an open choice. A concurrent mixed-methods approach was adopted to compare how these interfaces affected participants' ability to make correct and quick species identifications, and to better understand how participants learned through using them. We found that identification accuracy and quality of learning depended on the interface type, the difficulty of the specimen in the image being identified, and the interaction between interface type and image difficulty. Specifically, interfaces based on filtering outperformed those based on direct visual comparison across all metrics, and an open choice of filters led to higher accuracy than interface-directed filtering. Our results have direct implications for the design of online identification technologies for biological recording, whether the goal is to collect higher-quality citizen science data or to support user learning and engagement in these communities of practice.
Keywords: Biological recording; Bumblebees; Citizen science; Cognition; Data quality; Field guides; Identification keys; Learning; Species identification; User learning
Year: 2019 PMID: 30713813 PMCID: PMC6354666 DOI: 10.7717/peerj.5965
Source DB: PubMed Journal: PeerJ ISSN: 2167-8359 Impact factor: 2.984
Comparison of characteristics of the three different identification tools evaluated in this study (field guide, feature selection and decision tree).
| Characteristics | Field guide (Control) | Feature selection | Decision tree |
|---|---|---|---|
| Type of identification key | Paper-based single access (dichotomous/polytomous) | Interactive multi-access | Interactive single access (dichotomous/polytomous) |
| Order of decision-making | Partitioning species into biologically informed subcategories | Open-choice selection of visual features | Directed by the interface: easy visual features decided first, harder features later |
| Identification mode | Visual comparison of all species | Interactive filtering out of species that do not match selected features | Interactive filtering out of species that do not match selected features |
Figure 1: Field guide.
Source: Bumblebee Conservation Trust (http://bumblebeeconservation.org).
Figure 3: Feature selection tool.
When drop-down filters are activated, all species that do not correspond to the choices made are ‘shaded out’. In this specific example, the respective filter settings for ‘Abdomen’, ‘Antennae’, ‘Face’ and ‘Wings’ shade out all but the Red-tailed cuckoo bumblebee. A more detailed description of the resulting species is then provided.
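To make the multi-access filtering concrete, a minimal Python sketch of the logic such a tool could use follows; the species records, feature names and values below are hypothetical placeholders, not the study's actual data or implementation.

```python
# Minimal sketch of multi-access ("feature selection") filtering:
# each filter the user sets shades out every species that does not
# match, in whatever order the user chooses. Values are illustrative.

SPECIES = [
    {"name": "Red-tailed bumblebee",        "tail": "red",    "wings": "clear"},
    {"name": "Red-tailed cuckoo bumblebee", "tail": "red",    "wings": "dark"},
    {"name": "Common carder bee",           "tail": "ginger", "wings": "clear"},
]

def apply_filters(species, selections):
    """Return only the species matching every user-selected feature."""
    return [
        s for s in species
        if all(s.get(feature) == value for feature, value in selections.items())
    ]

# Filters can be set in any order; each one narrows the candidate list.
selections = {"tail": "red", "wings": "dark"}
for match in apply_filters(SPECIES, selections):
    print(match["name"])  # -> Red-tailed cuckoo bumblebee
```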
Figure 2: Decision tree tool.
Workflow from (A) to (D). The order of selections is ‘Mostly ginger/brown with some black or brown’ in (A), option ‘2’ in (B) and ‘Common Carder Bee’ in (C).
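By contrast with the open-choice filtering above, a single-access key fixes the order of questions. A minimal Python sketch of that directed traversal follows; the branch labels echo the caption, but the key structure itself is a hypothetical fragment, not the tool's actual data.

```python
# Sketch of a single-access ("decision tree") key: the interface, not
# the user, dictates the question order, asking about easy visual
# features first. Each node is either a question with answer branches
# or a leaf naming a species.

key = {
    "question": "Overall colour pattern?",
    "options": {
        "Mostly ginger/brown with some black or brown": {
            "question": "Banding pattern?",
            "options": {
                "2": {"species": "Common Carder Bee"},
            },
        },
    },
}

def identify(node, answers):
    """Walk the key top-down, consuming answers in the fixed order."""
    for answer in answers:
        node = node["options"][answer]
    return node["species"]

print(identify(key, ["Mostly ginger/brown with some black or brown", "2"]))
# -> Common Carder Bee
```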
Figure 4: Quantitative analysis graphs.
(A) Mean (± SE) accuracy (0–1), (B) mean time taken (in seconds) and (C) mean workload scores (scale 0–100) for each of the three studied interface types, shown separately for easy (grey bars) and difficult (black bars) images.
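As an illustration of the per-group statistics behind such graphs, the Python sketch below computes mean accuracy and its standard error for each combination of interface type and image difficulty; the toy records and 0/1 accuracy coding are assumptions, not the study's data or analysis code.

```python
# Mean and standard error (SE = sample SD / sqrt(n)) of accuracy per
# (interface, image difficulty) group. The records here are invented.

import math
from collections import defaultdict

records = [  # (interface, difficulty, accuracy coded 0/1)
    ("field guide", "easy", 1), ("field guide", "easy", 0),
    ("field guide", "difficult", 0), ("field guide", "difficult", 0),
    ("feature selection", "easy", 1), ("feature selection", "easy", 1),
    ("feature selection", "difficult", 1), ("feature selection", "difficult", 0),
    ("decision tree", "easy", 1), ("decision tree", "easy", 1),
    ("decision tree", "difficult", 1), ("decision tree", "difficult", 0),
]

groups = defaultdict(list)
for interface, difficulty, accuracy in records:
    groups[(interface, difficulty)].append(accuracy)

for (interface, difficulty), values in sorted(groups.items()):
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    se = sd / math.sqrt(n)
    print(f"{interface:17s} {difficulty:9s} mean={mean:.2f} SE={se:.2f}")
```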