Developing a modern data workflow for regularly updated data
Glenda M Yenni, Erica M Christensen, Ellen K Bledsoe, Sarah R Supp, Renata M Diaz, Ethan P White, S K Morgan Ernest.
Abstract
Over the past decade, biology has undergone a data revolution in how researchers collect data and the amount of data being collected. An emerging challenge that has received limited attention in biology is managing, working with, and providing access to data under continual active collection. Regularly updated data present unique challenges in quality assurance and control, data publication, archiving, and reproducibility. We developed a workflow for a long-term ecological study that addresses many of the challenges associated with managing this type of data. We do this by leveraging existing tools to 1) perform quality assurance and control; 2) import, restructure, version, and archive data; 3) rapidly publish new data in ways that ensure appropriate credit to all contributors; and 4) automate most steps in the data pipeline to reduce the time and effort required by researchers. The workflow leverages tools from software development, including version control and continuous integration, to create a modern data management system that automates the pipeline.
Year: 2019 PMID: 30695030 PMCID: PMC6368360 DOI: 10.1371/journal.pbio.3000125
Source DB: PubMed Journal: PLoS Biol ISSN: 1544-9173 Impact factor: 8.029
Fig 1. Data workflow for regularly updated data.
1. All field-collected data are double entered, with automated checks preventing invalid values from being entered.
2. The two copies of the double-entered data are compared using an R script, and mismatches are corrected (a comparison script of this kind is sketched below).
3. A pull request is submitted to the data repository (i.e., GitHub), which triggers data checks run by the continuous integration system (i.e., Travis CI); checks of this kind are also sketched below.
4. If the system detects any issues, the update is reviewed again and corrections are made to the pull request, automatically triggering the data checks to run again.
5. Once the new data pass all automated checks, a data manager reviews the changes and merges the new data into the main data repository.
6. Addition of the new data triggers the continuous integration system to run additional scripts that retrieve data from automated sensors (e.g., weather data) and check them for errors.
7. The system then runs scripts that update the supporting tables (information not collected in the field that helps with data use) and increment the version number (a version-bump sketch follows below).
8. Once all tables have been updated, the changes are automatically merged into the main repository.
9. The system automatically triggers a new release on GitHub.
10. The GitHub–Zenodo integration sends the new data release to Zenodo for archiving as a new version.
CI, continuous integration.
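
Step 2 is described in the caption only as "an R script" that compares the double-entered copies. A minimal sketch of such a comparison, assuming the two copies are CSV files with identical column layout and row order; the file names are illustrative, not the project's actual ones:

# Compare two independently entered copies of the same data sheet and
# report cell-level mismatches for manual correction (step 2).
entry_a <- read.csv("entry_a.csv", stringsAsFactors = FALSE)
entry_b <- read.csv("entry_b.csv", stringsAsFactors = FALSE)

stopifnot(identical(dim(entry_a), dim(entry_b)),
          identical(names(entry_a), names(entry_b)))

# A cell mismatches if the values differ, or if exactly one copy is NA.
differs <- (entry_a != entry_b) | xor(is.na(entry_a), is.na(entry_b))
mismatches <- which(differs, arr.ind = TRUE)

if (nrow(mismatches) > 0) {
  print(data.frame(
    row     = mismatches[, "row"],
    column  = names(entry_a)[mismatches[, "col"]],
    value_a = entry_a[mismatches],
    value_b = entry_b[mismatches]
  ))
} else {
  message("Double-entered copies match; data ready for submission.")
}

Each reported row and column pair would be checked against the original paper datasheet and corrected before the pull request in step 3.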
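
Steps 3 and 4 run automated data checks under continuous integration. The record does not reproduce those checks; the sketch below uses the testthat package, and the file names, column names, and value bounds are all assumptions for illustration:

library(testthat)

# Hypothetical newly entered records and a master species list.
new_data <- read.csv("new_data.csv", stringsAsFactors = FALSE)
species  <- read.csv("species_list.csv", stringsAsFactors = FALSE)

test_that("species codes are valid", {
  expect_true(all(new_data$species %in% species$speciescode))
})

test_that("weights are positive and within plausible bounds", {
  ok <- is.na(new_data$weight) |
        (new_data$weight > 0 & new_data$weight < 300)
  expect_true(all(ok))
})

test_that("no duplicate records", {
  expect_equal(anyDuplicated(new_data), 0)
})

Under a CI service such as Travis CI, a failing expectation fails the build, which blocks the pull request until corrections are pushed (step 4).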
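
Step 7's version update can be as simple as incrementing a semantic version stored in the repository. A minimal sketch, assuming the version lives in a one-line file such as "version.txt"; the file name and bump rules are assumptions, not the project's documented scheme:

# Read the current version ("MAJOR.MINOR.PATCH"), bump it, write it back.
bump_version <- function(path = "version.txt",
                         level = c("minor", "patch")) {
  level <- match.arg(level)
  parts <- as.integer(strsplit(readLines(path, n = 1), ".", fixed = TRUE)[[1]])
  if (level == "minor") {        # e.g., new data added
    parts[2] <- parts[2] + 1
    parts[3] <- 0
  } else {                       # e.g., corrections only
    parts[3] <- parts[3] + 1
  }
  new_version <- paste(parts, collapse = ".")
  writeLines(new_version, path)
  new_version
}

bump_version(level = "minor")  # e.g., "1.14.0" -> "1.15.0"

Tagging the resulting commit then lets the GitHub release (step 9) and the GitHub–Zenodo integration (step 10) pick up the new version automatically.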