| Literature DB >> 35260671 |
Bryan Conroy¹, Ikaro Silva¹, Golbarg Mehraei¹, Robert Damiano¹, Brian Gross¹, Emmanuele Salvati¹, Ting Feng¹, Jeffrey Schneider², Niels Olson², Anne G Rizzo³, Catherine M Curtin⁴, Joseph Frassica¹,⁵, Daniel C McFarlane⁶.
Abstract
Infectious threats, like the COVID-19 pandemic, hinder maintenance of a productive and healthy workforce. If subtle physiological changes precede overt illness, then proactive isolation and testing can reduce labor force impacts. This study hypothesized that an early infection warning service based on wearable physiological monitoring and predictive models created with machine learning could be developed and deployed. We developed a prototype tool, first deployed June 23, 2020, that delivered continuously updated scores of infection risk for SARS-CoV-2 through April 8, 2021. Data were acquired from 9381 United States Department of Defense (US DoD) personnel wearing Garmin and Oura devices, totaling 599,174 user-days of service and 201 million hours of data. There were 491 COVID-19 positive cases. A predictive algorithm identified infection before diagnostic testing with an AUC of 0.82. Barriers to implementation included adequate data capture (at least 48% of data was needed) and delays in data transmission. We observed increased risk scores as early as 6 days prior to diagnostic testing (2.3 days on average). This study showed the feasibility of a real-time risk prediction score to minimize workforce impacts of infection.
Year: 2022 PMID: 35260671 PMCID: PMC8904796 DOI: 10.1038/s41598-022-07764-6
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1 Toy example of an execution graph calculated at runtime. The PHYSIOLOGICAL_LABELS feature represents the standardized wearables data inputs and acts as a source node in the DAG.
Figure 2 Toy example of the feature pipeline orchestrated by the triggering mechanism on the runtime execution DAG. The trigger maintains the state of the execution graph and its requirements at each step of the process. In this example, the machine learning prediction (COVID risk score) is first reported as soon as the respiratory feature becomes available. As additional features (e.g., those from heart rate) become available, the prediction is updated with the new information.
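The triggering pattern described in the caption can be sketched as a minimal event-driven scorer. All names and the placeholder scoring rule below are hypothetical, not the authors' implementation: the point is only that a partial prediction is emitted as soon as the first feature node is ready, then refined as further upstream features arrive.

```python
# Minimal sketch (hypothetical names) of the Figure 2 trigger pattern:
# emit a score as soon as any feature is available, refine it as more arrive.

class TriggeredDAG:
    def __init__(self, feature_names):
        self.ready = {}                    # feature name -> computed value
        self.expected = set(feature_names)

    def on_feature(self, name, value):
        """Record a newly available feature and re-score immediately."""
        self.ready[name] = value
        return self.predict()

    def predict(self):
        # Placeholder scorer (stand-in for the real model): average the
        # available features, down-weighted by feature-set completeness.
        coverage = len(self.ready) / len(self.expected)
        score = sum(self.ready.values()) / len(self.ready)
        return round(score * coverage, 3)

dag = TriggeredDAG(["respiratory_rate", "heart_rate"])
first = dag.on_feature("respiratory_rate", 0.8)   # early, partial score
updated = dag.on_feature("heart_rate", 0.6)       # refined once both are ready
```

In this toy version the trigger's "state" is just the `ready` dictionary; the production service would track per-node requirements across the full execution graph.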
Figure 3 The admin UI allows site coordinators to monitor the infection risk at a site level. Selecting an individual on the scatter plot shows the infection score trend for that user.
Figure 4 Graphical summary of definitions for the positive and negative classes.
Figure 5 (a) Illustration of sliding window-based feature extraction. (b) Example of true positive and true negative classification under the multiple instance learning training objective. (c) Illustration of learned feature risk scores (black curves) along with population distribution underlays for the Sick (red) and Control (blue) populations included in the infection prediction model.
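The sliding-window extraction in panel (a) and the multiple instance learning (MIL) objective in panel (b) can be sketched together. The window width, step, and per-window features below are illustrative placeholders, not the study's actual choices; the MIL idea is that each user-period is a "bag" of window-level instances, and the bag is scored by its most suspicious window, so a single anomalous window can flag the whole period.

```python
# Illustrative sketch (not the authors' code) of sliding-window feature
# extraction and a max-pooled multiple-instance bag score.
from statistics import mean, pstdev

def sliding_windows(series, width, step):
    """Overlapping windows over a 1-D physiological series."""
    return [series[i:i + width]
            for i in range(0, len(series) - width + 1, step)]

def window_features(window):
    # Toy per-window features standing in for the learned ones.
    return (mean(window), pstdev(window))

def bag_score(series, scorer, width=4, step=2):
    """MIL-style bag score: a user-period ("bag") is as positive as its
    most suspicious window (max over per-window instance scores)."""
    return max(scorer(window_features(w))
               for w in sliding_windows(series, width, step))
```

Under this objective a "true positive" bag needs only one window scored above threshold, while a "true negative" bag requires every window to stay below it, matching the classification example in panel (b).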
Age demographics of the study participants.
| Age category | Participant count (%) |
|---|---|
| 18–34 years old | 5865 (62.5%) |
| 35–49 years old | 2804 (29.9%) |
| 50–64 years old | 680 (7.2%) |
| 65–74 years old | 30 (0.3%) |
| 75–84 years old | 0 (0%) |
| > 84 years old | 2 (< 0.1%) |
Figure 6 Timeline of total users and COVID-positive users through April 1, 2021.
Figure 7 Positive and negative class sizes in the machine learning dataset at each filtering step.
Figure 8 (a) ROC curve of the algorithm for each fold. The dashed line represents random guessing. AUC is indicated for each fold in the legend. (b) Infection predictive model performance. Random-model results are based on a dummy classifier using a stratified approach, with random guessing proportional to the class distribution/prevalence (~ 9%).
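The stratified dummy baseline in panel (b) is a useful sanity check: random guessing yields chance-level AUC (about 0.5) regardless of how imbalanced the classes are. A small self-contained sketch with toy data (random scores stand in here for the stratified dummy classifier; either way the expected AUC is 0.5) illustrates this with a rank-based AUC:

```python
# Sketch of the chance-baseline comparison: an informative score beats
# chance, while random guessing stays near AUC = 0.5 even at ~9% prevalence.
import random

def auc(labels, scores):
    """Rank-based AUC: probability that a random positive case is scored
    higher than a random negative case (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
labels = [1] * 90 + [0] * 910                             # ~9% prevalence, as in the study
informative = [y + random.gauss(0, 0.5) for y in labels]  # toy score with real signal
chance = [random.random() for _ in labels]                # chance-level guessing

auc_model = auc(labels, informative)   # well above 0.5
auc_chance = auc(labels, chance)       # near 0.5
```

Reporting the model's AUC of 0.82 against this ~0.5 chance baseline, rather than against raw accuracy, matters because a classifier that always predicts "negative" would already be ~91% accurate at this prevalence.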
Figure 9 Mean RATE risk score in 179 COVID-19 positive users as a function of day relative to COVID testing (red line). The grey region depicts the 95% confidence interval (standard error).