Literature DB >> 35538201

Who was at risk for COVID-19 late in the US pandemic? Insights from a population health machine learning model.

Elijah A Adeoye1, Yelena Rozenfeld2, Jennifer Beam1, Karen Boudreau1, Emily J Cox3, James M Scanlan4.   

Abstract

Notable discrepancies in vulnerability to COVID-19 infection have been identified between specific population groups and regions in the USA. The purpose of this study was to estimate the likelihood of COVID-19 infection using a machine-learning algorithm that can be updated continuously based on health care data. Patient records were extracted for all COVID-19 nasal swab PCR tests performed within the Providence St. Joseph Health system from February to October of 2020. A total of 316,599 participants were included in this study, and approximately 7.7% (n = 24,358) tested positive for COVID-19. A gradient boosting model, LightGBM (LGBM), predicted risk of initial infection with an area under the receiver operating characteristic curve of 0.819. Factors that predicted infection were cough, fever, being a member of the Hispanic or Latino community, being Spanish speaking, having a history of diabetes or dementia, and living in a neighborhood with housing insecurity. A model trained on sociodemographic, environmental, and medical history data performed well in predicting risk of a positive COVID-19 test. This model could be used to tailor education, public health policy, and resources for communities that are at the greatest risk of infection.
© 2022. International Federation for Medical and Biological Engineering.

Keywords:  COVID-19; Infection; Risk; Social determinants of health

Year:  2022        PMID: 35538201      PMCID: PMC9090454          DOI: 10.1007/s11517-022-02549-5

Source DB:  PubMed          Journal:  Med Biol Eng Comput        ISSN: 0140-0118            Impact factor:   3.079


Introduction

Early in the coronavirus disease 2019 (COVID-19) pandemic, a popular interest in predicting risk of infection gave rise to mobile applications and tools for predicting exposure risk. These tools used factors such as medical history, mask compliance, location, demographics, and social activity to predict likelihood of infection or mortality [1]. As the pandemic progressed, systematic reviews elucidated additional individual- and population-level characteristics associated with disease progression and mortality. At-risk groups identified by our group and others included people who were older, had laboratory markers of kidney or liver dysfunction, were current smokers, had pre-existing cardiovascular disease, or were Asian, Black, Hispanic or Latino, and non-English-speaking [2-4]. These early efforts to categorize at-risk populations were instructive and shaped the initial clinical and population-level responses to the pandemic. However, they generally relied on traditional statistical techniques and limited amounts of data available at the time. In parallel with simpler prediction tools, artificial intelligence (AI) has been used since the early days of the pandemic to classify and predict risk. For example, a recent review of 130 publications found 71 papers related to computational epidemiology of COVID-19, 40 papers related to early detection and diagnosis of COVID-19, and 19 papers related to COVID-19 disease progression [5]. Common techniques used by these studies were deep learning and transfer learning [5]. Elsewhere, an analysis of 264 papers found that the convolutional neural network method was the most frequently applied AI technique in COVID-19 studies, followed by random forest classifier, ResNet, Support Vector Machine, and deep learning [6]. These studies described the rapid expansion in machine learning and AI tools during the COVID-19 pandemic. 
In 2021, mass vaccinations altered risk of COVID-19 infection for much of the US population but did not eliminate the need for risk prediction. Emergence of vaccine-eluding variants, barriers to accessing vaccines, and widespread vaccine refusal have made it important to re-evaluate risk on an ongoing basis, particularly because disparities in vaccine acceptance may overlap with disparities in infection and/or severe outcomes. For example, older individuals (who were the first to be offered vaccines) are more likely to accept COVID-19 vaccination than younger individuals, and acceptance rates are highest among Asian and Alaska Native/American Indian populations and lowest among Black people [7, 8]. In 2020, we used logistic regression to examine risk factors associated with COVID-19 infection in 34,503 cases from the Providence health system [2]. As the pandemic evolved, we recognized the need for updated risk assessments and the utility of AI in risk assessment across our growing number of cases. Thus, the present paper updates our previous risk predictions [2] using a more sophisticated machine learning technique in a larger sample of patient data. Our findings confirm the need for ongoing risk assessment and for focusing public resources on the highest-risk communities.

Methods

Ethical approval

The Providence Institutional Review Board (IRB) approved this study and waived the requirement for written informed consent (IRB identifier STUDY2020000220). The study was conducted in compliance with IRB rules and the Declaration of Helsinki.

Data sources

Data for the development and validation data sets were collected from the electronic medical record (EMR) of Providence St. Joseph Health. Records were included for all people from Alaska, Washington, Oregon, Montana, and California who had at least one COVID-19 PCR test result on a nasal swab sample between February 21, 2020, and October 20, 2020. People with at least one positive test were coded as positive for infection; people with exclusively negative tests were coded as negative for infection. Location outcomes were evaluated by linking EMR geocoded data to data from the US Census Bureau’s 2018 American Community Survey at the census block group or tract level as previously described [2]. Two rounds of data splitting were employed. In initial tests, data were split into training and test sets with a 75/25 ratio, using a random seed for reproducibility (Fig. 1). After we determined that a light gradient boosted model (LGBM) produced the most accurate results, we performed additional modeling with a train, test, and validation split (80/10/10 ratio, respectively). This was done (1) to increase the size of the training set and (2) to avoid overfitting by evaluating performance in both a test and a validation set. Two sets of training data were also generated: with clinical symptoms (fever, cough, myalgia, sore throat, chills, and shortness of breath) and without (Fig. 1).
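The two-stage splitting described above can be sketched as follows. This is a minimal illustration using scikit-learn (a plausible choice for the paper's Python stack, not confirmed by the source); the function name and seed are ours.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_80_10_10(X, y, seed=42):
    """Split data into train/test/validation sets (80/10/10),
    stratified on the label and seeded for reproducibility."""
    # First carve off 20% as a holdout pool
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.20, stratify=y, random_state=seed)
    # Then split the holdout pool in half: 10% test, 10% validation
    X_test, X_val, y_test, y_val = train_test_split(
        X_hold, y_hold, test_size=0.50, stratify=y_hold, random_state=seed)
    return (X_train, y_train), (X_test, y_test), (X_val, y_val)
```

The initial 75/25 split is the same call with `test_size=0.25` and no second stage.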
Fig. 1

Schematic of predictive modeling experiments performed to predict risk of initial COVID-19 infection. Legend: ^LGBM outperformed other models on the 25% test set. Thus, we re-trained an LGBM model on an 80/10/10 split (1) to increase the size of the training set and (2) to avoid overfitting by exploring its performance in both a test and a validation set. The training sample for the 80% split was 253,279 (train_negative = 233,889; train_positive = 19,390). After case–control augmentation (downsampling the negative training samples to the positive sample count), we arrived at train_negative = 19,390 and train_positive = 19,390. RFE = recursive feature elimination, LR = logistic regression, LGBM = light gradient boosting machine, PCA = principal component analysis, SMOTE = synthetic minority oversampling technique. *LGBM was the final selected model. The refresh icon indicates that the LGBM model was put through a second round of modeling with a train, test, and validation split of 80/10/10, respectively, for the final steps

Data analysis

Computational environment

All major statistical analyses were performed in Python (version 3.6.12 on a 64-bit computer and version 3.6.10 on a GPU instance in the Azure Machine Learning ecosystem).

Data cleaning

Continuous variables were standardized or log-normalized to address skew and the influence of large values and outliers on the predictive power of trained models. Count of mental health diagnoses, comorbidities, community size, polypharmacy, and population density had skews of 2.58, 1.56, 28.85, 1.04, and − 0.44, respectively. Scaling did not affect the skew of any of these variables; however, log-transforming community size reduced its skewness to 4.39. Categorical variables were encoded, and dummy variables were created for those with more than two classes. Variables were treated as missing not at random (MNAR), except body mass index (BMI) and gender. Missing data for MNAR variables were coded as a separate category, e.g., “Unknown.” For BMI, median imputation was used to fill in the large amount of missing data (n = 25,646 from the initial participant pool, approximately 8%). Gender was analyzed as legal sex, and records with missing values were dropped (n = 119; 0.04% of the initial participant pool).
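A minimal sketch of these cleaning steps (log transform for the heavily skewed community-size variable, median imputation of BMI, an "Unknown" category for MNAR variables, and dummy coding), using pandas with hypothetical column names:

```python
import numpy as np
import pandas as pd

def clean(df):
    # Log-transform the heavily skewed community-size variable
    df["community_size"] = np.log1p(df["community_size"])
    # BMI: median imputation for the ~8% missing values
    df["bmi"] = df["bmi"].fillna(df["bmi"].median())
    # MNAR categorical: treat missingness as its own category
    df["language"] = df["language"].fillna("Unknown")
    # Dummy-code categoricals with more than two classes
    return pd.get_dummies(df, columns=["language"])
```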

Hyperparameter tuning and cross validation

We used a randomized search approach, with cross validation, to tune and identify critical hyperparameters for each model (Supplementary Material Table 1). The set of hyperparameters that produced the best area under the curve (AUC) on the training set was selected as part of the final ensemble. This was performed with repeated, stratified k-fold cross validation with 10 splits and 3 repeats. A random seed was set for reproducibility of the cross-validation step. We chose a randomized approach because the alternative, a more comprehensive grid search, is computationally intensive. We report the best hyperparameters selected for the best model with symptoms (Supplementary Material Table 1).
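A randomized hyperparameter search with repeated, stratified 10-fold cross validation (3 repeats) can be set up as below. Logistic regression and the search space are illustrative stand-ins; the paper's actual grids are in its supplement.

```python
import numpy as np
from scipy.stats import loguniform
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, RepeatedStratifiedKFold

def tune(X, y, seed=42):
    """Randomized search scored by AUC with 10-split, 3-repeat stratified CV."""
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=seed)
    search = RandomizedSearchCV(
        LogisticRegression(max_iter=1000),
        param_distributions={"C": loguniform(1e-3, 1e2)},  # illustrative space
        n_iter=5, scoring="roc_auc", cv=cv, random_state=seed)
    search.fit(X, y)
    return search.best_params_, search.best_score_
```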

Data augmentation

Most COVID-19 test results were negative. Different data augmentation techniques were therefore applied to address class imbalance by over-sampling the minority class and/or down-sampling the majority class. This was done to address model bias towards the negative class (i.e., the population of persons who tested negative for COVID-19) and to prevent the model from simply learning to predict the dominant negative class. We used a synthetic minority oversampling technique (SMOTE) and a case–control approach to augment the training data as part of multiple modeling experiments. SMOTE creates synthetic data close to existing minority-class samples (their nearest neighbors) in the feature space [9]. We also experimented with a case–control (CC) approach, typically used in epidemiological studies, to create a 1:1 match by down-sampling the majority class (COVID-19 negative) to the size of the minority class. Negative cases were selected by simple random sampling without replacement. This strategy, unlike SMOTE, uses real, non-synthetic data for model training. Both approaches created a 1:1 match of the negative (majority) and positive classes. No augmentation was performed on the validation/test data set. Twelve experiments were conducted such that in each experiment, models were fitted on the training set depending on whether data augmentation and dimensionality reduction techniques were applied to that set (Fig. 1). For dimensionality reduction, we applied principal component analysis (PCA) to compute the minimal set of principal components that explained 95% of the variance in the data. A recursive feature elimination (RFE) approach was also used, in different experiments, to select the minimal set of predictors most predictive of a positive COVID-19 test.
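The two augmentation strategies can be sketched with NumPy and scikit-learn. `smote_like` is a simplified illustration of SMOTE-style interpolation, not the reference implementation (the imbalanced-learn package provides that); both function names are ours.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def case_control_downsample(X, y, seed=42):
    """Case-control 1:1 match: randomly downsample the majority (negative)
    class, without replacement, to the size of the positive class."""
    rng = np.random.default_rng(seed)
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    keep = rng.choice(neg, size=pos.size, replace=False)
    idx = np.concatenate([pos, keep])
    return X[idx], y[idx]

def smote_like(X_minority, n_new, k=5, seed=42):
    """SMOTE-style synthesis: interpolate each new point between a random
    minority sample and one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, neigh = nn.kneighbors(X_minority)  # column 0 is the point itself
    rows = rng.integers(0, len(X_minority), n_new)
    cols = rng.integers(1, k + 1, n_new)
    base = X_minority[rows]
    partner = X_minority[neigh[rows, cols]]
    gap = rng.random((n_new, 1))          # random position along the segment
    return base + gap * (partner - base)
```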
Dimensionality reduction techniques were also applied on the test/validation sets; however, no augmentation was applied to the validation/test data set. PCA was not applied to comparative logistic regression models.
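The two dimensionality reduction steps, with PCA fitted on training data only and then applied to the test/validation sets, might look like this. The estimator inside RFE and the number of features to keep are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

def reduce_pca(X_train, X_other):
    """Keep the minimal set of components explaining 95% of the
    training-set variance; apply the same projection to other sets."""
    pca = PCA(n_components=0.95).fit(X_train)
    return pca.transform(X_train), pca.transform(X_other)

def select_rfe(X, y, n_keep=10):
    """Recursively eliminate the weakest predictors until n_keep remain;
    returns a boolean mask over the feature columns."""
    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=n_keep)
    return rfe.fit(X, y).support_
```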

Model training and selection

An ensemble approach was used as the predictive model for each possible experiment. Four models — logistic regression, random forest, and two gradient boosting libraries, XGBoost (XGB) and LightGBM (LGBM) — were used as classifiers for training. We selected the best hyperparameters for each classifier, after hyperparameter tuning, and included these as part of the ensemble for the prediction task. We used a soft-voting ensemble due to the need to compute probabilities of a positive test or event.
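A soft-voting ensemble of this kind can be assembled with scikit-learn's `VotingClassifier`. Here scikit-learn's own gradient boosting stands in for the XGBoost and LightGBM libraries used in the paper, so the block stays self-contained.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression

def build_ensemble(seed=42):
    """Soft-voting ensemble: averages predicted class probabilities,
    which yields the probability of a positive test rather than a hard label."""
    return VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100, random_state=seed)),
            ("gb", GradientBoostingClassifier(random_state=seed)),
        ],
        voting="soft",
    )
```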

Model explainability

Two LGBMs were generated, one with symptoms and one without symptoms (Fig. 1). We used the Python implementation of SHAP (SHapley Additive exPlanations) [10] to examine the key predictor variables that contribute to a patient’s probability of a positive COVID-19 test result. The library computes Shapley values, which quantify the marginal contribution of a feature to the predicted outcome for an instance [11]. This approach examines how much each feature pushes the predicted value of that instance away from a baseline, or average, prediction (the expected value). The SHAP methodology thus improves the interpretability of a machine learning model. SHAP values were computed using the final selected model.
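The Shapley computation that the SHAP library approximates efficiently can be written out exactly for a handful of features: each feature's value is its weighted marginal contribution over all feature subsets, relative to a background (expected-value) baseline. This is an illustrative brute-force sketch, not the paper's code.

```python
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(f, x, background):
    """Exact Shapley attribution for one instance x.
    v(S) is the model output with features in S fixed to x's values and
    the remaining features averaged over a background sample."""
    n = len(x)
    background = np.asarray(background, dtype=float)

    def v(S):
        Xb = background.copy()
        if S:
            Xb[:, list(S)] = x[list(S)]
        return f(Xb).mean()

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! (n-|S|-1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi
```

For a linear model this reduces to `w_i * (x_i - background_mean_i)`, and the attributions sum to the gap between the instance prediction and the baseline (the "efficiency" property SHAP plots rely on).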

Results

Study participants

A total of 316,599 participants were included in this study, and approximately 7.7% tested positive for COVID-19 (n = 24,358). The average age was 47 ± 22 years old, 56.7% (179,381) were female, 63% (199,492) were identified as white or Caucasian, and 55.2% (174,683) had at least one chronic condition (Table 1).
Table 1

Study participant demographics and characteristics

Tested people, N = 316,599; tested positive, N = 24,358; tested negative, N = 292,241

Characteristic | Tested, N | % of total^a | Positive, N | In-group %^b | Negative, N | In-group %^b
Sociodemographic
  Age
    <18 | 25,640 | 8.10 | 1766 | 6.89 | 23,874 | 93.11
    18–29 | 51,328 | 16.21 | 4992 | 9.73 | 46,336 | 90.27
    30–39 | 49,570 | 15.66 | 3875 | 7.82 | 45,695 | 92.18
    40–49 | 41,634 | 13.15 | 3565 | 8.56 | 38,069 | 91.44
    50–59 | 45,760 | 14.45 | 3707 | 8.10 | 42,053 | 91.90
    60–69 | 45,976 | 14.52 | 2804 | 6.10 | 43,172 | 93.90
    70–79 | 34,057 | 10.76 | 1941 | 5.70 | 32,116 | 94.30
    80+ | 22,634 | 7.15 | 1708 | 7.55 | 20,926 | 92.45
  Gender
    Female | 179,381 | 56.66 | 12,826 | 7.15 | 166,555 | 92.85
    Male | 137,218 | 43.34 | 11,532 | 8.40 | 125,686 | 91.60
  Education
    Education < 12 years | 219,444 | 69.31 | 13,409 | 6.11 | 206,035 | 93.89
  Employment
    Student | 17,475 | 5.52 | 1574 | 9.01 | 15,901 | 90.99
    Employed | 131,019 | 41.38 | 10,725 | 8.19 | 120,294 | 91.81
    Not employed | 58,380 | 18.44 | 4946 | 8.47 | 53,434 | 91.53
    Retired | 63,324 | 20.00 | 3864 | 6.10 | 59,460 | 93.90
    Unknown | 46,401 | 14.66 | 3249 | 7.00 | 43,152 | 93.00
  Race
    White | 199,492 | 63.01 | 9742 | 4.88 | 189,750 | 95.12
    American Indian/Alaska Native | 4069 | 1.29 | 293 | 7.20 | 3776 | 92.80
    Asian | 13,334 | 4.21 | 1044 | 7.83 | 12,290 | 92.17
    Black/African American | 12,018 | 3.80 | 1095 | 9.11 | 10,923 | 90.89
    Native Hawaiian/Pacific Islander | 2700 | 0.85 | 424 | 15.70 | 2276 | 84.30
    Hispanic/Latino | 39,997 | 12.63 | 7962 | 19.91 | 32,035 | 80.09
    Unknown | 44,989 | 14.21 | 3798 | 8.44 | 41,191 | 91.56
  Ethnicity
    Other ethnic groups | 276,602 | 87.37 | 16,396 | 5.93 | 260,206 | 94.07
    Hispanic or Latino | 39,997 | 12.63 | 7962 | 19.91 | 32,035 | 80.09
  Religious affiliation
    Agnostic | 90,655 | 28.63 | 5585 | 6.16 | 85,070 | 93.84
    Christian | 121,557 | 38.39 | 10,293 | 8.47 | 111,264 | 91.53
    Other religion | 10,534 | 3.33 | 679 | 6.45 | 9855 | 93.55
    Unknown | 93,853 | 29.64 | 7801 | 8.31 | 86,052 | 91.69
  Relationship
    Single | 123,850 | 39.12 | 10,096 | 8.15 | 113,754 | 91.85
    Divorced or legally separated | 37,797 | 11.94 | 2412 | 6.38 | 35,385 | 93.62
    Married or significant other | 128,944 | 40.73 | 9817 | 7.61 | 119,127 | 92.39
    Unknown | 26,008 | 8.21 | 2033 | 7.82 | 23,975 | 92.18
  Language
    English | 288,252 | 91.05 | 18,964 | 6.58 | 269,288 | 93.42
    Sino-Tibetan | 2192 | 0.69 | 244 | 11.13 | 1948 | 88.87
    Spanish | 12,435 | 3.93 | 3679 | 29.59 | 8756 | 70.41
    Other languages | 13,720 | 4.33 | 1471 | 10.72 | 12,249 | 89.28
Clinical
  Body mass index
    Normal | 66,179 | 20.90 | 4231 | 6.39 | 61,948 | 93.61
    Underweight | 5180 | 1.64 | 296 | 5.71 | 4884 | 94.29
    Moderately obese | 45,918 | 14.50 | 4061 | 8.84 | 41,857 | 91.16
    Overweight | 70,933 | 22.40 | 5918 | 8.34 | 65,015 | 91.66
    Severely obese | 23,334 | 7.37 | 2078 | 8.91 | 21,256 | 91.09
    Very severely obese | 19,981 | 6.31 | 1643 | 8.22 | 18,338 | 91.78
    Unknown | 85,074 | 26.87 | 6131 | 7.21 | 78,943 | 92.79
  Number of chronic conditions
    0 | 141,916 | 44.83 | 12,551 | 8.84 | 129,365 | 91.16
    1–2 | 103,464 | 32.68 | 7629 | 7.37 | 95,835 | 92.63
    3–4 | 46,632 | 14.73 | 2905 | 6.23 | 43,727 | 93.77
    5+ | 24,587 | 7.77 | 1273 | 5.18 | 23,314 | 94.82
  Clinical diagnosis
    Diagnosis of diabetes | 34,930 | 11.03 | 3340 | 9.56 | 31,992 | 91.59
    Diagnosis of kidney disease | 789 | 0.25 | 94 | 11.91 | 709 | 89.86
    Diagnosis of HIV/AIDS | 767 | 0.24 | 54 | 7.04 | 718 | 93.61
    Diagnosis of dementia | 7316 | 2.31 | 910 | 12.44 | 6510 | 88.98
  Polypharmacy
    0 prescriptions | 104,273 | 32.94 | 9066 | 8.69 | 95,207 | 91.31
    1–9 prescriptions | 160,387 | 50.66 | 12,403 | 7.73 | 147,984 | 92.27
    10–19 prescriptions | 38,656 | 12.21 | 2238 | 5.79 | 36,418 | 94.21
    20–29 prescriptions | 9809 | 3.10 | 481 | 4.90 | 9328 | 95.10
    30+ prescriptions | 3474 | 1.10 | 170 | 4.89 | 3304 | 95.11
  Mental health and substance use
    History of illicit drug use | 35,588 | 11.24 | 1561 | 4.39 | 34,027 | 95.61
    History of tobacco use | 40,352 | 12.75 | 1836 | 4.55 | 38,516 | 95.45
    Diagnosis of serious persistent mental illness | 30,246 | 9.55 | 1286 | 4.25 | 28,960 | 95.75
    Diagnosis of substance use disorder | 24,757 | 7.82 | 1071 | 4.33 | 23,686 | 95.67
  Primary care affiliation
    Internal primary care provider | 112,191 | 35.44 | 7017 | 6.25 | 105,174 | 93.75
    External primary care provider | 116,348 | 36.75 | 8708 | 7.48 | 107,640 | 92.52
    Unknown primary care provider | 88,060 | 27.81 | 8633 | 9.80 | 79,427 | 90.20
  Symptoms
    Fever | 101,388 | 32.02 | 15,157 | 14.95 | 86,231 | 85.05
    Cough | 113,047 | 35.71 | 16,319 | 14.44 | 96,728 | 85.56
    Shortness of breath | 107,216 | 33.86 | 13,642 | 12.72 | 93,574 | 87.28
    Chills | 6443 | 2.04 | 950 | 14.74 | 5493 | 85.26
    Myalgia | 8587 | 2.71 | 1686 | 19.63 | 6901 | 80.37
Environmental
  Region
    Oregon | 83,293 | 26.31 | 5018 | 6.02 | 78,275 | 93.98
    Alaska | 17,269 | 5.45 | 857 | 4.96 | 16,412 | 95.04
    Puget Sound | 34,437 | 10.88 | 2144 | 6.23 | 32,293 | 93.77
    Southern California | 65,815 | 20.79 | 7389 | 11.23 | 58,426 | 88.77
    Washington/Montana | 115,589 | 36.51 | 8931 | 7.73 | 106,658 | 92.27
    Unknown | 196 | 0.06 | 19 | 9.69 | 177 | 90.31
  Age-stratified communal living
    Non-communal living | 230,410 | 72.78 | 16,624 | 7.21 | 213,786 | 92.79
    Adult community | 12,534 | 3.96 | 1055 | 8.42 | 11,479 | 91.58
    Adult and youth | 46,996 | 14.84 | 4460 | 9.49 | 42,536 | 90.51
    Multigenerational | 15,481 | 4.89 | 1535 | 9.92 | 13,946 | 90.08
    Senior living | 2876 | 0.91 | 300 | 10.43 | 2576 | 89.57
    Other | 8302 | 2.62 | 384 | 4.63 | 7918 | 95.37
  Financial insecurity | 98,537 | 31.12 | 10,285 | 10.44 | 88,252 | 89.56
  Housing insecurity | 72,081 | 22.77 | 8849 | 12.28 | 63,232 | 87.72
  Transportation insecurity | 88,401 | 27.92 | 7240 | 8.19 | 81,161 | 91.81

Legend: Characteristics of the patient population included in this analysis

^a % of total is the percentage of the total N (316,599)

^b In-group % is the percentage of the total tested people for each row

Model performance

In general, models trained with CC augmented data performed better on test/validation sets than SMOTE augmented data. Area under the receiver operating characteristic curve (AUC) scores for models that included symptoms and were trained on augmented data ranged approximately from 0.756 to 0.816, while the logistic regression model trained on non-augmented data yielded an AUC of 0.767. The gradient boosting library, LightGBM (LGBM), produced an AUC of 0.816. Because this model is computationally lightweight compared to ensembling all models, separate analyses were performed with this model on CC augmented training data split into training/testing/validation sets (80/10/10 ratio, respectively). LGBM AUC on the training set with repeated, stratified k-fold cross validation with 10 splits and 3 repeats gave a mean AUC of 0.811 ± 0.007. AUC was approximately 0.819 on the test set and 0.814 on the validation set. When symptoms (fever, cough, myalgia, sore throat, chills, and shortness of breath) were not included as predictive variables, AUC on the training set with the same cross validation approach was acceptable, but comparatively poorer (0.735 ± 0.007). AUC on the test and validation sets was 0.734 and 0.727, respectively (Table 2).
Table 2

Area under the curve (AUC) of modeling experiments run to predict COVID-19 risk of infection

Trial | Augmentation/feature reduction | Model | AUC | Sensitivity | Specificity
1 | RFE | LR | 0.767 | 0.093 | 0.994
2 | CC | LGBM* | 0.814 | 0.718 | 0.754
3 | CC | LGBM** | 0.727 | 0.623 | 0.713
4 | CC | Ensemble | 0.816 | 0.717 | 0.760
5 | CC | LR | 0.800 | 0.721 | 0.730
6 | CC-PCA | Ensemble | 0.805 | 0.714 | 0.745
7 | CC-RFE | Ensemble | 0.816 | 0.715 | 0.759
8 | CC-RFE | LR | 0.800 | 0.721 | 0.731
9 | SMOTE | Ensemble | 0.797 | 0.552 | 0.864
10 | SMOTE | LR | 0.759 | 0.624 | 0.759
11 | SMOTE-PCA | Ensemble | 0.802 | 0.622 | 0.823
12 | SMOTE-RFE | Ensemble | 0.792 | 0.555 | 0.858
13 | SMOTE-RFE | LR | 0.756 | 0.621 | 0.760

Legend: All models included symptoms as predictors except for trial 3. Except for the Light Gradient Boosting Machine (LGBM) models, reported area under the receiver operating characteristic curve (AUC ROC) scores are for the 25% held-out test set of the 75/25 train/test split. For the LGBM models, an 80/10/10 training/test/validation split was used, and AUC is given for performance on the final validation set

RFE = recursive feature elimination, LR = logistic regression, CC = case–control, LGBM = light gradient boosting machine, PCA = principal component analysis, SMOTE = synthetic minority oversampling technique

*Final selected model. This was the model that was used for the SHAP scores with symptoms presented in Fig. 2

**Final selected model without symptoms. This was the model that was used for the SHAP scores without symptoms presented in Fig. 3

Fig. 2

Relative contribution of predictors in a machine learning model predicting COVID-19 infection based on symptoms and demographic information. Legend: A SHapley Additive exPlanations (SHAP) scores showing the average impact of each predictor on the model. SHAP values were computed using the final LGBM model. Higher SHAP values correspond to increased COVID-19 infection risk. B The relative importance of the top 20 COVID-19 predictors in descending order is shown here. The plot is made of dots corresponding to each prediction for a single patient. The horizontal axis shows the relative impact of a low or high prediction value for each variable, the impact ranging from blue (least associated with infection) to red (most associated with infection). Blue on the left to red on the right shows increasing infection risk as the feature increases (i.e., Cough: 0 = No Cough, 1 = Cough). Red on the left to blue on the right shows decreasing infection risk as the feature increases (i.e., polypharmacy)

Fig. 3

Relative contribution of predictor variables in a machine learning model trained to predict COVID-19 infection based on demographic information alone. Legend: A SHAP scores showing the average impact of each predictor on the model using the final LGBM model. Higher SHAP values correspond to increased COVID-19 infection risk. B The top 20 COVID-19 demographic predictors, without symptoms, are shown here in descending order. All other computational and graphic elements (use of dots, color coding, variable score association strength shown by horizontal axis) are identical with those used for Fig. 2a and b

Feature importance

Model with symptoms

When symptoms were included as predictors of infection risk, cough and fever were the two most important predictors (Fig. 2A). Being a member of the Hispanic or Latino community, living in the Washington-Montana or Southern California regions, being non-English-speaking and especially Spanish-speaking, polypharmacy, and having shortness of breath were all comparable influences on the risk of a positive COVID-19 test (SHAP scores 0.10–0.30). All of these features except polypharmacy were also directly associated with risk of infection from COVID-19, while polypharmacy, co-morbidity, higher income, and tobacco or alcohol use were inversely associated with risk of infection (Fig. 2B).

Model without symptoms

Because symptom information may not always be available for risk assessments of the population at large, a second model was developed to assess the importance of static population factors. When symptoms were removed from the predictive model, being of Hispanic/Latino ethnicity became the most important predictor of COVID-19 infection (Fig. 3A) in this patient population. Other risk factors with at least two-fold lower SHAP scores included speaking Spanish, being from Montana or a region with housing instability, identifying with an “other” race category, using tobacco, being male, being Christian, and having an “other” BMI. Tobacco use, co-morbidity, polypharmacy, an “other” BMI category, income level, and illicit drug use were inversely associated with risk of infection, while other features were positively associated with this risk (Fig. 3B).

Discussion

Although COVID-19 vaccines are now widely available, predicting the risk of COVID-19 infection remains critical. Unvaccinated populations and new variants of COVID-19 present an ongoing threat to disease control worldwide, and risk prediction is still needed to (1) assist clinicians and care managers in patient education, (2) guide policy, and (3) allocate resources to the highest-risk areas and populations. Our findings indicate that, as expected, fever and cough were the strongest predictors of infection. This validates public guidance to quarantine based on symptoms alone. However, when we removed symptoms from the model to assess static (i.e., not symptom-based) features alone, the following groups in the western USA emerged with the highest risk for infection: Hispanic and Latino people, individuals in the “other” race category, non-English-speaking people (particularly Spanish-speaking people), people living in areas with housing insecurity, and people from the Washington-Montana region. Compared to previous similar projects, advantages of the current analysis are the size and geographical spread of the dataset, and the machine learning technique, which allows the results to be updated in nearly real time. We intend to update these results as the pandemic continues. Immediate recommendations based on the results of this project are as follows. Culturally literate and language-appropriate resources are needed to combat surging infection rates in Hispanic, Latino, and non-English-speaking populations in the western USA. Partnering with communities to assure broad availability of information and access to services is critical to reducing disproportionate burden, and such partnership may increase trust in the information that is provided. Clinicians should be aware that individuals from these populations may be at higher risk and should conduct assessments and provide education accordingly.
For example, clinicians may ask their patients whether they have access to masks and cleaning/disinfection supplies, or whether they need assistance accessing vaccine appointment registration systems. Individuals who are not at high risk themselves but have frequent contact with high-risk groups may require more frequent or intense training on infection control precautions. Finally, public efforts to combat the spread of COVID-19 must address issues such as access, physical proximity of vaccine clinics to high-risk populations, and pro-active program development for non-English-speaking groups. We have previously published modeling work on this topic [2]. The previous work employed a logistic regression (LR) model and achieved an acceptable AUC of 0.78 on the validation set. It is important to note that the features selected as strong predictors can differ across machine and statistical learning approaches. This can be due to factors such as, but not limited to, penalization or regularization methods used to reduce overfitting. Other factors include how models such as decision tree-based LGBM estimate information gain from all possible splits (using predictor values), as well as different hyperparameters (e.g., tree sizes, number of subsamples, learning rate). Nevertheless, we computed a comparative logistic regression model and report its output (see Supplementary Table 2). Variables with a P < 0.25 were considered for the final model, consistent with the previous model [2]. This model was trained on 75% of the data and validated on the remaining 25%. AUC on validation data was 0.80, slightly outperforming the previous logistic regression model (AUC = 0.78).
Results from the LR and LGBM models are consistent with the previous model with respect to symptoms (cough, fever, shortness of breath, and myalgia), the Hispanic or Latino racial/ethnic group, non-English language (specifically Spanish), housing insecurity, age 18 to 29, and the Washington-Montana and Southern California regions being more predictive of, or “associated” with, a positive COVID-19 test result. Likewise, a history of tobacco use and higher numbers of prescription drugs and chronic conditions were associated with a negative COVID-19 test, also consistent with the previous model (see Supplementary Table 2). The new LGBM model was notably different from the previous LR model regarding age: being between ages 18 and 29 had a relatively small impact on the prediction of a positive test. The new comparative LR model is consistent with the previous model in that adults 40 and older have greater adjusted odds of contracting COVID-19 than younger patients (reference group: ages 17 or younger in this LR model vs. 18 to 29 in the previous model). We also observed differences in the impact of existing comorbidities (e.g., diagnoses of diabetes, HIV/AIDS, dementia, and kidney disease) across models. The LGBM and the new comparative LR models do indicate some impact of an existing diagnosis of diabetes or dementia on the increased probability of a COVID-19 infection, consistent with the previous model. Also consistent with the previous model, the LR model shows some impact of a history of kidney disease (OR 1.70; 95% CI 1.07–2.72, p = 0.026) on COVID-19 risk. Neither model, unlike the previous model, indicates that being immunocompromised (HIV/AIDS diagnosis) increases an individual’s risk of an initial infection. Notwithstanding, we suspect that comorbidities will be significant predictors of severe illness or mortality after a COVID-19 infection.
These results differ from our previous results from the early period of the pandemic [2]. The present results did not confirm that older, immunocompromised, or Black people were at significantly greater risk of COVID-19 infection in this study population. This difference may reflect the change in technique from traditional logistic regression to a machine learning algorithm. The previous LR model was fit to data available early in the pandemic, between February 28, 2020, and April 27, 2020, roughly one-tenth the volume of the current data. The more sophisticated technique may have elucidated underlying factors that were not immediately apparent with logistic regression, because it focused on predictive performance rather than traditional inference about individual variables and strict cut-off thresholds based on statistical significance. It is also possible that these groups are genuinely at higher risk but became under-represented and under-counted in the larger dataset, and thus their risk levels may have been underestimated. An additional explanation for the shifting results is the expansion of the window of time over which results were counted. The previous work examined data from February to April of 2020 [2], while the present work extended the data to October of 2020, encompassing the second and early third "waves" of cases occurring between mid-June and October. During this later period, state and local public health departments instituted substantially more stringent transmission-reduction strategies, including tight restrictions on public gatherings, remote school and work, universal masking requirements in public spaces, and "stay-at-home" policies. Thus, we may have captured real changes in population risk as the pandemic progressed. This may underlie the finding that young people between 18 and 29 were at higher risk, while older people were no longer at higher risk.
As the pandemic progressed, older individuals may have been more compliant with stringent quarantine and isolation precautions due to well-publicized fears of mortality, while younger individuals were perhaps less cautious and thus continued to become infected. We suspect that differences in results from the current data reflect varied shifts in phased stay-at-home policies over time across the regions Providence serves. Comparing results from both models is nevertheless encouraging, as the new model demonstrates discriminative ability on new data that is as stable and strong as the previous model's. In the present study, we developed two predictive models, one including and one excluding symptoms, for different purposes. Modeling risk of infection without symptoms was done to evaluate static risk for populations in the western USA. The intention of this step was to aid in planning for disease control and prevention within the Providence St. Joseph Health system. In response to this model, Providence St. Joseph Health tailored the selection of sites for COVID testing and vaccination as well as engagement with community organizations. We recommend that other large health systems implement models of this kind to understand underlying risk factors in their patient populations and target infection control responses accordingly. There are a number of strengths to this study. We used advanced analytic procedures and tested a variety of models seeking the optimal solution. We have a very large data set (316,599 participants) collected across a single hospital system. This large data set gave us the statistical power to examine many possible influences on risk of infection simultaneously. The use of a single hospital system ensures that data collection, variable coding, and data extraction were done in a consistent manner, in contrast to meta-analyses and reviews, which are forced to merge data sets that can have real methodological differences.
Our list of examined variables is long and comprehensive, including age, gender, education, employment, race, ethnicity, religious affiliation, relationship status, language, BMI, chronic illness conditions, drug use, COVID-19 symptoms, geographic region, and living environment. Ours may be the only paper to date to have examined all of these variables, in a single hospital system, with > 300,000 participants. There are several limitations to this study. First, models were trained on data that would be available to an outpatient clinician (patient medical history, sociodemographic, self-reportable symptoms, and environmental data). While this was intentional in order to make the model generalizable to various clinical settings, laboratory values such as white blood cell counts (lymphocyte, eosinophil, basophil, and neutrophil values) [12] may have improved performance of the model that included symptoms. Second, the data collection period (February-October 2020) spanned a period of rapidly evolving public health guidelines. This may have influenced some of the findings. For example, the finding that older age was not predictive of a higher risk of COVID-19 infection may reflect greater caution and compliance with stay-at-home orders among older populations. Third, the study did not include the largest part of the third wave, from October 2020 to March 2021; consequently, we intend to update these findings using the same machine learning method as the pandemic continues to progress. Fourth, we suggest that the population-level characteristics spotlighted by this model (e.g., race, ethnicity, language) are not inherent predictors of risk, but rather proxy indicators for living conditions (housing density and ability to socially isolate) and social structures, such as systemic racism in healthcare and public policy.

Conclusions

Our results confirm that the following social and demographic factors increased the risk of COVID-19 infection between February and October of 2020: being Hispanic or Latino, being non-English-speaking (and especially Spanish-speaking), residing in an area with housing insecurity, or being from the Washington-Montana region. These findings confirm that social determinants of health were major drivers of infection risk in the late part of the pre-vaccine US COVID-19 pandemic. Language-appropriate and community-based education is needed to mitigate the effects of social factors on infection risk. Additionally, providers should focus education efforts on patients who fall into high-risk categories or are frequently in contact with individuals from high-risk categories. Electronic supplementary material: Supplementary file 1 (DOCX 32 KB)
References (8 in total)

1.  What's your risk of catching COVID? These tools help you to find out.

Authors:  Michael Eisenstein
Journal:  Nature       Date:  2020-12-21       Impact factor: 49.962

2.  A systematic review on AI/ML approaches against COVID-19 outbreak.

Authors:  Onur Dogan; Sanju Tiwari; M A Jabbar; Shankru Guggari
Journal:  Complex Intell Systems       Date:  2021-07-05

3.  Determinants of COVID-19 vaccine acceptance in the US.

Authors:  Amyn A Malik; SarahAnn M McFadden; Jad Elharake; Saad B Omer
Journal:  EClinicalMedicine       Date:  2020-08-12

4.  A model of disparities: risk factors associated with COVID-19 infection.

Authors:  Yelena Rozenfeld; Jennifer Beam; Haley Maier; Whitney Haggerson; Karen Boudreau; Jamie Carlson; Rhonda Medows
Journal:  Int J Equity Health       Date:  2020-07-29

5.  The Role of Machine Learning Techniques to Tackle COVID-19 Crisis: A Systematic Review.

Authors:  Hafsa Bareen Syeda; Mahanazuddin Syed; Kevin Wayne Sexton; Shorabuddin Syed; Salma Begum; Farhanuddin Syed; Fred Prior; Feliciano Yu
Journal:  JMIR Med Inform       Date:  2020-11-15

6.  COVID-19 Vaccination Hesitancy in the United States: A Rapid National Assessment.

Authors:  Jagdish Khubchandani; Sushil Sharma; James H Price; Michael J Wiblishauser; Manoj Sharma; Fern J Webb
Journal:  J Community Health       Date:  2021-01-03

7.  Risk factors of critical & mortal COVID-19 cases: A systematic literature review and meta-analysis.

Authors:  Zhaohai Zheng; Fang Peng; Buyun Xu; Jingjing Zhao; Huahua Liu; Jiahao Peng; Qingsong Li; Chongfu Jiang; Yan Zhou; Shuqing Liu; Chunji Ye; Peng Zhang; Yangbo Xing; Hangyuan Guo; Weiliang Tang
Journal:  J Infect       Date:  2020-04-23       Impact factor: 6.072

8.  Risk factors for Covid-19 severity and fatality: a structured literature review.

Authors:  Dominik Wolff; Sarah Nee; Natalie Sandy Hickey; Michael Marschollek
Journal:  Infection       Date:  2020-08-28       Impact factor: 7.455

