Literature DB >> 35793312

COVID-19 Time of Intubation Mortality Evaluation (C-TIME): A system for predicting mortality of patients with COVID-19 pneumonia at the time they require mechanical ventilation.

Robert A Raschke1,2, Pooja Rangan2,3, Sumit Agarwal2,3, Suresh Uppalapu2,3, Nehan Sher2, Steven C Curry1,4, C William Heise1,4.   

Abstract

BACKGROUND: An accurate system to predict mortality in patients requiring intubation for COVID-19 could help to inform consent, frame family expectations and assist end-of-life decisions.
RESEARCH OBJECTIVE: To develop and validate a mortality prediction system called C-TIME (COVID-19 Time of Intubation Mortality Evaluation) using variables available before intubation, determine its discriminant accuracy, and compare it to acute physiology and chronic health evaluation (APACHE IVa) and sequential organ failure assessment (SOFA).
METHODS: A retrospective cohort was set in 18 medical-surgical ICUs, enrolling consecutive adults, positive for SARS-CoV-2 RNA by reverse transcriptase polymerase chain reaction or positive by rapid antigen test, and undergoing endotracheal intubation. All were followed until hospital discharge or death. The combined outcome was hospital mortality or terminal extubation with hospice discharge. Twenty-five clinical and laboratory variables available 48 hours prior to intubation were entered into multiple logistic regression (MLR) and the resulting model was used to predict mortality of validation cohort patients. Area under the receiver operating characteristic curve (AUROC) was calculated for C-TIME, APACHE IVa and SOFA.
RESULTS: The median age of the 2,440 study patients was 66 years; 61.6 percent were men, and 50.5 percent were Hispanic, Native American or African American. Age, gender, COPD, minimum mean arterial pressure, minimum Glasgow Coma Scale score, minimum PaO2/FiO2 ratio, maximum creatinine and bilirubin, receipt of factor Xa inhibitors, days receiving non-invasive respiratory support and days receiving corticosteroids prior to intubation were significantly associated with the outcome variable. The validation cohort comprised 1,179 patients. C-TIME had the highest AUROC of 0.75 (95%CI 0.72–0.79), vs 0.67 (0.64–0.71) and 0.59 (0.55–0.62) for APACHE and SOFA, respectively (Chi2 P<0.0001).
CONCLUSIONS: C-TIME is the only mortality prediction score specifically developed and validated for COVID-19 patients who require mechanical ventilation. It has acceptable discriminant accuracy and goodness-of-fit to assist decision-making just prior to intubation. The C-TIME mortality prediction calculator can be freely accessed on-line at https://phoenixmed.arizona.edu/ctime.

Entities:  

Mesh:

Year:  2022        PMID: 35793312      PMCID: PMC9258832          DOI: 10.1371/journal.pone.0270193

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

The Coronavirus disease 2019 (COVID-19) pandemic raised concern that an overwhelming surge of critically-ill patients might require exclusion of patients with high predicted mortality from receiving mechanical ventilation [1]. The majority of COVID-19 ventilator triage policies surveyed in 2020 incorporated the Sequential Organ Failure Assessment (SOFA) score to predict mortality [2]. The SOFA score was originally designed to predict the mortality of sepsis patients based on assessment of the respiratory, renal, cardiovascular, hepatobiliary, coagulation and central nervous systems [3], and was externally validated in general ICU patient populations [4, 5]. However, a recent study using SOFA score data collected 48 hours prior to intubation in patients with COVID-19 pneumonia yielded a discriminant accuracy for mortality prediction of only 0.59 (95%CI: 0.55–0.63) [6]. Among other general ICU mortality scoring systems, the acute physiology and chronic health evaluation, version IVa (APACHE IVa) is notable, incorporating 145 variables and disease-specific regression models [7]. APACHE IVa has been shown to have superior discriminant accuracy compared to other general ICU mortality prediction models [8-10] and has been externally validated for COVID-19 patients [11], but it is based on variables obtained at the time of admission rather than at the time of intubation. Although many other scoring systems have been specifically developed to predict mortality in patients with COVID-19 [12-34], none focused on assessing the patient at the time of intubation, when patients, families and providers are forced to make critical decisions regarding life support. Informed consent for endotracheal intubation should include an objective discussion of prognosis, and the need for ventilator triage based on predicted mortality might yet arise in future regional COVID-19 hotspots.
Therefore, the point in time when it becomes apparent that a patient with COVID-19 pneumonia is going to require mechanical ventilation is arguably the most important time to determine their prognosis. Our aim was to develop a mortality prediction system we called C-TIME (COVID-19 Time of Intubation Mortality Evaluation) using variables typically available in the 48 hours before intubation, in order to inform consent, frame family expectations and assist end-of-life planning. Our secondary aims were to validate C-TIME, determine its discriminant accuracy and calibration, and compare it to the SOFA and APACHE IVa mortality prediction models.

Materials and methods

Study design

A retrospective cohort study, approved by the research determination committee of the University of Arizona IRB, with the necessity for consent waived for use of de-identified data, was set in 18 medical-surgical ICUs in the Southwest United States between 6/1/2020 and 3/23/2021. June 2020 was chosen for cohort inception because preliminary results of the RECOVERY trial [35] were released at that time, and administration of dexamethasone was rapidly adopted in our study ICUs. We randomly assigned study patients to model-development and validation cohorts.

Participants

Consecutive ICU patients were included based on the following eligibility criteria: ≥18 years of age; positive SARS-CoV-2 RNA by reverse transcriptase polymerase chain reaction or positive rapid antigen test; and undergoing endotracheal intubation ≥4 hours after admission. All patients were followed until hospital discharge or death.

Variables and data sources

The main outcome variable was hospital mortality or discharge to hospice after terminal extubation; henceforth this combined outcome is referred to as “mortality”. We chose candidate predictor variables to use in model development based on previous literature [12-34] and hypotheses generated by our clinical research team. We examined our clinical dataset and only included candidate predictor variables that were missing in less than 10% of study patients. We made an exception for the partial pressure of arterial oxygen/fraction of inspired oxygen (PaO2/FiO2) ratio, which we hypothesized would be a particularly important predictor [36]; therefore we planned a priori to impute missing PaO2/FiO2 data (see statistics section below). The following 25 candidate predictor variables, collected in the time period before intubation, were chosen for inclusion in model development. Patient characteristics included: age, gender, body mass index (BMI), and prior history of diabetes mellitus, hypertension, COPD, coronary artery disease, cancer or solid organ transplant. Physical examination findings included maximum temperature, lowest mean arterial pressure and lowest Glasgow Coma Scale score in the 48 hours prior to intubation. Laboratory variables included the highest concentration of creatinine and bilirubin, and the lowest platelet count and PaO2/FiO2 ratio in the 48 hours prior to intubation. Management variables comprised hospital days prior to intubation; hospital days receiving non-invasive respiratory support (high-flow nasal cannula oxygen, continuous positive airway pressure or bilevel positive airway pressure) prior to intubation; hospital days receiving corticosteroids (dexamethasone, methylprednisolone or prednisone) prior to intubation; and administration of any of the following drugs: corticosteroids, therapeutic dose heparin/enoxaparin, oral Xa inhibitors, subcutaneous or intravenous insulin, or norepinephrine infusion.
We also included intubation during surge conditions, defined as the time period(s) during which ≥400 ventilators (>5.5 ventilators per 100,000 population) were in use by COVID-19 patients in the state of Arizona, where most of our study hospitals were located. By these criteria, surge conditions occurred in our ICUs in the summer (6/23/2020–8/7/2020) and winter (12/3/2020–2/14/2021) [37]. Variables needed to calculate the SOFA score were also extracted from the Cerner Millennium® electronic medical record, using the worst values in the 48 hours prior to intubation. SOFA variables include: PaO2, FiO2, use of invasive or non-invasive ventilatory support, lowest MAP, use of intravenous vasopressors, GCS, platelet count, serum creatinine and bilirubin. These variables are used to assign a score of 0–4 to each of the corresponding organ systems, with higher scores indicating worse organ function. The resulting cumulative SOFA score, ranging from 0–24, determines predicted mortalities of 0–95% based on previous validation studies [3-5]. Data used to calculate the APACHE IVa predicted mortality were collected by direct electronic interface between Cerner Millennium® and Philips Healthcare Analytics. These included the worst physiological values occurring during the first ICU day, chronic health conditions and admission information. Predicted hospital mortality was provided by Philips Healthcare using proprietary APACHE IVa methodology (Cerner Corp. Kansas City, MO) [7].
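The cumulative scoring described above can be illustrated with a minimal sketch of three of the six SOFA organ-system subscores (coagulation, liver, renal), using the standard published SOFA thresholds; this is not the authors' extraction code, and the respiratory, cardiovascular and CNS subscores (which depend on support and vasopressor use) are omitted for brevity.

```python
# Illustrative sketch of SOFA-style cumulative scoring (standard thresholds,
# three of six organ systems only; not the study's actual extraction code).

def coagulation_score(platelets_k_per_mm3):
    """Platelet-count subscore, 0 (normal) to 4 (worst)."""
    for threshold, score in [(20, 4), (50, 3), (100, 2), (150, 1)]:
        if platelets_k_per_mm3 < threshold:
            return score
    return 0

def liver_score(bilirubin_mg_dl):
    """Serum bilirubin subscore, 0 to 4."""
    for threshold, score in [(12.0, 4), (6.0, 3), (2.0, 2), (1.2, 1)]:
        if bilirubin_mg_dl >= threshold:
            return score
    return 0

def renal_score(creatinine_mg_dl):
    """Serum creatinine subscore, 0 to 4 (urine-output criteria omitted)."""
    for threshold, score in [(5.0, 4), (3.5, 3), (2.0, 2), (1.2, 1)]:
        if creatinine_mg_dl >= threshold:
            return score
    return 0

# Subscores sum toward the cumulative 0-24 SOFA total.
partial_sofa = coagulation_score(80) + liver_score(0.6) + renal_score(2.5)
print(partial_sofa)  # 4  (platelets 80 -> 2, bilirubin 0.6 -> 0, creatinine 2.5 -> 2)
```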

Study size

We calculated that a sample size of 2500 patients would allow analysis of 25 candidate predictor variables in our logistic regression. This was based on assumed mortality of 50%, providing 25 events for each predictor variable in both the model-development and validation cohorts.
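The events-per-variable arithmetic behind this target can be checked directly; the 50% mortality assumption and the even split between cohorts are taken from the text.

```python
# Events-per-variable check for the planned sample size.
total_patients = 2500
assumed_mortality = 0.50
n_candidate_variables = 25

events_per_cohort = (total_patients / 2) * assumed_mortality  # 625 expected deaths per cohort
events_per_variable = events_per_cohort / n_candidate_variables

print(events_per_variable)  # 25.0 events per candidate predictor in each cohort
```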

Statistical analysis

All study patients underwent randomization into one of the two cohorts. Missing FiO2 values were imputed as the mean FiO2 for all study patients for whom FiO2 was known. Missing PaO2 values were imputed as the mean PaO2 of all study patients receiving the same FiO2. The 25 candidate predictor variables were entered into backwards, step-wise, multiple logistic regression (MLR) using the model-development cohort, with mortality as the dependent outcome variable. We retained all variables that remained in the model at P≤0.05. The MLR logistic equation was then applied to calculate predicted mortality for each patient in the validation cohort, and these data were used to calculate the area under the receiver operating characteristic curve (AUROC), Nagelkerke’s pseudo R2 and the Hosmer-Lemeshow goodness-of-fit test. AUROCs of all patients for whom all three predicted mortalities (C-TIME, SOFA and APACHE IVa) were calculable were compared using the Chi-squared statistic. Calibration of each of the three models was compared using calibration belts [38]. We explored model performance in relation to the assumption that a predicted mortality of ≥75% or ≥90% might influence the decision whether or not to intubate. Therefore, we identified patient subgroups with ≥75% and ≥90% predicted mortality for each of the three models, and enumerated the observed mortality for each subgroup. This allowed calculation of the sensitivity of each model for mortality (the observed mortality in each subgroup divided by overall mortality) at the two cutoffs. The Wilson method was used to calculate 95% confidence intervals for single proportions. We used STATA® Version 17 (Statacorp, College Station, TX) for all statistical analyses.
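The AUROC used throughout is equivalent to the probability that a randomly chosen non-survivor receives a higher predicted mortality than a randomly chosen survivor (the Mann-Whitney interpretation). A minimal, dependency-free sketch follows; the toy scores are invented for illustration.

```python
from itertools import product

def auroc(pos_scores, neg_scores):
    """AUROC via the Mann-Whitney pairwise interpretation: the fraction of
    (positive, negative) score pairs ranked correctly, ties counted as half."""
    wins = 0.0
    for p, n in product(pos_scores, neg_scores):
        if p > n:
            wins += 1.0
        elif p == n:
            wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: predicted mortalities for 3 non-survivors vs 3 survivors.
died     = [0.9, 0.8, 0.6]
survived = [0.7, 0.4, 0.2]
print(auroc(died, survived))  # 8 of 9 pairs ranked correctly -> 0.888...
```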

Sensitivity analysis

We calculated AUROCs for C-TIME and SOFA using only validation cohort patients for whom FiO2 and PaO2 were known (i.e. excluding patients with imputed values). These were compared to the AUROCs of our primary analysis by using z-tests on equality of proportions to test whether data imputation affected AUROC. Recalculation of APACHE IVa AUROC was not necessary because it did not incorporate imputed data.

Results

Between 6/1/2020 and 3/23/2021, 18,431 patients with COVID-19 were admitted to study hospitals. Of these, 4,695 were admitted to the ICU and 2,440 were intubated ≥4 hours after admission. Characteristics of these 2,440 study patients are presented in Table 1. The median age was 66 years, 61.6 percent were men, and 50.5 percent were Hispanic, Native American or African American. Eighty-six percent of patients received corticosteroids. Eleven variables were significant in the final MLR model (see Table 2). The validation cohort comprised 1,219 patients, of whom 1,179 had complete data for analysis by MLR. Observed mortality in the validation cohort was 65.1%.
Table 1

Clinical characteristics of 2440 study patients.

Characteristic | Model-Development Cohort (n = 1,221) | Validation Cohort (n = 1,219)
Age in years, median (IQR) | 66 (57–74) | 66 (56–75)
Age in years, no. (%) | |
    18–44 | 122 (10.0%) | 123 (10.1%)
    45–64 | 429 (35.1%) | 436 (35.8%)
    65–74 | 395 (32.3%) | 347 (28.5%)
    75–84 | 226 (18.5%) | 273 (22.4%)
    >85 | 49 (4.0%) | 40 (3.3%)
Male, no. (%) | 740 (60.7%) | 762 (62.4%)
Race/ethnicity, no. (%)* | |
    Non-Hispanic white | 542 (44.4%) | 549 (45.0%)
    Hispanic | 481 (39.4%) | 462 (37.9%)
    Native American | 88 (7.2%) | 94 (7.7%)
    African American | 45 (3.7%) | 63 (5.2%)
    Asian/Pacific Islander | 22 (1.8%) | 18 (1.5%)
    Other/Multiple Race/Unknown | 43 (3.5%) | 33 (2.7%)
Body Mass Index, median (IQR) | 31.3 (27.2–37.1) | 31.8 (27.4–38.0)
Admitted during surge, no. (%) | 897 (73.5%) | 893 (73.3%)
Medications, no. (%) | |
    Steroids | 1,068 (87.5%) | 1,046 (85.8%)
    Insulin | 655 (53.6%) | 680 (55.8%)
    Therapeutic heparin/enoxaparin | 105 (8.6%) | 96 (7.9%)
    Oral Xa inhibitors | 77 (6.3%) | 83 (6.8%)
    Norepinephrine | 249 (20.4%) | 233 (19.1%)
Comorbidities, no. (%) | |
    Diabetes | 726 (59.5%) | 730 (60.1%)
    Hypertension | 929 (76.1%) | 922 (75.6%)
    Coronary Artery Disease | 343 (28.1%) | 353 (29.0%)
    COPD | 187 (15.3%) | 180 (14.8%)
    Cancer | 117 (9.6%) | 126 (10.3%)
    Solid organ transplant | 17 (1.4%) | 17 (1.4%)
Physical examination, median (IQR) | |
    Minimum mean arterial pressure (mmHg) | 70.7 (61.7–80.3) | 71.0 (62.3–80.0)
    Maximum temperature (°F) | 99.0 (98.4–100.0) | 99.0 (98.4–99.9)
    Minimum Glasgow Coma Scale score | 15 (14–15) | 15 (14–15)
Labs, median (IQR) | |
    C-reactive protein, mg/L | 120.5 (62.4–194.8) | 126.6 (71.6–209.4)
    Creatinine**, mg/dl | 0.93 (0.7–1.4) | 0.97 (0.7–1.5)
    Bilirubin**, mg/dL | 0.6 (0.4–0.8) | 0.5 (0.4–0.8)
    PaO2/FiO2 ratio** | 73.7 (58.0–79.0) | 73.7 (57.0–80.6)
    Platelets**, K/mm3 | 229 (163–303) | 223 (160–300)
Pre-intubation hospital course, median (IQR) | |
    Hours from admission to intubation | 91.8 (31.4–213.1) | 86.5 (31.8–190.3)
    Days on non-invasive respiratory support before intubation | 3.0 (1.0–7.0) | 3.0 (1.0–6.0)
    Days receiving steroids before intubation | 4.0 (1.0–8.0) | 3.0 (1.0–8.0)
Outcomes, no. (%) | |
    In-hospital death | 771 (63.2%) | 789 (64.6%)
    Terminal extubation and discharge to hospice | 42 (3.6%) | 5 (0.4%)
    Combined death/discharge to hospice | 813 (66.6%) | 794 (65.1%)

*Race/ethnicity was as reported by the patient at time of admission.

**Variables incorporated into SOFA score.

Table 2

Multiple logistic regression model with significant predictor variables for the outcome mortality in the model-development cohort.

Significant predictor variables in the C-TIME MLR model | Odds ratio* (95% CI) | P value
Age (years) | 1.71 (1.47–1.98) | <0.001
Male gender | 1.41 (1.06–1.89) | 0.019
COPD | 1.63 (1.07–2.49) | 0.024
Minimum mean arterial pressure (mmHg) | 0.81 (0.70–0.93) | 0.004
Minimum Glasgow Coma Scale score | 0.82 (0.70–0.95) | 0.008
PaO2/FiO2 (mmHg) | 0.73 (0.62–0.86) | <0.001
Maximum creatinine (mg/dl) | 1.17 (1.00–1.36) | 0.050
Maximum bilirubin (mg/dl) | 1.55 (1.13–2.13) | 0.006
Days receiving non-invasive respiratory support | 1.52 (1.08–2.13) | 0.017
Days receiving corticosteroids | 1.43 (1.05–1.94) | 0.024
Received oral Xa inhibitors | 2.37 (1.16–4.85) | 0.018

*Odds ratios are associated with a one standard deviation (SD) increment for continuous variables. Values used for SD: age: 13.7 years, MAP: 13.7 mmHg, PaO2/FiO2: 78.3 mmHg, creatinine: 1.9 mg/dl, bilirubin: 2.0 mg/dl, days receiving corticosteroids: 5 days; minimum Glasgow Coma Scale score: 3; days receiving non-invasive respiratory support before intubation: 5 days.
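The per-SD odds ratios in Table 2 follow from scaling a per-unit logistic regression coefficient by the variable's SD before exponentiating. A sketch for the age variable; the per-year coefficient of 0.039 is a hypothetical value chosen for illustration, and only the 13.7-year SD comes from the footnote.

```python
import math

# Convert a per-unit logistic coefficient to a per-SD odds ratio: OR_SD = exp(beta * SD).
beta_per_year = 0.039   # hypothetical log-odds increase per year of age (illustrative only)
sd_age_years = 13.7     # SD for age reported in the Table 2 footnote

or_per_sd = math.exp(beta_per_year * sd_age_years)
print(round(or_per_sd, 2))  # 1.71, i.e. the odds of mortality rise ~71% per SD of age
```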

C-TIME AUROC was 0.75 (95%CI 0.72–0.79), Nagelkerke’s pseudo R2 = 0.25, and the Hosmer-Lemeshow Chi2 showed acceptable goodness-of-fit with P = 0.29. (Note: this P value >0.05 indicates no significant difference between predicted and observed mortalities in subgroups of cohort patients, i.e., good calibration.) Two hundred seventeen of 1,179 validation cohort patients did not meet criteria for APACHE IVa calculations, and the remaining 962 patients were included in our comparison between the three models. The median (interquartile range) predicted mortality from C-TIME, SOFA and APACHE were 0.71 (0.54–0.83), 0.18 (0.07–0.26) and 0.20 (0.09–0.38), respectively. C-TIME had the highest AUROC of 0.75 (95%CI: 0.72–0.79), vs 0.67 (0.64–0.71) and 0.59 (0.55–0.62) for APACHE and SOFA, respectively (Chi2 P<0.0001) (see Fig 1). C-TIME was well calibrated (see Fig 2), with P = 0.215 (C-TIME predicted mortalities were not significantly different from observed mortalities). APACHE and SOFA had poor overall calibration (see Figs 3 and 4), deviating significantly from observed mortality (P<0.001 for each). Calibration belt plots showed APACHE and SOFA were only acceptably calibrated when predicted mortality was ≥84% and >73% respectively, and post-hoc analysis revealed that mortality in that range was uncommonly predicted by either method. For instance, we noted 84% of patients had SOFA scores ≤9, corresponding with <26% mortality.
Fig 1

Comparative AUROC of C-TIME, APACHE IVa, and SOFA mortality prediction systems.

Fig 2

Calibration belt plot for C-TIME.

Fig 3

Calibration belt plot for APACHE IVa.

Fig 4

Calibration belt plot for SOFA.

C-TIME classified 486 patients as having ≥75% mortality, of whom 399 died, yielding sensitivity of 50% (95%CI: 47–54%) at that cutoff. In comparison, APACHE and SOFA classified 46 and 120 patients as having ≥75% mortality yielding sensitivities of 5%, (95%CI:4–7%) and 12% (95%CI: 9–14%) respectively. C-TIME classified 141 patients as having ≥90% mortality, of whom 128 died, yielding sensitivity of 16% (95%CI: 14–19%) at that cutoff. In comparison, APACHE and SOFA classified 15 and 0 patients as having ≥90% mortality, yielding sensitivities of 2% (95%CI:1–3%) and zero respectively.
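The Wilson intervals used for these sensitivities can be sketched as follows; the 399/794 example reproduces the C-TIME figure at the ≥75% cutoff, with 794 total deaths taken from the validation-cohort outcome in Table 1.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a single proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# C-TIME sensitivity at the >=75% predicted-mortality cutoff:
# 399 observed deaths among flagged patients, out of 794 total deaths.
lo, hi = wilson_ci(399, 794)
print(round(399 / 794, 2), round(lo, 2), round(hi, 2))  # 0.5 0.47 0.54
```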

Sensitivity analysis in relationship to imputed data

Eighty percent of study patients for whom FiO2 was recorded had an FiO2 of 100%. FiO2 was imputed to be 96% in 202/2,440 patients (8.3%) with missing data. PaO2 was imputed in 647/2,440 patients (26.5%). Sensitivity analysis showed that C-TIME and SOFA AUROCs in the subset of 896 validation patients without imputed PaO2 were 0.75 (CI 0.71–0.79) and 0.58 (CI 0.54–0.62), respectively, essentially unchanged from the AUROCs calculated for the full validation cohort.
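The imputation rule described in the Methods (missing PaO2 replaced by the mean PaO2 of patients on the same FiO2) can be sketched with plain dictionaries; the toy cohort values below are invented for illustration.

```python
from collections import defaultdict

def impute_pao2(records):
    """Fill missing PaO2 with the mean PaO2 of patients on the same FiO2.
    Each record is a dict with 'fio2' and a possibly-None 'pao2'."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for r in records:  # first pass: group means of the observed values
        if r["pao2"] is not None:
            sums[r["fio2"]] += r["pao2"]
            counts[r["fio2"]] += 1
    for r in records:  # second pass: fill gaps from the matching FiO2 group
        if r["pao2"] is None:
            r["pao2"] = sums[r["fio2"]] / counts[r["fio2"]]
    return records

# Toy cohort: two patients on FiO2 1.0 with known PaO2, one with it missing.
cohort = [
    {"fio2": 1.0, "pao2": 60.0},
    {"fio2": 1.0, "pao2": 80.0},
    {"fio2": 1.0, "pao2": None},
]
print(impute_pao2(cohort)[2]["pao2"])  # 70.0 (mean of the observed FiO2-1.0 group)
```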

Discussion

The C-TIME mortality prediction model, based on eleven easily obtained clinical and laboratory variables, has better discriminant accuracy than APACHE IVa with its 145 variables [7]. Furthermore, the C-TIME model has acceptable calibration and sensitivity in patients with high predicted mortality, in whom C-TIME may be helpful in making end-of-life decisions. Our study hospitals range from tertiary academic centers to community and critical access facilities serving a variety of persons from urban and rural communities with a wide diversity of racial/ethnic backgrounds and socioeconomic status, enhancing the external generalizability of our findings. Well over one hundred prognostic systems, including general ICU systems (such as SOFA and APACHE) and novel systems specifically developed for COVID-19 patients, have already been published to predict clinical outcomes in patients with COVID-19 [12]. These vary by target patient population, predictor variables and outcomes of interest. To provide context for C-TIME, we reviewed comparable scoring systems that were developed and validated specifically for hospitalized COVID-19 patients, incorporated commonly available clinical and laboratory predictor variables, and reported AUROCs for in-hospital mortality [13-34]. Several features distinguish C-TIME from other validated COVID-19 mortality prediction systems we reviewed. 1) C-TIME is the only system that specifically evaluates patients with COVID-19 pneumonia just before they require mechanical ventilation. The discriminant accuracy of other prognostic models at this point in a patient’s clinical course is unknown, due to spectrum effect [40]. 2) The C-TIME study cohort had by far the highest reported mortality (65%) of any of the previous studies, as would be expected for intubated COVID-19 patients [39].
The mortality of the study cohort has a strong influence on the operating characteristics of associated mortality prediction systems [40], another reason why previously reported mortality prediction scores are likely not accurate if used at the time of intubation. 3) Other mortality prediction systems utilized study cohorts that included patients admitted prior to 6/2020, when preliminary results of RECOVERY were released. The inclusion of significant numbers of patients who did not receive corticosteroids could limit their generalizability in relationship to current practice patterns. Eighty-six percent of our study patients received corticosteroids before intubation. 4) C-TIME is the only model that incorporates treatment variables. Days receiving corticosteroids and days receiving non-invasive respiratory support prior to intubation were associated with mortality in our model-development and validation cohorts, and were also significantly associated with surge conditions (p = 0.0003 and 0.005, respectively). Surviving patients received a median of two days of steroids and two days of non-invasive respiratory support; non-survivors received a median of nine days of steroids and eight days of non-invasive respiratory support. A recent study showed that mortality increased significantly during the winter 2020 COVID-19 surge [41]; however, a meta-analysis concluded that delaying intubation does not influence mortality [42]. It is possible that the associations observed in our study might be due to prolonged efforts at non-invasive respiratory support and corticosteroid treatment of patients during surge conditions, selecting treatment non-responders for intubation. One particular C-TIME variable deserves brief comment: the association of factor Xa inhibitors with increased mortality. Analysis of a sample of these patients showed that pre-existing atrial fibrillation was the indication for factor Xa inhibitors in 80%.
It is possible that receiving factor Xa inhibitors was a confounding variable representing pre-existing atrial fibrillation in our model. Several other mortality prediction systems with acceptable operating characteristics are available for prognostication at the time of admission to the hospital, rather than at the time of intubation. The 4C score is supported by the largest study cohort and has an AUROC (0.77) similar to C-TIME [13]. We noted the highest AUROCs were reported for systems that prognosticated using variables from the time of admission in Hubei province early in the pandemic [16, 22, 26, 30, 32]. We feel these results are likely irreproducible outside the special circumstances under which they were reported. This contention is supported by a study using data from the Veterans Affairs Data Warehouse [19] that externally validated two of these prediction scores [30, 32] and found much lower AUROCs than those originally reported: 0.68 vs. 0.91, and 0.72 vs. 0.94, respectively. This phenomenon was also demonstrated for the SOFA score, which achieved AUROCs of 0.89 (0.83–0.96) [43] and 0.99 (0.98–1.00) [44] in Hubei province early in the pandemic, versus 0.58 and 0.61 (0.53–0.70) in larger, more recent studies from the US and UK [6, 15]. The calibration belt plots in Figs 2–4 show that C-TIME has good fit across the entire range of mortality prediction, and that APACHE and SOFA have poor fit, underestimating mortality over much of the range. Calibration is better for APACHE and SOFA when predicted mortality is above 75%. However, APACHE and SOFA uncommonly predict mortality in this range; the upper limits of their IQRs for predicted mortality are 38% and 26% respectively, despite the overall mortality of 65% in the validation cohort. This is consistent with the findings of our prior study, which showed that SOFA is likely to under-estimate mortality in COVID-19 patients at the time of intubation because many have only single organ system failure at that point [6].
Such a patient, receiving 100% oxygen by bilevel positive airway pressure ventilation, with a resulting PaO2 of 55 mmHg, would have a SOFA score of 4 indicating <10% mortality. The lack of sensitivity for APACHE and SOFA at cutoffs of 75% and 90% predicted mortality would severely limit their utility in identifying high risk patients less likely to benefit from intubation.

Limitations of the study

Missing data were a major complication of our retrospective cohort design that prevented us from including less-frequently-ordered predictor variables such as C-reactive protein, and led us to impute missing PaO2 and FiO2 data. Our sensitivity analysis showed that the latter did not affect our AUROC estimates. Our EMR data source limited our ability to include variables not recorded as discrete data, such as COVID-19 vaccination status and pre-existing atrial fibrillation. The discriminant accuracy achieved by C-TIME was modest, although similar to several other COVID-19 mortality prediction systems with AUROCs ranging 0.72–0.79 [15, 18, 25, 27, 29, 45]. We believe that it is inherently difficult to predict COVID-19 mortality at the time of intubation because such patients are relatively clinically homogeneous; most have life-threatening, single-organ respiratory failure (see Table 1) [3]. Low variation in predictor variables reduces discriminant accuracy. This could explain why APACHE IVa, which achieved an AUROC of 0.88 in a large general ICU population [7], yielded an AUROC of only 0.66 in our study cohort. C-TIME (and all other COVID-19 prognostic systems) are likely to lose discriminant accuracy over time, as factors influencing survival evolve. These factors might include advances in therapy and emergence of new viral strains. The aforementioned decline in discriminant accuracy for SOFA reported in Hubei vs the US and UK shows that discriminant accuracy reported in one historical setting may not be generalizable to later settings. Thus, any prognostic scoring system for COVID-19 will likely require repeated validation over time. We have begun the process of re-validating C-TIME using data collected during the Omicron surge.

Conclusions

C-TIME is the only currently available mortality prediction score specifically developed and validated for COVID-19 patients who require intubation. It has acceptable discriminant accuracy and goodness-of-fit to assist informed consent for intubation and other end-of-life decisions that arise at this critical juncture in the patient’s care. The C-TIME predicted mortality calculator can be freely accessed online at https://phoenixmed.arizona.edu/ctime.
PONE-D-21-40997
COVID-19 Time of Intubation Mortality Evaluation (C-TIME): A System for Predicting Mortality of Patients with COVID-19 Pneumonia at the Time They Require Mechanical Ventilation.
PLOS ONE Dear Dr. Raschke, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. You have developed and validated a promising mortality prediction system C-TIME in patients requiring intubation for COVID-19, which showed better discriminant accuracy compared with existed methods. The paper is well-written and you applied appropriate statistical methods. However, appropriate revisions and replies to reviewers’ comments are recommended before acceptance. Please submit your revised manuscript by Apr 25 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Yuyan Wang, Ph.D. Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts: a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide. 3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. 4. Please review your reference list to ensure that it is complete and correct. 
If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments:

1. The introduction needs to be expanded by adding more background and making the paragraphs more logical.
2. The missing-data problem for predictors needs more clarification and detail.
3. Some revisions are suggested to make the reported results more rigorous.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Congratulations on the outstanding work. This paper suggests a simple and feasible method for predicting patient prognosis at the time of intubation in COVID-19-related cases. Accessible variables have been used and a modest accuracy rate has been achieved, which makes the method reproducible and obtainable worldwide. It is intriguing that anticoagulation was associated with worse prognosis and that anti-Xa inhibitors were associated with increased mortality, especially because other studies have demonstrated the importance of anticoagulation in critical patients admitted due to COVID-19.

Reviewer #2: The authors developed and validated a mortality prediction system called C-TIME (COVID-19 Time of Intubation Mortality Evaluation) for patients requiring intubation for COVID-19, which can be very useful in clinical application. I'm impressed by the well-written paper, especially the discussion part.
Appropriate statistical methods are applied in this study, but there are substantial issues that need to be addressed for acceptance.

Major:

1. The Introduction section lists many drawbacks of current methods, but it lacks logic and is a little bit short. We suggest expanding the introduction content, for example, by giving more background about APACHE IVa and SOFA.

2. Does the missingness problem only exist in PaO2/FiO2? If not, what is the missing proportion of the other candidate predictors? How do you handle their missingness?

3. I'm confused about the final numbers of participants in both model development and validation. Could the authors clarify the final validation sample size used for C-TIME, APACHE IVa and SOFA, respectively, and if the numbers are different, could you explain why?

Minor:

1. Abstract: when abbreviating the terms (APACHE IVa/SOFA/AUROC), we suggest using the full term the first time, followed by the abbreviation in parentheses.

2. Method, Study design: "We randomly split our cohort in half". Why split 2440 into 1221 and 1219 (Table 1), not 1220 and 1220?

3. Method, Data sources: suggest briefly explaining how APACHE IVa and SOFA calculate predicted hospital mortality.

4. Method, Study size: "We calculated that a sample size of 2500 patients would allow analysis of 25 candidate predictor variables in our logistic regression". More details are needed. What are the parameters you used for this sample size estimation?

5. Method, Statistical Analysis: "AUROC compared using the Chi-squared statistic; Chi2 P<0.0001". Is this p-value for a comparison among all three methods or between two methods?

6. Table 1: suggest adding an "Overall" column describing all 2440 subjects and a "p-value" column for comparison between the development and validation cohorts; "Admitted during surge/Outcomes" is missing "No. (%)", and "Physical examination/Pre-intubation hospital course" is missing "median (IQR)"; the "**" marks on "Creatinine/Bilirubin/ratio/Platelets" appear to lack a footnote. Is there any missingness for the variables described in Table 1?

7. Table 3: it seems the predicted mortality probabilities from APACHE IVa and SOFA were much lower than the predicted probabilities from C-TIME. It would be helpful if the authors could give the distributions of the predicted probabilities of these three methods. (Not necessarily added to the results, but just to check the distributions and explain the numbers in Table 3.)

8. What is the final sample size in the sensitivity analysis?

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Beatriz Martinelli Menezes Goncalves
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
Please note that Supporting Information files do not need this step.
20 Apr 2022

To the Editors of PLOS ONE and the reviewers,

Thank you for your thoughtful consideration of our manuscript and recommendations to improve it. We have extensively rewritten the paper to address your concerns and those of the reviewers. We added calibration curves, an important methodological criterion for critical appraisal of research involving prognostic models (BMJ 2020;369:m1328). This additional analysis was revealing and entailed replacement of a table with a three-part figure in the revised version of our manuscript.

Editor's comments:

1) Our manuscript meets PLOS ONE's style requirements.

2) We are restricted by our Data Use Agreement (DUA) with Banner Health from sharing data without a request. Here is the applicable section from the DUA: "The Data will be used solely to conduct the Project and solely by Recipient Scientist and Recipient's faculty, employees, fellows, students, and agents ("Recipient Personnel") and Collaborator Personnel (as defined in Attachment 3) that have a need to use, or provide a service in respect of, the Data in connection with the Project and whose obligations of use are consistent with the terms of this Agreement (collectively, "Authorized Persons"). Except as authorized under this Agreement or otherwise required by law, Recipient agrees to retain control over the Data and shall not disclose, release, sell, rent, lease, loan, or otherwise grant access to the Data to any third party, except Authorized Persons, without the prior written consent of Provider. Recipient agrees to establish appropriate administrative, technical, and physical safeguards to prevent unauthorized use of or access to the Data and comply with any other special requirements relating to safeguarding of the Data." We have designated a non-author individual with data access to be contacted for data requests: Kieran Richardson, Research Development Manager, University of Arizona College of Medicine-Phoenix (KieranR@arizona.edu).
3) We have included our full ethics statement in the Methods section as requested.

4) Our reference list is complete and correct. None of the cited references have been retracted.

5) The introduction was expanded and clarified.

6) The problem of missing data for predictor variables was clarified.

7) The revisions suggested to make the reported results more rigorous have been made.

Response to Reviewers

We greatly appreciate the reviewers' comments. Reviewer #1 did not request any revisions.

Reviewer #2

Major revision requests:

1. The Introduction section lists many drawbacks of current methods, but it lacks logic and is a little bit short. Suggest expanding the introduction content, for example, by giving more background about APACHE IVa and SOFA.

We expanded the introduction, improving its logic and adding background regarding APACHE and SOFA. We added references currently numbered 4, 5 and 8-11 to support this expansion of the introduction.

2. Does the missingness problem only exist in PaO2/FiO2? If not, what is the missing proportion of the other candidate predictors? How do you handle their missingness?

We did not include esoteric predictor variables in our model development, but only variables that >90% of the patients in our study population had available in the 48 hours prior to intubation as part of typical clinical practice. Multiple logistic regression only included patients with no missing data (except for P/F). In the validation cohort, 1179/1219 (96.7%) of patients had results for all predictor variables (except P/F); conversely, 40 patients (3.3%) were missing at least one of the predictor variables and could not be included in the MLR. The only exception to this handling of missing data was PaO2/FiO2. We provide the rationale for why we imputed missing P/F data in the Methods.
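The complete-case handling the authors describe (every predictor required, except P/F, which was imputed rather than excluded) can be sketched as follows. This is an illustrative sketch only; the field names are hypothetical, not the study's actual variable names.

```python
# Sketch of complete-case filtering as described in the rebuttal: patients
# missing any predictor are excluded from the logistic regression, except
# that a missing PaO2/FiO2 ("pf_ratio", a hypothetical field name) is
# tolerated because it was imputed instead of triggering exclusion.
def complete_cases(patients, predictors, imputable=("pf_ratio",)):
    required = [p for p in predictors if p not in imputable]
    return [pt for pt in patients
            if all(pt.get(p) is not None for p in required)]

cohort = [
    {"age": 66, "creatinine": 1.2, "pf_ratio": None},   # kept: only P/F missing
    {"age": None, "creatinine": 0.9, "pf_ratio": 180},  # dropped: age missing
]
usable = complete_cases(cohort, ["age", "creatinine", "pf_ratio"])
```

On this toy cohort, only the first patient survives the filter, mirroring how 40 of 1219 validation patients were dropped while missing P/F alone did not cause exclusion.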
Several of our study site hospitals used a protocol that encouraged the use of oximetry and capnography instead of serial arterial blood gases in the monitoring of mechanically ventilated patients, and we were therefore missing P/F data on a surprisingly large minority of patients; however, we felt PaO2 was too important a variable to be left out of the model. Missing P/F data were therefore imputed, and a sensitivity analysis comparing the AUROC in patients with and without imputed P/F data showed essentially identical AUROCs whether or not imputed P/F data were used.

3. I'm confused about the final numbers of participants in both model development and validation. Could the authors clarify the final validation sample size used for C-TIME, APACHE IVa and SOFA, respectively, and if the numbers are different, could you explain why?

The validation cohort included 1219 patients, of whom 1179 had complete data and were included in the MLR validation analysis. All 1179 patients had sufficient data to calculate SOFA scores, but 217/1179 (18.4%) were excluded from the APACHE IVa analysis, which was provided by Philips Healthcare using proprietary APACHE IVa methodology. Philips applies its own exclusion criteria independent of our study design, the most common of which are: ICU length of stay < four hours, missing data, and transfer from another ICU. This left 962 patients for a fair comparison between C-TIME, SOFA and APACHE, as we thought comparing all 1179 patients with C-TIME and SOFA data to 962 patients with APACHE data would introduce bias. We clarified this in the revised manuscript.

Minor:

1. Abstract: when abbreviating the terms (APACHE IVa/SOFA/AUROC), suggest using the full term the first time, followed by the abbreviation in parentheses.

Done, although this increased the word count of the abstract.

2. Method, Study design: "We randomly split our cohort in half". Why split 2440 into 1221 and 1219 (Table 1), not 1220 and 1220?
We worded this poorly; the phrase "split our cohort in half" is inaccurate and has been clarified. Each patient's randomization is independent of the randomization assignments of the other patients, so there is no guarantee that equal numbers of patients will be randomized to each group. We used block randomization to mitigate large discrepancies, but small discrepancies like this are expected.

3. Method, Data sources: suggest briefly explaining how APACHE IVa and SOFA calculate predicted hospital mortality.

This was added to the Methods section, entailing the additional references currently numbered 4, 5 and 8-11.

4. Method, Study size: "We calculated that a sample size of 2500 patients would allow analysis of 25 candidate predictor variables in our logistic regression". More details are needed. What are the parameters you used for this sample size estimation?

We have added the following statement: ". . . based on the assumption of 50% mortality providing 25 events per predictor variable in each of the development and validation cohorts."

5. Method, Statistical Analysis: "AUROC compared using the Chi-squared statistic; Chi2 P<0.0001". Is this p-value for a comparison among three methods or between two methods?

Among all three methods; the null hypothesis is that all three AUROCs are equal.

6. Table 1: suggest adding an "Overall" column describing all 2440 subjects and a "p-value" column for comparison between the development and validation cohorts; "Admitted during surge/Outcomes" is missing "No. (%)", and "Physical examination/Pre-intubation hospital course" is missing "median (IQR)"; the "**" marks on "Creatinine/Bilirubin/ratio/Platelets" appear to lack a footnote. Is there any missingness for the variables described in Table 1?

We made the minor changes to Table 1 suggested above and added the intended footnotes.
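The events-per-variable reasoning behind the added sample-size statement works out as a short piece of arithmetic, sketched here under the stated assumptions (25 candidate predictors, 25 events per variable, 50% anticipated mortality, and two roughly equal cohorts):

```python
# Events-per-variable (EPV) sample-size arithmetic from the rebuttal:
# 25 predictors x 25 events each = 625 events needed per cohort; at an
# assumed 50% mortality that is 1250 patients per cohort, or 2500 patients
# across the development and validation cohorts combined.
def epv_sample_size(n_predictors, epv, event_rate, n_cohorts=2):
    events_needed = n_predictors * epv            # events required per cohort
    patients_per_cohort = events_needed / event_rate
    return int(patients_per_cohort * n_cohorts)   # total patients to enroll

total = epv_sample_size(n_predictors=25, epv=25, event_rate=0.50)  # 2500
```

This matches the 2500-patient target quoted from the Methods; the actual cohort of 2440 falls slightly short of it.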
Regarding statistical comparison of the development and validation cohorts, we had originally included a column with P-values, but one of our coauthors (S Curry) argued strongly against it, on the basis that statistical comparisons are inappropriate in the absence of a research hypothesis. He suggested that we use the format typically used by the NEJM for comparing control and treatment groups, an example of which is pasted below.

7. Table 3: it seems the predicted mortality probabilities from APACHE IVa and SOFA were much lower than the predicted probabilities from C-TIME. It would be helpful if the authors could give the distributions of the predicted probabilities of these three methods. (Not necessarily added to the results, but just to check the distributions and explain the numbers in Table 3.)

This was a very helpful comment, and we learned a great deal trying to respond to it adequately. The median (interquartile range) predicted mortality from SOFA and APACHE are 0.18 (0.07-0.26) and 0.20 (0.09-0.38) respectively, which we had not appreciated in our former data analysis. We therefore reworked the manuscript considerably, eventually deciding to simplify and clarify by moving the information previously in Table 3 to the text of the Results section and performing a formal treatment of calibration using the calibration belt technique of Nattino and Lemeshow (with new reference 39). This is added to the manuscript as Figure 2, which graphically illustrates that APACHE and SOFA underpredict mortality over much of their range. This change is supported by a recent review on critical appraisal of research involving prognostic models, in which analysis of calibration is considered an important criterion (BMJ 2020;369:m1328). We also added a brief post-hoc analysis of the frequency of SOFA scores, which showed that 83% of patients had SOFA scores <9, corresponding to a predicted mortality of 27%. This is quite illuminating since the overall observed mortality in the validation cohort was 65%.

8.
What is the final sample size in the sensitivity analysis?

This section was clarified: 896 patients in the validation cohort had non-imputed PaO2 and were used to recalculate the AUROC.

We hope this revision meets with your approval.

Respectfully,
Robert Raschke MD (and coauthors)

Submitted filename: Responsetoreviewers.docx

7 Jun 2022

COVID-19 Time of Intubation Mortality Evaluation (C-TIME): A System for Predicting Mortality of Patients with COVID-19 Pneumonia at the Time They Require Mechanical Ventilation.

PONE-D-21-40997R1

Dear Dr. Raschke,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Yuyan Wang, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Thank you for addressing all comments and submitting your revised paper. Nice work!

9 Jun 2022

PONE-D-21-40997R1
COVID-19 Time of Intubation Mortality Evaluation (C-TIME): A System for Predicting Mortality of Patients with COVID-19 Pneumonia at the Time They Require Mechanical Ventilation.

Dear Dr. Raschke:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Yuyan Wang
Academic Editor
PLOS ONE
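The calibration check discussed in the rebuttal above, comparing predicted with observed mortality, can be illustrated with a simple binned sketch. Note this is a much simpler stand-in for the Nattino and Lemeshow calibration-belt method the authors actually used, shown only to convey the idea of comparing predicted and observed rates.

```python
# Simplified binned calibration check: within each probability bin, compare
# the mean predicted mortality with the observed mortality rate. A model
# that underpredicts (as the rebuttal reports for APACHE and SOFA) shows
# observed rates well above the mean predictions in most bins.
def calibration_bins(predicted, observed, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(predicted, observed):
        b = min(int(p * n_bins), n_bins - 1)  # probabilities of 1.0 go in the top bin
        bins[b].append((p, y))
    rows = []
    for members in bins:
        if members:
            ps = [p for p, _ in members]
            ys = [y for _, y in members]
            # (mean predicted probability, observed event rate, bin count)
            rows.append((sum(ps) / len(ps), sum(ys) / len(ys), len(members)))
    return rows

rows = calibration_bins([0.05, 0.05, 0.95, 0.95], [0, 0, 1, 1])
```

On this perfectly calibrated toy input, each occupied bin's mean prediction matches its observed rate; a formal calibration belt additionally puts confidence bounds around this comparison.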
References: 41 in total

1.  The Toughest Triage - Allocating Ventilators in a Pandemic.

Authors:  Robert D Truog; Christine Mitchell; George Q Daley
Journal:  N Engl J Med       Date:  2020-03-23       Impact factor: 91.245

2.  Clinical review: scoring systems in the critically ill.

Authors:  Jean-Louis Vincent; Rui Moreno
Journal:  Crit Care       Date:  2010-03-26       Impact factor: 9.097

3.  Acute respiratory distress syndrome: the Berlin Definition.

Authors:  V Marco Ranieri; Gordon D Rubenfeld; B Taylor Thompson; Niall D Ferguson; Ellen Caldwell; Eddy Fan; Luigi Camporota; Arthur S Slutsky
Journal:  JAMA       Date:  2012-06-20       Impact factor: 56.272

4.  Real-time electronic health record mortality prediction during the COVID-19 pandemic: a prospective cohort study.

Authors:  Peter D Sottile; David Albers; Peter E DeWitt; Seth Russell; J N Stroh; David P Kao; Bonnie Adrian; Matthew E Levine; Ryan Mooney; Lenny Larchick; Jean S Kutner; Matthew K Wynia; Jeffrey J Glasheen; Tellen D Bennett
Journal:  J Am Med Inform Assoc       Date:  2021-10-12       Impact factor: 4.497

5.  Predicting outcomes in the Machine Learning era: The Piacenza score a purely data driven approach for mortality prediction in COVID-19 Pneumonia.

Authors:  Geza Halasz; Michela Sperti; Matteo Villani; Umberto Michelucci; Piergiuseppe Agostoni; Andrea Biagi; Luca Rossi; Andrea Botti; Chiara Mari; Marco Maccarini; Filippo Pura; Loris Roveda; Alessia Nardecchia; Emanuele Mottola; Massimo Nolli; Elisabetta Salvioni; Massimo Mapelli; Marco Agostino Deriu; Dario Piga; Massimo Piepoli
Journal:  J Med Internet Res       Date:  2021-05-16       Impact factor: 5.428

6.  Comparison of in-hospital mortality risk prediction models from COVID-19.

Authors:  Ali A El-Solh; Yolanda Lawson; Michael Carter; Daniel A El-Solh; Kari A Mergenhagen
Journal:  PLoS One       Date:  2020-12-28       Impact factor: 3.240

7.  The spectrum effect in tests for risk prediction, screening, and diagnosis.

Authors:  Juliet A Usher-Smith; Stephen J Sharp; Simon J Griffin
Journal:  BMJ       Date:  2016-06-22

8.  Ventilator Triage Policies During the COVID-19 Pandemic at U.S. Hospitals Associated With Members of the Association of Bioethics Program Directors.

Authors:  Armand H Matheny Antommaria; Tyler S Gibb; Amy L McGuire; Paul Root Wolpe; Matthew K Wynia; Megan K Applewhite; Arthur Caplan; Douglas S Diekema; D Micah Hester; Lisa Soleymani Lehmann; Renee McLeod-Sordjan; Tamar Schiff; Holly K Tabor; Sarah E Wieten; Jason T Eberl
Journal:  Ann Intern Med       Date:  2020-04-24       Impact factor: 25.391

9.  A novel severity score to predict inpatient mortality in COVID-19 patients.

Authors:  David J Altschul; Santiago R Unda; Joshua Benton; Rafael de la Garza Ramos; Phillip Cezayirli; Mark Mehler; Emad N Eskandar
Journal:  Sci Rep       Date:  2020-10-07       Impact factor: 4.379

10.  Validation of pneumonia prognostic scores in a statewide cohort of hospitalised patients with COVID-19.

Authors:  Yiyun Shi; Aakriti Pandita; Anna Hardesty; Meghan McCarthy; Jad Aridi; Zoe F Weiss; Curt G Beckwith; Dimitrios Farmakiotis
Journal:  Int J Clin Pract       Date:  2020-12-31       Impact factor: 3.149

