
Deep Learning-Based Time-to-Death Prediction Model for COVID-19 Patients Using Clinical Data and Chest Radiographs.

Toshimasa Matsumoto1, Shannon Leigh Walston1, Michael Walston2, Daijiro Kabata3, Yukio Miki1, Masatsugu Shiba2,3, Daiju Ueda4,5.   

Abstract

Accurate estimation of mortality and time to death at admission for COVID-19 patients is important, and several deep learning models have been created for this task. However, there are currently no prognostic models which use end-to-end deep learning to predict time to event for admitted COVID-19 patients using chest radiographs and clinical data. We retrospectively implemented a new artificial intelligence model combining DeepSurv (a multilayer-perceptron implementation of the Cox proportional hazards model) and a convolutional neural network (CNN) using 1356 COVID-19 inpatients. For comparison, we also prepared a DeepSurv model with only clinical data, a DeepSurv model with only images (CNNSurv), and a Cox proportional hazards model. Clinical data and chest radiographs at admission were used to estimate patient outcome (death or discharge) and duration to the outcome. Harrell's concordance index (c-index) of the DeepSurv with CNN model was 0.82 (0.75–0.88), significantly higher than that of the DeepSurv model with only clinical data (c-index = 0.77 (0.69–0.84), p = 0.011), CNNSurv (c-index = 0.70 (0.63–0.79), p = 0.001), and the Cox proportional hazards model (c-index = 0.71 (0.63–0.79), p = 0.001). These results suggest that the time-to-event prognosis model became more accurate when chest radiographs and clinical data were used together.
© 2022. The Author(s) under exclusive licence to Society for Imaging Informatics in Medicine.

Keywords:  Artificial intelligence; COVID-19; Chest radiography; Deep learning; Prognosis

Year:  2022        PMID: 35941407      PMCID: PMC9360661          DOI: 10.1007/s10278-022-00691-y

Source DB:  PubMed          Journal:  J Digit Imaging        ISSN: 0897-1889            Impact factor:   4.903


Introduction

As of November 2021, there are 250 million confirmed cases of COVID-19 worldwide, with more than 5 million deaths. The number of new cases is still increasing daily (https://covid19.who.int/). Therefore, it is essential for healthcare providers to efficiently triage patients with COVID-19. Predicting disease severity and progression in COVID-19 patients is important, as early intervention has been shown to reduce mortality [1, 2]. The Cox proportional hazards model, which relates explanatory variables to both the occurrence of an event and the time until that event, is a frequently applied analysis in medical research [3]. The model provides not just the outcome (i.e., deceased or not) but also the time to event, which is more helpful for clinical practice. There are several studies which estimate the prognosis of COVID-19 patients using this model [4-6]. These include models that predict the time to death [4], the severity of illness [5], and the length of hospital stay [6] for patients. In these studies, the Cox proportional hazards model showed high performance, but it has a limitation: it assumes linear covariate effects rather than modeling the nonlinear relationships that could better reflect actual clinical characteristics [3]. For example, BMI is a known nonlinear risk factor for COVID-19 admission and death [7]. Therefore, there is a need for a better solution that can handle nonlinear variables. In recent years, deep learning has been attracting attention in the medical field [8, 9]. With deep learning, it is possible to extract the complex linear and nonlinear relationships between clinical characteristics and individual prognosis. Integrating deep learning into a Cox proportional hazards model has led to the development of the deep learning survival neural network (DeepSurv) [10]. It has been shown to perform as well as or better than other survival analysis methods on survival data with linear and nonlinear covariates.
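Both the Cox model and DeepSurv are fitted by minimizing the negative log partial likelihood; DeepSurv simply replaces the linear predictor with a neural network output. A minimal pure-Python sketch of this shared loss (illustrative only, ignoring tied event times; `neg_log_partial_likelihood` is a hypothetical helper, not the authors' code):

```python
import math

def neg_log_partial_likelihood(risk_scores, times, events):
    """Cox negative log partial likelihood (no tie correction).

    risk_scores: model outputs log h(x) for each patient
    times:       observed time to death or to discharge/censoring
    events:      1 if death was observed, 0 if censored
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    loss, n_events = 0.0, 0
    for idx, i in enumerate(order):
        if events[i] == 1:
            # risk set: all patients still under observation at times[i]
            risk_set = order[idx:]
            log_sum = math.log(sum(math.exp(risk_scores[j]) for j in risk_set))
            loss += log_sum - risk_scores[i]
            n_events += 1
    return loss / max(n_events, 1)
```

Lower values mean the predicted risk ordering agrees better with the observed event order, which is why a model that assigns higher risk to patients who die earlier is preferred by this loss.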
The advantage of time-to-death estimation is that it can provide more information than the conventional binary classification task. Conventional binary classification does not estimate how many days until a patient is at increased risk of death. A time-to-death model, on the other hand, can estimate the risk of death over time from data at a fixed point in time (at the time of admission in this model). Medical images are known to be useful for prognostication in COVID-19. For example, the usefulness of chest radiographs [11-14] and chest CT [15, 16] has been reported. Although CT is three dimensional and highly sensitive, chest radiographs may be more useful in the COVID-19 pandemic because they are relatively quick, low cost, portable, and accessible. Some reports show that chest radiography for COVID-19 patients is indicative of the risk of hospitalization, duration of hospitalization, and risk of serious outcomes [12-14]. We hypothesized that we could build a better prognostic model by using chest radiographs together with clinical data. Since DeepSurv does not have a mechanism to handle images, we developed a new model that integrates a convolutional neural network (CNN), a deep learning architecture suited to image analysis, into DeepSurv. This allows us to handle both clinical data and images at once for prognosis estimation. So far, no study has developed an end-to-end deep learning model to predict time to event which combines clinical data and whole images as inputs. Using this newly created model, we predicted the mortality and time to death of patients hospitalized with COVID-19. Additionally, we scored the importance of the images compared to various clinical data.

Methods

Study Design

First, we integrated a CNN into DeepSurv (the DeepSurv with CNN model). Then, we developed and tested the model to estimate time to death for patients hospitalized with COVID-19. After developing the DeepSurv with CNN model, we compared the importance of the chest radiographs relative to the other clinical data and visualized the regions of interest on the radiographs. For comparison, we also developed a model with only the CNN component (CNNSurv model), a model with only the clinical variables (DeepSurv-only model), and a Cox proportional hazards model. An overview of our study is shown in Fig. 1. Chest radiographs were collected from the Stony Brook University COVID-19 dataset [17] in The Cancer Imaging Archive [18]. Since this dataset is open source, review by an ethics board was not required. We have created this article in compliance with the STARD statement [19].
Fig. 1

Overview of the prognostic models. We developed four prognostic models: a Cox proportional hazards model using only clinical data at the time of admission, a DeepSurv model using only clinical data at the time of admission, a DeepSurv with CNN model using clinical data at the time of admission and chest radiographs, and a CNNSurv-only model using chest radiographs


Study Patients and Ground Truth Labeling

This dataset was acquired at Stony Brook University from patients who tested positive by PCR for COVID-19. Since this dataset was consecutively extracted from the electronic medical records, it is representative of the population at that center. The dataset consists of pre- and post-admission images (radiographs, CT, MRI, etc.) and a CSV file listing test results and patient information. All imaging from the pre- and post-admission periods is available; the images closest to the time of admission were extracted for this study. As for the clinical records, only data from the time of admission were available; none of the subsequent data during hospitalization were available. In this dataset, anticoagulant therapy was used as a therapeutic intervention. However, studies show no significant difference in outcomes between patients who did and did not receive anticoagulant therapy [20-22], so the impact of therapeutic interventions taken prior to hospitalization on survival is likely to be minimal. Clinical data include medical history, blood tests, and vital signs. This dataset included 1384 COVID-19 patients. We extracted the one radiograph taken closest to the time of admission. All radiographs were taken in the anterior–posterior view. A total of 1356 patients were used for this study after excluding 28 patients who did not have a chest radiograph at admission. The eligibility flowchart is shown in Fig. 2. Clinical data and chest radiographs at admission were extracted as explanatory variables. As the ground truth, patient outcome (death or discharge) and duration until the outcome were extracted and used as objective variables. Detailed demographics are shown in Table 1.
Fig. 2

Eligibility diagram

Table 1

Demographics

                                                       Training/validation dataset   Test dataset
Total no. of patients                                  1082                          274
Male                                                   621                           159
Female                                                 461                           115
Age
  18–59                                                69                            18
  60–74                                                277                           69
  75–90                                                208                           51
Period between admission and radiography (mean ± SD)   1 ± 1 day                     1 ± 1 day
Smoking history                                        224                           58
Body mass index (mean ± SD)                            29.4 ± 6.0                    29.2 ± 5.4
Disease history
  Hypertension                                         394                           95
  Diabetes                                             221                           53
  Chronic heart disease                                151                           42
  Chronic kidney disease                               65                            16
  Chronic lung disease                                 160                           44
  Malignancy                                           79                            14
Outcomes
  Death                                                141                           39
  Discharge                                            941                           235
  Ventilation                                          175                           38
  ICU admission                                        215                           45

Clinical Data Selection

The objective was to create a model that could predict patient prognosis with data available at the time of hospitalization. We chose variables which have been shown to be risk factors for severe COVID-19 [23-25]. Clinical data include gender, age, smoking history, BMI, and medical history (hypertension, diabetes, chronic heart disease, chronic renal failure, chronic lung disease, and malignancy). Additionally, vital signs (heart rate, systolic blood pressure, respiratory rate, and blood oxygen saturation) and laboratory data (white blood cell count, sodium, potassium, C-reactive protein, aspartate aminotransferase, alanine aminotransferase, urea nitrogen, creatinine, lactate, brain natriuretic peptide, and D-dimer) were used. Mean vital signs and laboratory test results for each dataset are available in the Online Resource, Table 1.

Data Partition

All patients were randomly divided into training and test datasets at a ratio of 4:1. Definition of training and test datasets are shown in the Online Resource, Methods a. Since the partition was performed on a patient basis, there was no overlap of images or patients among the respective datasets. The training dataset included 1082 patients and the test dataset included 274 patients.
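The patient-level 4:1 partition described above can be sketched as follows (an illustrative helper, not the study's actual code; splitting on patient IDs rather than on individual images is what prevents any patient from leaking between the sets):

```python
import random

def patient_split(patient_ids, test_fraction=0.2, seed=42):
    """Split patient IDs roughly 4:1 into training and test sets.

    Because the split is performed on patient IDs, no patient (and
    therefore no image) can appear in both sets.
    """
    ids = sorted(patient_ids)          # deterministic base order
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_test = round(len(ids) * test_fraction)
    return ids[n_test:], ids[:n_test]  # (train, test)
```

Fixing the random seed makes the partition reproducible, which matters when several models (DeepSurv with CNN, CNNSurv, DeepSurv-only, Cox) must be compared on the identical test set.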

Image Processing

All chest radiographs in each dataset were resized to three sizes (256, 320, and 512 pixels). First, the longer side was downscaled while maintaining the aspect ratio. Second, the shorter side of the radiograph was padded with black.
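The two-step geometry (downscale the longer side, pad the shorter side with black) can be sketched as a small helper; the function name and return convention are illustrative assumptions, not part of the published pipeline:

```python
def letterbox_geometry(width, height, target=256):
    """Compute resize-then-pad geometry for a square model input.

    The longer side is scaled to `target` with the aspect ratio kept,
    then the shorter side is padded (with black) up to `target`.
    Returns (new_w, new_h, pad_w, pad_h), where pad_* is the total
    padding needed on that axis.
    """
    scale = target / max(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    return new_w, new_h, target - new_w, target - new_h
```

For example, a 3000 × 2500 radiograph scaled to a 256-pixel target becomes 256 × 213 with 43 rows of black padding, preserving the anatomy's proportions instead of stretching it.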

Model Implementation

We integrated a CNN into DeepSurv [10]. Specifically, we concatenated the output of the CNN to the fully connected layer of DeepSurv to create an end-to-end deep learning model. This model is composed of both CNN and MLP structures. During forward propagation, the output of the CNN calculated from a chest radiograph is concatenated with the clinical data, and together they are passed to the MLP. The loss value is calculated on the output values of the MLP; in other words, it is calculated on both the radiograph and the tabular data. The weights in both the CNN and MLP are then simultaneously updated. In each training session, the model took both the images and clinical data as input, predicted the outcome (death or discharge), and then both DeepSurv and the CNN in the model were simultaneously trained by back propagation. The CNN was developed using ResNet [26], DenseNet [27], and EfficientNet [28] architectures in the PyTorch framework [29]. It was trained from scratch with the training dataset using fivefold cross validation and independently tested with the test dataset. All images were augmented using random rotations, random shifts, brightness shifts, and horizontal flips. Detailed processes for development of the deep learning model are shown in the Online Resource, Methods b; machine environments are shown in the Online Resource, Methods c; an outline of the model is shown in the Online Resource, Fig. 1; and the source code is available online (https://github.com/deepsurv-cnn/). Additionally, we prepared a CNNSurv model, a DeepSurv-only model, and a Cox proportional hazards model for comparison. For the CNNSurv model, chest radiographs were used to estimate patients' prognosis. For the DeepSurv-only and the Cox proportional hazards models, clinical data were used.
The CNNSurv model and the DeepSurv-only model were trained from scratch with the training dataset using fivefold cross validation and independently tested with the test dataset. As for the Cox proportional hazards model, principal component analysis was applied to reduce the large number of explanatory variables to thirteen components in order to prevent overfitting. The Cox proportional hazards model was then independently evaluated with the test dataset.

Importance Values and Saliency Maps

Importance values for each explanatory variable, including the chest radiographs, were calculated using permutation importance with scikit-learn version 1.1.1 [30]. Permutation feature importance is a model inspection technique that is especially useful for nonlinear or opaque estimators. It is defined as the decrease in a model score when a single feature value is randomly shuffled. This procedure breaks the relationship between the feature and the target; thus, the drop in the model score is indicative of how much the model depends on the feature. A saliency map was generated for each chest radiograph to visualize the focus of the deep learning model as it estimated patient prognosis. A class activation map was applied to create class-discriminative visualizations of the chest radiographs [31]. A detailed explanation of the saliency map generation model is shown in the Online Resource, Fig. 2, and the source code is available online (https://github.com/deepsurv-cnn/).
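The study used scikit-learn's implementation; conceptually, permutation importance reduces to the following sketch (a simplified stand-in with hypothetical names, not the scikit-learn API):

```python
import random

def permutation_importance(score_fn, X, y, feature_idx, n_repeats=10, seed=0):
    """Mean drop in score when one feature column is shuffled.

    score_fn(X, y) -> float, higher is better (e.g., a c-index).
    X is a list of feature rows; feature_idx is the column to shuffle.
    """
    rng = random.Random(seed)
    baseline = score_fn(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)  # break the feature-target relationship
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - score_fn(X_perm, y))
    return sum(drops) / n_repeats
```

A feature the model relies on yields a large average drop; a feature the model ignores yields a drop near zero, which is how the chest radiograph's contribution can be ranked against the tabular clinical variables.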

Statistical Analysis

To evaluate the performance of the prognosis prediction models, we applied Harrell's concordance index (c-index) [32] for right-censored data and the Brier score [33]. The c-index of the models compared progression information (death or discharge, and duration) with the rank of the predicted risk score. In addition, the Kaplan–Meier method was used to stratify patients into high- and low-risk subgroups according to the median progression risk score. Stratification performance was assessed using the log-rank test based on the predicted risk score of the stratified subgroups [34]. The time-dependent area under the curve (AUC) was calculated based on the predicted results of the DeepSurv with CNN model. The prediction models were compared using binomial tests to show the difference in performance. A p-value less than 0.05 was considered significant. All analyses were performed using R (version 4.0.0) and Python 3.8.1.
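For reference, Harrell's c-index for right-censored data counts concordant pairs among comparable ones; a minimal O(n²) sketch (illustrative only, without special handling of tied event times):

```python
def harrell_c_index(risk_scores, times, events):
    """Harrell's concordance index for right-censored data.

    A pair (i, j) is comparable when the patient with the shorter
    observed time actually experienced the event (events[i] == 1).
    The pair is concordant when that patient also has the higher
    predicted risk; ties in risk count as 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    if comparable == 0:
        raise ValueError("no comparable pairs")
    return concordant / comparable
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking; censored patients contribute only as the longer-surviving member of a pair, which is how right-censoring is respected.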

Results

Model Development

The models were each independently developed using the training dataset for 100 training epochs using fivefold cross validation. The final hyperparameters for the DeepSurv with CNN, CNNSurv, and DeepSurv-only models were the Adam optimizer (learning rate = 0.001), a chest radiograph size of 256 pixels, a batch size of 64, and the DenseNet architecture. The cumulative contribution ratio of the principal component analysis was 0.97 for the Cox proportional hazards model.

Model Evaluation

The Cox proportional hazards model had a c-index of 0.71 (0.63–0.79) and a Brier score of 0.26 (0.20–0.32), the DeepSurv-only model had a c-index of 0.77 (0.69–0.84) and a Brier score of 0.20 (0.13–0.27), the CNNSurv model had a c-index of 0.70 (0.63–0.79) and a Brier score of 0.21 (0.19–0.23), and the DeepSurv with CNN model had a c-index of 0.82 (0.75–0.88) and a Brier score of 0.20 (0.13–0.27). The c-index of the DeepSurv with CNN model was significantly higher than those of the other models (p = 0.001 vs. the Cox proportional hazards model, p = 0.001 vs. the CNNSurv model, and p = 0.011 vs. the DeepSurv-only model) (Table 2).
Table 2

Results of each model

                                  C-index (95% CI)    Brier score (95% CI)   p value
Cox proportional hazards model    0.71 (0.63–0.79)    0.26 (0.20–0.32)       0.001
DeepSurv model                    0.77 (0.69–0.84)    0.20 (0.13–0.27)       0.011
CNNSurv model                     0.70 (0.63–0.79)    0.21 (0.19–0.23)       0.001
DeepSurv with CNN model           0.82 (0.75–0.88)    0.20 (0.13–0.27)       ref

CNN convolutional neural network

Kaplan–Meier curves for risk stratification are shown in Fig. 3. As shown, the Cox proportional hazards model, DeepSurv model, CNNSurv model, and DeepSurv with CNN model were discriminative in stratifying patients into high-risk and low-risk subgroups with p-values of 0.01, < 0.005, and < 0.005. Time-dependent AUC was over 0.8 throughout the first week (Fig. 4).
Fig. 3

Kaplan–Meier plots. The high-risk and low-risk patients from each model were divided based on the median model output value. This plot shows the ground truth survival of these patients, and the shaded area represents the accuracy of the prediction

Fig. 4

Time-dependent AUC

The importance values showed that age was the most important factor, followed by male sex. Images were in the top five, the highest of all the examinations and laboratory tests done in the hospital (Fig. 5). As for the saliency maps, the hottest region was on the area of infiltration (Online Resource 1).
Fig. 5

Permutation importance. These values show the relative importance of each of the variables included in the models. These values have been sorted from greatest impact to least impact for ease of reading. The bar for chest radiograph images has been highlighted in pink


Discussion

In this study, we developed a deep learning-based model to predict mortality and time to event by integrating clinical data and imaging information of COVID-19 patients. To our knowledge, this is the first study to develop an end-to-end deep learning model to predict time to event which combines clinical data and whole images as inputs. The results showed that the c-index of the DeepSurv with CNN model was 0.82 (0.75–0.88) in the test dataset, which enabled correct stratification of COVID-19 patients. This model performed better than the Cox proportional hazards model, the CNNSurv-only model, and the DeepSurv-only model (p-value < 0.05). The time-dependent AUC showed excellent performance throughout the first week. Predicting disease severity and progression in COVID-19 patients is important, as early intervention has been shown to reduce mortality [1, 2]. In COVID-19, for example, being male, advanced age, diabetes, and chronic respiratory disease are risk factors [23-25]. Chest radiography is important as a versatile imaging modality that has shown promise in aiding diagnosis and prognosis during the COVID-19 pandemic [11-14]. By merging chest radiography information with known risk factors, our model showed higher performance for estimating COVID-19 prognosis. There are some differences between our study and previous studies [35-44]. First of all, our model is a time-to-death predictive model using image and clinical data, which allows us to estimate how likely it is that death will occur in the days following hospitalization, rather than producing only a binary classification. In this respect, it differs from many previous studies [35-43]. On the other hand, one study showed a CNN and random survival forest-based model that predicts death or discharge, with time to event, for COVID-19 patients [44]. This study is similar in concept to ours.
Although that implementation is well designed, the training of the CNN and the random survival forest was performed separately, while our model is trained simultaneously. Training simultaneously allows the model to represent more complex relationships between images and other explanatory variables. To our knowledge, no other study has implemented a model which predicts time to event using end-to-end deep learning. Moreover, there has been no research comparing the importance of imaging among these factors. Here, we perform this comparison using permutation importance [45]. The top 10 results showed that in addition to chest radiographs, age, gender, medical history (chronic heart disease, chronic lung disease), oxygen saturation, and blood tests (C-reactive protein, lactate, creatinine) were important. The importance of age, gender, and medical history (chronic heart disease and chronic lung disease) has been reported in previous studies [23-25]. It also makes sense that oxygen saturation is important because it is an indicator of the severity of pneumonia. C-reactive protein and lactate represent the severity of the inflammatory response, and creatinine is a value indicating renal function. All of these are well-known indicators of severity [46, 47]. Chest radiographs contain information related to covariates such as age, gender, and oxygen saturation, and permutation importance is known to be lower when covariates are present. Even under these unfavorable conditions, the fact that the images rank in the top five indicates their outstanding importance. DeepSurv, which applies deep learning to the Cox proportional hazards model, is gradually being introduced to the field of medicine [43, 48-50]. For example, it has been applied to head and neck cancer [48], oral cancer [43], lung cancers [49], and brain metastasis [50] to create more accurate and personalized prognostic models.
However, the explanatory variables which can be used in DeepSurv are tabular data, not images. Until recently, it has been difficult to integrate images into a prognostic model. This difficulty can now be overcome thanks to the evolution of CNNs, starting with the neocognitron [51], and advances in computing power. Our DeepSurv with CNN model shows the best performance and may predict prognosis more accurately than the conventional Cox proportional hazards model [3] or the DeepSurv-only model [10], which use risk factors other than imaging. The model presented here has implications for other diseases which also currently rely on tabular clinical data to determine patient prognosis. For example, one of the most famous models for stratification of patient prognosis is the TNM staging of cancer patients [52]. Information about the tumor itself is aggregated into T, which most commonly uses only the diameter of the tumor. The malignancy of the tumor may be reflected in the shape, volume, and internal properties of the tumor margins, but these are not taken into account in TNM staging. If imaging information can be used directly for stratification, as in our model, more individualized and accurate prognosis prediction will be possible. Our model does not require high computing power, and the radiographs we handle are 256 × 256-pixel images, which is much smaller than chest radiographs in Digital Imaging and Communications in Medicine format. Therefore, the model can be implemented into daily practice using any system with a central processing unit [53]. However, systems in hospitals are not simple, and in most cases, multiple systems coexist and cooperate with each other. Therefore, clinical implementation of this model may require additional investment in medical technology, such as an image extraction system for picture archiving and communication systems, a computer analysis system for the images, and a system to provide the results to the physician.
This study had several limitations. The data in this study were collected from a single center. Further validation with a test dataset acquired at another institution is needed to show the robustness of the model. In addition, this was a retrospective study and should be reviewed prospectively. In the clinical application of this model, it is best to retrain or fine-tune it with data taken more recently because of the dataset shift problem [54]. In clinical practice, patients admitted with COVID-19 have a chest radiograph taken as routine clinical practice. Our model was able to predict patient survival with high performance by using conventional tests taken upon admission and patient information. It also revealed the importance of the images themselves compared to these tests. Predicting patient prognosis allows healthcare providers to perform appropriate triage and management, and optimize the use of resources. Application of this model may support not only patients but also the hospital systems which have struggled throughout this pandemic to maintain supplies. We plan to validate this model using a multicenter dataset and develop an even more comprehensive model which includes other pneumonias.