Literature DB >> 33411162

A Radiomics Signature to Quantitatively Analyze COVID-19-Infected Pulmonary Lesions.

Jiajun Qiu1, Shaoliang Peng2, Jin Yin1, Junren Wang1, Jingwen Jiang1, Zhenlin Li3, Huan Song4, Wei Zhang1.   

Abstract

Assessing pulmonary lesions using computed tomography (CT) images is of great significance to the severity diagnosis and treatment of coronavirus disease 2019 (COVID-19)-infected patients. Such assessment mainly depends on radiologists' subjective judgment, which is inefficient and presents difficulty for those with low levels of experience, especially in rural areas. This work develops a radiomics signature to quantitatively analyze whether COVID-19-infected pulmonary lesions are mild (Grade I) or moderate/severe (Grade II). We retrospectively analyzed 1160 COVID-19-infected pulmonary lesions from 16 hospitals. First, texture features were extracted from the pulmonary lesion regions of CT images. Then, feature preselection was performed and a radiomics signature was built using stepwise logistic regression, which also quantified the correlation between the radiomics signature and the grade of a pulmonary lesion. Finally, a logistic regression model was trained to classify the grades of pulmonary lesions. Given a significance level of α = 0.001, the stepwise logistic regression achieved an R (multiple correlation coefficient) of 0.70, which is much larger than Rα = 0.18 (the critical value of R). In the classification, the logistic regression model achieved an AUC of 0.87 on an independent test set. Overall, the radiomics signature is significantly correlated with the grade of a pulmonary lesion in COVID-19 infection. The model calculates a radiomics score for each lesion, is interpretable, and can assist radiologists in quickly and efficiently diagnosing the grades of pulmonary lesions.

Keywords:  COVID-19; Pulmonary lesion; Quantitative assessment; Radiomics signature

Year:  2021        PMID: 33411162      PMCID: PMC7788548          DOI: 10.1007/s12539-020-00410-7

Source DB:  PubMed          Journal:  Interdiscip Sci        ISSN: 1867-1462            Impact factor:   2.233


Introduction

Coronavirus disease 2019 (COVID-19) has spread rapidly in most countries. As of March 29, 2020, there were 722,088 confirmed cases worldwide. Frontier technologies such as the IoMT (internet of medical things) and AI (artificial intelligence) are widely used in the diagnosis, treatment, and prevention of COVID-19 [1, 2]. COVID-19 is commonly diagnosed using RT-PCR (real-time reverse-transcriptase polymerase chain reaction). In addition, CT (computed tomography) plays an important role in diagnosing COVID-19 cases. To speed up examination, techniques for automated diagnosis, such as AI methods based on deep learning, have been developed [3-7]. Moreover, assessing whether pulmonary lesions are mild or severe using CT images is of great significance to the severity diagnosis and treatment of COVID-19-infected patients. Although radiologists write a diagnostic report based on a patient's whole CT series, assessing whether individual lesion regions are mild or severe remains an important part of their work, as it provides more detailed diagnostic information. Currently, this assessment mainly relies on the subjective judgment of radiologists, which is inefficient and presents difficulty for radiologists with low levels of experience, especially in rural areas. Few studies have focused on quantitatively analyzing the grades (mild or moderate/severe) of pulmonary lesions in COVID-19 infection. In this work, mild lesions were labeled Grade I, moderate or severe lesions were labeled Grade II, and radiomics-based AI technologies were used to perform the binary classification task. In radiomics, texture is a quantitative feature that provides interpretability [8, 9]. CT texture features have been widely used to assist physicians in making decisions on lung diseases. In 2015, Coroller et al. [10] extracted CT-based texture features to predict lung adenocarcinoma metastasis. In 2016, Liu et al. [11] extracted CT-based texture features to analyze mutation status in lung adenocarcinoma. In 2017, Yip et al. [12] investigated associations between semantic and CT-based texture features of non-small cell lung adenocarcinomas. Given that radiomics-based quantitative assessment is objective and has assisted radiologists in the rapid and accurate diagnosis of lung diseases, radiomics-based AI techniques may also be applicable to the assessment of pulmonary lesions in COVID-19-infected patients. Therefore, this work aims to build a radiomics signature (composed of CT-based texture features) and apply it to quantitatively analyze the grade of pulmonary lesions in COVID-19 infection, including (1) assessing the correlation between the radiomics signature and the grade of a pulmonary lesion and (2) classifying the grades of pulmonary lesions.

Materials and Methods

This work is a retrospective study based on the CT images of COVID-19-infected patients and was approved by the Ethics Committee of West China Hospital of Sichuan University (number 2020190). Figure 1 shows the framework of this work.
Fig. 1

Framework of this work: steps A–E will be described in detail in subsections


Patients and Acquisition of ROIs

Chest CT images of eighty-four COVID-19-infected patients were collected. In total, 1160 pulmonary lesions were retrospectively analyzed. The patients were selected from 16 hospitals in Sichuan Province, China, from January 1, 2020 to February 29, 2020, including 49 males and 35 females. In the female patients, the minimum age, the maximum age, and the median age were 28, 74, and 45.2 years old, respectively. In the male patients, the minimum age, the maximum age and the median age were 20, 76, and 42.4 years old, respectively. All patients were confirmed by RT-PCR examinations and received nonenhanced CT scans. Figure 2 illustrates the inclusion and exclusion of patients and the acquisition of ROIs (regions of interest).
Fig. 2

Inclusion and exclusion of patients and acquisition of ROIs. Multiple bounding boxes with overlapping areas were defined as a lesion region. For a lesion region, the bounding box with the largest area was selected and regarded as the ROI

Nine senior radiologists, each with more than 6 years of experience in chest CT diagnosis at West China Hospital, filtered the CT images and delineated the bounding boxes. Given the complexity of the prevalent grading system, two of the radiologists assessed the bounding boxes independently. Discrepancies were resolved by discussion or by consulting a third radiologist. Briefly, a bounding box with scattered GGOs (ground-glass opacities) was regarded as a mild bounding box (Grade I), and a high-density bounding box with continuous GGOs or even large areas of GGOs was regarded as a moderate or severe bounding box (Grade II) [13, 14]. Figure 3 illustrates delineating bounding boxes from DICOM images. Multiple bounding boxes with overlapping areas were defined as a lesion region. For each lesion region, the bounding box with the largest area was selected and regarded as the ROI, and the grade of the selected bounding box was taken as the grade of this ROI. An ROI thus represents a pulmonary lesion (lesion region). In total, 1160 ROIs were acquired, of which 910 were Grade I and 250 were Grade II.
Fig. 3

Examples of delineating bounding boxes. a An example with a mild bounding box (Grade I); the patient’s age was 43 years, female. b An example with a moderate bounding box (Grade II); the patient’s age was 36 years, male. c An example with a severe bounding box (Grade II); the patient’s age was 57 years, male


Texture Feature Extraction

The unit of a pixel value in CT images is the HU (Hounsfield unit); the unit is omitted below for conciseness. We scaled the pixel values of ROIs from [−1000, 200] to [1, 128], as shown in Eq. (1), where c is a pixel value, s is the scaled value of c, [l, h] is [−1000, 200], and [1, n] is [1, 128]. In this work, 936 texture features were extracted from each ROI and used as candidate features, including coefficient statistics features, histogram features, gray-level co-occurrence matrix (GLCM) features, gray-level run-length matrix (GLRLM) features, Laplacian of Gaussian (LoG) features, wavelet features, contourlet features, angle co-occurrence matrix (ACM) features, absolute gradient features, autoregression features, and gray-level differential matrix (GLDM) features [15-25]. The value of each feature extracted from an ROI was normalized by the number of pixels in that ROI. Section A of the supplemental material describes these texture analysis methods in detail.
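The scaling step and one representative texture feature can be sketched as follows. This is a minimal illustration, not the authors' implementation: the clipping/rounding convention of the rescaling and the horizontal-offset, symmetric GLCM are assumptions, and only the homogeneity feature (one of the 936 candidates) is shown.

```python
import numpy as np

def scale_hu(roi, l=-1000.0, h=200.0, n=128):
    """Rescale HU pixel values from [l, h] to integer gray levels in [1, n].

    The paper states only the source range [-1000, 200] and target range
    [1, 128]; the clipping and rounding conventions here are assumptions.
    """
    roi = np.clip(np.asarray(roi, dtype=float), l, h)
    return (np.rint((roi - l) / (h - l) * (n - 1)) + 1).astype(int)

def glcm(img, levels, d=1):
    """Normalized, symmetric co-occurrence matrix for a horizontal offset d."""
    img = np.asarray(img)
    m = np.zeros((levels, levels))
    # Count horizontally adjacent gray-level pairs (levels are 1-based).
    np.add.at(m, (img[:, :-d].ravel() - 1, img[:, d:].ravel() - 1), 1)
    m += m.T  # count each pair in both directions
    return m / m.sum()

def homogeneity(p):
    """Homogeneity of a co-occurrence matrix: sum of P(i, j) / (1 + |i - j|)."""
    i, j = np.indices(p.shape)
    return float(np.sum(p / (1.0 + np.abs(i - j))))
```

A uniform-texture patch yields the maximum homogeneity of 1.0, while a checkerboard-like patch yields a lower value, matching the intuition that homogeneity measures local gray-level smoothness.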

Feature Preselection

We performed feature preselection on the candidate features using the least absolute shrinkage and selection operator (LASSO) algorithm [26-28]. The dataset was randomly partitioned into a training set and a test set at a ratio of 7:3; the classification model was built on the training set and evaluated on the independent test set. The training set contained 637 Grade I ROIs and 175 Grade II ROIs; the independent test set contained 273 Grade I ROIs and 75 Grade II ROIs. Feature preselection and the subsequent building of the radiomics signature were both performed on the training set. In the implementation of the LASSO algorithm, tenfold cross-validation was used, and the features corresponding to the smallest MSE (mean squared error) were selected as the preselected features.
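The preselection step can be sketched with scikit-learn's LassoCV, which picks the penalty weight minimizing the tenfold cross-validated MSE, as described above. The synthetic data are a stand-in for the 936 candidate texture features, and treating the outcome as a regression target for LASSO is an assumption of this sketch.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.randn(400, 30)                     # stand-in for the candidate texture features
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.randn(400)  # only features 0 and 1 are informative

# 7:3 split; preselection uses only the training portion, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Tenfold CV over a path of penalties; the alpha with the smallest mean
# CV error is kept, and features with nonzero coefficients are preselected.
lasso = LassoCV(cv=10, random_state=0).fit(X_tr, y_tr)
preselected = np.flatnonzero(lasso.coef_)
```

The nonzero-coefficient indices in `preselected` play the role of the 40 preselected features fed into the stepwise regression.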

Building a Radiomics Signature

Next, we performed stepwise logistic regression on the preselected features to build the radiomics signature. Logistic regression and stepwise logistic regression are both generalized linear regression approaches; their results can be statistically tested and are readily interpretable. Stepwise logistic regression, as the name suggests, produces a series of models step by step; the features in the final model constitute the radiomics signature. Equation (2) gives the regression equation, where y = 1 indicates that the corresponding ROI is Grade II and y = 0 indicates that it is Grade I. The radiomics signature can be expressed as (x1, x2, …, xm). To test the linear correlation between y and (x1, x2, …, xm), the null hypothesis H0 of no linear correlation is tested with an F test. Equation (4) defines the statistic F, where U denotes the regression sum of squares, Q denotes the residual sum of squares, n denotes the number of samples, and R denotes the multiple correlation coefficient used to assess the regression effect, i.e., the quantitative correlation between (x1, x2, …, xm) and y. The closer R is to 1, the stronger the correlation; Eq. (5) defines R. Given a significance level α, Fα (the critical value of F) can be found from the F distribution table, and Rα (the critical value of R) can then be calculated using Eq. (6). If R > Rα, the regression effect (the correlation) is statistically significant at level α.
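The significance test can be reproduced numerically. With F = (U/m) / (Q/(n − m − 1)) and R² = U/(U + Q), it follows that R² = mF/(n − m − 1 + mF), so the critical value Rα can be computed from the F distribution. The sketch below assumes the training-set size n = 812 and m = 9 signature features (both taken from elsewhere in the paper); with α = 0.001 it reproduces a critical value close to the Rα = 0.18 reported in the abstract.

```python
import math
from scipy.stats import f as f_dist

def critical_R(alpha, n, m):
    """Critical multiple correlation coefficient R_alpha.

    Derived from F = (R^2 / m) / ((1 - R^2) / (n - m - 1)), which gives
    R_alpha^2 = m * F_alpha / (n - m - 1 + m * F_alpha).
    """
    f_alpha = f_dist.ppf(1.0 - alpha, m, n - m - 1)
    r2 = m * f_alpha / (n - m - 1 + m * f_alpha)
    return math.sqrt(r2)

r_alpha = critical_R(alpha=0.001, n=812, m=9)  # n and m assumed from the paper
```

A smaller α demands a larger Fα and hence a larger Rα, so the observed R = 0.70 comfortably exceeds the threshold even at α = 0.001.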

Classification

Section 2.3 describes how the dataset was partitioned. We used this partitioning to train a logistic regression model on the training set and classify the lesions in the independent test set. We also conducted a preliminary classification experiment and trained other machine learning models. We calculated the average AUC of the tenfold cross-validation for each model. Considering the interpretability of the models and their AUCs, we chose the logistic regression model to perform a further classification task and constructed a nomogram. Section C of the supplementary material describes the preliminary experiment.
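The classification step above can be sketched as follows; the synthetic features are a stand-in for the nine-feature radiomics signature, and the 7:3 split mirrors Sect. 2.3. This is an illustrative pipeline, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
X = rng.randn(1160, 9)                            # stand-in for the 9 signature features
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1]
y = (logits + rng.randn(1160) > 0).astype(int)    # 1 = Grade II, 0 = Grade I

# 7:3 split: train on the training set, evaluate on the independent test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

The held-out AUC here plays the role of the test-set AUC reported in Fig. 4.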

Results

The LASSO algorithm preselected 40 features from 936 candidate features. The stepwise logistic regression ultimately selected nine features from these 40 features to constitute the radiomics signature. Table 1 shows the features of the radiomics signature. For more details of the texture analysis methods, please refer to section A of the supplemental material.
Table 1

Features of the radiomics signature: COM (co-occurrence matrix); RLM (run-length matrix); CS (coefficient statistics)

No. | Method | Component | Feature name
x1 | Wavelet | The approximate component in the 1st-level decomposition | Homogeneity in the COM at d = 1
x2 | Wavelet | The horizontal component in the 1st-level decomposition | Correlation in the COM at d = 1
x3 | Wavelet | The horizontal component in the 2nd-level decomposition | Run percentage in the RLM
x4 | Contourlet | The approximate component | Mean in the CS
x5 | Contourlet | The approximate component | Contrast in the COM at d = 1
x6 | Contourlet | The 2nd component in the 2nd-level decomposition | Percentage of 0.01 in the histogram
x7 | Contourlet | The 1st component in the 1st-level decomposition | Percentage of 0.01 in the histogram
x8 | Contourlet | The 2nd component in the 1st-level decomposition | Kurtosis in the histogram
x9 | Contourlet | The 4th component in the 1st-level decomposition | Percentage of 0.01 in the histogram
Using the nine features, the stepwise logistic regression achieved an R of 0.6996. Given α = 0.001, Fα and Rα were calculated based on Eqs. (4) and (6), respectively. For the regression equation shown in Eq. (2), the estimated coefficients (b, β1, β2, …, β9) obtained through the stepwise logistic regression are shown in Table 2, and the t test was performed on these estimated regression coefficients. According to Eq. (2) and Table 2, the grade of a pulmonary lesion (i.e., the radiomics score of a lesion) can be expressed by substituting the estimated coefficients into Eq. (2), where y is the grade and (x1, x2, …, x9) is the radiomics signature. We trained a logistic regression model based on the radiomics signature to classify the lesions in the independent test set. Figure 4 depicts the ROC (receiver operating characteristic) curves and AUCs (areas under the ROC curves) of the classification results.
Table 2

Results of coefficient estimation: SE (standard error)

Coefficient | Estimate | Confidence interval (α = 0.05) | SE | t stat | p value
b | −30.40 | [−38.84, −21.95] | 4.30 | −7.07 | 1.58 × 10−12
β1 | 1.96 | [1.13, 2.80] | 0.43 | 4.60 | 4.19 × 10−6
β2 | −3.79 | [−6.65, −0.92] | 1.46 | −2.59 | 9.52 × 10−3
β3 | 20.519 | [7.34, 33.67] | 6.70 | 3.06 | 2.21 × 10−3
β4 | −1.21 × 10−3 | [−1.70 × 10−3, −7.00 × 10−4] | 2.33 × 10−4 | −5.18 | 2.24 × 10−7
β5 | 1.16 × 10−3 | [8.00 × 10−4, 1.50 × 10−3] | 1.82 × 10−4 | 6.34 | 2.29 × 10−10
β6 | 3.159 | [−0.66, 6.98] | 1.94 | 1.62 | 0.10
β7 | 1.71 | [0.22, 3.20] | 0.76 | 2.25 | 0.02
β8 | 0.02 | [3.70 × 10−3, 0.04] | 9.64 × 10−3 | 2.35 | 0.02
β9 | 2.54 | [0.92, 4.16] | 0.83 | 3.07 | 2.00 × 10−3
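Using the estimates in Table 2, the radiomics score and the corresponding Grade II probability can be sketched directly. The linear-predictor-plus-logistic form is the standard logistic regression construction; any feature values plugged into it below are purely illustrative, not real measurements.

```python
import math

# Estimated coefficients from Table 2: intercept b, then beta_1 .. beta_9.
B = -30.40
BETAS = [1.96, -3.79, 20.519, -1.21e-3, 1.16e-3, 3.159, 1.71, 0.02, 2.54]

def radiomics_score(x):
    """Linear predictor of Eq. (2) for a nine-feature signature vector x."""
    return B + sum(b * xi for b, xi in zip(BETAS, x))

def grade2_probability(x):
    """Logistic transform of the score: estimated probability of Grade II."""
    return 1.0 / (1.0 + math.exp(-radiomics_score(x)))
```

Because β1 is positive, increasing x1 while holding the other features fixed raises the estimated probability of Grade II, which is the kind of direct interpretability the discussion emphasizes.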
Fig. 4

Results of ROC curves and AUCs in the classification. The validation ROC curve and its corresponding AUC value shown in the figure refer to the average performance

To simultaneously obtain relatively high sensitivity and relatively high specificity, we varied the threshold of the classification output from 0.1 to 0.9. The results are illustrated in Fig. 5.
Fig. 5

Sensitivity values and specificity values as the threshold varies. High sensitivity values and high specificity values appear simultaneously when the threshold is varied from 0.7 to 0.8

Figure 5 shows that high sensitivity and high specificity appear simultaneously when the threshold is varied from 0.7 to 0.8. Table 3 lists some thresholds and their sensitivity and specificity values.
Table 3

Classification results of the logistic regression model as the threshold varies

Threshold | Tenfold cross-validation (Accuracy / Sensitivity / Specificity) | Test (Accuracy / Sensitivity / Specificity)
0.50 | 0.879 / 0.953 / 0.611 | 0.839 / 0.916 / 0.560
0.55 | 0.877 / 0.940 / 0.646 | 0.842 / 0.912 / 0.587
0.60 | 0.872 / 0.926 / 0.674 | 0.845 / 0.897 / 0.653
0.65 | 0.877 / 0.923 / 0.709 | 0.836 / 0.879 / 0.680
0.70 | 0.862 / 0.900 / 0.726 | 0.839 / 0.868 / 0.733
0.75 | 0.853 / 0.874 / 0.777 | 0.813 / 0.832 / 0.747
0.80 | 0.814 / 0.816 / 0.806 | 0.799 / 0.799 / 0.800

The row with high values of accuracy, sensitivity, and specificity is shown in bold
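The sensitivity/specificity sweep behind Table 3 can be sketched as a simple threshold scan over predicted probabilities (Grade II coded as 1). The toy labels and probabilities below are illustrative only.

```python
import numpy as np

def sens_spec(y_true, prob, threshold):
    """Sensitivity and specificity after thresholding predicted probabilities."""
    y_true = np.asarray(y_true)
    pred = (np.asarray(prob) >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Scan thresholds from 0.1 to 0.9, as in Fig. 5 (toy data).
y = [1, 1, 1, 0, 0, 0]
p = [0.95, 0.80, 0.40, 0.60, 0.30, 0.05]
table = {round(t, 1): sens_spec(y, p, t) for t in np.arange(0.1, 1.0, 0.1)}
```

Raising the threshold trades sensitivity for specificity, which is why an intermediate band (0.7 to 0.8 in the paper) balances the two.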

To assist radiologists in quickly diagnosing patients infected with COVID-19, object detection is also very important. Object detection locates objects in images and classifies them. A YOLO model can perform object detection and classify the detected objects [29]. If a YOLO model achieved a promising classification performance, it would be appropriate to use the objects detected by the YOLO model as ROIs (lesion regions) and to use its classification results to grade the ROIs. For comparison with the logistic regression model, we trained a YOLO v3 model to perform one-class object detection (detecting lesion regions without classifying their grades) and grade classification (detecting lesion regions and classifying their grades). The mAP (mean average precision) indicator is frequently used to assess an object detection model during training. In the one-class object detection experiment, a mAP of 0.81 was achieved in training, and an accuracy of 0.948 was achieved in testing. In the grade classification experiment, training achieved a mAP of 0.050 for Grade I and 0.59 for Grade II, and testing achieved an accuracy of 0.264 for Grade I and 0.836 for Grade II. Table 4 shows the testing results as the threshold varies.
Table 4

Test results of the object detection as the threshold varies

Threshold | One-class classification: Accuracy | Grade classification: Accuracy for Grade I | Grade classification: Accuracy for Grade II
0.50 | 0.948 | 0.264 | 0.836
0.55 | 0.946 | 0.222 | 0.789
0.60 | 0.943 | 0.181 | 0.570
0.65 | 0.912 | 0.097 | 0.484
0.70 | 0.852 | 0.056 | 0.359
0.75 | 0.789 | 0.042 | 0.258
0.80 | 0.684 | 0 | 0.172
0.85 | 0.596 | 0 | 0.117
0.90 | 0.450 | 0 | 0.055

The bold row shows the best result


Discussion

Assessing whether pulmonary lesions are mild or severe using CT images is of great significance to the severity diagnosis and treatment of COVID-19-infected patients. Currently, this assessment is subjective, which is inefficient and presents difficulty for radiologists with low levels of experience, especially in rural areas. In contrast, AI models can analyze images quantitatively and objectively. Recently, some radiomics-based AI models have been developed for aided diagnosis, efficacy evaluation, or prognosis analysis of COVID-19. Wu et al. [30] developed a CT-based signature to perform prognostic analysis in patients with COVID-19. Fang et al. [31] developed a radiomics model to predict COVID-19 pneumonia. Fu et al. [32] used a machine learning-based tool to develop radiomics signatures and perform prognosis analysis of COVID-19 patients. Ozturk et al. [33] developed an X-ray-based model to detect COVID-19. However, few studies have focused on quantitatively analyzing the grades (mild or moderate/severe) of pulmonary lesions in COVID-19 infection. Applications of radiomics-based AI models can greatly save radiologists' time in producing image reports and reduce their workload. Radiologists working with AI models can reduce the possibility of misdiagnosis and missed diagnosis, and AI applications can extend the knowledge and experience of senior experts to medical institutions in less developed regions. Unfortunately, many machine learning-based models, including deep learning models, mainly focus on accuracy, AUC, etc., and rarely pay attention to interpretability. There are two main schemes for radiomics: deep learning, and feature engineering combined with classic machine learning methods [8, 9]. Deep learning has achieved good results in some image recognition and image segmentation problems [34-37]. However, deep learning faces substantial difficulties and challenges in applications involving small sample sizes, small regions, or a need for interpretability [34, 35]. A deep learning model generally has an N-layer structure; it is difficult to determine which layer is most appropriate for extracting features, and the extracted features are abstract. Although deep learning features are highly versatile, their ability to solve specific problems is relatively weak [38]. In contrast, building an interpretable AI model based on feature engineering is relatively easy, and the output of such a model can be understood by physicians in clinical applications [39]. We built a nomogram based on the logistic regression and used it to classify ROIs, as illustrated in Fig. 6.
Fig. 6

Nomogram for classifying ROIs and its calibration curve. a Nomogram: for an unknown lesion, draw a vertical line from the value of each variable (x1 to x9) upward to the "Points" axis to assign a score; sum the assigned scores, locate the sum on the "Total Points" axis, and draw a vertical line downward to the "Risk" axis to find the lesion's probability of Grade II. b Calibration curve of a: the x-axis represents the nomogram-estimated probabilities and the y-axis represents the observed probabilities. The diagonal dotted line represents the perfect estimation of an ideal model, in which the estimated outcome exactly corresponds to the actual outcome. The solid line represents the performance of a; the closer it is to the diagonal dotted line, the better the estimation

Figure 6a shows an interpretable classification process, and Fig. 6b, the calibration curve of Fig. 6a, visually evaluates its classification performance. We also calculated the index of concordance (C-index) to evaluate the classification performance of the nomogram (Fig. 6a), which achieved a C-index of 0.875. Both the calibration curve and the C-index were calculated on the independent test set. Figure 6 indicates that the logistic regression model achieves a promising classification result through an interpretable process: it shows what drives the identification, and the identification can be quantitatively assessed by a multiple correlation coefficient. As can be seen from Eq. (2), a logistic regression model clearly expresses a value called the radiomics score. In future studies, we can use the radiomics score as a factor and combine it with other clinical or demographic factors to perform aided diagnoses.
In this work, senior experts in the radiology department of West China Hospital, whose diagnostic ability ranks among the best in China's medical institutions, labeled the grades of pulmonary lesions in COVID-19-infected patients. We quantitatively analyzed the grades of pulmonary lesions in COVID-19 infection using a CT-based radiomics signature. The regression analysis showed that the radiomics signature is significantly correlated with the grade of a pulmonary lesion (the value of R was much larger than Rα at α = 0.001), and the trained model achieved a promising AUC on the independent test set. This indicates that the model can assist radiologists, especially those with low levels of experience, in diagnosing the grades of pulmonary lesions in COVID-19 infection. The grades of pulmonary lesions are critical indicators for assessing a patient's condition in COVID-19 infection or progression, as well as determinants of subsequent treatment strategies. Empirically, patients with mild pulmonary lesions (Grade I) merely need supportive treatment with close surveillance, while patients with moderate or severe pulmonary lesions (Grade II) usually require symptomatic treatment or even ventilator support. Nevertheless, accurate assessment of lesion grades relies heavily on the profound knowledge of a radiologist. Although radiologists can visually confirm ROIs one by one as Grade I or Grade II, this is subjective and is difficult for hospitals in rural areas or radiologists with high workloads. Our work can greatly improve diagnostic efficiency, and the calculated aided-diagnosis information is objective. Texture-based radiomics signatures can deeply mine the heterogeneous information contained in CT and other medical images at the tissue level and even the molecular level [40].
We also performed one-class object detection and grade classification using a YOLO v3 deep learning model. Regarding grade classification, however, the YOLO model yielded poor classification of the detected objects, and its training results were better than its testing results, suggesting that the trained YOLO v3 model, which lacks interpretability, may generalize poorly. Section B of the supplemental material describes more details. Nevertheless, the one-class object detection experiment achieved an accuracy of 0.948: although the YOLO model can detect the ROIs accurately, it failed to classify them into Grade I and Grade II. By contrast, the developed logistic regression model yielded a promising result for classifying the ROIs into Grade I and Grade II. This work aimed to develop a radiomics signature, explore an interpretable model, and use this model to calculate diagnostic information on lesion grades. It can be inferred, however, that combining the one-class object detection of the YOLO v3 model with the logistic regression model developed in this paper may greatly assist radiologists in quickly and efficiently diagnosing COVID-19 pulmonary infections. This work is a retrospective, multicenter study.
There are also some limitations: (1) more samples need to be collected; (2) a bounding box was used to mark each lesion region, so the ROI includes non-lesion areas, which may affect the results of the quantitative analysis; (3) this work aimed to provide diagnostic information on lesion grades, and we will collect more information to conduct patient-level classifications for more comprehensive diagnosis in future studies; and (4) although the trained YOLO model yielded poor classification of the detected objects, it achieved high accuracy in object detection; thus, we may combine the object detection of the YOLO model with the logistic regression model developed in this paper in future work.

Conclusion

This work built a CT-based radiomics signature to quantitatively analyze the grades of pulmonary lesions in COVID-19 infection. The experimental results indicated that the developed radiomics signature is significantly correlated with the grade of a pulmonary lesion. The logistic regression model established based on this radiomics signature achieved a promising classification performance for Grade I and Grade II, indicating that it can assist radiologists in quickly and efficiently diagnosing the grades of pulmonary lesions in COVID-19 infection. Furthermore, the nomogram based on the logistic regression model provides an interpretable classification process, which is valuable for clinical use. Below are the links to the electronic supplementary material: Supplementary material 1 (DOCX 852 kb); Supplementary material 2 (TIFF 3044 kb); Supplementary material 3 (TIFF 4569 kb).