Literature DB >> 35626438

COVLIAS 1.0Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans.

Jasjit S Suri1,2, Sushant Agarwal2,3, Gian Luca Chabert4, Alessandro Carriero5, Alessio Paschè4, Pietro S C Danna4, Luca Saba4, Armin Mehmedović6, Gavino Faa7, Inder M Singh1, Monika Turk8, Paramjit S Chadha1, Amer M Johri9, Narendra N Khanna10, Sophie Mavrogeni11, John R Laird12, Gyan Pareek13, Martin Miner14, David W Sobel13, Antonella Balestrieri4, Petros P Sfikakis15, George Tsoulfas16, Athanasios D Protogerou17, Durga Prasanna Misra18, Vikas Agarwal18, George D Kitas19,20, Jagjit S Teji21, Mustafa Al-Maini22, Surinder K Dhanjil23, Andrew Nicolaides24, Aditya Sharma25, Vijay Rathore23, Mostafa Fatemi26, Azra Alizad27, Pudukode R Krishnan28, Ferenc Nagy29, Zoltan Ruzsa30, Mostafa M Fouda31, Subbaram Naidu32, Klaudija Viskovic6, Manudeep K Kalra33.   

Abstract

BACKGROUND: COVID-19 is a disease with multiple variants that is spreading quickly throughout the world. It is crucial to identify suspected COVID-19 patients early, because vaccines are not readily available in certain parts of the world.
METHODOLOGY: Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The presence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are daunting to locate and segment manually. The proposed study combines solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion location and segmentation more quickly. One DL and four HDL models (PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet) were trained on annotations from an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals.
RESULTS: The proposed variability study uses tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for the Dice and Jaccard metrics, respectively, when compared against manual delineation 1 (MD 1), and by 4% and 8%, respectively, when compared against manual delineation 2 (MD 2). Statistical tests (the Mann-Whitney test, paired t-test, and Wilcoxon test) demonstrated its stability and reliability, with p < 0.0001. The online system processed each slice in <1 s.
CONCLUSIONS: The AI models reliably located and segmented COVID-19 lesions in CT scans. The COVLIAS 1.0Lesion lesion locator passed the intervariability test.

Keywords:  COVID lesions; COVID-19; computed tomography; ground-glass opacities; hybrid deep learning; segmentation

Year:  2022        PMID: 35626438      PMCID: PMC9141749          DOI: 10.3390/diagnostics12051283

Source DB:  PubMed          Journal:  Diagnostics (Basel)        ISSN: 2075-4418


1. Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes an infectious disease that poses a concern to humans worldwide. The World Health Organization (WHO) declared COVID-19 (the novel coronavirus disease) a global pandemic on 11 March 2020. COVID-19 is spreading rapidly worldwide, yet hospital resources are limited. As of 1 December 2021, COVID-19 had infected 260 million people and caused 5.2 million deaths worldwide [1]. COVID-19 has clearly been shown to act through several molecular pathways [2], leading to myocardial injury [3], diabetes [4], pulmonary embolism [5], and thrombosis [6]. Most medical staff have become infected due to their frequent contact with patients, significantly aggravating the already dire healthcare situation. Due to the lack of an effective vaccine or medication, early detection of COVID-19 is therefore critical to saving many lives and safeguarding frontline workers. RT-PCR, or “reverse transcription-polymerase chain reaction”, is one of the gold standards for the detection of COVID-19 [7,8]. However, the RT-PCR test is slow—causing delays in report generation—and has low sensitivity [9], so there is a need for better detection methods. Consequently, imaging-based diagnosis, including ultrasound [10], chest X-ray [11], and chest computed tomography (CT) [12], is becoming more popular for detecting and managing COVID-19 infection [13,14]. CT has demonstrated high sensitivity and repeatability in the diagnosis of COVID-19, and for body imaging in general [15]. It is a significant and trustworthy complement to RT-PCR testing in identifying the disease [16,17,18]. The main advantage of CT imaging [15,19,20] is its ability to capture anomalies such as ground-glass opacities (GGOs) [21,22], consolidation, and other opacities seen in the CT scans of COVID-19 patients [23]. 
The anomaly of GGO is a prevalent feature in most chest CT lung images [14,24,25,26]. Due to time constraints and the sheer volume of studies, most radiologists use a judgmental and semantic approach to evaluate the COVID-19 lesions with different opacities. Furthermore, manual and semi-automated assessment is subjective, slow, and time-consuming [27,28,29,30]. As a result, rapid and error-free detection and real-time prognostic solutions are required in early COVID-19 illness to improve the speed of diagnosis. Artificial intelligence (AI) has accelerated research and development in almost every field, including healthcare imaging [31,32,33]. The ability of AI techniques to replicate what is done manually has made the detection and diagnosis of this disease faster [34,35,36,37,38,39,40,41,42,43,44,45,46]. AI techniques try to accurately mimic the human brain using deep neural networks, which makes them suitable for solving medical imaging problems. Deep learning (DL) is an extension of AI that uses dense layers to deliver fully automatic feature extraction, classification, and segmentation [47,48,49,50,51,52,53]. DL has advantages, but it also has drawbacks and unknowns, such as optimizing the learning rate, determining the number of epochs, preventing overfitting, handling large datasets, and functioning in a multiresolution framework [54]. This is also known as hyperparameter tuning, the most crucial task in accurately training a DL model. Recently published studies by Suri et al. show that using hybrid DL (HDL) models rather than solo DL (SDL) [55,56] models in the medical domain can help to learn complex imaging features quickly and accurately [57,58,59]. Transfer learning can also be adopted for knowledge transfer from one model to another; this helps train DL models faster, and with fewer images [60,61]. The proposed study utilizes SDL and HDL models to segment COVID-19-based lesions in CT lung images. 
To prove the robustness of the AI systems, we postulate two hypotheses: (a) the performance of each AI model benchmarked against the two manual delineations must be within 10% of one another, and (b) the HDL models outperform the SDL model. Figure 1 depicts the global COVLIAS 1.0Lesion system for COVID-19-based lesion segmentation using AI models, consisting of volume acquisition, online segmentation, and benchmarking against MedSeg, along with performance evaluation.
Figure 1

AI system workflow for comparing COVLIAS 1.0Lesion against MedSeg.

The main contributions of this study are as follows: (1) The proposed study combines solo DL and HDL models to tackle the lesion location for faster segmentation. One DL and four HDL models (PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet) were trained on annotations from an expert radiologist. (2) The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. Performance evaluation was carried out using metrics such as (a) Dice similarity, (b) the Jaccard index, (c) Bland–Altman plots, and (d) regression plots. (3) COVLIAS 1.0Lesion was benchmarked against the online MedSeg system, demonstrating COVLIAS 1.0Lesion to be superior to MedSeg when compared against Manual Delineation 1 and Manual Delineation 2. (4) The proposed interobserver variability study used tracings from two trained radiologists as part of the validation. (5) Statistical tests (the Mann–Whitney test, paired t-test, and Wilcoxon test) demonstrated its stability and reliability, with the p-values reported. (6) The online system processed each slice in <1 s. The layout of this lesion segmentation study is as follows: In Section 2, we present the patient demographics and types of AI architectures. The results of the experimental protocol using the AI architectures, along with the performance evaluation, are shown in Section 3. The in-depth discussion is elaborated in Section 4, where we present our findings, benchmarking tables, strengths, weaknesses, and extensions of our study. The study concludes in Section 5.

2. Methods

2.1. Demographics and Baseline Characteristics

Approximately 3000 CT images (collected from 40 patients in Croatia) were used to create the training cohort (Figure 2). The patients had a mean age of 66 years (SD 7.99), with 35 males (71.4%) and the remainder females. In the cohort, the average GGO and consolidation scores were 2 and 1.2, respectively. Of the 40 patients who participated in this study, all had a cough, 85% had dyspnoea, 28% had hypertension, 14% were smokers, and none had a sore throat, diabetes, COPD, or cancer. None of them were admitted to the intensive care unit (ICU) or died due to COVID-19 infection.
Figure 2

Raw CT images from the Croatia dataset.

2.2. Image Acquisition and Data Preparation

This proposed study used a Croatian cohort of 40 COVID-19-positive patients. The retrospective cohort study was conducted from 1 March to 31 December 2020, at the University Hospital for Infectious Diseases (UHID) in Zagreb, Croatia. All patients over the age of 18 who agreed to participate in the study had a positive RT-PCR test for the SARS-CoV-2 virus, underwent thoracic MDCT during their hospital stay, and met at least one of the following criteria prior to starting the study: hypoxia (oxygen saturation below 92%), tachypnea (respiratory rate above 22 per minute), tachycardia (pulse rate > 100 per minute), or hypotension (systolic blood pressure < 100 mmHg). The UHID Ethics Committee approved the study. The acquisition was carried out using a 64-detector FCT Speedia HD scanner (Fujifilm Corporation, Tokyo, Japan, 2017), and the acquisition protocol consisted of a single full inspiratory breath-hold for collection of CT scans of the thorax in the craniocaudal direction. Researchers used Hitachi Ltd.’s (Tokyo, Japan) Whole-Body X-ray CT System with Supria Software and a typical imaging method to view the images (System Software Version: V2.25, Copyright Hitachi, Ltd., 2017). When scanning, the following values were used: wide focus, 120 kV tube voltage, 350 mA tube current, and 0.75 s rotation speed in the IntelligentEC (automatic tube-current modulation) mode. We followed the standardized reconstruction protocol adopted in our previous studies where, for multi-recon options, the field of view was 350 mm, the slice thickness was 5 mm (0.625 × 64), and the table pitch was 1.3281. We selected a slice thickness of 1.25 mm and a recon index of 1 mm for picture filter 22 (lung standard) with the Intelli IP Lv.2 iterative algorithm (WW1600/WL600). Furthermore, for picture filter 31 (mediastinal), with the Lv.3 Intelli IP iterative algorithm (WW450/WL45), the slice thickness was 1.25 mm and the recon index was 1 mm. 
Scanned areas were chosen based on the absence of metallic objects and on reasonable image quality, without artefacts or blurriness caused by patient movement during the scan. Each patient’s CT volume in this cohort consisted of ~300 slices. The senior radiologist (K.V.) carefully selected ~70 CT slices (512 × 512 pixels) per patient that preserved most of the lung region (accounting for only about 20% of the total CT slices). Figure 3 and Figure 4 show the annotated lesions from tracers 1 and 2, respectively, in red, with the raw CT image as the background.
Figure 3

Manual delineation overlays (red) from tracer 1 on raw CT images.

Figure 4

Manual delineation overlays (red) from tracer 2 on raw CT images.

2.3. The Deep Learning Models

The proposed study consists of a combination of solo deep learning (SDL) and hybrid DL (HDL) models to tackle lesion location and lesion segmentation more quickly. It was recently shown that the combination of two DL models has more feature-extraction power than a solo DL model; this motivated the innovation of combining two solo DL models. This study therefore implemented four HDL models, namely VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet, trained on annotations from an expert radiologist. These were then benchmarked against a solo DL model, namely PSPNet. The VGGNet architecture [62] was designed to reduce training time by replacing the large kernel filters of earlier networks (of sizes 11 and 5) with stacks of small filters. VGGNet was extremely efficient and fast, but it had an optimization problem due to vanishing gradients: during backpropagation, the gradient is multiplied layer by layer at each epoch, so it becomes vanishingly small, the updates to the initial layers are very modest, and those layers train with substantially smaller (or effectively no) weight changes. Residual Network, or ResNet [63], was created to address this issue. A new link called the “skip connection” was introduced in this architecture, allowing gradients to bypass a limited number of layers and thereby resolving the vanishing gradient problem. Furthermore, during the backpropagation step, another modification to the network, namely an identity function, kept the local gradient at a non-zero value. The HDL models were designed by combining one DL model (VGG or ResNet, in our study) with another (UNet or SegNet, in our study), thereby producing a superior network with the advantages of both parent networks. The VGG-SegNet, VGG-UNet, ResNet-SegNet, and ResNet-UNet architectures employed in this research are made up of three parts: an encoder, a decoder, and a pixel-wise softmax classifier. The details of the SDL and HDL models are discussed in the following sections.
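The vanishing-gradient argument above can be illustrated numerically. Below is a toy sketch, not the networks' actual training code; the per-layer local gradient of 0.5 and the depth of 20 are illustrative assumptions:

```python
def plain_chain_gradient(g, n):
    """Gradient magnitude after backpropagating through n plain layers,
    each contributing a local gradient g (chain rule: g multiplied n times)."""
    return g ** n

def residual_chain_gradient(g, n):
    """With a skip connection, each layer's derivative is (1 + g): the
    identity path contributes 1, so the product cannot vanish."""
    return (1.0 + g) ** n

# with 20 layers and a local gradient of 0.5, the plain chain vanishes
assert plain_chain_gradient(0.5, 20) < 1e-5
# the residual (skip-connection) chain keeps a usable gradient
assert residual_chain_gradient(0.5, 20) > 1.0
```

This is why updates to the initial layers of a deep plain network become negligible, while the identity path of a residual block keeps the local gradient non-zero.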

2.3.1. PSPNet—Solo DL Model

The pyramid scene parsing network (PSPNet) [64] is a semantic segmentation network that takes into account the image’s overall context. PSPNet includes four sections in its design (Figure 5): (1) input, (2) feature map, (3) pyramid pooling module, and (4) output [65,66]. The input image is fed into the network, which then uses a set of dilated convolution and pooling blocks to extract the feature map. The network’s heart is the pyramid pooling module, which helps capture the global context of the image/feature map constructed in the previous stage. This module is divided into four parallel branches, each with its own pooling scale. The scaling options of this module are 1, 2, 3, and 6, with the 1 × 1 scaling assisting in the acquisition of global spatial data, while the finer, higher-resolution features are captured by the 6 × 6 scaling. All of the outputs from these four branches are pooled at the end of this module using global average pooling. The global average pooling output is sent to a collection of convolutional layers in the final section. Finally, the output binary mask generates the collection of prediction classes. The main advantage of PSPNet is global feature extraction using the pyramid pooling strategy.
Figure 5

PSPNet’s architecture.
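The pyramid pooling idea can be sketched in numpy for a single-channel feature map. This is a minimal illustration assuming a 12 × 12 map (so the 1, 2, 3, and 6 scales divide evenly) and nearest-neighbour upsampling; the real module uses learned convolutions and interpolation:

```python
import numpy as np

def pyramid_pooling(fmap, scales=(1, 2, 3, 6)):
    """Average-pool a (H x W) feature map into s x s bins for each scale s,
    upsample each pooled map back to H x W by repetition, and stack the
    original map with the four pooled maps."""
    h, w = fmap.shape
    outputs = [fmap]
    for s in scales:
        bh, bw = h // s, w // s                                 # bin size
        pooled = fmap.reshape(s, bh, s, bw).mean(axis=(1, 3))   # s x s bins
        outputs.append(np.repeat(np.repeat(pooled, bh, axis=0), bw, axis=1))
    return np.stack(outputs)  # shape: (1 + len(scales), H, W)

fm = np.arange(144, dtype=float).reshape(12, 12)
out = pyramid_pooling(fm)
assert out.shape == (5, 12, 12)
# the 1 x 1 branch carries the global context: the map-wide average
assert np.allclose(out[1], fm.mean())
```

The coarsest (1 × 1) branch summarizes the whole map, while the 6 × 6 branch preserves finer sub-regional detail, mirroring the module's multi-scale design.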

2.3.2. Two SegNet-Based HDL Model Designs—VGG-SegNet and ResNet-SegNet

The VGG-SegNet architecture used in this study (Figure 6) consists of three components: an encoder, a decoder, and a pixel-wise softmax classifier at the end. It consists of 16 convolution (conv) layers (green in color), compared to the 13 in the SegNet [67] design (VGG backbone). The difference between ResNet-SegNet (Figure 7) and VGG-SegNet (Figure 6) lies in the encoder and decoder parts: VGG is replaced by the ResNet [63] architecture in the encoder part of ResNet-SegNet. Skip connections in ResNet-SegNet are shown by the horizontal lines running from encoder to decoder in Figure 7, which help in retaining the features. To overcome the vanishing gradient problem, a new link known as the “skip connection” (Figure 7) was introduced in this architecture, allowing the gradients to bypass a set number of layers [68,69]. The architecture consists of conv blocks and identity blocks (Figure 7). The conv block consists of three serial 1 × 1, 3 × 3, and 1 × 1 convolution layers in parallel with a 1 × 1 convolution block, whose outputs are added at the end. The identity block is similar to the conv block, except that its parallel path is a pure skip connection with no convolution. Since VGG is faster and SegNet is a basic segmentation network, this segmentation process is relatively fast; thus, VGG-SegNet is more advantageous than SegNet alone. On the other hand, ResNet-SegNet is more accurate, since it has a greater number of layers, and prevents the vanishing gradient problem.
Figure 6

VGG-SegNet HDL model’s architecture.

Figure 7

ResNet-SegNet HDL model’s architecture.
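The two ResNet block types described above can be sketched abstractly. This toy numpy sketch treats each path as a simple function; the real blocks use learned 1 × 1 and 3 × 3 convolutions with batch normalization:

```python
import numpy as np

def identity_block(x, main_path):
    # the skip path passes x through unchanged and is added at the end
    return main_path(x) + x

def conv_block(x, main_path, shortcut):
    # the parallel path is a projection (the 1 x 1 convolution),
    # added to the main path's output at the end
    return main_path(x) + shortcut(x)

x = np.ones(4)
# with a zero main path, the identity block reduces to the identity map
assert np.allclose(identity_block(x, lambda v: 0.0 * v), x)
# the conv block's shortcut can rescale/reshape the input instead
assert np.allclose(conv_block(x, lambda v: 0.0 * v, lambda v: 2.0 * v), 2.0 * x)
```

The addition at the end is what lets gradients flow through the unchanged path during backpropagation.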

2.3.3. Two UNet-Based HDL Model Designs: VGG-UNet and ResNet-UNet

VGG-UNet (Figure 8) and ResNet-UNet (Figure 9) are based on the classic UNet structure, which consists of encoder (downsampling) and decoder (upsampling) components. The VGG-19 [62,70,71,72] and ResNet-50 [58,63,73,74] models replace the downsampling encoder in VGG-UNet and ResNet-UNet, respectively. These architectures improve on the traditional UNet [75], since each level’s traditional convolution blocks are replaced by the VGG and ResNet blocks in VGG-UNet and ResNet-UNet, respectively. Note that the skip connections in VGG-UNet are shown by the horizontal lines running from encoder to decoder in Figure 8, which help in retaining the features, similar to Figure 7 for ResNet-SegNet. To overcome the vanishing gradient problem, the “skip connection” (Figure 9) allows gradients to bypass a set number of layers [68,69]. ResNet-UNet consists of conv blocks and identity blocks (Figure 9), very similar to ResNet-SegNet, as shown in Figure 7. The conv block consists of three serial 1 × 1, 3 × 3, and 1 × 1 convolution layers in parallel with a 1 × 1 convolution block, whose outputs are added at the end. The identity block is similar to the conv block, except that its parallel path is a pure skip connection. The key advantage of VGG-UNet over UNet is its higher speed of operation, while ResNet-UNet offers better accuracy and avoids the vanishing gradient problem thanks to its skip connections.
Figure 8

VGG-UNet’s architecture.

Figure 9

ResNet-UNet’s architecture.

2.4. Loss Function for SDL and HDL Models

The new models adopted the cross-entropy (CE) loss function during model generation [76,77,78]. If ε represents the CE loss function, p̂ represents the classifier’s probability used in the AI model, i represents the input gold-standard label 1, and (1 − i) represents the gold-standard label 0, then the loss function can be expressed mathematically as shown in Equation (1):

ε = −[i × log(p̂) + (1 − i) × log(1 − p̂)]
(1)

where × represents the product of the two terms.
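The binary CE loss of Equation (1) can be sketched directly in numpy (a minimal sketch; the models in this study compute this loss inside their DL framework, and the toy labels/probabilities below are illustrative):

```python
import numpy as np

def cross_entropy(i, p_hat, eps=1e-12):
    """Binary CE of Equation (1): -[i*log(p_hat) + (1 - i)*log(1 - p_hat)],
    averaged over pixels; eps guards against log(0)."""
    p_hat = np.clip(p_hat, eps, 1.0 - eps)
    return float(-(i * np.log(p_hat) + (1 - i) * np.log(1 - p_hat)).mean())

gold = np.array([1.0, 0.0, 1.0, 1.0])   # gold-standard pixel labels
good = np.array([0.9, 0.1, 0.8, 0.7])   # confident, mostly correct predictions
# correct predictions yield a lower loss than their inverted counterparts
assert cross_entropy(gold, good) < cross_entropy(gold, 1 - good)
```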

2.5. Experimental Protocol

The AI models’ accuracy was determined using a standardized cross-validation (CV) technique. Using the AI framework, our group has produced a number of CV-based protocols of various types. We adopted a fivefold cross-validation protocol consisting of 80% training data (2400 scans), while the remaining 20% were testing data (600 CT scans). The choice of the fivefold cross-validation was due to the mild COVID-19 conditions. Five folds were created in such a way that each fold had the opportunity to act as a distinct test set. The K5 protocol included an internal validation mechanism in which 10% of the data were considered for validation. The AI systems’ accuracy was determined by comparing the predicted output to ground-truth pixel values. Because the output lung mask was either black or white, these readings were interpreted as binary (0 or 1) values. Finally, the sum of the correctly predicted pixels was divided by the total number of pixels in the image. Using the standardized symbols of truth tables for the determination of accuracy, we used TP, TN, FN, and FP to denote true positive, true negative, false negative, and false positive, respectively. The AI systems’ accuracy can be mathematically expressed as shown in Equation (2):

Accuracy (%) = (TP + TN) / (TP + TN + FP + FN) × 100
(2)
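The fold construction and the pixel accuracy of Equation (2) can be sketched as follows (a minimal numpy sketch; the seed and toy masks are illustrative, not the study's pipeline):

```python
import numpy as np

def fivefold_indices(n_images, k=5, seed=0):
    """Split n_images indices into k folds; each fold serves once as the
    held-out test set (80% train / 20% test per split for k = 5)."""
    idx = np.random.default_rng(seed).permutation(n_images)
    return np.array_split(idx, k)

def pixel_accuracy(pred, gt):
    """Equation (2): (TP + TN) / (TP + TN + FP + FN) over a binary mask."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    return (tp + tn) / (tp + tn + fp + fn)

folds = fivefold_indices(3000)           # the study's 3000-image cohort
assert [len(f) for f in folds] == [600] * 5
gt = np.array([[1, 0], [1, 1]])
pred = np.array([[1, 0], [0, 1]])
assert pixel_accuracy(pred, gt) == 0.75  # 3 of 4 pixels correct
```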

3. Results and Performance Evaluation

3.1. Results

This proposed study is an improvement on the previously published COVLIAS 1.0Lung system, adding lesion segmentation. This study uses a cohort of 3000 images from a set of 40 COVID-19-positive patients, with five AI models utilizing a fivefold CV technique. The training was carried out on one set of manual delineations from a senior radiologist. Figure 10 shows the accuracy and loss plots for the best AI model (ResNet-UNet) of the five models used in this proposed study. Figure 11 shows the overlay of the AI-predicted lesions (green) in rows 3–7 against the manual delineation (red, row 2), with raw CT images (row 1) as the background. Figure A1, Figure A2, Figure A3, Figure A4 and Figure A5 show the outputs from PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet, respectively. Figure A6 shows the visual lesion overlays of MedSeg (green) vs. MD 1 (red).
Figure 10

Training accuracy and loss plot for the best AI model (ResNet-UNet).

Figure 11

Row 1: raw CT; row 2: MD 1 (gold standard); rows 3–7: overlay images—AI (green) over MD (red). The 5 AI models are PSPNet (row 3), VGG-SegNet (row 4), ResNet-SegNet (row 5), VGG-UNet (row 6), and ResNet-UNet (row 7).

Figure A1

Results of visual lesion overlays showing PSPNet (green) vs. MD 1 (red).

Figure A2

Results of visual lesion overlays showing VGG-SegNet (green) vs. MD 1 (red).

Figure A3

Results of visual lesion overlays showing ResNet-SegNet (green) vs. MD 1 (red).

Figure A4

Results of visual lesion overlays showing VGG-UNet (green) vs. MD 1 (red).

Figure A5

Results of visual lesion overlays showing ResNet-UNet (green) vs. MD 1 (red).

Figure A6

Results of visual lesion overlays showing MedSeg (green) vs. MD 1 (red).

3.2. Performance Evaluation

This proposed study uses (1) the Dice similarity coefficient (DSC) [79,80], (2) the Jaccard index (JI) [81], (3) Bland–Altman (BA) plots [82,83], and (4) receiver operating characteristic (ROC) curves [84,85,86] for the five AI models against MD 1 and MD 2 for performance evaluation. The same four metrics are used for MedSeg to validate the five AI models against it. Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16 show the cumulative frequency distribution (CFD) plots for DSC and JI from PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet, respectively, and depict the score at an 80% threshold. The CFD plots for DSC and JI from the MedSeg model used for validating the COVLIAS 1.0Lesion system are shown in Figure 17. This study also uses manual delineations from two trained radiologists (K.V. and G.L.) to validate the results of the five AI models and MedSeg. Figure 18 shows lesions detected by the best AI model (ResNet-UNet) and MedSeg, along with the MDs by the two trained radiologists (K.V. and G.L.).
Figure 12

Cumulative frequency plots for Dice (left) and Jaccard (right) for PSPNet when computed against MD 1.

Figure 13

Cumulative frequency plot for Dice (left) and Jaccard (right) for VGG-SegNet when computed against MD 1.

Figure 14

Cumulative frequency plot for Dice (left) and Jaccard (right) for ResNet-SegNet when computed against MD 1.

Figure 15

Cumulative frequency plot for Dice (left) and Jaccard (right) for VGG-UNet when computed against MD 1.

Figure 16

Cumulative frequency plot for Dice (left) and Jaccard (right) for ResNet-UNet when computed against MD 1.

Figure 17

Cumulative frequency plot for Dice (left) and Jaccard (right) for MedSeg when computed against MD 1.

Figure 18

Lesions detected by the best AI model (ResNet-UNet) vs. MedSeg vs. MD 1 vs. MD 2.

Table 1 presents the DSC and JI scores for the five AI models using MD 1 and MD 2. The left-hand side of the table shows the statistical computation using MD 1, while the right-hand side shows the statistical computation using MD 2. The first five rows are the five AI models. The percentage difference is the difference between each AI model and the MedSeg model. As can be seen, the five AI models (ResNet-SegNet, PSPNet, VGG-SegNet, VGG-UNet, and ResNet-UNet) are all better than MedSeg, by 1%, 4%, 4%, 5%, and 9%, respectively. The mean Dice similarity for all five models is 0.80, which is better than that of MedSeg by 5%. The same is true for the Jaccard index, where the five AI models (ResNet-SegNet, PSPNet, VGG-SegNet, VGG-UNet, and ResNet-UNet) are all better than MedSeg, by 2%, 5%, 6%, 8%, and 15%, respectively. The mean JI is 0.66, which is better than that of MedSeg by 7%. Thus, in summary, both the Dice similarity and the Jaccard index of all five AI models are better than those of the MedSeg model.
Table 1

Dice similarity coefficient and Jaccard index when computed against MedSeg.

                 MD 1                                   MD 2
                 Dice   % Diff *  Jaccard  % Diff *     Dice   % Diff *  Jaccard  % Diff *
ResNet-SegNet    0.77   1%        0.63     2%           0.74   4%        0.60     5%
PSPNet           0.79   4%        0.65     5%           0.77   0%        0.64     2%
VGG-SegNet       0.79   4%        0.66     6%           0.80   4%        0.68     8%
VGG-UNet         0.80   5%        0.67     8%           0.78   1%        0.65     3%
ResNet-UNet      0.83   9%        0.71     15%          0.80   4%        0.68     8%
Mean of AI       0.80   5%        0.66     7%           0.78   3%        0.65     5%
MedSeg           0.76   -         0.62     -            0.77   -         0.63     -

DSC: Dice similarity coefficient; JI: Jaccard index; * % Diff = absolute (COVLIAS − MedSeg)/MedSeg.
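The footnote formula can be checked against the table entries (a minimal sketch reproducing two Table 1 values for ResNet-UNet under MD 1):

```python
def pct_diff(covlias, medseg):
    """Table footnote: % Diff = absolute(COVLIAS - MedSeg) / MedSeg."""
    return abs(covlias - medseg) / medseg * 100

assert round(pct_diff(0.83, 0.76)) == 9    # ResNet-UNet Dice vs. MedSeg
assert round(pct_diff(0.71, 0.62)) == 15   # ResNet-UNet Jaccard vs. MedSeg
```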

We also used another manual delineation (from G.L.), labelled as MD 2. The behavior was consistent with that of MD 1. The Dice similarity in the five AI models was superior to that of MedSeg by 4%, 0%, 4%, 1%, and 4%, respectively. Similarly, the JI was superior to that of MedSeg by 5%, 2%, 8%, 3%, and 8%, respectively. The mean Dice similarity using MD 2 was superior by 3%, while the mean Jaccard index was superior by 5%, thus supporting our hypothesis. Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 show the correlation coefficient (CC) plots for the five AI models against MD 1 and MD 2. The plots also show the CC values, all with p < 0.0001. Finally, we also present the benchmarking against MedSeg in Figure 24, against MD 1 and MD 2. Table 2 presents the CC scores for the five AI models, along with the means of these AI models and MedSeg against MD 1 and MD 2, and the percentage difference between the results of the AI models and MedSeg.
Figure 19

Correlation coefficient plots for (left) PSPNet vs. MD 1 and (right) PSPNet vs. MD 2.

Figure 20

Correlation coefficient plots for (left) VGG-SegNet vs. MD 1 and (right) VGG-SegNet vs. MD 2.

Figure 21

Correlation coefficient plots for (left) ResNet-SegNet vs. MD 1 and (right) ResNet-SegNet vs. MD 2.

Figure 22

Correlation coefficient plots for (left) VGG-UNet vs. MD 1 and (right) VGG-UNet vs. MD 2.

Figure 23

Correlation coefficient plots for (left) ResNet-UNet vs. MD 1 and (right) ResNet-UNet vs. MD 2.

Figure 24

Correlation coefficient plots for (left) MedSeg vs. MD 1 and (right) MedSeg vs. MD 2.

Table 2

Correlation coefficient plot: 5 AI models vs. MD 1 and 5 AI models vs. MD 2.

                 MD 1                MD 2
                 CC     % Diff *     CC     % Diff *
ResNet-SegNet    0.90   11%          0.80   2%
PSPNet           0.90   11%          0.81   1%
VGG-SegNet       0.79   2%           0.79   4%
VGG-UNet         0.81   0%           0.81   1%
ResNet-UNet      0.92   14%          0.80   2%
Mean of AI       0.86   8%           0.80   2%
MedSeg           0.81   -            0.82   -

CC: Correlation coefficient; * % Diff = absolute (COVLIAS − MedSeg)/MedSeg.

3.3. Statistical Validation

To assess the system’s dependability and stability, standard tests, namely paired t-tests [87,88], Mann–Whitney tests [89,90,91], and Wilcoxon tests [92], were utilized. MedCalc software (Ostend, Belgium) was used for the statistical analysis [93,94]. To validate the system described in the study, we evaluated 13 combinations of the five AI models, MedSeg, MD 1, and MD 2. Table 3 displays the Mann–Whitney test, paired t-test, and Wilcoxon test findings. Using a varying-threshold strategy, one can compute COVLIAS’s diagnostic performance using receiver operating characteristics (ROC). The ROC curves and area under the curve (AUC) values for the five AI models are depicted in Figure 25, with AUC values greater than ~0.85 and ~0.75 for MD 1 and MD 2, respectively. The BA computation strategy [95,96] was used to demonstrate the consistency of the two methods. We show the mean and standard deviation of the lesion area for the AI models (Figure 26, Figure 27, Figure 28, Figure 29 and Figure 30) and MedSeg (Figure 31), plotted against MD 1 and MD 2.
Table 3

Statistical tests for the 5 AI models and MedSeg against MD 1 and MD 2.

                          Paired t-Test   Mann-Whitney   Wilcoxon
PSPNet vs. MD 1           p < 0.0001      p < 0.0001     p < 0.0001
PSPNet vs. MD 2           p < 0.0001      p < 0.0001     p < 0.0001
VGG-SegNet vs. MD 1       p < 0.0001      p < 0.0001     p < 0.0001
VGG-SegNet vs. MD 2       p < 0.0001      p < 0.0001     p < 0.0001
ResNet-SegNet vs. MD 1    p < 0.0001      p < 0.0001     p < 0.0001
ResNet-SegNet vs. MD 2    p < 0.0001      p < 0.0001     p < 0.0001
VGG-UNet vs. MD 1         p < 0.0001      p < 0.0001     p < 0.0001
VGG-UNet vs. MD 2         p < 0.0001      p < 0.0001     p < 0.0001
ResNet-UNet vs. MD 1      p < 0.0001      p < 0.0001     p < 0.0001
ResNet-UNet vs. MD 2      p < 0.0001      p < 0.0001     p < 0.0001
MedSeg vs. MD 1           p < 0.0001      p < 0.0001     p < 0.0001
MedSeg vs. MD 2           p < 0.0001      p < 0.0001     p < 0.0001
MD 1 vs. MD 2             p < 0.0001      p < 0.0001     p < 0.0001
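The three tests can be run with SciPy's standard calls (`stats.ttest_rel`, `stats.mannwhitneyu`, `stats.wilcoxon`). The sketch below uses synthetic paired per-slice lesion areas, not the study's data; the means, spreads, and offset are arbitrary assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical paired per-slice lesion areas (NOT the study's data):
# reader MD 1 vs. an AI model with a small systematic offset
area_md1 = rng.normal(100.0, 15.0, 600)
area_ai = area_md1 + rng.normal(2.0, 5.0, 600)

p_ttest = stats.ttest_rel(area_ai, area_md1).pvalue       # paired t-test
p_mw = stats.mannwhitneyu(area_ai, area_md1).pvalue       # Mann-Whitney
p_wilcoxon = stats.wilcoxon(area_ai, area_md1).pvalue     # Wilcoxon signed-rank
assert all(0.0 <= p <= 1.0 for p in (p_ttest, p_mw, p_wilcoxon))
```

The paired tests are sensitive to the small systematic offset in this toy example, mirroring how the study's tests compare paired per-slice measurements.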
Figure 25

ROC for COVLIAS (5 AI models) vs. MedSeg using MD 1 (left) and MD 2 (right).

Figure 26

BA plot for PSPNet using MD 1 (left) vs. MD 2 (right).

Figure 27

BA plot for VGG-SegNet using MD 1 (left) vs. MD 2 (right).

Figure 28

BA plot for ResNet-SegNet using MD 1 (left) vs. MD 2 (right).

Figure 29

BA plot for VGG-UNet using MD 1 (left) vs. MD 2 (right).

Figure 30

BA plot for ResNet-UNet using MD 1 (left) vs. MD 2 (right).

Figure 31

BA plot for MedSeg using MD 1 (left) vs. MD 2 (right).

4. Discussion

This proposed study presents automated lesion detection in an AI framework using SDL and HDL models, namely (1) PSPNet, (2) VGG-SegNet, (3) ResNet-SegNet, (4) VGG-UNet, and (5) ResNet-UNet, trained using a fivefold cross-validation strategy on a set of 3000 manually delineated images. As part of the benchmarking strategy, we compared the five AI models against MedSeg. As part of the variability study, we utilized the lesion annotations from another tracer to validate the results of the five AI models and MedSeg. We used four kinds of metrics to evaluate the five AI models, namely (1) DSC, (2) JI, (3) BA plots, and (4) ROC curves. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for the Dice similarity and Jaccard index, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests, namely the Mann–Whitney test, paired t-test, and Wilcoxon test, demonstrated its stability and reliability. The training, testing, and evaluation of the AI models were carried out using NVIDIA’s DGX V100. Multi-GPU training was used to speed up the process. The online system processed each slice in <1 s. Table 2 shows the CC values of all of the AI models against MD 1 and MD 2, and also presents a benchmark against MedSeg. The results show consistency, with ResNet-UNet the best model among all of the AI models: it is ~14% and ~2% better than MedSeg for MD 1 and MD 2, respectively. The primary attributes used for comparison of the five models are shown in Table 4, including (1) the backbone of the segmentation model, (2) the total number of parameters in the AI models (in millions), (3) the number of neural network layers, (4) the size of the final saved model used in COVLIAS 1.0, (5) the training time of the models, (6) the batch size used while training the network, and (7) the online prediction time per image for COVLIAS 1.0. 
ResNet-UNet was the AI model with the highest number of NN layers and the largest model size; due to this, it took the maximum amount of time to train the network.
Table 4

Parameters for the five AI models.

SN | Attributes | PSP-Net | VGG-SegNet | VGG-UNet | ResNet-SegNet | ResNet-UNet
1 | Backbone encoder | NA | VGG-16 | VGG-16 | Res-50 | Res-50
2 | # Parameters | ~4.4 M | ~11.6 M | ~12.4 M | ~15 M | ~16.5 M
3 | # NN layers | 54 | 33 | 36 | 160 | 165
4 | Model size (MB) | 50 | 133 | 142 | 171 | 188
5 | Batch size | 8 | 8 | 4 | 4 | 4
6 | Training time * | ~15 | ~50 | ~54 | ~60 | ~63
7 | Prediction time | <1 s | <1 s | <1 s | <1 s | <1 s

* In minutes; MB: megabytes; M: million; NN: neural network; Res: ResNet.
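The statistical tests reported for the model comparison (Mann–Whitney, paired t-test, and Wilcoxon) can be sketched with SciPy as follows; the per-slice Dice samples are synthetic placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Synthetic per-slice Dice scores for two models (placeholders only)
rng = np.random.default_rng(0)
dice_a = rng.normal(0.83, 0.05, size=100)   # e.g. a ResNet-UNet-like model
dice_b = rng.normal(0.74, 0.05, size=100)   # e.g. a weaker baseline

u_stat, p_mw = stats.mannwhitneyu(dice_a, dice_b)   # rank-based, unpaired
t_stat, p_t = stats.ttest_rel(dice_a, dice_b)       # paired t-test
w_stat, p_w = stats.wilcoxon(dice_a - dice_b)       # paired, rank-based
```

A small p-value (e.g. p < 0.0001, as reported in the study) indicates that the difference between the two score distributions is unlikely to be due to chance.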

4.1. Short Note on Lesion Annotation

Ground-truth annotation is always a challenge in AI [97,98]. In our scenario, the lesions overlapped in certain CT slices, making it difficult to ensure precise lesion annotations. Some opacities are borderline, and the radiologist's decision may be highly subjective, resulting in false positives or false negatives. When it is difficult to notice and differentiate opacities in patients with COVID-19, or in those with cardiac disorders, emphysema, fibrosis, or autoimmune diseases with pulmonary manifestations, differences in annotator experience become particularly significant for the annotation of complex investigations [99,100,101,102,103,104,105].

4.2. Explanation and Effectiveness of the AI-Based COVLIAS System

The proposed study uses five AI-based models—PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet—for COVID-19-based lesion detection, and presents a comparison against an existing system in the same domain, known as MedSeg. This proposed study uses (1) DSC (Equation (3)), (2) JI (Equation (4)), (3) BA plots, and (4) ROC curves for the five AI models against MD 1 (or GS 1) and MD 2 (or GS 2) for performance evaluation, to prove the effectiveness of the AI-based COVLIAS system. The same four metrics were used for MedSeg against MD 1 and MD 2 to validate the five AI-based COVLIAS models against it. The two overlap metrics are defined as

DSC = 2|X ∩ Y| / (|X| + |Y|)   (3)

JI = |X ∩ Y| / |X ∪ Y|   (4)

where X is the set of pixels of image 1, the ground-truth or manually delineated image, and Y is the set of pixels of image 2, the AI-predicted image from COVLIAS 1.0Lesion.
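A minimal sketch of the two overlap metrics for binary lesion masks, assuming NumPy arrays; the 4×4 toy masks are illustrative only.

```python
import numpy as np

def dice(x, y):
    """DSC = 2|X ∩ Y| / (|X| + |Y|) for binary masks."""
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    inter = np.logical_and(x, y).sum()
    return 2.0 * inter / (x.sum() + y.sum())

def jaccard(x, y):
    """JI = |X ∩ Y| / |X ∪ Y| for binary masks."""
    x = np.asarray(x, dtype=bool)
    y = np.asarray(y, dtype=bool)
    return np.logical_and(x, y).sum() / np.logical_or(x, y).sum()

gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True        # 4-pixel "lesion" (ground truth)
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True      # 6-pixel prediction, covering all 4 GT pixels
# dice(gt, pred) = 2*4/(4+6) = 0.8; jaccard(gt, pred) = 4/6 ≈ 0.667
```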

4.3. Benchmarking

Several studies have been published that use deep learning algorithms based on chest CT imaging to identify and segment COVID-19 lesions [73,106,107,108]. However, most investigations lack lesion area measurement, transparency overlay generation, HDL utilization, and interobserver analysis. Our benchmarking analysis consists of 12 studies that use solo deep learning (DL) models and hybrid DL models for lesion detection [109,110,111,112,113,114,115,116,117,118,119,120]. Table 5 shows the benchmarking table, consisting of 21 attributes and 13 studies.
Table 5

Benchmarking table.

Author | Year | Model | Classifier | # Patients | # Img | # GT Tracings | Focus | Objective | Modality | Opt& | Augm# | DSC | ACC | AUC | Rad * | CE | Bench
Ding et al. [109] | 2021 | MT-nCov-Net | Res2Net50 | 189 | 36,485 | 8 | Segm. | Lesion | CT | – | – | 0.86 | 99.61 | 0.923 | – | – | –
Hou et al. [110] | 2021 | Improved Canny edge detector | NA | 271 | 812 | NA | NA | Lesion | CT | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | –
Lizzi et al. [112] | 2021 | Cascaded UNet | NA | NA | NA | NA | Class. + Segm. | Lesion | CT | – | – | 0.62 | 93 | ✗ | 1 | ✗ | –
Qi et al. [113] | 2021 | DR-MIL | ResNet-50 and Xception | 241 | 2410 | 1 | NA | NA | CT | ✗ | – | ✗ | 95 | 0.943 | – | – | ✗
Paluru et al. [114] | 2021 | Anam-Net | Custom (UNet + ENet) | 69 | 4339 | 1 | Segm. | Lesion | CT | – | ✗ | 0.77 | 98 | ✗ | ✗ | – | –
Zhang et al. [115] | 2020 | CoSinGAN | NA | 70 | 704 | 1 | Class. + Segm. | Lesion | CT | – | – | 0.75 | ✗ | ✗ | ✗ | ✗ | –
Singh et al. [111] | 2021 | LungINFseg | Modified UNet | 20 | 1800 | 1 | Heatmap + Segm. | Lesion | CT | – | – | 0.88 | ✗ | ✗ | ✗ | – | –
Amyar et al. [117] | 2020 | UNet | NA | 1369 | 1369 | 1 | Class. + Segm. | Lesion | CT | – | ✗ | 0.88 | 94 | 0.97 | ✗ | – | –
Budak et al. [116] | 2021 | A-SegNet | NA | 69 | 473 | 1 | Segm. | Lesion | CT | – | ✗ | 0.89 | ✗ | ✗ | ✗ | ✗ | ✗
Cai et al. [118] | 2020 | UNet | NA | 99 | 250 | 1 | Class. + Segm. | Lung + lesion + predict ICU stay | CT | – | ✗ | 0.77 | ✗ | ✗ | ✗ | ✗ | –
Ma et al. [119] | 2021 | UNet | NA | 70 | NA | 1 | Segm. | Lesion | CT | – | ✗ | 0.67 | ✗ | ✗ | 2 | – | –
Kuchana et al. [120] | 2020 | UNet and attention UNet | NA | 50 | 929 | 1 | Segm. | Lung + lesion | CT | – | ✗ | 0.84 | ✗ | ✗ | 1 | ✗ | ✗
Suri et al. [proposed] | 2021 | PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, ResNet-UNet | VGG, ResNet | 40 | 3000 | 2 | Segm. | Lesion | CT | – | ✗ | 0.79/0.79/0.77/0.80/0.83 | 0.95/0.96/0.95/0.97/0.98 | 0.95/0.94/0.87/0.91/0.87 | 2 | – | –

✗: not reported; –: not available.

* Rad: radiologist; Augm#: augmentation; Opt&: optimization; CE: clinical evaluation; Bench: benchmarking; # Img: number of images.

Ding et al. [109] presented MT-nCov-Net, a multitasking DL network that segments both lungs and lesions in CT scans, based on Res2Net50 [121] as its backbone. This study used five different CT image databases, totaling more than 36,000 images. Augmentation techniques such as random flipping, rotation, cropping, and Gaussian blurring were also applied, and the Dice similarity was 0.86. Hou et al. [110] demonstrated the use of an improved Canny edge detector [122,123] to detect COVID-19 lesions in CT images, using a dataset of about 800 CT images. Lizzi et al. [112] adapted UNet by cascading it for COVID-19-based lesion segmentation on CT images. Various augmentation techniques, such as zooming, rotation, Gaussian noise, elastic deformation, and motion blur, were used in this study. The Dice similarity coefficient (DSC) was 0.62, which is lower than the 0.86 of Ding et al. [109]. ResNet-50 and XceptionNet [124] were used as the backbone of the DR-MIL network demonstrated by Qi et al. [113]. This study used ~2400 CT images, with rotation, reflection, and translation as image augmentation techniques. DSC was not reported in this study, but the AUC was 0.94. Paluru et al. [114] presented a combination of UNet and ENet, named Anam-Net, designed for COVID-19-based lesion segmentation from lung CT images. The model was trained using a cohort of ~4300 images, and the input to this model had to be a segmented lung. Anam-Net was benchmarked against ENet, UNet++, SegNet, LEDNet, etc. No augmentation was reported, and the DSC was 0.77. The authors demonstrated an Android application and a deployment of Anam-Net on an edge device to perform COVID-19-based lesion segmentation. Zhang et al. [115] demonstrated CoSinGAN, the only generative adversarial network (GAN) of its kind for COVID-19-based lesion segmentation. Only ~700 CT lung images were used to train this GAN, with no augmentation techniques.
The DSC of CoSinGAN was 0.75, and it was benchmarked against other models. Singh et al. [111] modified the basic UNet architecture for lesion detection and heatmap generation. LungINFseg, a modified UNet architecture, was developed using a cohort of 1800 CT lung images with some augmentation techniques, and it reported a DSC of 0.8. The results of the modified UNet were benchmarked against previously published segmentation networks such as FCN [125], UNet, SegNet, Inf-Net [126], MIScnn [127,128], etc. The use of UNet with a multiresolution approach was demonstrated by Amyar et al. [117] for lesion detection and classification using 449 COVID-19-positive images. The authors reported an accuracy of 94% and a DSC of 0.88, with no augmentation techniques. In the classification framework only, the model performance was benchmarked against some previously published studies. Budak et al. [116] used SegNet with attention gates to solve the problem of lesion segmentation in COVID-19 patients. Hounsfield unit windowing was also used as part of image pre-processing, with different loss functions to deal with small lesions. A cohort of 69 patients was used in this study, where the authors reported only a DSC of 0.89. A 10-fold CV protocol on 250 images with the UNet model was demonstrated by Cai et al. [118], with a DSC of 0.77. The authors presented lung and lesion segmentation using the same model, and also proposed a method to predict the duration of intensive care unit (ICU) stay based on the findings of the lesion segmentation. Ma et al. [119] also used the standard UNet architecture on a set of 70 patients for 3D CT volume segmentation. Model optimization was carried out during the training process, and a DSC of 0.67 was reported. The authors benchmarked the performance of the model against other studies in the same domain. Lastly, Kuchana et al. [120] used a cohort of 50 patients for lung and lesion segmentation with UNet and attention UNet.
During the training process, the authors optimized the hyperparameters, and the model reported a DSC of 0.84. Arunachalam et al. [129] recently presented a lesion segmentation system based on a two-stage process: Stage I consisted of region-of-interest estimation using region-based convolutional neural networks (RCNNs), while Stage II was used for bounding-box generation. The performance parameters for the training, validation, and test sets were 0.99, 0.931, and 0.8, respectively. The RCNN was used primarily for COVID-19 lesion detection, coupled with automated bounding-box estimation for mask generation.

4.4. Strengths, Weaknesses, and Extension

This is the first pilot study for the localization and segmentation of COVID-19 lesions in CT scans of COVID-19 patients, under the class of COVLIAS 1.0. The main strength was the design of five AI models that were benchmarked against MedSeg, the current industry standard. Furthermore, we demonstrated that COVLIAS 1.0Lesion is superior to MedSeg using the manual lesion tracings MD 1 and MD 2, where MD 1 was used for training and MD 2 for evaluation of the AI models. The system was evaluated using several performance metrics. Despite the encouraging results, the study could not include more than one observer (MD 1) for the manual delineation used in training, due to factors such as cost, time, and availability of a radiologist during the pandemic. During lesion segmentation, the image analysis component that changes the HU values could affect the training process; an in-depth analysis of this is needed [130,131,132], but it is beyond the scope of our current objectives. Several extensions can be attempted in the future. (1) Multiresolution techniques [133,134] embedded with advanced stochastic image-processing methods could be adapted to improve the speed of the system [135,136]. (2) A big data framework could be adopted, whereby multiple sources of information can be used in a deep learning framework [137]. (3) Our study tested interobserver variability by considering two different observers (MD 1 and MD 2). Based on our previously conducted studies [46,58,138,139,140], we assumed that intraobserver variability would be very subtle; we therefore did not consider it crucial to conduct intraobserver studies, also given the lack of funding and the radiologists' time constraints. Thus, intraobserver analysis could be conducted as part of future research [58,96,138]. (4) Furthermore, there could be an additional step involved where, first, the lung is segmented, and then this segmented lung is used for analyzing the lesions [141,142].
This should help to increase the DSC and JI of the AI system; the addition of lung segmentation does, however, increase the system's time and computational cost. (5) One could use joint lesion segmentation and classification in a multiclass framework, such as classification of GGO, consolidations, and crazy paving, using tissue-characterization approaches [56,143]. (6) One could also conduct multiethnic and multi-institutional studies for lung lesion segmentation, as attempted in other modalities [144]. (7) One could study the lesion distribution in different COVID-19 symptom categories—i.e., high-COVID-19-symptom lesions vs. low-COVID-19-symptom lesions—as tried in other diseases [36]. (8) Since SDL and HDL strategies have been adopted for lesion segmentation, they are likely to carry AI bias [54], which could therefore be studied in the context of lesion segmentation. (9) Several new ideas have emerged that exploit shape, position, and scale; such techniques require spatial attention, channel attention, and scale-based solutions. Recently, advanced solutions of this kind have been tried for different applications, such as human activity recognition (HAR) [145]. Methods such as RNNs or LSTMs can also be incorporated in the skip connections of the UNet or hybrid UNet for superior feature map selection [146]. Systems could also be designed where high-risk lesions (high-valued GGO) and low-risk lesions (low-valued GGO) are combined using ideas such as deep transfer networks [147]. Furthermore, improved loss functions could be explored as part of training the AI models [148,149,150,151,152]. (10) As part of the extension to the system design, one could compare other kinds of cross-validation protocols, such as 2-fold, 3-fold, 4-fold, 10-fold, and jack-knife (JK) protocols such as training equals testing. Examples of such protocols can be seen in our previous studies [45,59,60,153,154,155].
Even though our design used a fivefold protocol, our experience has shown only slight variations in performance with changes in the cross-validation protocol.
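The fivefold protocol discussed above can be sketched as a patient-level split, which keeps all slices of one patient on the same side of the partition; the 40 IDs mirror the cohort size, and the helper function name is ours.

```python
import numpy as np

def k_fold_patient_splits(patient_ids, k=5, seed=42):
    """Shuffle patient IDs and yield (train_ids, test_ids) per fold,
    so no patient contributes slices to both train and test sets."""
    rng = np.random.default_rng(seed)
    ids = np.array(patient_ids)
    rng.shuffle(ids)
    folds = np.array_split(ids, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test

patients = list(range(40))                       # cohort of 40 patients
splits = list(k_fold_patient_splits(patients))   # 5 folds of 8 patients each
```

A 10-fold or jack-knife variant only changes the parameter k; the patient-level grouping, which prevents slice leakage between train and test sets, stays the same.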

5. Conclusions

The proposed study presents a comparison between COVLIAS 1.0Lesion and MedSeg for lesion segmentation in 3000 CT slices taken from 40 COVID-19 patients. COVLIAS 1.0Lesion (Global Biomedical Technologies, Inc., Roseville, CA, USA) consists of a combination of solo deep learning (DL) and hybrid DL (HDL) models to tackle lesion location and segmentation more quickly. One DL and four HDL models—namely, PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet—were trained by an expert radiologist. The training scheme adopted a fivefold cross-validation strategy for performance evaluation. As part of the validation, tracings from two trained radiologists were used. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for the Dice similarity and Jaccard index, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Other error metrics, such as correlation coefficient plots for lesion area errors and Bland–Altman plots, showed close agreement with the manual delineations. Statistical tests such as the paired t-test, Mann–Whitney test, and Wilcoxon test were used to demonstrate the stability and reliability of the AI system. The online system segmented each slice in <1 s. To conclude, our pilot study demonstrated the AI model's reliability in locating and segmenting COVID-19 lesions in CT scans; however, multicenter data need to be collected for further validation.