Liver Tumor Localization Based on YOLOv3 and 3D-Semantic Segmentation Using Deep Neural Networks.

Javaria Amin1, Muhammad Almas Anjum2, Muhammad Sharif3, Seifedine Kadry4, Ahmed Nadeem5, Sheikh F Ahmad5.   

Abstract

Worldwide, more than 1.5 million deaths occur due to liver cancer every year. The use of computed tomography (CT) for early detection of liver cancer could save millions of lives per year. There is also an urgent need for a computerized method to interpret, detect, and analyze CT scans reliably, easily, and correctly. However, precise segmentation of minute tumors is a difficult task because of variation in the shape, intensity, and size of the tumor, its low contrast, and the adjacent tissues of the liver. To address these concerns, a model comprising three parts, synthetic image generation, localization, and segmentation, is proposed. An optimized generative adversarial network (GAN) is utilized for the generation of synthetic images. The generated images are localized using an improved localization model, in which deep features are extracted from a pre-trained ResNet-50 model and fed into a YOLOv3 detector as input. The proposed modified model localizes and classifies minute liver tumors with 0.99 mean average precision (mAP). The third part is segmentation, in which a pre-trained InceptionResNetV2 is employed as the base network of DeepLabv3 and subsequently trained with fine-tuned parameters on annotated ground-truth masks. The experiments reflect that the proposed approach achieved greater than 95% accuracy in the testing phase, and, in comparison to recently published work in this domain, this research localizes and segments the liver and minute liver tumors more accurately.

Keywords:  ResNet-50; YOLOv3; DeepLabv3; generative adversarial network; InceptionResNetV2; liver tumor

Year:  2022        PMID: 35453870      PMCID: PMC9025116          DOI: 10.3390/diagnostics12040823

Source DB:  PubMed          Journal:  Diagnostics (Basel)        ISSN: 2075-4418


1. Introduction

The liver, the main organ situated behind the right ribs and beneath the base of the lung, helps in food digestion [1]. It is responsible for filtering the blood, nutrient recovery, and storage [2]. The two major areas of the liver are the right and left lobes; the caudate and quadrate lobes are two further subdivisions. Hepatocellular carcinoma (HCC) arises when liver cells grow rapidly and may spread to other areas of the body [3]. Hepatic primary malignancies arise when the cells behave irregularly [4]. In 2008, 750,000 liver cancer patients were diagnosed, 696,000 of whom died because of the disease [5]. In 2021, 42,230 cases of liver tumor/cancer were diagnosed (12,340 women and 29,890 men), of which 30,230 patients died (9930 female and 20,300 male) [6]. Globally, the prevalence among males is approximately double that among females [7,8]. Medical, imaging [9,10], and laboratory studies, such as MRI and CT scans, detect primary liver malignancy [11]. A CT scan uses radiation to obtain accurate images from different angles, such as the axial, coronal, and sagittal planes [12]. Hepatic malignancy staging relies on the scale and position of the malignancy. It is therefore necessary to establish an automated technique to accurately diagnose and identify the cancerous area in a CT scan [13,14,15,16]. In general, liver CT scans are interpreted through manual or semi-manual procedures; however, these methods are subjective, costly, time-consuming, and extremely vulnerable to error [17]. Many computer-aided approaches [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51] have been developed to resolve these challenges and increase the efficiency of liver tumor diagnosis [52]. However, due to several problems, such as poor contrast between the liver and tumors, varying contrast ratios of tumors, differences in the number and size of tumors, tissue anomalies, and unusual tumor development in response to medical care, existing methods are not especially accurate in separating the liver and lesions [15,53].

Recently, fully convolutional networks (FCNs) [54,55,56,57] have attracted great attention for liver segmentation compared with conventional approaches [58], i.e., shape-based statistical methods [59,60], random forest, AdaBoost [61], and graph cut [62]. Christ et al. utilized a cascaded FCN model for segmentation [63]. The deep cascaded model has been widely utilized to segment liver tumors; however, owing to its local receptive field and shallow network depth, the FCN loses some spatial image information [64]. Building on the FCN, the U-net fuses features through the addition of four skip connections and up-samples them to the input size using deconvolutional layers [65]. The U-net model achieved maximum segmentation accuracy by increasing the depth of the network and its receptive fields. The features of the DenseNet and U-net models have been combined to explore more informative features, and this combination of feature information reduced the computational cost [66]. Jin et al. presented an attention residual model for the extraction of more useful input features [67]. Ginneken et al. employed an encoder-decoder model for precise liver tumor segmentation [68]. H. R. Roth et al. suggested a 3D-FCN model to extract detailed abdominal vessel and organ information [69].
Automated segmentation of liver tumors is more challenging owing to the fuzzy borders between healthy and tumor tissues [19]. To handle these challenging tasks, extensive segmentation models have been developed. A two-stage model, including shape-based and auto-context learning, has been presented for liver tumor segmentation [70]. However, these methods achieved limited accuracy at a high computational cost [71,72]. An LSM (level set method) model with an edge energy term has been utilized that performed better than the conventional Chan-Vese and geodesic models, and a fuzzy pixel-based classification approach has been utilized for segmentation [73]. Existing work does not accurately segment minute liver tumors due to limited training data, and researchers still need large-scale datasets for precise detection of liver lesions [74]. Hence, in the proposed research, these limitations and challenges are addressed through an enhanced GAN model that produces synthetic images to enlarge the training set, which helps in the segmentation of very minute liver tumors. For more accurate localization of the small affected region, a modified YOLOv3-ResNet-50 model is proposed, and InceptionResNetV2 is used as the base network of DeepLabv3 for more precise segmentation and localization. The major steps of the proposed approach are as follows: synthetic liver CT images are created with a modified GAN model and fed into the localization part of the model; after synthetic image generation, the YOLOv3-ResNet-50 model is designed for liver and liver tumor localization; in the last step, a modified 3D-semantic segmentation model is presented, in which InceptionResNetV2 serves as the base network of DeepLabv3. The remaining article is structured as follows: Section 2 explains the suggested model, Section 3 discusses the experimental results, and Section 4 concludes the article.

2. Materials and Methods

In Phase I of the proposed research, synthetic CT images are created using an improved GAN [75] model; in Phase II, they are passed to an improved pre-trained ResNet-50 [76] model serving as the base network of YOLOv3 [77]. The proposed localization model is fine-tuned with selected learning parameters to accurately localize the small liver and liver tumor regions. In Phase III, a convolutional neural network is used to conduct 3D segmentation, with the pre-trained InceptionResNetV2 model serving as the base network of DeepLabv3 [78]. The proposed model is explained in Figure 1.
Figure 1

Proposed model.

2.1. Synthetic Image Generation Using a Generative Adversarial Network (GAN)

A GAN is a deep neural network that generates data approximately resembling the input slices. The model contains two networks: a generator and a discriminator. The generator network takes a random vector and generates an image similar to the training images, whereas the discriminator takes batches of images containing observations from both the training data and the generated data and classifies the input images as real or synthetic. In the proposed modified GAN [79], the generator network contains 13 layers, including one input layer, one project-and-reshape layer, four transposed convolution, three batch-normalization, three ReLU, and one Tanh layer, and the discriminator network comprises 13 layers, including one input, five convolution, four leaky-ReLU, and three batch-normalization layers, as presented in Figure 2.
Figure 2

Proposed GAN model steps.

The loss function of the proposed GAN model is described as follows. The discriminator output is $\hat{Y} = \sigma(Y)$, the probability that an input slice belongs to the real class, where $\sigma$ represents the sigmoid function. The generator loss is

$L_{G} = -\mathrm{mean}\big(\log \hat{Y}_{generated}\big)$,

where $\hat{Y}_{generated}$ denotes the discriminator output probability for synthetically generated images. The discriminator loss,

$L_{D} = -\mathrm{mean}\big(\log \hat{Y}_{real}\big) - \mathrm{mean}\big(\log(1 - \hat{Y}_{generated})\big)$,

where $\hat{Y}_{real}$ denotes the probability of the discriminator output for real input slices, is minimized so that the discriminator accurately classifies both the real input slices and the synthetically generated slices. The generative score is the average of the probabilities related to the discriminator output for synthetically generated images,

$score_{G} = \mathrm{mean}\big(\hat{Y}_{generated}\big)$,

and the discriminative score is the average of the probabilities related to the discriminator output for real and synthetic images,

$score_{D} = \tfrac{1}{2}\big(\mathrm{mean}(\hat{Y}_{real}) + \mathrm{mean}(1 - \hat{Y}_{generated})\big)$.

The hyperparameters of the GAN model are depicted in Table 1.
Table 1

Parameters of GAN model.

Name                             Value
Image size                       (64, 64, 3)
Filter size                      5
Number of filters                64
Latent input size                100
Leaky-ReLU scale                 0.2
Epochs                           3000
Batch size                       128
Learning rate                    0.0002
Gradient decay factor            0.5
Squared gradient decay factor    0.999
Flip factor                      0.3
Validation frequency             100
Projection size                  (4, 4, 512)
Dropout probability              0.5
Table 1 shows the GAN parameters. The size of the input images is 64 × 64 × 3, and the discriminator returns scalar prediction scores based on a series of convolutional, batch-normalization, and leaky-ReLU layers. A probability of 0.5 is selected in the dropout layer, and a filter size of 5 is used in the convolutional layers. The stride is 2 and the leaky-ReLU scale is 0.2. The mini-batch size is 128 for the 3000 epochs. A learning rate of 0.0002, a gradient decay factor of 0.5, and a squared gradient decay factor of 0.999 are selected for GAN model training. A flip factor of 0.3 is set because the generator may fail to train if the discriminator learns to distinguish between real and generated CT images too soon; flipping the labels of a percentage of the real images at random helps balance the learning of the discriminator and the generator.
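As an illustration, the following is a minimal, hedged PyTorch sketch of a generator and discriminator consistent with the architecture and Table 1 parameters described above (latent size 100, projection to 4 × 4 × 512, 5 × 5 filters, stride 2, leaky-ReLU scale 0.2, dropout 0.5, 64 × 64 × 3 output). The paper's experiments were run in MATLAB, so the exact layer ordering and channel widths here are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent=100):
        super().__init__()
        # "Project and reshape" the latent vector to a (512, 4, 4) tensor (Table 1).
        self.project = nn.Linear(latent, 4 * 4 * 512)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(256), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 5, stride=2, padding=2, output_padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1),
            nn.Tanh(),  # 64 x 64 x 3 synthetic CT slice
        )

    def forward(self, z):
        return self.net(self.project(z).view(-1, 512, 4, 4))

class Discriminator(nn.Module):
    def __init__(self, scale=0.2, dropout=0.5):
        super().__init__()
        # Five convolutions, four leaky-ReLUs, three batch-norms, per Section 2.1.
        self.net = nn.Sequential(
            nn.Dropout(dropout),
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(scale),
            nn.Conv2d(64, 128, 5, stride=2, padding=2),
            nn.BatchNorm2d(128), nn.LeakyReLU(scale),
            nn.Conv2d(128, 256, 5, stride=2, padding=2),
            nn.BatchNorm2d(256), nn.LeakyReLU(scale),
            nn.Conv2d(256, 512, 5, stride=2, padding=2),
            nn.BatchNorm2d(512), nn.LeakyReLU(scale),
            nn.Conv2d(512, 1, 4),  # scalar prediction score per image
        )

    def forward(self, x):
        return self.net(x).view(-1)

# Losses matching the equations in Section 2.1 (label flipping omitted for brevity).
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(logits_real, logits_fake):
    return bce(logits_real, torch.ones_like(logits_real)) + \
           bce(logits_fake, torch.zeros_like(logits_fake))

def generator_loss(logits_fake):
    return bce(logits_fake, torch.ones_like(logits_fake))
```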

2.2. Localization of Liver Tumor Using YOLOv3-RES Model

For better detection of smaller objects, YOLOv3 [50] extends YOLOv2 by incorporating detection at multiple scales. Therefore, an improved YOLOv3-RES is suggested for tumor localization. The model comprises a two-stage learning pipeline in which features extracted by ResNet-50 are transferred to YOLOv3. The feature-extraction model comprises 177 layers, i.e., one image input layer, 29 convolutional (CONV), 27 ReLU, 3 max-pooling, 8 depth-concatenation, and 1 up-sample layer. The proposed YOLOv3-RES model contains 181 layers, including 57 CONV, 53 batch-norm, 51 ReLU, 1 max-pooling, 16 addition, up-sample, and depth-concatenation layers, as depicted in Figure 3.
Figure 3

Proposed YOLOv3-ResNet-50 model for localization.

In this step, the images generated by the GAN model, as well as the original CT images, are passed to the proposed localization model, in which ResNet-50 is utilized for feature extraction and two detection heads are added at the end of the network. The feature map of the second detection head (activation-37, rectified linear unit (ReLU)) is twice as large as that of the first detection head (activation-49-ReLU), so it performs better for the localization of small objects. The number of anchors is set to 7 in order to obtain a better tradeoff between the number of anchors and the IoU. In the modified YOLOv3-RES network, features are extracted from the ReLU-49 activation layer; the subsequent layers, such as global average pooling, fully connected, softmax, and classification output, are removed, and eight layers are added, including convolution, ReLU, Conv2Detection1, up-sampling, depth concatenation, Conv1Detection2, ReLU, and Conv2Detection2. The proposed model localizes the coordinates of the liver and liver tumor regions in the CT images more precisely. The selection of the optimum learning rate is a challenging task because it directly affects model performance; therefore, an experiment is performed to select the optimum learning rate, as presented in Table 2.
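To make the head arrangement concrete, here is a hedged PyTorch sketch of two YOLOv3-style detection heads grafted onto two ResNet-50 feature maps, where the second head operates on a grid twice the size of the first via up-sampling and depth concatenation, following the description above. The channel widths, the anchor split across the heads (4 + 3 = 7), and the class count are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class YoloV3ResHeads(nn.Module):
    """Two-scale detection heads over deep (coarse) and shallow (2x finer) features."""
    def __init__(self, c_deep=2048, c_shallow=1024, anchors=(4, 3), n_classes=2):
        super().__init__()
        per_box = 5 + n_classes  # 4 box coordinates + objectness + class scores
        self.conv1 = nn.Sequential(nn.Conv2d(c_deep, 512, 1), nn.ReLU())
        self.det1 = nn.Conv2d(512, anchors[0] * per_box, 1)   # Conv2Detection1
        self.up = nn.Upsample(scale_factor=2)                 # match the finer grid
        self.conv2 = nn.Sequential(nn.Conv2d(512 + c_shallow, 256, 1), nn.ReLU())
        self.det2 = nn.Conv2d(256, anchors[1] * per_box, 1)   # Conv2Detection2

    def forward(self, feat_shallow, feat_deep):
        x = self.conv1(feat_deep)
        y_coarse = self.det1(x)                               # large-object detections
        x = torch.cat([self.up(x), feat_shallow], dim=1)      # depth concatenation
        y_fine = self.det2(self.conv2(x))                     # small-tumor detections
        return y_coarse, y_fine
```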
Table 2

Selection of optimum learning rate after experimentation.

Learning Rate    Error Rate
0.0001           0.2354
0.0005           0.2014
0.001            0.1354
0.002            0.1989
Table 2 depicts the learning rates with their corresponding error rates: 0.2354 at a learning rate of 0.0001, 0.2014 at 0.0005, 0.1354 at 0.001, and 0.1989 at 0.002. After the experimentation, we observed that a learning rate of 0.001 yields a lower error rate than the other values; hence, a 0.001 learning rate is used for further experimentation. Table 3 states the training parameters.
Table 3

Parameters of YOLOv3-RES-net-50 model.

Confidence threshold     0.5
Overlap threshold        0.5
Anchor box mask          [1, 2, 3, 4, 5, 6]
Total anchors            7
Total epochs             100
Batch size               8
Learning rate            0.001
Warmup period            1000
L2 regularization        0.0005
Penalty threshold        0.5
Table 3 depicts the training parameters, where a batch size of eight is selected to stabilize the training (the choice also depends on the available memory). A learning rate of 0.001 is selected with 1000 warmup iterations, during which the learning rate is increased exponentially according to the expression shown in Equation (1); this helps stabilize the gradients. The L2 regularization factor is set to 0.0005, and the penalty threshold is 0.5, so that detections that overlap the ground mask by less than 0.5 are penalized.
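Equation (1) itself is not reproduced in this extract. As a hedged illustration, the sketch below shows one common warmup schedule of this form, the quartic ramp used in MATLAB's reference YOLOv3 training loop, which this configuration resembles; the exact exponent in the paper's Equation (1) is an assumption.

```python
def warmup_learning_rate(iteration, base_lr=0.001, warmup_period=1000):
    """Ramp the learning rate up over the warmup period, then hold it at base_lr."""
    if iteration <= warmup_period:
        return base_lr * (iteration / warmup_period) ** 4  # assumed quartic ramp
    return base_lr

# e.g., warmup_learning_rate(500) -> 6.25e-05; warmup_learning_rate(1000) -> 0.001
```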

2.3. Semantic Segmentation of Liver Cancer Using DeepLabv3 with InceptionResNetV2

Semantic segmentation is proposed for liver cancer segmentation, in which InceptionResNetV2 is utilized as the base model of DeepLabv3 [45]. The InceptionResNetV2 model contains 824 layers, including 241 CONV, 245 ReLU, 199 batch-normalization, 3 pooling, 41 depth-concatenation, 1 global average pooling, 39 scaling, and additional layers. The proposed semantic segmentation model consists of 853 layers, including 253 CONV, 208 batch-norm, 251 ReLU, 3 max-pooling, 1 average pooling, 44 depth-concatenation, 2 transposed CONV, 2 2D-crop, 49 scaling, 38 addition, softmax, and pixel-classification layers. The best-fit parameters from Table 4 are used for training; a minimal sketch of such a segmentation head is given after the table. Figure 4 depicts the proposed semantic segmentation paradigm.
Table 4

Learning parameters.

Parameter          Value
Optimizer          SGDM
Mini-batch size    8
Epochs             100
Input size         512 × 512 × 3
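For orientation, the following is a minimal PyTorch sketch of a DeepLabv3-style head (atrous spatial pyramid pooling followed by per-pixel classification) of the kind placed on top of a backbone such as InceptionResNetV2, up-sampled to the 512 × 512 input size of Table 4. The dilation rates, channel widths, and backbone interface are standard DeepLabv3 defaults and assumptions here, not values confirmed by the paper. A cross-entropy loss over the resulting logits, optimized with SGDM (batch size 8, 100 epochs, per Table 4), would complete the training loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions plus image-level pooling."""
    def __init__(self, c_in, c_out=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c_in, c_out, 1)]
            + [nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r) for r in rates]
        )
        self.image_pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(c_in, c_out, 1))
        self.project = nn.Conv2d(c_out * (len(rates) + 2), c_out, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        feats.append(F.interpolate(self.image_pool(x), size=(h, w), mode="bilinear"))
        return self.project(torch.cat(feats, dim=1))

class DeepLabV3Head(nn.Module):
    def __init__(self, c_backbone, n_classes=2):  # e.g., background vs. liver/tumor
        super().__init__()
        self.aspp = ASPP(c_backbone)
        self.classifier = nn.Conv2d(256, n_classes, 1)

    def forward(self, backbone_feats, out_size=(512, 512)):  # Table 4 input size
        logits = self.classifier(self.aspp(backbone_feats))
        return F.interpolate(logits, size=out_size, mode="bilinear")  # per-pixel logits
```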
Figure 4

The proposed semantic segmentation paradigm.

The model is trained with the SGDM optimizer and a batch size of 8, and the proposed model is consistent at 100 epochs. These parameters are utilized for model building and provide a significant improvement in liver tumor segmentation. The results of the proposed segmentation model are shown with the annotated ground-truth masks in Figure 5.
Figure 5

Depiction of the segmentation results, demonstrating that the suggested model accurately detects the liver: (a) original images; (b) binary segmentation; (c) segmented region mapped onto the original images.

3. Experimental Results

The performance of the proposed method is evaluated on the publicly available 3D-IRCADb-01 dataset, which consists of the CT scans of 10 female and 10 male patients, with hepatic tumors in approximately 75% of the cases. The dataset comprises 20 folders corresponding to 20 different patients, named 3D-IRCADb-01-number, each containing four subfolders: DICOM-patients, DICOM-labeled, DICOM-masks, and VTK-meshes [80]. In this research, 1353 training slices with 1353 masks and 1353 testing slices with 1353 binary masks of the liver and liver tumor are utilized for 3D segmentation. The work is evaluated through three different experiments: Experiment 1 assesses the performance of the improved GAN approach, Experiment 2 computes the performance of the localization method, and Experiment 3 computes the performance of the segmentation model. The proposed research work is implemented on a G5500 gaming laptop with an RTX 2070 8 GB graphics card, a Windows 10 operating system, a 32 TB SSD, and MATLAB R2020b.

3.1. Experiment 1: GAN for Synthetic Image Generation

Experiment 1 evaluates the performance of the GAN model, in which synthetic CT images are generated by the generator model and subsequently classified by the discriminator model. The training performance of the improved GAN model is graphically depicted in Figure 6.
Figure 6

Training model performance: (a) synthetically generated images; (b) prediction scores (the red line shows discriminator scores and the green line represents the loss rate).

Table 5 illustrates the generator’s and discriminator’s prediction scores.
Table 5

GAN results.

Model            Score
Discriminator    0.8092
Generator        0.1354
The GAN model yields a discriminator score of 0.8092 for distinguishing between original and generated data, while the generative score of 0.1354 denotes the average discriminator probability for the generated images. The results show that the GAN model produces synthetic images that resemble the originals. Synthetic images generated by the GAN are depicted in Figure 7.
Figure 7

Synthetic image generation using the GAN: (a,c) input images; (b,d) synthetic images.

3.2. Experiment 2: Localization Using YOLOv3

The synthetic images are transferred to the enhanced localization model. The training efficiency of the proposed model, in terms of total loss and learning rate over iterations, is shown graphically in Figure 8. Figure 9 and Figure 10 depict the suggested method's localization outcomes.
Figure 8

The training performance of the proposed YOLOv3-ResNet-50 model (a) Learning rate (b) Total loss.

Figure 9

Results of tumor localization (a) CT input (b) localized liver tumor. (c) tumor localization with label.

Figure 10

Localization outcomes (a) CT input (b) localized liver.

Figure 9 and Figure 10 show maximum prediction scores of 0.99 and 0.98, and of 0.995 and 0.999, for the liver and liver tumor, respectively. Table 6 lists the obtained localization findings. The localization results demonstrate that the proposed method localizes even very minute liver tumors accurately.
Table 6

Localization results.

Measure    Liver    Liver Tumor
mAP        0.97     0.96
IoU        0.98     0.97
Table 6 shows that 0.97 mAP and 0.98 IoU were achieved on the liver, and 0.96 mAP and 0.97 IoU on the liver tumor, using the benchmark liver CT dataset.
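As a reminder of how the IoU entries in Table 6 are computed at the box level, here is a small, self-contained sketch; boxes are given as (x1, y1, x2, y2) in pixel coordinates, and the 0.5 overlap threshold of Table 3 decides whether a detection counts as a true positive when computing mAP.

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# e.g., box_iou((10, 10, 50, 50), (20, 20, 60, 60)) ~= 0.39 -> below the 0.5
# threshold, so such a detection would not count as a true positive.
```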

3.3. Experiment 3: 3D-Semantic Segmentation of Liver Tumor

In this experiment, the liver and liver tumor regions are segmented using an improved 3D-semantic segmentation model. The model is trained at the pixel level using ground-truth labels. Figure 11 depicts the segmentation performance as a confusion matrix.
Figure 11

Confusion matrices showing the lesion segmentation results for (a) the liver and (b) the liver tumor.

As seen in Figure 12 and Figure 13, the proposed model segments the actual liver tumor lesions more accurately.
Figure 12

3D-semantic liver segmentation (a) CT liver (b) binary segmentation (c,d) 3D-annotated.

Figure 13

Liver tumor segmentation (a) input liver CT (b,c) localization (d) 3D-segmentation (e) mapping on liver CT.

Table 7 shows the 3D segmentation results for the liver and hepatic tumor regions.
Table 7

Liver segmentation using CT images.

Region         Dataset      Global Accuracy    Mean Accuracy    IoU     Precision    Recall    Specificity    F1-Score
Liver          3D-IRCADb    0.981              0.972            0.99    0.99         0.98      0.98           0.984
Liver Tumor    3D-IRCADb    0.991              0.992            0.99    1.00         0.98      1.00           0.995
Table 7 shows that, on the segmented liver region, the proposed method achieved 0.981 global accuracy, 0.972 mean accuracy, 0.99 IoU, 0.99 precision, 0.98 recall, 0.98 specificity, and a 0.984 F1-score. On the segmented liver tumor region, the method achieved 0.991 global accuracy, 0.992 mean accuracy, 0.99 IoU, 1.00 precision, 0.98 recall, 1.00 specificity, and a 0.995 F1-score. A comparison of the proposed method with the work published so far is shown in Table 8.
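For clarity, the Table 7 metrics all follow from the pixel-level confusion matrix of Figure 11; the sketch below spells out the standard definitions (the exact averaging across classes and volumes used by the authors is not specified in this extract).

```python
def segmentation_metrics(tp, fp, fn, tn):
    """Per-class metrics from pixel counts: true/false positives and negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # also called sensitivity
    specificity = tn / (tn + fp)
    iou = tp / (tp + fp + fn)                  # Jaccard index
    f1 = 2 * precision * recall / (precision + recall)  # equals the Dice coefficient
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"precision": precision, "recall": recall, "specificity": specificity,
            "iou": iou, "f1": f1, "accuracy": accuracy}
```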
Table 8

Recent existing work comparison.

Ref.    Year    Existing Model                              Dataset      Liver Score    Liver Tumor Score
[81]    2020    ResNet-50                                   3D-IRCADb    0.96           0.82
[82]    2020    Encoder-decoder model                       3D-IRCADb    0.95           64.3% ± 34.6%
[67]    2020    Residual U-network                          3D-IRCADb    0.96           0.83
[83]    2020    U-net                                       3D-IRCADb    0.96           0.56
[74]    2021    Dilated residual network                    3D-IRCADb    0.98           0.65
[84]    2021    MRDU                                        3D-IRCADb    96.0           76.3
[85]    2021    Region adaptive growing                     3D-IRCADb    -              0.85
[86]    2021    Geometrical, shape, and texture features    3D-IRCADb    -              0.87
[87]    2022    U-shaped network                            3D-IRCADb    -              0.84
-       -       Proposed approach                           3D-IRCADb    0.98           0.99
Table 8 summarizes the results of existing methods. The ResNet-50 model was used for liver and liver tumor segmentation on the 3D-IRCADb dataset, with scores of 0.96 for liver identification and 0.82 for liver tumor identification [81], whereas an encoder-decoder-based semantic segmentation model employed for liver segmentation achieved a score of 0.95 but only 64.3% ± 34.6% on the hepatic tumor [82]. Similarly, the residual U-network was employed for liver and liver tumor segmentation and achieved scores of 0.96 and 0.83, respectively [67]. The U-net model was utilized for liver analysis, with a score of 0.56 [83]. Region adaptive growing was utilized to segment the liver tumor and provided a score of 0.85 [85]. Geometrical, shape, and texture features were utilized for liver tumor segmentation, resulting in a score of 0.87 [86]. The deep attention model provided a segmentation score of 0.85 on the tumor region [84]. A multiscale residual dilated U-network (MRDU) was utilized for liver and liver tumor segmentation and provided Dice scores of 96.0 and 76.3, respectively [84]. A U-shaped network employed for liver tumor segmentation provided a Dice score of 0.84 [87]. From the existing work, we observed that no method provides improved results for both liver and liver tumor segmentation: in the existing methodologies, when the liver segmentation scores increase, the liver tumor segmentation scores decrease [84]. In the proposed research, InceptionResNetV2 is utilized as the core model of DeepLabv3, and the proposed model provides scores of 0.99 for liver tumor and 0.98 for liver segmentation, which are higher than those of existing methods. The comparison reflects that the proposed 3D-localization and segmentation model performs significantly better than existing works on the same benchmark dataset.

4. Conclusions

Segmentation is a difficult task due to the variable size and shape of liver tumors. As a result, a novel framework for liver tumor detection was developed in this work. The number of input slices is increased by the GAN model. A combination of ResNet-50 and the YOLOv3 detector localizes the small liver tumor more precisely; the model achieves 0.97 mAP on the liver and 0.96 mAP on the localized liver tumor. After localization, a 3D-semantic segmentation approach is proposed for segmentation of the affected areas. The improved segmentation model segments the liver and liver tumor pixels more accurately, and, compared to recently published work in this field, the segmented regions attained 0.99 global accuracy.