
Deep Transfer Learning-Based Breast Cancer Detection and Classification Model Using Photoacoustic Multimodal Images.

Maha M Althobaiti1, Amal Adnan Ashour2, Nada A Alhindi3, Asim Althobaiti4, Romany F Mansour5, Deepak Gupta6, Ashish Khanna6.   

Abstract

The rapid development of technologies in biomedical research has enriched and broadened the range of medical equipment. Magnetic resonance imaging, ultrasonic imaging, and optical imaging have been explored by diverse research communities to design multimodal systems, which are essential for biomedical applications. One important tool is photoacoustic multimodal imaging (PAMI), which combines the concepts of optical and ultrasonic systems. At the same time, early detection of breast cancer is essential to reduce mortality. Recent advancements in deep learning (DL) models enable the detection and classification of breast cancer using biomedical images. This article introduces a novel social engineering optimization with deep transfer learning-based breast cancer detection and classification (SEODTL-BDC) model using PAI. The intention of the SEODTL-BDC technique is to detect and categorize the presence of breast cancer using ultrasound images. Primarily, bilateral filtering (BF) is applied as an image preprocessing technique to remove noise. Besides, a lightweight LEDNet model is employed for the segmentation of biomedical images. In addition, a residual network (ResNet-18) model is utilized as a feature extractor. Finally, SEO with a recurrent neural network (RNN) model, named the SEO-RNN classifier, is applied to assign proper class labels to the biomedical images. The performance validation of the SEODTL-BDC technique is carried out using a benchmark dataset, and the experimental outcomes point out the supremacy of the SEODTL-BDC approach over the existing methods.
Copyright © 2022 Maha M. Althobaiti et al.


Year:  2022        PMID: 35572730      PMCID: PMC9098312          DOI: 10.1155/2022/3714422

Source DB:  PubMed          Journal:  Biomed Res Int            Impact factor:   3.246


1. Introduction

Multimodal imaging plays a significant role in healthcare for different diseases by enhancing the clinician's capability to perform screening, surveillance, monitoring, diagnosis, staging, therapy planning and guidance, evaluation of recurrence, and assessment of therapy efficacy [1]. Multimodal imaging systems have been extensively employed in clinical practice and medical research [2], for example, in tumor resection surgery, cardiovascular disease, neuropsychiatric disease, and Alzheimer's disease. Photoacoustic imaging (PAI) is a hybrid biomedical imaging system that exploits both optical and acoustical features [3]. PA imaging has been assessed as a clinical and preclinical imaging technique in the biomedical fields and depends on the PA effect: when a pulsed laser with a nanosecond pulse width illuminates a target object, a PA wave is induced in the object through thermoelastic expansion and subsequent relaxation [4]. An ultrasound (US) transducer detects the PA wave, and an image is reconstructed by the imaging system. PAI is a recent example of the effective rise of optical imaging modalities. PAI uses the absorption features of exogenous or endogenous biomarkers to generate targeted image contrast with a wide-ranging penetration depth and spatial resolution [5]. Figure 1 illustrates the process of PAI.
Figure 1

Process of PAI.

The rich absorption data that PAI offers is complemented well by an imaging modality that provides detailed scattering data. Depending on how the image is formed, PAI is split into two major classes: photoacoustic microscopy (PAM), which employs focus-based image formation, and photoacoustic tomography (PAT), which employs reconstruction-based image formation [6]. Usually, in PAT, a wide unfocused excitation beam is used together with an array of ultrasonic detectors that measure the ultrasound wave at various locations [7]. It provides wide field-of-view (FOV) images and is utilized in applications like breast cancer studies and whole-body imaging of small animals. Mammography has been widely utilized for early screening and detection of breast cancer over the last few years, but reading mammograms is a labor-intensive task for radiologists, who cannot always provide consistent results between readings [8]. The readings depend on subjective criteria, training, and experience. Computer-aided diagnosis (CAD) systems assist radiologists in interpreting sonography for mass detection and classification. The usage of machine learning (ML) has been quickly increasing in the field of medical imaging, including radiomics, medical image analysis, and CAD. Lately, the ML field named deep learning (DL) emerged in computer vision and has become common in various areas [9]. This started with an event in late 2012, when a DL method based on a convolutional neural network (CNN) won an overwhelming victory in the well-known worldwide computer vision competition, the ImageNet classification challenge. Thereafter, researchers in almost every field, including medical imaging, have actively started contributing to the growing area of DL [10-13]. This article introduces a novel social engineering optimization with deep transfer learning-based breast cancer detection and classification (SEODTL-BDC) model using PAI.
The SEODTL-BDC technique involves bilateral filtering (BF) as an image preprocessing technique to remove noise. Moreover, a lightweight LEDNet model is employed for the segmentation of biomedical images. Also, a residual network (ResNet-18) model is utilized as a feature extractor. Furthermore, SEO with a recurrent neural network (RNN) model is applied for image classification. To demonstrate the enhanced outcomes of the SEODTL-BDC model, a series of simulations is performed using a benchmark dataset.

2. Literature Review

Manwar et al. [14] presented a DL-based approach for virtually increasing the maximum permissible exposure (MPE) to improve the signal-to-noise ratio of deep structures in brain tissue. The approach was evaluated in an in vivo sheep brain imaging study and could enable clinical translation of the photoacoustic method in brain imaging, particularly transfontanelle brain imaging in neonates. Ma et al. [15] developed an approach for automatically generating mathematical breast models for PAI. The distinct kinds of tissue are first extracted automatically from mammograms by applying DL and other techniques; the tissues are then integrated with arithmetical set operations to generate a breast image after being assigned optical and acoustic parameters. Zhang et al. [16] investigated DL methods in emerging tomography for breast cancer diagnosis. In particular, they utilized a preprocessing method to enhance the uniformity and quality of input breast cancer images and a transfer learning (TL) technique to accomplish good classification accuracy. Lan et al. [17] introduced a CNN architecture, Y-Net, for reconstructing the initial PA pressure distribution by combining raw data and beamformed images. The network integrates two encoders with one decoder path to optimally use the information from both beamformed images and raw data. Jabeen et al. [18] introduced an architecture for breast cancer classification in ultrasound images that applies DL and fusion of optimally chosen features. The presented method proceeds as follows: (i) data augmentation is implemented to increase the size of the data set for learning of the CNN model; (ii) a pretrained DarkNet-53 architecture is adopted, and the output layer is adapted on the basis of the data set classes. Zhu et al. [19] developed an automated system for categorizing thyroid and breast cancers in ultrasound images with a DCNN.
In particular, they proposed a generic DCNN framework using TL and the same structural and parameter settings to train models for thyroid and breast lesions (TNet and BNet, respectively) and tested the feasibility of the generic model on ultrasound images gathered from medical practice. Ha et al. [20] examined the capability of a CNN to forecast axillary lymph node metastasis from primary breast cancer ultrasound (US) images. The CNN was built entirely of 3 × 3 convolution kernels and linear layers, and feature maps were downsampled with strided convolutions.

3. The Proposed Model

In this study, a novel SEODTL-BDC technique has been developed for the detection and classification of breast cancer utilizing ultrasound images. The proposed SEODTL-BDC technique encompasses a series of subprocesses, namely, BF-based preprocessing, LEDNet-based segmentation, ResNet-18-based feature extraction, RNN-based classification, and SEO-based hyperparameter tuning. The detailed working of every module involved in the SEODTL-BDC technique is elaborated in the following. Figure 2 depicts the overall process of the SEODTL-BDC technique.
Figure 2

Overall process of SEODTL-BDC technique.

3.1. Preprocessing

In this study, the BF technique is used as an image preprocessing tool. It smooths images without blurring the edges through a nonlinear combination of nearby image values. The approach is simple, local, and noniterative. It combines gray levels based on both photometric similarity and geometric proximity, preferring near values to distant values in both the spatial domain and the intensity range. In contrast to filters operating on three color bands individually, a bilateral filter enforces fundamental perceptual metrics in the CIE-Lab color space, smoothing colors and preserving edges in a way that suits human perception [21].
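As an illustration of the idea (not the authors' implementation), a minimal bilateral filter for a 2-D grayscale image can be sketched in NumPy; the `radius`, `sigma_s` (spatial width), and `sigma_r` (range width) values below are hypothetical choices:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Minimal bilateral filter for a 2-D grayscale image in [0, 1].

    Each output pixel is a weighted mean of its neighbours, where the
    weight combines geometric proximity (spatial Gaussian, sigma_s)
    with photometric similarity (range Gaussian, sigma_r).
    """
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    # Precompute the spatial (domain) kernel once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range kernel: down-weight pixels photometrically far from the centre.
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out
```

On a noisy step image this smooths the flat regions while the step edge stays sharp, which is the edge-preserving behaviour described above.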

3.2. Image Segmentation: LEDNet Model

LEDNet follows an encoder-decoder architecture. It uses an asymmetric sequential structure, in which the encoder produces a downsampled feature map and the subsequent decoder adopts an attention pyramid network (APN) that upsamples the feature map to match the input resolution. Besides the SS-nbt unit, the encoder also contains a downsampling unit, which stacks the two parallel outputs of a single 3 × 3 convolution with stride 2 and a max-pooling operation. Downsampling allows a much deeper network to gather context while simultaneously reducing computation. In addition, dilated convolutions give the architecture a large receptive field, resulting in improved accuracy; compared with using larger kernel sizes, this approach improves efficiency with respect to computational cost and parameters. Inspired by the attention mechanism [22], the decoder's APN performs dense estimation using spatial-wise attention. To increase the receptive field, the APN adopts a pyramid attention module that combines features at three pyramid scales, first employing 3 × 3, 5 × 5, and 7 × 7 convolutions with stride 2. The pyramid structure then fuses information from the different scales step by step, accurately integrating context from neighboring scales. Since higher-level feature maps have smaller resolution, using large kernel sizes does not add an excessive computational burden. Next, a 1 × 1 convolution is applied to the encoder output, and the resulting feature map is pixel-wise multiplied by the pyramid attention features. To further improve performance, a global average pooling branch is introduced to integrate a global context prior into the attention. Finally, an upsampling unit matches the resolution of the input images.
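The downsampling unit described above (two parallel branches, a 3 × 3 convolution with stride 2 and a max-pooling, whose outputs are stacked) can be sketched as follows; this is a simplified single-input-channel NumPy illustration of the idea, not the LEDNet code:

```python
import numpy as np

def conv3x3_stride2(x, kernel):
    """3x3 convolution with stride 2 and padding 1 on a 2-D feature map."""
    p = np.pad(x, 1)
    h, w = x.shape
    oh, ow = (h + 1) // 2, (w + 1) // 2
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(p[2*i:2*i + 3, 2*j:2*j + 3] * kernel)
    return out

def maxpool2x2(x):
    """2x2 max-pooling with stride 2."""
    h, w = x.shape
    return x[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).max(axis=(1, 3))

def lednet_downsample(x, kernels):
    """Stack the strided-convolution outputs with the max-pooled map
    along the channel axis, halving the spatial resolution."""
    conv_maps = np.stack([conv3x3_stride2(x, k) for k in kernels])
    pool_map = maxpool2x2(x)[None, ...]
    return np.concatenate([conv_maps, pool_map], axis=0)
```

For a 64 × 64 input and three kernels, the unit produces a 4 × 32 × 32 output: the resolution is halved while the channel count grows, which is how the encoder trades spatial size for context.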

3.3. Feature Extraction: ResNet-18 Model

During the feature extraction process, the segmented image is passed to the ResNet-18 technique to identify the lesion regions in the ultrasound images [23]. Extracting deep features from input images requires a trained deep CNN. However, when the model becomes very deep, the degradation problem tends to occur: performance does not improve but degrades. The residual block (RB), from which the model is stacked, is the core of ResNet. Unlike a traditional CNN stacked from convolution and pooling layers, each RB comprises two convolution layers and a shortcut connection. Let x denote the input signal and F(x) the output of the RB before the second-layer activation function. If W1 and W2 denote the weights of the first and second layers of the RB, respectively, then F(x) = W2 f(W1 x), where the activation function f in the RB is ReLU. The final output of the RB is therefore f(F(x) + x). If the target output of the RB equals the input x, this identity mapping is easy to realize in such an architecture, whereas a traditional CNN without shortcut connections would have to fit F(x) = x directly. Here, an 18-layer CNN (ResNet-18), comprising eight RBs, one 7 × 7 convolution layer, one fully connected layer, and two pooling layers, is trained to realize the automated classification of the ultrasound images after resizing and padding. Each RB comprises two 3 × 3 convolution layers.
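The residual computation F(x) = W2 f(W1 x) with block output f(F(x) + x) can be illustrated with a small fully connected analogue (a sketch of the formula only, not the actual convolutional block):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """Fully connected analogue of a ResNet residual block.

    F(x) = W2 @ relu(W1 @ x); the block output is relu(F(x) + x),
    so the identity mapping is recovered simply by driving F toward 0.
    """
    F = W2 @ relu(W1 @ x)
    return relu(F + x)
```

With both weight matrices at zero, the block reduces to relu(x), showing why learning an identity-like mapping is easy with the shortcut connection.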

3.4. Image Classification: Optimal RNN Model

At the final stage, the SEO-RNN model is applied for the detection and classification of breast cancer using ultrasound images. A conventional NN presumes that each input and output are independent of one another. Nevertheless, this assumption does not hold in several applications, especially those that use sequential data, such as speech recognition tasks. Unlike a conventional NN, an RNN generates output that depends on the previously computed state and repeatedly applies the same operation to the sequence elements. In other words, an RNN benefits from a memory that stores previously computed information. RNNs are commonly used for language modeling and have shown great potential in natural language processing tasks [24]. The class probability is given in the following. Let x_t and h_t represent the input and hidden state at timestamp t, respectively. The output y_t at timestamp t is determined by

y_t = softmax(V h_t),

where V denotes the weight matrix of the output layer. The hidden state h_t represents the memory of the network and is estimated from the preceding hidden state and the input at the current step:

h_t = f(U x_t + W h_{t-1}),

where U and W represent the weight matrices for the input and hidden states, respectively. Usually, the activation function f is a nonlinearity, namely, tanh, ReLU, or sigmoid. In an RNN, the total number of parameters is reduced in comparison to an FFNN, since the parameters are shared across all steps; hence, for distinct inputs, the same operation is applied at every step. To optimally tune the hyperparameters involved in the RNN model, the SEO algorithm is utilized. SEO is a two-solution-based metaheuristic proposed by Fard et al. [25]. The subsequent steps describe the algorithm. The metaheuristic is initialized with two random solutions and their fitness values, and the solutions take the roles of attacker and defender, with the better solution acting as the attacker. Here, a solution is called a person, and the variables of a solution are called traits.
In an Nvar-dimensional optimization problem, a person is initialized arbitrarily as an array of size 1 × Nvar. After the solutions are initialized, their fitness values are estimated. The next steps mimic the training and retraining of the attacker on the defender. The attacker tests each variable (trait) of the defender to recognize the most effective traits. Then, an α percentage of the attacker's traits is chosen arbitrarily and replaced with the corresponding traits of the defender, so that Ntrain = round(α · Nvar), where α indicates the percentage of chosen traits, Nvar indicates the total number of traits in a person, and Ntrain is the number of attacker traits exchanged with the corresponding traits of the defender. The attacker then attacks the defender through one of several techniques. In the first, the attacker directly abuses the defender to attain its purpose, moving the defender to a new position def_new computed from its present position def_old, the attacker position att, the rate β of spotting an attack, and random numbers r1 and r2 drawn uniformly from [0, 1]. In phishing, the attacker pretends to attack the defender, so the defender moves to a new position where the attacker wants it to be. In diversion theft, the attacker guides the defender to a new position by deception. In pretext, the attacker traps the defender to defeat it, generating one new solution from random values r1, r2, r3, and r4 within [0, 1]. While responding to an attack, the new position of the defender is estimated and compared with its old position, and the better position is retained for the defender. When the defender's new position becomes better than the attacker's, the attacker and defender swap roles. The flowchart of the SEO algorithm is given in Figure 3.
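The RNN recurrence described above, h_t = f(U x_t + W h_{t-1}) with y_t = softmax(V h_t), can be sketched as a plain forward pass (illustrative only; shapes and tanh as f are assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_forward(xs, U, W, V, h0=None):
    """Vanilla RNN forward pass:
        h_t = tanh(U x_t + W h_{t-1}),  y_t = softmax(V h_t)
    The matrices U, W, V are shared across all timestamps
    (the parameter sharing mentioned in the text).
    """
    h = np.zeros(W.shape[0]) if h0 is None else h0
    ys = []
    for x in xs:
        h = np.tanh(U @ x + W @ h)      # memory carried across steps
        ys.append(softmax(V @ h))       # class probabilities at step t
    return np.array(ys), h
```

Each output row is a probability vector over the classes, which is what the softmax over V h_t provides.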
Figure 3

Flowchart of SEO algorithm.
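A loose sketch of the two-person SEO loop, keeping only the structure described above (trait exchange, an attack move, greedy response, role swap); the exact attack formulas of Fard et al. [25] are replaced here by a simple hypothetical perturbation toward the attacker:

```python
import numpy as np

def seo_minimize(f, n_var, bounds, iters=300, alpha=0.3, seed=0):
    """Simplified two-solution SEO-style loop for minimizing f.

    This is a structural sketch, not the published algorithm: the
    obtaining/phishing/diversion/pretext moves are stood in for by a
    random step toward the attacker.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    persons = rng.uniform(lo, hi, size=(2, n_var))
    fit = np.array([f(p) for p in persons])
    best_p, best_f = persons[fit.argmin()].copy(), fit.min()
    for _ in range(iters):
        a, d = fit.argsort()                      # attacker = better person
        # Training/retraining: attacker copies alpha% of defender traits.
        n_train = max(1, round(alpha * n_var))
        idx = rng.choice(n_var, n_train, replace=False)
        persons[a, idx] = persons[d, idx]
        fit[a] = f(persons[a])
        # Attack: perturb the defender toward the attacker (stand-in move).
        cand = persons[d] + rng.uniform(0, 1, n_var) * (persons[a] - persons[d])
        cand = np.clip(cand + 0.1 * rng.standard_normal(n_var), lo, hi)
        # Response: the defender keeps the better of its old/new positions.
        if f(cand) < fit[d]:
            persons[d], fit[d] = cand, f(cand)
        if fit.min() < best_f:                    # roles swap implicitly next loop
            best_p, best_f = persons[fit.argmin()].copy(), fit.min()
    return best_p, best_f
```

In the hyperparameter-tuning context of this paper, f would score an RNN configuration (e.g. validation error) and each trait would encode one hyperparameter.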

4. Results and Discussion

The performance validation of the SEODTL-BDC model is carried out using a benchmark breast ultrasound dataset [26]. It comprises 437 benign images, 210 malignant images, and 133 normal images. Some sample images are shown in Figure 4.
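With 437 + 210 + 133 = 780 images in total, a 70 : 30 split leaves 234 test images, which matches the confusion-matrix counts reported below. A minimal index-splitting helper (hypothetical, not the authors' code) for producing the 50 : 50, 60 : 40, and 70 : 30 splits:

```python
import numpy as np

def split_indices(n, train_frac, seed=0):
    """Shuffle n sample indices and split into train/test index arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    cut = int(round(train_frac * n))
    return idx[:cut], idx[cut:]
```

For example, `split_indices(780, 0.7)` yields 546 training and 234 testing indices.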
Figure 4

Sample images.

Figure 5 illustrates the set of three confusion matrices produced by the SEODTL-BDC technique on the test dataset. The outcomes indicate that the SEODTL-BDC model achieves effectual classification under varying sizes of training/testing data. For example, with training/testing data of 70 : 30, the SEODTL-BDC model recognized 130 instances of the benign class, 63 images of the malignant class, and 39 images of the normal class. Likewise, with training/testing data of 60 : 40, the SEODTL-BDC model correctly classified 174, 83, and 51 images as benign, malignant, and normal, respectively.
Figure 5

Confusion matrix of SEODTL-BDC technique under three training/testing datasets.

Table 1 provides the overall classification results of the SEODTL-BDC model on distinct training/testing data. The results show that the SEODTL-BDC model yields maximal classification performance under all training/testing splits. For example, with training/testing data of 50 : 50, the SEODTL-BDC model offers an average precision of 0.9903, recall of 0.9903, accuracy of 0.9949, and F-score of 0.9903. Simultaneously, with training/testing data of 70 : 30, the SEODTL-BDC model provides an average precision of 0.9891, recall of 0.9891, accuracy of 0.9943, and F-score of 0.9891. Concurrently, with training/testing data of 60 : 40, the SEODTL-BDC model results in an average precision of 0.9838, recall of 0.9815, accuracy of 0.9915, and F-score of 0.9827.
Table 1

Result analysis of SEODTL-BDC technique under distinct training/testing dataset.

Methods      Precision   Recall    Accuracy   F-score
Training/testing - 50 : 50
Benign       0.9954      0.9954    0.9949     0.9954
Malignant    0.9905      0.9905    0.9949     0.9905
Normal       0.9851      0.9851    0.9949     0.9851
Average      0.9903      0.9903    0.9949     0.9903
Training/testing - 70 : 30
Benign       0.9924      0.9924    0.9915     0.9924
Malignant    1.0000      1.0000    1.0000     1.0000
Normal       0.9750      0.9750    0.9915     0.9750
Average      0.9891      0.9891    0.9943     0.9891
Training/testing - 60 : 40
Benign       0.9943      0.9943    0.9936     0.9943
Malignant    0.9765      0.9881    0.9904     0.9822
Normal       0.9808      0.9623    0.9904     0.9714
Average      0.9838      0.9815    0.9915     0.9827
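The per-class scores in Table 1 follow the usual one-vs-rest definitions and can be reproduced from a confusion matrix; the helper and the example counts below are a hypothetical illustration, not the authors' evaluation code:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class precision, recall, one-vs-rest accuracy, and F-score
    from a confusion matrix cm (rows = true class, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp            # predicted as class k but wrong
    fn = cm.sum(axis=1) - tp            # class k missed
    tn = cm.sum() - tp - fp - fn
    prec = tp / (tp + fp)
    reca = tp / (tp + fn)
    accu = (tp + tn) / cm.sum()         # one-vs-rest accuracy per class
    f1 = 2 * prec * reca / (prec + reca)
    return prec, reca, accu, f1
```

Note that the one-vs-rest accuracy is shared across classes when errors are symmetric, which is why Table 1 repeats the same accuracy value within each split.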
Table 2 and Figure 6 demonstrate a comprehensive comparative study of the SEODTL-BDC model with existing models on training/testing data of 50 : 50. The results indicate that the LD model yields the least effective outcome, with the lowest precision, recall, accuracy, and F-score. Besides, the ESKNN and FKNN models reach slightly improved values of precision, recall, accuracy, and F-score. Along with that, the ESD and LSVM models obtain considerably increased values of precision, recall, accuracy, and F-score. However, the SEODTL-BDC model accomplishes superior performance, with precision, recall, accuracy, and F-score of 0.9900, 0.9900, 0.9950, and 0.9900, respectively.
Table 2

Comparative analysis of SEODTL-BDC technique with recent approaches under training/testing -50 : 50.

Methods          Precision   Recall    Accuracy   F-score   Classification time (m)
SEODTL-BDC       0.9900      0.9900    0.9950     0.9900    1.5130
ESD model        0.9880      0.9880    0.9890     0.9880    3.3010
LSVM model       0.9890      0.9890    0.9890     0.9890    2.0500
ESKNN model      0.9860      0.9660    0.9870     0.9860    3.1630
FKNN algorithm   0.9870      0.9870    0.9870     0.9870    2.1780
LD algorithm     0.9860      0.9860    0.9860     0.9860    2.0150
Figure 6

Comparative analysis of SEODTL-BDC technique under training/testing (50 : 50).

Table 3 and Figure 7 validate a wide-ranging comparative study of the SEODTL-BDC model with existing models on training/testing data of 70 : 30. The experimental values depict that the LD model leads to the worst performance, with minimal precision, recall, accuracy, and F-score. In addition, the ESKNN and FKNN models reach slightly improved values of precision, recall, accuracy, and F-score. Followed by that, the ESD and LSVM models obtain considerably increased values of precision, recall, accuracy, and F-score. But the SEODTL-BDC model outperforms the other methods, with increased precision, recall, accuracy, and F-score of 0.9890, 0.9890, 0.9940, and 0.9890, respectively.
Table 3

Comparative analysis of SEODTL-BDC technique with recent approaches under training/testing -70 : 30.

Methods          Precision   Recall    Accuracy   F-score   Classification time (m)
SEODTL-BDC       0.9890      0.9890    0.9940     0.9890    1.1420
ESD model        0.9820      0.9830    0.9900     0.9820    2.7850
LSVM model       0.9810      0.9790    0.9910     0.9810    2.0100
ESKNN model      0.9800      0.9810    0.9810     0.9800    2.3620
FKNN algorithm   0.9770      0.9770    0.9770     0.9770    2.0720
LD algorithm     0.9760      0.9760    0.9770     0.9760    2.1920
Figure 7

Comparative analysis of SEODTL-BDC technique under training/testing (70 : 30).

Table 4 and Figure 8 exhibit a brief comparative study of the SEODTL-BDC model with existing models on training/testing data of 60 : 40. The experimental results portray that the LD model reaches an ineffectual outcome, with lower precision, recall, accuracy, and F-score. Moreover, the ESKNN and FKNN models reach certainly enhanced values of precision, recall, accuracy, and F-score.
Table 4

Comparative analysis of SEODTL-BDC technique with recent approaches under training/testing -60 : 40.

Methods          Precision   Recall    Accuracy   F-score   Classification time (m)
SEODTL-BDC       0.9840      0.9820    0.9920     0.9830    1.0920
ESD model        0.9870      0.9860    0.9870     0.9870    1.5720
LSVM model       0.9850      0.9850    0.9860     0.9850    1.1470
ESKNN model      0.9800      0.9790    0.9800     0.9800    1.3220
FKNN algorithm   0.9780      0.9780    0.9780     0.9820    1.4090
LD algorithm     0.9810      0.9810    0.9810     0.9840    1.4220
Figure 8

Comparative analysis of SEODTL-BDC technique under training/testing (60 : 40).

Furthermore, the ESD and LSVM models obtain considerably increased values of precision, recall, accuracy, and F-score. However, the SEODTL-BDC model reaches better performance, with precision, recall, accuracy, and F-score of 0.9840, 0.9820, 0.9920, and 0.9830, respectively. Figure 9 inspects the comparative classification time (CT) examination of the SEODTL-BDC technique against existing techniques. The results show that the SEODTL-BDC technique offers minimal CT over the other techniques under distinct sizes of training/testing data. For instance, on training/testing data of 50 : 50, the SEODTL-BDC technique provides a lower CT of 1.5130 m, whereas the ESD, LSVM, ESKNN, FKNN, and LD models reach higher CTs of 3.3010 m, 2.0500 m, 3.1630 m, 2.1780 m, and 2.0150 m, respectively. Eventually, on training/testing data of 70 : 30, the SEODTL-BDC technique offers a reduced CT of 1.1420 m, whereas the ESD, LSVM, ESKNN, FKNN, and LD models show increased CTs of 2.7850 m, 2.0100 m, 2.3620 m, 2.0720 m, and 2.1920 m, respectively.
Figure 9

CT analysis of SEODTL-BDC technique under three training/testing datasets.

Figure 10 demonstrates the ROC analysis of the SEODTL-BDC technique under different training and testing datasets. The figure shows that the SEODTL-BDC technique reaches an enhanced outcome, with an improved ROC of 98.4816 on training/testing (50 : 50).
Figure 10

ROC of SEODTL-BDC technique under different training/testing datasets.

The overall accuracy analysis of the SEODTL-BDC method on the training/testing (50 : 50) dataset is portrayed in Figure 11. The results demonstrate that the SEODTL-BDC technique accomplishes improved validation accuracy compared to training accuracy. It is also observable that the accuracy values saturate with the number of epochs.
Figure 11

Accuracy of SEODTL-BDC technique under training/testing (50 : 50) dataset.

The overall loss analysis of the SEODTL-BDC technique on the training/testing (50 : 50) dataset is illustrated in Figure 12. The figure reveals that the SEODTL-BDC approach attains reduced validation loss relative to the training loss. It is additionally noticed that the loss values saturate with the number of epochs.
Figure 12

Loss of SEODTL-BDC technique under training/testing (50 : 50) dataset.

From the aforementioned tables and figures, it is evident that the SEODTL-BDC model results in enhanced classification performance over the other methods.

5. Conclusion

In this study, a novel SEODTL-BDC approach has been developed for the detection and classification of breast cancer utilizing ultrasound images. The proposed SEODTL-BDC technique encompasses a series of subprocesses, namely, BF-based preprocessing, LEDNet-based segmentation, ResNet-18-based feature extraction, RNN-based classification, and SEO-based hyperparameter tuning. To demonstrate the improved outcomes of the SEODTL-BDC model, a sequence of simulations was performed using a benchmark dataset. Extensive comparative results point out the supremacy of the SEODTL-BDC approach over the existing methods. Therefore, the SEODTL-BDC model can be applied as a proficient tool for breast cancer classification utilizing ultrasound images. In the future, advanced DL models can be utilized to enhance breast cancer classification performance.