
An Intelligent Diagnosis Method of Brain MRI Tumor Segmentation Using Deep Convolutional Neural Network and SVM Algorithm.

Wentao Wu1, Daning Li2, Jiaoyang Du1, Xiangyu Gao2, Wen Gu3, Fanfan Zhao2, Xiaojie Feng2, Hong Yan1.   

Abstract

Among currently proposed brain segmentation methods, those based on traditional image processing and classical machine learning remain unsatisfactory, so deep learning-based brain segmentation methods are now widely used. Among deep learning-based brain tumor segmentation methods, convolutional network models achieve good segmentation results, but deep convolutional network models suffer from a large number of parameters and a large loss of information during encoding and decoding. This paper proposes a deep convolutional neural network fused with a support vector machine algorithm (DCNN-F-SVM). The proposed brain tumor segmentation model is divided into three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from image space to tumor marker space. In the second stage, the predicted labels obtained from the deep convolutional neural network are input into the integrated support vector machine classifier together with the test images. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Each model was run on the BraTS dataset and a self-made dataset to segment brain tumors. The segmentation results show that the proposed model performs significantly better than either the deep convolutional neural network or the integrated SVM classifier alone.
Copyright © 2020 Wentao Wu et al.


Year:  2020        PMID: 32733596      PMCID: PMC7376410          DOI: 10.1155/2020/6789306

Source DB:  PubMed          Journal:  Comput Math Methods Med        ISSN: 1748-670X            Impact factor:   2.238


1. Introduction

The incidence of brain tumors increases with age [1]. This article focuses on gliomas among brain tumors. According to the location of the glioma, the cell type, and the severity of the tumor, the World Health Organization classifies gliomas into grades I-IV; grades I and II are low-grade gliomas, and grades III and IV are high-grade gliomas [2]. To help doctors accurately remove gliomas during surgery, imaging techniques such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET) are commonly used in clinical practice to segment the glioma area in brain images, which helps the doctor remove the tumor safely and to the maximum extent. MRI in particular offers significant soft-tissue contrast and provides abundant physiological tissue information, so in the clinical treatment of gliomas it is usually used to diagnose them preoperatively, intraoperatively, and postoperatively. A glioma is a tumor composed of a necrotic core, a margin of active tumor, and edematous tissue. Multiple MRI sequences can be used to image the different tumor tissues [3], as shown in Figure 1. At present, MRI of gliomas generally comprises four modal sequences: T1-weighted, post-contrast T1-weighted (T1ce), T2-weighted, and FLAIR. Different sequences reflect different glioma tissues [4]: the FLAIR sequence is suitable for observing edematous tissue, and the T1ce sequence is suitable for observing the active components of the tumor core.
Figure 1

MRI of glioma: (a) T1-weighted, (b) postcontrast T1-weighted, (c) T2-weighted, and (d) FLAIR.

MRI-based segmentation of gliomas and their surrounding abnormal tissues helps the doctor observe the external morphology of each tumor tissue of a patient's glioma and supports imaging-based analysis and further treatment. Segmentation of the glioma is therefore considered the first step in MRI analysis of glioma patients. Because gliomas have different degrees of malignancy and contain multiple tumor tissue regions, and because brain MRI is a multimodal, many-slice three-dimensional scan, manual segmentation of glioma regions requires a great deal of time and manpower. In addition, manual segmentation is usually based on the image brightness observed by the human eye, which is easily affected by image quality and by the annotator's personal factors; it is prone to erroneous segmentation and to segmenting redundant areas. Clinical practice therefore needs a fully automatic segmentation method with good accuracy for gliomas. The difficulties in automatic glioma segmentation can be summarized as follows: (1) gliomas are usually distinguished in the image by the change in pixel intensity between the lesion area and the surrounding normal tissue; because of the gray-scale bias field, the intensity gradient between adjacent tumor tissues is smoothed, blurring tumor tissue boundaries. (2) Gliomas differ in size, shape, and position, which makes them difficult for segmentation algorithms to model; and because the growth position of a glioma is not fixed, it is often accompanied by a tumor mass effect that compresses and deforms the surrounding normal brain tissue, generating irregular background information and increasing the difficulty of segmentation.
Computer-aided diagnosis technology based on machine learning has been widely used in medical image analysis in recent years [5-14]. Because machine learning algorithms can train model parameters on various features of medical images and use the trained model to predict from extracted features, they handle classification, regression, and clustering tasks in medical images well. Deep learning, in particular, can obtain high-dimensional features directly from the data and automatically adjust model parameters through forward propagation and backpropagation, so that performance on related tasks is optimized; medical data processing with deep learning has therefore become a research hotspot. Brain tumor segmentation methods can be roughly divided into three categories: those based on traditional image algorithms [15-20], on machine learning [21-24], and on deep learning [25-30]. In recent years, deep learning has become the method of choice for complex tasks because of its high accuracy. The convolutional neural network (CNN) proposed in [25] has made tremendous progress in the field of image processing, so CNN-based segmentation methods are widely used for lung nodules, the retina, liver cancer, and gliomas [26], and many scholars have begun to apply CNNs to glioma segmentation. Reference [31] proposes a brain cancer segmentation method based on a dual-path CNN. Reference [32] trained two CNNs to segment high-grade and low-grade gliomas. Reference [33] proposed a two-channel three-dimensional CNN for glioma segmentation.
This paper studies deep learning-based glioma segmentation, aiming to segment the glioma region from brain MRI automatically and accurately. For this task, we propose the DCNN-F-SVM deep classifier. The main contributions are as follows. A new deep classifier is proposed, composed of a deep convolutional neural network and an integrated SVM algorithm. First, a CNN is trained to learn the mapping from image space to tumor label space; the labels predicted by the CNN, together with the test images, are then input into an integrated SVM classifier. To make the results more accurate, we deepen the classification process and iterate these two steps again, forming a cascaded CNN-SVM framework. The traditional approach is to train a suitable classifier on the training set and then verify it on the test set; the method proposed here differs in that the model comprises three stages: first, preprocessing, feature extraction, and training of the CNN and SVM; second, testing and generation of the final segmentation results; third, deepening the cascaded CNN-SVM classifier through an iterative step. The proposed model is evaluated on a public dataset and a self-made dataset; compared with CNN and SVM used alone, its superiority is reflected in all evaluation indexes.

2. Related Works

2.1. Process of Brain Tumor Segmentation Algorithm Based on Deep Learning

In currently proposed glioma segmentation methods, the segmentation results of traditional image processing algorithms rely heavily on manual intervention and require prior constraints to guarantee the segmentation effect, resulting in poor robustness and low efficiency. Glioma segmentation methods based on classical machine learning need manually selected image features, so their segmentation effect depends on hand-crafted features and the generalization ability of the algorithm is weak. Glioma segmentation methods based on deep learning can automatically extract image features through a neural network model and segment the glioma region; the shortcomings of strong prior constraints and manual intervention in the above methods are thus overcome, the automation and robustness of segmentation are improved, and good results can be achieved in large-scale, complex glioma segmentation scenarios. Figure 2 shows the flow of a glioma segmentation algorithm based on deep learning. The process can be described as follows: first, obtain the patient's brain MRI and use it as the input data of the algorithm; then divide the input data into a training set, a validation set, and a test set. Because of factors such as noise and intensity inhomogeneity in the original brain MRI, the divided data must be preprocessed; commonly used preprocessing methods for glioma images include image registration, skull stripping, intensity standardization, and bias-field correction. Next, the preprocessed input data are used to train the deep learning model. During training, the deep model automatically performs feature extraction and propagates the extracted features forward through the designed model structure.
At the same time, the multiregion glioma mask is used as the label to compute the loss value, so that the model parameters are adjusted backward over many iterations to optimize model performance. At the end of each iteration, different evaluation indicators are used to evaluate the model, and models that meet the indicator conditions are saved. Finally, the best-evaluated model is used to segment the test set data to obtain the final glioma segmentation results.
Figure 2

Flow chart of glioma segmentation algorithm based on deep learning.
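Of the preprocessing steps listed above, intensity standardization is the easiest to make concrete. A minimal sketch, assuming a z-score normalization computed over brain voxels only (the exact scheme used by the authors is not specified):

```python
import numpy as np

def zscore_normalize(volume, brain_mask):
    """Standardize MRI intensities using only voxels inside the brain mask.

    The mean and standard deviation are computed over brain tissue so that
    the background zeros do not skew the statistics; background voxels are
    kept at zero afterwards.
    """
    brain_voxels = volume[brain_mask > 0]
    mu, sigma = brain_voxels.mean(), brain_voxels.std()
    normalized = (volume - mu) / (sigma + 1e-8)
    normalized[brain_mask == 0] = 0  # keep background at zero
    return normalized
```

After this step, brain voxels have approximately zero mean and unit variance, which stabilizes training across scans acquired with different scanners and protocols.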

2.2. A Deep Brain Tumor Feature Generation Method

CNNs are well-known practical models in the field of deep learning, and their innovative ideas stem from the way the human brain processes signals. The perceptron is a classic model in the field of machine learning, but it has serious shortcomings and cannot solve the XOR problem well; the neocognitron, proposed in 1980, is considered the original form of the convolutional neural network. Building on this, reference [34] proposed the LeNet model, which has multiple convolutional layers, each trained using the backpropagation algorithm [35]. Reference [36] proposed a shift-invariant artificial neural network and studied the parallel structure of the convolutional neural network. However, these models were limited by the experimental data and hardware conditions of their time and were therefore unsuitable for complex tasks such as object detection and scene classification. To solve some of the problems in training convolutional neural networks, Krizhevsky et al. proposed the AlexNet model [37], which introduced ReLU activations and dropout, alleviating the overfitting problem. A CNN is essentially a multilayer neural network with a clear ordering among its layers: an input layer, hidden layers, and an output layer. There can be multiple hidden layers, each composed of multiple two-dimensional planes, and each plane contains multiple neurons. The hidden layers consist of convolutional layers, downsampling layers, and fully connected layers; the convolutional and downsampling layers appear alternately and can be stacked, and there can also be multiple fully connected layers. The network structure of the classic convolutional neural network LeNet is shown in Figure 3.
Figure 3

LeNet convolutional neural network structure.

In the convolution layer, the feature maps output by the previous layer are convolved with the learned convolution kernels; a bias term is added, and the result is passed through the activation function to form the output feature maps. The downsampling layer performs feature selection, keeping representative features. The fully connected layer maps the two-dimensionally distributed features into feature vectors for better classification. The output layer is a simple classification layer, usually logistic regression; here, we use the Softmax classifier. The activation function is usually a nonlinear function, chosen for monotonicity and differentiability, so as to better fit nonlinear models. Common activation functions include the ReLU function, f(x) = max(0, x), and the Softplus function, f(x) = log(1 + e^x). The CNN model structure is simpler and easier to extend than the neocognitron. In the neocognitron, downsampling layers and convolutional layers alternate to perform feature extraction and abstraction; in the convolutional neural network, convolutional and downsampling layers likewise alternate and serve similar functions, but the convolution operation simplifies feature extraction, the activation function replaces the neocognitron's multiple nonlinear functions, and the pooling operation is also simpler. The CNN algorithm flow is shown in Figure 4.
Figure 4

CNN flow chart.
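The alternating convolution and downsampling operations described above can be sketched directly. A minimal NumPy illustration, with the ReLU and Softplus activations from the text; note that, as in most deep learning libraries, the "convolution" below is implemented as cross-correlation:

```python
import numpy as np

def relu(x):
    """ReLU activation: f(x) = max(0, x)."""
    return np.maximum(0, x)

def softplus(x):
    """Softplus activation: f(x) = log(1 + e^x)."""
    return np.log1p(np.exp(x))

def conv2d(image, kernel):
    """'Valid' 2-D convolution of a single-channel feature map
    (cross-correlation, as in most deep learning frameworks)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling (the downsampling layer)."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Stacking `relu(conv2d(...))` followed by `max_pool2d(...)` reproduces, in miniature, the alternating convolution/downsampling structure of LeNet shown in Figure 3.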

2.3. Introduction of Brain Tumor Dataset

The BraTS Challenge, first held in 2012, provides a brain MRI dataset containing both low-grade and high-grade gliomas. The dataset provides MRI of multiple patients, together with a multiregion glioma segmentation ground truth for each patient; the ground truth is the result of fusing 20 segmentation algorithms followed by manual labeling by multiple human experts. Each BraTS competition provides a public glioma dataset, but the datasets provided since BraTS17 differ significantly from those provided before 2017. The datasets used from BraTS14 to BraTS16 contain images of gliomas both before and after surgery, which makes the segmentation criteria in those datasets inconsistent, so they cannot serve as a true segmentation standard and are no longer used in the competitions from BraTS17 onward. The BraTS18 dataset is the BraTS17 dataset with the TCIA glioma dataset added; the TCIA glioma dataset includes images of 262 high-grade and 199 low-grade glioma patients. BraTS18 contains the MRI and ground truth of 543 glioma patients and is currently the most standard glioma segmentation dataset. The details of the BraTS competition datasets over the years are shown in Table 1.
Table 1

Introduction of BraTS dataset over the years.

Dataset | Date | Training set | Validation set | Test set | Total
BraTS12 | 2012 | 30 | 10 | 25 | 65
BraTS13 | 2013 | 30 | 10 | 25 | 65
BraTS14 | 2014 | 40 | 10 | 25 | 65
BraTS15 | 2015 | 274 | n/a | 110 | 384
BraTS16 | 2016 | 274 | n/a | 191 | 465
BraTS17 | 2017 | 210 | 46 | 146 | 412
BraTS18 | 2018 | 285 | 67 | 191 | 543
As shown in Figure 5, gliomas are generally divided into four tumor regions, namely, peritumoral edema (ED), nonenhancing tumor core (NET), enhancing tumor core (ET), and necrotic core (NCR). Among them, ED, NET, and NCR are real glioma tumor tissues; the tumor core is enhanced to facilitate its observation.
Figure 5

Tumor area division of glioma.

2.4. Evaluation Method of Segmentation Result

The common evaluation methods for evaluating the performance of each model in the field of image segmentation are shown in Table 2.
Table 2

The description of the adopted indices.

Index | Expression/description
True Positive (TP) | Pixels the model predicts as glioma region that the doctor also marks as glioma region
False Positive (FP) | Pixels the model predicts as glioma region that are actually background
True Negative (TN) | Pixels the model predicts as background that are actually background
False Negative (FN) | Pixels the model predicts as background that are actually tumor region
Dice Similarity Coefficient (DSC) | DSC = 2TP/(FP + 2TP + FN)
Sensitivity | Sens = TP/(TP + FN)
Specificity | Spec = TN/(TN + FP)
In addition to the above evaluation indicators, there are indicators such as the Hausdorff distance and the positive predictive value. The most commonly used are DSC and sensitivity.
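The three indices in Table 2 follow directly from the pixel counts TP, FP, TN, and FN. A minimal sketch computing them from a pair of binary masks:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute DSC, sensitivity, and specificity from binary masks,
    where 1 marks glioma pixels and 0 marks background."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # predicted tumor, actually tumor
    fp = np.sum(pred & ~truth)   # predicted tumor, actually background
    tn = np.sum(~pred & ~truth)  # predicted background, actually background
    fn = np.sum(~pred & truth)   # predicted background, actually tumor
    dsc = 2 * tp / (fp + 2 * tp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return dsc, sens, spec
```

A perfect segmentation gives DSC = sensitivity = specificity = 1; a prediction that misses tumor pixels lowers sensitivity, while one that over-segments into background lowers specificity.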

3. Introduction of DCNN-F-SVM Model

This study proposes a brain tumor segmentation model based on convolutional neural network fusion SVM. Figure 6 is the model flow chart.
Figure 6

The proposed model flow chart.

The proposed model's segmentation of brain tumor images can be divided into two parts: one is preprocessing, feature extraction, and training of the CNN and SVM; the other is testing and generating the final segmentation results. The process comprises three stages. In the first stage, the CNN and the integrated SVM are trained to obtain the mapping from the gray-image domain to the tumor-label domain. In the second stage, the labeled output of the CNN and the test image are input into the integrated SVM classifier. In the third stage, an iterative step connects the CNN and the integrated SVM classifier, increasing the number of layers. To select optimal features, an intermediate processing step is added to the model, as shown in Figure 7.
Figure 7

Schematic diagram of intermediate processing.

Grayscale, mean, and median values are used to represent each pixel. These features are used to train the CNN to obtain a nonlinear mapping between input features and labels. The integrated SVM classifier is then independently trained using the CNN label map together with the same features as before. At test time, an iterative classification process is applied to the preprocessed input image. First, the CNN classifies the pixels in the key area, generating a presegmentation that is sent to the integrated SVM classifier. A Region Of Interest (ROI) is then generated on the presegmentation, and classification based on the integrated SVM is performed on this ROI in addition to the presegmentation; the integrated SVM thus explores the neighborhood of the CNN output. The CNN then classifies the marked ROI again, and the above steps are repeated to further refine the segmentation results.
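The iterative refinement loop described above can be sketched as follows. The three callables stand in for the trained CNN, the integrated SVM, and the ROI-extraction step; their interfaces are assumptions for illustration, not the authors' implementation:

```python
def cascade_segment(image, cnn_predict, svm_refine, extract_roi, n_rounds=2):
    """Sketch of the iterative CNN -> integrated-SVM refinement.

    cnn_predict(image, roi) returns a label map (roi=None means the whole
    key area); svm_refine(image, labels, roi) reclassifies pixels in the
    ROI; extract_roi(labels) builds a region of interest around the
    current segmentation. All three are hypothetical placeholders.
    """
    labels = cnn_predict(image, roi=None)            # presegmentation by the CNN
    for _ in range(n_rounds):
        roi = extract_roi(labels)                    # ROI on the presegmentation
        labels = svm_refine(image, labels, roi)      # integrated SVM explores the ROI
        labels = cnn_predict(image, roi=roi)         # CNN classifies the marked ROI again
    return labels
```

Each pass through the loop deepens the CNN-SVM cascade by one level, which is the "iterative step" that distinguishes the proposed model from a single trained classifier.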

4. Simulation Experiment

4.1. Experiment-Related Instructions

The experimental datasets used in this study include a public dataset and a self-made dataset, and the compared models are SVM, CNN, and DCNN-F-SVM. In the experimental parameter settings, the window size is set to 5, σ = 0.1, and C = 1000. The public dataset is the BraTS18 dataset; the self-made dataset consists of clinical MRI images of 26 patients. The evaluation indexes used in the experiments are DSC, sensitivity, and specificity. The experimental software and hardware environment is described in Table 3.
Table 3

Experimental environment description.

Hardware configuration | | Software configuration |
Configuration item | Configuration parameter | Configuration item | Configuration parameter
Operating system | Ubuntu 14.04 | Development environment | PyCharm
CPU | AMD A8-5600K | Programming language | Python
RAM | 16.0 GB | Image algorithm library | OpenCV
Video memory | 479 MB | Deep learning algorithm library | TensorFlow
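The per-pixel features named in Section 3 (grayscale, window mean, window median) and the parameters above (window size 5, σ = 0.1, C = 1000) can be sketched with scikit-learn. The reflect padding and the mapping of σ to scikit-learn's `gamma` as 1/(2σ²) are our assumptions, not stated in the paper:

```python
import numpy as np
from sklearn.svm import SVC

def pixel_features(image, window=5):
    """Grayscale, window mean, and window median for each pixel
    (window size 5 as in the experiments; reflect padding is assumed)."""
    pad = window // 2
    padded = np.pad(image, pad, mode='reflect')
    feats = []
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + window, j:j + window]
            feats.append([image[i, j], patch.mean(), np.median(patch)])
    return np.array(feats)

# sigma = 0.1 and C = 1000 as in Section 4.1; gamma = 1/(2*sigma^2) is our
# assumed RBF parameterization.
sigma = 0.1
clf = SVC(kernel='rbf', C=1000, gamma=1 / (2 * sigma ** 2))
```

On a toy image, fitting `clf` on `pixel_features(img)` against per-pixel tumor labels reproduces the SVM branch of the experiments in miniature.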

4.2. Public Dataset Experiment

After model training is completed, the model can predict the test set to obtain the glioma segmentation results. On the test set divided by three-fold cross-validation, the evaluation indexes of each model on the BraTS18 dataset are shown in Table 4. The data in the table show that the proposed model has better tumor segmentation performance than SVM and CNN: compared with SVM, the proposed algorithm improves the three indicators DSC, sensitivity, and specificity by 8.3%, 9.7%, and 1.4%, respectively; compared with CNN, it improves them by 4.7%, 2.6%, and 0.2%, respectively.
Table 4

Evaluation index of each model.

Model | DSC | Sensitivity | Specificity
SVM | 0.8268 | 0.8306 | 0.9845
CNN | 0.8556 | 0.8876 | 0.9962
DCNN-F-SVM | 0.8958 | 0.9110 | 0.9982
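The percentage gains quoted above can be reproduced from the Table 4 values; reading them as relative (rather than absolute) improvements is our interpretation:

```python
# Table 4 values (Section 4.2).
svm  = {'DSC': 0.8268, 'Sensitivity': 0.8306, 'Specificity': 0.9845}
cnn  = {'DSC': 0.8556, 'Sensitivity': 0.8876, 'Specificity': 0.9962}
ours = {'DSC': 0.8958, 'Sensitivity': 0.9110, 'Specificity': 0.9982}

def rel_gain_pct(new, old):
    """Relative improvement of `new` over `old`, in percent."""
    return 100.0 * (new - old) / old

for k in ours:
    print(f"{k}: +{rel_gain_pct(ours[k], svm[k]):.1f}% vs SVM, "
          f"+{rel_gain_pct(ours[k], cnn[k]):.1f}% vs CNN")
```

Rounded to one decimal place, this recovers exactly the 8.3%/9.7%/1.4% gains over SVM and 4.7%/2.6%/0.2% gains over CNN reported in the text.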

4.3. Self-Made Data Experiment

In this section, clinical MRI images of 26 patients were collected, the three models were trained and used to segment the brain tumors, and the experimental results are given. Tables 5 and 6 show the segmentation results of SVM and DCNN-F-SVM, respectively, for the 26 patients.
Table 5

Evaluation data of 26 patients with brain tumor segmentation using the SVM model.

Number | DSC | Sensitivity | Specificity | Number | DSC | Sensitivity | Specificity
1 | 0.8801 | 0.9020 | 0.9563 | 14 | 0.8695 | 0.8896 | 0.9411
2 | 0.8768 | 0.8963 | 0.9368 | 15 | 0.8753 | 0.8976 | 0.9520
3 | 0.8893 | 0.9158 | 0.9605 | 16 | 0.8536 | 0.8729 | 0.9264
4 | 0.8682 | 0.8910 | 0.9482 | 17 | 0.8463 | 0.8667 | 0.9118
5 | 0.8926 | 0.9089 | 0.9795 | 18 | 0.8831 | 0.9053 | 0.9786
6 | 0.8796 | 0.8998 | 0.9385 | 19 | 0.8920 | 0.9107 | 0.9632
7 | 0.8859 | 0.9096 | 0.9543 | 20 | 0.8697 | 0.8896 | 0.9408
8 | 0.8633 | 0.8859 | 0.9386 | 21 | 0.8787 | 0.9006 | 0.9602
9 | 0.8828 | 0.9010 | 0.9715 | 22 | 0.8811 | 0.9120 | 0.9632
10 | 0.8989 | 0.9157 | 0.9634 | 23 | 0.8980 | 0.9234 | 0.9728
11 | 0.9003 | 0.9236 | 0.9726 | 24 | 0.8479 | 0.8752 | 0.9388
12 | 0.8429 | 0.8695 | 0.9367 | 25 | 0.8256 | 0.8610 | 0.9286
13 | 0.8396 | 0.8600 | 0.9302 | 26 | 0.8694 | 0.8887 | 0.9385
Table 6

Evaluation data of 26 patients with brain tumor segmentation using the DCNN-F-SVM model.

Number | DSC | Sensitivity | Specificity | Number | DSC | Sensitivity | Specificity
1 | 0.8923 | 0.9220 | 0.9663 | 14 | 0.8956 | 0.9222 | 0.9785
2 | 0.8867 | 0.9063 | 0.9368 | 15 | 0.8896 | 0.9185 | 0.9669
3 | 0.9091 | 0.9193 | 0.9702 | 16 | 0.8876 | 0.9104 | 0.9678
4 | 0.8782 | 0.9014 | 0.9588 | 17 | 0.8782 | 0.9086 | 0.9585
5 | 0.9026 | 0.9289 | 0.9795 | 18 | 0.9020 | 0.9103 | 0.9786
6 | 0.8998 | 0.9098 | 0.9405 | 19 | 0.9023 | 0.9123 | 0.9752
7 | 0.9056 | 0.9196 | 0.9743 | 20 | 0.8885 | 0.9116 | 0.9600
8 | 0.9030 | 0.9229 | 0.9696 | 21 | 0.8963 | 0.9205 | 0.9696
9 | 0.8927 | 0.9110 | 0.9711 | 22 | 0.9004 | 0.9287 | 0.9745
10 | 0.9126 | 0.9289 | 0.9806 | 23 | 0.9102 | 0.9258 | 0.9798
11 | 0.9185 | 0.9298 | 0.9885 | 24 | 0.8763 | 0.9115 | 0.9598
12 | 0.8789 | 0.9110 | 0.9605 | 25 | 0.8689 | 0.9088 | 0.9469
13 | 0.8825 | 0.9168 | 0.9693 | 26 | 0.8996 | 0.9305 | 0.9797
Among the index values in Table 5, the DSC values are generally distributed around 0.86 with a fluctuation of about 0.18; the sensitivity values around 0.89 with a fluctuation of about 0.14; and the specificity values around 0.95 with a fluctuation of about 0.11. In Table 6, the DSC values are generally distributed around 0.89 with a fluctuation of about 0.15; the sensitivity values around 0.91 with a fluctuation of about 0.12; and the specificity values around 0.96 with a fluctuation of about 0.09. Table 7 shows the DSC, sensitivity, and specificity values of the three methods. The proposed DCNN-F-SVM improves on CNN and SVM used independently: its three indicators (DSC, sensitivity, and specificity) are 3.5%, 2.6%, and 3.2% higher than those of SVM and 1.6%, 0.9%, and 2.4% higher than those of CNN. The proposed model therefore indeed improves segmentation performance.
Table 7

Evaluation indexes of the segmentation results of the three models.

Method | DSC | Sensitivity | Specificity
SVM | 0.8705 | 0.9001 | 0.9586
CNN | 0.8869 | 0.9152 | 0.9657
DCNN-F-SVM | 0.9010 | 0.9236 | 0.9889

5. Conclusion

The diagnosis of brain diseases requires accuracy without deviation; any misdiagnosis causes irreparable losses. The incidence of brain tumors among brain diseases has remained high, and the number of patients has increased year by year, which has also increased the workload of medical personnel in this field. An accurate and efficient brain tumor image segmentation method is therefore urgently needed to meet this growing demand. Against this background, this paper proposes a deep classifier to improve segmentation accuracy and achieve automatic segmentation without manual intervention. The classifier is mainly composed of a DCNN and an integrated SVM connected in series, and its implementation is divided into three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from the image space to the tumor marker space. In the second stage, the predicted labels obtained from the deep convolutional neural network are input into the integrated support vector machine classifier together with the test images. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Simulation experiments verified the superiority and effectiveness of the proposed model. However, the proposed model still has shortcomings such as long computation time; optimizing the algorithm and shortening the running time will be the subject of future work.
References (22 in total; the first 10 are shown)

1.  mDixon-Based Synthetic CT Generation for PET Attenuation Correction on Abdomen and Pelvis Jointly Using Transfer Fuzzy Clustering and Active Learning-Based Classification.

Authors:  Pengjiang Qian; Yangyang Chen; Jung-Wen Kuo; Yu-Dong Zhang; Yizhang Jiang; Kaifa Zhao; Rose Al Helo; Harry Friel; Atallah Baydoun; Feifei Zhou; Jin Uk Heo; Norbert Avril; Karin Herrmann; Rodney Ellis; Bryan Traughber; Robert S Jones; Shitong Wang; Kuan-Hao Su; Raymond F Muzic
Journal:  IEEE Trans Med Imaging       Date:  2019-08-16       Impact factor: 10.048

2.  Optimization of Diagnosis and Treatment of Chronic Diseases Based on Association Analysis Under the Background of Regional Integration.

Authors:  Kaijian Xia; Xiaowei Zhong; Li Zhang; Jianqiang Wang
Journal:  J Med Syst       Date:  2019-01-19       Impact factor: 4.460

3.  Fully Convolutional Networks for Semantic Segmentation.

Authors:  Evan Shelhamer; Jonathan Long; Trevor Darrell
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2016-05-24       Impact factor: 6.226

Review 4.  Machine learning and radiology.

Authors:  Shijun Wang; Ronald M Summers
Journal:  Med Image Anal       Date:  2012-02-23       Impact factor: 8.545

5.  Improved delineation of brain tumors: an automated method for segmentation based on pathologic changes of 1H-MRSI metabolites in gliomas.

Authors:  Andreas Stadlbauer; Ewald Moser; Stephan Gruber; Rolf Buslei; Christopher Nimsky; Rudolf Fahlbusch; Oliver Ganslandt
Journal:  Neuroimage       Date:  2004-10       Impact factor: 6.556

Review 6.  Radiological images and machine learning: Trends, perspectives, and prospects.

Authors:  Zhenwei Zhang; Ervin Sejdić
Journal:  Comput Biol Med       Date:  2019-02-27       Impact factor: 4.589

7.  GLISTR: glioma image segmentation and registration.

Authors:  Ali Gooya; Kilian M Pohl; Michel Bilello; Luigi Cirillo; George Biros; Elias R Melhem; Christos Davatzikos
Journal:  IEEE Trans Med Imaging       Date:  2012-08-13       Impact factor: 10.048

8.  A brain tumor segmentation framework based on outlier detection.

Authors:  Marcel Prastawa; Elizabeth Bullitt; Sean Ho; Guido Gerig
Journal:  Med Image Anal       Date:  2004-09       Impact factor: 8.545

9.  Brain and other nervous system tumours.

Authors:  C S Muir; H H Storm; A Polednak
Journal:  Cancer Surv       Date:  1994

10.  Radiomics: Images Are More than Pictures, They Are Data.

Authors:  Robert J Gillies; Paul E Kinahan; Hedvig Hricak
Journal:  Radiology       Date:  2015-11-18       Impact factor: 11.105

