
Multiple Sclerosis Lesion Segmentation in Brain MRI Using Inception Modules Embedded in a Convolutional Neural Network.

Shahab U Ansari1, Kamran Javed1,2, Saeed Mian Qaisar3,4, Rashad Jillani1, Usman Haider1.   

Abstract

Multiple sclerosis (MS) is a chronic autoimmune disease that forms lesions in the central nervous system. Quantitative analysis of these lesions has proved very useful in clinical trials for therapies and in assessing disease prognosis. However, the efficacy of these quantitative analyses greatly depends on how accurately the MS lesions have been identified and segmented in brain MRI. This is usually carried out by radiologists who label 3D MR images slice by slice using commonly available segmentation tools. However, such manual practices are time consuming and error prone. To circumvent this problem, several automatic segmentation techniques have been investigated in recent years. In this paper, we propose a new framework for automatic brain lesion segmentation that employs a novel convolutional neural network (CNN) architecture. To segment lesions of different sizes, a specific filter size, such as 3 × 3 or 5 × 5, would normally have to be chosen, and it is often hard to decide which filter will produce the best results. GoogLeNet solved this problem by introducing the inception module, which applies 3 × 3, 5 × 5, and 1 × 1 convolutions and max pooling in parallel. Results show that incorporating inception modules in a CNN improves the performance of the network in the segmentation of MS lesions. We compared the results of the proposed CNN architecture for two loss functions, binary cross entropy (BCE) and structural similarity index measure (SSIM), using the publicly available ISBI-2015 challenge dataset. With the BCE loss function, a score of 93.81 is achieved, which is higher than that of the human rater.
Copyright © 2021 Shahab U. Ansari et al.


Year:  2021        PMID: 34484652      PMCID: PMC8410443          DOI: 10.1155/2021/4138137

Source DB:  PubMed          Journal:  J Healthc Eng        ISSN: 2040-2295            Impact factor:   2.682


1. Introduction

Multiple sclerosis (MS) is a chronic disease that damages the nerves in the spinal cord, brain, and optic nerves. Axons in the brain are covered with a myelin sheath. Demyelination is a process in which the myelin sheaths start falling off, developing lesions in the brain's nerves. Millions of people worldwide are affected by MS, which is mainly found in young people between 20 and 50 years of age. The symptoms caused by this disease include fatigue, memory problems, difficulty concentrating, weakness, loss of balance, loss of vision, and many others. Diagnosing and treating this disease is very challenging because of the variability in its clinical expression. These lesions can be traced in magnetic resonance imaging (MRI) using different sequences. Features such as lesion volume and location are very important biomarkers for tracking the progression of the disease. Manual segmentation of these lesions by expert radiologists is the most common practice in clinics, but it is tiresome, time consuming, and error prone. Figure 1 shows the manual segmentation of MS lesions by two raters in one slice of a brain MRI.
Figure 1

Manual segmentation of MS lesions: (a) T1w MRI, (b) manual segmentation by rater 1, and (c) manual segmentation by rater 2.

In recent years, automatic segmentation of MS lesions using convolutional neural networks (CNNs) has been investigated [1-5]. CNNs learn subtle features from the raw image data to facilitate 2D pixel (or 3D voxel) classification that ultimately leads to image segmentation. However, there is no one-size-fits-all CNN model that works for every classification problem or dataset. Expert knowledge has to be incorporated during the design phase of the CNN model based on the nature of the application and the data. Complex problems such as MS lesion segmentation require careful selection of the CNN architecture and training model for an optimum solution. In addition, automatic segmentation of MS lesions in MRI may be challenging for the following reasons:

- The lesion size and location are highly variable
- The edges between anatomical objects are not well defined in MR images due to low contrast
- MR images of clinical quality may have imaging artifacts such as noise and intensity inhomogeneity

In this work, we propose a novel CNN architecture for MS lesion segmentation. MS lesions vary tremendously in size and shape, and they are sometimes difficult to detect in brain MR images. To address this particular challenge, inception modules, originally introduced by Google in GoogLeNet, are added to the CNN model [6]. The significance of the inception module lies in using multiple kernels of different sizes in parallel in an efficient way. This approach captures features of varying scale in the input data without overburdening the network with additional computations. The proposed model is trained with two loss functions, binary cross entropy (BCE) and structural similarity index measure (SSIM). The BCE loss function tries to maximize the difference in the probability distribution between two classes, in this case, lesion and nonlesion voxels [7]. SSIM, on the other hand, is a perception-based loss function that quantifies the similarity between two images [8].
The proposed solution for MS lesion segmentation in brain MRI offers the following contributions:

- Introduction of inception modules embedded in the CNN architecture for the segmentation of MS lesions of different shapes and sizes
- Comparison of MS lesion segmentation results using BCE and SSIM loss functions
- Improved performance of the proposed architecture in terms of the Dice coefficient, positive predictive value, lesion-wise true positive rate, and volume difference of the segmented lesions compared to the gold standard

1.1. Literature Review

In the past decade, deep neural networks have shown promising results in the segmentation of MS lesions in brain MR images. In [9], a novel architecture was proposed for segmenting MS lesions in magnetic resonance images using a deep 3D convolutional encoder with shortcut connections between pathways. The method was evaluated on publicly available data from the ISBI-2015 [10] and MICCAI-2008 [11] challenges. The authors compared their method with five other available approaches for MS lesion segmentation, and the final results show that it outperformed the existing methods. In [12], the authors used a fully automatic multiview CNN approach for segmenting multiple sclerosis lesions in longitudinal MRI data and tested it on the ISBI-2015 dataset. Various deep learning techniques for medical image analysis are surveyed in [13]. Valverde et al. proposed a novel architecture for the segmentation of white matter (WM) lesions in multiple sclerosis using a small amount of imaging data [14]. Their approach is a cascaded CNN model working on 3D MRI patches from the FLAIR and T1w modalities: the output of the first network is used to retrain a second network in series, reducing misclassifications from the first network. The model was evaluated on the publicly available MICCAI-2008 dataset and outperformed all participant approaches. Roy et al. proposed a fully convolutional neural network (FCNN) to segment WM lesions in multicontrast MR images using multiple convolutional pathways [15]. The first pathway of the CNN contains dual convolutional filters for the two image modalities; in the second pathway, convolutional filters are applied in parallel to the output of the first pathway and then concatenated. This method was evaluated on the ISBI-2015 dataset. A novel approach using a fully 2D CNN to segment MS lesions in MR images is proposed in [16]. Maleki et al. investigated the use of a CNN model for the detection and segmentation of MS lesions [17].

In recent studies, multimodal MRI datasets have shown promising results in tissue segmentation. In a recent work on brain tumor segmentation, a deep multitask learning framework was evaluated on multiple BraTS datasets [18]. The authors claimed improvement over the traditional V-Net framework by using two parallel decoder branches: the original decoder performs segmentation, while the newly added decoder performs the auxiliary task of distance estimation to produce more accurate segmentation boundaries. A total loss function combines the two tasks with a gamma factor to reduce the focus on the background area, and different weights are set for each type of label to alleviate the problem of category imbalance. Zhang et al. proposed the ME-Net model and obtained promising results on the BraTS 2020 dataset [19]. Four encoder structures with skip connections were employed for the four modal images of brain tumor MRI, and the combined feature map was given as input to the decoder. The authors also introduced a new loss function, Categorical Dice, and set different weights for different masks. In another study, a 3D supervoxel-based learning method demonstrated promising results in brain tumor segmentation [20]; the features added from multimodal MRI images greatly increased the segmentation accuracy. In an earlier study, Gabor texton features, fractal analysis, curvature, and statistical intensity features from superpixels were used to segment tumors in multimodal brain MR images using extremely randomized trees (ERTs) [21]. The experimental results demonstrated the high detection and segmentation performance of the method. Soltaninejad et al. proposed a method that combined machine-learned features from fully convolutional networks (FCNs) with texton-based histograms as hand-crafted features [22]; a random forest (RF) classifier was then employed for the automated segmentation of brain tumors in the BraTS 2017 dataset.

Segmentation results can be greatly affected by the quality of the MRI images: low resolution, intensity variations, and image acquisition noise hamper the accuracy of a segmentation task. Jin et al. proposed a deep framework for the segmentation of prostate cancer [23]. They showed that segmentation results were greatly improved by using bicubic interpolation and an improved version of the 3D V-Net; the bicubic interpolation of the input data helped enhance the features relevant to prostate segmentation. Recently, attention-based methods have gained traction in the segmentation of small but discrete objects in MRI images. In one study, a dilated attention network was used for the enhancement of left atrium scars [24]; the approach improved scar segmentation accuracy to 87%. Liu et al. proposed a spatially attentive Bayesian deep learning network for the automatic segmentation of the peripheral and transition zones of the prostate with uncertainty estimation [25]; this method outperformed the state-of-the-art methods. The heterogeneity of MS lesions poses a similar challenge for their detection and segmentation in MR images. An attention-based fully convolutional network has also been used in the segmentation of prostate zones [26]; the authors proposed a novel feature pyramid attention mechanism to cope with heterogeneous prostate anatomy. Raschke et al. developed a statistical method to analyze the heterogeneity of brain tumors in multimodal MRI [27]. The approach makes no assumption about the probability distribution of the MRI data and requires no prior knowledge of tumor location, which, according to the authors, gives an advantage in segmenting tumors of varying sizes and spatial locations. In this line of work, one proposed method consists of two deep subnetworks: an encoding network responsible for extracting feature maps and a decoding network responsible for upsampling them. The resulting FCNN was evaluated on the ISBI-2015 dataset.

2. Proposed Methodology

As mentioned earlier, the shape and size of MS lesions vary dramatically, which makes detecting them with machine learning techniques a challenging task. In the proposed methodology, a CNN model with inception modules is employed to automatically segment MS lesions in brain MRI. The filters of multiple sizes used in the inception modules capture features of MS lesions of different sizes. Prior to training the CNN model, the images in the dataset are preprocessed to remove image noise, intensity inhomogeneity, variability of intensity ranges, and nonbrain tissues. In this work, the preprocessed ISBI-2015 image data have been used.

2.1. Dataset

The proposed algorithm uses the ISBI-2015 challenge dataset [10], which is grouped into two categories, training and testing data. The training data, named ISBI-21, are publicly available and comprise 21 MRI images from 5 patients: four patients with 4 time points and one with 5 time points, with a gap of approximately a year between time points. The test data, named ISBI-61, are not publicly available and comprise 61 images from 14 subjects; each subject has 4-5 time points, again spaced approximately a year apart. The training images contain longitudinal scans of all five patients, as shown in Figure 2. During training, we used 80 percent of the total patches of size 100 × 100 for training and the remaining 20 percent for validation.
Figure 2

Sample of the ISBI dataset: (a) T1w, (b) FLAIR, (c) T2w, (d) PDw, (e) manual delineation by rater 1, and (f) manual delineation by rater 2.
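As a concrete illustration of the patching scheme described in this section, the following NumPy sketch extracts 100 × 100 patches from each slice of a toy volume and splits them 80/20 into training and validation sets. The stride of 50 voxels and the helper names are illustrative assumptions; the paper does not specify how patches are sampled.

```python
import numpy as np

def extract_patches(volume, patch_size=100, stride=50):
    """Collect 2D patches of size patch_size x patch_size from each
    axial slice of a 3D volume shaped (slices, rows, cols)."""
    patches = []
    for sl in volume:  # iterate over slices
        for r in range(0, sl.shape[0] - patch_size + 1, stride):
            for c in range(0, sl.shape[1] - patch_size + 1, stride):
                patches.append(sl[r:r + patch_size, c:c + patch_size])
    return np.stack(patches)

def train_val_split(patches, train_frac=0.8, seed=0):
    """Shuffle the patches and split them 80/20 into training and validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    cut = int(train_frac * len(patches))
    return patches[idx[:cut]], patches[idx[cut:]]

# toy volume: 3 slices of 200 x 200 voxels
vol = np.zeros((3, 200, 200))
patches = extract_patches(vol)          # 9 patches per slice, 27 in total
train, val = train_val_split(patches)   # 21 for training, 6 for validation
```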

2.2. Proposed Deep Network Architecture

In a CNN architecture, the kernel size and type of filters have to be selected carefully so that the network can learn all the features useful for classifying objects. Generally, filters of different sizes and pooling schemes are employed in different CNN layers to learn the most salient features in the data. The inception module, however, uses multiple kernels in parallel within each layer and then pools the features [28]. In the proposed framework, we investigate the efficacy of inception modules embedded in a CNN model for the segmentation of MS lesions.

2.2.1. CNN Model

In the proposed method for the segmentation of multiple sclerosis lesions, we incorporated three inception modules in our CNN model. Each module consists of 1 × 1, 3 × 3, and 5 × 5 convolutions together with max pooling and average pooling. The CNN model consists of two convolution layers with 64 feature maps, followed by the inception modules, and then three more convolution layers. The final layer has one feature map for the prediction of lesion and nonlesion voxels. Figure 3 shows the complete architecture with inception modules embedded in the CNN layers. The model is trained with two different loss functions, i.e., binary cross entropy (BCE) and structural similarity index measure (SSIM). BCE measures the difference between two probability distributions for a given random variable or set of events and is used in binary classification tasks, whereas SSIM is a perceptual metric that quantifies image quality degradation, originally developed in the context of lossy data compression. For highly similar images, the value of BCE is low and the value of SSIM is high.
Figure 3

Proposed deep network architecture for MS lesion segmentation in brain MRI.

2.2.2. Inception Module

The fundamental idea behind GoogLeNet is the introduction of inception modules, or inception blocks, in the CNN architecture. In a CNN, the feature maps learned by the previous layer are given as input to the next layer. The inception module takes the previous layer's output and passes it to four different filter operations in parallel, as shown in Figure 4. The feature maps from all the filters are then concatenated to form the final output. The purpose of the 1 × 1 kernel in the inception module is to shrink the depth of the feature maps [29]: a 1 × 1 convolution mixes information across channels while preserving the spatial layout. This strategy lowers the dimensionality of the feature maps, which in turn reduces the computational cost.
Figure 4

Inception block in the proposed CNN architecture.
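The depth-reduction role of the 1 × 1 convolution can be sketched in NumPy: per pixel it is just a linear map across channels, so a 64-channel feature map can be shrunk to 16 channels before the more expensive 3 × 3 and 5 × 5 filters run. The shapes and branch structure below are illustrative assumptions, not the exact configuration of Figure 4.

```python
import numpy as np

def conv1x1(feature_map, weights):
    """1 x 1 convolution: a per-pixel linear map across channels.
    (H, W, C_in) tensordot (C_in, C_out) -> (H, W, C_out); the spatial
    layout is untouched, only the channel depth changes."""
    return np.tensordot(feature_map, weights, axes=([2], [0]))

rng = np.random.default_rng(0)
fmap = rng.standard_normal((100, 100, 64))    # previous-layer output
w = rng.standard_normal((64, 16))             # bottleneck: depth 64 -> 16
reduced = conv1x1(fmap, w)                    # shape (100, 100, 16)

# an inception block concatenates its parallel branches channel-wise; here
# three placeholder branches stand in for the 3x3, 5x5, and pooling paths
merged = np.concatenate([reduced, reduced, reduced], axis=-1)
```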

2.3. Loss Functions

The proposed model is trained with two loss functions, binary cross entropy (BCE) and structural similarity index measure (SSIM). The BCE loss function tries to maximize the difference in the probability distribution between the two classes, in this case, lesion and nonlesion voxels. It measures the performance of a classification model whose output is a probability between 0 and 1, i.e., the output of a sigmoid activation. Mathematically, the BCE loss for a label y with predicted probability p can be computed as

BCE(y, p) = −[y log(p) + (1 − y) log(1 − p)].

SSIM is a perception-based loss function that quantifies the similarity between two images using a statistical model. Let μ_x and μ_y be the means, σ_x² and σ_y² be the variances, and σ_xy be the covariance of the two images x and y; then

SSIM(x, y) = [(2 μ_x μ_y + C1)(2 σ_xy + C2)] / [(μ_x² + μ_y² + C1)(σ_x² + σ_y² + C2)],

where C1 and C2 are regularization constants.
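The two losses can be written in a few lines of NumPy. The sketch below follows the definitions above; the global (whole-image) form of SSIM is used, and the constants C1 and C2 are placeholder values, since the paper does not report them.

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross entropy averaged over voxels; y_pred lies in (0, 1),
    e.g. the output of a sigmoid activation."""
    p = np.clip(y_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def ssim(x, y, c1=1e-4, c2=9e-4):
    """Global SSIM from the means, variances, and covariance of two images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

gt = np.array([1.0, 0.0, 1.0, 0.0])
pred = np.array([0.9, 0.1, 0.8, 0.2])
loss = bce_loss(gt, pred)   # low for a good prediction
sim = ssim(gt, pred)        # close to 1 for similar images
```

As the text notes, a good segmentation drives BCE toward 0 and SSIM toward 1, which is why BCE is minimized while SSIM is maximized (or 1 − SSIM minimized) during training.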

2.4. Model Implementation

The CNN model is implemented in Python using Keras [30] with the TensorFlow library [31]. All the experiments were performed on an Nvidia GeForce RTX 2080 GPU. The deep network is trained end to end using patches, which are extracted from each slice of the MR images during the training phase. The training set is divided into two subsets, one for training the network and the other for validating the results. The Adam method [32], which shows good convergence in neural network parameter optimization, is employed to update the model parameters. The hyperparameters used during network training include a fixed learning rate of 0.0001 for 50 epochs; these settings produced sufficient convergence to optimal network parameters without overfitting the data. The minibatch size is set to 64, and each minibatch includes a random selection of patches. The best model on the validation set is obtained at the 24th epoch; training takes 48 hours on the GPU.
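A single Adam update can be sketched to show why it converges well: it rescales the gradient by bias-corrected running estimates of its first and second moments. The learning rate of 1e-4 matches the paper; the remaining hyperparameters are Adam's common defaults, which the paper does not state.

```python
import numpy as np

def adam_step(param, grad, state, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update using bias-corrected moment estimates of the gradient."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad        # first moment
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2   # second moment
    m_hat = state["m"] / (1 - b1 ** state["t"])           # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

w = np.array([1.0])
state = {"t": 0, "m": np.zeros_like(w), "v": np.zeros_like(w)}
w = adam_step(w, grad=np.array([2.0]), state=state)
# on the first step the bias-corrected update is approximately lr,
# regardless of the gradient's magnitude
```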

3. Results and Discussion

3.1. Performance Metrics

Standard performance metrics have been employed to assess the proposed CNN model. The Dice similarity coefficient measures the reproducibility of a segmentation as a statistical validation against the manual annotation. A similar metric is the Jaccard similarity index, which gives the overlap between the machine segmentation and the ground truth relative to their union. The positive predictive value is the probability that voxels labeled positive by the automatic segmentation are indeed positive. The proportion of positive voxels in the ground truth that are also identified as positive in the automatic segmentation is captured by the true positive rate. The lesion-wise true/false positive rate is the number of lesions in the automatic segmentation that do/do not overlap with lesions in the ground truth. The difference in volume between the automatic segmentation and the ground truth is another important metric for assessing the performance of the CNN model. The Pearson correlation coefficient computes the correlation between the automatic segmentation and the ground truth. The overall score averages the combined effect of all these performance metrics into a single number. Table 1 shows the formulas for these performance metrics.
Table 1

Performance metrics used in the proposed solution.

Metric                            Formula
Dice similarity coefficient       DSC = 2TP / (FN + FP + 2TP)
Jaccard similarity coefficient    JSC = TP / (TP + FP + FN)
Positive predictive value         PPV = TP / (TP + FP)
True positive rate                TPR = TP / (TP + FN)
Lesion-wise true positive rate    LTPR = LTP / RL
Lesion-wise false positive rate   LFPR = LFP / PL
Volume difference                 VD = |TPs − TPgt| / TPgt
Pearson correlation coefficient   Cor = cov(X, Y) / (σX σY)
Overall score                     SC = (1 / (|R||S|)) Σ_{R,S} (DSC/8 + PPV/8 + (1 − LFPR)/4 + LTPR/4 + Cor/4)
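The voxel-wise formulas in Table 1 reduce to a few counts over binary masks. A minimal NumPy sketch follows, with a toy prediction and ground truth; the helper name and example masks are illustrative.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Voxel-wise metrics from Table 1 for binary masks pred and gt."""
    tp = np.sum((pred == 1) & (gt == 1))   # true positives
    fp = np.sum((pred == 1) & (gt == 0))   # false positives
    fn = np.sum((pred == 0) & (gt == 1))   # false negatives
    return {
        "DSC": 2 * tp / (fn + fp + 2 * tp),
        "JSC": tp / (tp + fp + fn),
        "PPV": tp / (tp + fp),
        "TPR": tp / (tp + fn),
        "VD": abs(pred.sum() - gt.sum()) / gt.sum(),  # volume difference
    }

pred = np.array([1, 1, 0, 0, 1])
gt   = np.array([1, 0, 0, 1, 1])
m = segmentation_metrics(pred, gt)   # tp=2, fp=1, fn=1
```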

3.2. Feature Learning by Inception Modules

As suggested by the literature, the proposed CNN model is trained on the T1w, T2w, and FLAIR sequences of the MRI data. Table 2 shows quantitative results for automatic MS lesion segmentation with the BCE loss function for the test images at each time point (TP). Both the Dice and Jaccard similarity indices are reported in the results, although they convey essentially the same information. The performance metrics observed for the proposed CNN model significantly outperform the existing techniques, as shown in Table 3. Kernels of different sizes used in the inception modules help extract discriminative features for the automatic segmentation of MS lesions and background tissues in brain MRI. The most salient features are ultimately pooled using max pooling and average pooling at various stages of the inception modules. The number of inception modules used in the CNN model is also crucial in the architecture design: using too many inception modules degraded the MS lesion segmentation results by overfitting the model to the data, while lowering the number of inception modules also produced poor results, which may correspond to underfitting the CNN model. Experiments have also confirmed that a mix of average pooling and max pooling works better, keeping the most salient features in the high-level feature maps and averaging them in the low-level feature maps. The authors suggest that, for a specific application, the number and placement of inception modules, the filter sizes, and the pooling strategy have to be selected accordingly.
Table 2

Quantification of MS lesion segmentation with the BCE loss function.

Subject  TP  Dice    Jaccard  PPV     TPR     LFPR    LTPR    VD
test01   1   0.6639  0.4969   0.8991  0.5263  0.1356  0.5068  0.4147
test01   2   0.6916  0.5286   0.9131  0.5566  0.0806  0.5128  0.3904
test01   3   0.6820  0.5174   0.8845  0.5549  0.1452  0.5000  0.3726
test01   4   0.6732  0.5074   0.9226  0.5299  0.1034  0.4667  0.4256
test02   1   0.6933  0.5306   0.7548  0.6411  0.1176  0.4653  0.1507
test02   2   0.6823  0.5178   0.8229  0.5828  0.0870  0.4969  0.2918
test02   3   0.6640  0.4970   0.8241  0.5559  0.0638  0.4867  0.3254
test02   4   0.6409  0.4716   0.8529  0.5134  0.0631  0.5411  0.3981
test02   5   0.7099  0.5502   0.8444  0.6123  0.1277  0.4157  0.2748
test03   1   0.4949  0.3288   0.8944  0.3421  0.1250  0.3056  0.6175
test03   2   0.5132  0.3451   0.9271  0.3548  0.1379  0.3333  0.6173
test03   3   0.4988  0.3322   0.9457  0.3387  0.0000  0.4375  0.6419
test03   4   0.5838  0.4122   0.9242  0.4266  0.0800  0.4667  0.5384
test04   1   0.8168  0.6903   0.8693  0.7702  0.1154  0.6944  0.1140
test04   2   0.7928  0.6567   0.8205  0.7668  0.3600  0.5172  0.0654
test04   3   0.8067  0.6760   0.8099  0.8035  0.0800  0.7586  0.0078
test04   4   0.7999  0.6665   0.8095  0.7905  0.2759  0.6970  0.0234
Average      0.6711  0.5133   0.8658  0.5686  0.1234  0.5060  0.3335
Table 3

Comparison with the existing techniques.

Method                         SC     DSC     PPV     LTPR    LFPR    VD
Birenbaum and Greenspan [12]   90.07  0.6271  0.7889  0.5678  0.4975  0.3522
Litjens et al. [13]            86.92  0.5009  0.5491  0.4288  0.5765  0.5707
Valverde et al. [14]           91.33  0.6294  0.7866  0.3669  0.1529  0.3384
Aslani et al. [16]             89.85  0.4856  0.7402  0.3034  0.1708  0.4768
Proposed                       90.84  0.6306  0.7888  0.5736  0.2512  0.3444

3.3. Comparison of BCE and SSIM Loss Functions

Two loss functions, BCE and SSIM, have been used in training the proposed CNN model. Tables 2 and 4 report the quantitative results for the two loss functions, and Table 5 compares them on the basis of the average values of the results. In MS lesion classification, the BCE loss function works better than SSIM. This is intuitive, as BCE evaluates the difference in maximum likelihood between the predictions and the ground truths, whereas SSIM quantifies their perceptual differences, using luminance, contrast, and structure features to compute the similarity between two images. Another reason the BCE loss function works better than SSIM is that loss functions also depend on the activation function used in the output layer: for sigmoid activation, the literature suggests that the BCE loss function is the natural choice due to its accuracy and efficiency. The automatic MS lesion segmentation using the BCE and SSIM loss functions is illustrated in Figure 5.
Table 4

Quantification of MS lesion segmentation with the SSIM loss function.

Subject  TP  Dice    Jaccard  PPV     TPR     LFPR    LTPR    VD
test01   1   0.6061  0.4348   0.8660  0.4662  0.0408  0.4247  0.4617
test01   2   0.6296  0.4595   0.8677  0.4941  0.0702  0.4487  0.4306
test01   3   0.6179  0.4470   0.8500  0.4853  0.0556  0.4024  0.4290
test01   4   0.6194  0.4486   0.8814  0.4775  0.0909  0.4267  0.4582
test02   1   0.6608  0.4934   0.7923  0.5667  0.0864  0.3750  0.2847
test02   2   0.6310  0.4609   0.8246  0.5111  0.1010  0.4025  0.3802
test02   3   0.5987  0.4273   0.8224  0.4707  0.0667  0.3400  0.4277
test02   4   0.5909  0.4194   0.8520  0.4523  0.0467  0.4452  0.4692
test02   5   0.6617  0.4944   0.8427  0.5446  0.0978  0.3614  0.3537
test03   1   0.4394  0.2815   0.7986  0.3031  0.0769  0.3333  0.6205
test03   2   0.4663  0.3041   0.8313  0.3241  0.0800  0.4000  0.6102
test03   3   0.4597  0.2985   0.8520  0.3148  0.0909  0.3750  0.6305
test03   4   0.5254  0.3563   0.8287  0.3847  0.2000  0.3333  0.5358
test04   1   0.7760  0.6340   0.8535  0.7115  0.1304  0.5833  0.1664
test04   2   0.7620  0.6156   0.8345  0.7012  0.2353  0.4138  0.1597
test04   3   0.7729  0.6299   0.8089  0.7400  0.2273  0.5862  0.0852
test04   4   0.7792  0.6383   0.8187  0.7433  0.1667  0.6364  0.0921
test05   1   0.4330  0.2763   0.3939  0.4806  0.4500  0.2778  0.2201
test05   2   0.4622  0.3006   0.5652  0.3910  0.1852  0.4082  0.3082
test05   3   0.5353  0.3655   0.6448  0.4576  0.1951  0.4923  0.2903
test05   4   0.5169  0.3485   0.6145  0.4460  0.1923  0.3750  0.2742
Average      0.5974  0.4350   0.7830  0.4984  0.1374  0.4210  0.3661
Table 5

Quantitative comparison of BCE and SSIM loss functions.

Loss function  SC     DSC     PPV     LTPR    LFPR    VD
BCE            90.84  0.6306  0.7888  0.5736  0.2512  0.3444
SSIM           89.01  0.5934  0.7288  0.4476  0.1935  0.3999
Figure 5

Comparison of the segmentation results when using BCE and SSIM loss functions. (a) T1w. (b) T2w. (c) Rater 1. (d) BCE. (e) SSIM.

3.4. Comparison with Existing Techniques

The proposed methodology is compared with different published techniques for MS lesion segmentation on the ISBI-2015 dataset; the comparison of the results is shown in Table 3. The Dice coefficient, PPV, LTPR, and VD obtained by the proposed methodology show that the model generalizes well to new data. Birenbaum and Greenspan's multiview CNN model achieved a score of 90.07, DSC of 62.71%, PPV of 78.89%, LTPR of 56.78%, LFPR of 49.75%, and VD of 35.22%; it produced the best LTPR among the existing techniques compared here. Litjens et al.'s CNN model performed worst among the compared techniques, with a score of 86.92, DSC of 50.09%, PPV of 54.91%, LTPR of 42.88%, LFPR of 57.65%, and VD of 57.07%. The second best performance was shown by the cascaded CNN architecture proposed by Valverde et al., with a score of 91.33, DSC of 62.94%, PPV of 78.66%, LTPR of 36.69%, LFPR of 15.29%, and VD of 33.84%. The multibranch CNN model proposed by Aslani et al. achieved a score of 89.85, DSC of 48.56%, PPV of 74.02%, LTPR of 30.34%, LFPR of 17.08%, and VD of 47.68%. Finally, the proposed model performed best, with a score of 93.81, DSC of 67.11%, PPV of 86.58%, LTPR of 50.60%, LFPR of 12.34%, and VD of 33.35%. LTPR was the only metric on which the proposed model did not achieve the best value; this shortcoming can be investigated further in future extensions of the present work.

4. Limitations in Real Clinical Studies

The proposed work is an attempt to prove the efficacy of AI-based techniques in medical applications. In recent years, AI has gained traction in automating tedious routine work in clinical settings. However, the diversity and inadequacy of patient data for training deep networks have hampered the practical use of AI-based techniques in clinics. As more data become available and deep neural networks become more efficient, the practicability of these techniques will improve.

5. Conclusions and Future Works

In this work, a CNN model with inception modules is investigated for the automatic segmentation of MS lesions in MRI. The CNN model with inception modules picks up MS lesions of different sizes and shapes more successfully. The key advantage of inception modules is the use of kernels of different sizes, such as 1 × 1, 3 × 3, and 5 × 5, which tend to extract salient features of varying scale from the input. This improves the Dice coefficient, PPV, LTPR, and VD of the segmentation compared to the existing techniques evaluated in Section 3.4: the proposed model achieved the best overall performance, with a score of 93.81, DSC of 67.11%, PPV of 86.58%, LTPR of 50.60%, LFPR of 12.34%, and VD of 33.35%, with LTPR the only metric on which it was not the best. This success can be attributed to accurate learning of MS lesion features of various sizes and shapes. In the present study, we have also found that the BCE loss function works better than the SSIM loss function. The intuition behind this behavior is that BCE tries to maximize the difference between the predicted probability distributions and the ground truths, while SSIM seems to converge to local minima while quantifying the loss. Another important reason is the sigmoid activation function used in the output layer for binary classification, which the authors believe naturally favors the BCE loss function, producing more accurate and efficient results. In the future, this work can be extended by integrating inception modules into different architectures, such as the residual network (ResNet), UNet, parallel CNNs, and cascaded CNNs, on multiple publicly available datasets. The incorporation of event-driven processing can improve the computational efficiency and compression of the suggested solution [33-36]; investigation along this axis is another prospect.
References (18 in total; first 10 shown):

1.  Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

Authors:  Tom Brosch; Lisa Y W Tang; David K B Li; Anthony Traboulsee; Roger Tam
Journal:  IEEE Trans Med Imaging       Date:  2016-02-11       Impact factor: 10.048

2.  Longitudinal multiple sclerosis lesion segmentation: Resource and challenge.

Authors:  Aaron Carass; Snehashis Roy; Amod Jog; Jennifer L Cuzzocreo; Elizabeth Magrath; Adrian Gherman; Julia Button; James Nguyen; Ferran Prados; Carole H Sudre; Manuel Jorge Cardoso; Niamh Cawley; Olga Ciccarelli; Claudia A M Wheeler-Kingshott; Sébastien Ourselin; Laurence Catanese; Hrishikesh Deshpande; Pierre Maurel; Olivier Commowick; Christian Barillot; Xavier Tomas-Fernandez; Simon K Warfield; Suthirth Vaidya; Abhijith Chunduru; Ramanathan Muthuganapathy; Ganapathy Krishnamurthi; Andrew Jesson; Tal Arbel; Oskar Maier; Heinz Handels; Leonardo O Iheme; Devrim Unay; Saurabh Jain; Diana M Sima; Dirk Smeets; Mohsen Ghafoorian; Bram Platel; Ariel Birenbaum; Hayit Greenspan; Pierre-Louis Bazin; Peter A Calabresi; Ciprian M Crainiceanu; Lotta M Ellingsen; Daniel S Reich; Jerry L Prince; Dzung L Pham
Journal:  Neuroimage       Date:  2017-01-11       Impact factor: 6.556

Review 3.  A survey on deep learning in medical image analysis.

Authors:  Geert Litjens; Thijs Kooi; Babak Ehteshami Bejnordi; Arnaud Arindra Adiyoso Setio; Francesco Ciompi; Mohsen Ghafoorian; Jeroen A W M van der Laak; Bram van Ginneken; Clara I Sánchez
Journal:  Med Image Anal       Date:  2017-07-26       Impact factor: 8.545

4.  3D PBV-Net: An automated prostate MRI data segmentation method.

Authors:  Yao Jin; Guang Yang; Ying Fang; Ruipeng Li; Xiaomei Xu; Yongkai Liu; Xiaobo Lai
Journal:  Comput Biol Med       Date:  2020-12-07       Impact factor: 4.589

5.  Multi-branch convolutional neural network for multiple sclerosis lesion segmentation.

Authors:  Shahab Aslani; Michael Dayan; Loredana Storelli; Massimo Filippi; Vittorio Murino; Maria A Rocca; Diego Sona
Journal:  Neuroimage       Date:  2019-04-03       Impact factor: 6.556

6.  Improving automated multiple sclerosis lesion segmentation with a cascaded 3D convolutional neural network approach.

Authors:  Sergi Valverde; Mariano Cabezas; Eloy Roura; Sandra González-Villà; Deborah Pareto; Joan C Vilanova; Lluís Ramió-Torrentà; Àlex Rovira; Arnau Oliver; Xavier Lladó
Journal:  Neuroimage       Date:  2017-04-19       Impact factor: 6.556

Review 7.  The central vein sign and its clinical evaluation for the diagnosis of multiple sclerosis: a consensus statement from the North American Imaging in Multiple Sclerosis Cooperative.

Authors:  Pascal Sati; Jiwon Oh; R Todd Constable; Nikos Evangelou; Charles R G Guttmann; Roland G Henry; Eric C Klawiter; Caterina Mainero; Luca Massacesi; Henry McFarland; Flavia Nelson; Daniel Ontaneda; Alexander Rauscher; William D Rooney; Amal P R Samaraweera; Russell T Shinohara; Raymond A Sobel; Andrew J Solomon; Constantina A Treaba; Jens Wuerfel; Robert Zivadinov; Nancy L Sicotte; Daniel Pelletier; Daniel S Reich
Journal:  Nat Rev Neurol       Date:  2016-11-11       Impact factor: 42.937

8.  Volumetric MRI markers and predictors of disease activity in early multiple sclerosis: a longitudinal cohort study.

Authors:  Tomas Kalincik; Manuela Vaneckova; Michaela Tyblova; Jan Krasensky; Zdenek Seidl; Eva Havrdova; Dana Horakova
Journal:  PLoS One       Date:  2012-11-15       Impact factor: 3.240

9.  Signal-piloted processing and machine learning based efficient power quality disturbances recognition.

Authors:  Saeed Mian Qaisar
Journal:  PLoS One       Date:  2021-05-28       Impact factor: 3.240

10.  A Deep Multi-Task Learning Framework for Brain Tumor Segmentation.

Authors:  He Huang; Guang Yang; Wenbo Zhang; Xiaomei Xu; Weiji Yang; Weiwei Jiang; Xiaobo Lai
Journal:  Front Oncol       Date:  2021-06-04       Impact factor: 6.244

