Bingbao Yan1, Miao Cao2, Weifang Gong1, Benzheng Wei3. 1. School of Life Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China. 2. School of Life Science and Technology, Changchun University of Science and Technology, Changchun, 130022, China. caomiao@cust.edu.cn. 3. Qingdao Academy of Chinese Medical Sciences, Shandong University of Traditional Chinese Medicine, Qingdao, 266000, China.
Abstract
PURPOSE: Fully convolutional neural networks (FCNNs) have achieved good performance in medical image segmentation, and FCNNs that use multimodal images and multi-scale feature extraction achieve higher accuracy in brain tumor segmentation. We therefore made several improvements to U-Net for fully automated segmentation of gliomas from multimodal images, and named the resulting architecture the multi-scale dilated network with deep supervision (MSD-Net). METHODS: MSD-Net has a symmetrical structure composed of a down-sampling path and an up-sampling path. In the down-sampling path, we use a multi-scale feature extraction block (ME) to extract multi-scale features and to focus on primary features. Unlike other methods, the ME combines dilated convolutions with standard convolutions: the dilated convolutions extract multi-scale information, and the standard convolutions merge the features from different scales. The output of the ME therefore contains both local and global information. In the up-sampling path, we add a deep supervision block (DSB), which shortens the back-propagation path. We also pay particular attention to the importance of shallow features for feature restoration. RESULTS: The network was validated on the BraTS 2017 validation dataset. The DSC scores of MSD-Net for the complete tumor, tumor core, and enhancing tumor were 0.88, 0.81, and 0.78, respectively, outperforming most networks. CONCLUSION: This study shows that the ME enhances the feature extraction ability of the network and improves segmentation accuracy, while the DSB speeds up convergence. Shallow features also make an important contribution to feature restoration.
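The abstract does not specify the dilation rates used in the ME block, but the motivation for mixing dilated and standard convolutions can be illustrated with the standard receptive-field arithmetic: a k×k convolution with dilation d behaves like a kernel of effective size k + (k−1)(d−1), so parallel branches with different dilation rates see the input at different scales at the same parameter cost. A minimal sketch, assuming illustrative dilation rates of 1, 2, and 4 (not taken from the paper):

```python
def effective_kernel(k: int, d: int) -> int:
    """Effective kernel size of a k x k convolution with dilation d."""
    return k + (k - 1) * (d - 1)

def receptive_field(layers) -> int:
    """Receptive field of a stack of stride-1 convolutions,
    given as (kernel_size, dilation) pairs."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# Parallel 3x3 branches at different (assumed) dilation rates:
# each branch covers a different spatial scale of the same input.
scales = {d: effective_kernel(3, d) for d in (1, 2, 4)}
print(scales)  # {1: 3, 2: 5, 4: 9}

# Stacking the same three convolutions sequentially instead:
print(receptive_field([(3, 1), (3, 2), (3, 4)]))  # 15
```

The branch outputs (local detail from d=1, wider context from d=2 and d=4) would then be merged by a standard convolution, matching the abstract's description of the ME producing both local and global information.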