Literature DB >> 30807894

OBELISK-Net: Fewer layers to solve 3D multi-organ segmentation with sparse deformable convolutions.

Authors:  Mattias P Heinrich; Ozan Oktay; Nassim Bouteldja

Abstract

Deep networks have set the state of the art in most image analysis tasks by replacing handcrafted features with learned convolution filters within end-to-end trainable architectures. Still, the specification of a convolutional network involves much manual design - the shape and size of the receptive field for convolutional operations is a very sensitive choice that has to be tuned for different image analysis applications. 3D fully-convolutional multi-scale architectures with skip-connections that excel at semantic segmentation and landmark localisation have huge memory requirements and rely on large annotated datasets - an important limitation for their wider adoption in medical image analysis. We propose a novel and effective method based on trainable 3D convolution kernels that learns both filter coefficients and spatial filter offsets in a continuous space, based on the principle of differentiable image interpolation first introduced for spatial transformer networks. A deep network that incorporates this one binary extremely large and inflecting sparse kernel (OBELISK) filter requires fewer trainable parameters and less memory while achieving high-quality results compared to fully-convolutional U-Net architectures on two challenging 3D CT multi-organ segmentation tasks. Extensive validation experiments indicate that the performance of sparse deformable convolutions is due to their ability to capture large spatial context with few expressive filter parameters, and that network depth is not always necessary to learn complex shape and appearance features. A combination with conventional CNNs further improves the delineation of small organs with large shape variations, and the fast inference time afforded by flexible image sampling may open new use cases for deep networks in computer-assisted, image-guided interventions.
Copyright © 2019. Published by Elsevier B.V.
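The core idea in the abstract - a convolution whose sparse kernel samples the image at continuous, learnable spatial offsets via differentiable (trilinear) interpolation - can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names and the single-filter, single-location scope are illustrative assumptions, and the gradient machinery of the real network is omitted.

```python
import numpy as np

def trilinear_sample(vol, z, y, x):
    """Sample a 3D array at continuous (z, y, x) coordinates using
    trilinear interpolation with clamp-to-edge boundary handling."""
    D, H, W = vol.shape
    z0, y0, x0 = int(np.floor(z)), int(np.floor(y)), int(np.floor(x))
    fz, fy, fx = z - z0, y - y0, x - x0
    val = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                # weight of this corner of the surrounding unit cube
                w = ((fz if dz else 1.0 - fz) *
                     (fy if dy else 1.0 - fy) *
                     (fx if dx else 1.0 - fx))
                zi = min(max(z0 + dz, 0), D - 1)
                yi = min(max(y0 + dy, 0), H - 1)
                xi = min(max(x0 + dx, 0), W - 1)
                val += w * vol[zi, yi, xi]
    return val

def sparse_deformable_response(vol, center, offsets, weights):
    """Response of one sparse deformable filter at `center`:
    a weighted sum of the volume sampled at continuous offsets
    (both `offsets` and `weights` would be learned in training)."""
    cz, cy, cx = center
    return sum(w * trilinear_sample(vol, cz + oz, cy + oy, cx + ox)
               for (oz, oy, ox), w in zip(offsets, weights))

# toy 3x3x3 volume: voxel (z, y, x) holds 9z + 3y + x
vol = np.arange(27, dtype=float).reshape(3, 3, 3)
r_int = sparse_deformable_response(vol, (1, 1, 1), [(0, 0, 0)], [1.0])
r_sub = sparse_deformable_response(vol, (1, 1, 1), [(0, 0, 0.5)], [1.0])
```

Because the interpolation is piecewise-smooth in the offsets, the same construction allows the offsets themselves to receive gradients, which is what lets a single sparse kernel learn a very large receptive field.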

Keywords:  Deep learning; Deformable convolutions; Image segmentation; Sparse kernels


Year:  2019        PMID: 30807894     DOI: 10.1016/j.media.2019.02.006

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


Related articles (6 in total)

1.  Generating novel pituitary datasets from open-source imaging data and deep volumetric segmentation.

Authors:  Rachel Gologorsky; Edward Harake; Grace von Oiste; Mustafa Nasir-Moin; William Couldwell; Eric Oermann; Todd Hollon
Journal:  Pituitary       Date:  2022-08-09       Impact factor: 3.599

2.  BV-GAN: 3D time-of-flight magnetic resonance angiography cerebrovascular vessel segmentation using adversarial CNNs.

Authors:  Dor Amran; Moran Artzi; Orna Aizenstein; Dafna Ben Bashat; Amit H Bermano
Journal:  J Med Imaging (Bellingham)       Date:  2022-08-31

3. (Review) A review of deep learning based methods for medical image multi-organ segmentation.

Authors:  Yabo Fu; Yang Lei; Tonghe Wang; Walter J Curran; Tian Liu; Xiaofeng Yang
Journal:  Phys Med       Date:  2021-05-13       Impact factor: 2.685

4.  Dynamic deformable attention network (DDANet) for COVID-19 lesions semantic segmentation.

Authors:  Kumar T Rajamani; Hanna Siebert; Mattias P Heinrich
Journal:  J Biomed Inform       Date:  2021-05-20       Impact factor: 8.000

5. (Review) 3D Deep Learning on Medical Images: A Review.

Authors:  Satya P Singh; Lipo Wang; Sukrit Gupta; Haveesh Goli; Parasuraman Padmanabhan; Balázs Gulyás
Journal:  Sensors (Basel)       Date:  2020-09-07       Impact factor: 3.576

6.  Deep learning-enabled multi-organ segmentation in whole-body mouse scans.

Authors:  Oliver Schoppe; Chenchen Pan; Javier Coronel; Hongcheng Mai; Zhouyi Rong; Mihail Ivilinov Todorov; Annemarie Müskes; Fernando Navarro; Hongwei Li; Ali Ertürk; Bjoern H Menze
Journal:  Nat Commun       Date:  2020-11-06       Impact factor: 14.919

