AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy.

Wentao Zhu1, Yufang Huang2, Liang Zeng3, Xuming Chen4, Yong Liu4, Zhen Qian5, Nan Du5, Wei Fan5, Xiaohui Xie1.   

Abstract

PURPOSE: Radiation therapy (RT) is a common treatment option for head and neck (HaN) cancer. An important step in RT planning is the delineation of organs at risk (OARs) based on HaN computed tomography (CT). However, manually delineating OARs is time-consuming, as each slice of the CT images needs to be individually examined and a typical CT consists of hundreds of slices. Automating OAR segmentation has the benefit of both reducing the time and improving the quality of RT planning. Existing anatomy autosegmentation algorithms use primarily atlas-based methods, which require sophisticated atlas creation and cannot adequately account for anatomy variations among patients. In this work, we propose an end-to-end, atlas-free three-dimensional (3D) convolutional deep learning framework for fast and fully automated whole-volume HaN anatomy segmentation.
METHODS: Our deep learning model, called AnatomyNet, segments OARs from head and neck CT images in an end-to-end fashion, receiving whole-volume HaN CT images as input and generating masks of all OARs of interest in one shot. AnatomyNet is built upon the popular 3D U-net architecture, but extends it in three important ways: (a) a new encoding scheme to allow autosegmentation on whole-volume CT images instead of local patches or subsets of slices, (b) incorporating 3D squeeze-and-excitation residual blocks in encoding layers for better feature representation, and (c) a new loss function combining Dice scores and focal loss to facilitate the training of the neural model. These features are designed to address two main challenges in deep learning-based HaN segmentation: (a) segmenting small anatomies (e.g., optic chiasm and optic nerves) occupying only a few slices, and (b) training with inconsistent data annotations with missing ground truth for some anatomical structures.
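The hybrid loss described in (c) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the weighting factor `lam`, the focusing parameter `gamma`, and the single-class formulation are illustrative assumptions, shown only to make the combination of a soft Dice term and a focal term concrete.

```python
import numpy as np

def hybrid_dice_focal_loss(probs, targets, lam=0.5, gamma=2.0, eps=1e-7):
    """Sketch of a combined Dice + focal loss for one foreground class.

    probs   : predicted foreground probabilities, any shape (e.g. D x H x W)
    targets : binary ground-truth mask of the same shape
    lam     : weight on the focal term (illustrative value, not from the paper)
    gamma   : focal-loss focusing parameter (commonly 2)
    """
    p = probs.ravel().astype(np.float64)
    g = targets.ravel().astype(np.float64)

    # Soft Dice loss: 1 - 2|P.G| / (|P| + |G|); eps avoids division by zero
    dice = (2.0 * np.sum(p * g) + eps) / (np.sum(p) + np.sum(g) + eps)
    dice_loss = 1.0 - dice

    # Focal loss: (1 - p_t)^gamma down-weights voxels the model already
    # classifies confidently, so training focuses on hard (often small) regions
    p_t = np.where(g > 0.5, p, 1.0 - p)
    focal = -np.mean((1.0 - p_t) ** gamma * np.log(p_t + eps))

    return dice_loss + lam * focal
```

A perfect prediction drives both terms toward zero, while the focal term grows fastest on confidently wrong voxels, which is why such a combination helps with very small structures like the optic chiasm that contribute few voxels to the Dice term alone.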
RESULTS: We collected 261 HaN CT images to train AnatomyNet and used MICCAI Head and Neck Auto Segmentation Challenge 2015 as a benchmark dataset to evaluate the performance of AnatomyNet. The objective is to segment nine anatomies: brain stem, chiasm, mandible, optic nerve left, optic nerve right, parotid gland left, parotid gland right, submandibular gland left, and submandibular gland right. Compared to previous state-of-the-art results from the MICCAI 2015 competition, AnatomyNet increases Dice similarity coefficient by 3.3% on average. AnatomyNet takes about 0.12 s to fully segment a head and neck CT image of dimension 178 × 302 × 225, significantly faster than previous methods. In addition, the model is able to process whole-volume CT images and delineate all OARs in one pass, requiring little pre- or postprocessing.
CONCLUSION: Deep learning models offer a feasible solution to the problem of delineating OARs from CT images. We demonstrate that our proposed model can improve segmentation accuracy and simplify the autosegmentation pipeline. With this method, it is possible to delineate OARs of a head and neck CT within a fraction of a second.
© 2018 American Association of Physicists in Medicine.

Keywords:  U-Net; automated anatomy segmentation; deep learning; head and neck cancer; radiation therapy

Year:  2018        PMID: 30480818     DOI: 10.1002/mp.13300

Source DB:  PubMed          Journal:  Med Phys        ISSN: 0094-2405            Impact factor:   4.071


Related articles:  59 in total

1.  Improving accuracy and robustness of deep convolutional neural network based thoracic OAR segmentation.

Authors:  Xue Feng; Mark E Bernard; Thomas Hunter; Quan Chen
Journal:  Phys Med Biol       Date:  2020-03-31       Impact factor: 3.609

2.  Automated pulmonary nodule detection in CT images using 3D deep squeeze-and-excitation networks.

Authors:  Li Gong; Shan Jiang; Zhiyong Yang; Guobin Zhang; Lu Wang
Journal:  Int J Comput Assist Radiol Surg       Date:  2019-04-26       Impact factor: 2.924

Review 3.  Head and Neck Cancer Adaptive Radiation Therapy (ART): Conceptual Considerations for the Informed Clinician.

Authors:  Jolien Heukelom; Clifton David Fuller
Journal:  Semin Radiat Oncol       Date:  2019-07       Impact factor: 5.934

4.  Anatomically consistent CNN-based segmentation of organs-at-risk in cranial radiotherapy.

Authors:  Pawel Mlynarski; Hervé Delingette; Hamza Alghamdi; Pierre-Yves Bondiau; Nicholas Ayache
Journal:  J Med Imaging (Bellingham)       Date:  2020-02-13

5.  Attention-Enriched Deep Learning Model for Breast Tumor Segmentation in Ultrasound Images.

Authors:  Aleksandar Vakanski; Min Xian; Phoebe E Freer
Journal:  Ultrasound Med Biol       Date:  2020-07-21       Impact factor: 2.998

6.  A slice classification model-facilitated 3D encoder-decoder network for segmenting organs at risk in head and neck cancer.

Authors:  Shuming Zhang; Hao Wang; Suqing Tian; Xuyang Zhang; Jiaqi Li; Runhong Lei; Mingze Gao; Chunlei Liu; Li Yang; Xinfang Bi; Linlin Zhu; Senhua Zhu; Ting Xu; Ruijie Yang
Journal:  J Radiat Res       Date:  2021-01-01       Impact factor: 2.724

7.  Fully Automatic Volume Measurement of the Spleen at CT Using Deep Learning.

Authors:  Gabriel E Humpire-Mamani; Joris Bukala; Ernst T Scholten; Mathias Prokop; Bram van Ginneken; Colin Jacobs
Journal:  Radiol Artif Intell       Date:  2020-07-22

Review 8.  Artificial intelligence and machine learning for medical imaging: A technology review.

Authors:  Ana Barragán-Montero; Umair Javaid; Gilmer Valdés; Dan Nguyen; Paul Desbordes; Benoit Macq; Siri Willems; Liesbeth Vandewinckele; Mats Holmström; Fredrik Löfman; Steven Michiels; Kevin Souris; Edmond Sterpin; John A Lee
Journal:  Phys Med       Date:  2021-05-09       Impact factor: 2.685

Review 9.  A review of deep learning based methods for medical image multi-organ segmentation.

Authors:  Yabo Fu; Yang Lei; Tonghe Wang; Walter J Curran; Tian Liu; Xiaofeng Yang
Journal:  Phys Med       Date:  2021-05-13       Impact factor: 2.685

10.  Impact of slice thickness, pixel size, and CT dose on the performance of automatic contouring algorithms.

Authors:  Kai Huang; Dong Joo Rhee; Rachel Ger; Rick Layman; Jinzhong Yang; Carlos E Cardenas; Laurence E Court
Journal:  J Appl Clin Med Phys       Date:  2021-03-29       Impact factor: 2.102

