Yang Zhong1,2,3, Yanju Yang1,2,3, Yingtao Fang1,2,3, Jiazhou Wang1,2,3, Weigang Hu1,2,3.
Abstract
PURPOSE: While artificial intelligence has shown great promise in auto-segmentation of organs at risk (OARs) for head and neck cancer (HNC) radiotherapy, achieving clinical acceptance of this technology in routine real-world practice remains a challenge. The purpose of this study was to validate a U-Net-based fully convolutional neural network (CNN) for the automatic delineation of HNC OARs, focusing on clinical implementation and evaluation.
Keywords: auto segmentation; clinical evaluation; deep learning; head and neck cancer; organs at risk
Year: 2021 PMID: 34026615 PMCID: PMC8132944 DOI: 10.3389/fonc.2021.638197
Source DB: PubMed Journal: Front Oncol ISSN: 2234-943X Impact factor: 6.244
Figure 1. Schematic of the study design, showing the timeline and the details of each step.
Figure 2. (A) The main obstacles to HNC OAR delineation (to present the questionnaire data, each item was assigned a value according to its ranking within each issue, so that an average score per item could be computed); (B) The time consumed for manual contouring versus modification of auto-contours.
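The Figure 2A legend describes converting each respondent's ranking of delineation obstacles into per-item values and averaging them. A minimal sketch of one such rank-to-score scheme is below; the item names, the linear weighting (rank 1 of n items gets n points), and the sample responses are illustrative assumptions, not taken from the paper:

```python
def average_scores(rankings, items):
    """Average score per item from a list of per-respondent rankings.

    Each ranking is an ordered list of items, most important first.
    Rank 1 of n items is worth n points, rank n is worth 1 point.
    """
    n = len(items)
    totals = {item: 0 for item in items}
    for ranking in rankings:                       # one ranking per respondent
        for position, item in enumerate(ranking):  # position 0 = most important
            totals[item] += n - position
    return {item: totals[item] / len(rankings) for item in items}

# Hypothetical obstacle list and responses, for illustration only.
items = ["time cost", "inter-observer variability", "guideline ambiguity"]
rankings = [
    ["time cost", "inter-observer variability", "guideline ambiguity"],
    ["inter-observer variability", "time cost", "guideline ambiguity"],
]
scores = average_scores(rankings, items)
```

With these two sample rankings, "time cost" and "inter-observer variability" tie at 2.5 points each, while "guideline ambiguity" averages 1.0.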
Table 1. HD and DSC values of the two evaluation parameters for the segmentation of the 15 OARs.
| OAR | HD* (mm) | DSC* | HD** (mm) | DSC** |
|---|---|---|---|---|
|  | 9.03 ± 0.11 | 0.79 ± 0.04 | 6.36 ± 2.96 | 0.87 ± 0.06 |
|  | 7.91 ± 1.64 | 0.79 ± 0.01 | 9.05 ± 3.05 | 0.80 ± 0.06 |
|  | 27.47 ± 6.99 | 0.72 ± 0.06 | 14.53 ± 6.9 | 0.88 ± 0.073 |
|  | 23.26 ± 6.00 | 0.73 ± 0.06 | 14.28 ± 8.56 | 0.87 ± 0.09 |
|  | 6.13 ± 1.73 | 0.82 ± 0.03 | 3.86 ± 1.66 | 0.93 ± 0.04 |
|  | 5.62 ± 0.93 | 0.83 ± 0.02 | 3.23 ± 1.94 | 0.93 ± 0.05 |
|  | 5.19 ± 0.34 | 0.51 ± 0.09 | 2.75 ± 1.81 | 0.78 ± 0.16 |
|  | 5.06 ± 0.91 | 0.56 ± 0.10 | 3.50 ± 3.30 | 0.71 ± 0.19 |
|  | 23.04 ± 6.55 | 0.53 ± 0.12 | 8.51 ± 5.54 | 0.93 ± 0.07 |
|  | 10.55 ± 2.12 | 0.44 ± 0.06 | 6.72 ± 3.6 | 0.65 ± 0.21 |
|  | 7.82 ± 1.94 | 0.51 ± 0.10 | 6.50 ± 8.9 | 0.69 ± 0.18 |
|  | 12.73 ± 3.40 | 0.79 ± 0.05 | 7.04 ± 4.56 | 0.92 ± 0.07 |
|  | 14.06 ± 4.60 | 0.79 ± 0.04 | 7.3 ± 4.5 | 0.85 ± 0.13 |
|  | 13.01 ± 1.73 | 0.72 ± 0.04 | 8.93 ± 3.55 | 0.84 ± 0.08 |
|  | 9.95 ± 2.63 | 0.46 ± 0.11 |  |  |
*Results of initial model training; **results after model updating. DSC values are reported as fractions (0–1). The OAR names in the first column were lost during extraction.
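The table's two metrics, the Dice similarity coefficient (DSC, overlap between two binary masks) and the Hausdorff distance (HD, worst-case surface-to-surface distance), can be computed as in the generic sketch below. This is not the authors' evaluation code; it assumes binary masks and point coordinates already expressed in millimeters:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (0-1)."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two (N, d) point sets.

    If coordinates are in mm, the result is in mm, as in the table.
    """
    def directed(p, q):
        # Pairwise Euclidean distances (|p| x |q|) via broadcasting:
        # for each point in p take the nearest point in q, then the max.
        d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
        return d.min(axis=1).max()
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

The brute-force pairwise distance matrix is fine for contour point sets of a few thousand points; for large 3D masks, a library routine such as `scipy.spatial.distance.directed_hausdorff` avoids materializing the full matrix.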
Figure 3. Clinicians' evaluation of network performance for the OARs. The left bar shows counts for the auto-segmentation method; the right bar shows counts for manual delineation.
Figure 4. Rates of correct and incorrect classification of contours as human-drawn or automatically generated.
Figure 5. Visualization of the convolutional neural network-based segmentation results for the chiasm (A, B) and oral cavity (C, D). The deep-learning performance for the OARs after model updating is shown in the right column (E, F). Ground-truth segmentations are depicted in red, and auto-segmentations in green.
Figure 6. Box plots of auto-segmentation results for OARs in HNC, reported in terms of the Dice coefficient (22). The red dots mark the Dice coefficients from our second-phase training results.