Chunfeng Lian, Fan Wang, Hannah H Deng, Li Wang, Deqiang Xiao, Tianshu Kuang, Hung-Ying Lin, Jaime Gateno, Steve G F Shen, Pew-Thian Yap, James J Xia, Dinggang Shen.
Abstract
Accurate bone segmentation and anatomical landmark localization are essential tasks in computer-aided surgical simulation for patients with craniomaxillofacial (CMF) deformities. To leverage the complementarity between the two tasks, we propose an efficient end-to-end deep network, i.e., the multi-task dynamic transformer network (DTNet), which concurrently segments CMF bones and localizes large-scale landmarks in one pass from large volumes of cone-beam computed tomography (CBCT) data. Our DTNet was evaluated quantitatively on CBCT scans of patients with CMF deformities. The results demonstrate that our method outperforms other state-of-the-art methods in both bone segmentation and landmark digitization. Our DTNet features three main technical contributions. First, a collaborative two-branch architecture is designed to efficiently capture both fine-grained image details and complete global context for high-resolution volume-to-volume prediction. Second, leveraging anatomical dependencies between landmarks, regionalized dynamic learners (RDLs) are designed, following the concept of "learning to learn," to jointly regress large-scale 3D heatmaps of all landmarks at limited computational cost. Third, adaptive transformer modules (ATMs) are designed for the flexible learning of task-specific feature embeddings from common feature bases.
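The heatmap regression mentioned above rests on a standard encoding: each landmark is represented as a 3D Gaussian heatmap peaked at its voxel position, and a predicted landmark is recovered as the argmax voxel of its heatmap. The sketch below illustrates only this generic encode/decode step, not the paper's DTNet itself; the function names and the sigma value are illustrative assumptions.

```python
import numpy as np

def gaussian_heatmap_3d(shape, center, sigma=2.0):
    """Render a 3D Gaussian heatmap peaked at a landmark voxel (illustrative)."""
    zz, yy, xx = np.meshgrid(
        np.arange(shape[0]), np.arange(shape[1]), np.arange(shape[2]),
        indexing="ij",
    )
    d2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def heatmap_to_landmark(heatmap):
    """Decode a landmark position as the argmax voxel of its heatmap."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

# Round-trip: encode a landmark, then decode it back.
hm = gaussian_heatmap_3d((32, 32, 32), (10, 20, 5))
print(heatmap_to_landmark(hm))  # (10, 20, 5)
```

In a multi-task network such as the one described, one such heatmap channel would be regressed per landmark alongside the segmentation output; the RDLs in the paper share parameters across landmark regions to keep this tractable at large landmark counts.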
Keywords: Craniomaxillofacial (CMF); Landmark localization; Multi-task learning; Segmentation
Year: 2020 PMID: 34935006 PMCID: PMC8687703 DOI: 10.1007/978-3-030-59719-1_78
Source DB: PubMed Journal: Med Image Comput Comput Assist Interv