Carrie E Zimmerman1, Pulkit Khandelwal2, Long Xie3, Hyunyeol Lee4, Hee Kwon Song4, Paul A Yushkevich3, Arastoo Vossough5, Scott P Bartlett6, Felix W Wehrli7. 1. Division of Plastic and Reconstructive Surgery, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania. 2. Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania; Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania. 3. Department of Radiology, University of Pennsylvania, 1 Founders Building, 3400 Spruce Street, Philadelphia, PA 19104-4283; Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania. 4. Department of Radiology, University of Pennsylvania, 1 Founders Building, 3400 Spruce Street, Philadelphia, PA 19104-4283; Laboratory for Structural, Physiologic and Functional Imaging, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania. 5. Department of Radiology, University of Pennsylvania, 1 Founders Building, 3400 Spruce Street, Philadelphia, PA 19104-4283; Children's Hospital of Philadelphia, Department of Radiology, Philadelphia, Pennsylvania. 6. Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania; Department of Surgery, University of Pennsylvania, Philadelphia, Pennsylvania. 7. Department of Radiology, University of Pennsylvania, 1 Founders Building, 3400 Spruce Street, Philadelphia, PA 19104-4283; Laboratory for Structural, Physiologic and Functional Imaging, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania. Electronic address: felix.wehrli@pennmedicine.upenn.edu.
Abstract
RATIONALE AND OBJECTIVES: Solid-state MRI has been shown to provide a radiation-free alternative imaging strategy to CT. However, manual image segmentation to produce bone-selective MR-based 3D renderings is time- and labor-intensive, thereby acting as a bottleneck in clinical practice. The objective of this study was to evaluate an automatic multi-atlas segmentation pipeline for cranial vault images, entirely circumventing manual intervention, and to assess concordance of craniometric measurements between pipeline-produced MRI- and CT-based 3D skull renderings.
MATERIALS AND METHODS: Dual-RF, dual-echo, 3D UTE pulse sequence MR data were obtained at 3T on 30 healthy subjects, along with low-dose CT images, between December 2018 and January 2020 for this prospective study. The four-point MRI datasets (two RF pulse widths and two echo times) were combined to produce bone-specific images. CT images were thresholded and manually corrected to segment the cranial vault. CT images were then rigidly registered to MRI using mutual information, and the corresponding cranial vault segmentations were transformed to MRI space. These "ground truth" segmentations served as reference for the MR images. Subsequently, an automated multi-atlas pipeline was used to segment the bone-selective images. To compare manually and automatically segmented MR images, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were computed, and craniometric measurements between CT- and automated-pipeline MRI-based segmentations were examined via Lin's concordance coefficient (LCC).
RESULTS: Automated segmentation obviated the need for an expert to perform segmentation. Average DSC was 90.86 ± 1.94%, and average 95th-percentile HD was 1.65 ± 0.44 mm between ground truth and automated segmentations. MR-based measurements differed from CT-based measurements by 0.73-1.2 mm on key craniometric measurements. LCCs between CT- and MR-based landmark distances were 0.906 (vertex-basion), 0.780 (left-right frontozygomatic suture), and 0.956 (glabella-opisthocranium).
CONCLUSION: Good agreement between CT- and automated MR-based 3D cranial vault renderings was achieved, thereby eliminating the laborious manual segmentation process. Target applications comprise craniofacial surgery as well as imaging of traumatic injuries and masses involving both bone and soft tissue.
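For context, the two overlap metrics reported above can be computed from binary segmentation masks roughly as sketched below. This is an illustrative NumPy/SciPy implementation, not the authors' pipeline; the function names, the boundary-extraction approach (mask minus its erosion), and the isotropic default voxel spacing are assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice(a, b):
    """Dice similarity coefficient between two binary masks, in percent."""
    a, b = a.astype(bool), b.astype(bool)
    return 200.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_points(mask, spacing=(1.0, 1.0, 1.0)):
    """Physical coordinates (mm) of boundary voxels: mask minus its erosion."""
    m = mask.astype(bool)
    border = m & ~binary_erosion(m)
    return np.argwhere(border) * np.asarray(spacing)

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance between mask surfaces, in mm."""
    pa = surface_points(a, spacing)
    pb = surface_points(b, spacing)
    d = cdist(pa, pb)  # pairwise distances between the two surfaces
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

The `spacing` argument maps voxel indices to millimeters, so `hd95` is comparable to the 1.65 ± 0.44 mm figure reported in the abstract; the brute-force `cdist` is fine for small surfaces but would need a KD-tree for full-resolution skull masks.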
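Lin's concordance coefficient, used above to compare CT- and MR-based landmark distances, can be sketched as follows. This is a minimal illustrative implementation of the standard population-moment form of the statistic; the function name is an assumption.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    Unlike Pearson's r, it penalizes both location and scale shifts,
    so it measures agreement, not just linear association.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))  # population covariance
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Perfectly agreeing measurement pairs give a CCC of 1, while a constant offset between the two modalities pulls the value below 1 even when the correlation is perfect.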