Jordi Minnema1, Maureen van Eijnatten2, Wouter Kouw3, Faruk Diblen3, Adriënne Mendrik3, Jan Wolff4. 1. Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, de Boelelaan 1117, Amsterdam, the Netherlands. Electronic address: j.minnema@vumc.nl. 2. Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, de Boelelaan 1117, Amsterdam, the Netherlands; Centrum Wiskunde & Informatica (CWI), Science Park 123, Amsterdam, the Netherlands. 3. Netherlands eScience Center, Science Park 140, Amsterdam, the Netherlands. 4. Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovation Lab, Amsterdam Movement Sciences, de Boelelaan 1117, Amsterdam, the Netherlands; Department of Oral and Maxillofacial Surgery, Division for Regenerative Orofacial Medicine, University Hospital Hamburg-Eppendorf, Hamburg, Germany.
Abstract
BACKGROUND: The most tedious and time-consuming task in medical additive manufacturing (AM) is image segmentation. The aim of the present study was to develop and train a convolutional neural network (CNN) for bone segmentation in computed tomography (CT) scans.
METHOD: The CNN was trained with CT scans acquired using six different scanners. Standard tessellation language (STL) models of 20 patients who had previously undergone craniotomy and cranioplasty using additively manufactured skull implants served as "gold standard" models during CNN training. The CNN segmented all patient CT scans using a leave-2-out scheme. All segmented CT scans were converted into STL models and geometrically compared with the gold standard STL models.
RESULTS: The CT scans segmented using the CNN demonstrated a large overlap with the gold standard segmentation, with a mean Dice similarity coefficient of 0.92 ± 0.04. The CNN-based STL models demonstrated mean surface deviations ranging between -0.19 mm ± 0.86 mm and 1.22 mm ± 1.75 mm when compared to the gold standard STL models. No major differences were observed between the mean deviations of the CNN-based STL models acquired using the six different CT scanners.
CONCLUSIONS: The fully automated CNN was able to accurately segment the skull. CNNs thus offer the opportunity to remove the current prohibitive barriers of time and effort during CT image segmentation, making patient-specific AM constructs more accessible.
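The Dice similarity coefficient reported in the results (0.92 ± 0.04) measures the volumetric overlap between the CNN segmentation and the gold standard segmentation: Dice = 2|A ∩ B| / (|A| + |B|). For binary voxel masks it can be computed as in the following minimal NumPy sketch (the function name and empty-mask convention are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement by convention
        return 1.0
    return 2.0 * intersection / denom
```

A value of 1.0 indicates identical masks; 0.0 indicates no overlapping voxels at all.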