Nathan Lampen1, Daeseung Kim2, Xi Fang1, Xuanang Xu1, Tianshu Kuang2, Hannah H Deng2, Joshua C Barber2, Jamie Gateno2,3, James Xia4,5, Pingkun Yan6. 1. Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA. 2. Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA. 3. Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, 10021, USA. 4. Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA. jxia@houstonmethodist.org. 5. Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, 10021, USA. jxia@houstonmethodist.org. 6. Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA. yanp2@rpi.edu.
Abstract
PURPOSE: Orthognathic surgery requires an accurate surgical plan of how bony segments are moved and how the face passively responds to the bony movement. Currently, the finite element method (FEM) is the standard for predicting facial deformation. Deep learning models have recently been used to approximate FEM because of their faster simulation speed. However, current solutions are not compatible with detailed facial meshes and often do not explicitly provide the network with known boundary type information. Therefore, the purpose of this proof-of-concept study is to develop a biomechanics-informed deep neural network that accepts point cloud data and explicit boundary types as inputs for fast prediction of soft-tissue deformation.

METHODS: A deep learning network was developed based on the PointNet++ architecture. The network accepts the starting facial mesh, the input displacement, and explicit boundary type information, and predicts the final facial mesh deformation.

RESULTS: We trained and tested our deep learning model on datasets created from FEM simulations of facial meshes. Our model achieved mean errors between 0.159 and 0.642 mm across five subjects. Including explicit boundary types had mixed results, improving performance in simulations with large deformations but decreasing performance in simulations with small deformations. These results suggest that including explicit boundary types may not be necessary to improve network performance.

CONCLUSION: Our deep learning method can approximate FEM for facial change prediction in orthognathic surgical planning by accepting geometrically detailed meshes and explicit boundary types while significantly reducing simulation time.
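The abstract describes the network's inputs as the starting facial mesh (a point cloud), a prescribed displacement, and an explicit boundary type per point. One plausible way to encode this for a PointNet++-style network is to concatenate, for each point, its coordinates, its prescribed displacement, and a one-hot boundary type label. The sketch below is an illustrative assumption, not the paper's actual preprocessing; the function name, the label set (free / fixed / moving), and the feature layout are all hypothetical.

```python
import numpy as np

def build_point_features(points, displacements, boundary_types, num_types=3):
    """Assemble per-point input features for a PointNet++-style network.

    points         -- (N, 3) starting facial mesh vertex coordinates
    displacements  -- (N, 3) prescribed displacement per point
                      (zero for points not driven by a bony movement)
    boundary_types -- (N,) integer label per point; here illustratively
                      0 = free surface, 1 = fixed, 2 = moving boundary
    Returns an (N, 6 + num_types) feature array: xyz + dxdydz + one-hot.
    """
    one_hot = np.eye(num_types)[boundary_types]           # (N, num_types)
    return np.concatenate([points, displacements, one_hot], axis=1)

# tiny example: four points, one of which is moved 2 mm along z
pts = np.zeros((4, 3))
disp = np.zeros((4, 3))
disp[3] = [0.0, 0.0, 2.0]
btype = np.array([0, 0, 1, 2])
feats = build_point_features(pts, disp, btype)
print(feats.shape)  # (4, 9)
```

The network would then regress a per-point output displacement from these features; dropping the one-hot columns gives the "no explicit boundary type" variant compared in the abstract.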