Fengxia Yan, Jayaram K Udupa, Yubing Tong, Guoping Xu, Dewey Odhner, Drew A Torigian.
Abstract
The recently developed body-wide Automatic Anatomy Recognition (AAR) methodology depends on fuzzy modeling of individual objects, arranging objects hierarchically, constructing an anatomy ensemble of these models, and a dichotomous object recognition-delineation process. The parent-to-offspring spatial relationship in the object hierarchy is crucial in the AAR method. We have found this relationship to be quite complex, and so any improvement in capturing this relationship information in the anatomy model will improve the recognition process itself. Currently, the method encodes this relationship based on the layout of the geometric centers of the objects. Motivated by the concept of virtual landmarks (VLs), this paper presents a new one-shot AAR recognition method that uses VLs to learn object relationships by training a neural network to predict the pose and the VLs of an offspring object given the VLs of the parent object in the hierarchy. We set up two neural networks for each parent-offspring object pair in a body region, one to predict the offspring's VLs and the other to predict its pose parameters. The VL-based learning/prediction method is evaluated on two object hierarchies involving 14 objects, using 54 computed tomography (CT) image data sets of head and neck cancer patients and the associated object contours drawn by dosimetrists for routine radiation therapy treatment planning. The VL neural network method is found to yield more accurate object localization than the currently used simple AAR method.
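A minimal sketch of the paired prediction networks the abstract describes, written in PyTorch. This is not the authors' implementation: the number of VLs per object, the 6-parameter pose vector, the network architecture, and the training setup are all illustrative assumptions, since the abstract does not specify them; only the "two networks per parent-offspring pair" structure comes from the text.

```python
# Sketch, assuming K virtual landmarks in 3-D per object and a
# 6-parameter pose (e.g., 3 translations + 3 scales); both are
# guesses, as the abstract does not give these details.
import torch
import torch.nn as nn

K = 8                  # assumed number of virtual landmarks per object
VL_DIM = 3 * K         # flattened (x, y, z) coordinates of the K landmarks
POSE_DIM = 6           # assumed pose parameterization

def mlp(in_dim, out_dim, hidden=64):
    """Small fully connected regressor (architecture is illustrative)."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# One network pair per parent-offspring edge in the object hierarchy:
# parent VLs -> offspring VLs, and parent VLs -> offspring pose.
vl_net = mlp(VL_DIM, VL_DIM)
pose_net = mlp(VL_DIM, POSE_DIM)

def train_pair(parent_vls, child_vls, child_poses, epochs=200, lr=1e-3):
    """Fit both regressors on (parent VL, offspring VL/pose) examples
    gathered from the training image set."""
    opt = torch.optim.Adam(
        list(vl_net.parameters()) + list(pose_net.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = (loss_fn(vl_net(parent_vls), child_vls)
                + loss_fn(pose_net(parent_vls), child_poses))
        loss.backward()
        opt.step()

# Stand-in random data with 54 samples, matching the size of the CT set
# reported in the abstract (real inputs would be VLs extracted from the
# training contours).
parent = torch.randn(54, VL_DIM)
train_pair(parent, torch.randn(54, VL_DIM), torch.randn(54, POSE_DIM))
predicted_vls = vl_net(parent[:1])    # localize offspring from parent VLs
predicted_pose = pose_net(parent[:1])
```

Splitting VL prediction and pose prediction into two separate regressors, as the abstract indicates, lets each network specialize: one captures the landmark layout of the offspring object, the other its placement parameters.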
Keywords: Virtual landmarks; computed tomography (CT); head and neck; neural network learning; object recognition
Year: 2018 PMID: 30190628 PMCID: PMC6122856 DOI: 10.1117/12.2293700
Source DB: PubMed Journal: Proc SPIE Int Soc Opt Eng ISSN: 0277-786X