| Literature DB >> 27104497 |
Yunliang Cai1, Mark Landis2, David T Laidley2, Anat Kornecki2, Andrea Lum2, Shuo Li3.
Abstract
Automatic vertebra recognition, including the identification of vertebra locations and their naming in multiple image modalities, is in high demand in spinal clinical diagnosis, where large amounts of imaging data from various modalities are frequently and interchangeably used. However, recognition is challenging due to variations in MR/CT appearance and in the shape and pose of the vertebrae. In this paper, we propose a method for multi-modal vertebra recognition using a novel deep learning architecture called the Transformed Deep Convolution Network (TDCN). This new architecture can fuse image features from different modalities in an unsupervised manner and automatically rectify the pose of the vertebrae. The fusion of MR and CT image features improves the discriminative power of the feature representation and enhances the invariance of the vertebra pattern, which allows us to automatically process images of different contrast, resolution, and protocols, even with different sizes and orientations. The feature fusion and pose rectification are naturally incorporated in a multi-layer deep learning network. Experimental results show that our method outperforms existing detection methods and provides fully automatic location, naming, and pose recognition for routine clinical practice.
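The abstract describes fusing MR and CT image features into one joint representation. The paper's actual fusion is learned unsupervisedly inside the TDCN; as a rough illustration only, the following minimal NumPy sketch shows the generic concatenate-and-project style of multi-modal feature fusion. All array names, dimensions, and the random projection here are hypothetical stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patch features from the two modalities (shapes are illustrative):
# 16 vertebra patches, each with a 64-dim MR feature and a 64-dim CT feature.
mr_feat = rng.standard_normal((16, 64))
ct_feat = rng.standard_normal((16, 64))

# Generic fusion: concatenate modality features, then apply a projection.
# In TDCN this mapping would be learned (unsupervised); here it is a random
# matrix standing in for the learned fusion layer.
W = rng.standard_normal((128, 32)) * 0.1
fused = np.concatenate([mr_feat, ct_feat], axis=1) @ W

print(fused.shape)  # one 32-dim fused feature per patch
```

A fused representation like this is what lets a single downstream detector score vertebra candidates regardless of which modality the patch came from.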
Keywords: Convolution network; Deep learning; Vertebra detection; Vertebra recognition
Year: 2016 PMID: 27104497 DOI: 10.1016/j.compmedimag.2016.02.002
Source DB: PubMed Journal: Comput Med Imaging Graph ISSN: 0895-6111 Impact factor: 4.790