| Literature DB >> 27497072 |
Thijs Kooi¹, Geert Litjens², Bram van Ginneken², Albert Gubern-Mérida², Clara I Sánchez², Ritse Mann², Ard den Heeten³, Nico Karssemeijer².
Abstract
Recent advances in machine learning have yielded new techniques for training deep neural networks, resulting in highly successful applications in many pattern recognition tasks such as object detection and speech recognition. In this paper we provide a head-to-head comparison between a state-of-the-art mammography CAD system, relying on a manually designed feature set, and a Convolutional Neural Network (CNN), aiming for a system that can ultimately read mammograms independently. Both systems are trained on a large data set of around 45,000 images, and results show the CNN outperforms the traditional CAD system at low sensitivity and performs comparably at high sensitivity. We subsequently investigate to what extent features such as location and patient information, as well as commonly used manual features, can still complement the network, and see improvements at high specificity over the CNN, especially with location and context features, which contain information not available to the CNN. Additionally, a reader study was performed in which the network was compared to certified screening radiologists on a patch level, and we found no significant difference between the network and the readers.
Keywords: Breast cancer; Computer aided detection; Convolutional neural networks; Deep learning; Machine learning; Mammography
Year: 2016 PMID: 27497072 DOI: 10.1016/j.media.2016.07.007
Source DB: PubMed Journal: Med Image Anal ISSN: 1361-8415 Impact factor: 8.545