Clinically applicable deep learning for diagnosis and referral in retinal disease
Jeffrey De Fauw1, Joseph R Ledsam1, Bernardino Romera-Paredes1, Stanislav Nikolov1, Nenad Tomasev1, Sam Blackwell1, Harry Askham1, Xavier Glorot1, Brendan O'Donoghue1, Daniel Visentin1, George van den Driessche1, Balaji Lakshminarayanan1, Clemens Meyer1, Faith Mackinder1, Simon Bouton1, Kareem Ayoub1, Reena Chopra2, Dominic King1, Alan Karthikesalingam1, Cían O Hughes1,3, Rosalind Raine3, Julian Hughes2, Dawn A Sim2, Catherine Egan2, Adnan Tufail2, Hugh Montgomery3, Demis Hassabis1, Geraint Rees3, Trevor Back1, Peng T Khaw2, Mustafa Suleyman1, Julien Cornebise1,3, Pearse A Keane4, Olaf Ronneberger5.
Abstract
The volume and complexity of diagnostic imaging is increasing at a pace faster than the availability of human expertise to interpret it. Artificial intelligence has shown great promise in classifying two-dimensional photographs of some common diseases and typically relies on databases of millions of annotated images. Until now, the challenge of reaching the performance of expert clinicians in a real-world clinical pathway with three-dimensional diagnostic scans has remained unsolved. Here, we apply a novel deep learning architecture to a clinically heterogeneous set of three-dimensional optical coherence tomography scans from patients referred to a major eye hospital. We demonstrate performance in making a referral recommendation that reaches or exceeds that of experts on a range of sight-threatening retinal diseases after training on only 14,884 scans. Moreover, we demonstrate that the tissue segmentations produced by our architecture act as a device-independent representation; referral accuracy is maintained when using tissue segmentations from a different type of device. Our work removes previous barriers to wider clinical use without prohibitive training data requirements across multiple pathologies in a real-world setting.
Year: 2018 PMID: 30104768 DOI: 10.1038/s41591-018-0107-6
Source DB: PubMed Journal: Nat Med ISSN: 1078-8956 Impact factor: 53.440
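The abstract's central architectural idea, a segmentation stage whose tissue map serves as a device-independent interface to a separate classification stage, can be sketched as follows. This is a minimal illustration only: the function names, tissue-class count, and the placeholder thresholding logic are assumptions for demonstration, not the paper's networks.

```python
import numpy as np

# Illustrative two-stage pipeline: raw OCT volume -> tissue map -> referral.
# All logic below is a stand-in sketch, not the published architecture.

N_TISSUE_CLASSES = 15  # assumed number of tissue/pathology classes
REFERRALS = ["observation", "routine", "semi-urgent", "urgent"]

def segment(volume: np.ndarray) -> np.ndarray:
    """Stand-in for the segmentation network: maps a raw 3-D OCT volume
    (depth, height, width) to a per-voxel tissue-class index map."""
    # Placeholder: bin voxel intensities into tissue-class indices.
    bins = np.linspace(volume.min(), volume.max(), N_TISSUE_CLASSES + 1)
    return np.clip(np.digitize(volume, bins) - 1, 0, N_TISSUE_CLASSES - 1)

def classify(segmentation: np.ndarray) -> str:
    """Stand-in for the classification network: maps the device-independent
    tissue map to a referral recommendation."""
    # Placeholder rule: urgency scales with the fraction of high-index voxels.
    frac = (segmentation >= N_TISSUE_CLASSES - 3).mean()
    return REFERRALS[min(int(frac * len(REFERRALS)), len(REFERRALS) - 1)]

def refer(volume: np.ndarray) -> str:
    # Because classify() only ever sees the tissue map, supporting a new
    # scanner type only requires adapting segment(), not the classifier --
    # the device-independence property the abstract describes.
    return classify(segment(volume))

rng = np.random.default_rng(0)
scan = rng.normal(size=(16, 64, 64))  # toy stand-in for an OCT volume
print(refer(scan))
```

The design point being illustrated is the decoupling: the classifier's input space is the segmentation map, so retraining for a new imaging device touches only the first stage.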