
Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study.

Livia Faes, Siegfried K Wagner, Dun Jack Fu, Xiaoxuan Liu, Edward Korot, Joseph R Ledsam, Trevor Back, Reena Chopra, Nikolas Pontikos, Christoph Kern, Gabriella Moraes, Martin K Schmid, Dawn Sim, Konstantinos Balaskas, Lucas M Bachmann, Alastair K Denniston, Pearse A Keane.

Abstract

BACKGROUND: Deep learning has the potential to transform health care; however, substantial expertise is required to train such models. We sought to evaluate the utility of automated deep learning software for the development of medical image diagnostic classifiers by health-care professionals with no coding and no deep learning expertise.
METHODS: We used five publicly available open-source datasets: retinal fundus images (MESSIDOR); optical coherence tomography (OCT) images (Guangzhou Medical University and Shiley Eye Institute, version 3); images of skin lesions (Human Against Machine [HAM] 10000); and both paediatric and adult chest x-ray (CXR) images (Guangzhou Medical University and Shiley Eye Institute, version 3, and the National Institutes of Health [NIH] dataset, respectively). Each dataset was fed separately into a neural architecture search framework, hosted through Google Cloud AutoML, that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models; discriminative performance was assessed using the area under the precision-recall curve (AUPRC). For the deep learning model developed on a subset of the HAM10000 dataset, we did external validation using the Edinburgh Dermofit Library dataset.
FINDINGS: Diagnostic properties and discriminative performance from internal validations were high in the binary classification tasks (sensitivity 73·3-97·0%; specificity 67-100%; AUPRC 0·87-1·00). In the multiple classification tasks, the diagnostic properties ranged from 38% to 100% for sensitivity and from 67% to 100% for specificity. The discriminative performance in terms of AUPRC ranged from 0·57 to 1·00 in the five automated deep learning models. In an external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0·47, with a sensitivity of 49% and a positive predictive value of 52%.
INTERPRETATION: All models, except the automated deep learning model trained on the multilabel classification task of the NIH CXR14 dataset, showed discriminative performance and diagnostic properties comparable to state-of-the-art deep learning algorithms. Performance in the external validation study was low. The quality of the open-access datasets (including insufficient information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, were the major limitations of this study. The availability of automated deep learning platforms provides an opportunity for the medical community to enhance its understanding of model development and evaluation. Although deriving classification models without a deep understanding of the underlying mathematical, statistical, and programming principles is attractive, performance comparable to expertly designed models is limited to more elementary classification tasks. Furthermore, care should be taken to adhere to ethical principles when using these automated models, to avoid discrimination and harm. Future studies should compare several application programming interfaces on thoroughly curated datasets.
FUNDING: National Institute for Health Research and Moorfields Eye Charity.
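The diagnostic metrics reported above can be computed directly from a model's predictions. A minimal sketch in plain Python follows; the labels and scores are invented for illustration, not taken from the study, and AUPRC is summarised here as average precision (one common estimator of the area under the precision-recall curve; the abstract does not state which estimator was used).

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and PPV from hard (thresholded) predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),  # recall: share of true cases detected
        "specificity": tn / (tn + fp),  # share of healthy cases ruled out
        "ppv": tp / (tp + fp),          # precision: share of positive calls correct
    }

def average_precision(y_true, y_score):
    """Average precision: precision at each true positive, ranked by score."""
    order = sorted(range(len(y_score)), key=lambda i: -y_score[i])
    n_pos, tp, ap = sum(y_true), 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            tp += 1
            ap += tp / rank
    return ap / n_pos

# Illustrative data: ground-truth labels, thresholded predictions, probabilities.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3]

metrics = binary_metrics(y_true, y_pred)
metrics["auprc"] = average_precision(y_true, y_score)
```

On this toy data, sensitivity, specificity, and PPV all equal 0.75 (3 of 4 cases in each cell of the confusion matrix are handled correctly).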
Copyright © 2019 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.


Year:  2019        PMID: 33323271     DOI: 10.1016/S2589-7500(19)30108-6

Source DB:  PubMed          Journal:  Lancet Digit Health        ISSN: 2589-7500


  27 in total

Review 1.  Deep learning for ultra-widefield imaging: a scoping review.

Authors:  Nishaant Bhambra; Fares Antaki; Farida El Malt; AnQi Xu; Renaud Duval
Journal:  Graefes Arch Clin Exp Ophthalmol       Date:  2022-07-20       Impact factor: 3.535

2.  Hands-on with IBM Visual Insights.

Authors:  Shirui Luo; Volodymyr Kindratenko
Journal:  Comput Sci Eng       Date:  2020-08-14       Impact factor: 2.152

3.  Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: a comparative study.

Authors:  Ka Wing Wan; Chun Hoi Wong; Ho Fung Ip; Dejian Fan; Pak Leung Yuen; Hoi Ying Fong; Michael Ying
Journal:  Quant Imaging Med Surg       Date:  2021-04

4.  Artificial Intelligence for Understanding Imaging, Text, and Data in Gastroenterology.

Authors:  Ryan W Stidham
Journal:  Gastroenterol Hepatol (N Y)       Date:  2020-07

5.  Deep Learning-Based Image Classification in Differentiating Tufted Astrocytes, Astrocytic Plaques, and Neuritic Plaques.

Authors:  Shunsuke Koga; Nikhil B Ghayal; Dennis W Dickson
Journal:  J Neuropathol Exp Neurol       Date:  2021-03-22       Impact factor: 3.685

6.  A method for utilizing automated machine learning for histopathological classification of testis based on Johnsen scores.

Authors:  Yurika Ito; Mami Unagami; Fumito Yamabe; Yozo Mitsui; Koichi Nakajima; Koichi Nagao; Hideyuki Kobayashi
Journal:  Sci Rep       Date:  2021-05-10       Impact factor: 4.379

7.  Predicting sex from retinal fundus photographs using automated deep learning.

Authors:  Edward Korot; Nikolas Pontikos; Xiaoxuan Liu; Siegfried K Wagner; Livia Faes; Josef Huemer; Konstantinos Balaskas; Alastair K Denniston; Anthony Khawaja; Pearse A Keane
Journal:  Sci Rep       Date:  2021-05-13       Impact factor: 4.379

Review 8.  Surgical data science - from concepts toward clinical translation.

Authors:  Lena Maier-Hein; Matthias Eisenmann; Duygu Sarikaya; Keno März; Toby Collins; Anand Malpani; Johannes Fallert; Hubertus Feussner; Stamatia Giannarou; Pietro Mascagni; Hirenkumar Nakawala; Adrian Park; Carla Pugh; Danail Stoyanov; Swaroop S Vedula; Kevin Cleary; Gabor Fichtinger; Germain Forestier; Bernard Gibaud; Teodor Grantcharov; Makoto Hashizume; Doreen Heckmann-Nötzel; Hannes G Kenngott; Ron Kikinis; Lars Mündermann; Nassir Navab; Sinan Onogur; Tobias Roß; Raphael Sznitman; Russell H Taylor; Minu D Tizabi; Martin Wagner; Gregory D Hager; Thomas Neumuth; Nicolas Padoy; Justin Collins; Ines Gockel; Jan Goedeke; Daniel A Hashimoto; Luc Joyeux; Kyle Lam; Daniel R Leff; Amin Madani; Hani J Marcus; Ozanan Meireles; Alexander Seitel; Dogu Teber; Frank Ückert; Beat P Müller-Stich; Pierre Jannin; Stefanie Speidel
Journal:  Med Image Anal       Date:  2021-11-18       Impact factor: 13.828

9.  Impact of Artificial Intelligence on Medical Education in Ophthalmology.

Authors:  Nita G Valikodath; Emily Cole; Daniel S W Ting; J Peter Campbell; Louis R Pasquale; Michael F Chiang; R V Paul Chan
Journal:  Transl Vis Sci Technol       Date:  2021-06-01       Impact factor: 3.283

Review 10.  Radiogenomics of lung cancer.

Authors:  Chi Wah Wong; Ammar Chaudhry
Journal:  J Thorac Dis       Date:  2020-09       Impact factor: 3.005

