Titus J Brinker1, Achim Hekler2, Alexander H Enk3, Joachim Klode4, Axel Hauschild5, Carola Berking6, Bastian Schilling7, Sebastian Haferkamp8, Dirk Schadendorf4, Stefan Fröhling2, Jochen S Utikal9, Christof von Kalle2. 1. National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Department of Dermatology, University Hospital Heidelberg, Heidelberg, Germany. Electronic address: titus.brinker@dkfz.de. 2. National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany. 3. Department of Dermatology, University Hospital Heidelberg, Heidelberg, Germany. 4. Department of Dermatology, University Hospital Essen, Essen, Germany. 5. Department of Dermatology, University Hospital Kiel, Kiel, Germany. 6. Department of Dermatology, University Hospital Munich (LMU), Munich, Germany. 7. Department of Dermatology, University Hospital Würzburg, Würzburg, Germany. 8. Department of Dermatology, University Hospital Regensburg, Regensburg, Germany. 9. Department of Dermatology, Heidelberg University, Mannheim, Germany; Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany.
Abstract
BACKGROUND: Recent studies have demonstrated the use of convolutional neural networks (CNNs) to classify images of melanoma with accuracies comparable to those achieved by board-certified dermatologists. However, the performance of a CNN trained exclusively on dermoscopic images in a clinical image classification task, in direct competition with a large number of dermatologists, has not been measured to date. This study compares the performance of a convolutional neural network trained exclusively on dermoscopic images for identifying melanoma in clinical photographs with the manual grading of the same images by dermatologists.
METHODS: We compared automatic digital melanoma classification with the performance of 145 dermatologists from 12 German university hospitals. We used methods from enhanced deep learning to train a CNN with 12,378 open-source dermoscopic images. We used 100 clinical images to compare the performance of the CNN to that of the dermatologists. Dermatologists were compared with the deep neural network in terms of sensitivity, specificity and receiver operating characteristics.
FINDINGS: The mean sensitivity and specificity achieved by the dermatologists with clinical images were 89.4% (range: 55.0%-100%) and 64.4% (range: 22.5%-92.5%), respectively. At the same sensitivity, the CNN exhibited a mean specificity of 68.2% (range: 47.5%-86.25%). Among the dermatologists, the attendings showed the highest mean sensitivity, 92.8%, at a mean specificity of 57.7%. At the same high sensitivity of 92.8%, the CNN had a mean specificity of 61.1%.
INTERPRETATION: For the first time, dermatologist-level image classification was achieved on a clinical image classification task without training on clinical images. The CNN had a smaller variance of results, indicating a higher robustness of computer vision compared with human assessment for dermatologic image classification tasks.
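The comparison described in the abstract hinges on two standard metrics and one operating-point choice: sensitivity (the fraction of melanomas correctly flagged), specificity (the fraction of benign lesions correctly cleared), and the CNN's ROC threshold being fixed so that its sensitivity matches the dermatologists' mean sensitivity before specificities are compared. The following is a minimal sketch of that procedure with hypothetical data; the function names and the toy labels/scores are illustrative, not taken from the study.

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP)
    for binary labels (1 = melanoma, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def specificity_at_sensitivity(y_true, scores, target_sens):
    """Sweep classification thresholds over the CNN's melanoma
    probabilities (a point-by-point walk along the ROC curve) and
    return the best specificity whose sensitivity still meets the
    target -- here, the raters' mean sensitivity."""
    best_spec = 0.0
    for thr in sorted(set(scores)):
        preds = [1 if s >= thr else 0 for s in scores]
        sens, spec = sensitivity_specificity(y_true, preds)
        if sens >= target_sens:
            best_spec = max(best_spec, spec)
    return best_spec
```

With this matched-sensitivity comparison, a single specificity number per rater group (e.g. 64.4% for dermatologists vs 68.2% for the CNN at 89.4% sensitivity) summarizes the performance gap at one clinically meaningful operating point.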