Titus J Brinker1, Achim Hekler2, Alexander H Enk3, Joachim Klode4, Axel Hauschild5, Carola Berking6, Bastian Schilling7, Sebastian Haferkamp8, Dirk Schadendorf4, Tim Holland-Letz9, Jochen S Utikal10, Christof von Kalle2. 1. National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Department of Dermatology, University Hospital Heidelberg, Heidelberg, Germany. Electronic address: titus.brinker@dkfz.de. 2. National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany. 3. Department of Dermatology, University Hospital Heidelberg, Heidelberg, Germany. 4. Department of Dermatology, University Hospital Essen, Essen, Germany. 5. Department of Dermatology, University Hospital Kiel, Kiel, Germany. 6. Department of Dermatology, University Hospital Munich (LMU), Munich, Germany. 7. Department of Dermatology, University Hospital Würzburg, Würzburg, Germany. 8. Department of Dermatology, University Hospital Regensburg, Regensburg, Germany. 9. Department of Biostatistics, German Cancer Research Center, Heidelberg, Germany. 10. Department of Dermatology, Heidelberg University, Mannheim, Germany; Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany.
Abstract
BACKGROUND: Recent studies have demonstrated deep-learning algorithms that classify suspicious skin lesions at dermatologist level, but they relied on large proprietary image databases and small numbers of dermatologists. For the first time, the performance of a deep-learning algorithm trained exclusively on open-source images is compared with that of a large number of dermatologists covering all levels of the clinical hierarchy.

METHODS: We used enhanced deep-learning methods to train a convolutional neural network (CNN) on 12,378 open-source dermoscopic images. We then used 100 test images to compare the performance of the CNN with that of 157 dermatologists from 12 university hospitals in Germany. Performance was measured in terms of sensitivity, specificity, and receiver operating characteristics.

FINDINGS: On the dermoscopic images, the dermatologists achieved a mean sensitivity of 74.1% (range 40.0%-100%) and a mean specificity of 60.0% (range 21.3%-91.3%). At the dermatologists' mean sensitivity of 74.1%, the CNN exhibited a mean specificity of 86.5% (range 70.8%-91.3%); at their mean specificity of 60.0%, the CNN achieved a mean sensitivity of 87.5% (range 80%-95%). Among the dermatologists, the chief physicians showed the highest mean specificity, 69.2%, at a mean sensitivity of 73.3%; at the same specificity of 69.2%, the CNN reached a mean sensitivity of 84.5%.

INTERPRETATION: A CNN trained exclusively on open-source images outperformed 136 of the 157 dermatologists, and every level of experience from junior to chief physician, in terms of mean sensitivity and specificity.
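The study's central comparison fixes one operating point of the CNN's ROC curve at the clinicians' mean sensitivity (or specificity) and reads off the other metric. A minimal sketch of that matched comparison is shown below; this is not the authors' code, the function names are hypothetical, and the labels and scores are synthetic (1 = melanoma, 0 = benign nevus).

```python
# Illustrative sketch of a matched sensitivity/specificity comparison,
# as used to compare the CNN against the dermatologists' mean performance.
# All data are synthetic; this is not the study's actual implementation.

def sensitivity_specificity(labels, preds):
    """Compute sensitivity and specificity from binary labels and predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def specificity_at_sensitivity(labels, scores, target_sensitivity):
    """Sweep classification thresholds over the model's melanoma scores and
    return the first operating point (threshold, sensitivity, specificity)
    whose sensitivity reaches the clinicians' mean sensitivity."""
    for t in sorted(set(scores), reverse=True):
        preds = [1 if s >= t else 0 for s in scores]
        sens, spec = sensitivity_specificity(labels, preds)
        if sens >= target_sensitivity:
            return t, sens, spec
    return None

# Synthetic test set: 4 melanomas, 4 benign nevi, with model scores.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.3, 0.7, 0.4, 0.2, 0.1]
print(specificity_at_sensitivity(labels, scores, 0.75))  # → (0.6, 0.75, 0.75)
```

The same sweep run with the roles of sensitivity and specificity exchanged yields the study's second comparison (sensitivity at the clinicians' mean specificity).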