Erik Rodner1,2, Thomas Bocklitz3,4, Ferdinand von Eggeling3,4,5, Günther Ernst5, Olga Chernavskaia4, Jürgen Popp3,4, Joachim Denzler1, Orlando Guntinas-Lichius5. 1. Department of Computer Science, Friedrich Schiller University, Jena, Germany. 2. Corporate Research and Technology, Carl Zeiss AG, Jena, Germany. 3. Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University, Jena, Germany. 4. Leibniz Institute of Photonic Technology, Jena, Germany. 5. Department of Otorhinolaryngology, Jena University Hospital, Jena, Germany.
Abstract
BACKGROUND: A fully convolutional neural network (FCN)-based automated image analysis algorithm was developed to discriminate between head and neck cancer and noncancerous epithelium based on nonlinear microscopic images. METHODS: Head and neck cancer sections were used for standard histopathology and co-registered with multimodal images acquired from the same sections using a combination of coherent anti-Stokes Raman scattering, two-photon excited fluorescence, and second harmonic generation microscopy. The images were analyzed by semantic segmentation with an FCN for four classes: cancer, normal epithelium, background, and other tissue types. RESULTS: A total of 114 images from 12 patients were analyzed. Using patch score aggregation, the average recognition rate and the overall recognition rate for the four classes were 88.9% and 86.7%, respectively. A total of 113 seconds was needed to process a whole-slide image in the dataset. CONCLUSION: Multimodal nonlinear microscopy combined with automated image analysis using an FCN appears to be a promising technique for objective differentiation between head and neck cancer and noncancerous epithelium.
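The abstract reports results obtained with patch score aggregation: the FCN produces per-patch class scores, which are pooled to yield a single label per region. The following is a minimal illustrative sketch of such an aggregation step, not the authors' implementation; the class names follow the four classes named in the abstract, and the averaging rule and score values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of patch score aggregation (not the authors' code).
# An FCN assigns each image patch a score vector over the four classes;
# a region-level label is obtained here by averaging the patch scores
# and taking the class with the highest mean score.

CLASSES = ["cancer", "normal epithelium", "background", "other tissue"]

def aggregate_patch_scores(patch_scores):
    """Average per-patch class scores and return the winning class label.

    patch_scores: array-like of shape (n_patches, n_classes),
    e.g. softmax outputs of a segmentation network.
    """
    mean_scores = np.asarray(patch_scores, dtype=float).mean(axis=0)
    return CLASSES[int(mean_scores.argmax())]

# Example: three patches, each with scores for the four classes
patches = [
    [0.7, 0.1, 0.1, 0.1],
    [0.4, 0.3, 0.2, 0.1],
    [0.6, 0.2, 0.1, 0.1],
]
print(aggregate_patch_scores(patches))  # cancer
```

Mean pooling is only one possible aggregation rule; majority voting over per-patch argmax labels is a common alternative with similar intent.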