| Literature DB >> 35136673 |
Mario Siller1, Lea Maria Stangassinger2, Christina Kreutzer3, Peter Boor4,5, Roman D Bülow4,5, Theo J F Kraus6, Saskia von Stillfried4,5, Soraya Wölfl7, Sébastien Couillard-Després3, Gertie Janneke Oostingh2, Anton Hittmair8, Michael Gadermayr1.
Abstract
BACKGROUND: The fast acquisition process of frozen sections allows surgeons to wait for histological findings during interventions and to base intrasurgical decisions on the outcome of the histology. Compared with paraffin sections, however, the quality of frozen sections is often strongly reduced, leading to lower diagnostic accuracy. Deep neural networks are capable of modifying specific characteristics of digital histological images. In particular, generative adversarial networks have proved to be effective tools for learning translations between two modalities, based only on two unpaired data sets. The positive effects of such deep learning-based image optimization on computer-aided diagnosis have already been shown. However, since fully automated diagnosis is controversial, the application of enhanced images for visual clinical assessment is currently probably of even higher relevance.
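The unpaired translation described in the abstract is typically trained with an adversarial loss plus a consistency term. A minimal NumPy sketch of those two loss terms, using the least-squares GAN formulation and the cycle-consistency weight of 10 that are common CycleGAN defaults (not values confirmed by this paper):

```python
import numpy as np

def lsgan_loss(d_out, target):
    """Least-squares adversarial loss: pushes discriminator
    outputs d_out toward `target` (1.0 = real, 0.0 = fake)."""
    return float(np.mean((d_out - target) ** 2))

def cycle_consistency_loss(x, x_reconstructed, lam=10.0):
    """L1 penalty keeping frozen -> paraffin -> frozen round trips
    close to the input; lam is the usual CycleGAN weighting."""
    return float(lam * np.mean(np.abs(x - x_reconstructed)))

# Toy tensors standing in for an image patch and a patch-wise
# discriminator output grid.
patch = np.random.rand(256, 256, 3)
g_adv = lsgan_loss(np.full((30, 30), 0.8), 1.0)  # generator wants 1.0
g_cyc = cycle_consistency_loss(patch, patch)     # perfect reconstruction -> 0
```

Note that of the compared models, only the CycleGAN-style settings use a cycle term; CUT replaces it with a patch-wise contrastive objective.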
Keywords: Frozen sections; generative adversarial networks; histology; paraffin sections; thyroid cancer; whole slide imaging
Year: 2022 PMID: 35136673 PMCID: PMC8794030 DOI: 10.4103/jpi.jpi_53_21
Source DB: PubMed Journal: J Pathol Inform
Figure 1. Original frozen section (A), a corresponding optimized "fake-paraffin" patch (B) showing characteristics of paraffin sections (translated with the CUT setting; see the section "Virtual frozen-to-paraffin translation"), and an example of a real paraffin section on the right (C)
Figure 2. Overview of the two experiments, each performed by all six experts. In experiment 1, five corresponding images, comprising one real image and four virtual counterparts, were ranked. In experiment 2, each expert had to decide which image of a non-corresponding pair was real
Configurations of the three investigated deep learning-based image translation models
| | CG | CUT | CG-PEC |
|---|---|---|---|
| Patch size | 256 × 256 | 256 × 256 | 256 × 256 |
| Batch size | 1 | 1 | 1 |
| Regularization | Batch normalization | Batch normalization | Batch normalization |
| Generator(s) | U-Net | U-Net | U-Net |
| Discriminator | Patch-GAN | Patch-GAN | Patch-GAN |
| Weights | | | |
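The Patch-GAN discriminator listed in the table classifies overlapping patches rather than whole images. On a 256 × 256 input, the standard 70 × 70 PatchGAN (three stride-2 4 × 4 convolutions followed by two stride-1 4 × 4 layers, all with padding 1) produces a 30 × 30 grid of real/fake scores. This layer layout is the common pix2pix/CycleGAN default and is not a configuration confirmed by the paper; a quick sketch of the output-size arithmetic:

```python
def conv_out(size, kernel=4, stride=2, padding=1):
    """Spatial output size of one convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

def patchgan_output_size(size=256):
    """Output grid side length of the standard 70x70 PatchGAN:
    three stride-2 4x4 convs, then two stride-1 4x4 convs."""
    for stride in (2, 2, 2, 1, 1):
        size = conv_out(size, stride=stride)
    return size

print(patchgan_output_size(256))  # 30 -> a 30x30 score map per patch
```

Each cell of that 30 × 30 map judges one receptive-field region of the input, which is what makes the discriminator patch-based.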
Figure 4. Experiment 1: portion of samples assessed as visually more appropriate than the original samples (A) and than the stain-normalized samples (B), respectively. For example, from (A) we can see that expert 3 (E3) assessed stain normalization (SN) as superior to the original image in 90% (0.9) of cases