Literature DB >> 34306174
Xiangyu Meng1,2, Xin Li3, Xun Wang1,4.
Abstract
Histological analysis of tissue samples is fundamental for diagnosing the risk and severity of ovarian cancer. The commonly used Hematoxylin and Eosin (H&E) staining method involves complex steps and strict requirements, which seriously hampers histological research on ovarian cancer. Virtual histological staining with a Generative Adversarial Network (GAN) offers a feasible way around these problems, yet applying deep learning remains challenging because the amount of data available for training is quite limited. Building on the GAN framework, we propose a weakly supervised learning method that generates autofluorescence images of unstained ovarian tissue sections corresponding to H&E-stained sections of ovarian tissue. With this method we constructed the supervision conditions for the virtual staining process, which improves the quality of the images synthesized in the subsequent virtual staining stage. In the doctors' evaluation of our results, the accuracy of the unstained ovarian cancer fluorescence images generated by our method reached 93%. We also evaluated the quality of the generated images: the FID reached 175.969, the IS reached 1.311, and the MS reached 0.717. Using the image-to-image translation method and the data set constructed in the previous step, we implemented a virtual staining method that is accurate down to individual tissue cells. The staining accuracy assessed by the doctors reached 97%, and the accuracy of a deep-learning-based visual evaluation reached 95%.
Entities:
Year: 2021 PMID: 34306174 PMCID: PMC8270697 DOI: 10.1155/2021/4244157
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
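The data-set construction stage builds on CycleGAN-style unpaired translation between the H&E-stained and unstained-autofluorescence domains (see Figure 2 and the comparison tables below). A minimal PyTorch sketch of that kind of objective follows; it is not the paper's exact implementation, and the generator, discriminator, and weighting names are assumptions:

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
mse = nn.MSELoss()  # least-squares adversarial loss, as in the original CycleGAN

def generator_loss(G, F, D_fluo, D_he, real_he, real_fluo, lam=10.0):
    # Assumed roles: G maps H&E -> autofluorescence, F maps the reverse.
    fake_fluo = G(real_he)
    fake_he = F(real_fluo)

    # Adversarial terms: make each discriminator score the fakes as real (1).
    pred_fluo, pred_he = D_fluo(fake_fluo), D_he(fake_he)
    adv = mse(pred_fluo, torch.ones_like(pred_fluo)) + \
          mse(pred_he, torch.ones_like(pred_he))

    # Cycle-consistency terms: a round trip should reproduce the input.
    cyc = l1(F(fake_fluo), real_he) + l1(G(fake_he), real_fluo)

    return adv + lam * cyc
```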
Figure 1. Overview of the virtual staining process: (a) overview of the domain translation method; (b) overview of the virtual staining process.
Figure 2. Comparison of the results of using different trained models to construct data sets: (a) results generated by the CycleGAN model; (b) results synthesized by the improved CycleGAN model; (c) enlarged view of a cavity area of the synthesized image; (d) results after introducing domain consistency network training; (e) results using our modified generator structure together with the domain consistency network.
Figure 3. Training the domain consistency network.
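This record does not spell out the domain consistency network's architecture or loss. A plausible minimal sketch, assuming it is a small binary classifier whose confidence that a generated image lies in the target (autofluorescence) domain is added to the generator objective:

```python
import torch
import torch.nn as nn

# Assumed architecture: a small convolutional binary domain classifier.
domain_net = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
)
bce = nn.BCEWithLogitsLoss()

def domain_consistency_loss(fake_fluo):
    # Push generated images toward the target domain (label 1).
    logits = domain_net(fake_fluo)
    return bce(logits, torch.ones_like(logits))
```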
Comparison of the quality of unstained images synthesized using different methods.
| Metric | CycleGAN | Improved CycleGAN | Ours | Ours (with separable Conv) |
|---|---|---|---|---|
| IS ↓ | 1.590 | 1.700 | 1.407 | 1.311 |
| FID ↓ | 471.421 | 360.029 | 235.410 | 175.969 |
| MS ↓ | 0.883 | 0.873 | 0.794 | 0.717 |
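The FID values above compare Inception-feature statistics of real and generated images. A standard computation from precomputed activation matrices (one Inception-v3 pool feature per row; the function name here is ours, not the paper's):

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    # feats_*: (N, 2048) arrays of Inception-v3 pool features.
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)   # matrix square root of C1 * C2
    covmean = covmean.real            # discard tiny imaginary parts
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))
```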
Accuracy results of unstained images synthesized using different methods.
| Evaluator | CycleGAN | Improved CycleGAN | Ours | Ours (with separable Conv) |
|---|---|---|---|---|
| Doctor 1 | 12.50% | 1.25% | 24.25% | 77.50% |
| Doctor 2 | 3.50% | 5.50% | 39.50% | 86.50% |
| Doctor 3 | 0.00% | 1.50% | 55.50% | 93.50% |
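The best-scoring variant swaps standard convolutions for depthwise separable ones. A minimal PyTorch sketch of such a layer (a depthwise convolution followed by a 1x1 pointwise convolution), assuming this is the substitution meant by "separable Conv":

```python
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch),
        # cutting parameters and FLOPs versus a dense convolution.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding, groups=in_ch)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```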
Figure 4. The structure of the Parallel Feature Fusion Network (PFFN).
Figure 5. Network structure of the feature extraction module (UNet2): (a) the FromRGB module, which extends low-dimensional images to higher-dimensional feature maps; (b) the feature extraction module based on UNet2.
Figure 6. The UNet structure with additional skip connections.
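Figures 4-6 name three building blocks: parallel feature fusion, a FromRGB projection, and extra skip connections. A hypothetical sketch of the first two, under simple assumptions (FromRGB taken as a 1x1 convolution lifting the image to a feature map, fusion taken as concatenation of two parallel branches followed by a 1x1 convolution); the paper's exact PFFN wiring is not reproduced here:

```python
import torch
import torch.nn as nn

class FromRGB(nn.Module):
    """Assumed FromRGB: lift a 3-channel image to an n-channel feature map."""
    def __init__(self, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(3, out_ch, kernel_size=1)

    def forward(self, x):
        return self.proj(x)

class ParallelFusion(nn.Module):
    """Assumed fusion: run two parallel branches, concatenate, mix with 1x1."""
    def __init__(self, ch):
        super().__init__()
        self.branch_a = nn.Conv2d(ch, ch, 3, padding=1)
        self.branch_b = nn.Conv2d(ch, ch, 5, padding=2)
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.branch_a(x), self.branch_b(x)], dim=1))
```

The additional skip connections of Figure 6 would then be extra encoder-to-decoder shortcuts on top of the standard UNet ones, concatenated into the decoder features in the same way.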
Comparison of image quality using different loss functions and generator network structures.
| Network | Loss function | FID ↓ | IS ↓ |
|---|---|---|---|
| UNet | | 57.6092 | 1.3405 |
| | | 54.1733 | 1.4687 |
| | | 56.1733 | 1.4687 |
| | | 59.4684 | 1.4720 |
| | | 58.1196 | 1.4720 |
| | | 56.1639 | 1.4432 |
| UNet6 | | 54.1436 | 1.3928 |
| | | 50.8299 | 1.4256 |
| | | 52.4708 | 1.3865 |
| | | 51.3790 | 1.3907 |
| | | 55.2754 | 1.4852 |
| | | 49.3387 | 1.4073 |
| PFFN (ours) | | 54.8384 | 1.3835 |
| | | 49.1167 | 1.3903 |
| | | 49.6818 | 1.4124 |
| | | 49.3575 | 1.3238 |
| | | 47.0977 | 1.3505 |
| | | 48.8730 | 1.2158 |
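The loss-function formulas ablated above are not listed in this record. As a generic, hedged illustration of the kind of composite objective such ablations typically vary in image-to-image translation (an adversarial term plus weighted pixel-wise and perceptual terms; the weights and term names are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

l1, mse = nn.L1Loss(), nn.MSELoss()

def staining_loss(d_fake, fake, target, feat_fake=None, feat_real=None,
                  w_adv=1.0, w_pix=100.0, w_feat=10.0):
    # Adversarial term: make the discriminator score the fake as real.
    loss = w_adv * mse(d_fake, torch.ones_like(d_fake))
    # Pixel-wise reconstruction term against the paired H&E target.
    loss = loss + w_pix * l1(fake, target)
    # Optional perceptual term on features from a fixed pretrained network.
    if feat_fake is not None:
        loss = loss + w_feat * l1(feat_fake, feat_real)
    return loss
```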
Figure 7. Virtual staining results on pathological sections of ovarian cancer.
Staining accuracy of our model as assessed by three doctors.
| | Doctor 1 | Doctor 2 | Doctor 3 |
|---|---|---|---|
| Samples with successful staining | 190 | 196 | 194 |
| Accuracy | 95% | 98% | 97% |