Islam Alzoubi, Guoqing Bao, Rong Zhang, Christina Loh, Yuqi Zheng, Svetlana Cherepanoff, Gary Gracie, Maggie Lee, Michael Kuligowski, Kimberley L Alexander, Michael E Buckland, Xiuying Wang, Manuel B Graeber.
Abstract
Routine examination of entire histological slides at cellular resolution poses a significant if not insurmountable challenge to human observers. However, high-resolution data such as the cellular distribution of proteins in tissues, e.g., those obtained following immunochemical staining, are highly desirable. Our present study extends the applicability of the PathoFusion framework to the cellular level. We illustrate our approach using the detection of CD276 immunoreactive cells in glioblastoma as an example. Following automatic identification by means of PathoFusion's bifocal convolutional neural network (BCNN) model, individual cells are automatically profiled and counted. Only discriminable cells selected through data filtering and thresholding were segmented for cell-level analysis. Subsequently, we converted the detection signals into the corresponding heatmaps visualizing the distribution of the detected cells in entire whole-slide images of adjacent H&E-stained sections using the Discrete Wavelet Transform (DWT). Our results demonstrate that PathoFusion is capable of autonomously detecting and counting individual immunochemically labelled cells with a high prediction performance of 0.992 AUC and 97.7% accuracy. The data can be used for whole-slide cross-modality analyses, e.g., relationships between immunochemical signals and anaplastic histological features. PathoFusion has the potential to be applied to additional problems that seek to correlate heterogeneous data streams and to serve as a clinically applicable, weakly supervised system for histological image analyses in (neuro)pathology.
Keywords: CD276; PathoFusion framework; artificial intelligence; bifocal convolutional neural network (BCNN); image fusion; image segmentation
Year: 2022 PMID: 35884502 PMCID: PMC9316952 DOI: 10.3390/cancers14143441
Source DB: PubMed Journal: Cancers (Basel) ISSN: 2072-6694 Impact factor: 6.575
Figure 1Illustration of the extended PathoFusion framework used for the analysis of cell profiles in whole-slide images (WSI). Slides were subjected to immunochemistry for CD276, followed by brief hemalum counterstaining. (1) Model training: Patches were extracted from a WSI in line with specialist (consultant neuropathologist) annotations and passed to the bifocal convolutional neural network (BCNN). (2) Model inference: Following partitioning of the test image (the grid size shown is not to scale and is for illustration purposes only), extracted patches are provided to the trained BCNN model for classification and the prediction results are converted into the corresponding heatmaps (the pseudo-colour red marks recognized cells). Scale bar: microns.
Figure 2Cell-level analysis. Image patches containing cells identified by the BCNN are then processed for cell-level analysis. Filtering and thresholding are followed by edge and contour detection. An image is considered discriminable if the minimum acceptance value for the Otsu threshold is reached. It can then be segmented using edge and contour detection.
Figure 3Image fusion schematic illustrating our use of the Discrete Wavelet Transform, with pixel averaging as the fusion rule on the RGB channels of input images.
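The fusion scheme of Figure 3 corresponds to a standard wavelet-domain averaging rule applied channel by channel. A minimal sketch using PyWavelets; the choice of the Haar wavelet and a single decomposition level are assumptions, as the record does not name the mother wavelet or depth:

```python
import numpy as np
import pywt

def dwt_fuse(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two same-sized RGB images by averaging their DWT
    coefficients per channel, then inverting the transform."""
    assert img_a.shape == img_b.shape
    fused = np.empty(img_a.shape, dtype=float)
    for c in range(img_a.shape[2]):  # loop over the R, G, B channels
        cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a[..., c].astype(float), 'haar')
        cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b[..., c].astype(float), 'haar')
        # Averaging as the fusion rule, applied to every sub-band.
        coeffs = ((cA1 + cA2) / 2,
                  ((cH1 + cH2) / 2, (cV1 + cV2) / 2, (cD1 + cD2) / 2))
        fused[..., c] = pywt.idwt2(coeffs, 'haar')
    return fused
```

For constant inputs this reduces to a plain pixel average, which is the expected behaviour of the averaging fusion rule.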
Figure 4(A) Confusion matrix for “halo cell” detection. (B) Area under the curve (AUC) and receiver operating characteristic (ROC) analysis of “halo cell” detection by our BCNN model. (C) Comparison of ROC/AUC performance of the BCNN with that of two other deep learning models.
Performance evaluation of different deep learning models.
| Model | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| BCNN | 97.7% | 97.7% | 97.7% | 97.7% |
| Subnet (BCNN) | 90.0% | 90.0% | 90.0% | 89.8% |
| ResNet-50 | 94.0% | 94.0% | 94.0% | 94.0% |
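For reference, the four metrics in the table derive from a binary confusion matrix in the standard way. The counts below are illustrative only, not the paper's:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, recall and F1-score from binary
    confusion-matrix counts (tp/fp/fn/tn)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts only (not the paper's confusion matrix):
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```

Identical values across all four columns, as for the BCNN row, arise when the false positives and false negatives are balanced.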
Figure 5(A,B) The prediction results generated by the BCNN model for two independent cases. (1) Low-magnification view of an area showing “halo cells”; (2) the corresponding predicted heatmap revealing the detected “halo cells” (in red). (3) Visualizations of the results of the filtering and thresholding process: (3A,3B) upper panel: discriminable cells that returned Otsu values exceeding the discrimination threshold; (3A,3B) lower panel: less discriminable cells whose Otsu values fell below the discrimination threshold. Scale bar: microns.
Cell-level features of the cases used.
| Feature | Value Range |
|---|---|
| Number of detected “halo cells” | 100–15,000 cells |
| Density of “halo cells” | 0.0003–0.045 cells/mm² |
| Cell area | 7000–11,000 pixels |
| Cell perimeter | 450–800 pixels |
| “Halo cell” pixel intensity | 0.026–0.070 |
| Compactness | 2.5–4.5 |
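Compactness is commonly defined as the squared perimeter over 4π times the area, which equals 1 for a perfect circle and grows as the contour becomes more irregular. The record does not state the exact formula used, so this definition is an assumption:

```python
import math

def compactness(perimeter: float, area: float) -> float:
    """Perimeter**2 / (4*pi*area): 1.0 for a perfect circle,
    larger for irregular (e.g. spiky or elongated) contours."""
    return perimeter ** 2 / (4 * math.pi * area)
```

Under this definition, the perimeter and area ranges in the table (450–800 pixels and 7000–11,000 pixels) are broadly consistent with the reported compactness range of 2.5–4.5.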
Figure 6Image fusion using Discrete Wavelet Transform. (A) Original heatmap showing histological features. (B) Heatmap showing cell-level features (“halo cells” and negative areas). (C) Thresholding, registration and alignment, resulting in a heatmap showing only the cells of interest. (D) The Discrete Wavelet Transform was used to fuse heatmaps (A,C); (E) shows the same image as (D) but with greater brightness to emphasize the location of “halo cells” (bright dots). Whole slide images are shown.
Density of “halo cells” in relation to diagnostic morphological features in adjacent H&E-stained sections.
| Morphological Feature | Number of “Halo Cells” | Density (“halo cells”/mm²) |
|---|---|---|
| Normal blood vessels | 481 | 0.051 |
| Normal brain tissue | 554 | 0.036 |
| Geographic necrosis | 2247 | 0.047 |
| Viable tumour tissue | 7045 | 0.034 |
| Palisading necrosis | 248 | 0.031 |
| Microvascular proliferation | 1636 | 0.036 |
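The densities in the table above are cell counts normalized by the area of the corresponding annotated region. A trivial sketch of that normalization; the region areas themselves are not given in this record, so the example values are hypothetical:

```python
def halo_cell_density(n_cells: int, region_area_mm2: float) -> float:
    """Number of detected 'halo cells' per square millimetre of an
    annotated morphological region."""
    return n_cells / region_area_mm2

# Hypothetical region of 50 mm² containing 100 detected cells:
density = halo_cell_density(100, 50.0)
```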