Literature DB >> 34514205

Evaluation of Kidney Histological Images Using Unsupervised Deep Learning.

Noriaki Sato1,2, Eiichiro Uchino1,2, Ryosuke Kojima1, Minoru Sakuragi1,2, Shusuke Hiragi2,3, Sachiko Minamiguchi4, Hironori Haga4, Hideki Yokoi2, Motoko Yanagita2,5, Yasushi Okuno1.   

Abstract

INTRODUCTION: Evaluating histopathology via machine learning has gained research and clinical interest, and the performance of supervised learning tasks has been described in various areas of medicine. Unsupervised learning of histological images has the advantage of reproducibility for labeling; however, the relationship between unsupervised evaluation and clinical information remains unclear in nephrology.
METHODS: We propose an unsupervised approach combining convolutional neural networks (CNNs) and a visualization algorithm to cluster the histological images and calculate the score for patients. We applied the approach to the entire images or patched images of the glomerulus of kidney biopsy samples stained with hematoxylin and eosin obtained from 68 patients with immunoglobulin A nephropathy. We assessed the relationship between the obtained scores and clinical variables of urinary occult blood, urinary protein, serum creatinine (SCr), systolic blood pressure, and age.
RESULTS: The glomeruli of the patients were classified into 12 distinct classes, and the image patches into 10 classes. The output of the fine-tuned CNN, which we defined as the histological scores, had significant relationships with the assessed clinical variables. In addition, the clustering and visualization results suggested that the defined clusters captured important findings when evaluating renal histopathology. For the score of the patch-based cluster containing crescentic glomeruli, SCr (coefficient = 0.09, P = 0.019) had a significant relationship.
CONCLUSION: The proposed approach could successfully extract features that were related to the clinical variables from the kidney biopsy images along with the visualization for interpretability. The approach could aid in the quantified evaluation of renal histopathology.
© 2021 International Society of Nephrology. Published by Elsevier Inc.

Keywords:  autoencoder; convolutional neural networks; deep learning; histopathology; machine learning; nephropathology

Year:  2021        PMID: 34514205      PMCID: PMC8418980          DOI: 10.1016/j.ekir.2021.06.008

Source DB:  PubMed          Journal:  Kidney Int Rep        ISSN: 2468-0249


Machine learning algorithms, especially neural network architecture–based convolutional neural networks (CNNs), have achieved breakthrough performance in the classification of images into defined classes and are applied in various research areas, including medicine. Furthermore, they have gained considerable attention in the fields of histology and pathology, especially in neoplastic histopathology. Generally, in deep learning for histopathological images, supervised learning is performed wherein people decide on labels. One of the problems with this process is the occasional disagreement between and within pathologists, which makes it difficult to obtain correct labels for supervised learning tasks. In addition, the labeling of thousands of images is labor intensive. In unsupervised deep learning evaluations, the labeling is automated and reproducible because it is performed by a machine. Therefore, defining the classification labels in an unsupervised manner could be advantageous. However, it remains unclear whether the information obtained in an unsupervised manner is clinically meaningful in nephrology practice. In the present study, we propose an approach to assess the histological findings of biopsy specimens in an unsupervised manner and visualize how deep learning algorithms make these decisions. In a recent study involving renal pathologies, Ginley et al. extracted features from glomerular images, scored them using CNNs and recurrent neural networks, and compared the scores with the findings of pathologists in diabetic nephropathy. Another study showed preliminary results of classification and visualization of transplant renal biopsies, discriminating the severity of T cell–mediated rejection and antibody-mediated rejection. In addition, one study compared the performance of CNNs and pathologists in discriminating 7 major pathological findings.
However, unsupervised labeling and the association between the yielded classification labels and clinical variables have not been examined in nephropathology. Thus, we applied our unsupervised approach to the kidney slide images of patients with immunoglobulin A nephropathy (IgAN) and evaluated whether features extracted in an unsupervised manner could be related to clinical information.

Methods

Patient Selection and Covariate Assessment

We retrospectively obtained the available virtual slide images of patients who underwent renal needle biopsy and were diagnosed with IgAN based on findings observed by optical microscopy and immunofluorescence staining between July 2012 and May 2018 at Kyoto University Hospital. We excluded those with a definite concurrent histological diagnosis of other diseases, except for nephrosclerosis. Patients diagnosed with hepatic IgAN were excluded. Patients who underwent multiple biopsy procedures were included. All patients provided written informed consent for the use of specimens in the present study. The study protocol was approved by the Ethics Committee on Human Research of the Graduate School of Medicine, Kyoto University (No. R643-2 and G562), and the study adhered to the Declaration of Helsinki. We collected basic patient demographics, including age, gender, and systolic blood pressure (SBP, in mm Hg), as well as laboratory tests comprising the serum creatinine value (SCr, in mg/dL), urinary protein excretion level (UPro, in g/day), and the result of a urinary occult blood (UOB) test, which was classified into 5 categories: − (negative), ±, +, 2+, and 3+. These test values were obtained during the hospital stay for renal biopsy or at an outpatient visit before the renal biopsy procedure. If a daily urinary protein excretion value was not available during the respective hospital stay, the urinary protein/creatinine ratio was used instead. In addition, we obtained the Oxford classification (MEST-C score) based on the definitions in the pathological reports of the kidney biopsy specimens.

Extraction of Images and Preprocessing

All renal biopsy specimens were scanned with a NanoZoomer-2.0HT digital pathology slide scanner and the software NDP.scan 3.1.7 (Hamamatsu Photonics, Hamamatsu City, Japan), using a ×40 lens (0.23 μm/pixel). The quality of all the images was checked manually after scanning; if the slides were out of focus, new scans were performed. We stained slides with hematoxylin and eosin (H&E), which is the basic and most commonly used staining protocol. The whole-slide images were stored in NDPI format, and we used OpenSlide to extract PNG images from those files. We obtained the images with the highest resolution. The positions of the glomeruli were manually annotated by a nephrologist in the lower-resolution images and then cropped out from the highest-resolution images. The extracted glomeruli images underwent stain normalization via the method described by Macenko et al. The method assumes that every pixel in the image is a linear combination of 2 stain vectors (H&E). Each image was first converted to optical density (OD) values, and pixels with OD below a specified threshold were removed. Subsequently, singular value decomposition was calculated, the eigenvectors were obtained, and the plane spanned by the eigenvectors corresponding to the 2 largest singular values was formed. All OD values were then projected onto this plane and normalized. Finally, the staining intensity was corrected. The detailed method is described in the original article. Subsequently, the white areas on the edges of the glomerulus images were removed by a custom function, and glomerulus images with a proportion of white regions ≥0.2 were removed. For the experiment on the entire glomerulus images, the images were resized to a width of 331 and a height of 331 pixels, the default input shape for the CNN.
These filtered images were used for clustering and training analysis of the CNN, and all the images after the normalization were used for the assessment of the relationship with clinical traits.
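As a rough sketch of the stain-vector estimation step above (not the authors' exact implementation; the function name and the `alpha`/`beta` thresholds are illustrative defaults taken from common Macenko implementations):

```python
import numpy as np

def macenko_stain_vectors(img, beta=0.15, alpha=1.0):
    """Estimate the two H&E stain vectors of an RGB image (uint8, HxWx3)
    following the outline of Macenko et al.: convert to optical density,
    drop near-transparent pixels, form the plane of the two largest
    eigenvectors, and pick stain directions from the extreme angles."""
    od = -np.log10((img.reshape(-1, 3).astype(float) + 1) / 256.0)
    od = od[np.all(od > beta, axis=1)]          # remove transparent pixels
    # eigenvectors of the OD covariance; eigh sorts eigenvalues ascending
    eigvals, eigvecs = np.linalg.eigh(np.cov(od.T))
    plane = eigvecs[:, [2, 1]]                  # two largest eigenvectors
    proj = od @ plane                           # project OD onto the plane
    angles = np.arctan2(proj[:, 1], proj[:, 0])
    lo, hi = np.percentile(angles, alpha), np.percentile(angles, 100 - alpha)
    v1 = plane @ np.array([np.cos(lo), np.sin(lo)])
    v2 = plane @ np.array([np.cos(hi), np.sin(hi)])
    stains = np.stack([v1, v2], axis=1)         # 3x2 matrix of stain vectors
    return stains / np.linalg.norm(stains, axis=0)
```

Full pipelines then deconvolve each image against these vectors and rescale the concentrations to a reference, as described in the original article.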

Feature Extraction and Dimension Reduction

We used the Neural Architecture Search Network (NASNet), implemented in keras (NASNet-Large), to extract the features. NASNet searched for the model architecture directly and achieved state-of-the-art performance in the classification of ImageNet, which consists of >14 million images. Weights pretrained on ImageNet in the keras implementation were used, and the output of the final concatenation layer was averaged by global average pooling, which yielded a 4032-dimensional feature vector per glomerulus image. These vectors were subsequently processed by uniform manifold approximation and projection (UMAP), a popular nonlinear dimension reduction algorithm officially implemented in the Python library umap-learn.
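The global average pooling step can be illustrated in numpy (a toy tensor stands in for the final NASNet activation; in practice `keras.applications.NASNetLarge(include_top=False, pooling='avg')` would produce the 4032-dimensional vector directly):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse an (H, W, C) activation tensor to a C-dimensional vector
    by averaging each channel over its spatial grid, as done after the
    final concatenation layer of NASNet-Large (C = 4032)."""
    return feature_maps.mean(axis=(0, 1))

# toy 2x2x3 activation tensor standing in for the last NASNet layer output
fmap = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
vec = global_average_pool(fmap)   # one averaged value per channel
```

The resulting vectors, one per glomerulus image, are then stacked and passed to UMAP.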

Model-based Clustering

We clustered the output of UMAP using mclust, a model-based clustering library for R that uses parameterized finite Gaussian mixture models (GMMs) with various covariance structures. Models with 1 to 20 components were fitted. The model with the best Bayesian information criterion was selected, and the optimal number of clusters was determined accordingly. All other parameters were set to the defaults, and all the covariance structures available in mclust were tested. The glomeruli were labeled according to the highest probability of belonging to each cluster.
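The model-selection loop can be sketched with scikit-learn's `GaussianMixture` as a Python stand-in for mclust (scikit-learn exposes only four covariance structures, and its BIC is defined so that lower is better, unlike mclust's convention; the function name and toy data are ours):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_gmm(X, max_components=20, seed=0):
    """Fit GMMs with 1..max_components components and several covariance
    structures, returning the model with the best (lowest, in
    scikit-learn's convention) Bayesian information criterion."""
    best, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        for cov in ("full", "tied", "diag", "spherical"):
            gmm = GaussianMixture(n_components=k, covariance_type=cov,
                                  random_state=seed).fit(X)
            bic = gmm.bic(X)
            if bic < best_bic:
                best, best_bic = gmm, bic
    return best

# toy 2-D data with three well-separated blobs
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.1, size=(60, 2)) for c in (0, 3, 6)])
model = select_gmm(X, max_components=6)
labels = model.predict(X)   # hard labels = highest posterior probability
```

The hard labels obtained this way play the role of the cluster assignments used as training targets in the next step.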

Fine-tuning of the CNN and Calculation of Scores

We subsequently fine-tuned NASNet for the multiclass classification of the defined clusters to produce scores robust to the rotation of the glomerulus images and to visualize the rationale behind the prediction. We used keras 2.3.1 with tensorflow 1.15 or 2.2.0 backend to train the model. We constructed a new model using the layers of NASNet from the input layer to the last concatenation layer, followed by a global average pooling layer, a dropout layer, and a dense layer with softmax as the activation function. We set the last 10 convolutional layers and the last dense layer to be trainable and froze all the other layers. The preprocessed images were split into training and test datasets in a ratio of 8 to 2. Subsequently, the remaining training data were split into training and validation data in a ratio of 8 to 2 for use in the training process, with stratification of the classes and a fixed seed. We split by stratification of classes, not partitioning by patients, to preserve the class distribution across the datasets. During training, the original images and centered zoomed images were augmented with horizontal and vertical flips and rotations of 90°, 180°, and 270°, yielding 16 images from 1 raw image. The test and validation images were not augmented. In addition, because there was an imbalance of images between classes, class weights were set during training. Callback functions that performed early stopping when the validation loss did not improve, reduced the learning rate on plateau, and saved the weights with the best validation loss were used during training. We used categorical cross-entropy as the loss function and Adam as the optimizer. The performance was assessed using the area under the receiver operating characteristic curves (AUROCs) with the unaugmented test dataset as input.
This was a multiclass classification problem, and performance was assessed with 1-vs-1 pairwise comparisons and 1-vs-rest comparisons using the prevalence-weighted average.
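One way to obtain the 16 augmented images per raw image described above is the 8 flip/rotation (dihedral) variants of both the original and a centered zoom (the function names and the zoom factor are illustrative, not the authors' exact parameters):

```python
import numpy as np

def dihedral_variants(img):
    """The 8 flip/rotation variants of an image: identity, rotations of
    90/180/270 degrees, and the same four after a horizontal flip."""
    out = []
    for base in (img, np.fliplr(img)):
        for k in range(4):
            out.append(np.rot90(base, k))
    return out

def augment(img, zoom=0.5):
    """Yield 16 training images from 1 raw image: the original and a
    centered zoom, each flipped/rotated 8 ways. The zoom factor is
    illustrative."""
    h, w = img.shape[:2]
    dh, dw = int(h * zoom / 2), int(w * zoom / 2)
    zoomed = img[dh:h - dh, dw:w - dw]
    return dihedral_variants(img) + dihedral_variants(zoomed)
```

In a keras pipeline, the zoomed crops would be resized back to the 331 × 331 input shape before being fed to the network.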

Calculation of Histological Scores

The output values of the final layer were calculated using all the images after H&E normalization. The activation function of the last layer was softmax; thus, the scores per glomerulus summed to 1. The calculated scores served as the histological scores of the respective glomerulus, and the mean scores of all glomerulus images from the slide images of a patient served as the histological scores of the respective patient.
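The patient-level aggregation above is a simple mean of softmax vectors; because each row sums to 1, so does the mean (a minimal sketch with toy values):

```python
import numpy as np

def patient_scores(glomerulus_scores):
    """Average the per-glomerulus softmax vectors of one patient into a
    single histological-score vector; each input row sums to 1, so the
    mean does too."""
    return np.asarray(glomerulus_scores).mean(axis=0)

# three glomeruli, four clusters (toy softmax outputs)
s = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.4, 0.4, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1]])
p = patient_scores(s)   # one score per cluster for this patient
```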

Visualization of the Reason Behind the Prediction

We used score-weighted class activation mapping (Score-CAM) to visualize and highlight the important regions in the images for predicting the respective class. Because multiple convolutional layers were batch-normalized and concatenated to 1 layer in the last cell of NASNet, we visualized Score-CAM by obtaining the activation map corresponding to the output of the activation function of the final concatenation layer. In addition, guided backpropagation was calculated and multiplied with Score-CAM values to obtain guided Score-CAM to visualize the rationale at higher resolution, and the results of gradient-weighted class activation mapping (Grad-CAM) were visualized.
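The core of the Score-CAM procedure can be sketched as follows (the stub `model`, toy activation maps, and function names are ours; a real application would upsample each activation map to the input size and query the trained CNN in batches):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def score_cam(model, img, activations, target):
    """Minimal Score-CAM sketch: each activation map is normalized to
    [0, 1] and used to mask the input; the masked image's softmax score
    for the target class becomes that map's weight, and the CAM is the
    ReLU of the weighted sum. `model` maps an image to class logits.
    Upsampling is omitted because the toy maps match the input size."""
    weights = []
    for a in activations:                       # one (H, W) map per channel
        span = a.max() - a.min()
        mask = (a - a.min()) / span if span > 0 else np.zeros_like(a)
        weights.append(softmax(model(img * mask))[target])
    cam = np.tensordot(np.array(weights), np.array(activations), axes=1)
    return np.maximum(cam, 0)                   # ReLU

# toy "model": class logits are sums over the two halves of the image
model = lambda x: np.array([x[:, :2].sum(), x[:, 2:].sum()])
img = np.arange(16, dtype=float).reshape(4, 4)
acts = [np.eye(4), np.ones((4, 4))]
cam = score_cam(model, img, acts, target=0)
```

Guided Score-CAM then multiplies this map element-wise with the guided backpropagation result to recover finer detail, as described in the text.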

Patch-based Analysis

We subsequently conducted a patch-based analysis of each glomerulus image to assess the applicability of the proposed approach at a higher resolution. The patch-based analysis applied the same workflow to image patches with a width and height of 96 pixels, obtained by equally dividing each glomerulus image into 16 sections. The patches were filtered beforehand in the same manner as the whole glomerulus images. A convolutional autoencoder with 6 convolutional layers was trained with the extracted patches as input. Subsequently, the output vectors of the encoder were clustered by GMMs, and the encoder was retrained with the defined clusters in the same way as for the whole glomerulus images. In the patch-based analysis, augmentation was not performed. The scores for each patch were obtained from the output of the retrained encoder, and the scores of all the patches were summed to calculate the glomerulus score. These glomerulus scores were averaged to obtain the histological score for the respective patient. The visualization was obtained by applying Score-CAM to the final layer of the encoder.
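The division into 16 equal patches can be sketched as a grid crop (the 384-pixel image size below is illustrative; the text only fixes the 96-pixel patch size and the 4 × 4 grid):

```python
import numpy as np

def to_patches(img, n=4):
    """Divide an (H, W, C) image into an n x n grid of equal patches,
    e.g. 16 patches of 96 x 96 pixels from a 384 x 384 glomerulus image."""
    h, w = img.shape[0] // n, img.shape[1] // n
    return [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(n) for j in range(n)]

patches = to_patches(np.zeros((384, 384, 3)))   # 16 patches of 96 x 96 x 3
```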

Comparison of the Scores in the Patient With Multiple Biopsy Specimens

To assess whether these scores reflect disease progression or regression in multiple biopsy specimens obtained from the same patient, we calculated the scores for 2 biopsy specimens from a patient and assessed the relationship between the pathological assessment and the changes in the histological scores.

Statistical Analysis

The relationships between continuous variables and histological scores were modeled by linear regression models, and we tested the null hypothesis that the coefficient of the histological scores equals 0. The P values obtained via the linear models were corrected using the Bonferroni procedure. The relationship between UOB and histological scores was assessed via one-way analysis of variance, with adjustment using Dunnett's method with the negative category as the control, performed via the R library multcomp. The relationships between the clinical variables of SBP, SCr, and UPro and the MEST-C classification categories were assessed by the same methods as for UOB. Adjusted P values < 0.05 were considered statistically significant. Data preprocessing was performed with pandas or tidyverse. The splitting of training, validation, and test data, the calculation of AUROC scores, and the clustering of patches were performed via the respective functions in scikit-learn. The figures were generated using the R libraries ggplot2 and firatheme. The visualized significant clusters identified by the algorithm with both the patched and the whole glomerulus images were evaluated first by 3 nephrologists, and the findings were validated by a board-certified pathologist.
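A minimal sketch of the linear-model test with Bonferroni correction, using `scipy.stats.linregress` on synthetic data (the function name, the number of tests, and the synthetic SCr values are illustrative, not the study's data):

```python
import numpy as np
from scipy import stats

def score_vs_variable(scores, variable, n_tests):
    """Test H0: slope = 0 in a simple linear model of a clinical variable
    on a histological score, with Bonferroni correction for n_tests
    comparisons (adjusted P values are capped at 1)."""
    res = stats.linregress(scores, variable)
    return res.slope, min(res.pvalue * n_tests, 1.0)

# synthetic cohort of 68 patients: SCr rises with the histological score
rng = np.random.default_rng(1)
score = rng.uniform(0, 1, 68)
scr = 0.5 + 1.2 * score + rng.normal(0, 0.1, 68)
slope, p_adj = score_vs_variable(score, scr, n_tests=12)
```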

Data Availability

We cannot share the raw slide images because that will potentially breach patient privacy. However, we share the model and weights file used in the study for convolutional autoencoder and NASNet implemented in keras, which can be used to test the glomeruli images from other institutions after the normalization of staining (https://github.com/noriakis/glomerulus-clustering).

Results

Patient Demographics

The demographic information of the 68 patients who underwent biopsy procedures at Kyoto University Hospital is summarized in Table 1. The resolution of the slide images was 54,332 ± 36,469 pixels in width and 58,522 ± 14,353 pixels in height (mean ± SD). Overall, 2144 images of glomeruli were obtained from the H&E staining–normalized slide images. The mean number of glomeruli per slide was 31.5 ± 16.8 (mean ± SD; minimum 6, maximum 73). After preprocessing, 1319 images were obtained for the downstream analysis of clustering and training of the CNN. Note that all the images were used in the calculation of histological scores.
Table 1

Clinical and pathological characteristics of the included patients

Clinical values                                                  IgAN (n = 68)
Age, yr, mean (SD)                                               42.28 (18.75)
Serum creatinine, mg/dL, mean (SD)                               0.97 (0.53)
Urinary protein, g/day or protein/creatinine ratio, mean (SD)    1.37 (1.92)
Systolic blood pressure, mm Hg, mean (SD)                        124.13 (17.23)
Male gender, n (%)                                               27 (39.7)
Urinary occult blood, n (%)
 −                                                               4 (5.9)
 ±                                                               5 (7.4)
 1+                                                              3 (4.4)
 2+                                                              28 (41.2)
 3+                                                              28 (41.2)
M = 1, n (%)                                                     28 (41.2)
E = 1, n (%)                                                     9 (13.2)
S = 1, n (%)                                                     52 (76.5)
T, n (%)
 0                                                               57 (83.8)
 1                                                               9 (13.2)
 2                                                               2 (2.9)
C, n (%)
 0                                                               30 (44.1)
 1                                                               37 (54.4)
 2                                                               1 (1.5)

C, cellular or fibrocellular crescents; E, endocapillary hypercellularity; IgAN, IgA nephropathy; M, mesangial hypercellularity; S, segmental glomerulosclerosis; SD, standard deviation; T, interstitial fibrosis/tubular atrophy.


The Presentation of Workflow and Performance Assessment

The overall proposed workflow with the selected steps is shown in Figure 1. The complete listing of all the steps is shown in Supplementary Text S1. We first used NASNet to extract the features from the preprocessed and filtered glomeruli images. UMAP was performed on the obtained feature vectors, and using the resulting vectors as input, the optimal number of clusters was determined by fitting GMMs and comparing Bayesian information criterion values. The model with 12 components and the VVE (ellipsoidal, equal orientation) covariance had the best Bayesian information criterion value; therefore, the number of clusters was set to 12. Components 1, 2, and 3 of the UMAP results are shown in Supplementary Figure S1. The numbers of images in the clusters were 76, 117, 146, 137, 102, 95, 25, 251, 68, 99, 90, and 113, respectively. Using the defined cluster labels as the correct labels, we fine-tuned NASNet starting from the weights trained on ImageNet. The proportion of images of each patient in the training, test, and validation datasets is presented in Supplementary Table S1. The training was performed with 13,504 augmented images. Using the unaugmented test dataset, the 1-vs-1 weighted AUROC average was 0.921, and the 1-vs-rest weighted AUROC average was 0.918. The highest 1-vs-rest AUROC was obtained in cluster 10 (AUROC 0.998) and the lowest in cluster 5 (AUROC 0.839). Using the obtained weights, we calculated histological scores for all patients using all the glomerulus images. Representative glomeruli for each cluster are presented in Supplementary Figure S2.
Figure 1

Overall workflow. The overall workflow of the proposed methods is visualized.


Relationship Between Clinical Variables and Histological Scores

The score of cluster 2 was the highest among all the categories (0.168 ± 0.081). The overall relationships between the histological scores and clinical variables, including age, SBP, SCr, UPro, and the result of the UOB test, are summarized in Table 2 and Figure 2. The score of cluster 6 was significantly related to UOB, such that the negative category had the highest values compared with the other categories. This cluster was presumed to contain glomeruli with normal or minor abnormalities. The score of cluster 10 was significantly associated with SBP, SCr, and UPro, with higher scores indicating higher values. The score of cluster 11 was significantly associated with SCr and UPro. The statistical summaries, including the coefficients, P values, and R2 values of the linear models, are presented in Supplementary Tables S2 and S3. For comparison, we assessed the relationship between the Oxford MEST-C score and clinical variables. For this assessment, the patient with C2 scoring was excluded beforehand. As a result, SCr was significantly associated with the M score (coefficient = 0.288, P = 0.027). SBP and UPro had no significant relationship with the MEST-C score in the current cohort.
Table 2

Statistically significant clusters and their associated variables

Clinical values^a                                          Significant cluster^b
Age                                                        3
Systolic blood pressure                                    3, 4, and 10
Serum creatinine                                           3, 8, 10, and 11
Urinary protein excretion                                  3, 10, and 11
Urinary occult blood (significant in ±, +, 2+, and 3+
 compared with the negative [−] category)                  6

^a Clinical values tested.

^b The cluster in which the score is significantly associated with the corresponding clinical value after the adjustment of P values.

Figure 2

Relationship between histological scores and clinical variables. The box plot (urinary occult blood) and line plots (age, systolic blood pressure, serum creatinine, and urinary protein excretion) show the relationship between histological scores and clinical variables. The x axes represent clinical variables, and the y axes represent histological scores. Statistically significant clusters are presented with an asterisk and red background.


Visualization of the Rationale Behind the Prediction

We obtained Score-CAM and guided Score-CAM, along with Grad-CAM and guided Grad-CAM, of the glomeruli with the 5 highest classification probabilities for each cluster, which served as the rationale for the prediction of that cluster. We present the results of clusters 6, 10, and 11 in Figure 3. Cluster 6 contained glomeruli with mostly minor abnormalities. Cluster 10 contained sclerotic glomeruli. Cluster 11 contained glomeruli with mesangial matrix expansion and mesangial cell proliferation; in addition, crescentic glomeruli or glomeruli with suspected adhesion and fibrosis were included. Grad-CAM and Score-CAM seemed to correctly point out the structures inside the glomeruli, with attention split across various positions. The guided Grad-CAM and Score-CAM of cluster 6 seemed to indicate that white pixel regions in the images received high attention. In glomerular pathology such as the present study, the white areas are likely to be Bowman's space or the capillary lumen. However, in the other clusters, specific regions such as those of mesangial matrix expansion did not receive specifically high attention.
Figure 3

Visualization results of the rationale behind the prediction of each class. The score-weighted class activation mapping, gradient-weighted class activation mapping, and the results obtained by multiplication with guided backpropagation are shown. Clusters 6 (left), 10 (middle), and 11 (right) are shown.


The Result of the Patch-based Analysis

We conducted an additional analysis of the equally divided patches using the same workflow to assess the applicability of the approach at a higher resolution. Overall, 23,168 patches extracted from the glomerulus images were analyzed. The results are summarized in Figure 4. The score of patch cluster 1 had a significant relationship with SCr (coefficient = 0.09, P = 0.019), and the score of cluster 3 with SCr (coefficient = 0.249, P < 0.001), SBP (coefficient = 5.71, P = 0.013), and UPro (coefficient = 0.714, P = 0.003). Cluster 7 had a significant relationship with SCr (coefficient = −0.145, P = 0.039). In addition, cluster 10 had a significant relationship with UOB (comparison between negative and ±, 2+, and 3+). All the statistical summaries, including the coefficients, P values, and R2 values of the linear models, are presented in Supplementary Tables S4 and S5.
Figure 4

The result summary of the patch-based analysis. The results of the patch-based analysis are shown. The left panel shows the clustered patches and the rationale behind the clustering visualized by score-weighted class activation mapping. The number of patches in the class, along with the clinical variables that had a significant relationship with the score of the patch class, are shown. The right panel shows the rationale for the patches of the glomeruli with the highest scores of the respective cluster of patches. The predicted cluster of each patch is shown in the upper left corner with the prediction probability.

The visualization results suggested that cluster 1 contained crescentic glomeruli, cluster 3 contained sclerotic glomeruli, and cluster 7 contained glomeruli with mild mesangial matrix expansion and mesangial cell proliferation. The Score-CAM visualizations suggested that the model gave high attention to cells with mesangial matrix, sclerotic regions, and white regions, such as Bowman's space or the capillary lumen, in clusters 1, 3, and 7, respectively. Using the patch-based scores as input, we performed an additional analysis comparing multiple subsequent biopsy specimens of 1 patient. The patient was diagnosed with IgAN after the first biopsy procedure and underwent a second biopsy procedure after the Pozzi protocol. A third biopsy specimen was obtained as UPro increased. We performed the analysis on the available virtual slides of the second and third biopsy specimens. The clinical and pathological findings of the second and third biopsy specimens are shown in Supplementary Table S6. As a result, the score of the third biopsy specimen was higher in clusters 1 and 3, which indicated that the proportion of sclerotic and crescentic glomeruli increased. Conversely, the score of cluster 10, which gave high attention mostly to the white regions, decreased.
This indication matched the pathological reports of the respective biopsy procedures. A summary of the comparison is shown in Supplementary Figure S3.

Discussion

In the present study, we proposed an unsupervised approach to quantitatively assess histological findings and evaluated their relationship with clinical information. In addition, the reason behind the definition of classes was visualized. As a result, the histological scores obtained by unsupervised clustering of the glomerulus image features extracted from the CNN model had significant relationships with important clinical measurements in patients with IgAN. Various studies have used machine learning to evaluate renal histopathology. Their main objectives are to segment various structures present in the slide images, such as the glomeruli, to detect the glomeruli in the images, to extract novel pathological findings from the glomeruli, or to associate defined glomerular features with pathological findings or clinical variables. Our study falls into the last category. One study used manually constructed features of the glomerulus images to detect proliferative lesions in the glomerulus, without using deep learning. Compared with the study conducted by Ginley et al., which used handcrafted features combined with a CNN, our approach incorporated no prior knowledge, such as glomerular components, into the network. Rather, we allowed the CNN and the clustering algorithm to decide the class of the glomeruli images and patches. This can be advantageous in the sense that we can evaluate how the CNN assesses and classifies the glomerular images regardless of existing knowledge. Conversely, this can also be problematic because the resolution did not reach the expertise of pathologists, limiting the use of our model in the clinical setting because of low interpretability. The CNN could discriminate between the defined clusters of the glomeruli images according to the AUROC. This is an expected result because the correct labels were defined using feature vectors extracted by the same CNN.
We used Score-CAM, a newly developed method to visualize the rationale in CNN architectures that is reported to provide better localization than the popular Grad-CAM or Grad-CAM++. The clustering results suggested that the proposed model captured some of the important pathological findings and normal findings of IgAN. However, as discussed above, the activation map could not localize specific pathological changes in the structural components inside the glomeruli, such as adhesion, crescent formation, or mesangial proliferation, instead focusing vaguely when the entire glomerulus images were used as input. We speculated that one reason for this is that these findings are inherently difficult to evaluate with the H&E stain used in this study, compared with periodic acid–Schiff or periodic acid methenamine silver stain. In the patch-based analysis, we found that the proposed approach could correctly give high attention to structures in the images such as cellular components, sclerotic regions, crescentic regions, or the capillary lumen, in contrast to the analysis with the entire glomerulus images as input, which failed to capture these findings precisely. This suggested that the calculated scores based on the proposed approach could be interpreted by physicians to some extent. In previous studies investigating the correlation of the MEST-C score and clinical variables, the C-score was associated with SCr and UOB, and the S-score with hypertension. In addition, global glomerulosclerosis was associated with lower estimated glomerular filtration rate, UPro, SCr, and a higher incidence of arterial hypertension, and crescent formation was reported to be associated with estimated glomerular filtration rate and proteinuria. We also investigated the relationship between the MEST-C scoring and clinical variables. In our study, the scores reported to be related to clinical variables, such as C, were not associated with any clinical variables.
This is presumably attributable to the nature of the investigated population: at a university hospital, patients with severe disease tend to be admitted. Compared with MEST-C scoring, our proposed score was related not only to UPro but also to SBP and UOB. In addition, our system can evaluate glomeruli more quantitatively.

The strength of this study is that we successfully developed a histological assessment workflow and confirmed that the calculated scores relate to clinical indicators in patients with IgAN, especially UPro and SCr, by examining glomerular image features in an unsupervised manner, independent of nephrologists and pathologists. The weights of the model are publicly available and could therefore contribute to the standardization of histological assessment. In addition, the CNNs used in the study are replaceable and can easily scale as new models are developed. Moreover, disease progression could possibly be evaluated quantitatively in subsequent biopsy specimens from the same patient; however, testing with more patients who have subsequent biopsy specimens is needed.

The major limitation of the study was that we could not assess the relationship between the histological scores and prognostic information because the observation period was short. In addition, because the approach involves unsupervised clustering, the results are expected to vary with parameter adjustments. For example, if the number of neighbors in UMAP is reduced in the experiment with entire glomerulus images as input, the images split into many more clusters, and the result depends on the number of available training images. Furthermore, the evaluation of the clustering and visualization results remained subjective. Although the associations were statistically significant, the R2 values for the linear models were low.
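The score-versus-clinical-variable associations reported here (e.g., coefficient = 0.09, P = 0.019 for SCr, with low R2) are the output of simple linear models, which can be reproduced with an ordinary least-squares fit. The sketch below uses synthetic data for illustration only; the effect size and noise level are assumptions chosen to mimic a weak but significant association, not the study's data.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

# Hypothetical data: per-patient score for one cluster vs. serum creatinine.
score = rng.uniform(0, 1, size=68)                       # cluster score in [0, 1]
scr = 0.09 * score + rng.normal(scale=0.02, size=68)     # weak linear relation + noise

res = linregress(score, scr)
print(f"coefficient = {res.slope:.2f}, "
      f"P = {res.pvalue:.3g}, R2 = {res.rvalue ** 2:.2f}")
```

A small P value with a modest R2, as in the study, means the association is unlikely to be chance but the score explains only a small fraction of the variance in the clinical variable.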
This is presumed to be partially because of the variability of the clinical variables assessed, especially UOB, which was assessed by dipstick, and UPro, for which the ratio of daily urinary protein to urinary creatinine was used.

In summary, we proposed an unsupervised approach that quantitatively evaluates histological findings while providing the rationale for the evaluation, and we applied it to kidney histological images. The obtained scores were related to important clinical variables in patients with IgAN, and the approach could be applied to other glomerular diseases or conditions that require evaluation of specific structures inside the slide images.

Disclosure

MY receives research grants from Mitsubishi Tanabe Pharma and Boehringer Ingelheim. All the other authors declared no competing interests.
References

1.  Detection and Classification of Novel Renal Histologic Phenotypes Using Deep Neural Networks.

Authors:  Susan Sheehan; Seamus Mawe; Rachel E Cianciolo; Ron Korstanje; J Matthew Mahoney
Journal:  Am J Pathol       Date:  2019-06-18       Impact factor: 4.307

2.  Classification of glomerular pathological findings using deep learning and nephrologist-AI collective intelligence approach.

Authors:  Eiichiro Uchino; Kanata Suzuki; Noriaki Sato; Ryosuke Kojima; Yoshinori Tamada; Shusuke Hiragi; Hideki Yokoi; Nobuhiro Yugami; Sachiko Minamiguchi; Hironori Haga; Motoko Yanagita; Yasushi Okuno
Journal:  Int J Med Inform       Date:  2020-07-11       Impact factor: 4.046

Review 3.  Artificial intelligence and machine learning in nephropathology.

Authors:  Jan U Becker; David Mayerich; Meghana Padmanabhan; Jonathan Barratt; Angela Ernst; Peter Boor; Pietro A Cicalese; Chandra Mohan; Hien V Nguyen; Badrinath Roysam
Journal:  Kidney Int       Date:  2020-04-01       Impact factor: 10.612

4.  Segmentation of Glomeruli Within Trichrome Images Using Deep Learning.

Authors:  Shruti Kannan; Laura A Morgan; Benjamin Liang; McKenzie G Cheung; Christopher Q Lin; Dan Mun; Ralph G Nader; Mostafa E Belghasem; Joel M Henderson; Jean M Francis; Vipul C Chitalia; Vijaya B Kolachalama
Journal:  Kidney Int Rep       Date:  2019-04-15

5.  Correlation of Oxford MEST-C Scores With Clinical Variables for IgA Nephropathy in South India.

Authors:  Swarnalata Gowrishankar; Yashita Gupta; Mahesha Vankalakunti; Kiran K Gowda; Anila A Kurien; K S Jansi Prema; N V Seethalekshmy; Jyotsna Yesodharan
Journal:  Kidney Int Rep       Date:  2019-07-03

6.  Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours.

Authors:  Osamu Iizuka; Fahdi Kanavati; Kei Kato; Michael Rambeau; Koji Arihiro; Masayuki Tsuneki
Journal:  Sci Rep       Date:  2020-01-30       Impact factor: 4.379

7.  Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning.

Authors:  Nicolas Coudray; Paolo Santiago Ocampo; Theodore Sakellaropoulos; Navneet Narula; Matija Snuderl; David Fenyö; Andre L Moreira; Narges Razavian; Aristotelis Tsirigos
Journal:  Nat Med       Date:  2018-09-17       Impact factor: 53.440

8.  PathoSpotter-K: A computational tool for the automatic identification of glomerular lesions in histological images of kidneys.

Authors:  George O Barros; Brenda Navarro; Angelo Duarte; Washington L C Dos-Santos
Journal:  Sci Rep       Date:  2017-04-24       Impact factor: 4.379

9.  Identification of glomerular lesions and intrinsic glomerular cell types in kidney diseases via deep learning.

Authors:  Caihong Zeng; Yang Nan; Feng Xu; Qunjuan Lei; Fengyi Li; Tingyu Chen; Shaoshan Liang; Xiaoshuai Hou; Bin Lv; Dandan Liang; WeiLi Luo; Chuanfeng Lv; Xiang Li; Guotong Xie; Zhihong Liu
Journal:  J Pathol       Date:  2020-07-07       Impact factor: 7.996
