| Literature DB >> 33808978 |
Fernando Pérez-Sanz, Miriam Riquelme-Pérez, Enrique Martínez-Barba, Jesús de la Peña-Moral, Alejandro Salazar Nicolás, Marina Carpes-Ruiz, Angel Esteban-Gil, María Del Carmen Legaz-García, María Antonia Parreño-González, Pablo Ramírez, Carlos M Martínez.
Abstract
Liver transplantation is the only curative treatment option for patients diagnosed with end-stage liver disease. The low availability of organs demands an accurate selection procedure based on histological analysis in order to evaluate the allograft. This assessment, traditionally carried out by a pathologist, is not exempt from subjectivity. In this context, new tools based on machine learning and artificial vision are continuously being developed for the analysis of medical images of different types. Accordingly, in this work, we develop a computer vision-based application for the fast, automatic and objective quantification of macrovesicular steatosis in histopathological liver section slides stained with Sudan stain. For this purpose, digital microscopy images were used to obtain thousands of feature vectors based on the RGB and CIE L*a*b* pixel values. These vectors were labelled, under a supervised process, as fat vacuole or non-fat vacuole, and a set of classifiers based on different algorithms was trained accordingly. The results showed a high overall accuracy for all classifiers (>0.99), with a sensitivity between 0.844 and 1 and a specificity >0.99. In terms of image classification speed, KNN and Naïve Bayes were substantially faster than the other classification algorithms. Sudan stain is a convenient technique for evaluating macrovesicular steatosis in pre-transplant liver biopsies, providing reliable contrast and enabling fast and accurate quantification with the machine learning algorithms tested.
Entities:
Keywords: computer vision; liver transplantation; machine learning; macrovesicular steatosis; Sudan stain
Mesh:
Year: 2021 PMID: 33808978 PMCID: PMC8001362 DOI: 10.3390/s21061993
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
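The classification pipeline summarized in the abstract (per-pixel RGB and CIE L*a*b* feature vectors, supervised labelling as fat vacuole or non-fat vacuole, and training of several classifiers) can be sketched as follows. This is a minimal illustration in Python with scikit-image and scikit-learn, not the authors' implementation; the file names and the binary label mask are hypothetical, and the Keras network is omitted for brevity.

```python
import numpy as np
from skimage import io, color
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical inputs: a Sudan-stained section image and a binary mask
# (1 = fat vacuole, 0 = background/tissue) produced by manual labelling.
rgb = io.imread("sudan_section.png")[..., :3]    # H x W x 3, uint8
mask = io.imread("fat_vacuole_mask.png") > 0     # H x W, bool

# Per-pixel feature vectors: RGB plus CIE L*a*b* (6 features per pixel).
lab = color.rgb2lab(rgb)
X = np.concatenate([rgb.reshape(-1, 3), lab.reshape(-1, 3)], axis=1).astype(np.float32)
y = mask.reshape(-1).astype(int)

# Subsample a fixed number of labelled pixels (the study uses 1000-100,000).
idx = np.random.default_rng(0).choice(len(y), size=10_000, replace=False)
X_tr, X_te, y_tr, y_te = train_test_split(X[idx], y[idx], test_size=0.3, stratify=y[idx])

classifiers = {
    "KNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(),
    "NN": MLPClassifier(max_iter=500),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```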
Figure 1. Average training time for each algorithm (in s), according to the proposed range of selected pixel counts.
Average training time (in s) and AUC for each algorithm under different numbers of pixels.
| Model | Pixels | Average Time (s) | Average AUC | SE Time (s) | SE AUC |
|---|---|---|---|---|---|
| KNN | 1000 | 0.000 | 1.000 | 0.000 | 0.000 |
| NB | 1000 | 0.001 | 1.000 | 0.000 | 0.000 |
| NN | 1000 | 0.117 | 1.000 | 0.032 | 0.000 |
| RF | 1000 | 0.118 | 1.000 | 0.001 | 0.000 |
| SVM | 1000 | 0.004 | 1.000 | 0.000 | 0.000 |
| Keras | 1000 | 0.645 | 0.984 | 0.022 | 0.012 |
| KNN | 5000 | 0.000 | 0.997 | 0.000 | 0.000 |
| NB | 5000 | 0.001 | 0.998 | 0.000 | 0.000 |
| NN | 5000 | 0.429 | 1.000 | 0.040 | 0.000 |
| RF | 5000 | 0.207 | 0.999 | 0.002 | 0.000 |
| SVM | 5000 | 0.045 | 1.000 | 0.000 | 0.000 |
| Keras | 5000 | 2.417 | 0.998 | 0.113 | 0.000 |
| KNN | 10,000 | 0.001 | 0.998 | 0.000 | 0.000 |
| NB | 10,000 | 0.001 | 0.999 | 0.000 | 0.000 |
| NN | 10,000 | 0.697 | 1.000 | 0.069 | 0.000 |
| RF | 10,000 | 0.329 | 0.998 | 0.002 | 0.000 |
| SVM | 10,000 | 0.115 | 1.000 | 0.001 | 0.000 |
| Keras | 10,000 | 4.616 | 0.999 | 0.080 | 0.000 |
| KNN | 50,000 | 0.003 | 0.997 | 0.000 | 0.000 |
| NB | 50,000 | 0.005 | 0.998 | 0.000 | 0.000 |
| NN | 50,000 | 1.763 | 0.999 | 0.141 | 0.000 |
| RF | 50,000 | 1.642 | 0.999 | 0.011 | 0.000 |
| SVM | 50,000 | 2.046 | 0.999 | 0.011 | 0.000 |
| Keras | 50,000 | 22.307 | 0.999 | 0.333 | 0.000 |
| KNN | 100,000 | 0.006 | 0.997 | 0.000 | 0.000 |
| NB | 100,000 | 0.011 | 0.997 | 0.001 | 0.000 |
| NN | 100,000 | 2.799 | 0.999 | 0.210 | 0.000 |
| RF | 100,000 | 3.562 | 0.999 | 0.079 | 0.000 |
| SVM | 100,000 | 7.153 | 0.999 | 0.107 | 0.000 |
| Keras | 100,000 | 38.802 | 0.999 | 2.507 | 0.000 |
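A timing/AUC table such as the one above can be produced with a simple benchmarking loop: for each algorithm and pixel count, repeat the train/test cycle, record the elapsed training time and the test AUC, and report means with standard errors. The sketch below is generic rather than the authors' exact protocol, and it assumes a user-supplied helper make_pixel_dataset(n) (hypothetical) that returns a fresh train/test split of n labelled pixels.

```python
import time
import numpy as np
from sklearn.metrics import roc_auc_score

def benchmark(classifiers, make_pixel_dataset,
              pixel_counts=(1_000, 5_000, 10_000, 50_000, 100_000), repeats=10):
    """Mean training time (s) and AUC, with standard errors, per model and pixel count."""
    rows = []
    for n in pixel_counts:
        for name, clf in classifiers.items():
            times, aucs = [], []
            for _ in range(repeats):
                X_tr, X_te, y_tr, y_te = make_pixel_dataset(n)  # assumed helper
                t0 = time.perf_counter()
                clf.fit(X_tr, y_tr)
                times.append(time.perf_counter() - t0)
                aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
            rows.append({
                "model": name, "pixels": n,
                "time_mean": np.mean(times),
                "time_se": np.std(times, ddof=1) / np.sqrt(repeats),
                "auc_mean": np.mean(aucs),
                "auc_se": np.std(aucs, ddof=1) / np.sqrt(repeats),
            })
    return rows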
Figure 2. ROC curves with the AUCs of all classifiers for each training/testing data set, from 1000 (a) to 100,000 (e) pixels.
Average classification time (in s), based on number of threads (from 1 to 10) and image size (from 1.5 to 6.1 MB).
| Model | Image Size | 1 Thread | 2 Threads | 4 Threads | 6 Threads | 8 Threads | 10 Threads |
|---|---|---|---|---|---|---|---|
| KNN | 1.5 MB | 0.09 | 0.28 | 0.26 | 0.24 | 0.22 | 0.26 |
| SVM | 1.5 MB | 8.48 | 4.37 | 2.41 | 1.67 | 1.29 | 1.46 |
| RF | 1.5 MB | 5.96 | 3.37 | 2.01 | 1.55 | 1.44 | 1.47 |
| NB | 1.5 MB | 0.15 | 0.25 | 0.22 | 0.21 | 0.20 | 0.21 |
| NN | 1.5 MB | 0.82 | 0.82 | 0.73 | 0.69 | 0.72 | 0.57 |
| Keras | 1.5 MB | 40.56 | 40.56 | 40.56 | 40.56 | 40.56 | 40.56 |
| KNN | 6.1 MB | 0.32 | 0.61 | 0.49 | 0.45 | 0.42 | 0.46 |
| SVM | 6.1 MB | 33.70 | 17.30 | 9.28 | 6.34 | 4.82 | 5.39 |
| RF | 6.1 MB | 27.68 | 13.91 | 7.85 | 6.10 | 5.54 | 5.77 |
| NB | 6.1 MB | 0.66 | 0.69 | 0.55 | 0.52 | 0.51 | 0.51 |
| NN | 6.1 MB | 3.14 | 2.85 | 2.61 | 2.49 | 2.62 | 1.94 |
| Keras | 6.1 MB | 163.84 | 163.84 | 163.84 | 163.84 | 163.84 | 163.84 |
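One plausible way to measure classification time as a function of thread count, as in the table above, is to split the image's per-pixel feature matrix into chunks and dispatch them to a worker pool. The sketch below uses joblib; the chunking scheme and thread backend are assumptions rather than the authors' implementation, and the achievable speed-up depends on whether the estimator releases the GIL during prediction.

```python
import time
import numpy as np
from joblib import Parallel, delayed

def classify_image(clf, pixel_features, n_threads=4, chunks=32):
    """Predict a fat/non-fat label for every pixel, splitting work across threads.

    pixel_features: (n_pixels, 6) array of RGB + CIE L*a*b* values.
    Returns the concatenated predictions and the elapsed wall-clock time in seconds.
    """
    parts = np.array_split(pixel_features, chunks)
    t0 = time.perf_counter()
    preds = Parallel(n_jobs=n_threads, prefer="threads")(
        delayed(clf.predict)(part) for part in parts
    )
    elapsed = time.perf_counter() - t0
    return np.concatenate(preds), elapsed

# Hypothetical usage: labels, seconds = classify_image(classifiers["RF"], X, n_threads=8)
```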
Figure 3. Classification time (in s) of each model used.
Figure 4. Results of image classification for each classifier.
Metrics comparing automatic and manual classification.
| Metric | KNN | SVM | RF | NB | NN | Keras |
|---|---|---|---|---|---|---|
| Accuracy | 0.996 | 0.996 | 0.996 | 0.997 | 0.997 | 0.995 |
| Sensitivity | 0.844 | 0.962 | 0.956 | 0.910 | 0.963 | 0.972 |
| Specificity | 0.999 | 0.997 | 0.997 | 0.999 | 0.998 | 0.996 |
| Precision | 0.961 | 0.897 | 0.894 | 0.969 | 0.906 | 0.856 |
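The metrics in the table above follow directly from the confusion matrix between the automatic pixel labels and the manual reference labels. A minimal computation, assuming two binary label arrays (0 = non-fat vacuole, 1 = fat vacuole):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def evaluate(manual_labels, predicted_labels):
    """Accuracy, sensitivity, specificity and precision against the manual reference."""
    tn, fp, fn, tp = confusion_matrix(manual_labels, predicted_labels, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on the fat-vacuole class
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }
```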
Figure 5. Original image (left) and manually classified image (right).
Zeiss Axiocam basic specifications.
| Sensor Model | Sony ICX 694, EXview HAD CCD II |
| Sensor pixel count | 6 megapixels: 2752 (H) × 2208 (V) |
| Pixel size | 4.54 μm × 4.54 μm |
| Exposure time range | 250 μs to 60 s. |
| Spectral sensitivity | Approx. 400–720 nm; RGB Bayer color filter mask |
Figure 6. Web application interface: image classification steps. (1) Objective magnification selector; (2) Image uploader; (3) Manual or pre-trained model selector; and (4) Algorithm selector (if a pre-trained model is chosen).
Figure 7. Result of image classification and quantification of fatty vacuoles.
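Once every pixel has been classified, the degree of macrovesicular steatosis can be reported as the fraction of tissue area labelled as fat vacuole. A minimal sketch, with a hypothetical label image and an optional tissue mask:

```python
import numpy as np

def steatosis_percentage(label_image, tissue_mask=None):
    """Percentage of (tissue) area classified as fat vacuole (label 1)."""
    fat = label_image == 1
    if tissue_mask is not None:
        fat = fat & tissue_mask
        denom = tissue_mask.sum()
    else:
        denom = label_image.size
    return 100.0 * fat.sum() / denom
```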