| Literature DB >> 33203975 |
Johannes Stroebel, Annie Horng, Marco Armbruster, Alberto Mittone, Maximilian Reiser, Alberto Bravin, Paola Coan.
Abstract
We applied transfer learning with convolutional neural networks (CNNs) to high-resolution X-ray phase-contrast computed tomography datasets and tested the ability of these systems to accurately classify computed tomography images of different stages of two diseases, i.e. osteoarthritis and liver fibrosis. The purpose is to identify a time-effective and observer-independent methodology for identifying pathological conditions. Propagation-based X-ray phase-contrast imaging with polychromatic X-rays was used to obtain 3D visualizations of 4 human cartilage plugs and 6 rat liver samples, with voxel sizes of 0.7 × 0.7 × 0.7 µm3 and 2.2 × 2.2 × 2.2 µm3, respectively. Images of 224 × 224 pixels were used to train three pre-trained CNNs for data classification: the VGG16, Inception V3, and Xception networks. We evaluated the performance of the three systems in terms of classification accuracy and studied the effect of varying the number of inputs, training images, and iterations. The VGG16 network provides the highest classification accuracy when the training and the validation-test of the network are performed using data from the same samples, for both the cartilage (99.8%) and the liver (95.5%) datasets. The Inception V3 and Xception networks achieve accuracies of 84.7% (43.1%) and 72.6% (53.7%), respectively, for the cartilage (liver) images. When data from different samples are used for the training and validation-test processes, the Xception network provides the highest test accuracy for the cartilage dataset (75.7%), while for the liver dataset the VGG16 network gives the best results (75.4%).
Using convolutional neural networks, we show that it is possible to classify large datasets of biomedical images in less than 25 min on an 8-CPU machine, providing a precise, robust, fast, and observer-independent method for the discrimination/classification of different stages of osteoarthritis and liver diseases.
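The transfer-learning setup described in the abstract (pre-trained ImageNet backbones fed 224 × 224 images, retrained for a small number of disease classes) can be sketched with Keras. This is a minimal illustration, not the authors' exact configuration: the head layers, optimizer, and two-class output (e.g. healthy vs. degenerated cartilage) are assumptions, and `weights=None` is used only so the sketch runs without downloading the ImageNet weights the study would rely on.

```python
# Sketch of transfer learning with a VGG16 backbone: the convolutional
# base is frozen and only a small classification head is trained.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# weights=None keeps the example offline; a real transfer-learning run
# would use weights="imagenet".
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),       # assumed head size
    layers.Dense(2, activation="softmax"),      # e.g. healthy vs. degenerated
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 2)
```

Training would then call `model.fit` on the labeled 224 × 224 slices; the same pattern applies to the Inception V3 and Xception backbones by swapping the imported application class.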
Year: 2020 PMID: 33203975 PMCID: PMC7673137 DOI: 10.1038/s41598-020-76937-y
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1 Examples of PCI micro-CT images (224 × 224 pixels) used as input to the neural network systems. (A,B): images acquired with a detector pixel size of 0.7 × 0.7 µm2 of a healthy (A) and a degenerated (B) cartilage specimen, respectively; (C–E): PCI images acquired at a final voxel size of 2.2 × 2.2 × 2.2 µm3 of a healthy (C), fibrotic at 4 weeks (D), and fatty (E) liver, respectively.
Figure 2 Confusion matrices of the VGG16, Inception V3, and Xception networks applied to the cartilage dataset. They show the "true" label (from the histologic analysis) versus the label predicted by the CNN. (A) The test accuracy in classifying healthy and degenerated (i.e. OA-affected) tissues is 99.8% with VGG16. (B) The test accuracy of the Inception V3 network is 84.7%. (C) The Xception network has a test accuracy of 72.6%. The confusion matrices in this figure were generated with matplotlib version 2.2.2 (https://www.matplotlib.org.cn/en/).
Figure 3 Accuracy plots as a function of the number of epochs for the VGG16, Inception V3, and Xception networks. The Xception network shows the largest gap between training and validation accuracy, and the highest accuracy is achieved by the VGG16 network. The line plots in this figure were generated with matplotlib version 2.2.2 (https://www.matplotlib.org.cn/en/).
Figure 4 Confusion matrices representing the image-classification performance of the VGG16 (A), Inception V3 (B), and Xception (C) networks. The test accuracy of the VGG16 network is 96.0%, of Inception V3 43.6%, and of Xception 53.8%. The confusion matrices in this figure were generated with matplotlib version 2.2.2 (https://www.matplotlib.org.cn/en/).
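The test accuracies quoted in the figure captions follow directly from the confusion matrices: overall accuracy is the trace (correctly classified counts) divided by the total number of test images. A minimal sketch with hypothetical counts (not the study's data):

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Overall accuracy for a square confusion matrix whose rows are
    true labels and columns are predicted labels: trace / total."""
    cm = np.asarray(cm)
    return np.trace(cm) / cm.sum()

# Illustrative 2x2 matrix (healthy vs. degenerated); the counts are
# made up for the example, not taken from the paper.
cm = [[480, 20],    # true healthy: 480 correct, 20 misclassified
      [ 30, 470]]   # true degenerated: 30 misclassified, 470 correct
print(accuracy_from_confusion(cm))  # 0.95
```

The same computation generalizes to the three-class liver matrices (healthy, fibrotic, fatty): the diagonal still holds the correct classifications.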