Hiroki Masumoto, Hitoshi Tabuchi, Shunsuke Nakakura, Hideharu Ohsugi, Hiroki Enno, Naofumi Ishitobi, Eiko Ohsugi, Yoshinori Mitamura.
Abstract
We evaluated the ability of a deep convolutional neural network to discriminate retinitis pigmentosa on ultrawide-field pseudocolor and ultrawide-field autofluorescence images. In total, 373 ultrawide-field pseudocolor and ultrawide-field autofluorescence images (150 retinitis pigmentosa; 223 normal) obtained from patients who visited the Department of Ophthalmology, Tsukazaki Hospital were used. A convolutional neural network was trained on these learning data, and performance was assessed with K-fold cross-validation (K = 5). The mean area under the curve was 0.998 (95% confidence interval (CI) [0.9953-1.0]) for the ultrawide-field pseudocolor group and 1.0 (95% CI [0.9994-1.0]) for the ultrawide-field autofluorescence group. The sensitivity and specificity of the ultrawide-field pseudocolor group were 99.3% (95% CI [96.3%-100.0%]) and 99.1% (95% CI [96.1%-99.7%]), and those of the ultrawide-field autofluorescence group were 100% (95% CI [97.6%-100%]) and 99.5% (95% CI [96.8%-99.9%]), respectively. The heatmaps were consistent with the clinicians' observations. Using the proposed deep neural network model, retinitis pigmentosa can be distinguished from healthy eyes with high sensitivity and specificity on ultrawide-field pseudocolor and ultrawide-field autofluorescence images.
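The sensitivity and specificity figures above follow from the standard confusion-matrix definitions. A minimal sketch, using illustrative counts rather than the study's actual confusion matrix:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only (not taken from the paper):
sens, spec = sensitivity_specificity(tp=149, fn=1, tn=221, fp=2)
print(f"sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```

With these hypothetical counts, 149/150 positives and 221/223 negatives are classified correctly, which is how percentages such as 99.3% and 99.1% arise from per-image predictions.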
Keywords: Neural network; Retinitis pigmentosa; Screening system; Ultrawide-field autofluorescence; Ultrawide-field pseudocolor imaging
Year: 2019 PMID: 31119087 PMCID: PMC6510218 DOI: 10.7717/peerj.6900
Source DB: PubMed Journal: PeerJ ISSN: 2167-8359 Impact factor: 2.984
Figure 1 K-fold (K = 5) cross-validation method.
All images are divided into five groups. Four groups are augmented and used to train the model, and the remaining group is used as validation data. The process is repeated five times so that each of the five groups serves as the validation data once. The predictions of the neural network for all images are then used to calculate its performance.
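The split described above can be sketched as follows; this is a generic K-fold partition, not the authors' code, and `train_model`/`evaluate` in the comment are hypothetical placeholders:

```python
def k_fold_splits(items, k=5):
    """Partition `items` into k folds (round-robin) and yield
    (training, validation) pairs so each fold validates exactly once."""
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        validation = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield training, validation

images = list(range(373))  # stand-ins for the 373 image IDs
for training, validation in k_fold_splits(images, k=5):
    assert len(training) + len(validation) == len(images)
    # model = train_model(augment(training)); score = evaluate(model, validation)
```

Pooling the validation predictions from all five folds gives one prediction per image, from which the overall ROC, sensitivity, and specificity can be computed.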
Figure 2 Overall architecture of the Visual Geometry Group-16 (VGG-16) model.
VGG-16 comprises five blocks and three fully connected layers. Each block comprises several convolutional layers followed by a max-pooling layer. After the output of block 5 is flattened, two fully connected layers perform the binary classification. The DNN used ImageNet parameters as the initial weights of blocks 1-4 (Nagasato et al., 2018).
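The five-block structure can be traced with simple shape bookkeeping. A sketch assuming the standard 224x224 RGB input of VGG-16 (the entry does not state the exact input size the authors used):

```python
# (number of 3x3 conv layers, output channels) for each VGG-16 block
blocks = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]

h = w = 224  # assumed input resolution
for i, (n_convs, channels) in enumerate(blocks, start=1):
    # 3x3 convolutions with padding preserve H and W;
    # the 2x2 max-pooling at the end of each block halves them.
    h, w = h // 2, w // 2
    print(f"block {i}: {n_convs} convs -> {h}x{w}x{channels} after pooling")

flattened = h * w * 512
print(f"flattened feature vector: {flattened} values")  # 7 * 7 * 512 = 25088
```

This flattened 25,088-value vector is what the fully connected classification head receives.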
Background characteristics of study participants.
| | Normal | RP |
|---|---|---|
| n | 223 | 150 |
| Age | 64.0 ± 14.0 (11–78) | 61.1 ± 15.1 (19–87) |
| Sex, female | 123 (55.2%) | 74 (49.3%) |
| Eye, left | 119 (53.4%) | 70 (46.7%) |
Notes:
There are no significant differences in age, the proportion of females, or the proportion of left eyes between the normal and retinitis pigmentosa images.
Age (years) is reported as the mean ± standard deviation (range).
Sex and eye are shown as number (%).
RP, Retinitis Pigmentosa.
Figure 3 Receiver operating characteristic (ROC) curve of retinitis pigmentosa (RP).
An example of the ROC curves for the UWPC and UWAF groups.
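The area under an ROC curve can be computed directly from per-image scores and labels. A minimal sketch using the standard rank-based (Mann-Whitney) estimate of AUC, with toy data rather than the study's predictions:

```python
def auc(labels, scores):
    """AUC = probability that a randomly chosen positive scores higher
    than a randomly chosen negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = retinitis pigmentosa, 0 = normal
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auc(labels, scores))  # 8/9, about 0.889
```

An AUC near 1.0, as reported for both image types, means almost every RP image received a higher score than every normal image.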
Figure 4 The images and their heatmaps of (A) ultrawide-field pseudocolor (UWPC) and (B) ultrawide-field autofluorescence (UWAF).
In both UWPC and UWAF images, the points of interest on the heatmaps concentrate on the bone spicule pigmentation of the fundus, which is characteristic of retinitis pigmentosa.