Tackeun Kim, Jaehyuk Heo, Dong-Kyu Jang, Leonard Sunwoo, Joonghee Kim, Kyong Joon Lee, Si-Hyuck Kang, Sang Jun Park, O-Ki Kwon, Chang Wan Oh.
Abstract
BACKGROUND: Recently, innovative attempts have been made to identify moyamoya disease (MMD) by focusing on morphological differences in the heads of MMD patients. Following recent advances in deep learning (DL) algorithms, we designed this study to determine whether DL can distinguish MMD in plain skull radiograph images.
Keywords: Convolutional neural network; Deep learning; Moyamoya; Skull
Year: 2018 PMID: 30598372 PMCID: PMC6413674 DOI: 10.1016/j.ebiom.2018.12.043
Source DB: PubMed Journal: EBioMedicine ISSN: 2352-3964 Impact factor: 8.143
Fig. 1. Diagram shows steps in image pre-processing. The skull image is of one of the authors. After resizing, each image is assigned to the training set or the test set, although the same skull image is shown in both sets here. Images in the training set are augmented by random application of a horizontal flip, rotation within 5°, and horizontal shift within 15%.
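The caption's augmentation recipe (random horizontal flip, rotation within 5°, horizontal shift within 15%) can be sketched with NumPy and SciPy. The paper does not state which framework was used, so this is an illustrative implementation, not the authors' code; the function and variable names are hypothetical.

```python
import numpy as np
from scipy.ndimage import rotate

def augment(img, rng):
    """Randomly apply the augmentations described in Fig. 1:
    horizontal flip, rotation within +/-5 degrees, horizontal shift within 15%.
    `img` is a 2-D grayscale array; `rng` is a numpy random Generator."""
    out = img.astype(float)
    if rng.random() < 0.5:                                 # horizontal flip
        out = out[:, ::-1]
    angle = rng.uniform(-5.0, 5.0)                         # rotation within 5 degrees
    out = rotate(out, angle, reshape=False, mode="nearest")
    shift = int(rng.uniform(-0.15, 0.15) * out.shape[1])   # horizontal shift within 15%
    out = np.roll(out, shift, axis=1)
    return out

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0          # dummy stand-in for a resized skull radiograph
aug = augment(img, rng)
```

Each call draws fresh random parameters, so repeated passes over the training set yield slightly different images, which is the point of augmentation.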
Fig. 2. Diagram shows the schematic structure of the convolutional neural network. The skull image is of one of the authors.
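The figure's exact layer configuration is not reproduced in this record, so the network itself cannot be restated here. As a minimal sketch of the building blocks such a CNN stacks (convolution, ReLU, max pooling), here is a pure-NumPy forward pass of one stage, with illustrative names:

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D cross-correlation of image x with kernel k
    (the core operation of a convolutional layer, single channel)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

img = np.random.default_rng(0).random((8, 8))   # stand-in for a resized radiograph
edge = np.array([[1.0, 0.0, -1.0]] * 3)         # simple vertical-edge kernel
fmap = max_pool(relu(conv2d(img, edge)))        # one conv -> ReLU -> pool stage
```

An 8×8 input with a 3×3 kernel gives a 6×6 valid-convolution output, which 2×2 pooling reduces to 3×3; a real CNN stacks several such stages with learned kernels before a classifier head.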
Basic characteristics of enrolled subjects.
| | | All | Moyamoya group | Control group | p-value |
|---|---|---|---|---|---|
| Age (years) | | 35·2 ± 9·2 | 35·2 ± 9·1 | 35·3 ± 9·1 | 0·92a |
| Sex | Male | 269 (35·7) | 125 (36·2) | 144 (35·3) | 0·85b |
| | Female | 484 (64·3) | 220 (63·8) | 264 (64·7) | |
| Height (cm) | | 164·1 ± 8·4 | 164·2 ± 8·2 | 163·5 ± 9·2 | 0·41a |
| Weight (kg) | | 62·6 ± 14·1 | 63·1 ± 14·7 | 61·0 ± 11·9 | 0·16a |
Continuous values are presented as mean ± standard deviation; categorical values as number (percent).
aStudent t-test; bPearson chi-square test.
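The chi-square p-value for sex can actually be reproduced from the counts in the table alone, since the full 2×2 contingency table (male/female × moyamoya/control) is given. A sketch using `scipy.stats.chi2_contingency` (which applies Yates' continuity correction for 2×2 tables by default):

```python
from scipy.stats import chi2_contingency

# Sex counts from the table: rows = (male, female), cols = (moyamoya, control)
table = [[125, 144],
         [220, 264]]
chi2, p, dof, expected = chi2_contingency(table)
print(round(p, 2))  # approximately 0.85, matching the reported p-value
```

The t-test p-values cannot be recomputed the same way, because they require the individual measurements rather than the summary mean ± SD shown here.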
Fig. 3. Flow chart shows the schematic process of data collection and partitioning.
Fig. 4. Charts show the training process and evaluation metrics. (a) As epochs proceed, the loss on the training and validation sets converges to near 0, while accuracy rises to near 0·99. (b) The area under the receiver operating characteristic curve is 0·91 for the held-out test set. (c) The confusion matrix shows 84·1% accuracy.
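For readers unfamiliar with the AUC metric reported in (b): it equals the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case (the Mann-Whitney U interpretation). A minimal NumPy sketch, with hypothetical toy data rather than the study's predictions:

```python
import numpy as np

def roc_auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs where the positive is
    scored higher, counting ties as half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # toy example -> 0.75
```

An AUC of 0·91 on the test set therefore means a random MMD image outscores a random control image about 91% of the time, regardless of the decision threshold behind the 84·1% accuracy figure.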
Fig. 5. Matrix shows representative saliency maps. (a) and (b) show correctly classified images. Most moyamoya cases were classified by attention around the lower face, whereas the attention areas for predicting controls are more scattered and sparse. (c) shows cases where the classifier predicted moyamoya as control, and (d) shows the opposite.
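A saliency map highlights how strongly the classifier's output score depends on each input pixel, i.e. the magnitude of the gradient of the score with respect to the image. The paper's exact method is not specified in this record; in practice the gradient is obtained by backpropagation, but the idea can be sketched framework-free with central finite differences on a toy scoring function:

```python
import numpy as np

def saliency(f, x, eps=1e-4):
    """Numerical saliency map: |df/dx_ij| for every pixel, via central
    differences. A real CNN pipeline would use backprop instead."""
    g = np.zeros_like(x, dtype=float)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        xp = x.copy(); xp[idx] += eps
        xm = x.copy(); xm[idx] -= eps
        g[idx] = (f(xp) - f(xm)) / (2 * eps)
    return np.abs(g)

# Toy "classifier score" that only depends on the lower half of the image,
# loosely mimicking the lower-face attention described in the caption.
w = np.zeros((4, 4)); w[2:, :] = 1.0
score = lambda img: float((w * img).sum())

x = np.random.default_rng(1).random((4, 4))
sal = saliency(score, x)
```

For this linear toy score the saliency is exactly the weight mask: zero over the ignored upper half and uniform over the lower half, which is the kind of spatial concentration the figure visualizes.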
Fig. 6. Charts show the results of external validation. (a) The confusion matrix shows 75·9% accuracy. (b) The area under the receiver operating characteristic curve is 0·78 for the external validation set.