| Literature DB >> 31080211 |
Yasuhiko Tachibana, Takayuki Obata, Jeff Kershaw, Hironao Sakaki, Takuya Urushihata, Tokuhiko Omatsu, Riwa Kishimoto, Tatsuya Higashi.
Abstract
PURPOSE: A general problem of machine-learning algorithms based on the convolutional neural network (CNN) technique is that the reason for the output judgement is unclear. The purpose of this study was to introduce a strategy that may facilitate better understanding of how and why a specific judgement was made by the algorithm. The strategy is to preprocess the input image data in different ways to highlight the most important aspects of the images for reaching the output judgement.
Keywords: convolutional neural network; deep learning; diagnosis; magnetic resonance imaging
Year: 2019 PMID: 31080211 PMCID: PMC7232029 DOI: 10.2463/mrms.mp.2019-0021
Source DB: PubMed Journal: Magn Reson Med Sci ISSN: 1347-3182 Impact factor: 2.471
Fig. 1Twelve subimages from around the periphery of the brain and another 12 subimages from the inner area of the brain were subsampled automatically from each slice selected in preprocessing step 3. (a) First, each slice image was rotated through a random angle around the center of the brain. Four small subimages (64 × 64 pixels each) were then defined on a horizontal line passing through the center of the brain (gray line). The first two images were taken from the peripheral brain area (green squares), with the ratio of the brain parenchyma length to the extra-parenchyma length along the line being 2:1. Next, another two subimages (64 × 64 pixels each, blue squares) adjacent to the peripheral images on the medial sides were selected as images from the inner area of the brain. (b) The subsampling of peripheral and medial images was repeated 12 times after rotating the line in 30° increments beginning from the initial orientation. Overall, the procedure results in 12 subimages sampled at regular angular intervals for both the peripheral and the inner area of the brain.
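The subsampling procedure in Fig. 1 can be sketched as follows. This is a simplified illustration, not the authors' code: the 2:1 parenchyma/extra-parenchyma placement rule is replaced by fixed pixel offsets, the two-sided sampling is folded into 12 directions at 30° steps, and all names (`sample_subimages`, `n_dirs`, the offsets) are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def sample_subimages(img, n_dirs=12, step=30.0, size=64, rng=None):
    """Cut size x size patches along a line through the image center,
    rotating the slice in step-degree increments (cf. Fig. 1).
    The 2:1 parenchyma/extra-parenchyma placement rule is simplified
    to fixed offsets here -- an illustrative assumption."""
    rng = np.random.default_rng() if rng is None else rng
    base = rng.uniform(0.0, 360.0)       # random initial orientation
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    h = size // 2
    peripheral, inner = [], []
    for k in range(n_dirs):
        # rotate the whole slice, then sample on a fixed horizontal line
        rot = rotate(img, base + k * step, reshape=False, order=1)
        peripheral.append(rot[cy - h:cy + h, cx + size:cx + 2 * size])
        inner.append(rot[cy - h:cy + h, cx:cx + size])
    return peripheral, inner
```

With a 256 × 256 input slice this yields 12 peripheral and 12 inner 64 × 64 patches, one pair per 30° direction.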
Fig. 2Five differently preprocessed image sets (pp1–5) were generated for training and testing the model in five different ways. pp1 and 2: the subimages from the inner area of the brain and those from around the periphery of the brain, respectively (images generated in steps 1–4 of the section “Image preprocessing to create five different image sets for training and testing”); pp3–5: the brain parenchymal area, gray matter area, and white matter area, respectively, were further extracted from the pp2 images. The deleted parts of the images were replaced with Gaussian noise so that the mean and variance of the pp3–5 images were the same as for the original pp2 image.
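The noise-fill step used for pp3–5 can be sketched as below. This is a minimal illustration under the assumption that deleted pixels are identified by a binary mask; the function and parameter names are hypothetical, not from the paper.

```python
import numpy as np

def fill_masked_with_noise(patch, keep_mask, rng=None):
    """Replace pixels outside keep_mask with Gaussian noise whose
    mean and standard deviation match the original patch, so the
    filled image has the same first- and second-order statistics
    (sketch of the pp3-5 preprocessing in Fig. 2)."""
    rng = np.random.default_rng() if rng is None else rng
    out = patch.astype(float).copy()
    noise = rng.normal(patch.mean(), patch.std(), size=patch.shape)
    out[~keep_mask] = noise[~keep_mask]  # overwrite only deleted pixels
    return out
```

Matching the noise statistics to the source patch prevents the network from trivially distinguishing masked from unmasked regions by overall brightness or contrast.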
Fig. 3The fraction of accurately classified slices per series differed among the five differently trained models (CNN1–5). The differences between CNN 1, 2 and CNN 3–5, as well as the differences between CNN 3, 4 and CNN 5, were significant (P < 0.05). Apart from the image sets used for training and testing, the settings were identical for all training patterns. CNN1–5: the models trained on preprocessed image sets 1–5 (pp1–5), respectively. *P < 0.05. CNN, convolutional neural network.