| Literature DB >> 31076620 |
Keisuke Kawauchi1, Kenji Hirata2, Chietsugu Katoh1,3, Seiya Ichikawa1, Osamu Manabe4, Kentaro Kobayashi4, Shiro Watanabe4, Sho Furuya4, Tohru Shiga4.
Abstract
Patient misidentification in imaging examinations has become a serious problem in clinical settings. Such misidentification could be prevented if patient characteristics such as sex, age, and body weight could be predicted from an image of the patient, with an alert issued when a mismatch between the predicted and actual characteristics is detected. Here, we tested a simple convolutional neural network (CNN)-based system that predicts patient sex from FDG PET-CT images. This retrospective study included 6,462 consecutive patients who underwent whole-body FDG PET-CT at our institute. The CNN system was used to classify these patients by sex. A randomly selected 70% of the images were used to train and validate the system; the remaining 30% were used for testing. The training process was repeated five times to calculate the system's accuracy. When the test images were given to the trained CNN model, the sex of 99% of the patients was correctly categorized. We then performed an image-masking simulation to investigate which body parts are significant for patient classification. The simulation indicated the pelvic region as the most important feature for classification. Finally, we showed that the system was also able to predict age and body weight. Our findings demonstrate that a CNN-based system would be effective in predicting the sex of patients, with or without age and body weight prediction, and thereby in preventing patient misidentification in clinical settings.
Entities:
Mesh:
Year: 2019 PMID: 31076620 PMCID: PMC6510755 DOI: 10.1038/s41598-019-43656-y
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
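The evaluation protocol in the abstract (random 70/30 train/test split, repeated five times, accuracies averaged) can be sketched as below. This is a minimal illustration, not the authors' code: a trivial nearest-class-mean classifier on synthetic features stands in for the CNN, since the point here is the repeated-split protocol; all names and parameters are illustrative assumptions.

```python
import numpy as np

def repeated_split_accuracy(features, labels, n_repeats=5, train_frac=0.7, seed=0):
    """Average accuracy over repeated random 70/30 splits.

    Stand-in for the paper's protocol: each repeat draws a fresh random
    split, "trains" on 70% of the data, and scores on the held-out 30%.
    The classifier here is a toy nearest-class-mean model, not a CNN.
    """
    rng = np.random.default_rng(seed)
    n = len(labels)
    accs = []
    for _ in range(n_repeats):
        idx = rng.permutation(n)
        cut = int(train_frac * n)
        tr, te = idx[:cut], idx[cut:]
        # "Training": compute the mean feature vector of each class.
        classes = np.unique(labels[tr])
        centers = np.stack([features[tr][labels[tr] == c].mean(axis=0)
                            for c in classes])
        # "Prediction": assign each test sample to its nearest class mean.
        dists = ((features[te][:, None, :] - centers[None]) ** 2).sum(-1)
        pred = classes[dists.argmin(axis=1)]
        accs.append(float((pred == labels[te]).mean()))
    return float(np.mean(accs))

# Usage with well-separated synthetic two-class data.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
acc = repeated_split_accuracy(x, y)
```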
Figure 1(a) The functional architecture of a convolutional neural network (CNN). (b) Training and testing process.
Figure 2 Sample mask images. The average pixel value of the entire image was used as the pixel value of each mask. Each mask location was determined on a typical image of a patient with average height and weight, and then applied to all the other patients’ images.
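The masking operation described in Figure 2 can be sketched as follows. This is an assumed, minimal implementation: a rectangular region is replaced by the image's overall mean pixel value; the region coordinates and names are hypothetical, not the paper's actual mask definitions.

```python
import numpy as np

def apply_region_mask(image, region):
    """Replace a rectangular region with the image's mean pixel value.

    Mirrors the paper's masking simulation: the average value of the
    entire image serves as the mask value. `region` is an illustrative
    (row_start, row_stop, col_start, col_stop) tuple, fixed once on a
    typical patient and reused for all other images.
    """
    masked = image.astype(float).copy()
    r0, r1, c0, c1 = region
    masked[r0:r1, c0:c1] = image.mean()
    return masked

# Example: mask the lower third of a synthetic 2D image, loosely
# analogous to the pelvis/lower-body masks in the paper.
rng = np.random.default_rng(0)
img = rng.random((96, 64))
pelvis_like = (64, 96, 0, 64)  # hypothetical bottom-third region
masked = apply_region_mask(img, pelvis_like)
```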
Figure 3(a) The overall accuracy of Experiments 1–3 and the accuracy for (b) male and (c) female patients.
Figure 4Two patients (a, male; b, female) whose sex was incorrectly predicted.
Results of the mask experiment.
| Mask location | Male accuracy | Female accuracy |
|---|---|---|
| No mask | 95% | 94% |
| Head | 94% | 98% |
| Chest | 99% | 89% |
| Abdomen | 89% | 98% |
| Pelvis | 86% | 57% |
| Upper body | 97% | 91% |
| Lower body | 80% | 61% |
Figure 5Typical examples of Grad-CAM. The areas on which the neural network focused are highlighted. The chest and abdominal regions are typically highlighted for male and female patients, respectively.
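The Grad-CAM visualization in Figure 5 follows the standard recipe: weight each feature map of a convolutional layer by its spatially averaged gradient, sum, and apply a ReLU. A generic numpy sketch of that computation (not the authors' implementation; inputs here are assumed to be precomputed activations and gradients) is:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM heatmap from one conv layer.

    feature_maps: (C, H, W) activations of the chosen layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. them.
    Channel weights are the spatial means of the gradients; the heatmap
    is the ReLU of the weighted sum of feature maps, scaled to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam

# Tiny synthetic example: only channel 0 receives gradient, so the
# heatmap reproduces (a normalized copy of) feature map 0.
fm = np.zeros((3, 4, 4)); fm[0, 1, 1] = 2.0
gr = np.zeros((3, 4, 4)); gr[0] = 1.0
cam = grad_cam(fm, gr)
```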
Figure 6 (a) Difference between predicted and true ages. (b) Confusion matrix of predicted and true ages. (c) Difference between predicted and true weights. (d) Confusion matrix of predicted and true weights.
Figure 7 Loss curves for training and validation in this study. Training was completed at 10 epochs due to early stopping, and both the training and validation losses gradually declined.