Shi Chen, Zhou-Xian Pan, Hui-Juan Zhu, Qing Wang, Ji-Jiang Yang, Yi Lei, Jian-Qiang Li, Hui Pan.
Abstract
Technologies that recognize facial features for diagnosing certain disorders appear promising for reducing the medical burden and improving efficiency. This pilot study aimed to develop a computer-assisted tool for pattern recognition of facial features to diagnose Turner syndrome (TS). Photographs of 54 patients with TS and 158 female controls were collected from July 2016 to May 2017. Ultimately, photographs of 32 patients with TS and 96 age-matched controls were included in the study and divided equally into training and testing groups. The automatic classification pipeline consisted of image preprocessing, facial feature extraction, feature reduction and fusion, automatic classification, and result presentation. A total of 27 physicians and 21 medical students completed a web-based test that included the same photographs used in the computer testing. After training, the automatic facial classification system for diagnosing TS achieved 68.8% sensitivity and 87.5% specificity (67.6% average sensitivity and 87.9% average specificity after resampling), significantly higher than the average sensitivity (57.4%, P < 0.001) and specificity (75.4%, P < 0.001) of the 48 participants. The accuracy of the system was satisfactory and better than diagnosis by clinicians; however, the system requires further improvement to achieve high diagnostic accuracy in clinical practice.
Year: 2018 PMID: 29915349 PMCID: PMC6006259 DOI: 10.1038/s41598-018-27586-9
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1Age distribution of patients and controls. The black bar represents the number of controls. The gray bar represents the number of patients. (A) The age distribution of all participants. (B) The age distribution of the participants chosen for the study. The ratio of the number of controls to patients was 3:1 in each age group.
Figure 2Schematic representation of the study design. (A) Collection and selection of photos. (B) Framework of automatic facial classification system.
Figure 3Sixty-eight-feature landmark face model of a patient diagnosed with Turner syndrome.
Extraction method of each local feature.
| Feature | Method |
|---|---|
| Forehead | Calculate the Euclidean distance between point numbers 17 and 26 |
| Multiple facial nevi | Blob detection |
| Epicanthus | Gabor wavelet filter without dividing blocks |
| Nasal bridge | Calculate the Euclidean distance between point numbers 30 and 33 |
| Ocular distance | Calculate the ratio between d39,42 and d36,45 |
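The distance-based entries in the table can be computed directly from the 68-point landmark coordinates. A minimal sketch, assuming `landmarks` is a list of 68 `(x, y)` tuples indexed as in the table (the function names here are illustrative, not from the paper):

```python
from math import dist

def euclid(landmarks, i, j):
    """Euclidean distance between landmark points i and j."""
    return dist(landmarks[i], landmarks[j])

def geometric_features(landmarks):
    """Local geometric features from a 68-point landmark list."""
    return {
        "forehead": euclid(landmarks, 17, 26),        # eyebrow span d17,26
        "nasal_bridge": euclid(landmarks, 30, 33),    # nose tip to base d30,33
        # inner-eye-corner distance over outer-eye-corner distance
        "ocular_ratio": euclid(landmarks, 39, 42) / euclid(landmarks, 36, 45),
    }
```

The ratio form of the ocular-distance feature makes it scale-invariant, which matters because the photographs were taken at varying distances.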
Figure 4Order of applying AdaBoost in this experiment.
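The paper fuses the local-feature classifiers with AdaBoost but does not publish its implementation; as an illustration of the technique only, here is a minimal AdaBoost over one-feature decision stumps (labels in {-1, +1}):

```python
import math

def stump_predict(x, feat, thresh, sign):
    """Weak learner: threshold one feature, predict +/-sign."""
    return sign if x[feat] > thresh else -sign

def train_adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n                      # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        # exhaustively pick the stump with lowest weighted error
        best = None
        for feat in range(len(X[0])):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for xi, yi, wi in zip(X, y, w)
                              if stump_predict(xi, feat, thresh, sign) != yi)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, sign)
        err, feat, thresh, sign = best
        err = max(err, 1e-10)              # avoid log(0) on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        # reweight: boost misclassified samples, shrink correct ones
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, feat, thresh, sign))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
        ensemble.append((alpha, feat, thresh, sign))
    return ensemble

def predict(ensemble, x):
    """Weighted vote of the weak learners."""
    s = sum(a * stump_predict(x, f, t, sg) for a, f, t, sg in ensemble)
    return 1 if s > 0 else -1
```

In the paper's setting, each weak learner would operate on one of the local features from the table above (forehead width, nevi count, epicanthus response, nasal bridge length, ocular ratio), and boosting weights the features by how well they discriminate TS from controls.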
Classification accuracy of automatic classification system.
| Feature | Method | Sensitivity (testing set) | Specificity (testing set) |
|---|---|---|---|
| Global geometrical features | PCA + SVM | 4/16 = 25% | 39/48 = 81.3% |
| Global texture features | PCA + SVM | 6/16 = 37.5% | 42/48 = 87.5% |
| Fusion of local features | AdaBoost | 11/16 = 68.8% | 42/48 = 87.5% |
SVM, support vector machine; PCA, principal component analysis.
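The global-feature rows use PCA for dimensionality reduction followed by an SVM. A sketch of that pipeline with scikit-learn, on synthetic stand-in data (the component count, kernel, and class sizes here are assumptions for illustration, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-ins for global feature vectors: 48 controls (label 0)
# and 16 TS patients (label 1), mirroring the testing-set proportions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(48, 10)),
               rng.normal(2.0, 1.0, size=(16, 10))])
y = np.array([0] * 48 + [1] * 16)

# Reduce to a few principal components, then classify with an RBF SVM.
clf = make_pipeline(PCA(n_components=3), SVC(kernel="rbf"))
clf.fit(X, y)
```

Chaining PCA and the SVM in one `Pipeline` ensures the projection learned on the training photos is reused unchanged when scoring test photos.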
Average classification accuracy of the automatic classification system after random resampling.
| Feature | Method | Average sensitivity | P-value | Average specificity | P-value |
|---|---|---|---|---|---|
| Global geometrical features | PCA + SVM | 23.8 ± 15.2% | 0.56 | 80.8 ± 8.1% | 0.63 |
| Global texture features | PCA + SVM | 44.0 ± 15.7% | 0.01* | 87.5 ± 5.7% | 1.00 |
| Fusion of local features | AdaBoost | 67.6 ± 14.5% | 0.57 | 87.9 ± 4.5% | 0.47 |
SVM, support vector machine; PCA, principal component analysis.
P-values compare the results for the specific sample in Table 2 with the average results of 50 resamplings, using the t-test. *P < 0.05.
Classification accuracy of doctors and medical students.
| Group | Subgroup | Sensitivity (%) (mean ± SD) | P-value | Specificity (%) (mean ± SD) | P-value |
|---|---|---|---|---|---|
| All (N = 48) | | 57.4 ± 21.9 | 0.001* | 75.4 ± 17.3 | <0.001* |
| Levels | Physicians (N = 27) | 56.7 ± 21.9 | 0.01* | 77.2 ± 19.8 | 0.01* |
| | Attending (N = 21) | 56.8 ± 19.2 | 0.01* | 81.3 ± 14.2 | 0.06 |
| | Residents (N = 6) | 56.3 ± 32.1 | 0.38 | 63.2 ± 30.4 | 0.11 |
| | Medical students (N = 21) | 58.3 ± 22.4 | 0.05* | 72.6 ± 13.9 | <0.001* |
| Departments | Endocrinology (N = 25) | 64.0 ± 20.9 | 0.27 | 71.0 ± 17.9 | <0.001* |
| | Pediatrics (N = 3) | 39.6 ± 9.5 | 0.03* | 97.2 ± 1.2 | 0.005+ |
| | Gynaecology (N = 10) | 50.0 ± 16.9 | 0.01* | 77.3 ± 17.1 | 0.09 |
| | Others (N = 10) | 53.7 ± 27.0 | 0.11 | 77.9 ± 14.1 | 0.06 |
| Hospitals | PUMCH (N = 13) | 62.0 ± 25.7 | 0.36 | 71.0 ± 19.8 | 0.01* |
| | Others (N = 35) | 55.7 ± 20.5 | 0.001* | 77.0 ± 16.3 | 0.001* |
SD, standard deviation; PUMCH, Peking Union Medical College Hospital.
P-values compare the sensitivity or specificity of the participants with that of the computer, using the t-test.
The sensitivity and specificity of the computer were 68.8% and 87.5%, respectively.
*P < 0.05, worse than the computer; +P < 0.05, better than the computer.
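The computer's headline numbers follow from the standard confusion-matrix definitions: the fused-feature system detected 11 of 16 TS photos and correctly cleared 42 of 48 controls on the testing set. As a quick check:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Fused local features on the testing set: 11/16 TS detected, 42/48 controls correct.
sens, spec = sensitivity_specificity(tp=11, fn=5, tn=42, fp=6)
# → (0.6875, 0.875), i.e. 68.8% sensitivity and 87.5% specificity
```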
Figure 5Classification accuracy of doctors and medical students. (A) The sensitivity of participants of different levels, different departments, and different hospitals. The dotted line represents the sensitivity (68.8%) of the automatic classification system. (B) The specificity of participants of different levels, different departments, and different hospitals. The dotted line represents the specificity (87.5%) of the automatic classification system. *The sensitivity or specificity of the computer was better than that of the participants (evaluated using the t-test; P < 0.05). +The sensitivity or specificity of the participants was better than that of the computer (evaluated using the t-test; P < 0.05).