Xiangbin Liu1,2,3, Lijun Fu1,2,3, Jerry Chun-Wei Lin4, Shuai Liu1,2,3.
Abstract
Prenatal karyotype diagnosis is important for determining whether a foetus has genetic diseases and certain congenital diseases. Chromosome classification is a key step in karyotype analysis, and the task is tedious and time-consuming. Chromosome classification methods based on deep learning have achieved good results, but when the quality of the chromosome images is low, these methods cannot learn image features well, leading to unsatisfactory classification results. Moreover, existing methods generally perform poorly on sex chromosome classification. Therefore, in this work, the authors propose a super-resolution network, the Self-Attention Negative Feedback Network, and combine it with traditional neural networks to obtain an efficient chromosome classification method called SRAS-net. The method first feeds low-resolution chromosome images into the super-resolution network to generate high-resolution chromosome images and then uses a traditional deep learning model to classify the chromosomes. To address inaccurate sex chromosome classification, the authors also propose using the SMOTE algorithm to synthesize additional samples of the under-represented sex chromosomes, balancing the class distribution while allowing the model to learn more sex chromosome features. Experimental results show that the method achieves 97.55% accuracy and outperforms state-of-the-art methods.
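The SMOTE step described above balances the minority class by interpolating between a minority sample and one of its nearest minority-class neighbours. A minimal sketch of that idea, using only NumPy on toy 2-D feature vectors (the function name, `k`, and the toy data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples, SMOTE-style.

    For each synthetic sample: pick a random minority point, pick one of
    its k nearest minority neighbours, and interpolate between the two.
    """
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest-neighbour indices per point
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()               # interpolation factor in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

# toy minority class of 6 points in a 2-D feature space
X = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], dtype=float)
new = smote_oversample(X, n_new=10, rng=0)
print(new.shape)  # (10, 2)
```

Because each synthetic point lies on a segment between two real minority samples, the new samples stay inside the minority-class region of feature space rather than being arbitrary noise.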
Keywords: SMOTE; chromosome classification; low-resolution chromosome; self-attention negative feedback network
Year: 2022 PMID: 35373918 PMCID: PMC9290780 DOI: 10.1049/syb2.12042
Source DB: PubMed Journal: IET Syst Biol ISSN: 1751-8849 Impact factor: 1.468
FIGURE 1Chromosomes 1, 2, and 3 and chromosomes X, 6, 7, and 8 have a high degree of similarity in terms of length and banding distribution
FIGURE 2SRAS‐net network structure
FIGURE 3Self‐Attention Negative Feedback Network (SRAFBN)
FIGURE 4Negative feedback module (FB) structure
FIGURE 5Self‐attention mechanism structure
FIGURE 6Comparison of the effect before and after the super‐resolution treatment
Dataset description
| No. of chromosomes | No. of cells (female/male) | Training datasets | Test datasets |
|---|---|---|---|
| 5474 | 119 (74/45) | 4379 | 1095 |
Comparison of our method and existing methods
| Method | Acc. (%) |
|---|---|
| Res‐CRANN | 90.42 |
| Super‐Xception | 92.36 |
| CIR‐Net | 95.89 |
| MixNet | 96.50 |
| Ensemble (VGG19, Resnet50, MobilenetV2) | 97.01 |
| Resnet50 | 87.64 |
| Xception | 91.80 |
| VGG19 | 91.67 |
| Inception_Resnet_V2 | 91.69 |
| SRAS‐net (Resnet50; our proposed) | 95.92 |
| SRAS‐net (Xception; our proposed) | 96.01 |
| SRAS‐net (VGG19; our proposed) | 96.56 |
| SRAS‐net (Inception_Resnet_V2; our proposed) | 97.55 |
Model performance as each module is added in turn
| Method | Acc. (%) |
|---|---|
| Inception_Resnet_V2 | 91.69 |
| SMOTE + Inception_Resnet_V2 | 93.30 |
| SRAFBN + SMOTE + Inception_Resnet_V2 | 96.83 |
| SRAFBN + SMOTE + IAM + Inception_Resnet_V2 (SRAS‐net) | 97.55 |
Classification performance
| Class (No.) | Acc. (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|
| 1 | 100.0 | 100.0 | 100.0 | 100.0 |
| 2 | 100.0 | 97.96 | 100.0 | 98.97 |
| 3 | 95.83 | 100.0 | 97.92 | 98.95 |
| 4 | 93.62 | 97.78 | 93.62 | 95.65 |
| 5 | 95.83 | 95.83 | 95.83 | 95.83 |
| 6 | 100.0 | 100.0 | 100.0 | 100.0 |
| 7 | 97.92 | 100.0 | 100.0 | 100.0 |
| 8 | 93.75 | 100.0 | 95.83 | 97.87 |
| 9 | 97.87 | 93.75 | 95.74 | 94.74 |
| 10 | 95.83 | 95.83 | 95.83 | 95.83 |
| 11 | 97.92 | 97.92 | 97.92 | 97.92 |
| 12 | 95.74 | 100.0 | 97.87 | 98.92 |
| 13 | 97.92 | 100.0 | 97.92 | 98.95 |
| 14 | 95.74 | 100.0 | 89.36 | 94.38 |
| 15 | 100.0 | 94.12 | 100.0 | 96.97 |
| 16 | 100.0 | 97.96 | 100.0 | 98.97 |
| 17 | 97.87 | 95.83 | 97.87 | 96.84 |
| 18 | 97.92 | 100.0 | 97.92 | 98.95 |
| 19 | 97.87 | 92.16 | 100.0 | 95.92 |
| 20 | 97.92 | 97.92 | 97.92 | 97.92 |
| 21 | 100.0 | 100.0 | 97.87 | 98.92 |
| 22 | 93.75 | 97.83 | 93.75 | 95.74 |
| X | 97.87 | 97.87 | 95.74 | 96.79 |
| Y | 100.0 | 100.0 | 100.0 | 100.0 |
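The last column of the table above is the per-class F1 score, the harmonic mean of precision and recall. A quick check against the chromosome-2 and chromosome-5 rows:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

# chromosome 2: precision 97.96 %, recall 100.0 %
print(round(f1(97.96, 100.0), 2))  # 98.97, matching the table
# chromosome 5: precision = recall = 95.83 %, so F1 equals both
print(round(f1(95.83, 95.83), 2))  # 95.83
```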
FIGURE 7Confusion matrix
FIGURE 8 t-SNE visualization of the test-set features extracted by each network: (a) features from Inception_Resnet_V2, (b) features from CIR‐Net, (c) features from our method