RuoXi Qin1, Huike Zhang2, LingYun Jiang1, Kai Qiao1, Jinjin Hai1, Jian Chen1, Junling Xu2, Dapeng Shi2, Bin Yan1.
Abstract
To build robust, high-performance computer-aided diagnosis systems for lymph nodes, CT images are typically collected from multiple centers, which causes models to perform inconsistently across data source centers. The variability adaptation problem for lymph node data is related to domain adaptation in deep learning, but it differs from the general domain adaptation problem because CT images are typically larger and their data distributions more complex. Domain adaptation for this problem therefore needs to consider the shared feature representation, and even the conditioning information of each domain, so that the adaptation network can capture significant discriminative representations in a domain-invariant space. This paper extracts domain-invariant features based on a cross-domain confounding representation and proposes a cycle-consistency learning framework that encourages the network to preserve class-conditioning information through cross-domain image translations. Compared with other domain adaptation methods, the accuracy of our method is at least 4.4 percentage points higher on multicenter lymph node data. The pixel-level cross-domain image mapping and the semantic-level cycle consistency provide a stable confounding representation with class-conditioning information, achieving effective domain adaptation under complex feature distributions.
Year: 2020 PMID: 32454880 PMCID: PMC7239501 DOI: 10.1155/2020/3709873
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Figure 1A graphical illustration of the proposed method: the cross-domain confounding representation is generated by constraining the cross-domain mapping reconstruction. The classification cycle consistency enables the network to perceive the significant discriminative representation in a domain-invariant space for final classification.
Figure 2Illustration of the proposed network architecture. (a) Domain confounding representation through cross-domain mapping: the encoder F and the decoder G constitute the VAE architecture for unsupervised representation learning. The D module constitutes the GAN discriminator, while the C module constitutes a classifier. The encoder F uniformly encodes images from two domains. Paired decoders process different domain features, enabling cross-domain pixel-level image reconstruction and adversarial discrimination. (b) Classification cycle consistency: the reconstructed image based on source-domain features, as shown by the black line, will be constrained by classification cycle consistency through F and C. (c) Illustration of the loss overview.
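The classification cycle consistency described for Figure 2 can be sketched as follows: a source-domain image is encoded by F, translated into the other domain by a domain-specific decoder, re-encoded by F, and classified by C against the original source label. This is a minimal NumPy toy, not the authors' implementation: the module names F, G, and C follow the figure, but all layer shapes, weights, and the single-layer forms are illustrative assumptions, and the adversarial discriminator D and the VAE terms are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's modules (sizes are illustrative only).
W_F = rng.standard_normal((16, 8)) * 0.1   # shared encoder F: image -> latent
W_Gt = rng.standard_normal((8, 16)) * 0.1  # decoder G_target: latent -> target-style image
W_C = rng.standard_normal((8, 2)) * 0.1    # classifier C on the latent space

def encode(x):
    """F: uniform encoding for images from both domains."""
    return np.tanh(x @ W_F)

def decode_target(z):
    """G_target: domain-specific reconstruction into the target domain."""
    return np.tanh(z @ W_Gt)

def classify(z):
    """C: softmax over the 2 classes (e.g. benign vs. malignant)."""
    logits = z @ W_C
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cycle_consistency_loss(x_src, y_src):
    """Classification cycle consistency (Figure 2b):
    source image -> F -> cross-domain decode -> F again -> C,
    penalized with cross-entropy against the source label."""
    z = encode(x_src)
    x_fake_tgt = decode_target(z)   # translate source image into the target domain
    z_cycle = encode(x_fake_tgt)    # re-encode the translated image
    p = classify(z_cycle)
    return -np.log(p[np.arange(len(y_src)), y_src] + 1e-12).mean()

x = rng.standard_normal((4, 16))    # a toy batch of 4 "images"
y = np.array([0, 1, 0, 1])
loss = cycle_consistency_loss(x, y)
print(loss)
```

In the full method this term is combined with the VAE reconstruction and GAN adversarial losses of Figure 2(a); the sketch isolates only the cycle-consistency path shown by the black line in Figure 2(b).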
Figure 3Illustration of the verification phase. F denotes the encoder, which maps images to the domain-invariant space, and C denotes the classifier. All parameters are fixed during verification.
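The verification pass of Figure 3 reduces to a frozen F followed by C. A minimal sketch, again with illustrative NumPy stand-ins for the trained modules (shapes and weights are assumptions, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen toy weights standing in for the trained encoder F and classifier C.
W_F = rng.standard_normal((16, 8)) * 0.1
W_C = rng.standard_normal((8, 2)) * 0.1

def verify(images):
    """Verification pass from Figure 3: encode with F, classify with C.
    No parameters are updated."""
    z = np.tanh(images @ W_F)   # F: map images to the domain-invariant space
    logits = z @ W_C            # C: class scores
    return logits.argmax(axis=1)

images = rng.standard_normal((5, 16))   # a toy batch of 5 "CT images"
labels = np.array([0, 1, 1, 0, 1])
accuracy = (verify(images) == labels).mean()
print(f"verification accuracy: {accuracy:.1%}")
```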
Figure 4Samples of the multicenter CT images used for network training: (a) benign cases; (b) malignant cases; (c) plain CT scans; (d) enhanced CT scans.
Accuracy values on different datasets using the verification network model.
| | Plain CT scan | Enhanced CT scan | SVHN | MNIST |
|---|---|---|---|---|
| Verification accuracy (%) | 84.6 | 88.4 | 98.4 | 99.2 |
Accuracy values (mean ± std%) with different models, datasets, and domain settings.
| Model | MN⟶SV | SV⟶MN | Enhanced⟶plain | Plain⟶enhanced |
|---|---|---|---|---|
| Source only | 73.1±1.4 | 68.3±1.5 | 61.5±2.3 | 61.6±3.5 |
| GRL | 85.4±1.7 | 87.2±2.1 | 65.4±3.9 | 60.2±2.3 |
| MMD | 62.6±0.7 | 66.1±0.8 | 63.5±2.7 | 65.7±3.2 |
| DSN | 81.3±1.4 | 86.4±0.5 | 58.5±4.1 | 55.2±3.4 |
| GTA | 92.5±1.2 | 92.4±1.3 | 69.4±1.1 | 67.4±1.8 |
| Ours | 91.6±0.3 | 91.8±0.4 | 73.8±0.9 | 72.5±1.3 |
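As a quick check of the abstract's "at least 4.4 percentage points" claim, the margins of Ours over the strongest baseline on the lymph node settings (GTA) can be computed directly from the table:

```python
# Accuracy values (%) taken from the table above.
ours = {"enhanced_to_plain": 73.8, "plain_to_enhanced": 72.5}
gta = {"enhanced_to_plain": 69.4, "plain_to_enhanced": 67.4}

# Percentage-point margin of Ours over GTA in each direction.
margins = {k: round(ours[k] - gta[k], 1) for k in ours}
print(margins)  # -> {'enhanced_to_plain': 4.4, 'plain_to_enhanced': 5.1}
```

The smallest margin, 4.4 points in the enhanced⟶plain direction, matches the figure quoted in the abstract.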
Effect of the classification cycle consistency on the classification accuracy.
| | Enhanced⟶plain | Plain⟶enhanced |
|---|---|---|
| Without classification cycle consistency | 67.6±1.4 | 65.4±1.1 |
| With classification cycle consistency | 73.8±0.9 | 72.5±1.3 |