Han Sun, Xinyi Chen, Ling Wang, Dong Liang, Ningzhong Liu, Huiyu Zhou.
Abstract
Deep neural networks have been successfully applied to domain adaptation, which uses labeled data from a source domain to supply useful information to a target domain. The Deep Adaptation Network (DAN) is one such framework: it uses Multi-Kernel Maximum Mean Discrepancy (MK-MMD) to align feature distributions in a reproducing kernel Hilbert space. However, DAN does not perform well in feature-level transfer, and its assumption that the source and target domains share a classifier is too strict for many adaptation scenarios. In this paper, we further improve the adaptability of DAN by incorporating Domain Confusion (DC) and Classifier Adaptation (CA), yielding a novel domain adaptation method named C2DAN. Our approach first enables domain confusion by training a domain discriminator adversarially. For classifier adaptation, a residual block is added to the source-domain classifier to learn the difference between the source and target classifiers. Beyond validating our framework on the standard Office-31 domain adaptation dataset, we also introduce and evaluate on the Comprehensive Cars (CompCars) dataset; the experimental results demonstrate the effectiveness of the proposed C2DAN framework.
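The MK-MMD term described above measures the distance between source and target feature distributions using a combination of kernels. A minimal NumPy sketch of a biased MK-MMD estimate is shown below; this is an illustration, not the authors' implementation, and the kernel bandwidths `sigmas` are hand-picked assumptions:

```python
import numpy as np

def mk_mmd(source, target, sigmas=(1.0, 2.0, 4.0)):
    """Biased estimate of Multi-Kernel Maximum Mean Discrepancy between
    two feature batches, using a sum of Gaussian RBF kernels."""
    def kernel(x, y):
        # Pairwise squared Euclidean distances between rows of x and y
        d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        # Sum of RBF kernels at several bandwidths (the "multi-kernel" part)
        return sum(np.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)

    k_ss = kernel(source, source).mean()
    k_tt = kernel(target, target).mean()
    k_st = kernel(source, target).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
a = rng.normal(size=(64, 16))            # "source" features
b = rng.normal(size=(64, 16))            # same distribution as a
c = rng.normal(loc=3.0, size=(64, 16))   # shifted "target" features

# Matching distributions give a smaller discrepancy than mismatched ones
assert mk_mmd(a, b) < mk_mmd(a, c)
```

Minimizing such a term over the feature extractor pulls the two feature distributions together in the kernel-induced space.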
Keywords: MK-MMD; classifier adaptation; domain adaptation; domain confusion; transfer learning; vehicle classification
Year: 2020 PMID: 32604859 PMCID: PMC7349586 DOI: 10.3390/s20123606
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The structure of the proposed network C2DAN.
Figure 2. The definition of domain confusion.
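The domain-confusion idea is to train the features so that a domain discriminator cannot tell source from target. One common way to express this (a toy sketch, not necessarily the paper's exact objective) is the cross-entropy between the discriminator's predicted domain distribution and a uniform target:

```python
import numpy as np

def domain_confusion_loss(disc_probs):
    """Cross-entropy between the discriminator's predicted domain
    distribution and the uniform distribution 1/K over K domains.
    Minimizing this over the feature extractor 'confuses' the
    discriminator, so the features carry no domain information."""
    k = disc_probs.shape[1]
    uniform = np.full_like(disc_probs, 1.0 / k)
    return float(-np.mean(np.sum(uniform * np.log(disc_probs + 1e-12), axis=1)))

# A confident discriminator incurs a higher confusion loss than a confused one
confident = np.array([[0.99, 0.01], [0.01, 0.99]])
confused = np.array([[0.5, 0.5], [0.5, 0.5]])
assert domain_confusion_loss(confused) < domain_confusion_loss(confident)
```

In adversarial training the discriminator is simultaneously updated to classify domains correctly, so the two objectives push against each other.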
Figure 3. Residual learning: a building block.
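The classifier-adaptation step expresses the target classifier as the source classifier plus a learned residual, f_T(x) = f_S(x) + Δf(x). A minimal NumPy sketch, assuming a two-layer residual block with hypothetical weights `w1`, `w2` (not the authors' architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_classifier(features, w_src, w1, w2):
    """Target classifier as source classifier plus a learned residual:
    f_T(x) = f_S(x) + delta_f(x). If the residual weights are zero,
    f_T collapses exactly to the source classifier f_S."""
    f_src = features @ w_src             # source-domain classifier logits
    delta = relu(features @ w1) @ w2     # residual correction for the target
    return f_src + delta

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
w_src = rng.normal(size=(8, 5))
w1 = rng.normal(size=(8, 8))
w2 = np.zeros((8, 5))                    # zero residual at initialisation

# With a zero residual block, target and source predictions coincide
assert np.allclose(residual_classifier(x, w_src, w1, w2), x @ w_src)
```

Starting from a zero residual lets the target classifier deviate from the source classifier only as far as the data requires.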
Figure 4. Examples of the Office-31 dataset: (A) Amazon, downloaded from amazon.com; (D) DSLR, captured with digital SLR cameras; (W) Webcam, captured with webcams.
Results of the unsupervised domain adaptation experiments on the Office-31 dataset (accuracy %, ± standard deviation).

| Method | A→W | D→W | W→D | A→D | D→A | W→A | Average |
|---|---|---|---|---|---|---|---|
| Baseline | 60.6 (±0.6) | 95.0 (±0.5) | 99.1 (±0.2) | 59.0 (±0.7) | 49.7 (±0.3) | 46.2 (±0.5) | 68.2 |
| DAN | 66.9 (±0.6) | 96.3 (±0.4) | 99.3 (±0.2) | 66.3 (±0.5) | 52.2 (±0.3) | 49.4 (±0.4) | 71.6 |
| DAN+DC (fc6) | 67.3 (±0.6) | 96.0 (±0.3) | 99.1 (±0.2) | 66.0 (±0.7) | 51.5 (±0.3) | 49.6 (±0.5) | 71.5 |
| DAN+DC (fc7) | 69.0 (±0.7) | 96.2 (±0.4) | 99.5 (±0.2) | 67.0 (±0.6) | 52.5 (±0.5) | 50.2 (±0.5) | 72.5 |
Results of the unsupervised domain adaptation experiments on the Office-10 + Caltech-10 dataset (accuracy %, ± standard deviation).

| Method | A→C | W→C | D→C | C→A | C→W | C→D | Average |
|---|---|---|---|---|---|---|---|
| Baseline | 82.6 (±0.3) | 75.8 (±0.3) | 77.1 (±0.5) | 90.5 (±0.1) | 79.6 (±0.2) | 83.5 (±0.5) | 81.5 |
| DAN | 86.0 (±0.5) | 81.5 (±0.2) | 81.8 (±0.3) | 92.0 (±0.5) | 90.6 (±0.5) | 90.2 (±0.3) | 87.0 |
| DAN+DC (fc6) | 85.0 (±0.1) | 80.4 (±0.3) | 80.0 (±0.3) | 91.7 (±0.3) | 85.6 (±0.2) | 88.6 (±0.2) | 85.2 |
| DAN+DC (fc7) | 86.4 (±0.2) | 82.2 (±0.5) | 82.5 (±0.1) | 92.8 (±0.3) | 92.3 (±0.5) | 91.3 (±0.5) | 87.9 |
Figure 5. Accuracy over different numbers of iterations for different methods: (a) A→W; (b) W→A.
Figure 6. Accuracy comparison under different loss weights: (a) weight of MK-MMD; (b) weight of DC; (c) weight of CA.
Figure 7. Website vehicle images: (a) Zhonghua; (b) Mitsubishi; (c) Besturn.
Figure 8. Filtered website vehicle images.
Figure 9. Examples of sv_data.
The vehicle categories and image quantities in the data and sv_data sets (the first row of brand names is reconstructed from the per-brand accuracy table below).

| | Acura | Benz | Besturn | BYD | Changan |
|---|---|---|---|---|---|
| data | 157 | 570 | 72 | 356 | 405 |
| sv_data | 370 | 155 | 68 | 395 | 465 |
| | Dongfengfengdu | Geely | Haima | Honda | Hyundai |
| data | 46 | 426 | 69 | 360 | 645 |
| sv_data | 92 | 576 | 203 | 380 | 572 |
| | Jeep | Lexus | MAZDA | Mitsubishi | Nissan |
| data | 200 | 283 | 314 | 275 | 431 |
| sv_data | 304 | 188 | 371 | 281 | 462 |
| | Shuanglong | Toyota | Volkswagen | Volvo | Zhonghua |
| data | 190 | 511 | 553 | 370 | 193 |
| sv_data | 264 | 572 | 533 | 598 | 111 |
Comparison of vehicle classification accuracy.
| Method | Accuracy |
|---|---|
| CNN (Baseline) | 0.351 |
| DAN | 0.449 |
| RTN | 0.443 |
| DAN+DC | 0.476 |
| RTN+DC | 0.456 |
| C2DAN (DAN+DC+CA) | 0.507 |
Comparison of classification accuracy for 20 vehicle types.
| Brand | CNN (Baseline) | DAN | DAN+DC | C2DAN |
|---|---|---|---|---|
| Acura | 0.511 | 0.600 | 0.614 | 0.608 |
| Benz | 0.265 | 0.696 | 0.587 | 0.781 |
| Besturn | 0.118 | 0.220 | 0.505 | 0.206 |
| BYD | 0.083 | 0.387 | 0.332 | 0.504 |
| Changan | 0.606 | 0.326 | 0.328 | 0.338 |
| Dongfengfengdu | 0.054 | 0.130 | 0.185 | 0.054 |
| Geely | 0.474 | 0.534 | 0.520 | 0.641 |
| Haima | 0.000 | 0.039 | 0.060 | 0.014 |
| Honda | 0.431 | 0.281 | 0.389 | 0.409 |
| Hyundai | 0.271 | 0.470 | 0.472 | 0.530 |
| Jeep | 0.740 | 0.815 | 0.803 | 0.869 |
| Lexus | 0.617 | 0.399 | 0.479 | 0.724 |
| MAZDA | 0.218 | 0.498 | 0.496 | 0.517 |
| Mitsubishi | 0.238 | 0.476 | 0.605 | 0.514 |
| Nissan | 0.530 | 0.510 | 0.574 | 0.500 |
| Shuanglong | 0.273 | 0.401 | 0.409 | 0.391 |
| Toyota | 0.196 | 0.222 | 0.234 | 0.195 |
| Volkswagen | 0.580 | 0.656 | 0.658 | 0.714 |
| Volvo | 0.582 | 0.698 | 0.652 | 0.679 |
| Zhonghua | 0.234 | 0.612 | 0.622 | 0.712 |