Abstract
For the problem of synthetic aperture radar (SAR) image target recognition, a method based on the combination of multilevel deep features is proposed. A residual network (ResNet) is used to learn multilevel deep features of SAR images. Based on a similarity measure, the multilevel deep features are clustered into several feature sets. Each feature set is then characterized and classified by joint sparse representation (JSR), yielding a corresponding output result. Finally, the results of the different feature sets are combined by weighted fusion to obtain the target recognition result. The proposed method effectively combines the advantages of ResNet in feature extraction and of JSR in classification, improving overall recognition performance. Experiments and analysis are carried out on the sample-rich MSTAR dataset. The results show that the proposed method achieves superior performance on 10 target classes under the standard operating condition (SOC), noise interference, and occlusion conditions, which verifies its effectiveness.
Year: 2021 PMID: 34868287 PMCID: PMC8642017 DOI: 10.1155/2021/2392642
Source DB: PubMed Journal: Comput Intell Neurosci
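The abstract's final step combines the per-feature-set JSR outputs by weighted fusion. A minimal sketch of such a weighted decision fusion is below; the function name `weighted_fusion`, its signature, and the assumption of nonnegative weights normalized to sum to one are illustrative choices, not the paper's implementation.

```python
import numpy as np

def weighted_fusion(scores, weights):
    """Combine per-feature-set class scores by a weighted sum.

    scores:  (k, c) array, one row of c class scores per feature set.
    weights: length-k array of nonnegative fusion weights.
    Returns the fused class index and the fused score vector.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    fused = w @ np.asarray(scores, dtype=float)
    return int(np.argmax(fused)), fused
```

With scores `[[0.6, 0.4], [0.2, 0.8]]` and weights `[1, 3]`, the second feature set dominates and class 1 is selected.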
Correlation matrix of deep feature vectors. (Matrix entries were not recovered from the source.)
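The similarity measure behind this matrix can be sketched as a pairwise Pearson correlation between the deep feature vectors of the different ResNet levels; the helper below is an assumed formulation, not the paper's exact definition.

```python
import numpy as np

def correlation_matrix(features):
    """Pairwise Pearson correlation between deep feature vectors.

    features: (n_levels, d) array, one row per feature level.
    Returns an (n_levels, n_levels) symmetric matrix with unit diagonal.
    """
    f = np.asarray(features, dtype=float)
    f = f - f.mean(axis=1, keepdims=True)          # center each vector
    norms = np.linalg.norm(f, axis=1, keepdims=True)
    f = f / np.clip(norms, 1e-12, None)            # guard zero vectors
    return f @ f.T
```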
Algorithm 1. Clustering algorithm for deep features.
Figure 1. Flowchart of the proposed method.
Figure 2. Images of targets to be classified. (a) BMP2. (b) BTR70. (c) T72. (d) T62. (e) BRDM2. (f) BTR60. (g) ZSU23/4. (h) D7. (i) ZIL131. (j) 2S1.
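A minimal sketch of the kind of threshold-based grouping Algorithm 1 describes is shown below. The greedy seed-and-absorb rule and the default threshold of 0.4 (the best-performing value in the paper's threshold table) are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def cluster_by_similarity(features, threshold=0.4):
    """Greedily group feature levels whose correlation with a seed
    level meets `threshold` (illustrative sketch of Algorithm 1).

    features: (n_levels, d) array of deep feature vectors.
    Returns a list of clusters, each a list of level indices.
    """
    f = np.asarray(features, dtype=float)
    f = f - f.mean(axis=1, keepdims=True)
    f /= np.clip(np.linalg.norm(f, axis=1, keepdims=True), 1e-12, None)
    sim = f @ f.T                      # pairwise Pearson correlation
    clusters, assigned = [], set()
    for i in range(len(f)):
        if i in assigned:
            continue
        # absorb every unassigned level similar enough to seed i
        members = [j for j in range(len(f))
                   if j not in assigned and sim[i, j] >= threshold]
        assigned.update(members)
        clusters.append(members)
    return clusters
```

Two proportional vectors (correlation 1) land in one cluster, while an anti-correlated vector forms its own.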
Relevant information about the MSTAR dataset.
| Azimuth (°) | Depression angle (°) | Resolution (m) | Size (pixel) |
|---|---|---|---|
| 0∼360 | 15, 17, 30, 45 | 0.3 × 0.3 | 128 × 128 |
Training and test sets for the 10-class recognition problem.
| Class | Training configuration | Training samples | Test configuration | Test samples |
|---|---|---|---|---|
| BMP2 | 9563 | 233 | 9563 | 195 |
| | | | 9566 | 196 |
| | | | C21 | 196 |
| BTR70 | C71 | 233 | C71 | 196 |
| T72 | 132 | 232 | 132 | 196 |
| | | | 812 | 195 |
| | | | s7 | 191 |
| T62 | A51 | 299 | A51 | 273 |
| BRDM2 | E-71 | 298 | E-71 | 274 |
| BTR60 | 7532 | 256 | 7532 | 195 |
| ZSU23/4 | d08 | 299 | d08 | 274 |
| D7 | 13015 | 299 | 13015 | 274 |
| ZIL131 | E12 | 299 | E12 | 274 |
| 2S1 | B01 | 299 | B01 | 274 |
Figure 3. Confusion matrix achieved by the proposed method.
Average recognition rates under the standard operating condition.
| Method | Average recognition rate (%) |
|---|---|
| Proposed | 99.28 |
| ResNet | 99.02 |
| A-ConvNet | 98.75 |
| JSR-mono | 98.68 |
| JSR-deep | 99.14 |
Average recognition rates of the proposed method at different thresholds.
| Threshold | 0.1 | 0.2 | 0.3 | 0.35 | 0.4 | 0.45 | 0.5 | 0.55 |
|---|---|---|---|---|---|---|---|---|
| Average recognition rate (%) | 99.08 | 99.12 | 99.18 | 99.24 | 99.28 | 99.22 | 99.15 | 99.10 |
Recognition rates (%) under noise corruption at different SNRs.
| Method | −10 dB | −5 dB | 0 dB | 5 dB | 10 dB |
|---|---|---|---|---|---|
| Proposed | 70.58 | 81.32 | 88.14 | 93.56 | 98.94 |
| ResNet | 63.42 | 74.42 | 83.43 | 87.53 | 98.42 |
| A-ConvNet | 62.74 | 73.46 | 82.81 | 86.78 | 98.02 |
| JSR-mono | 64.92 | 75.08 | 85.09 | 89.02 | 98.13 |
| JSR-deep | 66.57 | 76.82 | 85.49 | 91.82 | 98.36 |
Figure 4. Average recognition rates under target occlusions.