Ruiming Jia, Xin Chen, Jiali Cui, Zhenghui Hu.
Abstract
A coarse-to-fine multi-view stereo network with Transformer (MVS-T) is proposed to address the sparse point clouds and low accuracy that arise when reconstructing 3D scenes from low-resolution multi-view images. The network adopts a coarse-to-fine strategy to estimate image depth progressively and reconstruct the 3D point cloud. First, image feature pyramids are constructed to transfer semantic and spatial information among features at different scales. Then, a Transformer module aggregates the image's global context and captures the internal correlations of the feature map. Finally, image depth is inferred by constructing a cost volume and iterating through the stages. For 3D reconstruction of low-resolution images, experimental results show that the point cloud produced by the network is more accurate and complete, outperforming other state-of-the-art algorithms in both objective metrics and subjective visualization.
Keywords: 3D reconstruction; attention mechanism; multi-view stereo; transformer
Year: 2022 PMID: 36236760 PMCID: PMC9571650 DOI: 10.3390/s22197659
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. The network structure of MVS-T.
Figure 2. The structure of the TFA module. (a) TFA; (b) S1; (c) S2−U; (d) S3−D.
Figure 3. The structure of the Transformer block.
Figure 4. The structure of the image-like fusion module.
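Figures 1–4 outline the pipeline: pyramid features, Transformer-based aggregation, and a cost volume refined stage by stage. The coarse-to-fine depth search can be illustrated with a toy sketch; this is not the authors' implementation — `variance_cost` and `warp_src` are illustrative stand-ins (a real MVS network warps source features via homographies and regularizes the cost volume with a learned network):

```python
import numpy as np

def variance_cost(ref, srcs):
    """Matching cost of one depth plane: feature variance across views
    (lower = more photo-consistent). ref and each src: (C, H, W)."""
    stack = np.stack([ref] + srcs)         # (V, C, H, W)
    return stack.var(axis=0).mean(axis=0)  # (H, W)

def coarse_to_fine_depth(ref, warp_src, d_min, d_max, stages=3, n_hyp=8):
    """Each stage sweeps n_hyp depth-hypothesis planes over the current
    interval, takes the winner-take-all depth from the cost volume, then
    shrinks the interval around that estimate for the next, finer stage."""
    lo, hi = d_min, d_max
    depth = None
    for _ in range(stages):
        hyps = np.linspace(lo, hi, n_hyp)
        vol = np.stack([variance_cost(ref, warp_src(d)) for d in hyps])  # (D, H, W)
        depth = hyps[vol.argmin(axis=0)]                                 # (H, W)
        half = (hi - lo) / 4.0              # halve the search range
        centre = float(np.median(depth))
        lo, hi = centre - half, centre + half
    return depth

# Toy scene: one source view whose features drift from the reference in
# proportion to how far the sampled plane is from the true depth (5.0).
rng = np.random.default_rng(0)
ref = rng.normal(size=(4, 6, 6))
noise = rng.normal(size=(4, 6, 6))
warp_src = lambda d: [ref + 0.1 * (d - 5.0) * noise]

depth = coarse_to_fine_depth(ref, warp_src, 0.0, 10.0)
```

Halving the interval each stage is what lets the coarse stages keep the hypothesis count small while the fine stages gain depth resolution.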
Comparison of reconstruction quality in objective metrics.
| Method | Acc. (mm) | Comp. (mm) | Overall (mm) |
|---|---|---|---|
| Colmap | 6.5778 | 10.1405 | 8.2930 |
| AA-RMVSNet | 0.8207 | 3.4115 | 2.1161 |
| CasMVSNet | 1.4045 | 1.6096 | 1.5071 |
| CVP-MVSNet | 1.1964 | 1.0569 | 1.1267 |
| AACVP-MVSNet | 1.1329 | 0.8814 | 1.0071 |
| MVSTER | 2.6132 | 1.9704 | 2.2918 |
| TransMVS | 1.0248 | 1.3075 | 1.1662 |
| Ours | 0.9296 | 1.0120 | 0.9708 |
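The Acc./Comp./Overall columns follow the DTU evaluation convention: accuracy measures how close reconstructed points lie to the ground-truth scan, completeness how well the ground truth is covered, and overall averages the two. A minimal brute-force sketch of the core distance computation (the official DTU protocol additionally applies observability masks and outlier thresholds; `chamfer_metrics` is an illustrative name):

```python
import numpy as np

def chamfer_metrics(pred, gt):
    """Simplified DTU-style point-cloud evaluation (brute force):
    accuracy: mean distance from each reconstructed point to its nearest
    ground-truth point; completeness: the reverse; overall: their mean.
    pred, gt: (N, 3) arrays of 3D points."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (Np, Ng)
    acc = d.min(axis=1).mean()    # reconstruction -> ground truth
    comp = d.min(axis=0).mean()   # ground truth -> reconstruction
    return acc, comp, 0.5 * (acc + comp)

# Tiny example: the reconstruction misses the GT point at x = 2,
# so accuracy is perfect but completeness is penalized.
pred = np.array([[0.0, 0, 0], [1.0, 0, 0]])
gt = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
acc, comp, overall = chamfer_metrics(pred, gt)
```

The pairwise-distance matrix makes this O(Np·Ng) in memory; real evaluations use a KD-tree for nearest-neighbour queries instead.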
Figure 5. Comparison of reconstructed results. (a) AA-RMVSNet; (b) CasMVSNet; (c) CVP-MVSNet; (d) AACVP-MVSNet; (e) Ours.
Figure 6. Reconstruction results on the BlendedMVS dataset.
Quantitative performance with different components.
| Model Settings | TFA | Transformer | Acc. | Comp. | Overall |
|---|---|---|---|---|---|
| (a) | | | 1.1964 | 1.0569 | 1.1267 |
| (b) | √ | | 0.9635 | 1.0257 | 0.9946 |
| (c) | √ | √ | 0.9296 | 1.0120 | 0.9708 |
Ablation study on the size of patch on DTU dataset.
| Patch Size | Acc. | Comp. | Overall |
|---|---|---|---|
| 8 | 1.0182 | 1.1022 | 1.0602 |
| 4 | 0.9296 | 1.0120 | 0.9708 |
| 2 | 0.9465 | 1.0237 | 0.9851 |
Figure 7Comparison of reconstructed results. (a) Patch size = 2; (b) Patch size = 4; (c) Patch size = 8.
Ablation study on the learnable token and image-like fusion methods.
| | (a) | (b) | (c) | (d) |
|---|---|---|---|---|
| Acc. | 0.9287 | 0.9296 | 0.9856 | 0.9724 |
| Comp. | 1.0363 | 1.0120 | 1.0692 | 1.0505 |
| Overall | 0.9825 | 0.9708 | 1.0274 | 1.0114 |
Ablation study on the number of Transformer blocks.
| T | Acc. | Comp. | Overall |
|---|---|---|---|
| 6 | 0.9731 | 1.0575 | 1.0153 |
| 4 | 0.9296 | 1.0120 | 0.9708 |
| 2 | 1.0148 | 1.0824 | 1.0486 |
Results on different resolution images.
| Image Size | Acc. | Comp. | Overall |
|---|---|---|---|
| 160 × 128 | 0.9296 | 1.0120 | 0.9708 |
| 320 × 256 | 0.7695 | 1.0163 | 0.8929 |
| 640 × 512 | 0.5348 | 1.2394 | 0.8871 |
| 1280 × 1024 | 0.4089 | 0.9584 | 0.6836 |