Jinglei Wang, Yixuan Li, Yifan Ji, Jiaming Qian, Yuxuan Che, Chao Zuo, Qian Chen, Shijie Feng.
Abstract
Fringe projection profilometry (FPP) is widely applied to 3D measurement owing to its high accuracy, non-contact operation, and full-field scanning. Compared with most FPP systems, which project visible patterns, invisible fringe patterns in the near-infrared spectrum have less impact on human eyes and are better suited to scenes where bright illumination must be avoided. However, the invisible patterns, generated by a near-infrared laser, are usually captured with severe speckle noise, which limits the quality of the 3D reconstruction. To cope with this issue, we propose a deep learning-based framework that removes the effect of the speckle noise and improves the precision of the 3D reconstruction. The framework consists of two deep neural networks: one learns to produce a clean fringe pattern, and the other learns to recover an accurate phase from that pattern. Compared with traditional denoising methods that depend on complex physical models, the proposed learning-based method is much faster. Experimental results show that the presented method effectively increases the measurement accuracy.
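For context, the fringe analysis that the two networks learn to approximate can be summarized by the classical N-step phase-shifting relations: each captured image follows I_n = A + B·cos(φ − 2πn/N), and the wrapped phase is φ = arctan2(Σ I_n·sin(2πn/N), Σ I_n·cos(2πn/N)). The sketch below, with a hypothetical synthetic phase map and three steps (as in the paper's three-step comparisons), is a minimal NumPy illustration, not the authors' implementation.

```python
import numpy as np

# Classical three-step phase-shifting fringe analysis (minimal sketch).
# Each captured fringe image follows I_n = A + B*cos(phi - 2*pi*n/N).
N = 3
H, W = 4, 6                                   # tiny synthetic image
phi_true = np.linspace(0, np.pi / 2, H * W).reshape(H, W)
A, B = 0.5, 0.4                               # background and modulation

shifts = 2 * np.pi * np.arange(N) / N
patterns = [A + B * np.cos(phi_true - s) for s in shifts]

# The "numerator" and "denominator" of the arctangent phase estimator;
# these are the quantities CNN2 in the paper is trained to predict.
numerator = sum(I * np.sin(s) for I, s in zip(patterns, shifts))
denominator = sum(I * np.cos(s) for I, s in zip(patterns, shifts))
phi_wrapped = np.arctan2(numerator, denominator)

# With noise-free patterns the recovered phase matches the ground truth.
assert np.allclose(phi_wrapped, phi_true, atol=1e-9)
```

Predicting the numerator and denominator separately (rather than the phase itself) avoids the 2π discontinuities of the arctangent, which is why the second network targets these two maps.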
Keywords: deep learning; denoising; fringe projection; phase retrieval; speckle noise
Year: 2022 PMID: 36080928 PMCID: PMC9460471 DOI: 10.3390/s22176469
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1 The flowchart of the proposed deep learning-based 3D measurement using NIR FPP. For CNN1, the input is the raw fringe image with speckle noise and the output is the denoised image. CNN2 learns to obtain the numerator and denominator for phase calculation. As the phase can be used as a temporary texture, the 3D reconstruction is then computed with stereo vision.
Figure 2Schematic diagram of the denoising network CNN1, consisting of a convolutional layer and multiple residual blocks.
Figure 3 Schematic of the deep neural network CNN2, which demodulates the phase information in fringe images.
Figure 4The loss curve of (a) CNN1, (b) CNN2.
Figure 5 The performance of the trained CNN1. (a1–a3) The captured raw NIR fringe patterns of different scenes. (b1–b3) The ground-truth NIR fringe patterns filtered by BM3D. (c1–c3) The filtered NIR fringe patterns obtained by CNN1.
Figure 6 Comparison of the methods along the 300th row of Figure 5a3,b3,c3.
Comparison of image denoising processing time between BM3D and our deep learning-based method for different scenes.
| Scene | BM3D (s) | Our Method (s) |
|---|---|---|
| Scene 1 | 1.983 | 0.0648 |
| Scene 2 | 1.995 | 0.0673 |
| Scene 3 | 1.997 | 0.0633 |
Figure 7 The numerator (a1–a3) and denominator (b1–b3) estimated by our method. (c1–c3) The wrapped phase calculated from the numerator and denominator. (d1–d3) The absolute phase obtained from the wrapped phase by temporal phase unwrapping (TPU).
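The wrapped-to-absolute phase step in Figure 7 can be illustrated with a minimal temporal phase unwrapping (TPU) sketch: a low-frequency phase that is already unambiguous determines the fringe order k of the high-frequency wrapped phase, and the absolute phase is φ + 2πk. The frequency ratio and the synthetic phase below are hypothetical, not values from the paper.

```python
import numpy as np

# Temporal phase unwrapping (TPU) sketch. A unit-frequency reference
# phase (unambiguous, spans at most 2*pi) guides the unwrapping of a
# high-frequency wrapped phase.
f = 8                                         # hypothetical frequency ratio
phi_ref = np.linspace(0, 2 * np.pi, 50)       # unit-frequency absolute phase

phi_high_true = f * phi_ref                   # high-frequency absolute phase
wrap = lambda p: np.angle(np.exp(1j * p))     # wrap into (-pi, pi]
phi_high_wrapped = wrap(phi_high_true)

# Fringe order from the reference phase, then the absolute phase.
k = np.round((f * phi_ref - phi_high_wrapped) / (2 * np.pi))
phi_high_unwrapped = phi_high_wrapped + 2 * np.pi * k

assert np.allclose(phi_high_unwrapped, phi_high_true)
```

The high-frequency phase is what carries the measurement precision; the low-frequency phase only needs to be accurate enough to place k on the correct integer.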
Figure 8 (a1–a3): The ground-truth label of the unwrapped phase, calculated from the NIR fringes denoised by BM3D followed by the eight-step phase-shifting algorithm. The unwrapped phase obtained from (b1–b3) the raw NIR patterns followed by the three-step phase-shifting algorithm, (c1–c3) the NIR fringes denoised by BM3D followed by the three-step phase-shifting algorithm, and (d1–d3) our method. (e1–e3,f1–f3,g1–g3): Absolute phase error maps of the corresponding cases.
Figure 9 The 3D reconstructions obtained from the NIR fringes by (a1–a3) BM3D denoising followed by the eight-step phase-shifting algorithm, (b1–b3) the three-step phase-shifting algorithm, (c1–c3) BM3D denoising followed by the three-step phase-shifting algorithm, and (d1–d3) our method.
Figure 10 The 3D reconstructed sphere (top) and error distribution (bottom) obtained by (a) direct three-step phase shifting (PS) of the original NIR fringes, (b) BM3D denoising with three-step PS, and (c) our method.