Peidong Chen, Xiuqin Su, Muyuan Liu, Wenhua Zhu.
Abstract
Within the framework of the Internet of Things, or when constrained to a limited space, lensless imaging technology provides effective imaging solutions with low-cost, reduced-size prototypes. In this paper, we propose a method combining deep learning with lensless coded-mask imaging technology. After replacing the lens with a coded mask and using an inverse matrix optimization method to reconstruct the original scene images, we applied FCN-8s, U-Net, and our modified version of U-Net, called Dense-U-Net, for post-processing of the reconstructed images. The proposed approach showed superior performance compared to the classical method, with the deep convolutional network leading to substantial improvements in reconstruction quality.
Keywords: Dense-U-Net; FCN (Fully Convolutional Networks); U-Net; computational imaging; deep learning; image reconstruction; lens-free; lensless
Year: 2020 PMID: 32384807 PMCID: PMC7249064 DOI: 10.3390/s20092661
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. The overall process of image reconstruction and post-processing using a deep convolutional network. First, calibration images measured by the sensor are used to iteratively compute the left and right system transfer matrices, which are then used to reconstruct the dataset images and the object images. Next, the dataset images are fed into the deep convolutional network for training, and the trained network post-processes the object images to produce the output images.
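For concreteness, the reconstruction step in Figure 1 can be sketched under a separable system model Y ≈ A_L · X · A_Rᵀ, where A_L and A_R are the left and right system transfer matrices estimated from the calibration images. The NumPy sketch below uses a closed-form ridge-regularized inversion as a stand-in for the paper's iterative optimization; the function names and the regularization weight `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def reconstruct_separable(Y, A_L, A_R, lam=1e-3):
    """Recover the scene X from a sensor image Y under the separable
    model Y ~= A_L @ X @ A_R.T, using ridge-regularized inverses.

    A_L, A_R: left/right system transfer matrices from calibration
              (as in Figure 1).
    lam:      illustrative regularization weight, not a paper value.
    """
    def ridge_pinv(A):
        # Ridge-regularized left inverse: (A^T A + lam I)^-1 A^T
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T)

    X = ridge_pinv(A_L) @ Y @ ridge_pinv(A_R).T
    return np.clip(X, 0.0, 1.0)  # keep the reconstruction in display range
```

The regularization keeps the inversion stable when the transfer matrices are ill-conditioned, which is the usual situation for coded-mask systems; the residual artifacts this leaves behind are what the networks below are trained to remove.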
Figure 2. The architecture of FCN-8s.
Figure 3. The architecture of U-Net.
Figure 4. The architecture of Dense-U-Net. It improves on U-Net by adding densely connected parts: the feature map produced by a same-padding 3 × 3 convolution followed by a BatchNormalization operation is concatenated with the feature map produced by the preceding same-padding 3 × 3 convolution.
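A minimal PyTorch sketch of one densely connected block in the spirit of the Figure 4 caption follows; the channel counts, block depth, and ReLU activation are assumptions, since the caption does not fix them, and the block's placement inside the U-Net encoder/decoder is likewise not specified here.

```python
import torch
import torch.nn as nn

class DenseConvBlock(nn.Module):
    """Sketch of a densely connected block: each same-padding 3x3
    convolution + BatchNorm produces a feature map that is concatenated
    with the previous one before the next convolution. Channel sizes
    are illustrative, not the paper's exact configuration."""

    def __init__(self, in_ch: int, growth: int):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
            nn.BatchNorm2d(growth),
            nn.ReLU(inplace=True),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(in_ch + growth, growth, kernel_size=3, padding=1),
            nn.BatchNorm2d(growth),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        f1 = self.conv1(x)                        # 3x3 same conv + BN
        f2 = self.conv2(torch.cat([x, f1], dim=1))  # dense concat, then conv
        return torch.cat([f1, f2], dim=1)         # concatenated feature maps
```

Concatenation (rather than addition) preserves the earlier feature maps unchanged, which is the usual motivation for dense connectivity.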
Figure 5. The flow chart of our experimental process.
Figure 6. Images captured by the sensor, the output results of the different methods, and the original scene images. In terms of visual quality, the images processed by the FCN-8s network are rough and their details are not recovered accurately; the grayscale values of the images processed by the U-Net network are not very accurate; and both the details and grayscale values of the images processed by Dense-U-Net are relatively accurate.
Imaging quality evaluation parameters of output images produced by several methods. The best results are shown in bold.
|  | PSNR |  |  |  | SSIM |  |  |  |
|---|---|---|---|---|---|---|---|---|
|  | Previous work | FCN-8s | U-Net | Dense-U-Net | Previous work | FCN-8s | U-Net | Dense-U-Net |
| Symbol | 6.9629 | 16.8320 | 17.7754 |  | 0.0128 | 0.7872 | 0.8312 |  |
| Lena | 10.0581 | 19.2174 | 18.9286 |  | 0.0129 | 0.5547 | 0.5971 |  |
| Pepper | 9.8043 | 18.3765 | 17.4211 |  | 0.0142 | 0.5586 | 0.5733 |  |
| Baby | 8.9028 | 18.3188 | 17.4833 |  | 0.0133 | 0.5801 | 0.5973 |  |

|  | PSNR |  |  |  | SSIM |  |  |  |
|---|---|---|---|---|---|---|---|---|
|  | Previous work | FCN-8s | U-Net | Dense-U-Net | Previous work | FCN-8s | U-Net | Dense-U-Net |
| Symbol | 3.5432 | 9.0763 | 10.2099 |  | 0.1971 | 0.6955 | 0.7501 |  |
| Lena | 3.6515 | 5.8475 | 6.6642 |  | 0.4822 | 0.7279 | 0.7391 |  |
| Pepper | 3.5218 | 6.2367 | 7.3112 |  | 0.4828 | 0.7384 | 0.7548 |  |
| Baby | 3.7463 | 7.670 | 7.7607 |  | 0.4943 | 0.7531 | 0.7686 |  |
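The PSNR and SSIM values above are standard full-reference image quality metrics. A minimal sketch of how such numbers are typically computed with scikit-image follows; the `data_range` of 1.0 (grayscale images scaled to [0, 1]) is an assumption, and the paper's exact evaluation settings are not specified here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(output: np.ndarray, reference: np.ndarray):
    """Compute the two table metrics for a grayscale image pair in [0, 1].

    Uses the standard scikit-image implementations; window size and
    other SSIM parameters are left at their library defaults.
    """
    psnr = peak_signal_noise_ratio(reference, output, data_range=1.0)
    ssim = structural_similarity(reference, output, data_range=1.0)
    return psnr, ssim
```

Higher is better for both metrics: PSNR measures pixel-wise fidelity on a logarithmic (dB) scale, while SSIM compares local luminance, contrast, and structure.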