Yuda Song, Yunfang Zhu, Xin Du.
Abstract
Deep convolutional neural networks have achieved great performance on various image restoration tasks. In particular, the residual dense network (RDN) achieves strong results on image noise reduction by cascading multiple residual dense blocks (RDBs) to make full use of hierarchical features. However, the RDN only performs well at a single noise level, and its computational cost grows significantly with the number of RDBs while the denoising quality improves only slightly. To overcome this, we propose the dynamic residual dense network (DRDN), a dynamic network that can selectively skip some RDBs based on the amount of noise in the input image. Moreover, the DRDN allows the denoising strength to be adjusted manually to obtain the best output, which makes the network more effective for real-world denoising. Our proposed DRDN performs better than the RDN while reducing the computational cost by 40-50%. Furthermore, we surpass the state-of-the-art CBDNet by 1.34 dB on the real-world noise benchmark.
Keywords: deep learning; dynamic network; image restoration; noise reduction
Year: 2019 PMID: 31484432 PMCID: PMC6749329 DOI: 10.3390/s19173809
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Feature map visualization of the RDBs. Adjacent feature maps with higher similarity are marked by the red squares.
Figure 2. The top part presents the overall architecture of the DRDN. The bottom part shows the difference between the RDB and the DRDB and how the gate module is inserted into the original block.
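The gating idea in Figure 2 can be illustrated with a simplified sketch in plain Python (not the authors' implementation): a per-block gate pools the incoming features, maps them to a score, and decides whether to execute the residual dense block or pass the features through unchanged. The mean pooling, the sigmoid scoring, and the 0.5 threshold here are illustrative assumptions; the actual DRDN learns its gate modules end to end on image tensors.

```python
import math

def gate_fires(features, threshold=0.5):
    """Hypothetical gate: global-average-pool the features, squash the
    result to a score in (0, 1), and fire (execute the block) when the
    score falls below the threshold under this toy criterion."""
    pooled = sum(features) / len(features)
    score = 1.0 / (1.0 + math.exp(-pooled))  # sigmoid
    return score < threshold

def dynamic_block(features, block_fn, threshold=0.5):
    """Run the residual dense block only when the gate fires; otherwise
    skip it entirely, saving its computation (the core DRDN idea)."""
    if gate_fires(features, threshold):
        residual = block_fn(features)
        return [f + r for f, r in zip(features, residual)]
    return features  # skipped: identity pass-through

# Toy usage: a stand-in "block" that adds a constant correction.
block = lambda feats: [1.0 for _ in feats]
print(dynamic_block([-1.0, -1.0], block))  # gate fires -> [0.0, 0.0]
print(dynamic_block([1.0, 1.0], block))    # gate skips -> [1.0, 1.0]
```

Because skipped blocks cost nothing at inference, the per-image computation adapts to the input rather than being fixed by the network depth.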
Results of real-world denoising on the RENOIR [27] and the SIDD [28]. Best performance is in boldface.

| Dataset | Method | PSNR | SSIM | Params (M) | FLOPs (G) | Latency (s) |
|---|---|---|---|---|---|---|
| RENOIR | RDN+ | | | | 105.5 | 0.63 |
| RENOIR | DRDN | 38.12 | 0.9010 | 5.59 | | |
| SIDD | RDN+ | 39.55 | 0.9399 | | 105.5 | 0.63 |
| SIDD | DRDN | | | 5.59 | | |
Figure 3. Skip ratio per block of the DRDN while testing on the RENOIR and the SIDD.
Results of real-world denoising on the Darmstadt Noise Dataset [38]. Best performance is in boldface.

| Method | PSNR | SSIM | Blind/Non-Blind |
|---|---|---|---|
| EPLL [ ] | 33.51 | 0.8244 | Non-blind |
| TNRD [ ] | 33.65 | 0.8306 | Non-blind |
| BM3D [ ] | 34.51 | 0.8507 | Non-blind |
| MCWNNM [ ] | 37.38 | 0.9294 | Non-blind |
| FFDNet+ [ ] | 37.61 | 0.9415 | Non-blind |
| DnCNN+ [ ] | 37.90 | 0.9430 | Blind |
| TWSC [ ] | 37.96 | 0.9416 | Non-blind |
| CBDNet [ ] | 38.06 | 0.9421 | Blind |
| PD [ ] | 38.40 | 0.9452 | Blind |
| Path-Restore [ ] | 39.00 | | Blind |
| DRDN | 39.40 | 0.9524 | Blind |
Figure 4. Qualitative results of real-world denoising on the Darmstadt Noise Dataset.
Results of real-world denoising on the Nam dataset [29] and the PolyU dataset [30]. Best performance is in boldface.

| Dataset | Metric | EPLL [ ] | BM3D [ ] | TNRD [ ] | DnCNN [ ] | TWSC [ ] | DRDN |
|---|---|---|---|---|---|---|---|
| Nam | PSNR | 33.66 | 35.19 | 36.61 | 33.86 | 37.81 | |
| Nam | SSIM | 0.8591 | 0.8580 | 0.9463 | 0.8635 | 0.9586 | |
| PolyU | PSNR | 36.17 | 37.40 | 38.17 | 36.08 | 38.60 | |
| PolyU | SSIM | 0.9216 | 0.9526 | 0.9640 | 0.9161 | 0.9685 | |
Figure 5. Denoising results as the skip ratio increases.
Figure 6. Trends of PSNR and SSIM as the skip ratio changes during testing on the RENOIR.
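Figures 5 and 6 show how the skip ratio doubles as a manual denoising-strength control. A minimal sketch of that control knob, assuming hypothetical per-block gate scores in [0, 1] where a score at or above the threshold means "skip": lowering the threshold skips more blocks, trading denoising strength for computation.

```python
def skip_ratio(gate_scores, threshold):
    """Fraction of RDBs skipped for a given threshold.

    gate_scores: hypothetical per-block scores in [0, 1]; a block is
    skipped when its score reaches the threshold.
    """
    skipped = sum(1 for s in gate_scores if s >= threshold)
    return skipped / len(gate_scores)

# Hypothetical gate scores for an eight-block network.
scores = [0.1, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
print(skip_ratio(scores, 0.5))  # 0.625: five of eight blocks skipped
print(skip_ratio(scores, 0.9))  # 0.125: only one block skipped
```

Sweeping the threshold at test time traces out curves like those in Figure 6 without retraining the network.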
Figure 7. Skip ratio of the DRDN trained via supervised learning or reinforcement learning on the RENOIR.
Comparison of supervised learning and reinforcement learning on the RENOIR. Best performance is in boldface.

| Method | PSNR | SSIM | FLOPs (G) |
|---|---|---|---|
| DRDN+SL | | | 61.18 |
| DRDN+RL | 38.03 | 0.9003 | |
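As a sanity check on the abstract's 40-50% computational saving, the FLOPs reported in the tables (105.5 G for the RDN+ baseline vs 61.18 G for DRDN trained with supervised learning on the RENOIR) can be compared directly:

```python
rdn_flops = 105.5    # GFLOPs of the RDN+ baseline (from the tables)
drdn_flops = 61.18   # GFLOPs of DRDN+SL on the RENOIR
saving = 1.0 - drdn_flops / rdn_flops
print(f"{saving:.1%}")  # 42.0%, inside the 40-50% range the abstract claims
```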