| Literature DB >> 34912381 |
Nandhini Abirami R, Durai Raj Vincent P M.
Abstract
Image enhancement is considered one of the more complex tasks in image processing. When images are captured under dim light, their quality degrades due to low visibility, degrading the performance of vision-based algorithms that are built for good-quality images with better visibility. Since the emergence of deep neural networks, a number of methods have been put forward to improve images captured under low light. However, the results shown by existing low-light enhancement methods are not satisfactory because of the lack of effective network structures. A low-light image enhancement technique (LIMET) with a fine-tuned conditional generative adversarial network is presented in this paper. The proposed approach employs two discriminators to acquire semantic meaning, which forces the obtained results to be realistic and natural. Finally, the proposed approach is evaluated on benchmark datasets. The experimental results highlight that the presented approach attains state-of-the-art performance when compared to existing methods. The model's performance is assessed using Visual Information Fidelity (VIF), which assesses the generated image's quality relative to the degraded input. The VIF values obtained with the proposed approach are 0.709123 for the LIME dataset, 0.849982 for the DICM dataset, and 0.619342 for the MEF dataset.
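The entry does not give the paper's exact loss formulation, but a dual-discriminator adversarial objective of the kind described (one discriminator judging whole-image realism, a second judging local patches) is commonly written as a weighted sum of two adversarial terms. The sketch below is a minimal illustration under that assumption; all function names, scores, and weights are hypothetical, not taken from the paper.

```python
import math

def bce(preds, target):
    """Binary cross-entropy over sigmoid probabilities in [0, 1]."""
    eps = 1e-7
    total = 0.0
    for p in preds:
        p = min(max(p, eps), 1.0 - eps)
        total += -(target * math.log(p) + (1 - target) * math.log(1 - p))
    return total / len(preds)

def generator_adv_loss(d_global_fake, d_local_fake, w_global=1.0, w_local=1.0):
    # The generator tries to make BOTH discriminators label its enhanced
    # images as real (target = 1), so its adversarial loss is a weighted
    # sum of the two discriminators' BCE terms.
    return (w_global * bce(d_global_fake, 1.0)
            + w_local * bce(d_local_fake, 1.0))

# Illustrative discriminator outputs for a batch of two enhanced images.
d_global = [0.8, 0.6]  # whole-image ("global") realism scores
d_local = [0.7, 0.5]   # patch-level ("local") realism scores
loss = generator_adv_loss(d_global, d_local)
```

With equal weights, a lower loss simply means both discriminators are closer to being fooled; how the two terms are balanced is a design choice the abstract does not specify.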
Keywords: computer vision; convolutional neural network; deep learning; facial expression recognition; generative adversarial network; human-robot interaction
Year: 2021 PMID: 34912381 PMCID: PMC8667858 DOI: 10.3389/fgene.2021.799777
Source DB: PubMed Journal: Front Genet ISSN: 1664-8021 Impact factor: 4.599
FIGURE 1. Left column: natural low-light images. Middle column: ground-truth images. Right column: results enhanced by our method.
TABLE 1. Advantages and disadvantages of existing low-light image enhancement techniques.

| References | Method | Dataset | Advantages | Disadvantages |
|---|---|---|---|---|
| | Multiscale retinex | Dataset collected from multiple sources | Images are enhanced under varied illumination conditions. Color restoration restores wrong colors | The method introduces artifacts and noise in the image, resulting in a blurry image |
| | LIME: Low-light image enhancement | High-dynamic-range dataset | The algorithm is computationally inexpensive | The bright regions are over-enhanced and lose contrast |
| | Learning-based restoration of backlit images | Data obtained from Li’s database | Backlit regions are identified and the degraded image is enhanced | Processing takes a long time, and the enhanced image quality depends highly on the accuracy of segmentation |
| | Estimating the retinal contrast from the image | High-dynamic-range dataset | It illuminates both low-light and brighter areas of the image | Underexposed regions are only slightly enhanced |
FIGURE 2. Proposed methodology.
FIGURE 3. Flowchart of the proposed model.
FIGURE 4. Samples of training data pairs, showing low-light image samples (left) and normal-light image samples (right).
FIGURE 5. Enhancement comparison of the proposed approach with existing approaches.
TABLE 2. VIF results of the enhanced images compared with state-of-the-art models.
| Method | LIME | DICM | MEF |
|---|---|---|---|
| Dong[23] | 0.309192 | 0.446071 | 0.288288 |
| NPE[21] | 0.528744 | 0.803247 | 0.434567 |
| LIME[4] | 0.492474 | 0.595954 | 0.407859 |
| MF[28] | 0.545735 | 0.732275 | 0.438101 |
| SRIE[35] | 0.655896 | 0.805968 | 0.602530 |
| MSR[31] | 0.483033 | 0.484776 | 0.409118 |
| GLadNET[36] | 0.447379 | 0.6369877 | 0.433680 |
| LIMET | 0.709123 | 0.849982 | 0.619342 |
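As a quick consistency check, the per-dataset winners in the table above can be recomputed from the transcribed scores; the dictionary below simply restates the table and is illustrative only.

```python
# VIF scores transcribed from Table 2 (higher is better).
vif = {
    "Dong":    {"LIME": 0.309192, "DICM": 0.446071,  "MEF": 0.288288},
    "NPE":     {"LIME": 0.528744, "DICM": 0.803247,  "MEF": 0.434567},
    "LIME":    {"LIME": 0.492474, "DICM": 0.595954,  "MEF": 0.407859},
    "MF":      {"LIME": 0.545735, "DICM": 0.732275,  "MEF": 0.438101},
    "SRIE":    {"LIME": 0.655896, "DICM": 0.805968,  "MEF": 0.602530},
    "MSR":     {"LIME": 0.483033, "DICM": 0.484776,  "MEF": 0.409118},
    "GLadNET": {"LIME": 0.447379, "DICM": 0.6369877, "MEF": 0.433680},
    "LIMET":   {"LIME": 0.709123, "DICM": 0.849982,  "MEF": 0.619342},
}

# For each dataset, find the method with the highest VIF score.
best = {ds: max(vif, key=lambda m: vif[m][ds]) for ds in ("LIME", "DICM", "MEF")}
```

Running this confirms the abstract's claim: LIMET attains the highest VIF on all three datasets.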