
Single Image Dehazing Using Global Illumination Compensation.

Junbao Zheng1, Chenke Xu1, Wei Zhang1, Xu Yang1.   

Abstract

Existing dehazing algorithms hardly consider background interference when estimating the atmospheric illumination value and transmittance, resulting in an unsatisfactory dehazing effect. To address this problem, this paper proposes a novel global illumination compensation-based image-dehazing algorithm (GIC). The GIC method compensates for the intensity of light scattered when light passes through atmospheric particles such as fog. Firstly, illumination compensation is accomplished in the CIELab color space using the shading partition enhancement mechanism. Secondly, the atmospheric illumination values and transmittance parameters of these enhanced images are computed to improve the performance of the atmospheric-scattering model and reduce the interference of background signals. Finally, dehazing result maps with reduced background interference are obtained from the computed atmospheric-scattering model. Dehazing experiments were carried out on a public dataset, and the results on foggy images were compared with cutting-edge dehazing algorithms. The experimental results illustrate that the proposed GIC algorithm shows enhanced consistency with the real imaging situation in estimating atmospheric illumination and transmittance. Compared with established image-dehazing methods, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics of the proposed GIC method increased by 3.25 and 0.084, respectively.


Keywords:  dark channel prior (DCP); fog imaging; global illumination compensation factor; guided filtering; image dehazing; shading partition enhancement mechanism

Year:  2022        PMID: 35684790      PMCID: PMC9185279          DOI: 10.3390/s22114169

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.847


1. Introduction

Haze is an atmospheric phenomenon in which dust, smoke and other dry particles obscure the clarity of the atmosphere. Due to the scattering of light by haze particles, this results in a loss of contrast, visibility and vividness in the images required by vision technology [1,2]. Therefore, in applications involving image sensors and related image processing, such as video surveillance and assisted driving, it is particularly important to dehaze images in order to improve target recognition capabilities. Recent single-image dehazing techniques can be divided into three categories: (1) Image-dehazing methods based on a physical model. Under the framework of the atmospheric-scattering model [3], prior knowledge is employed to estimate the transmittance and atmospheric light intensity, which are then combined with the model to dehaze the image. By observing a large number of foggy images, He et al. [4] found that the intensity of some pixels in at least one color channel of a foggy image is very low, close to zero, and proposed a fast image-dehazing method based on the dark channel prior (DCP). DCP provides an effective way to estimate model parameters, but it still suffers from slow processing speed and unpredictable performance in sky regions. Since then, there have been many improvements to DCP. Dhara et al. [5] applied weighted least-squares filtering and color correction to DCP to optimize the dehazing effect. Kim [6] proposed a sky detection method using region-based and boundary-based sky segmentation, which enables DCP to perform image restoration for skies of various shapes. Chung et al. [7] estimated an appropriate scattering coefficient by analyzing a large number of optimal scattering coefficients and their distribution with respect to dark channels. Meanwhile, dehazing algorithms based on new prior knowledge [8,9,10,11] have emerged. (2) Image-dehazing methods based on image fusion. Ancuti et al. 
[12] proposed an image-enhancement method based on image fusion that does not rely on the atmospheric-scattering model. This method extracts and filters information from the original image in several ways, and then fuses Laplacian and Gaussian pyramids to achieve the dehazing effect. On this basis, Choi et al. [13] completed dehazing by extracting haze-related features and using a weighting scheme that selectively fuses images. Cho et al. [14] introduced the adaptive tone remapping (ATR) algorithm to achieve balanced image enhancement while refining image details. Ngo et al. [15] took a set of detail-enhanced and under-exposed images derived from a single hazy image as the input for image fusion, and combined it with the ATR algorithm to complete dehazing. (3) Image-dehazing methods based on deep learning. The success of deep learning in computer vision tasks has led to a large number of deep-learning-based dehazing methods, such as convolutional neural networks (CNN) [16], generative adversarial networks [17,18,19], attention-based multi-scale models [20,21], and encoder–decoder structured networks [22]. Among the above methods, although the fusion-based approach preserves the gradient information of multi-scale inputs, its image restoration strength is somewhat insufficient. The deep-learning-based methods usually need a considerable number of well-labeled samples, implying a high cost in data collection and curation. Based on the atmospheric-scattering model, we analyze the shortcomings of the DCP method and the fog-imaging principle. We argue that the brightness and color of a pixel in the input image are affected by the surrounding pixels within a limited range. Therefore, in order to eliminate or reduce the interference from the background and obtain the real prior information, this paper proposes the GIC algorithm. 
Before estimating atmospheric illumination and transmittance, the GIC algorithm eliminates or weakens the interference of the background on the color distribution of the image, thereby obtaining more accurate model parameters. Our main contributions are summarized as follows: (1) We propose an image-dehazing method based on global illumination compensation. Compared with other image-dehazing methods, the GIC method eliminates or reduces the influence of surrounding pixels on the target pixel through the shading partition enhancement mechanism before performing the prior analysis. In evaluation experiments on public datasets, the GIC method achieves better image evaluation metrics than the candidate methods while maintaining high computational efficiency. (2) In order to eliminate the interference from the background during dehazing, the GIC algorithm employs a contrast-enhancement technique, the shading partition enhancement mechanism, to distinguish the light and dark information of the image. This mechanism locates the real atmospheric illumination observation points and eliminates or reduces the interference of surrounding pixels on target pixels by means of local enhancement. (3) The GIC algorithm uses the global illumination compensation factor and guided filtering to refine the transmittance map, ensuring a dehazed image free of halo artifacts. The global illumination compensation factor is a local weighted-average scheme that gives the transmission map sharper edges. The remainder of this paper is organized as follows: In Section 2, the atmospheric-scattering model, the dark channel prior, and the guided filtering method are introduced as background on image dehazing. In Section 3, the GIC approach is described in detail to overcome the drawbacks of established dehazing methods. 
Section 4 contains its subjective and quantitative comparison with the state-of-the-art dehazing methods, before a conclusion is drawn in Section 5.

2. Problem Formulation of Image Dehazing

In this section, we introduce the atmospheric-scattering model which is the basic underlying model of image dehazing. Then we introduce some existing methods used in this paper to calculate haze-related parameters. Finally, we analyze the foggy imaging process that inspired the idea of this paper.

2.1. Atmospheric-Scattering Model

The atmospheric-scattering model [3] describes the degradation process of hazy images and is widely used in the field of image dehazing. According to the atmospheric-scattering model, the formation of a hazy image can be described by Equation (1): I(x) = J(x)t(x) + A(1 − t(x)), where I(x) is the observed intensity, J(x) is the scene radiance, A is the global atmospheric light, and t(x) is the medium transmission describing the portion of the light that is not scattered and reaches the camera. It can be expressed as t(x) = e^(−βd(x)), where β is the attenuation coefficient and d(x) indicates the scene depth. The real scene J(x) can be recovered after A and t(x) are estimated.
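As a concrete illustration, the scattering model above can be simulated directly. The minimal NumPy sketch below (with illustrative values for A, β, and the depth map, none of which come from the paper) synthesizes a hazy image from a clear one:

```python
import numpy as np

def apply_haze(J, A, beta, depth):
    """Synthesize a hazy image via I(x) = J(x)t(x) + A(1 - t(x)),
    with transmission t(x) = exp(-beta * d(x))."""
    t = np.exp(-beta * depth)              # medium transmission in (0, 1]
    t = t[..., None]                       # broadcast over RGB channels
    return J * t + A * (1.0 - t)

# Toy scene: top row at depth 0 (no haze), bottom row very far away.
J = np.full((2, 2, 3), 0.2)                # dark gray haze-free scene
depth = np.array([[0.0, 0.0], [10.0, 10.0]])
I = apply_haze(J, A=0.9, beta=1.0, depth=depth)
# Near pixels keep the scene radiance; far pixels approach the airlight A.
```

Distant pixels converge toward the global atmospheric light A, which is exactly why haze washes out contrast at depth.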

2.2. Calculation of Haze-Related Parameters

This subsection mainly introduces the two methods used in this paper to calculate the haze-related parameters.

2.2.1. Dark Channel Prior

The dark channel prior method is mainly designed for outdoor haze-free images. In the non-sky areas of such images, at least one color channel has some pixels whose intensity is very low, close to zero. The dark channel is defined as the minimum value over the RGB channels of the pixels in a local area of the image, as in Equation (2): J^dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y), where J^c is a color channel of J, c is any one of the three channels of the image, and Ω(x) is a local patch centered at x. The value of the dark channel is related to the fog density of the image and can be used to closely estimate atmospheric illumination and transmittance.
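The dark-channel computation is straightforward to express in code. The following is a simple, unoptimized sketch using a square patch and edge padding; it illustrates the operation, not the authors' implementation:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a minimum filter
    over a patch x patch neighborhood (edge-padded)."""
    min_rgb = img.min(axis=2)              # minimum over the color channels
    r = patch // 2
    padded = np.pad(min_rgb, r, mode='edge')
    out = np.empty_like(min_rgb)
    h, w = min_rgb.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

For a haze-free image with at least one near-zero channel per region, this map is close to zero; haze lifts it toward the airlight, which is what makes it a useful density cue.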

2.2.2. Guided Filtering Method

Guided filtering [23] is an explicit image filter. Derived from a local linear model, the guided filter computes the filtering output by considering the content of a guidance image I. The key assumption of the guided filter is a local linear model between the guidance I and the filtering output q. This method assumes that q is a linear transform of I in a window ω_k centered at the pixel k, as in Equation (3): q_i = a_k·I_i + b_k, ∀i ∈ ω_k, where q_i is the value of the output pixel, I_i is the pixel value of the guidance image, i and k are pixel indices, and a_k and b_k are the coefficients of the linear function when the center of the window is at k. To determine the linear coefficients a_k and b_k, we need constraints from the filtering input p, which leads to Equations (4) and (5): a_k = ((1/|ω|) Σ_{i∈ω_k} I_i·p_i − μ_k·p̄_k) / (σ_k² + ε) and b_k = p̄_k − a_k·μ_k, where μ_k and σ_k² are the mean and variance of I in ω_k, |ω| is the number of pixels in ω_k, p̄_k is the mean of p in ω_k, and ε is a regularization parameter. Having obtained the linear coefficients a_k and b_k, we can compute the filtering output by Equation (3). This model can be used to refine the initial transmittance.
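Equations (3)–(5) translate almost line-for-line into code. The sketch below uses a naive box mean (a real implementation would use integral images for O(1) windows) and is only an illustration of the algorithm, not the paper's code:

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded (naive version)."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    out = np.empty_like(a, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided filter: q_i = a_k I_i + b_k with a_k, b_k from Eqs. (4)-(5)."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_Ip = box_mean(I * p, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)   # Eq. (4)
    b = mean_p - a * mean_I                           # Eq. (5)
    # Average the coefficients over all windows covering each pixel (Eq. 3).
    return box_mean(a, r) * I + box_mean(b, r)
```

Using the hazy image itself as the guidance I and the initial transmittance as the input p yields a refined transmittance map whose edges follow the image structure.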

2.3. Imaging Process under Haze Conditions

Under hazy imaging conditions, the image captured by the camera is formed by light diffusely reflected from objects and transmitted through the air. There are three main sources of the light that reaches an object. In the first case, light from the light source irradiates the object directly. In the second case, light from the source is diffusely reflected by surrounding objects onto the object. In the third case, the diffuse reflections of surrounding objects strike atmospheric particles and are then scattered onto the object. As shown in Figure 1, the picture captured by the camera is formed by the direct lighting and the indirect lighting reflected by objects and atmospheric particles.
Figure 1

Imaging process under haze conditions.

Model-based methods often estimate atmospheric illumination and transmittance by analyzing the local and global color distribution of the input foggy image. However, as can be seen from Figure 1, a given pixel of the real image is affected by atmospheric particles and surrounding pixels. Traditional model-based methods perform the prior analysis directly on the input (disturbed) image, which raises the following problems: (1) The estimated atmospheric light value A is too small because of the reflection that occurs when light from the source passes through atmospheric particles such as fog. (2) Transmittance refers to the portion of the light reflected by the target pixel that is not scattered on its way from the target pixel to the camera through the medium. The influence of the surrounding pixels on the target pixel changes the portion received by the camera, resulting in an inaccurately estimated t(x). Therefore, it is necessary to eliminate or reduce the interference from the background in order to estimate the correct atmospheric illumination and transmittance.

3. The Proposed GIC Method

The atmospheric-scattering model in Section 2.1 suggests that estimating the atmospheric illumination and the transmittance are two crucial steps in recovering a haze-free image. In order to obtain more accurate atmospheric illumination values and transmittance, we propose GIC, an image-dehazing algorithm that is based on the atmospheric-scattering model and accounts for image background interference. A few intermediate outputs of the procedure by which the GIC method obtains the haze-free image J(x) from the input I(x) are shown in Figure 2. The GIC algorithm contains two key steps, i.e., the shading partition enhancement and the global illumination compensation factor. In this section, we focus on the design of these two steps and the evaluation indexes used in this paper. The detailed steps of the GIC algorithm are shown in Algorithm 1.
Figure 2

The flow chart of the proposed GIC algorithm. Steps 1 and 2, denoted as the shading partition enhancement and compensation factor, are two key components employed in the GIC algorithm to improve the image-dehazing performance. For dark channel images, step 2 aims to obtain optimized transmittance using a global illumination compensation factor.

3.1. The Shading Partition Enhancement

He et al. [4] first selected, from the dark channel image, the positions of the 0.1% of pixels with the largest gray values. They then located the pixels at these positions in the input image and used the maximum brightness value among these pixels as the atmospheric illumination value. We calculated these pixel positions on the Synthetic Objective Testing Set (SOTS) [24] with this method. We found that the brightness standard deviation of the surrounding pixels tended to fluctuate within a small range, but after including the brightness value of the center pixel, the standard deviation increased significantly. Combined with the experimental results, we believe that the sudden increase in the brightness value of the center pixel was caused by the surrounding pixels. In order to find the correct measurement point of atmospheric illumination, this paper proposes the shading partition enhancement mechanism [25]. The shading partition enhancement mechanism processes the light and dark areas of the L-channel of the input image in order to enhance the contrast of the image in those areas. The basic process of shading partition enhancement is shown in Figure 3.
Figure 3

The shading partition enhancement consists of two parts: (a) used to obtain the maximum (minimum) pixel value in the local area of the input image and (b) the center pixel is enhanced by surrounding pixels.

In the first step, the enhancement mechanism normalizes the L-channel of the input image and obtains the largest (or smallest) pixel from the L-channel using an n × n window (here n = 3). The window slides by 1 pixel each time, and the maximum (or minimum) value of each window area is used to build a local bright (dark) matrix. Subsequently, an n × n window (here n = 3) slides over the local bright (dark) matrix. For each position, the product of the elements surrounding the central element is calculated to form one element of the output matrix; the calculation process is shown in Equation (6). After this computation, the brighter areas in the input image are enhanced; since the enhanced dark areas are barely visible, that part is inverted for display. The next step fuses the enhanced light and dark area information with the input image. The light and dark area information is obtained from the two enhanced matrices as shown in Equations (7) and (8), which restrict them to the light and dark area ranges of the L-channel, respectively. At this point, the enhanced light and dark area information has been obtained. The preceding preprocessing removed the intermediate-range information, which is filled back in during the fusion; the fused image information is given by Equation (9). Finally, we convert back from the CIELab color space to the RGB color space to obtain the enhanced image. This mechanism can effectively eliminate or reduce the influence of surrounding pixels on the target pixel. After eliminating this influence, the atmospheric illumination value is calculated as in [4]. The experimental results show that the atmospheric illumination value estimated from the enhanced image is more in line with the real imaging situation.
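Since Equations (6)–(9) are not reproduced in this text, the following sketch only illustrates the two windowed passes described above: a sliding local max/min followed by a product of the surrounding elements. The edge padding and the exclusion of the center element from the product are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def local_extreme(L, win=3, mode='max'):
    """Step (a): slide a win x win window (stride 1) over the normalized
    L-channel and keep the local maximum (or minimum)."""
    r = win // 2
    p = np.pad(L, r, mode='edge')
    pick = np.max if mode == 'max' else np.min
    out = np.empty_like(L, dtype=float)
    for i in range(L.shape[0]):
        for j in range(L.shape[1]):
            out[i, j] = pick(p[i:i + win, j:j + win])
    return out

def neighbor_product(M, win=3):
    """Step (b): for each position, multiply the elements surrounding the
    window center (assumes entries in (0, 1], so the product darkens
    mid-tones while bright regions stay relatively bright)."""
    r = win // 2
    p = np.pad(M, r, mode='edge')
    out = np.empty_like(M, dtype=float)
    for i in range(M.shape[0]):
        for j in range(M.shape[1]):
            w = p[i:i + win, j:j + win]
            out[i, j] = w.prod() / w[r, r]   # exclude the center element
    return out
```

Applying `neighbor_product` to the local bright matrix stretches the contrast of bright regions against their surroundings, which is the partition-enhancement effect described above.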

3.2. Global Illumination Compensation Factor

The GIC method first computes the DCP on the enhanced image to obtain the atmospheric illumination value, and then optimizes the transmittance. With the computed atmospheric illumination values, the DCP principle is used to calculate the dark channel of the atmospheric-scattering model. According to the DCP principle, the dark channel of the image should be close to zero under fog-free conditions, which yields the initial transmittance. Even after eliminating or reducing the influence of the surrounding scene through the shading partition enhancement mechanism, the overall outline of the calculated initial transmittance map is not clear enough. Therefore, we introduce a global illumination compensation factor on the basis of the enhanced dark channel image, with the purpose of optimizing the transmittance. Assuming that the target pixel is affected by the surrounding pixels in a local area and that the average information of the surrounding pixels is consistent, this paper introduces a weight coefficient into the transmittance estimate, based on the average information matrix of the pixels in the local area centered on x in the enhanced dark channel image. This can enhance, to a certain extent, the profile characteristics of the transmittance map.
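The exact form of the compensation factor is not reproduced in this text, so the sketch below shows only the general idea: a baseline DCP transmittance estimate plus a hypothetical local-mean weighting of the enhanced dark channel. The weight w(x) = dark(x)/M(x) and the clipping bounds are illustrative assumptions, not the paper's actual factor:

```python
import numpy as np

def local_mean(a, r=2):
    """Average information of the (2r+1)x(2r+1) area around each pixel."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    out = np.empty_like(a, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def transmittance_dcp(dark_enh, A, omega=0.95):
    """Baseline DCP estimate: t(x) = 1 - omega * dark(x) / A."""
    return 1.0 - omega * dark_enh / A

def compensated_transmittance(dark_enh, A, r=2, omega=0.95):
    """Hypothetical compensation: weight each dark-channel value by its
    ratio to the local mean M(x), which sharpens transmission-map edges
    where a pixel deviates from its neighborhood."""
    M = local_mean(dark_enh, r)
    w = dark_enh / np.maximum(M, 1e-6)     # assumed weight vs. local mean
    return np.clip(1.0 - omega * w * dark_enh / A, 0.05, 1.0)
```

On uniform regions the weight is 1 and the estimate reduces to the DCP baseline; at edges the weight deviates from 1, exaggerating the local contrast of the transmission map.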

3.3. Dehazing Process of the GIC Algorithm

In the GIC algorithm, we obtain the enhanced image through the shading partition enhancement mechanism and apply the DCP to the enhanced image to obtain the atmospheric illumination value. To optimize the initial transmittance, we use the global illumination compensation factor and guided filtering to smooth the image and highlight image edges. The haze-free image can then be recovered easily by Equation (13).
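The final recovery step inverts the atmospheric-scattering model of Section 2.1. The sketch below uses a conventional lower bound t0 on the transmission to avoid amplifying noise, an assumption borrowed from standard DCP practice rather than from the paper's Equation (13):

```python
import numpy as np

def recover(I, A, t, t0=0.1):
    """Invert I = J*t + A*(1 - t):  J(x) = (I(x) - A) / max(t(x), t0) + A."""
    t = np.maximum(t, t0)[..., None]       # broadcast over RGB channels
    return np.clip((I - A) / t + A, 0.0, 1.0)

# Round trip: hazing a scene and recovering it returns the original.
A, t = 0.9, np.full((2, 2), 0.5)
J_true = np.full((2, 2, 3), 0.4)
I = J_true * t[..., None] + A * (1.0 - t[..., None])
J_hat = recover(I, A, t)
```

The clamp max(t, t0) matters in dense haze: as t approaches zero, the division would otherwise blow up sensor noise in the recovered radiance.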

3.4. Evaluation Indexes of Image Dehazing

To evaluate the dehazing performance quantitatively, two popular metrics are adopted in this study: PSNR and SSIM [26]. Given a ground-truth image Y and a processed image X, PSNR is defined in Equation (14): PSNR = 10·log10((2^n − 1)² / MSE), where MSE is the mean square error between the processed image X and the ground-truth image Y, and n is the number of bits per pixel. The SSIM index measures image similarity in terms of brightness, contrast, and structure; it is calculated as in Equation (15): SSIM(X, Y) = ((2μ_X·μ_Y + C1)(2σ_XY + C2)) / ((μ_X² + μ_Y² + C1)(σ_X² + σ_Y² + C2)), where μ_X and μ_Y are the mean values of images X and Y, σ_X and σ_Y are their standard deviations, σ_XY is their covariance, and C1 and C2 are small stabilizing constants. However, a completely clear haze-free reference cannot be obtained in reality, and these two indexes are limited and do not always agree with perceived quality. Therefore, two no-reference IQA indexes were also selected, spatial and spectral entropies quality (SSEQ) [27] and the natural image quality evaluator (NIQE) [28], to compensate for the shortcomings of PSNR and SSIM. SSEQ assesses the quality of the distorted image through the image spectral probability map; its spectral entropy is computed as in Equation (16): E = −Σ_{i,j} P(i, j)·log2 P(i, j), where P(i, j) is the probability map of the pixels in the spectrum. NIQE expresses the quality of the distorted image as the distance between a natural scene statistics (NSS) feature model and the multivariate Gaussian (MVG) fitted to the distorted image features, as in Equation (17): D(ν1, ν2, Σ1, Σ2) = sqrt((ν1 − ν2)ᵀ((Σ1 + Σ2)/2)^(−1)(ν1 − ν2)), where ν1, ν2 and Σ1, Σ2 are the mean vectors and covariance matrices of the natural and distorted MVG models. Note that higher PSNR and SSIM values and lower SSEQ and NIQE values indicate higher quality of the dehazed result.
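The two full-reference metrics are easy to compute directly. The sketch below implements PSNR as in Equation (14) and a single-window (global) variant of SSIM; note that the standard SSIM averages Equation (15) over local windows, so this global version is only a simplification for illustration:

```python
import numpy as np

def psnr(X, Y, n_bits=8):
    """PSNR = 10 * log10((2^n - 1)^2 / MSE) for n-bit images (Eq. 14)."""
    mse = np.mean((X.astype(float) - Y.astype(float)) ** 2)
    if mse == 0:
        return float('inf')                # identical images
    return 10.0 * np.log10((2 ** n_bits - 1) ** 2 / mse)

def ssim_global(X, Y, n_bits=8):
    """Single-window SSIM (Eq. 15) with the usual stabilizing constants
    C1 = (0.01 L)^2 and C2 = (0.03 L)^2, L being the dynamic range."""
    L = 2 ** n_bits - 1
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = X.mean(), Y.mean()
    vx, vy = X.var(), Y.var()
    cov = ((X - mx) * (Y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A uniform offset of 1 gray level between X and Y gives MSE = 1 and hence PSNR = 20·log10(255) ≈ 48.13 dB, a handy sanity check.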

4. Experimental Outcomes and Discussion

In order to verify the effectiveness of the shading partition enhancement mechanism in image dehazing, we analyzed its correctness for obtaining atmospheric illumination observation points and compared it with the established dehazing methods, including Dhara [5], Berman [9], Choi [13], Cho [14], and Cai [16].

4.1. Analysis of the Shading Partition Enhancement Mechanism

On the SOTS dataset, the positions of the observation points of the atmospheric illumination value and the brightness values of the surrounding pixels were calculated with the method of [4] and with the GIC algorithm. In more than 70% of the images, the average brightness of the surrounding pixels calculated by the GIC algorithm was higher than that obtained by [4]. In 74.6% of the images, the average brightness of the observation points and surrounding pixels found by the GIC algorithm fluctuated only around 0.3, showing better stability than [4]. The brightness reflects the fog concentration around the observation point; it can be seen that the observation points of the atmospheric illumination value calculated by the GIC algorithm were more in line with the real imaging situation. In order to verify the effectiveness of the proposed light–dark partition enhancement mechanism in image dehazing, we compared it with various image-enhancement algorithms. Method B1 performs no image enhancement, method B2 uses histogram equalization, method B3 uses contrast-limited adaptive histogram equalization (CLAHE) [29], and method B4 adopts Retinex [30] processing. Except for the different image-enhancement methods, the other processing steps are the same as in the GIC algorithm. Figure 4 shows the dehazing effect on outdoor real foggy images from the SOTS dataset. After GIC processing, the color of the trees in the first picture was greener and their outline clearer, and the scenery in the second picture was clearer. Table 1 shows the average value (AVG) and standard deviation (SD) of the reference indexes obtained after dehazing all the foggy images in the SOTS dataset with the five methods. The average PSNR of the GIC method was significantly higher, and its average SSEQ significantly lower, than those of the other methods. 
The standard deviations of the SSIM and NIQE indexes obtained by the GIC method were the lowest among the selected approaches, demonstrating stable performance. From both qualitative and quantitative perspectives, the shading partition enhancement mechanism delivered a better dehazing effect and stability than the other image-enhancement methods, proving its effectiveness in image dehazing. Because it causes little degradation of image quality, shading partition enhancement can be regarded as a suitable preprocessing mechanism for the SOTS dataset, which contains a considerable number of outdoor images.
Figure 4

Comparison of outdoor image dehazing for the five methods. (a) B1 method processing results, (b) B2 method processing results, (c) B3 method processing results, (d) B4 method processing results, and (e) GIC method processing results.

Table 1

Comparison of indexes of the five methods.

Methods   PSNR            SSIM            SSEQ            NIQE
          AVG     SD      AVG     SD      AVG     SD      AVG     SD
B1        18.67   3.61    0.88    0.05    18.20   6.36    2.61    0.60
B2        18.48   3.25    0.79    0.09    18.01   6.30    3.39    0.81
B3        19.54   2.39    0.71    0.09    16.25   5.87    4.14    0.81
B4        13.75   4.10    0.51    0.24    24.65   15.10   4.09    1.72
GIC       22.07   3.02    0.90    0.05    15.49   5.60    2.76    0.59

The data in bold in the table represent the best index for each column.

4.2. Dehazing of Indoor Simulated Images

Indoor synthetic foggy images from the SOTS dataset were used in simulation experiments to evaluate the above algorithms from both subjective and objective aspects. The image-dehazing algorithms of Dhara [5], Berman [9], Choi [13], Cho [14], and Cai [16] were selected for comparison; their results were produced by the authors' code with the specified parameters. The dehazing results of these methods on indoor images are illustrated in Figure 5.
Figure 5

Dehazing effects of various methods on indoor composite images under different fog concentrations. (a) Input images, (b) fog-free image, (c) Dhara method, (d) Berman method, (e) Choi method, (f) Cho method, (g) Cai method, and (h) the proposed GIC method.

It can be observed from Figure 5 that, for indoor images at different fog concentrations, the Dhara [5], Choi [13], and Cho [14] algorithms achieved good dehazing results; however, this came at the expense of image color fidelity. The GIC method, Berman [9], and Cai [16] all handled the hazy images well. In the red area in the upper-left corner of the image, the GIC method produced more saturated colors than Berman [9] and Cai [16]. In the processing of indoor synthetic images, the GIC method better retained the colors of colored areas and of brightly colored hazy images. After obtaining better subjective results, we used SSEQ to evaluate the quality of the dehazed images. Considering that the fog concentration has a direct impact on the dehazing effect, ten different concentrations were set during verification to test the algorithms' performance. For the same indoor images under ten different fog concentrations, we applied the above methods to the sample image set respectively, as shown in Figure 6.
Figure 6

The SSEQ indicator changes of each method at multiple fog levels.

In general, high levels of fog tend to have a negative impact on the quality of image dehazing. As the fog concentration increased, the SSEQ index obtained by the Cho [14] method rose slightly, while the SSEQ indexes obtained by the other five dehazing methods generally showed a downward trend. The SSEQ index obtained by the GIC method was the lowest among the selected methods and maintained a continuous downward trend. The GIC method thus better preserves image quality, with no obvious loss caused by the contrast enhancement, which proves the feasibility of the GIC method.

4.3. Dehazing of Outdoor Real Images

After achieving good results on the simulated images, we conducted experiments on real outdoor foggy images from the SOTS dataset and compared the results with the dehazing algorithms of Dhara [5], Berman [9], Choi [13], Cho [14], and Cai [16]; the effectiveness of the proposed algorithm was evaluated using the image quality evaluation indexes. In the visual analysis, we additionally included the dehazing results of the He [4] method. The comparison results on outdoor images are shown in Figure 7. As can be seen from Figure 7, after the six candidate methods and the GIC algorithm processed the input image, the clarity and contrast of the image were improved to different degrees. He [4] suffered from halo artifacts where the sky meets objects. Although Dhara [5], Choi [13] and Cho [14] had higher dehazing intensity, they all showed striped color distortion. The dehazing effect of Berman [9] and Cai [16] was the weakest among the methods. Compared with the six candidate methods, the GIC algorithm removed the fog in the image effectively, with no image distortion in the dehazing result. In general, however, the fog removal was still somewhat incomplete.
Figure 7

Comparison of the results of different dehazing methods for outdoor real images. (a) Input image, (b) fog-free image, (c) He method processing results, (d) Berman method processing results, (e) Dhara method processing results, (f) Choi method processing results, (g) Cho method processing results, (h) Cai method processing results, and (i) GIC method processing results.

Table 2 shows the comparison of evaluation indexes for outdoor image dehazing. In terms of the PSNR and SSIM indexes, which reflect the dehazing effect, the GIC method outperformed the other five candidate methods. In terms of the NIQE index, the Cai [16] method was slightly better; Cho [14] achieved the lowest SSEQ index, but its NIQE index was significantly higher than that of the other candidate methods, indicating unbalanced performance. According to the results in Table 2, the GIC method not only had a better dehazing effect but also relatively stable performance, maintaining a good balance across the different performance indexes. These results show that the GIC method has a better restoration effect.
Table 2

Comparison of evaluation indexes in image dehazing.

Methods      PSNR            SSIM            SSEQ            NIQE
             AVG     SD      AVG     SD      AVG     SD      AVG     SD
Dhara [5]    17.13   3.35    0.83    0.08    14.87   5.70    2.75    0.58
Berman [9]   18.33   3.13    0.79    0.09    15.47   6.26    2.80    0.67
Choi [13]    18.97   3.69    0.83    0.07    15.53   6.01    2.80    0.67
Cho [14]     17.78   2.32    0.74    0.08    14.18   8.27    3.26    1.95
Cai [16]     21.90   3.09    0.89    0.08    17.96   7.38    2.66    0.57
GIC          22.07   3.02    0.90    0.05    15.49   5.60    2.76    0.59

The data in bold in the table represent the best index for each column.

The GIC method preprocesses the input image through the shading partition enhancement mechanism, which compensates for the light intensity scattered by atmospheric particles such as fog; at the same time, the influence of surrounding pixels on the target pixel is eliminated. Realistic observation points of atmospheric illumination can then be found in the enhanced image. The global illumination compensation factor optimizes the transmittance in order to obtain a transmittance map with a sharper profile. In this way, the GIC method completes the fog removal with the atmospheric-scattering model. Experimental results show that the GIC method improved performance on outdoor foggy images, with reduced image distortion and darkening in the restoration results. We also tested the computational efficiency of the candidate dehazing methods, measured as the time required to process all the images in the SOTS dataset on a computer with an i5-1135G7 CPU and 16 GB of RAM; the results are shown in Table 3. The dehazing methods proposed by Berman [9] and Choi [13] had low computational efficiency on the SOTS dataset, while the GIC method had the highest running efficiency, which proves the effectiveness and high performance of the algorithm.
Table 3

The efficiency of candidate dehazing methods.

Method      Berman [9]   Dhara [5]   Choi [13]   Cho [14]   GIC
time (s)    2278.5       302.2       25605       307.92     140.44

5. Conclusions

Under the framework of the atmospheric-scattering model, this paper designs and implements the GIC algorithm. Introducing the global illumination compensation factor and the shading partition enhancement can effectively eliminate or reduce the interference from the background and improve the image-dehazing quality. The experimental results show that the proposed method effectively reduces background interference through the shading partition enhancement and the global illumination compensation factor, and that the obtained atmospheric illumination and transmittance are more in line with the real imaging situation. From a quantitative perspective, the GIC method achieves higher PSNR and SSIM indexes for outdoor scene images than the selected candidate algorithms, and it has a better dehazing effect on indoor low-density foggy images. From the perspective of subjective observation, the GIC algorithm maintains the natural color of the image well and produces clear dehazed images with high operating efficiency.
Related references (13 in total):

1.  Single Image Haze Removal Using Dark Channel Prior.

Authors:  Kaiming He; Jian Sun; Xiaoou Tang
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2010-09-09       Impact factor: 6.226

2.  A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior.

Authors:  Qingsong Zhu; Jiaming Mai; Ling Shao
Journal:  IEEE Trans Image Process       Date:  2015-06-18       Impact factor: 10.856

3.  Benchmarking Single Image Dehazing and Beyond.

Authors:  Boyi Li; Wenqi Ren; Dengpan Fu; Dacheng Tao; Dan Feng; Wenjun Zeng; Zhangyang Wang
Journal:  IEEE Trans Image Process       Date:  2018-08-30       Impact factor: 10.856

4.  Guided image filtering.

Authors:  Kaiming He; Jian Sun; Xiaoou Tang
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2013-06       Impact factor: 6.226

5.  Single image dehazing by multi-scale fusion.

Authors:  Codruta Orniana Ancuti; Cosmin Ancuti
Journal:  IEEE Trans Image Process       Date:  2013-08       Impact factor: 10.856

6.  DehazeNet: An End-to-End System for Single Image Haze Removal.

Authors:  Bolun Cai; Xiangmin Xu; Kui Jia; Chunmei Qing; Dacheng Tao
Journal:  IEEE Trans Image Process       Date:  2016-11       Impact factor: 10.856

7.  Single Image Dehazing Using Color Ellipsoid Prior.

Authors:  Wonha Kim
Journal:  IEEE Trans Image Process       Date:  2018-02       Impact factor: 10.856

8.  Residual Spatial and Channel Attention Networks for Single Image Dehazing.

Authors:  Xin Jiang; Chunlei Zhao; Ming Zhu; Zhicheng Hao; Wen Gao
Journal:  Sensors (Basel)       Date:  2021-11-27       Impact factor: 3.576

9.  Image Dehazing Using LiDAR Generated Grayscale Depth Prior.

Authors:  Won Young Chung; Sun Young Kim; Chang Ho Kang
Journal:  Sensors (Basel)       Date:  2022-02-05       Impact factor: 3.576

10.  Single Image Haze Removal from Image Enhancement Perspective for Real-Time Vision-Based Systems.

Authors:  Dat Ngo; Seungmin Lee; Quoc-Hieu Nguyen; Tri Minh Ngo; Gi-Dong Lee; Bongsoon Kang
Journal:  Sensors (Basel)       Date:  2020-09-10       Impact factor: 3.576

