
Low-Illumination Image Enhancement Algorithm Based on Improved Multi-Scale Retinex and ABC Algorithm Optimization.

Ying Sun1,2,3, Zichen Zhao1,2, Du Jiang1,3, Xiliang Tong1, Bo Tao1,4, Guozhang Jiang1,2,3, Jianyi Kong1,2,3, Juntong Yun2,4, Ying Liu2,4, Xin Liu1,4, Guojun Zhao1,2, Zifan Fang5.   

Abstract

To address poor image quality, loss of detail, and excessive brightness amplification when enhancing images captured in low-light environments, this paper proposes a low-light image enhancement algorithm based on improved multi-scale Retinex and Artificial Bee Colony (ABC) algorithm optimization. The algorithm first makes two copies of the original image. For the first copy, the irradiation component is obtained by structure extraction from texture via relative total variation and combined with the multi-scale Retinex algorithm to obtain the reflection component, which is then enhanced with histogram equalization, bilateral gamma function correction, and bilateral filtering. The second copy is enhanced by histogram equalization followed by edge-preserving Weighted Guided Image Filtering (WGIF). Finally, the two results are fused with weights optimized by the ABC algorithm. The mean Information Entropy (IE), Average Gradient (AG), and Standard Deviation (SD) of the enhanced images are 7.7878, 7.5560, and 67.0154, respectively, improvements of 2.4916, 5.8599, and 52.7553 over the original images. Experimental results show that the proposed algorithm alleviates light loss during enhancement, improves sharpness, highlights detail, restores color, and reduces noise with good edge preservation, yielding better visual perception of the image.
Copyright © 2022 Sun, Zhao, Jiang, Tong, Tao, Jiang, Kong, Yun, Liu, Liu, Zhao and Fang.


Keywords:  ABC algorithm; bilateral gamma function; image enhancement; multi-scale retinex; weighted guided image filtering

Year:  2022        PMID: 35480971      PMCID: PMC9035903          DOI: 10.3389/fbioe.2022.865820

Source DB:  PubMed          Journal:  Front Bioeng Biotechnol        ISSN: 2296-4185


Introduction

The vast majority of information acquired by humans comes from vision. Images, as the main carrier of visual information, play an important role in three-dimensional reconstruction, medical detection, automatic driving, target detection and recognition, and other aspects of perception (Li B. et al., 2019; Wang et al., 2019; Yu et al., 2019; Huang et al., 2021; Liu et al., 2022a; Tao et al., 2022a; Yun et al., 2022a; Bai et al., 2022). With the rapid development of optical and computer technology, image acquisition equipment is constantly updated, and images often contain a wealth of valuable information waiting to be discovered and accessed (Jiang et al., 2019a; Huang et al., 2020; Hao et al., 2021a; Cheng and Li, 2021). However, owing to the influence of light, weather, and imaging equipment, captured images in real life are often dark, noisy, poorly contrasted, and partially obliterated in detail (Sun et al., 2020a; Tan et al., 2020; Wang et al., 2020). Such images make regions of interest difficult to identify, reducing image quality and the visual effect for the human eye (Jiang et al., 2019b; Hu et al., 2019), and they also hamper the extraction and analysis of image information, creating considerable difficulty for computers and other vision devices performing target detection and recognition (Su and Jung, 2018; Sun et al., 2020b; Cheng et al., 2020; Luo et al., 2020; Hao et al., 2021b). It is therefore necessary to enhance low-light images with image enhancement technology (Jiang et al., 2019c; Sun et al., 2020c) so as to highlight the detailed features of the original images, improve contrast, reduce noise, clarify blurred and poorly recognizable images, improve their recognition and interpretation, and satisfy the requirements of specific applications (Tao et al., 2017; Ma et al., 2020; Jiang et al., 2021a; Tao et al., 2021; Liu et al., 2022b). 
Metaheuristic algorithms have great advantages for multi-objective problem solving and parameter optimization (Li et al., 2020a; Yu et al., 2020; Chen et al., 2021a; Liu X. et al., 2021; Wu et al., 2022; Xu et al., 2022; Zhang et al., 2022; Zhao et al., 2022). Methods such as multiple-subject clustering and subject extraction, K-means clustering, steady-state analysis, numerical simulation, quantification, and regression are also widely used in data processing (Li et al., 2020b; Sun et al., 2020d; Chen et al., 2022). Artificial Bee Colony (ABC) is an optimization method that imitates the honey harvesting behavior of a bee colony and is a specific application of the swarm intelligence idea. Its main feature is that ABC requires no special information about the problem; it only needs to compare the relative quality of candidate solutions (Li C. et al., 2019; He et al., 2019; Duan et al., 2021), and through the individual local optimum-seeking behavior of each worker bee, the global optimum eventually emerges in the population, with fast convergence (Chen et al., 2021b; Yun et al., 2022b). Considering these advantages of ABC, this paper proposes a low-illumination image enhancement algorithm based on improved multi-scale Retinex and ABC optimization. Building on Retinex theory and layered image processing, the algorithm improves the multi-scale Retinex algorithm with structure extraction from texture via relative total variation, and replicates the original image to obtain a main feature layer and a compensation layer. In the image fusion step, the ABC algorithm optimizes the fusion weight factors of each layer and selects the optimal solution, realizing the enhancement of low-illumination images. 
Finally, the effectiveness of the proposed algorithm is verified through experiments on the LOL dataset. The rest of this paper is organized as follows: Related Work reviews low-illumination image enhancement methods and the Artificial Bee Colony algorithm; Basic Theory describes the fundamentals of Retinex; The Algorithm Proposed in This Paper presents the low-illumination image enhancement algorithm based on improved multi-scale Retinex and ABC optimization; Experiments and Results Analysis reports verification experiments comparing the proposed method with traditional Retinex algorithms, with results analyzed by the Friedman test and the Wilcoxon signed-rank test; and Conclusion summarizes the paper.

Related Work

Image enhancement algorithms fall into two main categories: spatial-domain and frequency-domain algorithms (Vijayalakshmi et al., 2020). Spatial-domain methods mainly include histogram equalization (Tan and Isa, 2019) and the Retinex algorithm. Histogram Equalization (HE) enhances image contrast by adjusting the pixel gray levels of the original image and mapping them onto more gray levels so that they are evenly distributed, but the noise of an HE-processed image is often enhanced as well, and details are lost (Nithyananda et al., 2016). The Retinex image enhancement method proposed by Land E H (Land, 1964) accords well with the visual properties of the human eye, especially in low-illumination enhancement, and performs well overall compared with other conventional methods. Based on Retinex theory, Jobson D J et al. (Jobson et al., 1997) proposed the Single-Scale Retinex (SSR) algorithm, which obtains better contrast and detail by estimating the illumination map but can cause detail loss during enhancement. Researchers subsequently proposed Multi-Scale Retinex (MSR); images enhanced by MSR can exhibit color bias, locally unbalanced enhancement, and the "halo" phenomenon (Wang et al., 2021). Rahman Z et al. (Rahman et al., 2004) therefore proposed Multi-Scale Retinex with Color Restoration (MSRCR), which improves the "halo" and color problems. Applying convolutional neural networks and deep learning has improved enhancement and recognition, but the difficulty of constructing networks and collecting training data sets makes such methods hard to implement (Liu et al., 2021b; Sun et al., 2021; Weng et al., 2021; Yang et al., 2021; Tao et al., 2022b; Liu et al., 2022c). Based on the Retinex algorithm, Wang D et al. (Wang et al., 2017) used Fast Guided Image Filtering (FGIF) to estimate the irradiation component of the original image, combined with bilateral gamma correction to adjust and optimize the image, which preserved details and colors to some extent, but the overall visual brightness was not high. Zhai H et al. (Zhai et al., 2021) proposed an improved Retinex multi-image fusion algorithm that processes and fuses three copies of the image separately; the processed images gain some brightness and contrast, but noise and some detail loss remain. Frequency-domain methods mainly include the Fourier transform, wavelet transform, Kalman filtering, and image pyramids (Li et al., 2019c; Li et al., 2019d; Huang et al., 2019; Chang et al., 2020; Tian et al., 2020; Liu et al., 2021c). These algorithms effectively enhance the structural features of the image, but the target details of images enhanced this way remain blurred. The image-layering enhancement method proposed in recent years has made improved low-light enhancement methods based on this principle increasingly widespread (Liao et al., 2020; Long and He, 2020). Layered enhancement decomposes the input image into base-layer and detail-layer components, enhances the two layers separately, and finally selects appropriate weighting factors for image fusion. Commonly used edge-preserving filters are bilateral filtering, Guided Image Filtering (GIF), and Fast Guided Image Filtering (Singh and Kumar, 2018). Since GIF uses the same linear model and weight factors for every region of the image, it has difficulty adapting to differences in texture features between regions. 
To resolve this problem, Li Z et al. (Li et al., 2014) proposed Weighted Guided Image Filtering (WGIF) based on local variance, which constructs an adaptive weighting factor on top of traditional guided filtering, improving edge preservation while reducing the "halo artifacts" caused by image enhancement. Inspired by the honey harvesting behavior of bee colonies, Karaboga (Karaboga, 2005) proposed a novel global optimization algorithm based on swarm intelligence, the Artificial Bee Colony (ABC), in 2005. Since its introduction, the ABC algorithm has attracted the attention of many scholars and has been analyzed comparatively. Karaboga et al. (Karaboga and Basturk, 2008) analyzed the performance of ABC against other intelligent algorithms on multidimensional and multimodal numerical problems, together with the effect of the ABC control parameter settings. Karaboga et al. (Karaboga and Akay, 2009) were the first to perform a detailed and comprehensive performance analysis of ABC, testing it on 50 numerical benchmark functions and comparing it with other well-known evolutionary algorithms such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), and Ant Colony Optimization (ACO). Akay et al. (Akay and Karaboga, 2009) analyzed the effect of parameter variation on ABC performance. Singh et al. (Singh, 2009) proposed an artificial bee colony algorithm for the minimum spanning tree problem and verified its superiority on such problems. Ozturk et al. (Ozturk and Karaboga, 2011) proposed a hybrid of the artificial bee colony algorithm and Levenberg-Marquardt for training neural networks. Karaboga et al. (Karaboga and Gorkemli, 2014) modified the nectar search formula to seek the best nectar source within a certain radius of the exploited source, improving the local optimum-seeking ability of the swarm algorithm.

Basic Theory

Fundamentals of Retinex

Retinex is a common image enhancement method grounded in scientific experiment and analysis, proposed by Edwin H. Land in 1963 (Land and McCann, 1971). In this theory, two factors determine the color of an observed object, as shown in Figure 1: the reflective properties of the object and the intensity of the light around it. According to the theory of color constancy, however, the inherent properties of the object are not affected by illumination, and the object's ability to reflect different light waves largely determines its color (Zhang et al., 2018).
FIGURE 1

Retinex schematic.

This theory holds that the color of a substance is consistent and depends on its ability to reflect different wavelengths, independent of the absolute intensity of the reflected light and unaffected by non-uniform illumination; Retinex is therefore founded on color constancy. While traditional linear and nonlinear methods enhance only one type of feature of the object, this theory allows adjustment of dynamic range compression, edge enhancement, and color invariance, enabling adaptive image enhancement. The Retinex method assumes that the original image is the product of the reflection image and the illumination image:

I(x, y) = R(x, y) * L(x, y)    (1)

In Eq. 1, I is the original image, R is the reflection component carrying the detail of the target object, and L is the irradiation component carrying the intensity of the surrounding light. To reduce computational complexity, traditional Retinex theory usually takes base-10 logarithms on both sides of Eq. 1, converting multiplication and division in the real domain into addition and subtraction in the logarithmic domain. The conversion results are as follows:

log I(x, y) = log R(x, y) + log L(x, y)    (2)
log R(x, y) = log I(x, y) - log L(x, y)    (3)
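As a minimal illustration of the log-domain decomposition above (a sketch, not the paper's code; the `retinex_decompose` helper and its epsilon guard are assumptions added for numerical safety):

```python
import numpy as np

def retinex_decompose(image, illumination, eps=1e-6):
    """Split an image into log-domain reflectance per Retinex theory.

    `illumination` is an estimate of L (e.g. from a smoothing filter);
    returns log10 R = log10 I - log10 L, following Eqs 1-3.
    """
    log_i = np.log10(image.astype(np.float64) + eps)
    log_l = np.log10(illumination.astype(np.float64) + eps)
    return log_i - log_l  # log-domain reflectance

# Toy check: if I = R * L exactly, the decomposition recovers log10(R).
r = np.full((4, 4), 0.5)
l = np.full((4, 4), 0.2)
i = r * l
log_r = retinex_decompose(i, l, eps=0.0)
```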

Traditional Retinex Algorithm

The SSR method uses a Gaussian kernel as the center-surround function: the illumination component is obtained by convolving it with the original image, and subtracting in the logarithmic domain yields the reflection component. The specific expressions are as follows:

r(x, y) = log I(x, y) - log[F(x, y) * I(x, y)]    (4)
F(x, y) = K exp[-(x^2 + y^2) / sigma^2]    (5)

In Eqs 4, 5, F(x, y) denotes the center-surround function (Gaussian kernel), the illumination estimate is obtained by convolving F with I, and sigma is the Gaussian surround scale parameter, the only adjustable parameter in SSR. When sigma is small, image details are better retained, but the color is easily distorted; when sigma is larger, image color is better preserved, but details are easily lost (Parihar and Singh, 2018; Jiang et al., 2021b). To maintain high image fidelity while compressing the dynamic range, researchers proposed the Multi-Scale Retinex (MSR) method on the basis of SSR (Peiyu et al., 2020). MSR performs a weighted sum over several Gaussian surround scales:

r_MSR(x, y) = sum_{k=1}^{K} w_k { log I(x, y) - log[F_k(x, y) * I(x, y)] }    (6, 7)

In Eq. 7, K is the number of Gaussian center-surround functions; when K = 1, MSR degenerates to SSR. w_k is the weighting factor at each Gaussian surround scale, and to balance the advantages of SSR at high, medium, and low scales, K is usually taken as 3 with w_k = 1/3. To address the color bias of SSR and MSR, researchers developed MSRCR (Weifeng and Dongxue, 2020), which adds a color restoration factor C_i to MSR to adjust the color ratio between channels:

R_MSRCR_i(x, y) = C_i(x, y) r_MSR_i(x, y),  C_i(x, y) = beta log[ alpha I_i(x, y) / sum_{j=1}^{3} I_j(x, y) ]    (8, 9)

In Eq. 9, beta is the gain constant; alpha is the nonlinear intensity control parameter; I_i denotes the image of the ith channel, and the denominator sums the channels at each pixel. After MSRCR processing, negative pixel values usually appear, so color balance is achieved by linear mapping with overflow judgment to reach the desired effect.
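The SSR and MSR formulas above can be sketched as follows. This is an illustrative implementation assuming SciPy's Gaussian filter as the center-surround convolution, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr(image, sigma, eps=1e-6):
    """Single-Scale Retinex (Eq. 4): log I minus log of the Gaussian-smoothed I."""
    img = image.astype(np.float64) + eps
    surround = gaussian_filter(img, sigma)  # F(x, y) * I(x, y)
    return np.log10(img) - np.log10(surround)

def msr(image, sigmas=(15, 80, 250), weights=None):
    """Multi-Scale Retinex (Eq. 7): weighted sum of SSR at several scales."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)  # w_k = 1/K
    return sum(w * ssr(image, s) for w, s in zip(weights, sigmas))
```

The scale triple (15, 80, 250) matches the parameters used later in the paper's experiments.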

The Algorithm Proposed in This Paper

The low-illumination image enhancement algorithm proposed in this paper, based on improved multi-scale Retinex and ABC optimization, divides the image into a main feature layer and a compensation layer. For the main feature layer, HE is first used for image enhancement and WGIF is selected for edge-preserving noise reduction. For the compensation layer, the irradiation component of the original image is first obtained by structure extraction from texture via relative total variation; the original image is then processed with the MSRCR algorithm to obtain the reflection component for color restoration, followed by histogram equalization, bilateral gamma function correction, and edge-preserving filtering. Finally, the main feature layer and the compensation layer are fused with optimal parameters, which are obtained by adaptive correction with the ABC algorithm, achieving image enhancement under low illumination. The flow chart of the algorithm is shown in Figure 2.
FIGURE 2

Flowchart of the algorithm in this paper.


Main Feature Layer

Weighted Guided Image Filtering

Guided Image Filtering (GIF) is a filtering method proposed by He K et al. (He et al., 2012); it is an image smoothing filter based on a local linear model. The basic idea is to assume that the output image q is linearly related to the guide image G within a local window omega_k:

q_i = a_k G_i + b_k,  for all i in omega_k    (10)

To find the linear coefficients in Eq. 10, a cost function is introduced:

E(a_k, b_k) = sum_{i in omega_k} [ (a_k G_i + b_k - p_i)^2 + epsilon a_k^2 ]    (11)

Minimizing E by least squares gives the linear coefficients:

a_k = [ (1/|omega|) sum_{i in omega_k} G_i p_i - mu_k p_bar_k ] / (sigma_k^2 + epsilon),   b_k = p_bar_k - a_k mu_k    (12)

In Eqs 10, 11, 12, q is the output image, G is the guide image, and p is the input image; a_k, b_k are the linear coefficients of the local window omega_k; epsilon is the regularization coefficient that prevents a_k from becoming too large, and the larger epsilon is, the more obvious the smoothing effect when the input image itself is used as the guide. mu_k and sigma_k^2 denote the mean and variance of G within omega_k, |omega| is the number of pixels in the local window, and p_bar_k is the mean of the input image within omega_k. Since a pixel of the output image is covered by windows centered at different positions, the coefficients are averaged:

q_i = a_bar_i G_i + b_bar_i    (13, 14)

GIF uses a uniform regularization factor for every region of the image, and a large factor produces a "halo" phenomenon in edge regions. To address this, WGIF introduces an edge-aware weighting factor Gamma_G(i) that adaptively rescales the regularization coefficient, achieving adaptivity to each region of the image and improving the filtering effect. The weighting factor and the new linear coefficient are:

Gamma_G(i) = (1/N) sum_{j=1}^{N} [ sigma_{G,1}^2(i) + lambda ] / [ sigma_{G,1}^2(j) + lambda ]    (15)

a_k = [ (1/|omega|) sum_{i in omega_k} G_i p_i - mu_k p_bar_k ] / [ sigma_k^2 + epsilon / Gamma_G(k) ]    (16)

In Eqs 15, 16, sigma_{G,1}^2(i) is the variance of the guide image in a 3x3 window centered at pixel i, N is the number of pixels, and lambda is a small regularization constant proportional to the dynamic range L of the image (Li et al., 2014). A comparison of results processed by WGIF and FGIF is shown in Figure 3: the FGIF-processed images still contain some noise, while the WGIF results are clearly improved in this respect.
FIGURE 3

The first row is the image obtained after FGIF processing; The second row is the image obtained after WGIF processing.

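The GIF and WGIF steps above can be sketched as follows. This is a simplified illustration: the WGIF edge-aware weight here approximates Li et al.'s Gamma factor from local variance, and SciPy's `uniform_filter` stands in for the box mean, so it is not the paper's exact implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(G, p, r=8, eps=0.01):
    """He et al.'s guided filter: q = a*G + b with a, b averaged over windows."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mu_G, mu_p = mean(G), mean(p)
    var_G = mean(G * G) - mu_G ** 2          # sigma_k^2
    cov_Gp = mean(G * p) - mu_G * mu_p
    a = cov_Gp / (var_G + eps)               # Eq. 12 analogue
    b = mu_p - a * mu_G
    return mean(a) * G + mean(b)             # averaged coefficients (Eqs 13, 14)

def weighted_guided_filter(G, p, r=8, lam=1e-4):
    """WGIF sketch: a per-pixel edge weight rescales the regularizer."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    var_G = mean(G * G) - mean(G) ** 2
    gamma = (var_G + lam) / (np.mean(var_G) + lam)  # approximation of Eq. 15
    mu_G, mu_p = mean(G), mean(p)
    cov_Gp = mean(G * p) - mu_G * mu_p
    a = cov_Gp / (var_G + lam / gamma)       # adaptive regularization (Eq. 16)
    b = mu_p - a * mu_G
    return mean(a) * G + mean(b)
```

Edge regions (large local variance) get a larger gamma, hence a smaller effective regularizer and less smoothing, which is what suppresses the "halo".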

Obtaining the Main Feature Layer

HE is used for image enhancement and WGIF for edge-preserving noise reduction. The results of each step are shown in Figure 4. As the figure shows, the HE-processed image is improved over the original, but the noise in the image is also extracted and amplified in the process. WGIF filters out some of this noise (along with minor detail) and, because it accounts for texture differences between regions of the image, avoids the "halo" phenomenon and the "artifacts" caused by gradient inversion.
FIGURE 4

Results of obtaining the main feature layer. (A) Image to be processed (B) Histogram equalization (C) WGIF.


Compensation Layer

Structure Extraction From Texture via Relative Total Variation

As described in Traditional Retinex Algorithm, the traditional Retinex algorithm convolves a Gaussian kernel with the original image and, after removing the filtered irradiation component, takes the reflection component as the enhancement result. However, the Gaussian estimate is prone to bias at image edges, producing the "halo" phenomenon, which leads to unnatural enhancement results under insufficient illumination. To address this problem, this paper obtains the irradiation component of the compensation layer with structure extraction from texture via relative total variation, proposed by Xu L et al. (Xu et al., 2012), which better preserves the main edge information of the image and thus reduces "halos" in edge-rich regions. The model of the method is:

S = argmin_S sum_p { (S_p - I_p)^2 + lambda [ D_x(p) / (L_x(p) + epsilon) + D_y(p) / (L_y(p) + epsilon) ] }    (17)

In Eqs 17-21, S is the output image, p is the pixel index, and lambda is the weighting factor adjusting the degree of smoothing: the larger lambda is, the smoother the image. epsilon is a positive number close to zero that prevents the denominator from being zero; D_x(p) and D_y(p) are the windowed total variations of pixel p in the x and y directions, L_x(p) and L_y(p) are the corresponding windowed inherent variations, and R(p) is the window centered on p. The parameter sigma is the texture suppression factor: the larger sigma is, the stronger the texture suppression effect. To demonstrate the advantages of this method in practice, images from the LOL dataset were processed both by structure extraction via relative total variation and by convolution with the Gaussian kernel of the traditional Retinex algorithm to obtain irradiation components; the results of the two methods are shown in Figure 5. 
Meanwhile, Information Entropy (IE) and Standard Deviation (SD) were used to assess quality; the results are shown in Tables 1 and 2. As Figure 5 shows, the structure extraction from texture via relative total variation preserves the irradiation component better, and the evaluation metrics confirm this: the IE and SD of this method exceed those of the Gaussian kernel convolution of the traditional Retinex algorithm, proving that the relative total variation method preserves image information better when acquiring the irradiation component.
FIGURE 5

The first row is the irradiation component obtained by Gaussian kernel function; the second row is the irradiation component obtained by the structure extraction from texture via relative total variation method.

TABLE 1

Evaluation of IE for five sets of images.

Image set                   1        2        3        4        5
Gaussian kernel function    4.3554   4.5457   4.0200   3.2400   2.9727
This method                 5.3596   5.0936   4.7888   4.7038   4.7922
TABLE 2

Evaluation of SD for five sets of images.

Image set                   1        2        3        4        5
Gaussian kernel function    6.7948   10.8175  5.7387   2.7720   2.7177
This method                 16.5436  17.9734  8.8170   8.1452   9.0849

Obtaining of Compensation Layers

For the original image, a duplicate layer is created to obtain the image to be processed; structure extraction from texture via relative total variation is used to obtain the irradiation component, which is combined with the Retinex principle and color restoration to obtain the reflection component; histogram equalization, bilateral gamma correction, and bilateral filtering are then applied. The results of each step are shown in Figure 6. As the figure shows, MSRCR essentially recovers the image content, but the saturation is insufficient to restore the real scene. After HE, color is recovered to some extent, but the light-dark transition regions remain unsatisfactory. Therefore, this paper uses an improved bilateral gamma function for processing (Wang et al., 2021). The traditional bilateral gamma function is as follows:

O(x, y) = (1/2) [ I(x, y)^gamma + 1 - (1 - I(x, y))^gamma ]    (23)
FIGURE 6

Results of obtaining the compensation layer. (A) Image to be processed (B) MSRCR (C) Histogram equalization (D) Bilateral gamma correction (E) Bilateral filter.

In Eq. 23, I is the input image to be processed, O is the output image, and gamma is a constant in (0, 1) controlling the enhancement strength; I^gamma is the curve correcting the dark region and 1 - (1 - I)^gamma is the curve correcting the bright region. Since the traditional bilateral gamma function can only apply a fixed, mechanical enhancement, and considering the distribution characteristics of the illumination, scholars improved the bilateral gamma function to adapt gamma to the illumination. In Eq. 24, m is the pixel mean of the illumination image, and gamma together with an adjustment parameter (with the values chosen in the original work) adapt the correction to the local illumination; hence the improved bilateral gamma function adaptively corrects the luminance transition regions. Finally, bilateral filtering performs edge-preserving noise reduction to obtain the final compensation layer.
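A hedged sketch of bilateral gamma correction in the common two-curve form described above (the exact adaptive-gamma formula of the improved version is not reproduced here, and the fixed `gamma=0.5` default is an assumption for illustration):

```python
import numpy as np

def bilateral_gamma(L, gamma=0.5):
    """Bilateral gamma correction sketch for L normalized to [0, 1].

    Averages a dark-region curve L**gamma (lifts dark pixels) with a
    bright-region curve 1 - (1 - L)**gamma (compresses bright pixels),
    so mid-gray and the endpoints 0 and 1 are left unchanged.
    """
    dark = L ** gamma                   # dark-region correction
    bright = 1.0 - (1.0 - L) ** gamma   # bright-region correction
    return 0.5 * (dark + bright)
```

The symmetry of the two curves is what lets the function brighten shadows and tame highlights at the same time, unlike a single one-sided gamma curve.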

Image Fusion

Selection of the Fitness Function

Through the above processing flow, the main feature layer and compensation layer are obtained and finally fused. To guide the fusion, an image evaluation system is established with three evaluation indexes: Information Entropy, Average Gradient, and Standard Deviation. The Standard Deviation (SD) reflects the dispersion of the image pixels: the larger the standard deviation, the greater the dynamic range of the image and the more gradation levels. For an M x N image f with mean mu:

SD = sqrt( (1 / (M N)) sum_{i=1}^{M} sum_{j=1}^{N} (f(i, j) - mu)^2 )    (25)

In Eq. 25, M is the width of the input image and N is the height. The Average Gradient (AG) represents the variation of small details across the image: the larger the AG, the sharper the image detail and the greater the sense of hierarchy:

AG = (1 / (M N)) sum_{i} sum_{j} sqrt( [ (df/dx)^2 + (df/dy)^2 ] / 2 )    (26)

The Information Entropy (IE) of an image measures the amount of information it carries: the greater the IE, the more informative and detailed the image, and the higher its quality:

IE = - sum_{x=0}^{R} P(x) log2 P(x)    (27)

In Eq. 27, R is the maximum gray level, usually R = 2^8 - 1 = 255, and P(x) is the probability that a pixel of the image takes the gray value x. Drawing on the concept of multi-objective optimization (Li et al., 2019e; Liao et al., 2021; Xiao et al., 2021; Liu et al., 2022d; Yun et al., 2022c), IE, AG, and SD are combined in equal proportion, treating the three as equally important in image evaluation:

F = IE + AG + SD    (28)

Applying different weights to the main feature layer and the compensation layer during weighted fusion yields the fitness values shown in Figure 7. The fitness clearly varies with the weights, and a maximum exists within the weight range. To determine the optimal weights, an adaptive optimization evaluation system needs to be constructed.
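The three evaluation indexes and the equal-weight fitness can be sketched as follows. This is an illustrative implementation for 8-bit grayscale images; the paper's exact discretization of the gradients in Eq. 26 may differ:

```python
import numpy as np

def information_entropy(img):
    """IE (Eq. 27): -sum p(x) log2 p(x) over the 256 gray levels."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """AG (Eq. 26): mean magnitude of forward finite differences."""
    f = img.astype(np.float64)
    dx = np.diff(f, axis=1)[:-1, :]   # horizontal differences
    dy = np.diff(f, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2)))

def fitness(img):
    """Equal-weight sum of IE, AG and SD (Eq. 28)."""
    sd = float(np.std(img.astype(np.float64)))
    return information_entropy(img) + average_gradient(img) + sd
```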
FIGURE 7

Value of fitness function with different weights.

Traditional nonlinear optimization algorithms update the candidate solution by derivative-based rules, such as Gradient Descent, Newton's method, and quasi-Newton methods. When solving multi-objective nonlinear optimization problems, these can be hard to apply because of the computational complexity of their prescribed steps: the convergence of Gradient Descent slows as it approaches a minimum and requires many iterations, while Newton's method converges at second order, which is fast, but each step requires solving the inverse of the Hessian matrix of the objective function, which is computationally expensive. Metaheuristic algorithms instead model the optimization problem on the laws of biological activity and natural physical phenomena. Following the laws of natural evolution, an evolution-based metaheuristic uses the population's previous experience in solving the problem and retains the approaches that have worked well, so that candidate solutions improve over the iterations and finally arrive at the best solution. Considering the computational cost of the objective function and these features of metaheuristics, the artificial bee colony algorithm is chosen for the optimization.

Artificial Bee Colony Algorithm

Inspired by the honey harvesting behavior of bee colonies, Karaboga (2005) proposed a novel global optimization algorithm based on swarm intelligence, the Artificial Bee Colony (ABC), in 2005. The bionic principle is that bees perform different activities during nectar collection according to their division of labor, sharing and exchanging colony information to find the best nectar source. In ABC, the population is divided into three types of bees: employed bees, scout bees, and follower bees. When an employed bee finds a honey source, it shares it with follower bees with a certain probability; a scout bee does not follow any other bee and searches for honey sources alone, and on finding one it becomes an employed bee and recruits followers; when a follower bee is recruited by multiple employed bees, it chooses one of them to follow until the source is exhausted. The initial location of each nectar source is determined by:

x_ij = L_j + rand(0, 1) * (U_j - L_j)    (29)

In Eq. 29, rand(0, 1) is a random number that follows a uniform distribution over [0, 1], and U_j and L_j denote the upper and lower bounds of dimension j. An employed bee searches for a new nectar source by:

v_ij = x_ij + phi_ij * (x_ij - x_kj)    (30)

In Eq. 30, phi_ij is a random number uniformly distributed on [-1, 1] that determines the degree of perturbation, scaled by an acceleration coefficient usually taken as 1, and k != i indexes a randomly chosen neighboring source. The probability of a follower bee selecting employed bee i is:

P_i = fit_i / sum_n fit_n    (31)

During the search, if a source has not been improved after the number of stagnant iterations reaches the threshold T, the source is abandoned, and the scout bee generates a new nectar source by Eq. 29. The flow chart of the artificial bee colony algorithm is shown in Figure 8.
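The three ABC phases above can be sketched as a minimal minimizer; this is a generic illustration (the fixed seed, population size, and the toy 1-D objective are assumptions), not the paper's tuned implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_minimize(f, lb, ub, n_pop=20, max_iter=100, limit=30):
    """Minimal Artificial Bee Colony sketch for a scalar function on a box.

    Employed/follower bees perturb sources via v = x + phi*(x - x_k)
    (Eq. 30); a source unimproved for `limit` trials is abandoned and
    reinitialized by a scout (Eq. 29).
    """
    dim = len(lb)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = lb + rng.random((n_pop, dim)) * (ub - lb)        # Eq. 29: init sources
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_pop, int)

    def try_move(i):
        k = rng.integers(n_pop - 1)
        k += k >= i                                      # partner k != i
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])  # Eq. 30 perturbation
        v = np.clip(v, lb, ub)
        fv = f(v)
        if fv < fit[i]:
            X[i], fit[i], trials[i] = v, fv, 0           # greedy selection
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_pop):                           # employed-bee phase
            try_move(i)
        p = 1.0 / (1.0 + fit - fit.min())                # Eq. 31 analogue
        p /= p.sum()
        for i in rng.choice(n_pop, n_pop, p=p):          # follower-bee phase
            try_move(i)
        for i in np.where(trials > limit)[0]:            # scout-bee phase
            X[i] = lb + rng.random(dim) * (ub - lb)
            fit[i], trials[i] = f(X[i]), 0
    best = fit.argmin()
    return X[best], fit[best]

# Toy use: find the fusion weight w in [0, 1] minimizing (w - 0.7)^2.
w, fw = abc_minimize(lambda x: (x[0] - 0.7) ** 2, [0.0], [1.0])
```

Note the greedy per-source selection: a move is only kept if it improves the fitness, which is what makes the trial counter a meaningful stagnation signal for the scout phase.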
FIGURE 8

Flowchart of artificial bee colony algorithm.

The fitness function above is optimized iteratively by the artificial bee colony algorithm, with the number of variables set to 2, the maximum number of iterations (max-iter) to 100, the population size (n-pop) to 45, and the maximum number of times a honey source may be mined to 90. The convergence curve of the optimal weight parameters is shown in Figure 9. Since the optimization algorithm seeks a minimum, the results are inverted; as Figure 9 shows, the maximum value under this fitness function is 42.0534, and convergence is completed after 14 iterations.
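The fusion step that these optimized weights drive can be sketched as follows. `fuse` is a hypothetical helper (not taken from the paper's code) that blends the two enhanced copies pixel-wise with the ABC-selected weights w1 and w2.

```python
import numpy as np

def fuse(img1, img2, w1, w2):
    """Pixel-wise weighted fusion of the two enhanced images (hypothetical helper)."""
    out = w1 * img1.astype(np.float64) + w2 * img2.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)  # keep the result a valid 8-bit image
```

With the weights found by the ABC search, the fused result would then be `fuse(enhanced_a, enhanced_b, w1, w2)`.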
FIGURE 9

Convergence curve of artificial bee colony algorithm. (A) Convergence curve (B) Partial Enlargement.


Experiments and Results Analysis

The computer used in this experiment ran a 64-bit Windows 10 operating system; the CPU is an Intel(R) Core(TM) i5-6300HQ at 2.30 GHz; the GPU is an NVIDIA 960M with 2 GB of GPU memory; the RAM is 8 GB. All algorithms in this paper were run in MATLAB 2021b and Python 3.7 on the PyCharm platform, and statistical analysis of the results was performed with IBM SPSS Statistics 26. The images used in the experiments are all from the LOL dataset; 200 low-illumination images were randomly selected and tested one by one with the algorithm, and representative images were selected for comparison of the processing effects. The algorithm proposed in this paper is compared with the SSR, MSR and MSRCR algorithms and with the algorithms of the literature (Zhai et al., 2021) and the literature (Wang et al., 2017). The Gaussian surround scale parameter of the SSR algorithm is set to 100; the Gaussian surround scale parameters of the MSR algorithm are set to 15, 80 and 250; those of the MSRCR algorithm are set to 15, 80 and 250, with α = 125 and β = 46; the algorithms of Zhai et al. (2021) and Wang et al. (2017) are rebuilt according to the content of the respective papers, restoring each algorithm as faithfully as possible. The image enhancement results of the different methods are analyzed by subjective and objective evaluation, and the processing results of each method are shown in Figure 10.
FIGURE 10

Low-illumination image processing results under different algorithms. (A) Original image (B) SSR (C) MSR (D) MSRCR (E) Literature (Zhai et al., 2021) (F) Literature (Wang et al., 2017) (G) Method of this paper.


Subjective Evaluation

As Figure 10 shows, the brightness of the images processed by the SSR and MSR algorithms is improved compared with the original images, but color retention is poor: the images are whitish and the color loss is serious. MSRCR preserves the brightness improvement of the former methods and restores color to some extent, but color reproduction is still low and details are lost. The results of the literature (Zhai et al., 2021) show better color reproduction but only moderate brightness enhancement; some detail that is not effectively enhanced remains buried in the dark areas of the image, notably the end of the bed in Figure 10(E)-1, the shadow of the cabinet in the lower left corner of Figure 10(E)-4 and the cabinet in the middle of Figure 10(E)-8. A small amount of noise also remains in places, notably at the edges of Figure 10(E)-1 and on the glass in Figure 10(E)-5. The method of the literature (Wang et al., 2017) improves the brightness of the image with essentially no noise and relatively good color retention, but the images lack a sense of depth, notably the bed sheet in Figure 10(F)-1 and the cabinet in Figure 10(F)-4; its results also show some detail loss, notably in the restored shadows of Figures 10(F)-4 and 10(F)-8. In overall comparison, the enhanced images obtained by the algorithm in this paper have higher color fidelity, more prominent details and better structural information, and are more consistent with human visual perception.

Objective Evaluation

Subjective evaluation is susceptible to interference from other factors and varies from person to person. For a better comparison of the image quality of the enhancement results of the different methods, and to ensure the reliability of the experiments, Standard Deviation (SD), Information Entropy (IE) and Average Gradient (AG) are used as evaluation metrics in this paper. SD reflects the dispersion of the image pixels: the greater the SD, the greater the dynamic range of the image. IE measures the amount of information in an image: the higher the IE, the more information the image contains. AG represents the variation of small details across the image: the larger the AG, the stronger the image's sense of depth. The evaluation results of low-illumination image enhancement with the different algorithms are shown in Tables 3–5.
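The three metrics can be computed directly. The sketch below uses common definitions for an 8-bit grayscale input; the exact formulas used in the paper may differ in detail.

```python
import numpy as np

def std_dev(img):
    # SD: dispersion of pixel intensities, a proxy for dynamic range
    return float(np.std(img.astype(np.float64)))

def information_entropy(img):
    # IE: Shannon entropy (bits) of the 256-bin gray-level histogram
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    # AG: mean magnitude of horizontal and vertical finite differences
    f = img.astype(np.float64)
    gx = f[:-1, 1:] - f[:-1, :-1]   # horizontal differences
    gy = f[1:, :-1] - f[:-1, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A flat image scores zero on all three metrics, while an image split evenly between black and white has IE = 1 bit and SD = 127.5.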
TABLE 3

SD of low-illumination image enhancement with different algorithms.

Images | Original Image | SSR | MSR | MSRCR | Literature (Zhai et al., 2021) | Literature (Wang et al., 2017) | Method of This Paper
0 | 12.3554 | 40.2353 | 36.9085 | 30.6476 | 30.6476 | 33.0507 | 66.2931
1 | 17.6841 | 42.4476 | 40.1372 | 41.9368 | 41.9368 | 20.6165 | 67.0082
2 | 19.3482 | 31.1282 | 29.6073 | 26.5534 | 26.5534 | 24.7074 | 64.7167
3 | 18.1363 | 35.9767 | 31.5507 | 31.9188 | 31.9188 | 27.1054 | 67.4371
4 | 11.8475 | 32.8313 | 34.4849 | 27.4205 | 27.4205 | 27.8992 | 64.7396
5 | 10.7188 | 23.7767 | 20.9313 | 19.8918 | 19.8918 | 16.3517 | 65.5351
6 | 29.1959 | 39.6458 | 37.9548 | 36.3184 | 36.3184 | 32.8236 | 66.2019
7 | 9.4808 | 33.8743 | 28.9689 | 27.1850 | 27.1850 | 23.2276 | 65.4500
8 | 11.0124 | 31.6604 | 27.7723 | 25.6905 | 25.6905 | 20.4735 | 64.5207
9 | 9.2594 | 40.2099 | 35.3906 | 32.5299 | 32.5299 | 23.9898 | 65.1388
TABLE 5

AG of low-illumination image enhancement with different algorithms.

Images | Original Image | SSR | MSR | MSRCR | Literature (Zhai et al., 2021) | Literature (Wang et al., 2017) | Method of This Paper
0 | 2.3568 | 8.7104 | 8.8584 | 7.2668 | 5.8000 | 6.8025 | 8.8750
1 | 1.4647 | 5.0681 | 5.2774 | 4.6985 | 4.1230 | 4.7034 | 6.9498
2 | 1.7148 | 4.9633 | 5.2429 | 4.7734 | 3.2228 | 4.6958 | 7.8268
3 | 1.7484 | 4.8379 | 4.8057 | 4.5739 | 3.8150 | 4.6851 | 6.0248
4 | 2.0373 | 6.7586 | 6.8922 | 6.1064 | 4.4768 | 5.9570 | 8.7011
5 | 1.5059 | 3.7788 | 3.7817 | 3.5405 | 4.8769 | 4.9080 | 6.9163
6 | 1.9125 | 7.2986 | 7.7301 | 8.7049 | 5.1720 | 7.4135 | 9.1211
7 | 1.3312 | 7.0205 | 6.9178 | 6.3306 | 5.1602 | 5.7015 | 7.0233
8 | 1.3559 | 4.8460 | 4.9382 | 4.4152 | 4.4778 | 5.5069 | 6.9878
9 | 1.2038 | 6.5178 | 6.4573 | 6.0921 | 6.3476 | 6.5793 | 6.8033
Statistical analysis is applied to the data in Tables 3–5. The Friedman test is used to analyze the variability of the experimental results, and the Wilcoxon signed-rank test is used to analyze the advantage of the method proposed in this paper over the other methods. The Friedman test is a rank-based statistical test for multiple related samples, proposed by M. Friedman in 1937. The Friedman test requires the following conditions to be met: (1) ordinal-level data; (2) three or more groups; (3) related groups; and (4) a random sample of matched values. Clearly, the data in Tables 3–5 satisfy these requirements. Under the Friedman test, the following hypotheses are set: H0: there is no difference among the compared methods. H1: there are differences among the compared methods. The data are imported into SPSS for analysis, and the results are shown in Tables 6 and 7.
TABLE 6

The mean of Rank at Evaluation Indexes.

Method | SD | IE | AG
Original_image | 1.00 | 1.00 | 1.00
SSR | 5.80 | 5.60 | 4.70
MSR | 4.70 | 4.40 | 5.20
MSRCR | 3.50 | 3.60 | 3.40
Literature_Zhai | 3.50 | 4.30 | 2.50
Literature_Wang | 2.50 | 2.10 | 4.20
Ours | 7.00 | 7.00 | 7.00
TABLE 7

Friedman test statistics at Evaluation Indexes.

Statistic | SD | IE | AG
Number of cases | 10 | 10 | 10
Chi-square (χ²) | 52.457 | 52.671 | 48.386
Degrees of freedom | 6 | 6 | 6
Asymptotic significance | 0.000 | 0.000 | 0.000
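As a cross-check outside SPSS, the Friedman test can be reproduced with SciPy. The sketch below uses the IE values of Table 4, passing each method's column as one sample; SciPy applies a tie correction that can make its statistic differ slightly from the SPSS value when tied ranks occur, but the IE columns contain no ties.

```python
from scipy.stats import friedmanchisquare

# IE values from Table 4, one list per method (ten test images each)
original = [5.3467, 5.5689, 5.5679, 5.9126, 5.2654, 5.1787, 5.3815, 4.7559, 5.0932, 4.9813]
ssr      = [6.8912, 6.7260, 6.4388, 6.8552, 6.3994, 6.2986, 7.0954, 6.9159, 6.5647, 6.9424]
msr      = [6.7853, 6.5138, 6.2581, 6.6134, 6.2402, 6.0710, 6.9754, 6.5923, 6.3507, 6.7524]
msrcr    = [6.6234, 6.4478, 6.2191, 6.6453, 6.2048, 6.0089, 6.9340, 6.5707, 6.2511, 6.6036]
zhai     = [6.5506, 6.4425, 6.6514, 6.7434, 6.4494, 6.5579, 6.6517, 6.5038, 6.6169, 6.4098]
wang     = [6.3645, 5.7044, 5.9694, 6.3292, 5.9828, 5.5448, 6.6910, 6.0132, 5.7033, 6.0896]
ours     = [7.7812, 7.8505, 7.7768, 7.8113, 7.7944, 7.8458, 7.6741, 7.7403, 7.7761, 7.7995]

stat, p = friedmanchisquare(original, ssr, msr, msrcr, zhai, wang, ours)
print(f"chi-square = {stat:.3f}, p = {p:.2e}")  # chi-square = 52.671, matching Table 7
```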
The Wilcoxon signed-rank test was proposed by F. Wilcoxon in 1945. It ranks the absolute values of the differences between each observation and the central position under the null hypothesis, and sums the ranks separately by sign to form its test statistic. From Tables 3–5 it can be seen that the method of this paper is numerically greater than the other algorithms, so the following hypotheses are set: H0: the images enhanced by our method do not differ from those of the other methods. H1: the images enhanced by our method differ from those of the other methods. The data are imported into SPSS for analysis, and the results are shown in Tables 8 and 9.
TABLE 8

Rank.

Comparison | Ranks | Number of Cases | Mean Rank | Sum of Ranks
Ours - SSR | Ours < SSR | 0 | 0.00 | 0.00
Ours - SSR | Ours > SSR | 10 | 5.50 | 55.00
Ours - SSR | Ours = SSR | 0 | |
Ours - SSR | Total | 10 | |
Ours - MSR | Ours < MSR | 0 | 0.00 | 0.00
Ours - MSR | Ours > MSR | 10 | 5.50 | 55.00
Ours - MSR | Ours = MSR | 0 | |
Ours - MSR | Total | 10 | |
Ours - MSRCR | Ours < MSRCR | 0 | 0.00 | 0.00
Ours - MSRCR | Ours > MSRCR | 10 | 5.50 | 55.00
Ours - MSRCR | Ours = MSRCR | 0 | |
Ours - MSRCR | Total | 10 | |
Ours - Literature_Zhai | Ours < Literature_Zhai | 0 | 0.00 | 0.00
Ours - Literature_Zhai | Ours > Literature_Zhai | 10 | 5.50 | 55.00
Ours - Literature_Zhai | Ours = Literature_Zhai | 0 | |
Ours - Literature_Zhai | Total | 10 | |
Ours - Literature_Wang | Ours < Literature_Wang | 0 | 0.00 | 0.00
Ours - Literature_Wang | Ours > Literature_Wang | 10 | 5.50 | 55.00
Ours - Literature_Wang | Ours = Literature_Wang | 0 | |
Ours - Literature_Wang | Total | 10 | |
TABLE 9

Wilcoxon signed rank test.

Statistic | Ours - SSR | Ours - MSR | Ours - MSRCR | Ours - Literature_Zhai | Ours - Literature_Wang
Z (based on negative ranks) | −2.803 | −2.803 | −2.803 | −2.803 | −2.803
Asymptotic significance (two-tailed) | 0.005 | 0.005 | 0.005 | 0.005 | 0.005
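The Wilcoxon comparison can likewise be reproduced with SciPy. The sketch below compares the IE column of our method against that of the literature (Wang et al., 2017) from Table 4 and recomputes the normal-approximation Z that SPSS reports.

```python
import math
from scipy.stats import wilcoxon

ours = [7.7812, 7.8505, 7.7768, 7.8113, 7.7944, 7.8458, 7.6741, 7.7403, 7.7761, 7.7995]
wang = [6.3645, 5.7044, 5.9694, 6.3292, 5.9828, 5.5448, 6.6910, 6.0132, 5.7033, 6.0896]

# All ten paired differences are positive, so the smaller signed-rank sum is 0.
stat, p = wilcoxon(ours, wang)
print(stat, p)  # statistic 0.0; exact two-sided p well below 0.01

# Normal-approximation Z statistic as reported by SPSS in Table 9:
n = len(ours)
w_minus = 0.0                                   # sum of negative ranks
mu = n * (n + 1) / 4                            # mean of W under H0
sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
z = (w_minus - mu) / sigma
print(round(z, 3))  # -2.803, matching Table 9
```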
From the data in Tables 3–5, it can be seen that the algorithm in this paper achieves a large improvement in SD, IE and AG and is significantly better than the other five algorithms. After the Friedman test, Tables 6 and 7 show that the asymptotic significance is less than 0.001 for all three evaluation metrics, so the null hypothesis is rejected: the differences in the data are statistically highly significant. After the Wilcoxon signed-rank test, Tables 8 and 9 show that the two-tailed asymptotic significance is less than 0.01 for all three evaluation metrics, so the null hypothesis is rejected and the method of this paper differs significantly from the other methods. This shows that the images enhanced by the algorithm in this paper have increased brightness, richer details, less distortion and better quality, verifying the effectiveness of the proposed algorithm.

Conclusion

To address the problems of poor image quality and loss of detail information in low-illumination image enhancement, this paper proposes a low-illumination image enhancement algorithm based on improved multi-scale Retinex and ABC optimization. The original image is duplicated into two layers. The main feature layer is processed by HE and WGIF to enhance brightness, restore color and eliminate noise while avoiding gradient-reversal artifacts. The compensation layer is processed by structure extraction from texture via relative total variation to estimate the irradiation component, combined with bilateral gamma correction and other methods to avoid halo artifacts. Finally, the Artificial Bee Colony algorithm is used to optimize the weights for weighted fusion. The experimental results verify the soundness of the proposed algorithm, which achieves better results in both subjective and objective evaluation than the other five methods compared.
TABLE 4

IE of low-illumination image enhancement with different algorithms.

Images | Original Image | SSR | MSR | MSRCR | Literature (Zhai et al., 2021) | Literature (Wang et al., 2017) | Method of This Paper
0 | 5.3467 | 6.8912 | 6.7853 | 6.6234 | 6.5506 | 6.3645 | 7.7812
1 | 5.5689 | 6.7260 | 6.5138 | 6.4478 | 6.4425 | 5.7044 | 7.8505
2 | 5.5679 | 6.4388 | 6.2581 | 6.2191 | 6.6514 | 5.9694 | 7.7768
3 | 5.9126 | 6.8552 | 6.6134 | 6.6453 | 6.7434 | 6.3292 | 7.8113
4 | 5.2654 | 6.3994 | 6.2402 | 6.2048 | 6.4494 | 5.9828 | 7.7944
5 | 5.1787 | 6.2986 | 6.0710 | 6.0089 | 6.5579 | 5.5448 | 7.8458
6 | 5.3815 | 7.0954 | 6.9754 | 6.9340 | 6.6517 | 6.6910 | 7.6741
7 | 4.7559 | 6.9159 | 6.5923 | 6.5707 | 6.5038 | 6.0132 | 7.7403
8 | 5.0932 | 6.5647 | 6.3507 | 6.2511 | 6.6169 | 5.7033 | 7.7761
9 | 4.9813 | 6.9424 | 6.7524 | 6.6036 | 6.4098 | 6.0896 | 7.7995