Literature DB >> 36155657

Multi-scale fusion framework via retinex and transmittance optimization for underwater image enhancement.

Tie Li, Tianfei Zhou.

Abstract

Low contrast, poor color saturation, and turbidity are common in underwater sensing-scene images obtained in highly turbid oceans. To address these problems, we propose an underwater image enhancement method that combines Retinex with a transmittance-optimized multi-scale fusion framework. First, the grey levels of the R, G, and B channels are quantized to enhance the image contrast. Second, we utilize Retinex color constancy to eliminate the negative effects of scene illumination and color distortion. Next, a dual-transmittance underwater imaging model is built to estimate the background light, the backscattering transmittance, and the direct-component transmittance, yielding defogged images through an inverse solution. Finally, the three input images and their corresponding weight maps are fused in a multi-scale framework to achieve high-quality, sharpened results. According to the experimental results and image quality evaluation indexes, the method combines multiple advantageous algorithms and efficiently improves the visual effect of the images.

Year:  2022        PMID: 36155657      PMCID: PMC9512202          DOI: 10.1371/journal.pone.0275107

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Underwater robots play an essential role in exploiting marine resources and studying biological resources around oceanic hydrothermal vents. In the environment perception of an underwater robot, the operator guides the manipulator to locate the target object and select the grasping attitude through underwater video and images. It is, however, difficult to acquire high-quality sensing-scene images through machine vision systems: the quality is severely restricted by complex underwater conditions such as light attenuation, suspended particles, and artificial lights. As a result, the underwater images we obtain often suffer from color distortion, blur, and low contrast, which dramatically affects the robot’s subsequent target recognition and detection tasks. Therefore, the primary purposes of clearing an underwater image are to correct the color of the degraded image, eliminate useless information, and improve the reliability of subsequent processing results. To recover the detailed information of underwater images, researchers have proposed various image sharpening methods as preprocessing techniques to restore explicit, natural underwater scenes, including underwater image enhancement, restoration based on physical models, and deep learning. Next, we introduce representative works of the three types of methods in chronological order.

Most underwater image enhancement algorithms process the image pixels directly. High-contrast images usually exhibit rich details and an extended dynamic range. The contrast limited adaptive histogram equalization (CLAHE) [1] algorithm can improve the contrast of underwater images, but it introduces noise. The purpose of Retinex theory is to remove the influence of illumination from an image to obtain the reflection attribute of the object. Yang et al. 
[2] decomposed the V channel via wavelet transform, then employed a soft-threshold denoising algorithm and locally adaptive tone mapping to address the high-frequency and low-frequency components, respectively. Hu et al. [3] further improved the parameters and optimized the performance of the classical multi-scale Retinex (MSR). Zhang et al. [4] applied a Gaussian low-pass filter to the L channel; they processed low-frequency components through an optimal equalization threshold strategy on a double-interval histogram and enhanced high-frequency components with an S-shaped function. Alternatively, Huang et al. [5] used a power-function enhancement algorithm on visible light to improve contrast, corrected the brightness of the infrared image, and fused the two versions with the Laplace transform. Zhuang et al. [6] established a posterior formulation for underwater image enhancement by imposing multi-step gradient priors on reflectance and illuminance: a piecewise approximation of reflectance is modelled with the l1 norm, while the l2 norm enforces spatial smoothness over the illumination, and clear images are obtained through convergence analysis and optimization.

Jaffe-McGlamery’s underwater optical imaging model [7] has been widely applied in underwater image restoration algorithms, as shown in Fig 1. Among these works, He et al. [8] proposed the dark channel prior (DCP) theory, built upon extensive statistical research, which indicates that in haze-free images at least one of the red, green, and blue channels has very low intensity. Consequently, the dark channel prior is widely used to estimate the background light and transmittance of hazy images. However, since the attenuation characteristics of underwater light differ from those on land, this algorithm does not give good results when applied directly to underwater images. Based on the DCP, Peng et al. 
[9] combined image blurriness and red-light absorption differences to estimate the scene depth map and estimated the background light from the blurred area; this method is more suitable for underwater scenes. Drews et al. [10] proposed the underwater dark channel prior, which obtains relatively accurate transmittance based on the assumption that most visual information is heavily altered in the blue and green channels. Emberton et al. [11] divided input images into three categories (bluish, greenish, and bluish-green) and restored each according to the color-deviation characteristics of its region. Besides, Chang et al. [12] revised the SDCP scheme to make DCP suitable for underwater environments, and adopted a point spread function deconvolution strategy to resolve blurred edges.
Fig 1

Underwater optical imaging model.

With the vigorous development of neural networks, deep learning has gradually been applied to image processing in recent years. Jamadandi et al. [13] utilized the VGG19 network as the encoder and a mirrored wavelet non-pooling layer as the decoder to reconstruct the image model. Lin et al. [14] adopted deep learning with the rectified linear unit (ReLU) activation function to compensate for the linear model through added nonlinear factors and effective feature extraction. Islam et al. [15] proposed FUnIE-GAN based on the U-Net framework; to obtain rich feature information, residual connections are added to the generator. To reduce the semantic gap between low-level and high-level features, Han et al. [16] added residual path blocks between encoders and adopted a deep supervision mechanism to improve gradient propagation. Lin et al. [17] proposed a multi-scale deformable convolutional network composed of an encoder-decoder, which acquires abundant receptive-field feature information at different scales, and optimized the model through pixel loss and perceptual loss. Li et al. [18] introduced a physical model on top of deep learning and designed a medium-transmission-guided decoder network to enhance the network’s response to quality-degraded regions.

The above methods selectively improve some issues, such as atomization and color imbalance, in underwater images. However, detail loss, edge contours, and texture blur still need to be handled comprehensively. Some existing fusion methods ignore the selective absorption of water and non-uniform illumination; by simply combining pure image enhancement algorithms without fully exploiting the complementary strengths of each, they produce underwater images with local over-enhancement or color distortion. 
The comprehensiveness, robustness, and accuracy of these methods are not ideal, which seriously limits their practical application. Considering the degree of image retention, visual effect, and computational complexity, we propose a novel underwater optical image enhancement algorithm combining Retinex and transmittance optimization. A clearing method based on image fusion is designed to fuse the dominant information of the defogged, contrast-enhanced, and color-corrected images. The result is a fog-free, contrast-enhanced, and color-balanced underwater image. Section 1 analyzes the research status of underwater image processing, the shortcomings of existing algorithms, and the idea of this paper. Section 2 introduces the related background and the main content of the method. Section 3 presents experiments on the proposed algorithm; a detailed and objective evaluation is obtained by comparison with existing novel algorithms. The last section concludes the research.

Methodology

Generally speaking, our main contributions are summarized as follows. (1) We design three images as the inputs of the fusion framework: i. histogram quantization is performed on the middle grey area of each channel to improve contrast; ii. the second image adopts dynamic adaptive compensation, replacing the linear transformation in the color-constancy MSR algorithm, to correct the image color; iii. the third image innovatively applies the dual-transmittance imaging model to the underwater image fusion framework and integrates the red channel prior into the total transmittance estimation, which complements the transmittance information and removes image turbidity. (2) The proposed method extracts weight maps from the preprocessed clear input images; after obtaining the features and necessary information of the same scene and target, the advantageous algorithms are fused to emphasize image details. (3) We comprehensively verify the advantages of the algorithm through color correction testing, qualitative and quantitative comparison, complexity analysis, ablation experiments, and application analysis in real underwater scenes. As shown in Fig 2, our study modifies and improves traditional algorithms, resulting in a comprehensive increase in performance for underwater image enhancement.
Fig 2

The three input images including contrast enhancement, color correction, and defogging are decomposed into five-layer Laplacian pyramids, respectively, and its normalized weight map is decomposed into five-layer Gaussian pyramids.

(To simplify the flowchart, we only show the defogging weight map and its pyramid level decomposition in the figure). Finally, multi-scale fusion is performed to obtain the output image.

Design input images

Quantify histogram to enhance contrast

We adjust the image contrast by quantizing the color-channel histograms to obtain the first input image. According to the grey-value distribution of the pixels, each of the three color channels is divided into dark, middle-grey, and light areas, of which the middle-grey area is quantified. Positive or negative saturation is determined by the proportion of pixels at the extreme grey values: positive saturation means that the pixels at grey value 255 exceed 1% of the total, while negative saturation means that the pixels at grey value 0 exceed 1% of the total. The boundary values of the middle-grey area differ according to the saturation direction. Fig 3 illustrates the upper and lower boundaries of the middle-grey area on the normalized histograms; orange and green distinguish the dark, bright, and middle-grey regions, and the boundaries V_low and V_high are marked by the bold black lines.
Fig 3

Upper and lower boundaries of middle gray area.

(a) Unsaturation; (b) Negative saturation; (c) Positive saturation; (d) Negative and positive saturation.

Through the above analysis, the grey values in [V_low, V_high] are linearly mapped, as shown in Eq (1). Fig 4 shows the initial images, the contrast-enhancement results, and the R, G, and B channel histograms. The contrast is visibly improved, with a wider grey-level distribution than in the initial images.
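As an illustration, the linear mapping of the middle grey area can be sketched as follows. This is a minimal sketch, not the paper's exact Eq (1): the function name `stretch_channel`, the `sat` parameter, and the CDF-based boundary search are our own illustrative choices for locating V_low and V_high under the 1% saturation rule described above.

```python
import numpy as np

def stretch_channel(channel, sat=0.01):
    # Choose v_low / v_high so that roughly `sat` (1%) of the pixels fall
    # below / above them, mirroring the saturation rule in the text, then
    # linearly map the middle grey range [v_low, v_high] onto [0, 255].
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist) / channel.size
    v_low = int(np.searchsorted(cdf, sat))
    v_high = int(np.searchsorted(cdf, 1.0 - sat))
    out = (channel.astype(np.float64) - v_low) / max(v_high - v_low, 1)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)

# Toy low-contrast channel clustered in the middle grey area
ch = np.full((8, 8), 100, dtype=np.uint8)
ch[0, 0], ch[7, 7] = 80, 120
stretched = stretch_channel(ch)  # 80 -> 0, 120 -> 255
```

In practice the same stretch would be applied to each of the R, G, and B channels independently.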
Fig 4

Contrast image and RGB three-channel histogram.

(a) Original; (b) Contrast enhancement; (c) Original histogram; (d) Histogram after contrast enhancement.

Color correction based on Retinex theory

Underwater images often appear in blue and green tones. To further restore image color, the second input image is generated by an improved multi-scale Retinex with color preservation (IMSRCP) based on Retinex theory. According to color constancy, an image is represented as the product of the scene illumination and the object reflection components; the reflection components are obtained by removing the influence of the scene illumination component. We pre-correct and homogenize the color to reduce the blue-green bias. After that, the R, G, B, and L channels are corrected through MSR [3] to enhance the colors under the premise of high fidelity, where L is the mean of channels R, G, and B; i ∈ {R, G, B, L}, N denotes the number of scales, ω_n represents the weight of each scale, and F(x, y, σ) is the Gaussian surround function with scale σ, subject to ∬F(x, y) dx dy = 1. In the standard MSR algorithm, the gain and offset parameters cannot adapt to the specific conversions of different types of images. Therefore, we adopt dynamic adaptive stretch compensation to tune the parameters dynamically, given by Eq (3). Besides, our method retains the necessary original color and enhances color saturation through an anti-grey-world step in Eq (4), where c ∈ {R, G, B}, μ is the dynamic range, I_mean and I_std refer to the mean and standard deviation of the MSR-processed image, Ī denotes the mean value of the input images, I_L refers to the light-intensity channel of the improved MSR (IMSR) processed image, and ρ is the color compensation coefficient. Fig 5 compares the experimental results of the Retinex-based algorithms, showing that IMSRCP is less competent at removing turbidity but yields more distinct color without over-exposure.
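A minimal sketch of the core MSR step on one channel follows. The scale values (15, 80, 250) are common MSR defaults rather than the paper's tuned parameters, and the separable Gaussian blur and helper names are our own; the dynamic compensation of Eqs (3)-(4) is not reproduced here.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian surround F(x, y, sigma); edges handled by replication
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    tmp = np.pad(img, ((pad, pad), (0, 0)), mode="edge")
    tmp = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)
    tmp = np.pad(tmp, ((0, 0), (pad, pad)), mode="edge")
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, tmp)

def msr(channel, sigmas=(15, 80, 250), weights=None):
    # Multi-scale Retinex: weighted sum over scales of
    # log(I) - log(F_sigma * I); output is in the log domain and would
    # typically be rescaled for display.
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)
    ch = channel.astype(np.float64) + 1.0  # avoid log(0)
    out = np.zeros_like(ch)
    for w, s in zip(weights, sigmas):
        out += w * (np.log(ch) - np.log(blur(ch, s) + 1e-9))
    return out

# A uniformly lit surface has no reflectance variation: MSR output ~ 0
const = np.full((16, 16), 120.0)
result = msr(const, sigmas=(2,))
```

Applying `msr` to each of the R, G, B, and L channels, then adding the gain/offset compensation of Eq (3), would correspond to the IMSR stage described above.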
Fig 5

Different enhancement effects of the algorithms based on Retinex.

(a) Original image; (b) MSR; (c) IMSRCP.

Dehazing algorithm based on imaging model

Contrast- and color-enhanced images are obtained as the first and second input images. The turbidity phenomenon is then addressed by the third input image. Based on the different attenuation coefficients of the direct and backscattering components in the underwater imaging model [19], the algorithm redefines the underwater imaging framework, given by Eq (5). Compared with traditional models, we add a variable parameter and improve the accuracy of the underwater image restoration algorithm. Here x is the pixel, I and J represent the blurred image captured by the camera and the clear image to be restored, and A denotes the background light. Besides, we define the direct-component and backscattered-component transmittances t_d(x) and t_b(x) in Eqs (6) and (7), where σ_d and σ_b are the direct and backscatter attenuation coefficients, respectively, related to the propagation distance and light wavelength, and z(x) is the distance from the pixel to the camera. We estimate the background light by a hierarchical quadtree search to prevent the background value from being inflated by artificial light sources and bright white objects: we divide the image into four areas, select as the target the block with the smallest standard deviation and largest mean, further divide the target region into four smaller ones, and repeat until a size threshold is reached. In the finally selected region, we choose the point with the minimal Euclidean distance to a pure white pixel as the background light value. The weak correlation between the backscattering attenuation coefficient and the light wavelength is ignored in our work, which means the backscattering transmittances of the R, G, and B channels are treated as identical. To correct the color deviation caused by the low R channel in DCP [7], we apply the red dark channel prior [20]: J_dark^red(x) ≈ 0. 
We then divide both sides of the dual-transmittance underwater imaging model by the background light, and substitute the red dark channel prior into Eq (8) to obtain the backscattering transmittance. When capturing underwater images, the differences in object distances are small enough to have little influence on the direct-component transmittance, so they are ignored in the proposed algorithm. From the estimated backscattering transmittance and the relation between the two transmittances in Eqs (6) and (7), we can further estimate the direct transmittance at every pixel. The restored image is obtained by substituting these estimates into Eq (5), giving Eq (12), where t0 is a threshold that prevents overly small transmittance, set to 0.3 empirically [21]. Fig 6(b) shows that our work estimates the location of the background light accurately, and Fig 6(c) depicts the backscattering transmittance of the image; the more serious the color deviation, the more pronounced the color change after restoration. From Fig 6(d), 6(e) and 6(f), we can verify that this method conforms to the principle of underwater light propagation, in which red light decays the fastest, followed by blue and green light. Fig 6(g)-6(j) show that the transmittance maps accurately reflect the depth-of-field information of the image.
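The quadtree background-light search and the model inversion can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the `mean - std` score compactly combines the "smallest standard deviation, largest mean" criterion, and `min_size`, `estimate_background_light`, and `restore` are hypothetical names; the transmittance maps `t_b` and `t_d` are assumed to be estimated separately as in Eqs (6)-(11).

```python
import numpy as np

def estimate_background_light(img, min_size=8):
    # Hierarchical quadtree search for the background light A: repeatedly
    # keep the quadrant with the best (mean - std) score, then pick the
    # pixel closest to pure white (255, 255, 255) in the final block.
    region = img.astype(np.float64)
    while min(region.shape[:2]) > min_size:
        h, w = region.shape[0] // 2, region.shape[1] // 2
        quads = [region[:h, :w], region[:h, w:], region[h:, :w], region[h:, w:]]
        scores = [q.mean() - q.std() for q in quads]
        region = quads[int(np.argmax(scores))]
    flat = region.reshape(-1, region.shape[-1])
    dist = np.linalg.norm(flat - 255.0, axis=1)
    return flat[int(np.argmin(dist))]

def restore(img, A, t_b, t_d, t0=0.3):
    # Invert I = J * t_d + A * (1 - t_b) per pixel; t0 floors the direct
    # transmittance as in Eq (12) to avoid division by tiny values.
    J = (img.astype(np.float64) - A * (1.0 - t_b)) / np.maximum(t_d, t0)
    return np.clip(J, 0, 255)

# Synthetic scene: dark background with one bright uniform quadrant
img = np.full((32, 32, 3), 50, dtype=np.uint8)
img[:16, :16] = 200
A = estimate_background_light(img)
```

With `t_b = t_d = 1` (no scattering, no attenuation) the inversion returns the input unchanged, which is a quick sanity check on the model.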
Fig 6

Image transmission.

(a) Original; (b) Background light; (c) Backscatter component transmission; (d) R channel direct component transmission; (e) G channel direct component transmission; (f) B channel direct component transmission; (g) Thermodynamic diagram of figure (c); (h) Thermodynamic diagram of figure (d); (i) Thermodynamic diagram of figure (e); (j) Thermodynamic diagram of figure (f).

Design weight diagrams

With the three input images to be fused, we design three weight maps, representing basic features, necessary information, and high-weight pixels: the luminance, chromatic, and saliency maps.

(1) Luminance map W_L. The luminance map assigns high values to high-visibility areas, yielding results with higher visibility; it reflects the image’s brightness and effectively measures a lack of brightness. For each input image, we compute the average of the three channels and take the standard deviation of the R, G, and B channels around that average as the luminance map, where R, G, and B are the three-channel pixel values and L is their per-pixel average.

(2) Chromatic map W_C. The chromatic map compensates for the loss of color and reflects the color purity and overall quality of the image. We convert the image to HSV space and compare the saturation of each pixel with the maximum saturation, where S(x) denotes the saturation of pixel x, S_max represents the maximum saturation (taken as 1 in our method), and σ controls the sensitivity to the deviation; a value of 0.3 proves effective.

(3) Saliency map W_S. The saliency map measures the deviation between the brightness of a pixel and the local average of its surroundings to recover more details, where Ī represents the average pixel value and I_lp is the luminance channel obtained after low-pass filtering.

(4) Normalized map W̄_k. The above weight maps are normalized to obtain the corresponding standardized weight maps, where k is the serial number of the input image, W̄_k is the normalized weight map of the k-th input image, and K = 3.

Multi-scale decomposition and fusion

To obtain multi-scale features, Gaussian and Laplacian pyramids are applied to decompose the input images I_k and the normalized weight maps W̄_k. An image pyramid is a technique for extracting multi-scale features of an image: by repeatedly sampling the image, a group of characteristic images at different scales, with reduced resolution and size, is obtained. (1) Constructing the Gaussian pyramid: each level is acquired by smoothing the brightness with a low-pass Gaussian kernel and then down-sampling, which keeps every other row and column of the smoothed image. (2) Constructing the Laplacian pyramid: each level is obtained by subtracting from the Gaussian level of the same layer the image interpolated (up-sampled) from the next coarser layer. Finally, a multi-scale fusion method reconstructs all pyramid layers to produce a clearer output image, where n and k index the pyramid layer and the input image, the total number of pyramid layers is N = 5, G_n denotes the n-th Gaussian level of the normalized weight map, and LP_n denotes the n-th Laplacian level.
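The five-level pyramid fusion can be sketched as follows. This is a simplified sketch: 2x2 averaging and nearest-neighbour expansion stand in for the Gaussian smoothing and interpolation steps (the Laplacian residuals make the collapse exact regardless), and it assumes image dimensions divisible by 2^(levels-1).

```python
import numpy as np

def downsample(img):
    # 2x2 average then decimate: a stand-in for Gaussian blur + subsample
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img, shape):
    # Nearest-neighbour expansion back to `shape`
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def gaussian_pyramid(img, levels=5):
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels=5):
    # Each level is the Gaussian level minus the upsampled coarser level
    g = gaussian_pyramid(img, levels)
    pyr = [g[i] - upsample(g[i + 1], g[i].shape) for i in range(levels - 1)]
    pyr.append(g[-1])  # coarsest Gaussian level closes the pyramid
    return pyr

def fuse(inputs, weights, levels=5):
    # Blend Laplacian pyramids of the inputs with Gaussian pyramids of the
    # normalized weights, level by level, then collapse fine-to-coarse
    fused = None
    for img, w in zip(inputs, weights):
        lp = laplacian_pyramid(img, levels)
        gw = gaussian_pyramid(w, levels)
        layers = [l * g for l, g in zip(lp, gw)]
        fused = layers if fused is None else [f + l for f, l in zip(fused, layers)]
    out = fused[-1]
    for layer in reversed(fused[:-1]):
        out = upsample(out, layer.shape) + layer
    return out

img = np.arange(1024, dtype=np.float64).reshape(32, 32)
ones = np.ones((32, 32))
reconstructed = fuse([img], [ones], levels=5)  # single input, weight 1
```

A single input with an all-ones weight map collapses back to the original image exactly, which verifies that the decomposition and reconstruction are consistent before fusing the three real inputs.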

Experimental results and analysis

We compare several existing algorithms from different directions to verify the effectiveness and robustness of our method in different water environments: the underwater dark channel prior (UDCP) [10]; the double red-dark channel prior (DRDCP) of Wang et al. [21] for the dual-transmittance imaging model; the Fusion algorithm proposed by Ancuti et al. [22]; UNTV, based on a red channel prior guided variational framework [23]; the multilevel feature fusion-based conditional GAN (MLFcGAN) [24]; the fast underwater image enhancement model for improved visual perception (FUnIE-GAN) [15]; the underwater image enhancement convolutional neural network (UWCNN) [25]; and a deep underwater image enhancement network (Water-Net) [26].

Color correction test

To verify the color-restoration accuracy of the proposed algorithm, the above algorithms were subjected to underwater color card calibration experiments and compared against the standard color card. Fig 7(a) and 7(b) present the distorted color card image taken underwater and the standard card.
Fig 7

Color correction test.

(a) Degraded color card; (b) Standard card; (c) UDCP; (d) DRDCP; (e) Fusion; (f) UNTV; (g) MLFcGAN; (h) FUnIE-GAN; (i) UWCNN; (j) Water-Net; (k) Ours.

As shown in Fig 7, the color recovery after UDCP processing is good, but the green and purple color blocks cannot be distinguished. Most of the color blocks processed by DRDCP fail to recover the correct color depth. After the Fusion method, the image has a darker overall tone; the red and blue blocks are more severely distorted, and the discrimination between different shades of green is low. The UNTV result is seriously distorted. The MLFcGAN-processed color card is fuzzy, which loses color information. The color contrast is not effectively improved by FUnIE-GAN even though its color blocks are close to the actual card. Blue turbidity appears with UWCNN, making it impossible to distinguish the various color systems. The card processed by Water-Net shows indistinct color information. Compared with the above algorithms, the colors produced by ours are very close to the standard card, and the contrast is significantly improved.

Qualitative comparison

To further assess the restoration of texture details, the Canny operator is employed to verify the algorithm: the more edges and the sharper the texture, the higher the information quality of the image, as shown in Fig 8.
Fig 8

Canny local texture comparison images.

(a) Degraded image; (b) Degraded canny detection image; (c) Restore image; (d) Recovery canny detection image.

Observing Fig 8, we can see that the texture information in the red frame is improved, indicating that the algorithm can effectively restore texture details, remove the fog, and improve image visibility. In addition, we test images with different degrees of degradation obtained in complex underwater environments to further verify the algorithm’s effectiveness, as shown in Fig 9.
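A quick way to quantify the "more edges means more texture" comparison above is an edge-density score. The sketch below uses a simple central-difference gradient magnitude rather than the full Canny operator (no smoothing, non-maximum suppression, or hysteresis); the name `edge_density` and threshold are illustrative.

```python
import numpy as np

def edge_density(gray, thresh=30.0):
    # Fraction of pixels whose central-difference gradient magnitude
    # exceeds `thresh`: a crude proxy for Canny edge counts, so a restored
    # image should score higher than its degraded counterpart.
    g = gray.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]
    gy[1:-1, :] = g[2:, :] - g[:-2, :]
    mag = np.hypot(gx, gy)
    return float((mag > thresh).mean())

flat = np.zeros((8, 8))          # no texture: density 0
step = np.zeros((8, 8))
step[:, 4:] = 255.0              # one vertical edge
```

Comparing `edge_density` before and after enhancement mirrors the qualitative Canny comparison in Fig 8.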
Fig 9

Experimental results in complex underwater environments.

(a) Degraded image; (b) UDCP; (c) DRDCP; (d) Fusion; (e) UNTV; (f) MLFcGAN; (g) FUnIE-GAN; (h) UWCNN; (i) Water-Net; (j) Ours.

The degraded images we selected come from the following three datasets. UIEBD [26]: collected from Google, YouTube, and related papers, containing 950 degraded underwater images. EUVP [15]: seven cameras were used to capture images rich in underwater scenes, including paired and unpaired sets of 20K underwater images (poor and good quality), with a total of 515 samples. RUIE [27]: a system based on underwater multi-view imaging used to capture marine life in the Yellow Sea, covering 4230 images. We selected 12 representative underwater images from these three datasets. UIEBD is rich in different types of underwater scenes, so we selected seven images from it; due to the small amount of usable EUVP data, three representative images were adopted; and although the RUIE dataset contains many images, we selected only two, because most of its images show similar kinds of degradation in a similar region. In Fig 9, images 1-7 are from UIEBD, 8-10 from EUVP, and 11-12 from RUIE. Overall, UDCP and Fusion perform poorly on every dataset and deepen the tone of the original degraded image. The color correction of DRDCP is better, but brightness is not improved. UNTV blurs the detailed information and adds varying degrees of noise. MLFcGAN does not correctly remove the atomization of the image, as shown in images 2, 4, and 12. FUnIE-GAN performs better on EUVP but improves visual effects poorly on UIEBD and RUIE. Similarly, the images generated by UWCNN are overall dark blue with serious atomization, resulting in low discernibility between color blocks. Water-Net outputs various degrees of red artefacts and fails to restore the image’s clarity effectively. 
However, our method could remove the turbidity and correct the color information naturally with increased contrast.

Quantitative comparison

To further quantitatively verify the performance of our algorithm, we adopt the underwater color image quality evaluation (UCIQE) [28] and underwater image quality measure (UIQM) [29] indexes. UCIQE takes chroma, saturation, and brightness in a linear combination; its value lies in the range [0, 1] and is directly proportional to underwater image quality. Table 1 shows the UCIQE results for the underwater image clarification algorithms in Fig 9, with the best result in each row highlighted.
Table 1

Quantitative evaluation results of UCIQE index.

Image  UDCP     DRDCP   Fusion  UNTV    MLFcGAN  FUnIE-GAN  UWCNN   Water-Net  Ours
1      0.5330   0.6479  0.5245  0.6777  0.5913   0.6238     0.4898  0.5605     0.6829*
2      0.6517   0.5141  0.5154  0.6572  0.4597   0.5838     0.5444  0.4706     0.7038*
3      0.5562   0.5775  0.5163  0.6253  0.4987   0.5438     0.5345  0.4947     0.6268*
4      0.5315   0.6037  0.4624  0.6547  0.5475   0.5870     0.4599  0.5362     0.6706*
5      0.6070   0.6012  0.5329  0.6309  0.5645   0.5825     0.5759  0.5300     0.6336*
6      0.5746   0.6090  0.5154  0.6554  0.5647   0.5692     0.5256  0.5922     0.7079*
7      0.5310   0.5819  0.5337  0.6577  0.5781   0.5957     0.4969  0.5864     0.7034*
8      0.6687*  0.6354  0.5636  0.6085  0.6033   0.6326     0.5754  0.5966     0.6545
9      0.6316   0.6300  0.5802  0.6126  0.6007   0.6197     0.6181  0.6036     0.6632*
10     0.5272   0.5504  0.4875  0.6105  0.5168   0.5174     0.5130  0.5078     0.6125*
11     0.5431   0.5109  0.4437  0.5998  0.4418   0.5164     0.4340  0.4457     0.6519*
12     0.5579   0.6012  0.5500  0.6121  0.5887   0.5971     0.5123  0.5660     0.6416*
(* best value in each row)
The UIQM index evaluates image quality based on color, sharpness, and contrast; its value is directly proportional to image quality, as shown in Table 2, with the best result in each row again highlighted.
Table 2

Quantitative evaluation results of UIQM index.

Image  UDCP     DRDCP   Fusion  UNTV    MLFcGAN  FUnIE-GAN  UWCNN   Water-Net  Ours
1      3.1716   3.5773  4.5275  5.0038  4.8088   4.7825     3.4649  4.7716     8.1785*
2      4.5205   5.5292  5.2383  3.6837  5.0588   5.5204     5.2492  4.5950     7.1215*
3      4.2379   5.3473  4.5183  2.8589  4.5040   4.9275     3.5251  4.3934     5.3745*
4      5.0529   3.8639  5.2069  4.7492  5.1663   5.2295     3.9852  5.0823     5.6102*
5      1.0546   3.9628  3.9284  3.4201  4.0649   4.7369     1.8711  4.5993     6.8025*
6      4.2790   4.9226  5.3751  3.6184  5.2401   5.1463     4.0202  5.2799     9.0879*
7      3.2834   3.9030  4.0136  4.6684  3.8025   3.8648     2.2851  4.1149     5.2714*
8      5.0762   5.2659  5.2836  5.3697  4.5379   5.6232     4.6957  5.0037     14.864*
9      4.3217   4.0468  5.0562  4.0824  3.8712   5.2403     3.5320  5.1132     6.9366*
10     5.8178*  4.9864  4.6358  4.2858  3.3925   4.5989     3.8341  4.4569     5.2270
11     1.6884   5.6068  4.4474  4.2764  4.7479   5.4493     2.8042  4.4433     6.0664*
12     1.9403   3.6799  4.5301  3.7105  4.5395   4.7260     2.5238  4.7219     6.7804*
(* best value in each row)
More generally, when evaluated with UCIQE and UIQM, the proposed method performs better than others in various underwater surroundings with remarkable robustness. Then, we test the UCIQE and UIQM values of the datasets EUVP, RUIE, and UIEBD, and show their average values in Tables 3 and 4 to prove the universality and robustness of our method.
Table 3

UCIQE of evaluation results on test datasets.

Dataset  UDCP    DRDCP   Fusion  UNTV    MLFcGAN  FUnIE-GAN  UWCNN   Water-Net  Ours
UIEBD    0.5977  0.5722  0.5210  0.6304  0.5319   0.5590     0.5014  0.5328     0.6710*
EUVP     0.6030  0.6122  0.5383  0.6203  0.5564   0.5878     0.5286  0.5707     0.6635*
RUIE     0.5462  0.5307  0.4470  0.6071  0.4505   0.5044     0.4386  0.4469     0.6552*
(* best value in each row)
Table 4

UIQM of evaluation results on test datasets.

Dataset  UDCP    DRDCP   Fusion  UNTV    MLFcGAN  FUnIE-GAN  UWCNN   Water-Net  Ours
UIEBD    5.8751  4.5949  4.4241  3.4642  4.5593   4.5285     3.2181  4.4847     7.1265*
EUVP     4.3669  4.5201  4.4614  4.5359  3.3719   4.9082     3.2465  4.6092     7.1927*
RUIE     2.7712  4.8552  4.0543  4.2525  4.1490   4.7124     2.2367  4.1881     7.2856*
(* best value in each row)
According to Tables 3 and 4, our algorithm performs better than the others on all three datasets and is suitable for various complex underwater environments. To show the advantages of the algorithm more intuitively, the tables above are drawn as scatter charts, as shown in Fig 10. The scatter charts show that ours is not only the best but also the most stable: when other comparison algorithms perform well on the UIEBD and EUVP datasets, ours still far exceeds their index values, and when the UCIQE and UIQM values of the comparison algorithms drop on the RUIE dataset, ours maintains the best results.
Fig 10

UCIQE and UIQM of evaluation results on test datasets.

Complexity analysis

We compare the average runtime of our code on three common datasets with that of non-deep-learning and deep-learning methods to analyze the complexity of the algorithm (CPU: Intel i7-6700HQ 2.60 GHz; GPU: NVIDIA RTX 2070 8 GB). The non-deep-learning algorithms are UDCP, DRDCP, Fusion, and UNTV; the deep-learning algorithms are MLFcGAN, FUnIEGAN, UWCNN, and WaterNet. For the deep-learning methods, only the test time is reported, excluding training time. The average processing time of each method is shown in Table 5.
Table 5

Average running time (seconds).

Method | UDCP | DRDCP | Fusion | UNTV | MLFcGAN | FUnIEGAN | UWCNN | WaterNet | Ours
Average | 0.181 | 0.237 | 1.172 | 2.428 | 1.424 | 1.826 | 1.701 | 8.301 | 1.351
We note that, among the non-deep-learning methods, ours runs slower than UDCP and DRDCP. The reason is that we handle the contrast, color, and dehazing of the underwater image separately, and the additional enhancement steps increase the runtime. However, our running speed is close to that of Fusion and better than UNTV, which basically meets the requirements of underwater real-time performance. Deep-learning algorithms not only take one to two days to train but also consume considerable memory, and their test time is no better than that of the traditional algorithms. According to the various subjective and objective evaluation metrics, our method outperforms the other single restoration methods in complex underwater environments. Therefore, our work retains strong advantages when speed and enhancement effect are considered together.
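The per-image averages above can be reproduced with a simple wall-clock harness. Here `enhance` stands in for any of the compared pipelines; the warm-up pass is a choice of ours (not necessarily the paper's protocol) that keeps one-off initialisation, such as model loading, out of the average.

```python
import time

def average_runtime(enhance, images, warmup=1):
    # warm-up: run a few images first so lazy initialisation is excluded
    for img in images[:warmup]:
        enhance(img)
    start = time.perf_counter()
    for img in images:
        enhance(img)
    return (time.perf_counter() - start) / len(images)
```

Averaged over a whole dataset, this yields a per-image time comparable to the figures in Table 5.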

Ablation study

In this sub-section, the input image of each module component is compared with the fused clear image in a detail-recovery experiment, as shown in Fig 11; the upper-right corner of each image magnifies the locally red-marked area. The analysis shows that the three input images effectively accomplish contrast enhancement, color correction, and dehazing, respectively. However, excessive color correction in the second input image leads to serious over-exposure, and the third input image eliminates dense haze but distorts brightness and color. In contrast, our fused result is more general than any of the three input images, each of which lacks optimization capability on its own.
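The multi-scale fusion that blends these inputs with their weight maps can be sketched for a single-channel image as follows; the pyramid depth, blur settings, and helper names are our own illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def down(img):
    # blur, then drop every other row/column
    return gaussian_filter(img, 1.0)[::2, ::2]

def up(img, shape):
    # resize back to the finer level, then smooth
    factors = (shape[0] / img.shape[0], shape[1] / img.shape[1])
    return gaussian_filter(zoom(img, factors, order=1), 1.0)

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(down(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - up(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    lp.append(gp[-1])  # coarsest level stores the residual image itself
    return lp

def multiscale_fuse(inputs, weights, levels=3):
    """Blend Laplacian pyramids of the inputs with Gaussian weight pyramids."""
    wsum = np.sum(weights, axis=0) + 1e-12   # per-pixel weight normalisation
    wps = [gaussian_pyramid(w / wsum, levels) for w in weights]
    lps = [laplacian_pyramid(im, levels) for im in inputs]
    fused = [sum(wp[l] * lp[l] for wp, lp in zip(wps, lps))
             for l in range(levels)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):      # collapse coarse-to-fine
        out = up(out, fused[l].shape) + fused[l]
    return out
```

Blending per pyramid level rather than per pixel is what avoids the halo artifacts that naive weighted averaging would introduce at sharp weight-map transitions.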
Fig 11

Comparison of details of each input component.

(a) Degraded image; (b) First input image; (c) Second input image; (d) Third input image; (e) Ours.

After that, we verify that each input component is indispensable using the PSNR, UCIQE, and UIQM evaluation metrics. A denotes our method without the contrast enhancement component, B our method without the color correction component, C our method without the defogging component, and D the complete framework. PSNR is a full-reference metric that must be computed against a ground truth; images with numerically higher PSNR values are generally considered to be of better quality. Because the RUIE dataset contains no ground-truth images, only the mean PSNR values on the UIEBD and EUVP datasets are compared. UCIQE and UIQM are no-reference indicators, so we compare the values of each variant on all three datasets, as shown in Fig 12. We note that the PSNR, UCIQE, and UIQM values of the complete framework are higher than those of every ablated variant, indicating that each input module plays an important role.
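PSNR itself is straightforward to compute; a minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio of an image against a ground-truth reference."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```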
Fig 12

PSNR, UCIQE and UIQM value of ablation experiment.

(a) PSNR; (b) UCIQE; (c) UIQM.


Application test

In this sub-section, we use human target detection and saliency target detection based on the YOLOv5 prediction algorithm to verify the application effect of our method. Human detection is evaluated by confidence: the higher the value, the more effective the algorithm in application. Saliency detection aims to identify the significant parts of an underwater image that contain helpful information; the more detailed information an image retains, the clearer it is, as shown in Fig 13.
Fig 13

Human and saliency target detection.

We further evaluate the performance of the algorithm through SURF [30] feature point matching and corner detection [31]. The SURF test compares the number of matched feature points before and after processing to judge the enhancement effect: generally speaking, the better the processing effect, the more feature points are matched. For corner detection, gray-scale images produce fewer false and missed detections than the original color images, so the images in the experiments are converted to gray-scale, where the number of detected corners is directly proportional to image quality. The experimental results are shown in Fig 14, tagged with the number of matched points and corners in the top left-hand corner.
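A corner count of this kind can be sketched with a Harris-style response in NumPy/SciPy. This is an illustrative stand-in rather than the exact detector of [31], and the window size and thresholds are our own assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_corner_count(gray, k=0.04, win=5, thresh_rel=0.01):
    """Count pixels whose Harris response exceeds a fraction of the maximum."""
    g = gray.astype(float)
    Iy, Ix = np.gradient(g)
    # structure tensor entries, averaged over a local window
    Sxx = uniform_filter(Ix * Ix, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxy = uniform_filter(Ix * Iy, win)
    # Harris response: large where both eigenvalues are large (a corner)
    R = (Sxx * Syy - Sxy ** 2) - k * (Sxx + Syy) ** 2
    return int((R > thresh_rel * R.max()).sum())
```

On a well-enhanced image the gradients are stronger and less noisy, so the count rises, which is the behavior the experiment relies on.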
Fig 14

SURF matching and corner detection.

(a) Degraded image 1, SURF and corner; (b) Corrected image 1, SURF and corner; (c) Degraded image 2, SURF and corner; (d) Corrected image 2, SURF and corner.

Experimental results show that the proposed algorithm matches more feature points, and the number of detected corners increases significantly. The confidence values of the degraded underwater images are too low to be displayed. In the saliency maps, the target area is white and the background is black; our algorithm accurately recognizes the region of interest, whereas nothing is recognized in the original image. Finally, to demonstrate the practical utility of our work, the method is applied to real underwater video. The original video is decomposed into frames at a rate of 30 frames per second, each frame is fused and enhanced, and the corrected video is then re-synthesized. As shown in Fig 15, two original frames from each of the three underwater videos are extracted for experimental comparison. The real underwater videos and the enhanced videos have been uploaded to Google Drive.
Fig 15

Real underwater video test.

(a) Image from video 1; (b) Image from video 2; (c) Image from video 3.

The experimental results show that the enhanced underwater video is clearer and conveys richer environmental information, which has practical application value.
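The frame-wise video pipeline described above can be sketched as follows; `read_frame`, `write_frame`, and `enhance_frame` are hypothetical callables standing in for the video decoder, the encoder, and the full fusion pipeline, respectively.

```python
def process_video(read_frame, write_frame, enhance_frame):
    """Pull frames until the reader returns None, enhance each, write it out."""
    count = 0
    while True:
        frame = read_frame()
        if frame is None:  # end of stream
            break
        write_frame(enhance_frame(frame))
        count += 1
    return count  # number of frames processed
```

With OpenCV, `read_frame` would wrap `cv2.VideoCapture.read` and `write_frame` a `cv2.VideoWriter` opened at the source's 30 fps, so the corrected video keeps the original timing.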

Discussion

Our method generally outperforms the other comparison algorithms, but it still has certain limitations. For example, it does not always achieve the optimal UCIQE and UIQM values, and the deeply blurred background regions of images 8 and 10 in Fig 9 cannot be effectively recovered. When processing real underwater video, we found that the method cannot handle low-resolution moving images. In addition, the runtime comparison shows that the complexity of the algorithm still needs to be reduced.

Conclusion

The complexity of the underwater imaging environment leads to low contrast, color distortion, blur, and other degradation problems in the acquired images. Targeting these issues, we propose an underwater optical image enhancement methodology based on the fusion of Retinex and transmittance optimization. First, quantizing the gray values of each channel of the blurred underwater image effectively improves its contrast. Inspired by the Retinex model, dynamic adaptive stretch compensation is adopted to correct the color deviation. The restored image obtained by inverting the dual-transmittance model deals successfully with turbidity. Next, the weights of the three input images are computed at different scales to capture the basic features and necessary information of the images. Finally, multi-scale pixel-level fusion constructs Laplacian and Gaussian pyramids for the inputs and weight maps, resulting in a much more optimized underwater image. The experimental results indicate that our method improves image clarity and contrast while correcting the color imbalance and retaining image details.

6 May 2022
PONE-D-22-07396
A Novel Multi-scale Fusion Framework via Retinex and Transmittance Optimization for Underwater Sensing Scene Image Enhancement
PLOS ONE

Dear Dr. Zhou,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 20 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Sen Xiang Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse. 3. We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section. 4. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. 
Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. 5. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). 
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: No ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: Some comments should be concerned below: C1. The proposed method should be proved in some real videos of underwater scenes. C2. The runtime of the proposed method should be compared with some non deep learning and deep learning state-of-the-art methods on some public datasets. C3. The mainline of Introduction should be clear so that readers can better understand their intentions. C4. There are some formatting problems, such as basic alignment of all paragraphs. C5. There is an error in Fig. 1 where the reflected light direction of suspended particles is not uniform. Missing references: [1]Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding. Chongyi Li, Saeed Anwar, Junhui Hou, Runmin Cong, Chunle Guo, Wenqi Ren. IEEE Transactions on Image Processing, 2021. 
[2] Bayesian Retinex Underwater Image Enhancement. Peixian Zhuang, Chongyi Li, Jiamin Wu. Engineering Applications of Artificial Intelligence, 2021.

Reviewer #2: The paper consists of a series of image enhancement algorithms: improved multi-scale Retinex with color preservation (IMSRCP), histogram quantization, the red channel prior integrated into the total transmittance estimation, and extraction of the weights of the preprocessed clear input images. However, I have a few questions:

-- Most of the methods included in the overall framework are already well-known. In the proposed methodology, some integrated methods are combined in sequence to get better human perception results; in this way, the manuscript holds novelty. However, the authors must discuss their original contribution.

-- In Section 2 Methodology, IMSRCP, histogram quantization, the red channel prior integrated into the total transmittance estimation, and the extraction of weights are listed, but the order in which the methods are explained in the sub-sections seems to be incorrect. Reorder the methods based on the summary given in the overview of the proposed methodology in Section 2.

-- In sub-section 2.2 Design weight diagrams, the normalized weight map of the k-th input image has K = 3, but in Fig. 2 the decomposition level is 5. Explanations are needed because this introduces ambiguity.

-- Fig. 2 needs a self-explanation with the proper order of the methods used; further, Fig. 2 needs a more artistic representation, and the normalized weight map and its corresponding level of decomposition should be checked.

-- Some ablation studies need to be performed in the experiments to prove the effectiveness of each method. If possible, PSNR validation can be carried out in the ablation study.
-- the author must discuss failure cases of their method (if any). -- there are few typo errors present in the manuscript, the author must correct these errors. -- notations used in certain equations were not properly defined. -- the english of the paper must be revised thoroughly as the reviewer has found numerous grammatical errors and punctuation mistakes. -- the work is very interesting, however too many algorithms will lead to consume more time and computing power. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. Submitted filename: PLosOne-Comment.pdf Click here for additional data file. 
Submitted filename: PONE-D-22-07396 Reviewer comments.docx Click here for additional data file. 20 Jun 2022 We have submitted the revised draft of "A Novel Multi-scale Fusion Framework via Retinex and Transmittance Optimization for Underwater Sensing Scene Image Enhancement"(ID: PONE-D-22-07396). Thank you for giving us the opportunity to modify it. We have carefully read your decision letter and made changes based on the comments of two reviewers, the revised content is in the 'Response to Reviewers',which we wish to be considered for publication in PLOS ONE. Submitted filename: Response to Reviewers.pdf Click here for additional data file. 25 Jul 2022
PONE-D-22-07396R1
A Novel Multi-scale Fusion Framework via Retinex and Transmittance Optimization for Underwater Sensing Scene Image Enhancement
PLOS ONE Dear Dr. Zhou, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Sep 08 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Sen Xiang Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. 
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: All comments have been addressed ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. 
participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: No ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: We have no comments on this manuscript, and recommend the authors to check the full text carefully. This manuscript presented an underwater image enhancement method based on a fusion of Retinex and transmittance optimization. This method quantified the gray values of each channel to improved the contrast, used the dynamic adaptive stretch compensation to solve the color deviation, and obtained the restored image through an inverse double transmittance algorithm. Multi-scale pixel-level fusion is used to construct the Laplacian and Gaussian pyramids for input and weight maps, resulting in the final underwater image. Reviewer #2: 1. The paper title should be concise and as short as possible, and Include keywords (refer - as per Authors Guidelines of this journal) 2. I appreciate that the authors has carried out the Ablation study based on PSNR metric and why not other metrics such as UCIQE and UIQM metrics. Further, the authors can also include Ablation study with input image and its corresponding output. 
This can provide the readers to understand the variations in the output image as well as the importance of each block of components as proposed in Fig. 2. 3. Make sure all figures are as per the journal dpi format, since some of the images are seems to be of less resolution (refer - as per Authors Guidelines of this journal). 4. English grammar correction (such as punctuation, typo mistakes) needs to be done. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Submitted filename: Reviewer Comments.docx Click here for additional data file. 26 Aug 2022 We have submitted the revised draft of 'Multi-scale Fusion Framework via Retinex and Transmittance Optimization for Underwater Image Enhancement'(ID: PONE-D-22-07396). Thank you for giving us the opportunity to modify it. We have carefully read your decision letter and made changes based on the comments of two reviewers, the revised content is in the 'Response to Reviewers',which we wish to be considered for publication in PLOS ONE. Submitted filename: Response to Reviewers.pdf Click here for additional data file. 12 Sep 2022 Multi-scale fusion framework via retinex and transmittance optimization for underwater image enhancement PONE-D-22-07396R2 Dear Dr. Zhou, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. 
For more information, please contact onepress@plos.org.

Kind regards,
Sen Xiang
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: We have no comments on this manuscript. This method quantified the gray values of each channel to improve the contrast, used dynamic adaptive stretch compensation to correct the color deviation, and obtained the restored image by inverting a dual-transmittance algorithm. Multi-scale pixel-level fusion is used to construct the Laplacian and Gaussian pyramids for the input and weight maps, resulting in the final underwater image.

Reviewer #2: The manuscript can be accepted for publication, as there are no further comments on this manuscript, and I appreciate the authors for addressing all the comments.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No

Reviewer #2: No

**********

14 Sep 2022

PONE-D-22-07396R2

Multi-scale fusion framework via retinex and transmittance optimization for underwater image enhancement

Dear Dr. Zhou:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Sen Xiang
Academic Editor
PLOS ONE
  9 in total

1.  Single Image Haze Removal Using Dark Channel Prior.

Authors:  Kaiming He; Jian Sun; Xiaoou Tang
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2010-09-09       Impact factor: 6.226

2.  An Underwater Color Image Quality Evaluation Metric.

Authors:  Miao Yang; Arcot Sowmya
Journal:  IEEE Trans Image Process       Date:  2015-10-19       Impact factor: 10.856

3.  Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding.

Authors:  Chongyi Li; Saeed Anwar; Junhui Hou; Runmin Cong; Chunle Guo; Wenqi Ren
Journal:  IEEE Trans Image Process       Date:  2021-05-14       Impact factor: 10.856

4.  Underwater Image Restoration Based on Image Blurriness and Light Absorption.

Authors:  Yan-Tsung Peng; Pamela C Cosman
Journal:  IEEE Trans Image Process       Date:  2017-02-02       Impact factor: 10.856

5.  Underwater Depth Estimation and Image Restoration Based on Single Images.

Authors:  Paulo L J Drews; Erickson R Nascimento; Silvia S C Botelho; Mario Fernando Montenegro Campos
Journal:  IEEE Comput Graph Appl       Date:  2016 Mar-Apr       Impact factor: 2.088

6.  Color Balance and Fusion for Underwater Image Enhancement.

Authors:  Codruta O Ancuti; Cosmin Ancuti; Christophe De Vleeschouwer; Philippe Bekaert
Journal:  IEEE Trans Image Process       Date:  2017-10-05       Impact factor: 10.856

7.  An Underwater Image Enhancement Benchmark Dataset and Beyond.

Authors:  Chongyi Li; Chunle Guo; Wenqi Ren; Runmin Cong; Junhui Hou; Sam Kwong; Dacheng Tao
Journal:  IEEE Trans Image Process       Date:  2019-11-28       Impact factor: 10.856

8.  Fusion algorithm of visible and infrared image based on anisotropic diffusion and image enhancement.

Authors:  Hui Huang; Linlu Dong; Zhishuang Xue; Xiaofang Liu; Caijian Hua
Journal:  PLoS One       Date:  2021-02-19       Impact factor: 3.240

9.  Deep Supervised Residual Dense Network for Underwater Image Enhancement.

Authors:  Yanling Han; Lihua Huang; Zhonghua Hong; Shouqi Cao; Yun Zhang; Jing Wang
Journal:  Sensors (Basel)       Date:  2021-05-10       Impact factor: 3.576

