Color Correction and Local Contrast Enhancement For Underwater Image Enhancement
ABSTRACT Underwater images suffer from various quality degradation problems such as color cast, low contrast, and blurred details. To solve these issues, we propose a novel underwater image enhancement method that implements color correction, detail sharpening, and contrast enhancement in stages. In particular, the proposed method combines multi-channel color compensation with color correction. It addresses detail blurring and low contrast through the Gaussian differential pyramid and the local contrast enhancement of contrast limited adaptive histogram equalization, respectively. The proposed method mainly includes color compensation, color correction, detail sharpening, and contrast enhancement. Qualitative and quantitative comparisons demonstrate that the proposed method effectively removes image blur, realizes color correction, and significantly improves image clarity.
INDEX TERMS Underwater image enhancement, color correction, detail sharpening, contrast enhancement.
VOLUME 10, 2022 119193
S. Jin et al.: Color Correction and Local Contrast Enhancement for Underwater Image Enhancement
1) An underwater image enhancement method based on color correction and local contrast enhancement is proposed. Compared with state-of-the-art underwater image restoration or enhancement methods, the proposed method can effectively improve the quality of underwater images and reduce the loss of detail information without implementing a color space transformation.

2) Considering that the degradation of underwater images is caused by the attenuation of light at different wavelengths, color compensation of the red and blue channels is essential. The Multi-Scale Retinex (MSR) method based on auto-levels is implemented to effectively correct the compensated underwater image's color cast.

3) To highlight the details of the color-corrected image, the detail feature maps of the R, G, and B channels are reconstructed using the Gaussian differential pyramid and fused into the corresponding R, G, and B channels of the color-corrected image.

4) To further improve the contrast of the underwater image, the pixel values of the high, medium, and low intervals are processed using CLAHE. As a result, the pixel values of the high and low intervals are effectively suppressed, and the pixel values of the middle interval are effectively stretched.

The remaining sections are arranged as follows: Section II introduces several image enhancement methods. Section III presents the details of the proposed method. Section IV compares and analyzes the experimental results. Section V summarizes this paper and discusses future research.

II. RELATED WORK
Collecting high-quality image information in a complex underwater environment is challenging, so image enhancement techniques have been extensively studied [6], [7] to meet the needs of practical applications. Physical-model methods derive restoration parameters from physical models and prior knowledge to restore high-quality underwater images [8]. Non-physical-model methods obtain underwater images with rich information and better visual quality [9] by adjusting the pixel values of the original image.

The methods based on the physical model mainly rely on specialized hardware, multiple images, or prior knowledge. Although the methods using specialized hardware [8], [10], [11], [12] are effective in enhancing and restoring underwater images, they still have some limitations. For instance, the underwater optical imaging system is complex and expensive because complex hardware acquisition devices (e.g., optical laser sensors) are used to capture turbid underwater images.

The second class of methods uses multiple images [13], [14], [15] or an approximate estimation of the scene [16], [17]. Narasimhan et al. [13] and Schechner et al. [14] employed multiple images for underwater image restoration, but these methods need more than two images of the same scene as prior knowledge, which limits the real-time performance of video observation. Treibitz et al. [15] take into account the partial polarization of object reflection and backscattering from more than two images, but the image acquisition process is complicated. Kopf et al. [16] used existing geospatial information and urban 3D models to restore images, but some additional information must be provided through user interaction. Tian et al. [17] used synthetic aperture imaging and polarization imaging to restore images, which can effectively increase the amount of information obtained by a single imaging system. However, it requires more complex hardware, so it is not suitable for ordinary users.

The third class of methods uses prior knowledge [18], [19], [20], [21], [22]. The dark channel prior (DCP) proposed by He et al. [18] is used for outdoor image dehazing. Since image degradation in both foggy and underwater scenes is caused by the scattering and absorption of the medium, the method can also be used for underwater image restoration [19], [20], [21], [22].

The non-physical models are mainly divided into five categories: transform domain-based methods, spatial domain-based methods, color constancy-based methods, deep learning methods, and fusion methods. The first class mainly includes the quaternion [23], homomorphic filter [24], and wavelet transform [25] methods. This kind of method removes noise well, but it cannot achieve ideal results for the contrast and color of underwater images.

The spatial domain-based methods mainly include grayscale transformation [26], histogram equalization (HE) [27], and contrast limited adaptive histogram equalization (CLAHE) [28]. HE stretches the histogram evenly and globally to improve the image contrast [29]. However, HE increases the sparsity of the gray distribution of the image, which may lose part of the detail information. Adaptive histogram equalization (AHE) [30] improves the local contrast of the image and enhances edge details by redistributing the local gray levels of the image multiple times, but it also amplifies noise. Based on AHE, CLAHE imposes constraints on the local image contrast to avoid excessively amplifying image noise while enhancing contrast. Hitam et al. [31] proposed a mixture CLAHE color model and applied the CLAHE method to the RGB and HSV color models, respectively.

The color constancy-based methods mainly include White Balance [32], Gray World [33], Gray Edge [34], Weighted Grey Edge [35], and Retinex [36]. White balance may cause color distortion when there is insufficient light. Because the edges of underwater images are blurred by the complexity of the underwater environment, the results are not ideal when the assumed conditions of Gray World, Gray Edge, and Weighted Grey Edge are violated. Joshi et al. [37] applied Retinex to weather-degraded images, but its enhancement effect is limited. Based on the Retinex theory, Fu et al. [38] proposed using different methods to enhance the incident and reflection components of underwater images, and achieved a good enhancement effect on underwater images with color deviation. Alex et al. [39] converted the image to the YCbCr color space and achieved good enhancement effects.
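The contrast-limiting idea behind CLAHE in the spatial-domain methods above can be sketched in a few lines: clip the histogram before building the equalization mapping so that dominant gray levels cannot produce an extreme stretch. This is a hedged, single-tile illustration (real CLAHE applies the mapping per local tile with interpolation between tiles), not code from the paper; the `clip_limit` value is an arbitrary choice for the demo.

```python
# Global (single-tile) clipped histogram equalization, illustrating the
# contrast-limiting step that distinguishes CLAHE from plain HE/AHE.

def clipped_equalize(pixels, levels=256, clip_limit=0.05):
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Clip each bin at clip_limit * n and redistribute the excess evenly
    # across all bins (integer remainder is dropped in this sketch).
    cap = max(1, int(clip_limit * n))
    excess = sum(max(0, h - cap) for h in hist)
    hist = [min(h, cap) + excess // levels for h in hist]
    # Build the cumulative mapping onto [0, levels - 1].
    total = sum(hist)
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(round((levels - 1) * acc / total))
    return [cdf[p] for p in pixels]

tile = [10, 10, 10, 10, 12, 200, 200, 250]
print(clipped_equalize(tile))
```

Without the clipping step, the four pixels at level 10 would dominate the cumulative mapping; with it, the mapping stays closer to a uniform stretch, which is exactly the noise-amplification control the text attributes to CLAHE.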
FIGURE 3. Comparison before and after color compensation: 1. The first row shows the original images; 2. The second row shows the color-compensated underwater images.
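The green-guided compensation illustrated in Fig. 3 (cf. Eqs. (1) and (2)) can be sketched as follows. This is a minimal sketch assuming each channel is stored as a nested list of floats normalized to [0, 1]; it is not the authors' implementation.

```python
# Green-guided channel compensation in the style of Eqs. (1) and (2):
# the attenuated channel is pushed toward the green channel's mean,
# weighted by (1 - channel) and by the green channel itself.

def channel_mean(ch):
    return sum(map(sum, ch)) / (len(ch) * len(ch[0]))

def compensate_channel(ch, green, alpha):
    """alpha is the compensation factor (1.5 for red, 1.0 for blue in the paper)."""
    ch_mean, g_mean = channel_mean(ch), channel_mean(green)
    return [[ch[i][j] + alpha * (g_mean - ch_mean) * (1 - ch[i][j]) * green[i][j]
             for j in range(len(ch[0]))] for i in range(len(ch))]

red   = [[0.1, 0.2], [0.1, 0.2]]   # weak red channel of a tiny 2x2 image
green = [[0.6, 0.8], [0.6, 0.8]]   # dominant green channel
red_c = compensate_channel(red, green, alpha=1.5)
```

Because the correction is scaled by (1 - I_r(x)), already-bright red pixels receive little compensation, which limits overshoot in areas where the red channel is not actually attenuated.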
Thus, we first compensate the red channel of the original image before performing color correction. Mathematically, the compensation for the red channel is defined as follows:

I_rc(x) = I_r(x) + α · (Ī_g − Ī_r) · (1 − I_r(x)) · I_g(x) (1)

where I_r(x) represents the red channel, I_g(x) represents the green channel, Ī_r and Ī_g represent the average values of I_r(x) and I_g(x), respectively, and α is a constant parameter. Considering that the red channel attenuation is strong, extensive experiments determined that setting α to 1.5 gives a better compensation effect.

The blue channel may also show significant attenuation in a water environment with more plankton or organic matter. The compensation of the blue channel can then be expressed as:

I_bc(x) = I_b(x) + α · (Ī_g − Ī_b) · (1 − I_b(x)) · I_g(x) (2)

where I_b(x) represents the blue channel, I_g(x) represents the green channel, Ī_b and Ī_g represent the average values of I_b(x) and I_g(x), respectively, and α is a constant parameter; the compensation factor α of the blue channel is set to 1.0.

Fig. 3 shows the underwater images before and after color compensation. The overall visual effect of the degraded underwater images is improved, and the green cast is effectively suppressed by color compensation. From left to right, the result of color compensation is better when the green cast of the degraded image is more obvious. However, the compensated images still suffer from blurred details and color distortion. We focus on the detail blur in Section B.

IV. COLOR CORRECTION
This section uses multi-scale Retinex [38], [53] to complete the color correction. In human visual perception, color constancy enables the eyes to adapt to different lighting conditions. In recent years, multi-scale Retinex with color restoration (MSRCR) has introduced color correction for each channel to effectively suppress the problems associated with MSR-enhanced images [54].

The visible light image consists of two parts, illumination and reflection, so it can be defined as:

log(S(x, y)) = log(L(x, y)) + log(R(x, y)) (3)

where S(x, y) represents the visible light image, L(x, y) represents the illumination, and R(x, y) represents the reflection.

In order to obtain more accurate reflection components of S(x, y), we convolve S(x, y) with Gaussian kernel functions. Therefore, the reflection component can be defined as:

MSR_c(x, y) = Σ_{n=1}^{N} w_n {log(S_c(x, y)) − log(G_n(x, y) ∗ S_c(x, y))} (4)

G_n(x, y) = (1 / (2π σ_n²)) · exp(−((x − x_center_w)² + (y − y_center_w)²) / (2σ_n²))

where c ∈ {R, G, B} corresponds to the three color channels R, G, and B, MSR_c(x, y) represents the output image corresponding to the cth channel, N represents the number of Gaussian functions corresponding to different standard deviations, w_n represents the weight for each output image, where w_1 + w_2 + · · · + w_N = 1, and σ is the standard deviation. Typically, σ ∈ {σ_1, σ_2, σ_3, σ_4, σ_5, σ_6} is the vector of Gaussian blur coefficients: 0 ≤ σ_1 < σ_2 < 50 are two small scales, 50 ≤ σ_3 < σ_4 < 100 are two medium scales, and 100 ≤ σ_5 < σ_6 are two large scales.

We have already shown the validity of applying MSRCR to images with thick fog for color correction in [55]. We applied MSRCR to the underwater images in the third row of Fig. 4 and find that although MSRCR improved color and contrast, it introduced detail loss and a reddish cast. By observing the tricolor histograms of the MSRCR-corrected images and comparing the probabilities of the corresponding 0-pixel values, the probability of the 0-pixel value of the G and B channels increased significantly after MSRCR correction, while that of the R channel decreased significantly. Therefore, the increased probability of the 0-pixel value in MSRCR-corrected images is the leading cause of the blurring of details. In addition, we found that the tricolor histograms of most enhanced images are distributed in the middle region, and those of a few images are distributed in the right region.

In summary, we propose an auto-levels-based MSR color correction method to address the blurring of details and the color distortion of images.

Firstly, the gray histograms of the R, G, and B channels are calculated using auto-levels. Then, the highlight and shadow values of the R, G, and B channels are determined by the clipping proportion and used as the clipping boundaries. Finally, the same linear stretch is applied to the middle part of each channel, and each gray value is kept in the interval [0, 255]. The linear stretch is expressed as (5), where MSRCR_c(x, y) represents the gray value after color correction, Min = I_Sort(m · n · per) and Max = I_Sort(m · n · (1 − per)) represent the lower and upper limits of the clipping boundary, m represents the number of rows, n represents the number of columns, I_Sort represents the matrix of sorted grayscale values with I_Sort = Sort(MSR_c), and per is the clipping proportion, set to 0.5% by extensive experiments.

Fig. 4 shows that red artifacts and color distortion still appear in local areas of the image with the MSRCR-based color correction method. Therefore, we propose reconstructing the detail and edge information using the Gaussian difference pyramid and fusing it into the color-corrected image to highlight the details of the image in Section C.

FIGURE 4. Comparison of different color correction methods: 1. The first row shows the original images; 2. The third row shows the color-corrected images of MSRCR; 3. The fifth row shows the color-corrected images of auto-levels-based MSR; 4. The remaining rows show the tricolor histograms.

A. DETAIL SHARPENING
The finer the scale, the more image detail is preserved; the coarser the scale, the more image detail is lost. On this basis, a multi-scale space is built to describe how the image feature information changes with scale. More multi-scale spatial representation sequences are obtained by introducing continuously varying scale parameters, and scale spaces are extracted from these sequences to realize feature extraction at different resolutions. Pyramid transformation has been gradually applied to image defogging [56], image classification [57], and underwater image enhancement [47], [48] due to its good performance in feature extraction and edge preservation.

Since the underwater image enhanced in Section C still has blurred details, this section mainly considers the reconstruction of detail information. We first get the Retinex-enhanced image I1, and then the details of the R, G, and B channels are extracted in three steps: decomposition, differencing, and reconstruction of the Gaussian pyramid. Finally, the reconstructed details and image I1 are fused to get image I2, as shown in Fig. 5.

Decomposition process of the Gaussian pyramid transformation: first, define k Gaussian kernels w_k with different scales and window sizes; then convolve each w_k with the original image I1 to get k images G_0 of the same size, with G_0 as the 0th layer of the kth Gaussian pyramid. Similarly, the image G_l^k in the lth layer of the kth Gaussian pyramid is constructed as follows: the image G_{l−1}^k of the (l − 1)th layer is convolved with the Gaussian kernel w_k, and the convolution result is down-sampled by a factor of two in the row and column directions. G_l^k is expressed as follows:

G_l^k(i, j) = Σ_{m_k=−2}^{2} Σ_{n_k=−2}^{2} w_k(m_k, n_k, σ_k) · G_{l−1}^k(2i + m_k, 2j + n_k), (1 ≤ l ≤ N, 0 ≤ i ≤ R_l, 0 ≤ j ≤ C_l) (6)

where N is the maximum number of layers, R_l denotes the number of rows of the lth-layer image, and C_l denotes the number of columns of the lth-layer image. The image G_l^k of the lth layer of the Gaussian pyramid is four times smaller than G_{l−1}^k. We construct two Gaussian pyramids G_l^1 and G_l^2 with k = 2, where w_1 is a Gaussian kernel with a scale and radius of 3, and w_2 is a Gaussian kernel with a scale and radius of 5.

Two Gaussian pyramids G_l^1 and G_l^2 with the same number of layers are obtained by Eq. (6). The small-scale image obtained by the decomposition contains rich image details. The detail of the large-scale image is significantly reduced; it mainly contains contour information and has better noise immunity. Combining the advantages of both is beneficial to the preservation of detail and contour information. The Gaussian differential pyramid is obtained by differencing the corresponding images of each layer of the two Gaussian pyramids. D_l is constructed as:

D_l(i, j) = G_l^1(i, j) − G_l^2(i, j) (7)

Based on Eq. (7), double up-sampling is performed from the lth layer to the first layer; the image up-sampled at each layer is accumulated onto the layer above, and this operation is repeated until the 0th layer is reached. Finally, the reconstructed detail information can be defined as:

I_d(i, j) = Σ_{l−1} {D_{l−1}(i, j) + U(D_l(i, j))} (8)

where I_d is the reconstructed detail image and U(D_l(i, j)) is the double up-sampling operation on the lth-layer image. In summary, the reconstructed detail information I_d and image I1 are fused to obtain the sharpened image I2 as follows:

I2(i, j) = I_d(i, j) + I1(i, j) (9)

FIGURE 6. Comparison before and after sharpening details: 1. The first row shows the original images; 2. The second row shows the color-corrected images; 3. The third row shows the detail-sharpened images.

The second and third rows of Fig. 6 show that the higher the contrast of an area, the more its details are highlighted, while the details of low-contrast areas remain blurred. The operation highlights the details well, but for low-contrast areas the effect is poor and the color improvement limited. Therefore, we further process the images using the CLAHE method in Section D.

B. LOCAL CONTRAST ENHANCEMENT
Based on color correction and detail sharpening, the final enhanced image has richer color information and a more extensive dynamic range. To improve the image further, it is necessary to balance the pixel values of the three color channels. AHE [58] is an improved method of traditional histogram equalization, but the method
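The detail-sharpening stage above (Eqs. (6) to (9)) can be illustrated with a one-dimensional analogue: blur the signal at two scales, take their difference as the detail layer, and add it back to the input. This sketch uses a single level, a box blur in place of the two Gaussian kernels, and no down/up-sampling, so it only conveys the idea, not the paper's full pyramid.

```python
# 1-D analogue of difference-of-Gaussians detail reconstruction:
# detail = blur_fine - blur_coarse (Eq. 7), output = input + detail (Eq. 9).

def blur(sig, radius):
    # Simple box blur as a stand-in for the Gaussian kernels w1 and w2.
    n = len(sig)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(sig[lo:hi]) / (hi - lo))
    return out

def dog_sharpen(sig, r1=1, r2=2):
    fine, coarse = blur(sig, r1), blur(sig, r2)
    detail = [f - c for f, c in zip(fine, coarse)]   # D = G1 - G2
    return [s + d for s, d in zip(sig, detail)]      # I2 = I1 + Id

edge = [0, 0, 0, 1, 1, 1]
print(dog_sharpen(edge))
```

Running this on a step edge produces a small undershoot before the edge and an overshoot after it, which is the mechanism by which the differential pyramid accentuates edges and fine detail in the full 2-D method.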
FIGURE 8. Comparison of the color accuracy of different sharpening methods. From left to right: 1. Original image, 2. EUIV [47], 3. UDER [20], 4. MILHD [60], 5. GDCP [61], 6. UICR [62], 7. Our results.
A. COLOR ACCURACY TEST
To verify the accuracy of our method for color restoration, we chose the ColorChecker Chart 24 as a reference for color accuracy. We compared the following methods: EUIV [47], UDER [20], MILHD [60], GDCP [61], and UICR [62]. The comparison results are shown in Fig. 8.

As shown in Fig. 8, in terms of global color restoration, UDER [20] and GDCP [61] cannot completely remove the color cast. EUIV [47], MILHD [60], and UICR [62] obtained good results, but MILHD [60] and UICR [62] presented reddish and dark appearances, respectively. In contrast, our method has better visual effects and higher color accuracy. In terms of local color restoration, the color accuracy of UDER [20], GDCP [61], and UICR [62] is not ideal. EUIV [47] and MILHD [60] show better results, but our method has higher robustness in color fidelity for underwater images captured by different cameras.

B. UNDERWATER ENHANCING EVALUATION
We first evaluate our method using the dataset of UICR [62], which mainly includes the color charts and the 3D structure of the scene [62], [63]. The configuration details of these underwater images and the acquisition camera set are given in [62]. We compared the following methods: UDER [20], MILHD [60], IBLA [6], Tstep [64], GDCP [61], and UICR [62]. The comparison results are shown in Fig. 9. Table 1 and Table 2 provide five evaluation metrics: information entropy (IE) [55], average gradient (AG) [55], patch-based contrast quality index (PCQI) [65], underwater image quality measure (UIQM) [66], and underwater color image quality evaluation (UCIQE) [67]. IE, AG, and PCQI are general metrics for natural image enhancement methods, while UIQM and UCIQE are metrics specific to underwater image enhancement methods. IE evaluates the color information of the image. AG evaluates the clarity of the image. PCQI reflects the human eye's perception of image contrast. UIQM evaluates the quality of underwater images by their colorfulness, sharpness, and contrast. UCIQE evaluates the quality of underwater images by a linear combination of their chroma, saturation, and contrast.

TABLE 1. Quantitative results based on the IE, AG, and UIQM metrics for each method in Fig. 9.

FIGURE 9. Comparison of the results of different methods. From left to right: 1. Original image, 2. UDER [20], 3. MILHD [60], 4. IBLA [6], 5. Tstep [64], 6. GDCP [61], 7. UICR [62], 8. Our results. The quantitative results corresponding to these images are shown in Table 1.

Fig. 9 shows that IBLA [6] and GDCP [61] are not ideal for the color correction and contrast enhancement of underwater scenes. UDER [20] and MILHD [60] are better than IBLA [6] and GDCP [61] in restoring the contrast of the scene. However, UDER [20] and MILHD [60] still have significant color bias. For instance, the restoration results of R3008 and R3204 obtained by UDER [20] are significantly reddish, while the restoration results of the other images are slightly blue and dark. This set of underwater images enhanced by MILHD [60] shows an obvious reddish appearance. Tstep [64] has higher robustness in restoring the scene's contrast, but there is some color bias, such as the enhanced images of R3008 and R3204 with a reddish appearance. UICR [62] has higher robustness in restoring scene contrast and color, but the recovered images suffer from blurred details. Our method is superior to the comparison methods in color correction, contrast, and detail enhancement and has similar or usually higher values of the IE, AG, and UIQM metrics in Table 1.

TABLE 2. Quantitative results based on the IE, PCQI, UIQM, and UCIQE metrics for each method in Fig. 10.

FIGURE 10. Comparison of the results of different methods. From left to right: 1. Original image, 2. EUIV [47], 3. UDER [20], 4. IBLA [6], 5. Tstep [64], 6. GDCP [61], 7. UICR [62], 8. Our results. The quantitative results corresponding to these images are shown in Table 2.

As shown in Fig. 10, it can be observed that UDER [20], IBLA [6], and GDCP [61] did not perform well. EUIV [47] and Tstep [64] have higher robustness in restoring the contrast of the scene, but they also have some color bias. For example, the enhanced image of Diver2 obtained by EUIV [47] is reddish, and the restored images of Diver4 and Coral reef1 obtained by Tstep [64] are reddish. Despite UICR [62] performing well for underwater scenes, details of the restored images are lost. For instance, the restored image of Fish2 obtained by UICR [62] is overexposed and loses detail. As shown in Table 2, our method is significantly better than the other methods.

TABLE 3. Running time of IBLA [6], UDER [20], GDCP [61], UICR [62], MILHD [60], Tstep [64], EUIV [47], and our method.

To compare the running time, on the basis of the same hardware configuration, system, and Matlab R2019b, the average time of each method over ten runs is shown in Table 3. It can be seen that our method outperforms most of the comparison methods, and the advantage becomes more pronounced as the image resolution increases. UDER [20], IBLA [6], GDCP [61], and UICR [62] are based on solving complex physical models and therefore run longer. MILHD [60] increased the complexity of the method. EUIV [47] is slightly more complicated than our method because of the use of too many fusion images and weight maps. Tstep [64] does not require many complex operations and is therefore superior to our method. The running time of IBLA [6], UDER [20], GDCP [61], UICR [62], and MILHD [60] increases significantly with the rapid increase in image resolution, while that of Tstep [64], EUIV [47], and our method grows slowly.

Besides, Fig. 11 further considers that captured underwater images usually appear yellowish and have low contrast. For such images, we need to perform color compensation, which can effectively correct the image's color. It can be seen from Fig. 11 that our method outperforms the other methods.

FIGURE 11. Comparison results in extreme scenes with a nonuniform illumination condition. From left to right: 1. DCP [18], 2. EUIV [47], 3. UDER [20], 4. IBLA [6], 5. Tstep [64], 6. GDCP [61], 7. UICR [62], 8. Our results.

Fig. 12 shows that the proposed method significantly improves the estimation of transmission maps based on DCP [18]. It can be seen that the transmission maps estimated by UDER [20], Tstep [64], and GDCP [61] are poor. In contrast, EUIV [47], IBLA [6], and UICR [62] are superior to them in the estimation of transmission maps. Furthermore, the MILHD [60] method has good transmittance estimation performance, but some details are lost. However, our method significantly improves the transmission map estimation and highlights the details.

FIGURE 12. Comparison results on underwater transmission estimation. From left to right: 1. EUIV [47], 2. UDER [20], 3. MILHD [60], 4. IBLA [6], 5. Tstep [64], 6. GDCP [61], 7. UICR [62], 8. Our results.

C. APPLICATION
The purpose of enhancing the underwater image is to benefit autonomous underwater navigation [68], underwater target tracking [69], and key feature point matching [48]. To verify the application effect of our method after image enhancement, we use key feature point matching as an example.

FIGURE 13. Applying standard SIFT significantly improves the accuracy of point matching on our enhanced image (bottom) compared to the original underwater image (top).

Fig. 13 shows the feature matching results using SIFT on the original and enhanced images. It is not difficult to see that more feature matching points can be obtained under the same threshold condition by using our method to enhance the image.

VI. CONCLUSION
We propose an underwater image enhancement method that mainly includes four parts: color compensation, color correction, detail sharpening, and contrast enhancement. The proposed method realizes multi-channel color compensation followed by color correction. It solves detail blurring and low contrast by the detail sharpening of the Gaussian differential pyramid and the local contrast enhancement of CLAHE, respectively. Experimental results show that our method improves contrast, detail information, and color correction through multi-scale Retinex (MSR) based on auto-levels. In particular, our method maintains advantages in both qualitative and quantitative metrics and has fast and highly efficient performance.

REFERENCES
[1] W. Zhang, P. Zhuang, H.-H. Sun, G. Li, S. Kwong, and C. Li, "Underwater image enhancement via minimal color loss and locally adaptive contrast enhancement," IEEE Trans. Image Process., vol. 31, pp. 3997–4010, 2022.
[2] C. Li, S. Anwar, J. Hou, R. Cong, C. Guo, and W. Ren, "Underwater image enhancement via medium transmission-guided multi-color space embedding," IEEE Trans. Image Process., vol. 30, pp. 4985–5000, 2021.
[3] Z. Jiang, Z. Li, S. Yang, X. Fan, and R. Liu, "Target oriented perceptual adversarial fusion network for underwater image enhancement," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 8, pp. 1–15, Aug. 2021.
[4] W. Zhang, Y. Wang, and C. Li, "Underwater image enhancement by attenuated color channel correction and detail preserved contrast enhancement," IEEE J. Ocean. Eng., vol. 47, no. 3, pp. 1–18, Mar. 2022.
[5] P. Zhuang, J. Wu, F. Porikli, and C. Li, "Underwater image enhancement with hyper-Laplacian reflectance priors," IEEE Trans. Image Process., vol. 31, pp. 5442–5455, 2022.
[6] Y.-T. Peng and P. C. Cosman, "Underwater image restoration based on image blurriness and light absorption," IEEE Trans. Image Process., vol. 26, no. 4, pp. 1579–1594, Apr. 2017.
[7] A. S. A. Ghani and N. A. M. Isa, "Underwater image quality enhancement through integrated color model with Rayleigh distribution," Appl. Soft Comput., vol. 27, pp. 219–230, Feb. 2015.
[8] X. Zhao, T. Jin, and S. Qu, "Deriving inherent optical properties from background color and underwater image enhancement," Ocean Eng., vol. 94, pp. 163–172, Jan. 2015.
[9] C. Li, J. Guo, and C. Guo, "Emerging from water: Underwater image color correction based on weakly supervised color transfer," IEEE Signal Process. Lett., vol. 25, no. 3, pp. 323–327, Mar. 2018.
[10] M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, "Synthetic aperture confocal imaging," ACM Trans. Graph., vol. 23, no. 3, pp. 825–834, Aug. 2004.
[11] T. Treibitz and Y. Y. Schechner, "Turbid scene enhancement using multi-directional illumination fusion," IEEE Trans. Image Process., vol. 21, no. 11, pp. 4662–4667, Jul. 2012.
[12] B. Ouyang, F. Dalgleish, A. Vuorenkoski, W. Britton, B. Ramos, and B. Metzger, "Visualization and image enhancement for multistatic underwater laser line scan system using image-based rendering," IEEE J. Ocean. Eng., vol. 38, no. 3, pp. 566–580, Jul. 2013.
[13] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 6, pp. 713–724, Jun. 2003.
[14] Y. Y. Schechner and Y. Averbuch, "Regularized image recovery in scattering media," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1655–1660, Sep. 2007.
[15] T. Treibitz and Y. Y. Schechner, "Active polarization descattering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 3, pp. 385–399, Mar. 2009.
[16] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, "Deep photo: Model-based photograph enhancement and viewing," ACM Trans. Graph., vol. 27, no. 5, pp. 1–10, Dec. 2008.
[17] Y. Tian, B. Liu, X. Su, L. Wang, and K. Li, "Underwater imaging based on LF and polarization," IEEE Photon. J., vol. 11, no. 1, pp. 1–9, Feb. 2019.
[18] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341–2353, Dec. 2011.
[19] P. Drews, E. do Nascimento, F. Moraes, S. Botelho, and M. Campos, "Transmission estimation in underwater single images," in Proc. IEEE Int.
[28] A. S. A. Ghani and N. A. M. Isa, "Enhancement of low quality underwater image through integrated global and local contrast correction," Appl. Soft Comput., vol. 37, pp. 332–344, Dec. 2015.
[29] R. Hummel, "Image enhancement by histogram transformation," Comput. Graph. Image Process., vol. 6, no. 2, pp. 184–195, Apr. 1977.
[30] S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman, and K. Zuiderveld, "Adaptive histogram equalization and its variations," Comput. Vis., Graph., Image Process., vol. 39, no. 3, pp. 355–368, Sep. 1987.
[31] M. S. Hitam, W. N. J. H. W. Yussof, E. A. Awalludin, and Z. Bachok, "Mixture contrast limited adaptive histogram equalization for underwater image enhancement," in Proc. Int. Conf. Comput. Appl. Technol. (ICCAT), Jan. 2013, pp. 1–5.
[32] Y.-C. Liu, W.-H. Chan, and Y.-Q. Chen, "Automatic white balance for digital still camera," IEEE Trans. Consum. Electron., vol. 41, no. 3, pp. 460–466, Aug. 1995.
[33] G. Buchsbaum, "A spatial processor model for object colour perception," J. Franklin Inst., vol. 310, no. 1, pp. 1–26, Jul. 1980.
[34] J. van de Weijer, T. Gevers, and A. Gijsenij, "Edge-based color constancy," IEEE Trans. Image Process., vol. 16, no. 9, pp. 2207–2214, Sep. 2007.
[35] A. Gijsenij, T. Gevers, and J. van de Weijer, "Improving color constancy by photometric edge weighting," IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 5, pp. 918–929, May 2011.
[36] Y. Gao, H.-M. Hu, B. Li, and Q. Guo, "Naturalness preserved nonuniform illumination estimation for image enhancement based on Retinex," IEEE Trans. Multimedia, vol. 20, no. 2, pp. 335–344, Feb. 2017.
[37] K. R. Joshi and R. S. Kamathe, "Quantification of retinex in enhancement of weather degraded images," in Proc. Int. Conf. Audio, Lang. Image Process., Jul. 2008, pp. 1229–1233.
[38] X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, "A retinex-based enhancing approach for single underwater image," in Proc. IEEE Int. Conf. Image Process. (ICIP), Oct. 2014, pp. 4572–4576.
[39] S. M. Alex and M. H. Supriya, "Underwater image enhancement using single scale retinex on a reconfigurable hardware," in Proc. Int. Symp. Ocean Electron. (SYMPOL), Nov. 2015, pp. 1–5.
[40] J. Perez, A. C. Attanasio, N. Nechyporenko, and P. J. Sanz, "A deep learning approach for underwater image enhancement," in Proc. Int. Work-Conf. Interplay Between Natural Artif. Comput. Cham, Switzerland: Springer, 2017, pp. 183–192.
[41] W. Zhang, L. Dong, and W. Xu, "Retinex-inspired color correction and detail preserved fusion for underwater image enhancement," Comput. Electron. Agricult., vol. 192, Jan. 2022, Art. no. 106585.
[42] X. Zong, Z. Chen, and D. Wang, "Local-CycleGAN: A general end-to-end network for visual enhancement in complex deep-water environment," Appl. Intell., vol. 51, no. 4, pp. 1947–1958, Oct. 2021.
[43] W. Zhang, L. Dong, T. Zhang, and W. Xu, "Enhancing underwater image
Conf. Comput. Vis. Workshops, Dec. 2013, pp. 825–830. via color correction and bi-interval contrast enhancement,’’ Signal Pro-
[20] P. L. J. Drews, E. R. Nascimento, S. S. C. Botelho, and cess., Image Commun., vol. 90, Jan. 2021, Art. no. 116030.
M. F. M. Campos, ‘‘Underwater depth estimation and image restoration [44] S. Sun, H. Wang, H. Zhang, M. Li, M. Xiang, C. Luo, and P. Ren, ‘‘Under-
based on single images,’’ IEEE Comput. Graph. Appl., vol. 36, no. 2, water image enhancement with reinforcement learning,’’ IEEE J. Ocean.
pp. 24–35, Mar. 2016. Eng., early access, Apr. 7, 2022, doi: 10.1109/JOE.2022.3152519.
[21] M. Zhang and J. Peng, ‘‘Underwater image restoration based on
[45] R. Liu, Z. Jiang, S. Yang, and X. Fan, ‘‘Twin adversarial contrastive
a new underwater image formation model,’’ IEEE Access, vol. 6,
learning for underwater image enhancement and beyond,’’ IEEE Trans.
pp. 58634–58644, 2018.
Image Process., vol. 31, pp. 4922–4936, 2022.
[22] Y. Liu, S. Rong, X. Cao, T. Li, and B. He, ‘‘Underwater single image
[46] J. Wang, P. Li, J. Deng, Y. Du, J. Zhuang, P. Liang, and P. Liu, ‘‘CA-GAN:
dehazing using the color space dimensionality reduction prior,’’ IEEE
Class-condition attention GAN for underwater image enhancement,’’ IEEE
Access, vol. 8, pp. 91116–91128, 2020.
Access, vol. 8, pp. 130719–130728, 2020.
[23] Y. Fang, W. Lin, B.-S. Lee, C.-T. Lau, Z. Chen, and C.-W. Lin, ‘‘Bottom-
up saliency detection model based on human visual sensitivity and ampli- [47] C. Ancuti, C. O. Ancuti, T. Haber, and P. Bekaert, ‘‘Enhancing underwater
tude spectrum,’’ IEEE Trans. Multimedia, vol. 14, no. 1, pp. 187–198, images and videos by fusion,’’ in Proc. IEEE Conf. Comput. Vis. Pattern
Sep. 2012. Recognit., Jun. 2012, pp. 81–88.
[24] M.-J. Seow and V. K. Asari, ‘‘Ratio rule and homomorphic filter for [48] C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and P. Bekaert, ‘‘Color
enhancement of digital colour image,’’ Neurocomputing, vol. 69, nos. 7–9, balance and fusion for underwater image enhancement,’’ IEEE Trans.
pp. 954–958, Mar. 2006. Image Process., vol. 27, no. 1, pp. 379–393, Jan. 2018.
[25] S.-B. Gao, M. Zhang, Q. Zhao, X.-S. Zhang, and Y.-J. Li, ‘‘Underwater [49] R. Sethi and S. Indu, ‘‘Fusion of underwater image enhancement and
image enhancement using adaptive retinal mechanisms,’’ IEEE Trans. restoration,’’ Int. J. Pattern Recognit. Artif. Intell., vol. 34, no. 3, Mar. 2020,
Image Process., vol. 28, no. 11, pp. 5580–5595, Nov. 2019. Art. no. 2054007.
[26] S. Wang, K. Gu, S. Ma, W. Lin, X. Liu, and W. Gao, ‘‘Guided image [50] X. Liu, Z. Gao, and B. M. Chen, ‘‘MLFcGAN: Multilevel feature fusion-
contrast enhancement based on retrieved images in cloud,’’ IEEE Trans. based conditional GAN for underwater image color correction,’’ IEEE
Multimedia, vol. 18, no. 2, pp. 219–232, Feb. 2015. Geosci. Remote Sens. Lett., vol. 17, no. 9, pp. 1488–1492, Sep. 2020.
[27] H. Xu, G. Zhai, X. Wu, and X. Yang, ‘‘Generalized equalization model for [51] J. Y. Chiang and Y.-C. Chen, ‘‘Underwater image enhancement by wave-
image enhancement,’’ IEEE Trans. Multimedia, vol. 16, no. 1, pp. 68–82, length compensation and dehazing,’’ IEEE Trans. Image Process., vol. 21,
Jan. 2013. no. 4, pp. 1756–1769, Apr. 2012.
[52] S. Negahdaripour and A. Sarafraz, ‘‘Improved stereo matching in scattering media by incorporating a backscatter cue,’’ IEEE Trans. Image Process., vol. 23, no. 12, pp. 5743–5755, Dec. 2014.
[53] S. Zhang, T. Wang, J. Dong, and H. Yu, ‘‘Underwater image enhancement via extended multi-scale Retinex,’’ Neurocomputing, vol. 245, pp. 1–9, Jul. 2017.
[54] D. J. Jobson, Z.-U. Rahman, and G. A. Woodell, ‘‘A multiscale Retinex for bridging the gap between color images and the human observation of scenes,’’ IEEE Trans. Image Process., vol. 6, no. 7, pp. 965–976, Jul. 1997.
[55] W. Zhang, L. Dong, X. Pan, J. Zhou, L. Qin, and W. Xu, ‘‘Single image defogging based on multi-channel convolutional MSRCR,’’ IEEE Access, vol. 7, pp. 72492–72504, 2019.
[56] C. O. Ancuti and C. Ancuti, ‘‘Single image dehazing by multi-scale fusion,’’ IEEE Trans. Image Process., vol. 22, no. 8, pp. 3271–3282, Aug. 2013.
[57] L. Zhang, Y. Gao, Y. Xia, Q. Dai, and X. Li, ‘‘A fine-grained image categorization system by cellet-encoded spatial pyramid modeling,’’ IEEE Trans. Ind. Electron., vol. 62, no. 1, pp. 564–571, Jan. 2015.
[58] X. Sun, P. L. Rosin, R. R. Martin, and F. C. Langbein, ‘‘Bas-relief generation using adaptive histogram equalization,’’ IEEE Trans. Vis. Comput. Graphics, vol. 15, no. 4, pp. 642–653, Aug. 2009.
[59] S. K. Wajid, A. Hussain, and K. Huang, ‘‘Three-dimensional local energy-based shape histogram (3D-LESH): A novel feature extraction technique,’’ Expert Syst. Appl., vol. 112, pp. 388–400, Dec. 2018.
[60] C.-Y. Li, J.-C. Guo, R.-M. Cong, Y.-W. Pang, and B. Wang, ‘‘Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior,’’ IEEE Trans. Image Process., vol. 25, no. 12, pp. 5664–5677, Dec. 2016.
[61] Y.-T. Peng, K. Cao, and P. C. Cosman, ‘‘Generalization of the dark channel prior for single image restoration,’’ IEEE Trans. Image Process., vol. 27, no. 6, pp. 2856–2868, Jun. 2018.
[62] D. Berman, D. Levy, S. Avidan, and T. Treibitz, ‘‘Underwater single image color restoration using haze-lines and a new quantitative dataset,’’ IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 8, pp. 2822–2837, Aug. 2021.
[63] D. Berman, T. Treibitz, and S. Avidan, ‘‘Diving into haze-lines: Color restoration of underwater images,’’ in Proc. Brit. Mach. Vis. Conf. (BMVC), vol. 1, no. 2, 2017, pp. 1–12.
[64] X. Fu, Z. Fan, M. Ling, Y. Huang, and X. Ding, ‘‘Two-step approach for single underwater image enhancement,’’ in Proc. Int. Symp. Intell. Signal Process. Commun. Syst. (ISPACS), Nov. 2017, pp. 789–794.
[65] S. Wang, K. Ma, H. Yeganeh, Z. Wang, and W. Lin, ‘‘A patch-structure representation method for quality assessment of contrast changed images,’’ IEEE Signal Process. Lett., vol. 22, no. 12, pp. 2387–2390, Dec. 2015.
[66] K. Panetta, C. Gao, and S. Agaian, ‘‘Human-visual-system-inspired underwater image quality measures,’’ IEEE J. Ocean. Eng., vol. 41, no. 3, pp. 541–551, Jul. 2016.
[67] M. Yang and A. Sowmya, ‘‘An underwater color image quality evaluation metric,’’ IEEE Trans. Image Process., vol. 24, no. 12, pp. 6062–6071, Dec. 2015.
[68] J. Kim, H. Joe, S. C. Yu, J. S. Lee, and M. Kim, ‘‘Time-delay controller design for position control of autonomous underwater vehicle under disturbances,’’ IEEE Trans. Ind. Electron., vol. 63, no. 2, pp. 1052–1061, Feb. 2016.
[69] A.-A. Saucan, T. Chonavel, C. Sintes, and J.-M. Le Caillec, ‘‘CPHD-DOA tracking of multiple extended sonar targets in impulsive environments,’’ IEEE Trans. Signal Process., vol. 64, no. 5, pp. 1147–1160, Mar. 2016.

SONGLIN JIN received the M.S. degree in computer application technology from Central China Normal University, in 2010. Currently, he is a Lecturer with the Henan Institute of Science and Technology. His research interests include machine learning and hyperspectral image processing.

PEIXIN QU received the M.S. degree in physical electronics from Belarusian State University, in 2007. Currently, he is an Associate Professor with the Henan Institute of Science and Technology. His research interests include the areas of machine learning and wireless sensor networks.

YING ZHENG received the M.S. degree in computer software and theory from Henan Normal University, in 2013. Currently, she is a Lecturer with the Henan Institute of Science and Technology. Her research interests include machine learning, deep learning applications, and agricultural informatization.

WENYI ZHAO is currently pursuing the Ph.D. degree with the School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China. His research interests include machine learning, deep learning, and machine vision.

WEIDONG ZHANG received the Ph.D. degree in information and communication engineering from the School of Information Science and Technology, Dalian Maritime University, Dalian, China, in June 2022. He is a Lecturer with the School of Information Engineering, Henan Institute of Science and Technology, Xinxiang, China. His current research interests include image processing, computer vision, and machine learning.