A Fast Image Dehazing Algorithm Based On Negative Correction
Article history: Received 2 July 2013; Received in revised form 27 November 2013; Accepted 18 February 2014; Available online 2 March 2014.

Keywords: Negative correction; Dehazing; Image enhancement; The dark channel prior; Maximum-filter.

Abstract

Dehazing is an important but difficult issue for image processing. Recently, many dehazing algorithms have been proposed based on the dark channel prior. However, these algorithms fail to achieve a good tradeoff between the dehazing performance and the computational complexity. Moreover, the perceptual quality of these algorithms can be further improved, especially for sky areas. Therefore, this paper firstly introduces the concept of negative correction inspired by the practical application of photographic developing, and a fast image dehazing algorithm is accordingly proposed. Based on the observation of photographic developing, we find that the contrast of images can be enlarged and their saturation can also be increased when their negative images (or reverse images) are rectified. Thus, instead of estimating the transmission map, the correction factor of the negative is estimated and it is used to rectify the corresponding haze images. In order to suppress halos, a modified maximum-filter is proposed to limit the larger values of the correction factor in a local region. The experimental results demonstrate that the proposed algorithm can effectively remove haze and maintain the naturalness of images. Moreover, the proposed algorithm can significantly reduce the computational complexity by 56.14% on average when compared with the state-of-the-art.
1. Introduction

Image restoration and enhancement is a classical research area but is still a hot topic. There are many reasons for image quality decline. Therefore, a lot of specialized methods [1–5] have been proposed to improve image quality. One source of degradation is the presence of haze. Haze is caused by suspended particles or water droplets in the atmosphere. The visibility of the scene is degraded due to a series of reactions, such as scattering, refraction, and absorption, between these particles or water droplets and the light from the atmosphere. With lowered contrast and faded color, the degraded dull images lose their visual vividness and appeal.

Image dehazing is an important issue in many scene understanding applications such as surveillance systems, intelligent vehicles, satellite imaging, aerial imagery [6], and the reliable identification of targets and extraction of features. However, image dehazing remains a challenge due to the unknown scene depth information. According to previous research, conventional contrast enhancement algorithms, such as histogram stretching and equalization, linear mapping, and gamma correction, fail to achieve an appropriate result and introduce color distortion, which cannot meet the requirements of practical applications [7].

Since the amount of scattering is a function of the distance, the degradation is spatially variant. In order to extract depth information for image dehazing, researchers have focused on image dehazing using additional information [8,9] or multiple images [10–15]. Recently, many single image haze removal algorithms [16–22] have been proposed by using reasonable priors/assumptions [23–25] and the widely used haze imaging model proposed in Refs. [13,14].
Tan et al. [23] proposed to maximize the local contrast of an input image to improve the image visibility, based on the observation that haze-free images (or enhanced images) have higher contrast than images plagued by bad weather. However, although the perceptual quality can be improved, the enhanced images fail to maintain the original color. Since haze-free images do not always have the maximum contrast, this algorithm may easily lead to image color over-saturation and distortion. The above conclusion can be observed from Fig. 1(b), wherein there is obvious distortion in the playground area.

Fig. 1. The technical limitation of the current dehazing algorithms. ((b) is from He's project page: http://research.microsoft.com/en-us/um/people/kahe/cvpr09/comparisons.html; (e) is from Fattal's project page: http://www.cs.huji.ac.il/~raananf/projects/defog/). (a) Original image, (b) Tan result [23], (c) Our result, (d) Original image, (e) Fattal result [24], (f) Our result, (g) Original image, (h) DCP result [25], (i) Our result, (j) Original image, (k) GDCP result [20], and (l) Our result.
Fattal et al. [24] firstly estimated the reflectivity of the scene and derived the transmission rate of light propagating in the air by assuming that the transmission and the target surface shading are partially uncorrelated. This algorithm requires that the independent components vary significantly. Dense haze regions (which lack variation) make this premise unreliable. Moreover, it needs enough color information, otherwise it cannot get a reliable transmission map, which will result in color distortion of the recovered image (as shown in Fig. 1(e)). Thus, it fails to deal with gray level images.

He et al. firstly proposed the dark channel prior (DCP) for image dehazing in Ref. [25], which is based on statistics of a large number of haze-free images. They found that low intensity pixels often exist in most local regions except the sky. According to these low intensity pixels, we can directly estimate the transmission map and achieve a significant perceptual quality improvement by combining the haze imaging model and the soft matting [26]. Although this algorithm yields state-of-the-art results for most haze images, the soft matting in Ref. [26] has a high computational complexity, and it cannot work well in some cases. Especially, the assumption is not suitable for the sky. Some improved algorithms [18–22] and generalization works [7,16] have been proposed based on this assumption.

Recently, He et al. [20] suggested applying the guided filter for image dehazing instead of soft matting to improve the dark channel prior (it is simply labeled as GDCP in this paper). This algorithm reduces the complexity. However, the guided filter can only obtain an approximate solution to the soft matting [26]; this may lead to image blurring, which can be observed from Fig. 1(k).

Therefore, in order to further improve the perceptual quality while reducing the computational complexity, this paper firstly introduces the concept of negative correction inspired by the practical application of photographic developing, and a fast image dehazing algorithm is accordingly proposed. Based on the observation of photographic developing, we find that the contrast of images can be enlarged and their saturation can also be increased when their negative images (or reverse images) are revised. Thus, instead of estimating the transmission map, the correction factor of the negative of images is estimated and it is used to rectify the corresponding haze images. In order to further suppress the slight halos, a modified maximum-filter is proposed to limit the maximum value of the correction factor in a local region. The experimental results demonstrate that the proposed dehazing algorithm can efficiently remove the haze. Moreover, when compared with DCP [25] and GDCP [20], the proposed algorithm can not only maintain most details, but also significantly reduce the computational complexity by 94.13% and 56.14% on average, respectively.

The paper is organized as follows. Starting with the descriptions of related work and negative correction by photographic developing in Section 2, we present the negative correction model and the proposed dehazing algorithm in Section 3. The proposed post-process for halo suppression is introduced in Section 4. Along with comparisons of results in Section 5, the performance of the proposed algorithm is evaluated. Some discussion is drawn in Section 6. The paper is concluded in Section 7, and Appendix A follows.

2. Related work and observation

In this section, we introduce the haze imaging model and analyze the dark channel prior in detail. Then, we introduce the concept of the negative correction, which was originally used in the practical application of photographic developing. Note that the proposed fast dehazing algorithm is based on the negative correction.

2.1. Haze imaging model and dark channel prior

Firstly, a color image can be denoted by a three-dimensional vector I(x,y), where (x,y) refers to a discrete spatial location. Although the physical mechanism of haze is sophisticated, the general hazy imaging model becomes simple when we only consider single scattering [13,14]. This model has been widely used in computer vision and it can be represented as follows:

I(x,y) = J(x,y) t(x,y) + A (1 − t(x,y))    (1)

where I is the observed image degraded by haze, J is the surface radiance or the haze-free image, A is the global atmospheric light, and t is the transmission describing the portion of light that is not scattered and reaches the camera along that ray. When the atmosphere is homogeneous, t(x,y) = e^(−β d(x,y)), where β is the scattering coefficient of the atmosphere and d is the scene depth. The first term J(x,y) t(x,y) on the right side is known as direct attenuation, and the second term A (1 − t(x,y)) is called air-light. Direct attenuation describes the scene radiance and its decay in the medium, while air-light results from previously scattered light and leads to the shift of the scene color. The basic idea of haze removal is to obtain J(x,y) by estimating A and t(x,y).
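As a minimal sketch of this inversion, assuming that A and t(x,y) have already been estimated (both are hypothetical inputs here), Eq. (1) can be solved for J(x,y); the small lower bound t0 is a common safeguard against near-zero transmission and is an assumption, not a value from this paper:

```python
import numpy as np

def recover_radiance(I, t, A, t0=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t) for J.

    I : float array (H, W, 3), hazy image scaled to [0, 1]
    t : float array (H, W), estimated transmission map
    A : float array (3,), estimated global atmospheric light
    t0: lower bound on t, to avoid amplifying noise where t -> 0
    """
    t = np.clip(t, t0, 1.0)[..., None]              # (H, W, 1) for broadcasting
    J = (I - A[None, None, :]) / t + A[None, None, :]
    return np.clip(J, 0.0, 1.0)
```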
developing and a fast image dehazing algorithm is accord- In order to estimate A and t(x,y), He et al. [25] proposed
ingly proposed. Based on the observation of the photo- the dark channel prior based on the observation of out-
graphic developing, we find that the contrast of images door haze-free images. The observation is that most of the
can be enlarged and their saturation can also be increased haze-free images contain some pixels which have rela-
when their negative images (or reverse image) are revised. tively low intensities in one color channel at least. For-
Thus, instead of estimating the transmission map, the mally, for an image J, its dark channel can be defined as
correction factor of negative of images is estimated and Idark ðx; yÞ ¼ min ð min I c ðx; yÞÞ ð2Þ
it is used to rectify the corresponding haze images. In ðx;yÞ A Ωððx;yÞ c A fR;G;Bg
order to further suppress the slight halo, a modified where Ic is a color channel of I and Ωðx; yÞ is a local patch
maximum-filter is proposed to limit the maximum value centered at (x,y).
of the correction factor in local region. The experimental For a haze-free image (or enhanced image) J, its dark
results demonstrate that the proposed dehazing algorithm
channel tends to be zero, namely J dark -0. This observation is
can efficiently remove the haze. Moreover, when com-
called the dark channel prior. According to the haze imaging
pared with DCP [25] and GDCP [20], the proposed algo-
model, the preliminary transmission can be simply esti-
rithm can not only maintain the most details, but also
mated by the dark channel prior, tðx; yÞ ¼ 1 minð min
significantly reduce the computational complexity by c ðx;yÞ A Ωðx;yÞ
94.13% and 56.14% on average, respectively. ðI c ðx; yÞ=AÞÞ. Picking the top 0.1% brightest pixels in the dark
The paper is organized as follows. Starting with the channel, the pixels with the highest intensity in the input
descriptions of related work and negative correction by image can represent the global atmospheric light A [25].
photographic developing in Section 2, we present the In fact, assuming constant transmission in a local
negative correction model and the proposed dehazing patch is imprecise. The edge information of the
algorithm in Section 3. The proposed post-process for obtained transmission map may be seriously lost. The
halos suppression is introduced in Section 4. Along with preliminary transmission is coarse and has severe arti-
comparisons of results in Section 5, the performance of the facts. Therefore, in order to improve the accuracy and
proposed algorithm is evaluated. Some discussion will be capture the depth changes at object edges, we need to
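The following sketch illustrates the estimation steps described above (dark channel, atmospheric light from the top 0.1% brightest dark-channel pixels, and the preliminary transmission); the patch size and the function names are illustrative assumptions rather than values taken from this paper:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Dark channel: per-pixel minimum over RGB, then minimum over a local patch."""
    min_rgb = I.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_atmospheric_light(I, dark):
    """Average the input pixels at the top 0.1% brightest dark-channel positions."""
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n:]
    return I.reshape(-1, 3)[idx].mean(axis=0)

def preliminary_transmission(I, A, patch=15, omega=1.0):
    """t = 1 - omega * dark_channel(I / A); DCP implementations often use omega = 0.95."""
    return 1.0 - omega * dark_channel(I / A[None, None, :], patch)
```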
Recently, He et al. [20] proposed the guided filter to refine the coarse transmission map and speed up the dehazing process. Since GDCP is based on the guided filter and the guided filter can only obtain an approximate solution to the soft matting in Ref. [26], it fails to achieve a comparable result when compared with the algorithm proposed in Ref. [25]. Moreover, from the point of view of texture details, the dehazing performance of GDCP is degraded.

Fig. 2. A simple simulation of negative correction with a gray image. (a) Negative of original image, (b) Adjusted negative by gamma transformation, (c) Adjusted negative by logarithmic transformation, (d) Adjusted negative by linear transformation, (e) Original image, (f) Adjusted image by (b), (g) Adjusted image by (c), (h) Adjusted image by (d).

Fig. 3. Negative linear transformation results with color haze images. (a–e) Original images, (f–j) Negatives, (k–o) Adjusted negatives, (p–t) Dehazed images.
2.2. Observations of the negative correction by photographic developing

Originally, image enhancement technology was first applied in photographic developing, with only the optical image itself. The basic process of photographic developing enhancement lies in getting appropriate information from the negative, and its basic idea was obtained from two experimental and observational discoveries. One is that the densities can be adjusted by applying more or less development to the negative [27]. The other is that the development modification has its primary effect on the high values (light areas) [28]. Therefore, the degree of negative development can be controlled to a considerable extent for interpretive and expressive objectives [29], which means that the quality of the print image can be enhanced by adjusting the degree of negative development. The control of negative development is called the negative correction.

It is well known that when light strikes the emulsion, a change occurs in the microscopic crystals of silver-halide. The crystals that are struck by light are altered so that, when developed, they form crystals of black metallic silver. As a result, the areas of the frame that are struck by light are transformed into dark black metallic-silver areas on the negative. If there is a heavy deposit, the area is dense (black). Since the negative is the reverse of the final image in the print, the dense black area in the negative becomes white in the print. Both over-exposed images and haze images have this problem [29]. Obviously, normal-minus development (a lighter negative) is required. Reduction is used to improve negatives that are dense [29]. The process of reduction consists of converting part of the silver image into a compound which can be dissolved. There are several types of reducers, and most of them are proportional reducers: they remove silver from the negative in direct proportion to the total amount [33].

Under the inspiration of this process, we increase the brightness of the haze image negative by gamma transformation, logarithmic transformation and linear transformation for simulation. Fig. 2 provides an example with a gray image. Fig. 2(e) and (a) shows one original image with haze and the negative of this image, respectively. Fig. 2(b) and (c) shows the negatives adjusted by gamma transformation (gamma = 0.6) and logarithmic transformation. Fig. 2(d) represents the negative whose brightness is proportionally increased by 90%. Obviously, the linear transformation gets a better result.

The photographic developing enhancement can be easily implemented for gray images, but it cannot produce a satisfactory result for color images due to complex chemical changes. In terms of digital images, since they have operational and inclusion-free negatives, both gray images and color images can be easily enhanced. The observation has been validated on a large set of color images. We illustrate this in Fig. 3.
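A minimal sketch of the simulation described above, applying the three negative adjustments (gamma, logarithmic, linear) to an 8-bit gray image and inverting back: the parameter values mirror those quoted for Fig. 2 (gamma 0.6, linear gain 1.9 for a 90% brightness increase), while the scaling of the logarithmic curve is an assumption:

```python
import numpy as np

def adjust_negative(gray, mode="linear", gamma=0.6, gain=1.9):
    """Brighten the negative of an 8-bit gray image and return the corrected image."""
    neg = 255.0 - gray.astype(np.float64)                 # negative of the input
    if mode == "gamma":
        adj = 255.0 * (neg / 255.0) ** gamma              # gamma transformation
    elif mode == "log":
        adj = 255.0 * np.log1p(neg) / np.log1p(255.0)     # logarithmic transformation (assumed scaling)
    else:                                                 # linear: brightness increased by (gain - 1) * 100%
        adj = gain * neg
    adj = np.clip(adj, 0.0, 255.0)
    return np.clip(255.0 - adj, 0.0, 255.0).astype(np.uint8)   # back to the positive image
```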
Fig. 4. Illustration of the negative correction with a 1-D curve. (a) Original image, (b) Adjusted negative by gamma transformation (gamma = 0.6), (c) Adjusted negative by gamma transformation (gamma = 0.2), (d) Adjusted negative by logarithmic transformation, (e) Negative proportionally increased by 40%, and (f) Negative proportionally increased by 60%.
2.3. Analysis of the negative correction

A simple 1-D example is provided in Fig. 4 to explain the correction of the negative. In Fig. 4, the solid lines represent the original haze image and the enhanced images, and their negatives are represented by the dotted lines accordingly. It can be observed in Fig. 4(a) that the original haze image (solid line) has a relatively low contrast. We increase the brightness of the negative so that the darkest point of the original image reaches a very small value, by gamma transformation, logarithmic transformation and linear transformation respectively, as shown in Fig. 4(b)–(f). From Fig. 4(e) and (f) we can see that the contrast is enhanced with a linear increase of the negative brightness, and the larger the proportion is, the more the contrast is enlarged (the curve becomes steeper). However, the gamma transformation and the logarithmic transformation do not have such properties, as shown in Fig. 4(b)–(d).

From an intuitive point of view, increasing the negative brightness by a linear transformation is equivalent to removing a percentage of the negative from the original image. The higher the original image brightness is, the lower the negative brightness is; so this process can expand the image brightness (gray value) differences and can improve the contrast. Similarly, from the perspective of a single pixel, the higher the channel value of the original image is, the lower the channel value of the negative is. As a result, this process can reduce the proportion of the minimum channel and can improve the saturation. This has been proved by converting to the HSI space; the proof can be found in Appendix A.
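As a small numerical check of the saturation claim (the pixel values below are arbitrary examples, not taken from the paper), subtracting a fraction of the negative lowers the relative weight of the minimum channel, so the HSI saturation Sat = 1 − 3·min(r,g,b)/(r+g+b) increases:

```python
import numpy as np

def hsi_saturation(rgb):
    rgb = np.asarray(rgb, dtype=np.float64)
    return 1.0 - 3.0 * rgb.min() / rgb.sum()

pixel = np.array([200.0, 180.0, 160.0])     # an arbitrary hazy pixel
negative = 255.0 - pixel                    # (55, 75, 95)
corrected = pixel - 0.5 * negative          # remove half of the negative

print(hsi_saturation(pixel))       # ~0.111
print(hsi_saturation(corrected))   # ~0.211, saturation increased
```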
3. The proposed dehazing algorithm based on negative correction

In this section, we first establish a basic negative correction model. With this negative correction model and the observation above, we enhance the haze image by assuming a simple local linear relationship without a constant term.

Fig. 6. Comparison of the halo region between DCP and the proposed algorithm. (a) Original image, (b) Halo-effect of DCP, and (c) Halo-effect of our algorithm.

Fig. 7. Comparison of intermediate results between DCP and the proposed algorithm. (a) Original image, (b) DCP result without soft matting, and (c) Our result without post-process.
Let I(x,y) and Ī(x,y) denote a color image with three RGB channels and the negative of this color image, respectively. Then, we have

Ī^c(x,y) = W − I^c(x,y),  c ∈ {R, G, B}    (3)

where I^c(x,y) is a color channel of I(x,y), and W is a matrix of the same size as I^c(x,y) whose elements are all equal to 255.

Let J(x,y) denote an enhanced image. Thus, the process of negative correction can be represented as

J^c(x,y) = W − F(Ī^c(x,y)),  c ∈ {R, G, B}    (4)

where F is a transformation function to adjust the negative; the adjustment of one channel is shown in Fig. 5. Obviously, it is important but difficult to establish a suitable transformation function F. From the analysis above, we know that increasing the negative brightness linearly helps to improve the contrast and the saturation to achieve dehazing effects.
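As a minimal sketch of Eqs. (3)–(4) with a linear adjustment F(Ī) = α·Ī applied globally (the function name and the example value of α are assumptions; the patch-wise choice of the factor is developed below):

```python
import numpy as np

def negative_correction_global(I, alpha=1.4):
    """Apply J = W - alpha * (W - I) channel-wise, with W = 255 (Eqs. (3)-(4), linear F)."""
    I = I.astype(np.float64)
    neg = 255.0 - I                 # Eq. (3): negative of each channel
    J = 255.0 - alpha * neg         # Eq. (4): corrected image, equals I - (alpha - 1) * neg
    return np.clip(J, 0.0, 255.0).astype(np.uint8)
```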
Fig. 8. Comparison results on a noisy image. (a) Haze image with manually added random noise, (b) DCP result without soft matting, and (c) Our result without post-process.

Fig. 9. An example of the halo-effect. (a) The minimum color channel of the original image, (b) The negative of the minimum color channel, and (c) The minimum color channel of the enhanced image.

Fig. 10. Comparison of experimental results without and with the proposed post-process (natural scenery image). (a) Original image, (b) Enhanced image without post-process, (c) Enhanced image with the proposed post-process, (d) Negative of the original image, (e) Negative of the enhanced image without post-process, and (f) Negative of the enhanced image with the proposed post-process.
Since the depth is not uniform, in order to get better results, we assume that there is a local linear model in the three color channels between the negative of the input image Ī(x,y) and the negative of the enhanced image J̄(x,y). With this assumption, we have

J̄^c_k(x,y) = α_k Ī^c_k(x,y),  (x,y) ∈ w_k,  c ∈ {R, G, B}    (5)

where w_k is a local patch and α_k is a factor, α_k > 1.

In fact, (5) can be rewritten by combining it with (3):

J^c_k(x,y) = I^c_k(x,y) − (1/β_k) Ī^c_k(x,y),  c ∈ {R, G, B}    (6)

where β_k is the correction factor and 1/β_k = α_k − 1, so 1/β_k > 0.

Let min_c(J^c_k(x,y)) denote the minimal component of J_k(x,y). It is obvious that min_c(J^c_k(x,y)) is no less than zero. Then the calculation of the parameter β_k can be simplified to obtaining an appropriate parameter in the solution space under the following conditions:

J^c_k(x,y) = I^c_k(x,y) − (1/β_k) Ī^c_k(x,y),  1/β_k > 0,
min_c( min_{(x,y) ∈ w_k} J^c_k(x,y) ) ≥ 0,  c ∈ {R, G, B}    (7)

Let I^min(x,y) represent the minimum color channel. Then (7) can be rewritten as

J^min_k(x,y) = I^min_k(x,y) − (1/β_k) Ī^min_k(x,y),  1/β_k > 0,
min_{(x,y) ∈ w_k} ( J^min_k(x,y) ) ≥ 0    (8)

It is clear that the bigger 1/β_k is, the larger the contrast is. This conclusion has been analyzed in the previous section. Theoretically, 1/β_k should be the maximum in the solution space to obtain a better perceptual quality, which is attained when the darkest pixel becomes 0. However, the darkest pixel is not always exactly equal to 0, so it is not necessary for 1/β_k to be the actual maximum. To improve the robustness of this algorithm, we search for the optimal value of β_k by gradually increasing 1/β_k with a short interval. In this paper, the interval is set to 0.2.

Meanwhile, the value of 1/β_k should be limited to avoid over-enhancement when the scene object is very bright itself (e.g., snowy ground, sky or a white wall). The dark channel prior is not reliable at all in these areas. According to our experiments, the maximum of 1/β_k should be limited to a range between 1.5 and 2. In this paper, the maximum of 1/β_k is set to 1.5 to improve the perceptual quality in these areas.
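The patch-wise search can be sketched as follows: for each non-overlapping patch, 1/β_k is increased in steps of 0.2 (capped at 1.5) while the non-negativity condition in (8) still holds. The function name and the patch size are illustrative, and the sketch uses the strict all-pixels condition rather than the 95% relaxation discussed below; the resulting coarse factor map is later resized to the image size (the paper uses bicubic interpolation) and applied through Eq. (6).

```python
import numpy as np

def search_correction_factors(I, patch=16, step=0.2, max_inv_beta=1.5):
    """Return a coarse map of 1/beta_k, one value per non-overlapping patch (Eq. (8))."""
    I_min = I.astype(np.float64).min(axis=2)        # minimum color channel
    neg_min = 255.0 - I_min                         # its negative
    H, W = I_min.shape
    inv_beta = np.zeros((int(np.ceil(H / patch)), int(np.ceil(W / patch))))
    for i in range(inv_beta.shape[0]):
        for j in range(inv_beta.shape[1]):
            blk = (slice(i * patch, (i + 1) * patch), slice(j * patch, (j + 1) * patch))
            lam = 0.0
            while lam + step <= max_inv_beta and np.all(
                    I_min[blk] - (lam + step) * neg_min[blk] >= 0.0):
                lam += step                         # keep increasing while J_min stays >= 0
            inv_beta[i, j] = lam
    return inv_beta
```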
Fig. 11. Comparison of experimental results without and with the proposed post-process (city scenery image). (a) Original image, (b) Enhanced image without post-process, (c) Enhanced image with the proposed post-process, (d) Negative of the original image, (e) Negative of the enhanced image without post-process, and (f) Negative of the enhanced image with the proposed post-process.
Note that the proposed algorithm works on non-overlapping patches by subtracting between an image and its transformed negative, which is different from DCP. This process can partly overcome the halo problem of DCP. Fig. 6(a) shows the original image, where the black blocks represent the edge. If the patch size is 10×10, a large halo will be produced due to the repeated scanning of the darkest pixel by DCP. The possible halo region is shown in Fig. 6(b). Our algorithm obviously reduces the halo region based on non-overlapping patches, as Fig. 6(c) shows. As the size of the β matrix is [M/m, N/n], we use bicubic interpolation to resize it, where m and n are the sizes of the patches, and M and N are the width and height of the input image. The intermediate results are shown in Fig. 7. It is obvious that the proposed algorithm contains fewer halos and block artifacts compared with DCP. Moreover, the slight halos of the proposed algorithm can be suppressed through the post-process, which will be discussed in Section 4. In addition, the proposed algorithm can significantly reduce the computational complexity due to adopting non-overlapping patches and is suitable for parallelization, while DCP has a higher complexity with overlapping patches. The detailed comparison of the computational complexity is given in Section 5.

Furthermore, DCP is not immune to noise, while the proposed algorithm is robust to noise when we relax the restriction of (7) by requiring 95% of the pixels (rather than all the pixels) of one patch to meet the restriction of (7). The robustness comparison between the proposed algorithm and DCP is shown in Fig. 8, wherein some random noise is explicitly added to an original haze image. As shown in Fig. 8(b), DCP results in serious halos around each noise point. However, the proposed algorithm can achieve good results whether there is noise or not, as shown in Fig. 8(c). This intermediate experiment demonstrates the robustness of the proposed algorithm.
Fig. 12. Experimental result of DCP with MMF. (a) Original image, (b) DCP preliminary result, and (c) DCP with MMF.
Fig. 13. Experimental result comparison for city image. (a) Original image, (b) CLAHE result, (c) MSRCR result, (d) Fattal result, (e) DCP result, (f) Tarel result,
(g) Eriksson result, (h) GDCP result, and (i) Our result.
4. The proposed modified maximum-filter for halo suppression

The above-mentioned dehazing algorithm based on negative correction can effectively remove haze, but the enhanced image may still contain halos. Since the algorithm works on patches, β may be larger for some areas in a patch where the depth changes abruptly or which contains an object contour. Halos are obvious within such regions. An example is given in Fig. 9.

In order to make the darkest pixel become low, the search result of β is 2.8 according to the proposed algorithm above. The minimum color channel of the enhanced image is then as shown in Fig. 9(c). Since β is larger for the left part, its pixels still have high values. This part is likely to have halos, which can be observed from Figs. 10 and 11(b). Therefore, an original filter named the modified maximum filter (MMF) is introduced as the post-process to suppress the larger values of the local region. The weights of the values which are greater than the center value are set to s (s < 1), and the weights of the other values located in the filter window are set to 1. The smaller s is, the stronger the suppressive effect is, and vice versa. The value of s can be determined according to different applications, but it is set to 0.3 in this paper according to the experimental results. When the depth does not change significantly, the filter approximates the mean filter. The weights are fixed, and as a result, MMF is faster than the joint bilateral filter [30–32], which needs to calculate the weights based on spatial and range filter kernels. Although the proposed post-process is relatively simple, it can effectively suppress the slight halos for objects of different sizes. The experimental results are shown in Figs. 10 and 11(c).

Note that the MMF can also be applied to DCP. Since DCP has a wider halo range, it needs a larger filter radius, and the speed will be slower than for our algorithm. An example is shown in Fig. 12.
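A minimal sketch of the modified maximum filter as described above: inside each window, neighbors greater than the center value get weight s and all other values get weight 1, and the center is replaced by the weighted average. The window radius, and applying the filter to a single-channel map (e.g., the correction-factor map or a channel of the enhanced image), are assumptions of this sketch rather than details given in the paper:

```python
import numpy as np

def modified_maximum_filter(img, radius=2, s=0.3):
    """Weighted local mean: values greater than the window's center are down-weighted by s."""
    img = img.astype(np.float64)
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            weights = np.where(win > img[y, x], s, 1.0)   # suppress values above the center
            out[y, x] = (weights * win).sum() / weights.sum()
    return out
```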
Fig. 14. Experimental result comparison for campus image. (a) Original image, (b) CLAHE result, (c) MSRCR result, (d) Fattal result, (e) DCP result,
(f) Tarel result, (g) Eriksson result, (h) GDCP result, and (i) Our result.
5. Experimental results

In this section, the performance of the proposed algorithm is fully evaluated by comparing it with seven image dehazing algorithms. One of them is a traditional algorithm, namely Contrast Limited Adaptive Histogram Equalization (labeled as CLAHE) [33]. Two of them are based on traditional algorithms and are used for dehazing by the National Aeronautics and Space Administration (NASA) and the Department of Electrical and Computer Engineering, University of Wisconsin–Madison, namely Multi-Scale Retinex with Color Restoration (MSRCR) [34] and Eriksson [35]. The other four dehazing algorithms, namely Fattal [24], DCP [25], Tarel [18], and GDCP [20], were proposed recently. The major parameters of the compared algorithms are set as follows. MSRCR: the spatial extents of the Gaussian function are 30, 80, and 250. Tarel: the filtering radius is computed as 2⌊max(h,w)/50⌋+1, where h and w are the height and width of the image. GDCP: the filtering radius is 60, and the smoothing threshold is 10^−6. All other parameters are consistent with the original papers.

The test dataset consists of 200 images with different scenes. Due to the space limitation, this section presents seven representatives, including four city scenery images (city, campus, street, and road) and three natural scenery images (mountain, park, and tree). We first test the algorithms from the subjective aspect, and then perform objective assessment by using the information entropy [36] and the contrast descriptor. The overall contrast is measured by taking the mean of regional contrasts computed by the Michelson contrast (max(I) − min(I))/(max(I) + min(I) + 0.01) [37] based on gray-scale intensity.
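A small sketch of the two objective measures described above (information entropy of the gray-level histogram, and the mean of patch-wise Michelson contrasts); the patch size and the exact placement of the 0.01 stabilizing term are assumptions:

```python
import numpy as np

def information_entropy(gray):
    """Shannon entropy of the 8-bit gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mean_michelson_contrast(gray, patch=32):
    """Mean of regional Michelson contrasts (max - min) / (max + min + 0.01) over a patch grid."""
    gray = gray.astype(np.float64)
    contrasts = []
    for y in range(0, gray.shape[0], patch):
        for x in range(0, gray.shape[1], patch):
            blk = gray[y:y + patch, x:x + patch]
            contrasts.append((blk.max() - blk.min()) / (blk.max() + blk.min() + 0.01))
    return float(np.mean(contrasts))
```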
Fig. 15. Experimental result comparison for the street image. (a) Original image, (b) CLAHE result, (c) MSRCR result, (d) Fattal result, (e) DCP result, (f) Tarel result, (g) Eriksson result, (h) GDCP result, and (i) Our result.

Fig. 16. Experimental result comparison for the road image. (a) Original image, (b) CLAHE result, (c) MSRCR result, (d) Fattal result, (e) DCP result, (f) Tarel result, (g) Eriksson result, (h) GDCP result, and (i) Our result.

5.1. Subjective comparison

Since Eriksson, CLAHE and MSRCR do not consider the causes of the degradation, they focus on enhancing the image visibility. They are prone to color distortion or over-enhancement, as shown in Figs. 15(g) and 16(b). In addition, they are likely to dehaze incompletely, such as in Fig. 13(b)–(c).

The Fattal algorithm needs enough image color information and sufficient variance. So when the haze is dense and the color becomes pale, this algorithm cannot reliably estimate the transmission, as Figs. 13–15(d) show. The proposed algorithm gives better results in these areas.

Tarel attempts to obtain the dark channel through the median filter. As the median filter is not conformal and edge-preserving, it does not adhere to the depth information of the scene. In some tiny edge regions, the desirable dehazing results cannot be achieved, as shown in Figs. 13–19(f).

DCP preserves the naturalness well, but the color of some local areas where the transmissions are small may dramatically change after recovery, such as the areas marked by rectangles in Figs. 14 and 15(e). The guided filter can only obtain an approximate solution to the soft matting, so when compared with DCP, GDCP fails to achieve a comparable result. The image will be blurred, as shown in Fig. 14(h). In addition, there may also be residual block effects, as Fig. 15(h) shows. The detailed comparison with GDCP is shown in Fig. 20.

Next, in Fig. 21, we illustrate another set of images with the sky. It is obvious that the proposed algorithm can achieve a good perceptual quality in sky areas and can maintain a relatively high brightness and smoothness of the sky areas. However, since DCP and GDCP completely trust the dark channel prior in all regions, both of them fail to achieve an appropriate perceptual quality in the sky areas.

In addition, the proposed algorithm also works for gray-scale images. In this case, we set the three channels to have the same value. As shown in Fig. 22(d), the proposed algorithm can enhance the details of gray-scale haze images and can achieve clearer results than DCP and GDCP in local areas (as shown in the red rectangle).
Fig. 17. Experimental result comparison for the mountain image. (a) Original image, (b) CLAHE result, (c) MSRCR result, (d) Fattal result, (e) DCP result, (f) Tarel result, (g) Eriksson result, (h) GDCP result, and (i) Our result.

5.2. Objective evaluation

As subjective assessment depends on the human visual system, it is hard to find an objective measure that is in accordance with the subjective assessment. Objective assessment is often used to explain some important characteristics of the image [38,39]. Apart from the computational complexity comparison, we assess the detail enhancement through the information entropy and the contrast descriptor.

From Section 5.1, we have seen that only DCP, GDCP and the proposed algorithm can obtain satisfactory results on various haze images. Thus, we evaluate the quality of these three algorithms by objective assessment. We run the experiment with various image sizes and select 3 images for each size (pictures may be cut for testing), and record the minimum time in Table 1 (the time reported in the paper is from a MATLAB implementation on a PC with an Intel(R) Core(TM) i5-3230M CPU @ 2.60 GHz and 8 GB RAM). Compared with DCP and GDCP, the proposed algorithm significantly reduces the computational complexity by 94.13% and 56.14% on average, respectively.

Information entropy is a statistical measure of randomness. A higher entropy value generally indicates more details. Table 2 shows the information entropy of the seven groups of images and the image Bridge. From Table 2, we can see that the proposed algorithm outperforms DCP and GDCP. DCP is worse than GDCP according to the quantitative evaluation. The reason may be that some inconspicuous places become completely black or completely white after restoration.

Table 3 demonstrates the quantitative measure of contrast. Michelson contrast is typically used for periodic patterns and textures. In general, a greater contrast corresponds to a higher visibility. From Table 3 we can see that the proposed algorithm is at almost the same level as DCP in improving the contrast, and DCP gets the highest visibility. The tiny difference is partly due to over-enhanced results in which the color dramatically changes after recovery in some flat high-brightness regions, as shown in Figs. 14 and 15(e). Moreover, the proposed algorithm is obviously better than GDCP.

In summary, the proposed algorithm can not only effectively and efficiently remove haze, but can also maintain the naturalness of images without halos. Moreover, when compared with DCP and GDCP, the proposed algorithm can significantly reduce the computational complexity by 94.13% and 56.14% on average, respectively.
Fig. 18. Experimental result comparison for the park image. (a) Original image, (b) CLAHE result, (c) MSRCR result, (d) Fattal result, (e) DCP result, (f) Tarel result, (g) Eriksson result, (h) GDCP result, and (i) Our result.

6. Discussion

In this section, we will illustrate the advantages of the proposed algorithm. The relationship between the proposed algorithm and DCP will also be explained. Then the limitations of the proposed algorithm are analyzed.

6.1. The comparison between DCP and the proposed algorithm

One disadvantage of DCP is its inability to properly preserve edges, which is caused by working on patches. A dark pixel is scanned repeatedly, and it will affect an area. In order to recover the refined transmission map, this patch-based algorithm requires a complex post-process. However, the proposed algorithm works on non-overlapping patches by subtracting between an image and its transformed negative, which is completely different from DCP. This can significantly reduce the computational complexity and can partly overcome the halo problem. In order to suppress residual halos, we analyze the cause of halos and propose the modified maximum-filter, which can also be applied to DCP. This post-process requires a smaller amount of calculation than that of DCP and is more suitable for practical applications.

When compared with DCP [25], the proposed algorithm carries out a different process to enhance hazy images and can achieve better results according to the experimental results. Note that the proposed algorithm implicitly follows the dark channel prior. DCP calculates the dark channel first and then obtains the transmission. In contrast, the proposed algorithm approaches the dark channel prior by varying the correction factor. If the proposed algorithm does not restrict the range of the parameters, the dark channel of the enhanced image tends to zero. From this point of view, the proposed algorithm is inherently related to DCP.

6.2. The discussion about the correction factor

To avoid over-enhancement, the correction factor β_k should be limited. We suggest that the maximum of 1/β_k should be set to a value between 1.5 and 2. In this paper, the maximum of 1/β_k is set to 1.5 according to the experimental results. Fig. 23 shows results with different parameter restrictions.

This means that a certain amount of haze is kept in high-brightness regions. The higher the brightness is, the more haze is reserved (the greater I(x,y) is, the smaller Ī(x,y) is). This is because the brighter I(x,y) is, the more haze-opaque the region is and the farther away it generally is [23]. If we remove the haze thoroughly in distant regions, the image may lose the feeling of depth and seem unnatural. More importantly, the sky region will certainly be over-enhanced. Therefore, the sky region can maintain high brightness and smoothness by restricting 1/β_k. It should be noted that other white areas will also retain a certain amount of haze, but the haze removal is not critical in these areas. Moreover, this parameter can be flexibly adjusted according to specific applications.
Fig. 19. Experimental result comparison for the tree image. (a) Original image, (b) CLAHE result, (c) MSRCR result, (d) Fattal result, (e) DCP result, (f) Tarel result, (g) Eriksson result, (h) GDCP result, and (i) Our result.

6.3. Limitations of the proposed algorithm

Although the proposed algorithm performs well for most haze images, it has two limitations. The first limitation is that the proposed algorithm is suitable for haze images rather than fog images. Actually, since fog imaging is complicated, most current research focuses on dehazing and cannot perform well for fog images either. In order to resolve this limitation, the characteristics of fog images should be fully exploited and a reasonable fog imaging model should be established based on statistics and observation. The other limitation is that the proposed algorithm fails to achieve appropriate results for non-uniform illumination images. When the atmospheric light is not a constant, especially when there is strong sunlight in the sky areas, the result may not be satisfactory. Note that this limitation is shared by other single-image dehazing algorithms, such as DCP and GDCP. The experimental result is shown in Fig. 24. However, this limitation may be resolved by combining the proposed algorithm with our previous work [40], in which an enhancement algorithm for non-uniform illumination images was proposed to preserve naturalness while enhancing details. One of our future works is to propose a comprehensive dehazing algorithm by integrating our previous works.

7. Conclusion

In this paper, a fast image dehazing algorithm is proposed based on negative correction to improve the perceptual quality while reducing the computational complexity. Instead of estimating the transmission map, the correction factor of the negative of images is estimated and it is used to rectify the corresponding haze images. In order to suppress halos, a modified maximum-filter is proposed to limit the value of the correction factor in a local region. The experimental results demonstrate that the proposed algorithm can effectively remove haze and preserve naturalness when compared with current dehazing algorithms. Especially, when compared with the state-of-the-art DCP and GDCP, the proposed algorithm can improve both the subjective quality and the objective quality. Although the proposed algorithm performs well for haze images, it fails to achieve appropriate results for fog images and non-uniform illumination images. This limitation is the same as for other single image dehazing algorithms. In order to further improve the performance, a comprehensive dehazing algorithm can be realized by integrating our previous works. This will be one of our future works.
Fig. 20. Comparison results of details for the bridge image. (a) Original image, (b) GDCP result, and (c) Our result.

Fig. 21. Comparison results in sky areas. (a) Original image, (b) DCP result, (c) GDCP result, and (d) Our result.

Fig. 22. Comparison results on a gray-scale image. (a) Gray-level image, (b) DCP result, (c) GDCP result, and (d) Our result. (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)

Table 1. Computational complexity comparison (columns: Resolution; Images; DCP time (s); GDCP time (s); Our algorithm: Time (s), Compared with DCP (%), Compared with GDCP (%)).

Table 2. Quantitative measurement results of information entropy.
Acknowledgment

This work was partially supported by the National Science Fund for Distinguished Young Scholars (No. 61125206), the 973 Program (Project no. 2010CB327900), the National Natural Science Foundation of China (NSFC, No. 61370121), and Outstanding Tutors for doctoral dissertations of S&T project in Beijing (No. 20131000602).

Table 3. Quantitative measurement results of contrast.
Fig. 23. Results with different parameter restrictions. (a) Original image, (b) 1/βk ∈ (0, 1.5], (c) 1/βk ∈ (0, 2], and (d) 1/βk ∈ (0, 3].

Fig. 24. The limitation of single image dehazing algorithms. (a) Original image, (b) DCP result, (c) GDCP result, and (d) Our result.
Appendix A

Since 0 ≤ b ≤ g ≤ r ≤ 255 and 0 ≤ r̄ ≤ ḡ ≤ b̄ ≤ 255, we have

(b − b̄) / ((r − r̄) + (g − ḡ) + (b − b̄)) ≤ b / (r + g + b)

Therefore

Sat′ = 1 − 3(b − b̄) / ((r − r̄) + (g − ḡ) + (b − b̄)) ≥ Sat = 1 − 3b / (r + g + b)

References

[1] R.M. Yan, L. Shao, Y. Liu, Nonlocal hierarchical dictionary learning using wavelets for image denoising, IEEE Trans. Image Process. 22 (12) (2013) 4689–4698.
[2] L. Shao, H. Zhang, G. de Haan, An overview and performance evaluation of classification-based least squares trained filters, IEEE Trans. Image Process. 17 (10) (2008) 1772–1782.
[3] L. Shao, R. Yan, X. Li, Y. Liu, From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms, IEEE Trans. Cybern., http://dx.doi.org/10.1109/TCYB.2013.2278548, in press.
[4] M. Maggioni, G. Boracchi, A. Foi, K. Egiazarian, Video denoising, deblocking, and enhancement through separable 4-d nonlocal spatiotemporal transforms, IEEE Trans. Image Process. 21 (9) (2012) 3952–3966.
[5] T.S. Cho, C.L. Zitnick, N. Joshi, S.B. Kang, R. Szeliski, W.T. Freeman, Image restoration by matching gradient distributions, IEEE Trans. Pattern Anal. Mach. Intell. 34 (4) (2012) 683–694.
[6] G.A. Woodell, D.J. Jobson, Z. Rahman, G.D. Hines, Advanced image processing of aerial imagery, Vis. Inf. Process. XV 6246 (2006) 62460E.
[7] C.O. Ancuti, C. Ancuti, C. Hermans, P. Bekaert, A fast semi-inverse approach to detect and remove the haze from a single image, in: Proceedings of the Asian Conference on Computer Vision, 2010, pp. 501–514.
[8] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, D. Lischinski, Deep photo: model-based photograph enhancement and viewing, ACM Trans. Gr. 27 (5) (2008) 116.
[9] S. Narasimhan, S. Nayar, Interactive weathering of an image using physical models, IEEE Workshop Color Photom. Methods Comput. Vis. 6 (6.4) (2003) 1.
[10] Y. Schechner, S. Narasimhan, S. Nayar, Instant dehazing of images using polarization, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2001, pp. 325–332.
[11] Y. Schechner, S. Narasimhan, S. Nayar, Polarization-based vision through haze, Appl. Opt. 42 (3) (2003) 511–525.
[12] S. Shwartz, E. Namer, Y. Schechner, Blind haze separation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2006, pp. 1984–1991.
[13] S.G. Narasimhan, S.K. Nayar, Chromatic framework for vision in bad weather, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2000, pp. 598–605.
[14] S.G. Narasimhan, S.K. Nayar, Vision and the atmosphere, Int. J. Comput. Vis. 48 (3) (2002) 233–254.
[15] S.G. Narasimhan, S.K. Nayar, Contrast restoration of weather degraded images, IEEE Trans. Pattern Anal. Mach. Intell. 25 (6) (2003) 713–724.
[16] K. Nishino, L. Kratz, S. Lombardi, Bayesian defogging, Int. J. Comput. Vis. 98 (3) (2012) 263–278.
[17] L. Kratz, K. Nishino, Factorizing scene albedo and depth from a single foggy image, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1701–1708.
[18] J.P. Tarel, Fast visibility restoration from a single color or gray level image, in: Proceedings of the International Conference on Computer Vision, 2009, pp. 2201–2208.
[19] S.C. Pei, T.Y. Lee, Nighttime haze removal using color transfer pre-processing and dark channel prior, in: Proceedings of the IEEE International Conference on Image Processing, 2012, pp. 957–960.
[20] K.M. He, J. Sun, X.O. Tang, Guided image filtering, IEEE Trans. Pattern Anal. Mach. Intell. 35 (6) (2013) 1397–1409.
[21] C. Xiao, J. Gan, Fast image dehazing using guided joint bilateral filter, Vis. Comput. 28 (6–8) (2012) 713–721.
[22] B. Xie, F. Guo, Z. Cai, Improved single image dehazing using dark channel prior and multi-scale retinex, in: Proceedings of the International Conference on Intelligent Systems Design and Engineering Applications, 2010, pp. 848–851.
[23] R.T. Tan, Visibility in bad weather from a single image, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008, pp. 1–8.
[24] R. Fattal, Single image dehazing, ACM Trans. Gr. 27 (3) (2008) 72.
[25] K.M. He, J. Sun, X.O. Tang, Single image haze removal using dark channel prior, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 1956–1963.
[26] A. Levin, D. Lischinski, Y. Weiss, A closed form solution to natural image matting, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2006, pp. 61–68.
[27] Constant Quality Prints (Part 1), U.S. Camera, 1940, 3(12):55–57, 60–61.
[28] A. Adams, The Negative, The Ansel Adams Photograph Series, Little, Brown and Company, New York, 1981.
[29] Eye of the Photographer, New York Institute of Photography, New York, Revised 1993.
[30] S. Paris, F. Durand, A fast approximation of the bilateral filter using a signal processing approach, in: Proceedings of the European Conference on Computer Vision, 2006, pp. 568–580.
[31] G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, K. Toyama, Digital photography with flash and no-flash image pairs, ACM Trans. Gr. 23 (3) (2004) 664–672.
[32] J. Kopf, M.F. Cohen, D. Lischinski, M. Uyttendaele, Joint bilateral upsampling, ACM Trans. Gr. 26 (3) (2007) 96.
[33] K. Zuiderveld, Contrast limited adaptive histogram equalization, Gr. Gems IV (1994) 474–485.
[34] D.J. Jobson, Z. Rahman, G.A. Woodell, A multiscale retinex for bridging the gap between color images and the human observation of scenes, IEEE Trans. Image Process. 6 (7) (1997) 965–976.
[35] B. Eriksson, Automatic Image De-weathering using Curvelet-based Vanishing Point Detection, 2010.
[36] Z. Ye, H. Mohamadian, Y. Ye, Discrete entropy and relative entropy study on nonlinear clustering of underwater and aerial images, in: Proceedings of the International Conference on Computer Vision, 2007, pp. 318–323.
[37] A. Michelson, Studies in Optics, Dover Publications, New York, 1995.
[38] K.B. Gibson, T.Q. Nguyen, A perceptual based contrast enhancement metric using AdaBoost, in: IEEE International Symposium on Circuits and Systems, 2012, pp. 1875–1878.
[39] I. Avcibas, B. Sankur, K. Sayood, Statistical evaluation of image quality measures, J. Electron. Imag. 11 (2) (2002) 206–223.
[40] S.H. Wang, J. Zheng, H.M. Hu, B. Li, Naturalness preserved enhancement algorithm for non-uniform illumination images, IEEE Trans. Image Process. 22 (9) (2013) 3538–3548.