
Underwater Image Enhancement by Wavelength Compensation and Dehazing
Article in IEEE Transactions on Image Processing · December 2011
DOI: 10.1109/TIP.2011.2179666 · Source: PubMed
John Y. Chiang and Ying-Ching Chen, National Sun Yat-sen University
(https://www.researchgate.net/publication/51899044)


1756 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 4, APRIL 2012

Underwater Image Enhancement by Wavelength Compensation and Dehazing

John Y. Chiang and Ying-Ching Chen

Abstract—Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the image captured. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing techniques can handle light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting, simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of the possible presence of an artificial light source into consideration. Once the depth map, i.e., the distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source is employed during the image capturing process. After compensating the effect of artificial light, the haze phenomenon and the discrepancy in wavelength attenuation along the underwater propagation path to the camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the WCID proposed.

Index Terms—Color change, image dehazing, light scattering, underwater image, wavelength compensation.

Fig. 1. Hazing and bluish effects caused by light scattering and color change in underwater images. This image is part of an underwater footage on the Youtube website filmed by the Bubble Vision Company.

I. INTRODUCTION

ACQUIRING clear images in underwater environments is an important issue in ocean engineering [1], [2]. The quality of underwater images plays a pivotal role in scientific missions such as monitoring sea life, taking census of populations, and assessing geological or biological environments. Capturing images underwater is challenging, mostly due to haze caused by light that is reflected from a surface and is deflected and scattered by water particles, and color change due to varying degrees of light attenuation for different wavelengths [3]–[5]. Light scattering and color change result in contrast loss and color deviation in images acquired underwater. For example, in Fig. 1, the haze in the school of Carangid, the diver, and the reef at the back is attributed to light scattering, whereas color change is the reason for the bluish tone appearing in the brown coral reef at the bottom and the yellow fish in the upper-right corner.

Haze is caused by suspended particles such as sand, minerals, and plankton that exist in lakes, oceans, and rivers. As light reflected from objects propagates toward the camera, a portion of the light meets these suspended particles. This in turn absorbs and scatters the light beam, as illustrated in Fig. 2. In the absence of blackbody radiation [6], the multiscattering process along the course of propagation further disperses the beam into homogeneous background light.

Conventionally, the processing of underwater images focuses solely on compensating either light scattering or color change distortion. Techniques targeting the removal of light scattering distortion include exploiting the polarization effects to compensate for visibility degradation [7], using image dehazing to restore the clarity of the underwater images [8], and combining point spread functions and a modulation transfer function to reduce the blurring effect [9]. Although the aforementioned approaches can enhance scene contrast and increase visibility, distortion caused by the disparity in wavelength attenuation, i.e., color change, remains intact. On the other hand, color-change correction techniques estimate underwater environmental parameters by performing color registration with consideration of

Manuscript received July 16, 2011; revised December 04, 2011; accepted December 06, 2011. Date of publication December 13, 2011; date of current version March 21, 2012. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Wai-Kuen Cham.
The authors are with the Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung 80424, Taiwan (e-mail: [email protected]; [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2011.2179666

1057-7149/$26.00 © 2011 IEEE



light attenuation [10], employing histogram equalization in both RGB and HSI color spaces to balance the luminance distributions of color [11], and dynamically mixing the illumination of an object in a distance-dependent way by using a controllable multicolor light source to compensate color loss [12]. Despite the improved color balance, these methods are ineffective in removing the image blurriness caused by light scattering. A systematic approach is needed to take all the factors concerning light scattering, color change, and the possible presence of an artificial light source into consideration.

The algorithm for wavelength compensation and image dehazing (WCID) proposed in this paper combines wavelength compensation and image dehazing techniques to remove the distortions caused by light scattering and color change. Dark-channel prior [13], an existing scene-depth derivation method, is used first to estimate the distances of the scene objects to the camera. The low intensities in the dark channel are mainly due to three factors: 1) shadows, e.g., the shadows of creatures, plankton, plants, or rocks in seabed images; 2) colorful objects or surfaces, e.g., green plants, red or yellow sands, and colorful rocks/minerals, deficient in certain color channels; and 3) dark objects or surfaces, e.g., dark creatures and stone [8]. Based on the depth map derived, the foreground and background areas within the image are segmented. The light intensities of foreground and background are then compared to determine whether an artificial light source is employed during the image acquiring process. If an artificial light source is detected, the luminance introduced by the auxiliary lighting is removed from the foreground area to avoid overcompensation in the stages followed. Next, the dehazing algorithm and wavelength compensation are utilized to remove the haze effect and color change along the underwater propagation path to the camera. The residual energy ratio among different color channels in the background light is employed to estimate the water depth within an underwater scene. Energy compensation for each color channel is carried out subsequently to adjust the bluish tone to a natural color. With WCID, expensive optical instruments or stereo image pairs are no longer required. WCID can effectively enhance visibility and restore the color balance of underwater images, rendering high visual clarity and color fidelity.

Fig. 2. Natural light enters from air to an underwater scene point x. The light reflected propagates distance d(x) to the camera. The radiance perceived by the camera is the sum of two components: the background light formed by multiscattering and the direct transmission of reflected light.

Fig. 3. Different wavelengths of light are attenuated at different rates in water. The blue color travels the longest in the water due to its shortest wavelength. This is the reason that underwater images are dominated by blue color.

A. Underwater Model

The background light in an underwater image can be used to approximate the true in-scattering term in the full radiative transport equation to achieve the following simplified hazy image formation model [14], [15]:

$$I_\lambda(x) = J_\lambda(x)\,t_\lambda(x) + \bigl(1 - t_\lambda(x)\bigr)B_\lambda,\quad \lambda \in \{\text{red, green, blue}\} \qquad (1)$$

where $x$ is a point in the underwater scene, $I_\lambda(x)$ is the image captured by the camera, $J_\lambda(x)$ is the scene radiance at point $x$, $t_\lambda(x)$ is the residual energy ratio of $J_\lambda(x)$ after reflecting from point $x$ in the underwater scene and reaching the camera, $B_\lambda$ is the homogeneous background light, and $\lambda$ is the light wavelength. Note that the residual energy ratio $t_\lambda(x)$ is a function of both the wavelength $\lambda$ and the object–camera distance $d(x)$; $t_\lambda(x)$ summarizes the overall effects of both light scattering and color change suffered by light with wavelength $\lambda$ traveling the underwater distance $d(x)$. The direct attenuation term $J_\lambda(x)\,t_\lambda(x)$ describes the decay of scene radiance in the water [16]. The residual energy ratio $t_\lambda(x)$ can be represented alternatively as the ratio between the energies of a light beam with wavelength $\lambda$ before and after traveling distance $d(x)$ within the water, $E_\lambda(x, 0)$ and $E_\lambda(x, d(x))$, respectively, as follows:

$$t_\lambda(x) = \frac{E_\lambda(x, d(x))}{E_\lambda(x, 0)} = \mathrm{Nrer}(\lambda)^{d(x)} \qquad (2)$$

where the normalized residual energy ratio $\mathrm{Nrer}(\lambda)$ corresponds to the ratio of residual to initial energy for every unit of distance propagated and is governed by the medium extinction coefficient [15].

The normalized residual energy ratio $\mathrm{Nrer}(\lambda)$ depends on the light wavelength transmitted [17], as illustrated in Fig. 3, where red light possesses a longer wavelength and lower frequency and thereby attenuates faster than its blue counterpart. This results in the bluish tone prevalent in underwater images [18].

Other than the wavelength of light transmitted, the normalized residual energy ratio $\mathrm{Nrer}(\lambda)$ is also affected by water salinity and the concentration of phytoplankton [17]. In light of this observation, oceanic water is further classified into three categories. Type-I waters represent extremely clear oceanic waters. Most clear coastal waters, with a higher level of attenuation, belong to the Type-II class.

Turbid upwelling coastal waters are listed as Type III. Water types I, II, and III roughly correspond to oligo-, meso-, and eutrophic waters [19]. For every meter of ocean Type I that a light beam passes through, the values of the normalized residual energy ratio $\mathrm{Nrer}(\lambda)$ for red (700 nm), green (520 nm), and blue (440 nm) light are 82%, 95%, and 97.5%, respectively. Based on the water type considered, the normalized residual energy ratio $\mathrm{Nrer}(\lambda)$ can be adjusted based on that of ocean Type I as follows:

$$\mathrm{Nrer}(\lambda) = \begin{cases} 82\% & \text{if } \lambda = 700\ \text{nm (red)}\\ 95\% & \text{if } \lambda = 520\ \text{nm (green)}\\ 97.5\% & \text{if } \lambda = 440\ \text{nm (blue)} \end{cases} \qquad (3)$$

II. UNDERWATER IMAGE FORMATION MODEL

The proposed WCID algorithm proceeds in a direction inverse to the underwater image formation path discussed above, as depicted in Fig. 4. First, consider the possible presence and influence of the artificial light source. Next, remove the light scattering and color change that occurred along the course of propagation from the object to the camera. Finally, compensate the disparities of wavelength attenuation for traversing the water depth to the top of the image and fine-tune the energy loss by deriving a more precise depth value for every point within an image.

Fig. 4. Flowchart of the WCID algorithm proposed.

Fig. 2 illustrates an underwater image formation model. Homogeneous skylight entering from above into the water is the major source of illumination in an underwater environment. Incident light traverses from the surface of the water to the image scene, covering a range from depth $D$ through $D + R$, where $R$ corresponds to the image depth range. During the course of propagation, light with different wavelengths is subjected to varying degrees of attenuation. The color change of ambient lighting makes the underwater environment tinted with a bluish hue. As airlight $E_\lambda(0)$ incident from air to the water reaches the underwater scene point $x$ with depth $D(x)$, the amount of residual light $E_\lambda(D(x))$ formed after wavelength attenuation can be formulated according to the energy attenuation model in (2) as follows:

$$E_\lambda(D(x)) = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)},\quad \lambda \in \{\text{red, green, blue}\} \qquad (4)$$

At point $x$, the light reflected again travels distance $d(x)$ to the camera, forming pixel $I_\lambda(x)$, $\lambda \in \{\text{red, green, blue}\}$. Along this underwater object–camera path, two phenomena occur, i.e., light scattering and color change. Note that color change occurs not only along the surface–object propagation path but also along the object–camera route. The light $J_\lambda(x)$ emanated from point $x$ is equal to the amount of illuminating ambient light reflected, i.e., $J_\lambda(x) = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}\,\rho_\lambda(x)$, where $\rho_\lambda(x)$ is the reflectivity of point $x$ for light with wavelength $\lambda$. By following the image formation model in a hazy environment in (1), the image $I_\lambda(x)$ formed at the camera can be formulated as follows:

$$I_\lambda(x) = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}\,\rho_\lambda(x)\,t_\lambda(x) + \bigl(1 - t_\lambda(x)\bigr)B_\lambda,\quad \lambda \in \{\text{red, green, blue}\} \qquad (5)$$

where the background light $B_\lambda$ represents the part of the object-reflected light and ambient light scattered toward the camera by particles in the water.

The background light increases as the object is placed farther away from the camera [13], [20]–[23]. Alternatively, the residual energy ratio $t_\lambda(x)$ in the above equation can be represented in terms of the normalized residual energy ratio $\mathrm{Nrer}(\lambda)$ using (2) as follows:

$$I_\lambda(x) = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}\,\rho_\lambda(x)\,\mathrm{Nrer}(\lambda)^{d(x)} + \bigl(1 - \mathrm{Nrer}(\lambda)^{d(x)}\bigr)B_\lambda,\quad \lambda \in \{\text{red, green, blue}\} \qquad (6)$$

Equation (6) incorporates the light scattering during the course of propagation from the object to the camera and the wavelength attenuation along both the surface–object path $D(x)$ and the object–camera route $d(x)$. Once the scene depth, i.e., the object–camera distance $d(x)$, is known through the dark-channel prior, the value of the residual energy ratio $\mathrm{Nrer}(\lambda)^{d(x)}$ after wavelength attenuation can be calculated; thus, the direct attenuation term $E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}\,\rho_\lambda(x)\,\mathrm{Nrer}(\lambda)^{d(x)}$ is derivable through a dehazing procedure. The surface–object distance $D(x)$ is calculated by comparing the residual energy ratios of different color channels. Given the water depth $D(x)$, the amount of reflecting light $E_\lambda(0)\,\rho_\lambda(x)$, i.e., free of light scattering and color change, from point $x$ illuminated by airlight is determined.

Moreover, an artificial light source is often provided to overcome the insufficient lighting commonly encountered in an underwater photographic environment. The luminance contributed by the artificial light source has to be removed before the dehazing and wavelength compensation operations to avoid overcompensation. When an artificial light source with energy $E^{I}_\lambda(x)$ is detected, the light emitted has first to travel distance $L(x)$ before reaching point $x$. The residual energy after this course of propagation is $E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}$. The total amount of light impinging on point $x$ is therefore the summation of the ambient lighting $E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}$ and the attenuated artificial light $E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}$. The total amount of incident light $E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)} + E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}$ is reflected with reflectivity $\rho_\lambda(x)$ and bounces back distance $d(x)$ before reaching the camera.
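The forward formation model of (4)–(6), extended with the optional attenuated artificial-light term just introduced, can be sketched numerically as follows. This is an illustrative sketch under the Type-I Nrer values, not the authors' code; the argument names are ours.

```python
import numpy as np

NRER = np.array([0.82, 0.95, 0.975])  # red, green, blue (Type-I ocean values)

def image_formation(E0, D, rho, d, B, E_art=None, L_art=0.0):
    """Forward model: I = ((E0*Nrer^D + E_art*Nrer^L) * rho) * Nrer^d
                          + (1 - Nrer^d) * B
    E0: airlight energy at the surface; D: water depth of the scene point;
    rho: per-channel reflectivity; d: object-camera distance;
    B: per-channel background light; E_art, L_art: artificial source terms."""
    ambient = E0 * NRER ** D                     # surface-to-object decay, (4)
    if E_art is not None:                        # attenuated artificial light
        ambient = ambient + np.asarray(E_art) * NRER ** L_art
    J = ambient * np.asarray(rho)                # radiance leaving the point
    t = NRER ** d                                # object-camera residual, (2)
    return J * t + (1.0 - t) * np.asarray(B)
```

With `E_art=None` this is exactly (6); the red component of both the ambient term and the transmission decays fastest, which is what produces the bluish cast, and as `d` grows the pixel converges to the background light.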

During both forward and backward courses of propagation pertinent to $I_\lambda(x)$, color change occurs. Accordingly, (6) can be further modified into the hazy image formation equation

$$I_\lambda(x) = \bigl(E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)} + E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}\bigr)\,\rho_\lambda(x)\,\mathrm{Nrer}(\lambda)^{d(x)} + \bigl(1 - \mathrm{Nrer}(\lambda)^{d(x)}\bigr)B_\lambda,\quad \lambda \in \{\text{red, green, blue}\} \qquad (7)$$

The underwater image formation model in (7) takes the hazing effect, wavelength attenuation, and artificial lighting into consideration. Given the signal $I_\lambda(x)$ perceived, our goal is to remove the influences of artificial lighting, hazing, and color change along the object–camera path, and the color change suffered from the water surface to the object, as outlined in the previous paragraph. The following subsections discuss the steps for the estimation of the object–camera distance $d(x)$, the artificial light source, the water depth $D$, and the depth range $R$, as well as the corresponding procedures for dehazing and energy compensation.

A. Distance Between the Camera and the Object: $d(x)$

The common approach for estimating the depth of objects within a scene, i.e., a depth map, often requires two images for parallax [20]. In a hazy environment, haze increases with distance; therefore, haze itself can be a useful depth clue for scene understanding. Consequently, evaluating the concentration of haze in a single image is sufficient to predict the distance $d(x)$ between an object in the scene and the camera [21]. The dark-channel prior [8], which is an existing scene-depth derivation method, is based on the observation that, in most of the non-background-light patches $\Omega(x)$, where $x \in \Omega$, on a haze-free underwater image, at least one color channel has a very low intensity at some pixels. In other words, the minimum intensity in such a patch should have a very low value, i.e., a dark channel. Note that the low intensity observed through the dark channel is a consequence of the low reflectivity $\rho_\lambda(x)$ existing in certain color channels. If no pixels with a very low value can be found in the local patch $\Omega(x)$, this implies the existence of haze. The concentration of haze in a local patch can then be quantified by the dark-channel prior. This in turn provides the object–camera distance $d(x)$ [13].

As formulated in (7), the light $J_\lambda(x)$ reflected from point $x$ is

$$J_\lambda(x) = \bigl(E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)} + E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}\bigr)\,\rho_\lambda(x),\quad \lambda \in \{\text{red, green, blue}\} \qquad (8)$$

We define the dark channel $J^{\mathrm{dark}}(x)$ for the underwater image as

$$J^{\mathrm{dark}}(x) = \min_{\lambda \in \{\text{red, green, blue}\}}\Bigl(\min_{y \in \Omega(x)} J_\lambda(y)\Bigr) \qquad (9)$$

If point $x$ belongs to a part of the foreground object, the value of the dark channel is very small, i.e., $J^{\mathrm{dark}}(x) \approx 0$. Taking the min operation over the local patch $\Omega(x)$ on the hazy image $I_\lambda(x)$ in (7), we have

$$\min_{y \in \Omega(x)} I_\lambda(y) = \min_{y \in \Omega(x)}\Bigl(J_\lambda(y)\,\mathrm{Nrer}(\lambda)^{d(y)} + \bigl(1 - \mathrm{Nrer}(\lambda)^{d(y)}\bigr)B_\lambda\Bigr),\quad \lambda \in \{\text{red, green, blue}\} \qquad (10)$$

Since $B_\lambda$ is the homogeneous background light and the residual energy ratio $\mathrm{Nrer}(\lambda)^{d(y)}$ on the small local patch $\Omega(x)$ surrounding point $x$ is essentially a constant $\mathrm{Nrer}(\lambda)^{d(x)}$ [21], the min operation on the second term of (10) can subsequently be removed as

$$\min_{y \in \Omega(x)} I_\lambda(y) = \mathrm{Nrer}(\lambda)^{d(x)}\,\min_{y \in \Omega(x)} J_\lambda(y) + \bigl(1 - \mathrm{Nrer}(\lambda)^{d(x)}\bigr)B_\lambda,\quad \lambda \in \{\text{red, green, blue}\} \qquad (11)$$

We rearrange the above equation and perform one more min operation among all three color channels as follows:

$$\min_{\lambda}\Bigl(\frac{\min_{y \in \Omega(x)} I_\lambda(y)}{B_\lambda}\Bigr) = \min_{\lambda}\Bigl(\frac{\mathrm{Nrer}(\lambda)^{d(x)}\,\min_{y \in \Omega(x)} J_\lambda(y)}{B_\lambda} + 1 - \mathrm{Nrer}(\lambda)^{d(x)}\Bigr),\quad \lambda \in \{\text{red, green, blue}\} \qquad (12)$$

The first term on the right-hand side of (12) can be shown to satisfy the following inequality (refer to Appendix I for details):

$$\min_{\lambda}\Bigl(\frac{\mathrm{Nrer}(\lambda)^{d(x)}\,\min_{y \in \Omega(x)} J_\lambda(y)}{B_\lambda}\Bigr) \le \frac{\min_{\lambda}\bigl(\min_{y \in \Omega(x)} J_\lambda(y)\bigr)\cdot \max_{\lambda}\mathrm{Nrer}(\lambda)^{d(x)}}{\min_{\lambda} B_\lambda} \qquad (13)$$

The first term in the numerator of (13) is equal to the dark channel defined in (9) and is very close to zero. Therefore, (12) can be rewritten as

$$\min_{\lambda}\Bigl(\frac{\min_{y \in \Omega(x)} I_\lambda(y)}{B_\lambda}\Bigr) = 1 - \min_{\lambda}\mathrm{Nrer}(\lambda)^{d(x)},\quad \lambda \in \{\text{red, green, blue}\} \qquad (14)$$

Among all color channels, $\mathrm{Nrer}(\text{red})$ possesses the lowest residual value. This indicates that $\min_{\lambda}\mathrm{Nrer}(\lambda)^{d(x)}$, $\lambda \in \{\text{red, green, blue}\}$, is simply equal to $\mathrm{Nrer}(\text{red})^{d(x)}$. The background light $B_\lambda$ is usually assumed to be the pixel intensity with the highest brightness value in an image [20]. However, this simple assumption often renders erroneous results due to the presence of self-luminous organisms or an extremely smooth surface, e.g., a white fish. In order to increase the detection robustness of the background light, a min operation is first performed in every local patch $\Omega(x)$ of all pixels in a hazy image $I_\lambda(x)$.
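The dark channel of (9) reduces to a channel-wise minimum followed by a sliding spatial minimum. A straightforward, deliberately unoptimized sketch (ours, not the authors' code):

```python
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel of (9): per-pixel minimum over the three color channels
    and over a patch x patch spatial neighborhood (img is H x W x 3 in [0, 1]).
    A plain double loop for clarity; a real implementation would use an
    erosion / minimum filter."""
    chan_min = img.min(axis=2)                   # min over color channels
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode="edge")  # replicate borders
    h, w = chan_min.shape
    out = np.empty((h, w))
    for i in range(h):                           # sliding spatial minimum
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

On haze-free regions the dark channel stays near zero; a patch whose dark channel is uniformly lifted signals haze, and hence a larger object–camera distance.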

Fig. 5. (a) Depth map obtained by estimating $d(x)$, which is the distance between the object and the camera, using dark-channel prior. Blowups of (b) Frame I and (c) Frame II. Visible mosaic artifacts are observed due to the block-based operation of dark-channel prior.

Fig. 6. (a) Depth map obtained after refining with image matting. Blowups of (b) Frame I and (c) Frame II. When compared with Fig. 5(b) and (c), the depth map after refinement reduces the mosaic effect and captures the contours of objects more accurately.

The brightest pixel value among all local minima corresponds to the background light $B_\lambda$ as follows:

$$B_\lambda = \max_{x}\Bigl(\min_{y \in \Omega(x)} I_\lambda(y)\Bigr),\quad \lambda \in \{\text{red, green, blue}\} \qquad (15)$$

Given the values of $I_\lambda(x)$, $B_\lambda$, and $\mathrm{Nrer}(\text{red})$, the distance $d(x)$ between point $x$ on an object and the camera can be determined. The depth map of Fig. 1 is shown in Fig. 5(a).

Block-based dark-channel prior will inevitably introduce a mosaic artifact and produce a less accurate depth map, as shown in Fig. 5(b) and (c). By imposing a locally linear assumption for foreground and background colors and applying image matting to repartition the depth map, the mosaic effect is reduced, and object contours can be identified more precisely [13], [22]. Applying image matting to the underwater depth map derived by the general dark-channel methodology is a novel approach. We denote the depth map of Fig. 5(a) as $\hat{d}(x)$. The depth map $d(x)$ after refinement can be formulated as

$$d(x) = (L + \eta U)^{-1}\,\eta\,\hat{d}(x) \qquad (16)$$

where $U$ is a unit matrix, $\eta$ is a regularization coefficient, and $L$ represents the matting Laplacian matrix, whose entry for a pixel pair $(i, j)$ is given as follows:

$$L_{ij} = \sum_{k\,\mid\,(i,j)\in w_k}\Bigl(\delta_{ij} - \frac{1}{|w_k|}\Bigl(1 + (I_i - \mu_k)^{\mathsf{T}}\bigl(\Sigma_k + \tfrac{\varepsilon}{|w_k|}U\bigr)^{-1}(I_j - \mu_k)\Bigr)\Bigr) \qquad (17)$$

where $I$ represents the original image, $i$ and $j$ are pixel coordinates, $\delta_{ij}$ is the Kronecker delta, $\Sigma_k$ is the color covariance of a small area $w_k$, $\mu_k$ is the mean color value of $w_k$, and $\varepsilon$ is a regularization coefficient. Fig. 6 shows the depth map after applying image matting to remove mosaic distortion.

Fig. 7. Illuminated by an artificial light source, the intensity of the foreground appears brighter than that of the background.

Fig. 8. When the luminance contributed by an artificial light source is not deducted first, an overexposed image will be obtained after the compensation stages followed.

B. Removal of the Artificial Light Source

Artificial light sources are often supplemented to avoid the insufficient lighting commonly encountered in an underwater photographic environment, as shown in Fig. 7. If an artificial light source is employed during the image capturing process, the luminance contributed by it must be deducted first to avoid overcompensation in the stages followed, as illustrated in Fig. 8. Modeling, detecting, and compensating the presence of an artificial light source are novel to the processing of underwater images. The existence of an artificial light source can be determined by comparing the difference between the mean luminance of the foreground and that of the background. In an underwater image without artificial lighting, the dominant source of light originates from the airlight above the water surface. The underwater background corresponds to light transmitted without being absorbed and reflected by objects and is therefore the brighter part of the image. Higher mean luminance in the foreground of an image than that in the background indicates the existence of a supplementary light source.
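The background-light detection of (15) and the distance estimate implied by (14) can be sketched as below. This is a reader's sketch, not the authors' implementation; the paper additionally refines the raw depth map with image matting, which is omitted here.

```python
import numpy as np

NRER_RED = 0.82  # Type-I residual energy ratio of red light, per (3)

def local_min(channel: np.ndarray, patch: int) -> np.ndarray:
    """Sliding spatial minimum with edge padding (unoptimized)."""
    pad = patch // 2
    padded = np.pad(channel, pad, mode="edge")
    h, w = channel.shape
    return np.array([[padded[i:i + patch, j:j + patch].min() for j in range(w)]
                     for i in range(h)])

def background_light(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Robust background light of (15): min-filter each channel first, then
    take the brightest surviving value, so an isolated bright pixel
    (a white fish, a self-luminous organism) cannot be mistaken for B."""
    return np.array([local_min(img[..., c], patch).max() for c in range(3)])

def object_distance(img: np.ndarray, B: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel object-camera distance implied by (14):
    the channel/patch minimum of I/B equals 1 - Nrer(red)^d, hence
    d = log(1 - that minimum) / log(Nrer(red))."""
    ratio = np.min(np.stack([local_min(img[..., c] / B[c], patch)
                             for c in range(3)]), axis=0)
    ratio = np.clip(ratio, 0.0, 0.999)           # keep the log well-defined
    return np.log(1.0 - ratio) / np.log(NRER_RED)
```

The clipping constant and patch size are our choices; a denser haze (larger patch minimum of I/B) maps to a larger estimated distance, as the prior predicts.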

The foreground and the background of an image can be segmented based on the depth map derived earlier as follows:

$$\text{area-type}(x) = \begin{cases} \text{foreground} & \text{if } d(x) \le \delta\\ \text{background} & \text{if } d(x) > \delta \end{cases} \qquad (18)$$

where $d(x)$ is the distance between the object and the camera, whereas $\delta$ is a threshold.

Upon detection of the presence of artificial lighting, the added luminance introduced by the artificial light source has to be removed. The influence of artificial lighting perceived by the camera is a function of the amount of luminance contributed by the light source and the surface reflectance of objects. Since a point light source emanates spherically, the amount of the luminance supplied is inversely proportional to the square of the distance between the object and the light source. The closer the object, the stronger the intensity of the artificial light source, and vice versa. As discussed earlier, the light $J_\lambda(x) = \bigl(E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)} + E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}\bigr)\,\rho_\lambda(x)$ reflected from point $x$ is equal to the product of the reflectivity $\rho_\lambda(x)$ times the summation of the ambient lighting $E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}$ and the attenuated artificial light $E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}$. For a fixed object–camera distance $d(x)$, the intensity of the attenuated artificial lighting is a constant. The difference in terms of brightness perceived by a camera for fixed $d(x)$ can therefore be attributed to the reflectance of objects. Of all the pixels with the same $d(x)$, finding solutions for the artificial light $E^{I}_\lambda(x)$ and the reflectivities $\rho_\lambda(x)$, $\lambda \in \{\text{red, green, blue}\}$, is an overdetermined problem and can be treated as an optimization for a least squares solution. In this problem, the number of equations exceeds the number of unknowns. There is generally no exact solution but an approximate solution $\hat{\rho}$ that minimizes the quadratic error $\|J - A\rho\|^{2}$, where $A$ stacks the known incident-light terms $E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)} + E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}$ of all pixels sharing the same $d(x)$ [note that this term is known through (7)]. This approximate solution is called the least squares solution and is computed by using the pseudoinverse of $A$ as follows:

$$\hat{\rho} = A^{+}J = (A^{\mathsf{T}}A)^{-1}A^{\mathsf{T}}J,\quad \lambda \in \{\text{red, green, blue}\} \qquad (19)$$

Fig. 9 shows the distribution of the luminance of the artificial light source present and the reflectance of the red, green, and blue channels in Fig. 7.

Fig. 9. Distribution of (a) the luminance of an artificial light source and (b) red, (c) green, and (d) blue channel reflectivity.

After deriving the luminance contributed by the artificial light source $E^{I}_\lambda(x)$ and the reflectivity $\rho_\lambda(x)$, $\lambda \in \{\text{red, green, blue}\}$, at point $x$, the influence caused by the artificial lighting can be removed by subtraction from (7) as follows:

$$I'_\lambda(x) = I_\lambda(x) - E^{I}_\lambda(x)\,\mathrm{Nrer}(\lambda)^{L(x)}\,\rho_\lambda(x)\,\mathrm{Nrer}(\lambda)^{d(x)} = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}\,\rho_\lambda(x)\,\mathrm{Nrer}(\lambda)^{d(x)} + \bigl(1 - \mathrm{Nrer}(\lambda)^{d(x)}\bigr)B_\lambda \qquad (20)$$

Fig. 10. Underwater image obtained after the removal of the artificial light is shown in the right panel of the split screen, whereas the original one is in the left panel. The influence of an artificial light depends on the object–camera distance. The closer the object, the more significant the impact is.

The split screens of Fig. 10(a) and (b) show the results (see right panel) after eliminating the artificial lighting detected in Figs. 1 and 7 (see left panel), respectively. Due to the size of the scene area covered, the amount of artificial light received in Fig. 7 is more concentrated and larger than that of Fig. 1.

C. Compensation of Light Scattering and Color Change Along the Object–Camera Path

After removing the artificial light source and deriving the distance $d(x)$ between an object and the camera, the haze can be removed by subtracting the in-scattering term $\bigl(1 - \mathrm{Nrer}(\lambda)^{d(x)}\bigr)B_\lambda$ of the hazy image formation model (1) from the image perceived by the camera, i.e.,

$$I''_\lambda(x) = I'_\lambda(x) - \bigl(1 - \mathrm{Nrer}(\lambda)^{d(x)}\bigr)B_\lambda = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}\,\rho_\lambda(x)\,\mathrm{Nrer}(\lambda)^{d(x)} \qquad (21)$$

The image after dehazing is shown in the right panel of Fig. 11(a). Next, the color change encountered along the object–camera path can be corrected by dividing both sides of (21) by the wavelength-dependent attenuation ratio $\mathrm{Nrer}(\lambda)^{d(x)}$. Therefore, the image after dehazing and correction of the color change introduced through the propagation path $d(x)$ can be formulated as

$$\frac{I''_\lambda(x)}{\mathrm{Nrer}(\lambda)^{d(x)}} = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D(x)}\,\rho_\lambda(x),\quad \lambda \in \{\text{red, green, blue}\} \qquad (22)$$

Fig. 11. Underwater image obtained (a) after eliminating haze and (b) after correcting color change along the object–camera light propagation route $d(x)$ is shown in the right panel of the split screen, respectively, whereas the original image, i.e., Fig. 1, is in the left panel. A bluish color offset remains prevalent.

The right panel of the split screen in Fig. 11(b) shows the result after the removal of light scattering and correction of color change along the object–camera path $d(x)$. A bluish color offset remains prevalent across the whole frame. This is due to the disparity in the amount of wavelength attenuation encountered when skylight penetrates through the water surface to reach the imaging scene, causing underwater environments to be illuminated by bluish ambient light. The further estimation of the water depth of the image scene is required to satisfactorily correct the color change introduced along the course of propagation from the water surface to the photographic scene.

Since our goal is to obtain a haze-free and color-corrected image, i.e., $E_\lambda(0)\,\rho_\lambda(x)$, the only unknown left in (22) is the water depth $D(x)$ of point $x$. In the next subsection, the derivation of the scene depth will be discussed.

D. Underwater Depth at the Top of the Photographic Scene: $D$

For the homogeneous skylight, the energies corresponding to the red, green, and blue channels right above the water surface shall be the same, i.e., $E_{\text{red}}(0) = E_{\text{green}}(0) = E_{\text{blue}}(0)$. After penetrating the water depth $D$, the energy of each color channel after attenuation, i.e., the underwater ambient light $E_\lambda(D)$, $\lambda \in \{\text{red, green, blue}\}$, becomes $E_{\text{red}}(D)$, $E_{\text{green}}(D)$, and $E_{\text{blue}}(D)$, respectively. To estimate the underwater depth $D$, the corresponding intensity of the ambient lighting shall be detected first. The water depth $D$ is then the least squares solution that makes the difference between the attenuated version of the incident light $E_\lambda(0)$ after propagation and the detected ambient lighting $E_\lambda(D)$, $\lambda \in \{\text{red, green, blue}\}$, at the minimum, as follows:

$$D = \arg\min_{z}\ \sum_{\lambda \in \{\text{red, green, blue}\}}\bigl(E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{z} - E_\lambda(D)\bigr)^{2} \qquad (23)$$

Once $D$ is determined, the amount of attenuation in each wavelength can be utilized to compensate the energy differences and correct the color change distortion by dividing (22) by $\mathrm{Nrer}(\lambda)^{D}$ as follows:

$$\hat{J}_\lambda(x) = \frac{I''_\lambda(x)}{\mathrm{Nrer}(\lambda)^{d(x)}\,\mathrm{Nrer}(\lambda)^{D}} = E_\lambda(0)\,\rho_\lambda(x),\quad \lambda \in \{\text{red, green, blue}\} \qquad (24)$$

where $\hat{J}_\lambda(x)$ is the restored energy of the underwater image after haze removal and calibration of color change, as shown in Fig. 12.

Fig. 12. Underwater image after removing light scattering and color change by considering the artificial light source, the object–camera distance $d(x)$, and the depth from the water surface to the top of the image $D$. As the depths of the top and bottom of the image are different, visible color change distortion still exists at the lower portion of the image.

However, the water depths at the top and bottom of the scene are usually not the same in a typical image. Employing a single value of depth $D$ to compensate the entire image will leave visible color change at the bottom portion since the intensity of incident light decreases significantly as the water depth increases. Thus, the refinement of the depth estimation for each point in the image is necessary to achieve correct energy compensation at different depths.
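Equation (23) is a one-dimensional least squares problem and can be solved by a direct scan over candidate depths. A sketch, assuming unit surface energy and the Type-I Nrer values (the grid step and search range are our choices, not the paper's):

```python
import numpy as np

NRER = np.array([0.82, 0.95, 0.975])  # red, green, blue (Type-I ocean values)

def estimate_water_depth(ambient, E0=1.0, z_max=50.0, step=0.01):
    """Least squares depth of (23): choose the depth z whose predicted
    attenuated skylight E0 * Nrer^z is closest, in squared error summed
    over the three channels, to the detected ambient light."""
    zs = np.arange(0.0, z_max, step)
    preds = E0 * NRER[None, :] ** zs[:, None]    # candidate spectra, one per z
    errs = ((preds - np.asarray(ambient)) ** 2).sum(axis=1)
    return zs[errs.argmin()]
```

In practice the detected background light of the three channels would be supplied as the ambient spectrum; deeper water shifts that spectrum toward blue, and the scan recovers the depth accordingly.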
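Chaining (20)–(24) for a single pixel gives a compact inversion of the formation model. The sketch below is a reader's reconstruction with the Type-I Nrer values, not the authors' code; it forward-simulates a pixel with (6) and then recovers the surface-level reflected energy $E_\lambda(0)\,\rho_\lambda(x)$:

```python
import numpy as np

NRER = np.array([0.82, 0.95, 0.975])  # red, green, blue (Type-I ocean values)

def restore_pixel(I, B, d, D, art=None):
    """Invert the formation model for one pixel:
    (20) subtract the artificial-light contribution, when one was detected;
    (21) subtract the in-scattering term (1 - Nrer^d) * B  (dehazing);
    (22) divide by Nrer^d  (object-camera color change);
    (24) divide by Nrer^D  (surface-object color change)."""
    I = np.asarray(I, dtype=float)
    t = NRER ** d
    if art is not None:                          # (20)
        I = I - np.asarray(art) * t
    direct = I - (1.0 - t) * np.asarray(B)       # (21)
    return direct / t / NRER ** D                # (22), (24)

# Round trip: forward-simulate a pixel with (6), then invert it.
rho = np.array([0.6, 0.4, 0.2])
B = np.array([0.1, 0.2, 0.3])
J = NRER ** 5.0 * rho                            # radiance per (4)-(5), E0 = 1
I = J * NRER ** 3.0 + (1.0 - NRER ** 3.0) * B    # observed pixel, (6)
print(restore_pixel(I, B, d=3.0, D=5.0))         # recovers rho (up to rounding)
```

The round trip makes the division order concrete: dehazing must precede the two color-change corrections, otherwise the background-light term would be amplified along with the signal.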

Fig. 13. Underwater image obtained after processing with WCID. The depth of the underwater scene ranges from 5 m at the top of the image to 8 m at the bottom portion.

Fig. 14. Diver holding a board with six patches of designated colors, (a) before diving, and diving at a depth of (b) 5 m and (c) 15 m.

Fig. 15. (a) The color board, serving as a ground truth, of the image taken above water in Fig. 14(a). Color boards obtained after processing Fig. 14(b) with (b) WCID, (c) the dark-channel-based dehazing algorithm, (d) the chromatism-based dehazing algorithm, and (e) histogram equalization. The dotted white rectangles correspond to the effective color areas for computing SNR values, as listed in Table I.

TABLE I
QUANTITATIVE PERFORMANCE EVALUATION: SNR VALUES FOR THE EFFECTIVE COLOR AREAS IN FIG. 15(b)–(e) (WATER DEPTH 5 m) AND FIG. 16(a)–(d) (WATER DEPTH 15 m), EMPLOYING THE EFFECTIVE COLOR AREA IN FIG. 15(a) AS THE GROUND-TRUTH IMAGE

E. Image Depth Range


The image acquired covers a depth range of an underwater scene from D to D + R, as shown in Fig. 2. When light penetrates water covering the depth range R, a disparate amount of color change will be induced at the top and at the bottom of the image. This phenomenon necessitates varying energy compensation, adjusted according to the depth of each underwater point, to correctly rectify the color change.

Let the top background pixel be the point at the top of the image background, with the underwater depth D, and let the bottom background pixel be the point at the bottom of the image background, with the underwater depth D + R, where R represents the range of water depth covered in the image. By detecting the corresponding intensity of the background light at depths D and D + R, the values of the shallowest water depth D and the deepest water depth D + R can be derived from (23). The water depth of any point in the image lies within the range D through D + R. Therefore, the depth of an image point can be fine-tuned by linear interpolation between those of the top and bottom background points. Suppose that pixel x and the top and bottom background pixels are located on scanlines j, j_t, and j_b, respectively. The underwater depth of pixel x can then be derived pointwise by linear interpolation as follows:

D(x) = D + R \cdot \frac{j - j_t}{j_b - j_t} \qquad (25)

Once the underwater depths D(x) for all pixels are obtained, the restored energy of the underwater image after haze removal and calibration of color change in (24) can be modified to better compensate the energy loss during light propagation along different water depths by dividing (22) by Nrer(λ)^{D(x)} instead of Nrer(λ)^{D}:

\bar{E}_{\lambda}(x) = \frac{\hat{E}_{\lambda}(x)}{\mathrm{Nrer}(\lambda)^{D(x)}}, \quad \lambda \in \{\text{red, green, blue}\} \qquad (26)

Fig. 16. Color boards obtained after processing Fig. 14(c) with (a) WCID, (b) the dark-channel-based dehazing algorithm, (c) the chromatism-based dehazing algorithm, and (d) histogram equalization. The dotted white rectangles correspond to the effective color areas for computing SNR values, as listed in Table I.

Other than taking the artificial light source, the object–camera distance d(x), and the water depth D into consideration, Fig. 13 demonstrates the result of fine-tuning the amount of wavelength compensation by deriving the water depth D(x) of every image pixel, where the Nrer values used for red, green, and blue light are 82%, 95%, and 97.5%, respectively. In comparison with Fig. 12, the color change suffered at the lower part of the image is greatly reduced. Color balance is restored for the whole image, rather than just the top portion of the frame.
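The per-scanline depth interpolation and the resulting per-pixel compensation can be sketched as follows. The function names are hypothetical, and the 5 m to 8 m depth range mirrors the scene of Fig. 13; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

NRER = {"red": 0.82, "green": 0.95, "blue": 0.975}  # values used in the paper

def pixel_depth(j, j_top, j_bottom, D, R):
    """Water depth of a pixel on scanline j, linearly interpolated between
    the top background point (depth D, scanline j_top) and the bottom
    background point (depth D + R, scanline j_bottom)."""
    return D + R * (j - j_top) / (j_bottom - j_top)

def compensate_per_pixel(E_hat, rows, j_top, j_bottom, D, R, nrer):
    """Divide by Nrer(lambda)**D(x) instead of a single Nrer(lambda)**D."""
    Dx = pixel_depth(rows, j_top, j_bottom, D, R)
    return E_hat / (nrer ** Dx)

# Toy usage: a 4-row red channel whose background spans 5 m (top) to 8 m (bottom).
rows = np.arange(4).reshape(4, 1)       # scanline index of each row
E_hat = np.full((4, 1), 0.25)           # dehazed, path-corrected channel
out = compensate_per_pixel(E_hat, rows, j_top=0, j_bottom=3, D=5.0, R=3.0,
                           nrer=NRER["red"])
```

Deeper rows are divided by a smaller Nrer(λ)^{D(x)}, so the bottom of the frame is boosted more than the top, removing the residual color change there.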
1764 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 4, APRIL 2012

Fig. 17. Image obtained after processing with (a) WCID (Fig. 13 is included here for ease of comparison), (b) the dark-channel-based dehazing algorithm, (c) the chromatism-based dehazing algorithm, and (d) histogram equalization.

Fig. 18. (a) Image obtained after processing with WCID. The underwater depth of the photographic scene ranges from 17 to 18 m. The brightness emitted by the artificial light source is estimated to be 100. (b), (c) Images obtained after processing with the dark-channel-based and the chromatism-based dehazing algorithms, respectively. (d) Image obtained by histogram equalization; haze effects and color change still exist.

III. EXPERIMENTAL RESULTS

The performance of the proposed WCID algorithm is evaluated both objectively and subjectively by utilizing ground-truth color patches and videos downloaded from the Youtube website. Both results demonstrate the superior haze-removal and color-balancing capabilities of the proposed WCID over traditional dehazing and histogram equalization methods.
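The objective part of this evaluation compares processed color boards against the above-water ground truth via SNR over aligned effective color areas. The paper does not spell out its exact SNR formula, so the sketch below assumes the conventional definition (ground-truth power over error power, in dB) and a hypothetical helper name.

```python
import numpy as np

def snr_db(ground_truth, processed):
    """SNR of a processed patch against its ground truth, in dB.

    Both inputs are float arrays of identical shape, e.g. an effective
    color area after size/orientation alignment of the color board.
    """
    gt = ground_truth.astype(np.float64)
    err = processed.astype(np.float64) - gt
    noise_power = np.sum(err ** 2)
    if noise_power == 0:
        return float("inf")
    return 10.0 * np.log10(np.sum(gt ** 2) / noise_power)

# Toy usage: a color-shifted restoration scores lower than a near-perfect one.
gt = np.full((8, 8, 3), 0.5)
near_perfect = gt * 0.99
red_attenuated = gt * np.array([0.6, 0.9, 1.0])   # red channel suppressed
good = snr_db(gt, near_perfect)
bad = snr_db(gt, red_attenuated)
```

A restoration that fails to recover the attenuated red channel accumulates a large error power in that channel and therefore a low SNR, matching the behavior reported for histogram equalization at greater depths.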

Fig. 19. (a) Underwater image extracted from the Youtube website [26]. (b) Image obtained after processing with WCID. The underwater depth of the photographic scene ranges from 18 to 22 m. (c)–(e) Images obtained after processing with the dark-channel-based dehazing algorithm, the chromatism-based dehazing algorithm, and histogram equalization, respectively.

A. Objective Performance Evaluation

Pictures taken of a diver holding a board with six patches of designated colors before diving (ground truth) and while diving at depths of 5 and 15 m, respectively, are shown in Fig. 14(a)–(c). The colors processed by the proposed WCID, by dark-channel dehazing [8], by chromatism-based dehazing [24], and by histogram equalization [11] are compared with the ground-truth ones taken above water, as shown in Figs. 15(a)–(e) and 16(a)–(d). Due to the difficulty of maintaining a consistent spatial relationship between the color board and the camera, size and orientation variances exist between pictures taken underwater. In calculating the SNR values, the size and orientation of the color board are first adjusted to match those of the ground-truth image. The areas occluded by the diver's fingers, as well as the lower portion of the color board, which is not within the scope of the image taken in Fig. 14(b), are excluded. The dotted white rectangles in Figs. 15 and 16 correspond to the effective color areas employed in deriving the SNR values. As shown in Table I, the proposed WCID obtains the highest SNR value at both depths. In addition, the performance of WCID is the most robust across different water depths due to the incorporation of wavelength compensation, image dehazing, and artificial lighting removal. On the other hand, the SNR value obtained by histogram equalization drops significantly as the depth increases. This is due to the fact that the dynamic range of the red color component decreases substantially with depth because of wavelength attenuation. An

Fig. 20. (a) Underwater shipwreck image extracted from the Youtube website [27]. (b) Image obtained after processing with WCID. The underwater depth of the photographic scene ranges from 3 to 8 m. (c)–(e) Images obtained after processing with the dark-channel-based dehazing algorithm, the chromatism-based dehazing algorithm, and histogram equalization, respectively.

attempt to blindly expand the range of color components, without taking dehazing or wavelength compensation into consideration, will be accompanied by a large amount of error. The relatively low SNR values can be attributed to the presence of water bubbles, the misalignment in matching color patches with different dimensions and orientations, and water permeating through the seam between the green color patch and the board, making it distinct from its counterpart in the ground-truth image.

B. Subjective Performance Evaluation

Fig. 1 is extracted from an underwater video on the Youtube website filmed by the Bubble Vision Company [26]. This AVI-formatted video is 350 s long with a resolution of 720p. Fig. 17(b) and (c) shows the results after processing with the dark-channel-based algorithm and the chromatism-based dehazing algorithm, respectively. Although the contrast of the image is increased, the color change appears even more prominent because the attenuated energy is not compensated individually based on different wavelengths. Fig. 17(d) illustrates the image after histogram equalization processing, in which the haze effect and color change still remain. In comparison with the results processed by these conventional techniques, the image processed by the WCID method, as shown in Fig. 13, is effectively haze free and color balanced. WCID gives the image the original color and clarity it would have had were it not taken underwater.

Fig. 7 is another image from the same video sequence, used to demonstrate the effect of the presence of a strong artificial light source. The result after WCID processing, i.e., Fig. 18(a),

Fig. 21. (a) Relatively large white shiny regions are marked by the red lines. (b) The marked regions are mistakenly classified as background by the dark-channel prior. (c) Depth map refinement through graph-cut-based α-expansion. Most of the misjudged areas are corrected.

is superior to those produced by the dehazing algorithms or histogram equalization, as shown in Fig. 18(b)–(d), in terms of both color balance and clarity.

Two more sets of test images and processing results are shown in Figs. 19 and 20 to facilitate further comparison. Fig. 19(a) is included for its variety in terms of scene depth, encompassing close to far range, whereas Fig. 20(a) is included for the uniqueness of its subject, i.e., a shipwreck on the sea floor. A consistent dehazing effect and color balance are obtained through the processing of the proposed WCID. More test videos can be downloaded from ftp://140.117.168.28.

IV. CONCLUSION

The WCID algorithm proposed in this paper can effectively restore image color balance and remove haze. To the best of our knowledge, no existing techniques can handle the light scattering and color change distortions suffered by underwater images simultaneously. The experimental results demonstrate the superior haze-removal and color-balancing capabilities of the proposed WCID over traditional dehazing and histogram equalization methods. However, the salinity and the amount of suspended particles in ocean water vary with time, location, and season, making accurate measurement of the rate of light energy loss Nrer(λ) difficult. Errors in the rate of light energy loss will affect the precision of both the water depth D and the underwater propagation distance d(x) derived. Constant monitoring and long-term tabulation of the rate of light energy loss according to time, location, and season might provide a reasonable estimate of the actual value. In addition, a calibration procedure might be performed by divers before an image-capturing session by taking a test picture at a known water depth and underwater propagation distance to fine-tune the rate of light energy loss. Furthermore, the artificial lighting is assumed to be a point source emitting uniform omnidirectional light beams across all wavelength spectra. This is different from the linear or surface light sources, with strong beam directionality and nonuniform color characteristics, commonly encountered in underwater photography. The precise estimation of the luminance distribution of the light source is also demanding. If the geographic location of the underwater footage and the characteristics of the light source employed are known a priori, even better results in haze removal and color balance can be reached.

Another source of compensation errors is the estimation of the scene depth d(x) by the dark-channel prior, a difficulty commonly encountered in depth derivation from a single image. Relatively large white shiny regions of a foreground object might be misjudged as far-away ones. For example, in Fig. 21(a), the areas marked by red lines are mistaken for background, as shown in Fig. 21(b). Two approaches are suggested to alleviate this situation [25]. One is enlarging the size of the local patch formulated in (9). The other is depth refinement that utilizes spatial and temporal correlation within and between video frames. The depth map refined through the graph-cut-based α-expansion method [25] is shown in Fig. 21(c).
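The failure mode and the first remedy can be illustrated with a minimal dark-channel computation whose patch size is adjustable. This is a brute-force sketch with a hypothetical function name, not the paper's implementation.

```python
import numpy as np

def dark_channel(img, patch):
    """Dark channel of an RGB image: per-pixel minimum over the color
    channels, followed by a local minimum filter over a patch x patch
    window (cf. the local patch in (9))."""
    per_pixel_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    h, w = per_pixel_min.shape
    out = np.empty_like(per_pixel_min)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# Toy scene: a large white shiny region inside an otherwise dark frame.
img = np.zeros((20, 20, 3)) + 0.05
img[5:10, 5:10, :] = 1.0                 # shiny foreground object

small = dark_channel(img, patch=3)       # region stays bright: misread as far
large = dark_channel(img, patch=15)      # window reaches darker pixels
```

With the small patch, the center of the shiny region keeps a high dark-channel value and is misjudged as background; enlarging the patch lets the window reach genuinely dark pixels, which is the first of the two remedies above (the second, spatio-temporal refinement, requires video frames).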

APPENDIX A

In order to show the validity of the inequality shown in the first equation at the bottom of the page, support from the following two lemmas is needed.

Lemma 1: (stated for λ ∈ {red, green, blue}).

Proof: According to the laws of physics, the normalized residual energy ratios satisfy Nrer(red) < Nrer(green) < Nrer(blue). The following inequality for the ambient light can then be established in an underwater environment; this, in turn, implies the relations used in the chain of inequalities displayed at the bottom of the page.

Following this line of thought, we obtain the second equation at the bottom of the page. Next, we want to show that the numerator on the right-hand side of the above equation can be further represented in an alternative form, as shown in the third equation at the bottom of the page.

Lemma 2: (stated for λ ∈ {red, green, blue}).

Proof: According to the normalized residual energy ratios Nrer(red) < Nrer(green) < Nrer(blue), the corresponding inequality holds, and the chain of inequalities displayed at the bottom of the page follows.
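The ordering of the ambient-light components that both proofs rely on can be made explicit. In the sketch below, E_λ(0) denotes the light energy just beneath the water surface and is assumed, for illustration only, to be comparable across the three channels; this assumption is not stated in the recovered text.

```latex
% Ambient light reaching depth D, channel by channel:
B_\lambda = E_\lambda(0)\,\mathrm{Nrer}(\lambda)^{D},
  \qquad \lambda \in \{\mathrm{red},\,\mathrm{green},\,\mathrm{blue}\}.
% With Nrer(red) < Nrer(green) < Nrer(blue) and D > 0:
\mathrm{Nrer}(\mathrm{red})^{D} < \mathrm{Nrer}(\mathrm{green})^{D}
  < \mathrm{Nrer}(\mathrm{blue})^{D}
\;\Longrightarrow\;
B_{\mathrm{red}} < B_{\mathrm{green}} < B_{\mathrm{blue}},
% i.e., the underwater scene is lit by bluish ambient light,
% consistent with the offset observed in Fig. 11(b).
```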

Therefore, the chain of inequalities displayed at the bottom of the page establishes the claimed result.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for their valuable suggestions.

REFERENCES

[1] K. Lebart, C. Smith, E. Trucco, and D. M. Lane, "Automatic indexing of underwater survey video: Algorithm and benchmarking method," IEEE J. Ocean. Eng., vol. 28, no. 4, pp. 673–686, Oct. 2003.
[2] J. Yuh and M. West, "Underwater robotics," Adv. Robot., vol. 15, no. 5, pp. 609–639, 2001.
[3] J. R. Zaneveld and W. Pegau, "Robust underwater visibility parameter," Opt. Exp., vol. 11, no. 23, pp. 2997–3009, 2003.
[4] E. Trucco and A. T. Olmos-Antillon, "Self-tuning underwater image restoration," IEEE J. Ocean. Eng., vol. 31, no. 2, pp. 511–519, Apr. 2006.
[5] J. S. Jaffe, "Computer modeling and the design of optimal underwater imaging systems," IEEE J. Ocean. Eng., vol. 15, no. 2, pp. 101–111, Apr. 1990.
[6] M. C. W. van Rossum and T. M. Nieuwenhuizen, "Multiple scattering of classical waves: Microscopy, mesoscopy and diffusion," Rev. Modern Phys., vol. 71, no. 1, pp. 313–371, Jan. 1999.
[7] Y. Y. Schechner and N. Karpel, "Recovery of underwater visibility and structure by polarization analysis," IEEE J. Ocean. Eng., vol. 30, no. 3, pp. 570–587, Jul. 2005.
[8] L. Chao and M. Wang, "Removal of water scattering," in Proc. Int. Conf. Comput. Eng. Technol., 2010, vol. 2, pp. 35–39.
[9] W. Hou, D. J. Gray, A. D. Weidemann, G. R. Fournier, and J. L. Forand, "Automated underwater image restoration and retrieval of related optical properties," in Proc. IGARSS, 2007, vol. 1, pp. 1889–1892.
[10] A. Yamashita, M. Fujii, and T. Kaneko, "Color registration of underwater images for underwater sensing with consideration of light attenuation," in Proc. Int. Conf. Robot. Autom., 2007, pp. 4570–4575.
[11] K. Iqbal, R. Abdul Salam, A. Osman, and A. Zawawi Talib, "Underwater image enhancement using an integrated color model," Int. J. Comput. Sci., vol. 34, no. 2, pp. 2–12, 2007.
[12] I. Vasilescu, C. Detwiler, and D. Rus, "Color-accurate underwater imaging using perceptual adaptive illumination," in Proc. Robot.: Sci. Syst., Zaragoza, Spain, 2010.
[13] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proc. IEEE CVPR, 2009, vol. 1, pp. 1956–1963.
[14] S. Shwartz, E. Namer, and Y. Y. Schechner, "Blind haze separation," in Proc. IEEE CVPR, 2006, vol. 2, pp. 1984–1991.
[15] J. T. Houghton, The Physics of Atmospheres, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 2001, ch. 2.
[16] W. N. McFarland, "Light in the sea—Correlations with behaviors of fishes and invertebrates," Amer. Zool., vol. 26, no. 2, pp. 389–401, 1986.
[17] S. Q. Duntley, "Light in the sea," J. Opt. Soc. Amer., vol. 53, no. 2, pp. 214–233, 1963.
[18] L. A. Torres-Méndez and G. Dudek, "Color correction of underwater images for aquatic robot inspection," in Proc. EMMCVPR, 2005, vol. 3757, Lecture Notes in Computer Science, pp. 60–73.
[19] N. G. Jerlov, Optical Oceanography. Amsterdam, The Netherlands: Elsevier, 1968.
[20] R. Tan, "Visibility in bad weather from a single image," in Proc. IEEE CVPR, 2008, vol. 1, pp. 1–8.
[21] R. Fattal, "Single image dehazing," in Proc. Int. Conf. Comput. Graph. Interact. Tech., 2008, pp. 1–9.
[22] A. Levin, D. Lischinski, and Y. Weiss, "A closed form solution to natural image matting," in Proc. IEEE CVPR, 2006, vol. 1, pp. 61–68.
[23] Y. Y. Schechner and N. Karpel, "Clean underwater vision," in Proc. IEEE CVPR, 2004, vol. 1, pp. 536–543.
[24] N. Carlevaris-Bianco, A. Mohan, and R. M. Eustice, "Initial results in underwater single image dehazing," in Proc. IEEE OCEANS, 2010, pp. 1–8.
[25] P. Carr and R. Hartley, "Improved single image dehazing using geometry," in Proc. IEEE DICTA, 2009, pp. 103–110.
[26] [Online]. Available: https://www.youtube.com/user/bubblevision
[27] [Online]. Available: https://www.youtube.com/watch?v=NKmc5dlVSRk&hd=1

John Y. Chiang received the B.S. degree in electrical engineering from National Taiwan University, Taipei, Taiwan, in 1985, and the M.S. and Ph.D. degrees in electrical engineering from Northwestern University, Evanston, IL, in 1987 and 1990, respectively.
He is currently an Associate Professor with the Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan. His research interests include image processing, computerized tongue diagnosis in traditional Chinese medicine, and content-based image retrieval.

Ying-Ching Chen received the B.S. degree in information engineering from I-Shou University, Kaohsiung, Taiwan, in 2009 and the M.S. degree from National Sun Yat-Sen University, Kaohsiung, in 2011.
He is currently fulfilling his one-year mandatory military service. His research interests include image processing and pattern recognition.

