
Volume 9, Issue 6, June – 2024 International Journal of Innovative Science and Research Technology

ISSN No: 2456-2165 https://doi.org/10.38124/ijisrt/IJISRT24JUN403

Underwater Image Enhancement using GAN


Sabnam Jebin C.1; Rahamathulla K2
Department of Computer Science & Engineering
Government Engineering College Thrissur, India

Abstract:- The process of enhancing distorted underwater images into clear images is known as underwater image enhancement. Distorted images are the raw underwater images taken from the deep portions of oceans, rivers, etc., using different cameras. Underwater images are mainly used in underwater robotics, ocean pasture and environmental monitoring, ocean exploration, and similar applications. The underwater image enhancement process uses an underwater image dataset that includes the distorted images (raw underwater images) and the corresponding enhanced underwater images. Currently used image enhancement methods cannot provide sufficient satisfaction for underwater image enhancement, so we propose a new method using a Generative Adversarial Network (GAN), which tries to produce more images from the dataset.

Keywords:- Generative Adversarial Network (GAN), Underwater Image Enhancement.

I. INTRODUCTION

Underwater imaging is an important task for various applications, including marine biology, oceanography, and underwater exploration. However, the quality of underwater images is often degraded due to the effects of water on light, such as scattering, absorption, and attenuation. This degradation can result in poor contrast, color distortion, and blurring, which can make it difficult to extract useful information from the images. As noted in [1], traditional image enhancement methods have limited effectiveness in improving the quality of underwater images. Figure 1.1 shows a general overview of underwater image enhancement: the left image set is the distorted (poor quality) underwater images, and the right image set is the corresponding enhanced (good quality) underwater images.

In recent years, deep learning methods, particularly Generative Adversarial Networks (GANs), have shown great promise in image enhancement tasks. GANs are a type of neural network that can learn to generate realistic images by training a generator network to produce images that are similar to a dataset of real images, and a discriminator network to differentiate between real and generated images. GANs have been successfully applied to tasks such as super-resolution and style transfer.

In this project, we investigate the use of GANs for enhancing underwater images. Our goal is to develop a GAN-based method that can produce high-quality enhanced images from a dataset of degraded underwater images. The proposed method involves training a GAN on a dataset of degraded and high-quality underwater images to learn the statistical properties of the data. Following [2], we evaluate the effectiveness of our method by comparing the quality of the generated images with the original degraded images and with other state-of-the-art methods. We also analyze the performance of our method in terms of its ability to preserve important features and details in the images.

The contributions of this project are twofold. First, we propose a GAN-based method for enhancing underwater images that can produce high-quality images with improved contrast, color, and sharpness. Second, we present an evaluation of the effectiveness of our method in comparison to other state-of-the-art methods. The results of this project have the potential to advance the field of underwater imaging and provide a valuable tool for underwater exploration and research.

Table I: Abbreviations and Acronyms

 GAN : Generative adversarial network
 UW-GAN : Underwater generative adversarial network
 UWC-Net : Underwater coarse-level generative network
 UWF-Net : Underwater fine-level network
 CNN : Convolutional neural network
 PAM : Parallel attention module
 ALM : Adaptive learning module
 EUVP : Enhancing underwater visual perception
 SSIM : Structural similarity
 PSNR : Peak signal-to-noise ratio

II. LITERATURE SURVEY

A. Underwater Image Enhancement
Image enhancement is done to improve an image's suitability for a given task, such as making it more pleasing to the eye. To put it another way, the main goal of image enhancement is to modify a given image so that the finished product is better suited than the original image for a given application. The paper [2] proposes a conditional generative adversarial network-based technique for improving real-time underwater images.


Similarly, [3] presents an improved multiscale dense generative adversarial network for underwater image enhancement. An objective function is created that supervises the adversarial training on the dataset and examines the global content, colour, local texture, and style information to assess the perceived image quality. Additionally, a sizable dataset with paired and unpaired underwater image collections of "bad" and "excellent" quality is proposed, collected with seven distinct cameras under a range of visibility conditions during maritime trips and experiments using human-robot collaboration [EUVP]. Both qualitative and quantitative analyses show that the suggested model can improve underwater image quality through paired and unpaired training.

The task of predicting depth from a single underwater image is challenging because there are no large-scale datasets of underwater depth image data and the problem is not well-posed. A precise depth estimate from a single underwater image is needed in applications like marine engineering and underwater robotics. In [4], an end-to-end underwater generative adversarial network (UW-GAN) for depth estimation from a single underwater image is proposed. Firstly, a coarse-level depth map is estimated using the underwater coarse-level generative network (UWC-Net). The estimated coarse-level depth map and the input picture are then concatenated as input into the underwater fine-level network (UWF-Net), which computes a fine-level depth map. The authors also suggest a method for creating synthetic underwater images for huge databases. For the performance analysis of the proposed network, both real-world and artificial underwater datasets are used.

The major causes of the quality decline in photographs taken under cloudy settings are 1) various meteorological factors and 2) the attenuation of reflected light. These elements significantly alter the hue and visibility of the acquired photos. To tackle these problems, [5] suggests an end-to-end trainable image de-hazing network [LIGHT-Net]. Haze reduction and colour constancy modules make up the proposed LIGHT-Net. The colour constancy module removes the colour tint that the weather imparted to the hazy image. The suggested haze reduction module, which uses an inception-residual block as its building block, aims to lessen the impact of haze as well as to enhance visibility in the foggy image. In contrast to conventional feature concatenation, the haze reduction module proposed in this research uses dense feature sharing to efficiently distribute the features discovered in the network's first layers.

The underwater image typically has low contrast, colour distortion, and fuzzy features because of the light's attenuation and dispersion in the water. By taking underwater imaging specifics into account, a unique two-stage underwater image convolutional neural network (CNN) based on structural decomposition (UWCNN-SD) [6] for underwater picture enhancement is suggested.

Due to light's dispersion and absorption as it moves through water, underwater photographs have colour casts and inadequate illumination. Underwater vision tasks, such as detection and recognition, may be hampered by these issues. To address these degradation difficulties, [7] offers LANet, an adaptive learning attention network for underwater image enhancement based on supervised learning. A multiscale fusion module is first suggested to merge various spatial data. Then, a brand-new parallel attention module (PAM) is created that combines pixel and channel attention, with a focus on lighted elements and more important colour information. The shallow information can then be retained by an adaptive learning module (ALM), which can then learn crucial feature information.

The effects of absorption and scattering frequently result in the degradation of underwater image quality. When utilised for analysis and display, degraded underwater photos have some restrictions. For instance, low contrast and colour cast in underwater photos reduce the accuracy of marine biological recognition and underwater item detection. A contrast enhancement algorithm and an underwater image dehazing algorithm [8] are part of a systematic underwater image enhancement technique that is proposed to get around those restrictions. The vision, colour, and natural undersea image appearance are restored using an effective underwater image dehazing approach, founded on the minimum information loss principle. An effective contrast enhancement technique is suggested that enhances the contrast and brightness of underwater images based on a histogram distribution prior. Two improved output versions can be produced using the suggested procedure. One variant, ideal for display, has fairly authentic colours and a natural appearance. The other version, which has greater brightness and contrast, can be utilised to glean more important data and reveal more specifics.

Underwater image improvement has recently received a lot of interest in both underwater vision and image processing. Enhancement of underwater images is a tough task because of the complex underwater environment and lighting circumstances. Underwater images typically suffer degradation from wavelength-dependent absorption and scattering [9], including backward scattering and forward scattering.

B. Underwater Object Tracking
Applications including deep ocean exploration, underwater robot navigation, marine life monitoring [10], and homeland and maritime security all rely heavily on underwater object tracking. These applications call for effective and precise vision-based underwater marine analytics, including methods for image augmentation, picture quality evaluation, and target tracking. Marine image/video analytics faces huge difficulties due to excessive noise and poor light conditions. Because of this, computer vision tasks like detection, recognition, and tracking are substantially more difficult in underwater situations than in open-air environments.

In [5], numerous sophisticated tracking methods are presented as a result of the availability of big annotated datasets. Depending on how tracking features are gathered and applied, the methodologies used in the majority of these trackers can be divided into different groups.


KCF, HCF, DCF, CFNet, STRCF, BACF, and fDSST are examples of trackers that fall within the category of kernelized correlation filters. Modern trackers including MDNet, SiamFC, CCOT, and ECO have primarily used the CNN feature-based technique. The foundation of older trackers like Struck is local and global feature extraction.

Fig 1: Block Diagram of GAN

III. PROPOSED METHOD

A. Generative Adversarial Networks

The field of machine learning has taken a positive interest in GANs [11] due to both their theoretical appeal and their capacity to learn a target probability distribution. To learn a nonlinear mapping between the distorted image and the non-distorted image, we suggest a GAN. The suggested network uses an end-to-end, data-driven training mechanism to produce improved output.

Here we provide an overview of GANs and their components, including the generator and discriminator networks. We also discuss the advantages of GANs over other deep learning methods, such as their ability to generate realistic and diverse samples, and review some popular GAN-based applications, such as image synthesis, image-to-image translation, and style transfer. The general design of the generative adversarial network is depicted in Figure 1. It is made up of a generator and a discriminator.

Generative Adversarial Networks (GANs) are a type of deep learning model that can learn to generate new data samples that are similar to a given dataset. GANs consist of two networks: a generator network and a discriminator network. The generator network takes a random noise vector as input and generates a new sample that is intended to be similar to the real data. The discriminator network takes the generated sample and the real data as input and tries to distinguish between them. The two networks are trained simultaneously, with the generator learning to produce samples that are indistinguishable from the real data, while the discriminator learns to correctly classify real and generated samples. Figure 1 shows the general working of a GAN.
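As a minimal sketch of this adversarial training loop (assuming PyTorch; gen and disc are placeholder modules in the vanilla noise-to-image setting described above, and in the image-enhancement setting the noise input would be replaced by a distorted input image):

```python
import torch
import torch.nn as nn

# One adversarial training step, as described above: the discriminator
# learns to score real images as 1 and generated images as 0, and the
# generator learns to make the discriminator output 1 on its samples.
def train_step(gen, disc, real_images, opt_g, opt_d, noise_dim=100):
    bce = nn.BCELoss()
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update (generator frozen via detach)
    fake_images = gen(torch.randn(batch, noise_dim)).detach()
    d_loss = bce(disc(real_images), real_labels) + \
             bce(disc(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator
    g_loss = bce(disc(gen(torch.randn(batch, noise_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The two optimizers are kept separate so that each network is updated only against its own objective; "trained simultaneously" means in practice that the two updates alternate within every batch.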


Fig 2: General Overview of the Proposed Framework

However, GANs also have their limitations, such as the need for large amounts of high-quality training data and the difficulty of training stable and robust models. Nevertheless, GANs are a powerful and promising deep learning technique that has shown impressive results in a wide range of applications.
B. Framework Overview
The new (duplicate) underwater images are generated using a Generative Adversarial Network (GAN). The proposed framework includes a generator and a discriminator, as seen in Figure 2. In this architecture, the generator takes its input from the dataset and generates duplicate images. The discriminator then takes the output of the generator as its input and discriminates whether the image is real or fake. The output of the discriminator is true or false (1 or 0), corresponding to real or fake respectively.

C. Generator Network
Many modules for feature extraction have been created. The paper [12] includes the popular Inception architecture, which looks for an optimal local sparse structure inside a network structure. Even so, at the conclusion of the block, these various scale characteristics are concatenated in a straightforward manner, contributing in part to the underutilization of feature maps. Following the U-Net principles [13], our generator network has been modified from UGAN. The encoder and decoder networks make up this system.

Fig 3: Architecture of the Generator

Figure 3 shows the generator network, which consists of convolutional layers, batch normalization layers, Leaky ReLU, deconvolutional layers, and a tanh function. The convolutional layer is used for feature extraction by means of a kernel or filter. Batch normalization is used to standardize each input and output of each layer. Leaky ReLU is an activation function. Tanh is an activation function that produces results in the range [-1, 1].

 Here is a General Overview of How a Generator in a GAN Works:

 Input: The generator receives random noise vectors as input, which are typically drawn from a simple probability distribution like a Gaussian distribution. These noise vectors are low-dimensional representations that capture the variability of the data.



 Transformation: The generator takes the input noise vectors and applies a series of mathematical transformations through its neural network layers. These layers gradually increase the complexity and dimensionality of the input to produce a higher-dimensional output.
 Output: The final layer of the generator typically employs an activation function, such as a sigmoid or a hyperbolic tangent, to squash the output values into the desired range. The generated output is meant to resemble the real data samples from the training set.
 Training: During the training phase of the GAN, the generator's objective is to generate synthetic samples that can deceive the discriminator, another component of the GAN architecture (discussed further below). The generator's weights are updated based on the feedback it receives from the discriminator. A sketch of such a generator network, in the image-to-image form used in this work, is given below.
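The following is a minimal sketch of such a generator in the spirit of Figure 3, assuming PyTorch. The layer types (convolution, batch normalization, Leaky ReLU, deconvolution, tanh) and the U-Net-style skip connections follow the description above; the channel counts and depth are illustrative assumptions, not the exact UGAN-derived configuration, and the input here is a distorted RGB image as in Section C rather than a raw noise vector.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder generator: conv/BN/LeakyReLU down, deconv up, tanh out."""
    def __init__(self):
        super().__init__()
        def down(c_in, c_out):   # convolutional feature extraction
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2))
        def up(c_in, c_out):     # deconvolutional upsampling
            return nn.Sequential(
                nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                nn.BatchNorm2d(c_out),
                nn.LeakyReLU(0.2))
        self.e1, self.e2, self.e3 = down(3, 64), down(64, 128), down(128, 256)
        self.d1, self.d2 = up(256, 128), up(256, 64)
        self.out = nn.Sequential(
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1),
            nn.Tanh())           # outputs in [-1, 1]

    def forward(self, x):
        x1 = self.e1(x)                           # H/2, 64 channels
        x2 = self.e2(x1)                          # H/4, 128 channels
        x3 = self.e3(x2)                          # H/8, 256 channels
        y = torch.cat([self.d1(x3), x2], dim=1)   # U-Net skip, 256 channels
        y = torch.cat([self.d2(y), x1], dim=1)    # U-Net skip, 128 channels
        return self.out(y)                        # back to H, 3 channels
```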
D. Discriminator Network

Fig 4: Architecture of the Discriminator

Figure 4 shows the discriminator network. The proposed discriminator network includes four layers. It consists of convolutional layers, batch normalization, Leaky ReLU, and a sigmoid function. The convolutional layer is used for feature extraction by means of a kernel or filter map. Batch normalization is used to standardize each input and output of each layer. Leaky ReLU is an activation function. The sigmoid activation function is the last layer of the discriminator, which predicts the output as true or false.

 Here is a General Overview of How a Discriminator in a GAN Works:

 Input: The discriminator takes as input either real data samples or generated data samples from the generator.


 Network Layers: The input data is processed through a series of layers in the discriminator network. These layers can include convolutional or fully connected layers, depending on the architecture chosen.
 Feature Extraction: The discriminator's layers gradually extract meaningful features from the input data, which allow it to differentiate between real and fake samples. These features capture the distinguishing characteristics of the data, such as texture, shape, or patterns.
 Activation Function: Typically, the layers in the discriminator are followed by an activation function, such as the sigmoid or softmax function. This activation function squashes the output values to a specific range (e.g., [0, 1] for sigmoid) and provides a probability score that represents the discriminator's confidence in classifying the input as real or fake.
 Binary Classification: The output of the discriminator is interpreted as a binary classification decision: either the input data is classified as real (assigned a label of 1) or fake (assigned a label of 0). The discriminator's objective is to accurately distinguish between real and fake samples.
 Training: During the training phase of the GAN, the discriminator is trained independently from the generator. It is presented with labeled real and fake samples and is optimized to correctly classify them. The weights of the discriminator are updated based on a loss function, such as binary cross-entropy, which measures the difference between the predicted labels and the true labels. A minimal sketch of such a discriminator network is given below.
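The sketch below, assuming PyTorch, follows the four-layer conv/BN/LeakyReLU design of Figure 4 with a final sigmoid; the channel counts and the pooling/linear head are illustrative assumptions rather than the paper's exact configuration:

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """Four conv/BN/LeakyReLU blocks ending in a sigmoid real/fake score."""
    def __init__(self):
        super().__init__()
        def block(c_in, c_out, normalize=True):
            layers = [nn.Conv2d(c_in, c_out, 4, stride=2, padding=1)]
            if normalize:
                layers.append(nn.BatchNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2))
            return layers
        self.net = nn.Sequential(
            *block(3, 64, normalize=False),  # layer 1: feature extraction
            *block(64, 128),                 # layer 2
            *block(128, 256),                # layer 3
            *block(256, 512),                # layer 4
            nn.AdaptiveAvgPool2d(1),         # collapse spatial dimensions
            nn.Flatten(),
            nn.Linear(512, 1),
            nn.Sigmoid())                    # probability that input is real

    def forward(self, x):
        return self.net(x)                   # shape (batch, 1), in [0, 1]
```

Training this classifier with binary cross-entropy against labels 1 (real) and 0 (fake), as listed above, is exactly the discriminator half of the training step sketched in Section III-A.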
IV. DATASET

The EUVP (Enhancing Underwater Visual Perception) dataset [2] contains separate sets of paired and unpaired image samples of poor and good perceptual quality to facilitate supervised training of underwater image enhancement models. The EUVP dataset, developed by the Interactive Robotics and Vision Lab at the University of Minnesota, is designed to facilitate research in enhancing underwater visual perception. It aims to improve the visibility and quality of underwater images and videos through the development of advanced computer vision algorithms and image processing techniques.
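For supervised training on the paired portion of such a dataset, a loader along the following lines can be used. This is a sketch assuming PyTorch/torchvision and a trainA/trainB folder layout with matching filenames for distorted and enhanced images; the directory names are assumptions about the local copy of the data, not part of the dataset specification.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as T

class PairedUnderwaterDataset(Dataset):
    """Yields (distorted, enhanced) image pairs for supervised GAN training."""
    def __init__(self, root, size=256):
        self.dir_a = os.path.join(root, "trainA")   # distorted inputs
        self.dir_b = os.path.join(root, "trainB")   # enhanced ground truth
        self.files = sorted(os.listdir(self.dir_a))
        self.tf = T.Compose([
            T.Resize((size, size)),
            T.ToTensor(),
            T.Normalize([0.5] * 3, [0.5] * 3),      # map to [-1, 1] for tanh
        ])

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]
        a = Image.open(os.path.join(self.dir_a, name)).convert("RGB")
        b = Image.open(os.path.join(self.dir_b, name)).convert("RGB")
        return self.tf(a), self.tf(b)
```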
V. COMPARISON STUDY WITH OTHER METHODS

The proposed framework is compared to three other GAN-based methods for underwater image enhancement. The comparison methods are FUnIE-GAN [2], FUnIE-GAN-UP [2], and UGAN [14].

We evaluate our results using two different approaches: qualitative comparison and quantitative comparison. Figure 6 shows the qualitative and quantitative evaluation of an underwater image, with SSIM and PSNR as the quantitative metrics used for the evaluation.
A. Qualitative Analysis
The qualitative comparison involves assessing the visual quality and realism of the generated samples. It focuses on subjective judgments made by human evaluators rather than relying on quantitative metrics. Figure 5 shows the qualitative comparison of our proposed method with other deep learning methods.

Fig 5: Qualitative Comparison with Other Methods


Table II: Quantitative Comparison with the EUVP Dataset

Model             PSNR    SSIM
FUnIE-GAN         21.92   0.88
FUnIE-GAN-UP      21.36   0.81
UGAN              19.59   0.66
Ours              23.92   0.92

B. Quantitative Analysis

The quantitative comparison involves using objective metrics to measure various characteristics of the generated samples. These metrics provide numerical scores or measurements that can be used to compare different models or track progress during training. Following [2], the two commonly used metrics in the quantitative evaluation are SSIM and PSNR. Table II shows the PSNR and SSIM values obtained from the different methods. Based on this analysis, it is evident that our method demonstrates better performance than the other previously used methods, as evidenced by its higher PSNR and SSIM values.

 SSIM: SSIM is a metric that quantifies the similarity between two images, considering their structural information. It measures perceptual similarity by evaluating three components: luminance, contrast, and structure. SSIM [15] produces a score between 0 and 1, where 1 indicates perfect similarity.

Fig 6: SSIM and PSNR Values of the Proposed Method
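For reference, the standard SSIM definition from [15], whose terms the following paragraph explains, is:

```latex
\mathrm{SSIM}(x, y) =
  \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}
       {(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}
```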

where µx (µy) denotes the mean and σx² (σy²) the variance of x (y), and σxy denotes the cross-correlation between x and y. Additionally, c1 = (255 × 0.01)² and c2 = (255 × 0.03)² are constants that ensure numeric stability.

SSIM takes into account both low-level pixel-wise comparisons and higher-level structural comparisons. It considers how well the local structures, textures, and edges of the generated image match those of the ground truth (real) image. Higher SSIM scores indicate higher visual similarity.
 PSNR: PSNR [16] is a metric commonly used to measure the quality of reconstructed or generated images by comparing them to a reference image. It measures the ratio of the peak power of the signal [17] (the maximum possible value) to the noise power (the difference between the generated and reference images). The PSNR approximates the reconstruction quality of a generated image x compared to its ground truth y based on their Mean Squared Error (MSE). PSNR is expressed in decibels (dB) and provides a quantitative assessment of image fidelity. Higher PSNR values indicate lower perceptual differences between the generated and reference images, implying higher image quality.
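For 8-bit images this relationship is commonly written as:

```latex
\mathrm{PSNR}(x, y) = 10 \log_{10}\frac{255^2}{\mathrm{MSE}(x, y)}
```

Both evaluation metrics can be computed, for example, with scikit-image (a sketch assuming 8-bit RGB images as NumPy arrays; channel_axis requires scikit-image >= 0.19):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(enhanced: np.ndarray, ground_truth: np.ndarray):
    # Compare an enhanced output against its reference image.
    psnr = peak_signal_noise_ratio(ground_truth, enhanced, data_range=255)
    ssim = structural_similarity(ground_truth, enhanced,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```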
VI. CONCLUSION

This paper focused on exploring the application of Generative Adversarial Networks (GANs) for underwater image enhancement. The objective was to improve the visual quality and clarity of underwater images, which often suffer from poor visibility, color distortion, and low contrast. Through implementation and experimentation with GAN-based approaches, significant advancements have been made in underwater image enhancement.


Fig 7: Output of our Proposed Method

The project began by conducting a comprehensive literature review to understand the existing techniques and challenges in underwater image enhancement. Various GAN architectures and image enhancement algorithms were studied to identify the most suitable approach for the task. The selected GAN model was trained using a large dataset of underwater images, aiming to learn the underlying structure and characteristics of the underwater environment.

The evaluation of the proposed GAN-based underwater image enhancement approach involved both qualitative and quantitative measures. Qualitative evaluation involved visual inspection and subjective assessment of the enhanced images by experts and users, while quantitative evaluation utilized established metrics such as the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). The results indicated a significant improvement in image quality and confirmed the superiority of the proposed GAN-based method compared to traditional image enhancement techniques.

REFERENCES

[1]. K. Panetta, L. Kezebou, V. Oludare, and S. Agaian, “Comprehensive underwater object tracking benchmark dataset and underwater image enhancement with GAN,” IEEE Journal of Oceanic Engineering, vol. 47, no. 1, pp. 59–75, 2021.
[2]. M. J. Islam, Y. Xia, and J. Sattar, “Fast underwater image enhancement for improved visual perception,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3227–3234, 2020.
[3]. Y. Guo, H. Li, and P. Zhuang, “Underwater image enhancement using a multiscale dense generative adversarial network,” IEEE Journal of Oceanic Engineering, vol. 45, no. 3, pp. 862–870, 2019.
[4]. P. Hambarde, S. Murala, and A. Dhall, “UW-GAN: Single-image depth estimation and image enhancement for underwater images,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–12, 2021.
[5]. A. Dudhane, P. W. Patil, and S. Murala, “An end-to-end network for image de-hazing and beyond,” IEEE Transactions on Emerging Topics in Computational Intelligence, 2020.


[6]. S. Wu, T. Luo, G. Jiang, M. Yu, H. Xu, Z. Zhu, and Y. Song, “A two-stage underwater enhancement network based on structure decomposition and characteristics of underwater imaging,” IEEE Journal of Oceanic Engineering, vol. 46, no. 4, pp. 1213–1227, 2021.
[7]. S. Liu, H. Fan, S. Lin, Q. Wang, N. Ding, and Y. Tang,
“Adaptive learning attention network for underwater
image enhancement,” IEEE Robotics and Automation
Letters, vol. 7, no. 2, pp. 5326–5333, 2022.
[8]. C.-Y. Li, J.-C. Guo, R.-M. Cong, Y.-W. Pang, and B.
Wang, “Underwater image enhancement by dehazing
with minimum information loss and histogram
distribution prior,” IEEE Transactions on Image
Processing, vol. 25, no. 12, pp. 5664–5677, 2016.
[9]. C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An underwater image enhancement benchmark dataset and beyond,” IEEE Transactions on Image Processing, vol. 29, pp. 4376–4389, 2019.
[10]. M. Han, Z. Lyu, T. Qiu, and M. Xu, “A review on
intelligence dehazing and color restoration for
underwater images,” IEEE Transactions on Systems,
Man, and Cybernetics: Systems, vol. 50, no. 5, pp.
1820–1832, 2018.
[11]. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” Communications of the ACM, vol. 63, no. 11, pp. 139–144, 2020.
[12]. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[13]. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, Springer, 2015.
[14]. C. Fabbri, M. J. Islam, and J. Sattar, “Enhancing underwater imagery using generative adversarial networks,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7159–7165, IEEE, 2018.
[15]. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
[16]. A. Ignatov, N. Kobyshev, R. Timofte, K. Vanhoey, and L. Van Gool, “DSLR-quality photos on mobile devices with deep convolutional networks,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 3277–3285, 2017.
[17]. Y.-S. Chen, Y.-C. Wang, M.-H. Kao, and Y.-Y. Chuang, “Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6306–6314, 2018.
