Review
A Survey of Deep Learning-Based Low-Light Image Enhancement
Zhen Tian 1,2 , Peixin Qu 1,2, *, Jielin Li 1,2 , Yukun Sun 1,2 , Guohou Li 1,2 , Zheng Liang 3 and Weidong Zhang 1,2
1 School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China;
[email protected] (Z.T.); [email protected] (J.L.); [email protected] (Y.S.);
[email protected] (G.L.); [email protected] (W.Z.)
2 Institute of Computer Applications, Henan Institute of Science and Technology, Xinxiang 453003, China
3 School of Internet, Anhui University, Hefei 230039, China; [email protected]
* Correspondence: [email protected]
Abstract: Images captured under poor lighting conditions often suffer from low brightness, low contrast, color distortion, and noise. The function of low-light image enhancement is to improve the visual effect of such images for subsequent processing. With the development of artificial intelligence technology, deep learning has recently been applied ever more widely in image processing, and we provide a comprehensive review of the field of low-light image enhancement in terms of network structure, training data, and evaluation metrics. In this paper, we systematically introduce deep learning-based low-light image enhancement in four aspects. First, we introduce the related deep learning-based low-light image enhancement methods. We then describe the low-light image quality evaluation methods and organize the low-light image datasets, and we finally compare and analyze the advantages and disadvantages of the related methods and give an outlook on future development directions.
1. Introduction

Due to the development of technology and the continuous improvement of photographic equipment, we have ever higher requirements for the quality of the images we capture, yet we often have difficulty obtaining suitable images because of the interference of environmental factors. Uneven lighting, low lighting, and other factors such as backlighting can result in imperfect image information, diminishing the overall quality of captured images. Figure 1 shows an example of an image under suboptimal lighting conditions. Consequently, these issues can have a cascading effect on advanced tasks such as object recognition, detection, and classification. As artificial intelligence technologies continue to evolve, the associated industries are also changing, and thus the requirements for related downstream tasks are increasing. The quality of tasks completed in the image processing area [1–5] can greatly affect the efficiency of upstream tasks.

In daily life, we often encounter uncontrollable environmental or equipment factors that cause uneven lighting, darkness, backlighting, and blurring of captured images [6–9]. Nevertheless, the demand for high-quality images remains. Superior image quality is crucial for everyday scenarios and holds significant importance across various sectors [10–14], including intelligent transportation and vision monitoring. Therefore, the quality enhancement of images has become a subject worthy of further exploration.

The enhancement of low-light images holds a significant role in image processing. It involves improving the visual quality of images captured in low-light conditions by adjusting contrast and brightness levels, thus improving visibility [15–17]. Conventional techniques for enhancing low-light images often revolve around statistical learning, including approaches such as local exposure compensation algorithms. While these methods can enhance image brightness effectively, they may concurrently introduce undesired noise and distortion. Traditional methods also have further limitations [18–20]. The common assumption of adopting the reflection component as the enhancement result holds inconsistently, particularly when diverse lighting attributes are factored in, and can lead to unrealistic enhancements such as omitted details and color distortion. Additionally, such models overlook noise, allowing it to persist or be amplified in the enhanced output [21–25].
In recent years, deep learning methods have made significant advancements in various
fields, particularly in image processing tasks. Unlike traditional approaches, deep learning
techniques place a stronger emphasis on capturing the spatial features in an image, allowing
for better preservation of details and increased resistance to noise [26–29]. Deep learning
has demonstrated remarkable achievements in various fields, including enhancing low-
light images. In comparison with conventional approaches, deep learning-based solutions
for improving low-light image quality have gained substantial attention due to their
enhanced precision, robustness, and efficiency. The existing deep learning techniques for low-light image enhancement establish a mapping between a low-light input image and a correspondingly enhanced output image by designing a network structure. However, this approach can result in a high dependency on the training data, which limits its effectiveness to some extent.
This paper focuses on employing deep learning techniques for enhancing low-light
images while offering an extensive assessment and analysis of current methods within this
domain. The noteworthy contributions of this study encompass the following:
(1) We systematically classify and summarize the deep learning-based low-light image
enhancement methods proposed in recent years, introduce the core ideas of the mentioned
algorithms in detail, and provide an insightful analysis of the problems of and possible
solutions for the existing methods.
(2) We summarize the datasets in the field of low-light image enhancement in detail,
including the sources, characteristics, and application scenarios of the datasets. We also
provide a comprehensive comparison of different datasets and discuss their respective
advantages and disadvantages.
(3) We analyze the advantages and disadvantages of some existing methods through
experimental comparisons, present possible problems, and look forward to future re-
search directions.
Table 1. Summary of the basic characteristics of representative methods based on deep learning.

| Year | Method | Network Structure | Training Data | Test Data | Evaluation Metrics | Platform |
|------|--------|-------------------|---------------|-----------|--------------------|----------|
| 2017 | LLNet [30] | SSDA | Simulated by gamma correction and Gaussian noise | Simulated, self-selected | PSNR, SSIM | Theano |
| 2018 | LightenNet [31] | Four layers | Simulated by random illumination values | Simulated, self-selected | PSNR, MAE, SSIM, user study | Caffe, MATLAB |
| 2018 | Retinex-Net [32] | Multi-scale network | LOL, simulated by adjusting histogram | Self-selected | - | TensorFlow |
| 2018 | MBLLEN [33] | Multi-branch fusion | Simulated by gamma correction and Poisson noise | Simulated, self-selected | PSNR, SSIM, AB, VIF, LOE, TOMI | TensorFlow |
| 2018 | SICE [34] | Frequency decomposition | SICE | SICE | PSNR, FSIM, runtime, FLOPs | Caffe, MATLAB |
| 2019 | KinD [35] | Three subnetworks, U-Net | LOL | LOL, LIME, NPE, MEF | PSNR, SSIM, LOE, NIQE | TensorFlow |
| 2019 | EnlightenGAN [36] | U-Net-like network | Unpaired real images | NPE, LIME, MEF, DICM, VV, BBD-100K, ExDark | User study, NIQE, classification | PyTorch |
| 2019 | ExCNet [37] | Fully connected layers | Real images | IEpxD | User study, CDIQA, LOD | PyTorch |
| 2019 | DeepUPE [38] | Illumination map | Retouched image pairs | MIT-Adobe FiveK | PSNR, SSIM, user study | TensorFlow |
| 2020 | Zero-DCE [39] | U-Net-like network | SICE | SICE, NPE, LIME, MEF, DICM, VV, DARK FACE | User study, PI, PSNR, SSIM, MAE, runtime, face detection | PyTorch |
| 2020 | DRBN [40] | Recursive network | LOL, images selected by MOS | LOL | PSNR, SSIM, SSIM-GC | PyTorch |
| 2020 | EEMEFN [41] | U-Net-like network, edge detection network | SID | SID | PSNR, SSIM | TensorFlow, Paddle |
| 2020 | TBEFN [42] | Three stages, U-Net-like network | SCIE, LOL | SCIE, LOL, DICM, MEF, NPE, VV | PSNR, SSIM, NIQE, runtime, P, FLOPs | TensorFlow |
| 2020 | DSLR [43] | Laplacian pyramid, U-Net-like network | MIT-Adobe FiveK | MIT-Adobe FiveK, self-selected | PSNR, SSIM, NIQMC, NIQE, BTMQI, CaHDC | PyTorch |
| 2021 | RUAS [44] | Neural architecture search | LOL, MIT-Adobe FiveK | LOL, MIT-Adobe FiveK | PSNR, SSIM, runtime, P, FLOPs | PyTorch |
| 2021 | Zero-DCE++ [45] | U-Net-like network | SICE | SICE, NPE, LIME, MEF, DICM, VV, DARK FACE | User study, PI, PSNR, SSIM, P, MAE, runtime, face detection, FLOPs | PyTorch |
| 2021 | DRBN [46] | Recursive network | LOL | LOL | PSNR, SSIM, SSIM-GC | PyTorch |
| 2021 | RetinexDIP [47] | Encoder-decoder networks | - | DICM, ExDark, Fusion, LIME, NASA, NPE, VV | NIQE, NIQMC, CPCQI | PyTorch |
| 2021 | PRIEN [48] | Recursive network | MEF, LOL, simulated by adjusting histogram | LOL, LIME, NPE, MEF, VV | PSNR, SSIM, LOE, TMQI | PyTorch |
| 2022 | SCI [49] | Self-calibrated illumination network | MIT, LOL, LSRW, DARK FACE | MIT, LSRW, DARK FACE, ACDC | PSNR, SSIM, DE, EME, LOE, NIQE | PyTorch |
| 2022 | LEDNet [15] | Encoder-decoder networks | LOL-Blur | LOL-Blur | PSNR, SSIM, MUSIQ, NRQM, NIQE | PyTorch |
| 2022 | REENet [50] | Three subnetworks | SID | SID | PSNR, SSIM, VIF, NIQE, LPIPS | TensorFlow |
| 2022 | LANNet [51] | U-Net-like network | LOL, SID | LOL, SID | PSNR, SSIM, GMSD, NLPD, NIQE, DISTS | PyTorch |
| 2023 | LPDM [52] | Diffusion model | LOL | LIME, DICM, MEF, NPE | SSIM, PSNR, MAE, LPIPS, NIQE, BRISQUE, SPAQ | PyTorch |
| 2023 | FLW-Net [53] | Two-stage network | LOL-V1, LOL-V2 | LOL-V1, LOL-V2 | PSNR, SSIM, NIQE | PyTorch |
| 2023 | NeRCo [54] | Encoder-decoder networks | LSRW | LOL, LSRW, LIME | PSNR, SSIM, NIQE, LOE | PyTorch |
| 2023 | SKF [55] | Encoder-decoder networks | - | LOL, LOL-v2, MEF, LIME, NPE, DICM | PSNR, SSIM, LPIPS, NIQE | PyTorch |
enhancement. Figure 3 provides a flow chart of the CNN-based method for the non-physical model. By leveraging various techniques and models, researchers are continuously advancing the field of image enhancement, addressing challenges related to contrast, noise, and the preservation of image details.
adversarial networks. This network is trained using unpaired low-light and normal-light images. To improve visual quality and address issues such as noise and color bias, they added an illumination-aware attention module to improve feature extraction. Additionally, a new invariant loss is introduced to tackle overexposure problems, allowing the network to adaptively enhance low-light images. These methods highlight the use of GANs for unpaired image translation and low-light image enhancement. By incorporating cycle consistency losses and attention modules, researchers have made significant advancements in preserving semantic information, reducing noise and color bias, and improving the visual quality of translated and enhanced images. Yan et al. [69] introduced
a low-light image enhancement method that leverages an optimization-enhanced enhance-
ment network module within the generative adversarial network (GAN) framework. This
method utilizes an enhancement network to input images into a generator, generating
similar images in a new space. Subsequently, a loss function is constructed and minimized
to train a discriminator, which then compares the generated images with real images to
enhance the network’s performance. Similarly, You et al. [70] introduced Cycle-CBAM,
a retinal image enhancement technique built upon the foundation of Cycle-GAN. This
method aims to elevate the quality of fundus images from lower to higher levels without
necessitating paired training data. To tackle challenges posed by texture information loss
and detail degradation due to unpaired image training, Cycle-GAN is augmented by the
integration of the Convolutional Block Attention Module (CBAM). These strategies highlight
the utilization of GAN-based methodologies in enhancing both low-light and retinal images.
By optimizing the enhancement network module and incorporating attention mechanisms,
researchers strive to enhance the quality and fidelity of the resulting enhanced images.
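To make the adversarial training described above concrete, the following is a minimal sketch of one GAN training step for unpaired low-light enhancement, assuming PyTorch. The tiny Generator and Discriminator modules, layer sizes, and hyperparameters are illustrative placeholders, not the architectures or settings of any cited method.

```python
# A minimal sketch of one adversarial training step for unpaired low-light
# enhancement. The tiny Generator/Discriminator below are illustrative
# placeholders, not the architectures of the cited methods.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy enhancement network: maps a low-light image to an enhanced one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # keep output in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy critic: outputs patch-level real/fake logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(low, normal):
    """One step with unpaired batches of low-light and normal-light images."""
    # Discriminator: distinguish real normal-light images from generated ones.
    fake = G(low).detach()  # detach so this step only updates D
    d_real, d_fake = D(normal), D(fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator: produce enhanced images that the discriminator scores as real.
    g_logits = D(G(low))
    loss_g = bce(g_logits, torch.ones_like(g_logits))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Random tensors stand in for image batches here.
print(train_step(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)))
```

In practice, the methods above add further terms, such as cycle consistency, identity invariant, or attention-guided losses, on top of this basic adversarial objective.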
Overall, the advantages of GAN-based low-light image enhancement methods lie first in the fact that they are good at generating realistic enhanced images. Second, they can learn higher-level image features, which can be used to recover image details and information. In addition, GANs are able to perform complex nonlinear modeling and capture complex features. Lastly, GANs are able to generate diverse images, which makes the results more varied.
Although GAN-based low-light image enhancement methods have many advantages, some significant drawbacks point to directions for future work. The first is that the training process of a GAN is complex, and the problem of unstable training may occur. The second is that training may suffer from mode collapse, which can lead to a lack of diversity in the results. In addition, a GAN requires a certain amount of data to support model training. Finally, the performance of a GAN is influenced by various hyperparameters, which need to be carefully tuned to keep training stable.
$$ l(m,n) = \frac{2\mu_m \mu_n + C_1}{\mu_m^2 + \mu_n^2 + C_1}, \quad c(m,n) = \frac{2\sigma_m \sigma_n + C_2}{\sigma_m^2 + \sigma_n^2 + C_2}, \quad s(m,n) = \frac{\sigma_{mn} + C_3}{\sigma_m \sigma_n + C_3}. \tag{1} $$
In the above equations, l(m, n) uses the means to estimate luminance, c(m, n) uses the standard deviations to estimate contrast, and s(m, n) uses the covariance to estimate structural similarity. Here, µm and µn represent the means of m and n, respectively, σm and σn represent the standard deviations of m and n, respectively, σmn represents the covariance of m and n, and C1, C2, and C3 are constants. SSIM is therefore defined as
$$ \mathrm{SSIM}(m,n) = \frac{(2\mu_m \mu_n + C_1)(2\sigma_{mn} + C_2)}{(\mu_m^2 + \mu_n^2 + C_1)(\sigma_m^2 + \sigma_n^2 + C_2)}, \tag{2} $$
where SSIM takes a value between 0 and 1. When two images are identical, the SSIM value
is one.
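As an illustration of Equations (1) and (2), the following is a minimal sketch that computes SSIM from global image statistics in NumPy. The constants C1 and C2 follow the common choice from Wang et al. [71] (K1 = 0.01, K2 = 0.03); note that the standard SSIM averages these statistics over local sliding windows, so this global variant only illustrates the formula.

```python
# A minimal sketch of SSIM computed from global image statistics, following
# Eqs. (1) and (2). The standard SSIM uses local sliding windows; this global
# version only illustrates the formula.
import numpy as np

def ssim_global(m, n, data_range=1.0):
    """SSIM between two grayscale images from their global statistics."""
    c1 = (0.01 * data_range) ** 2  # the usual constants C1, C2
    c2 = (0.03 * data_range) ** 2
    mu_m, mu_n = m.mean(), n.mean()
    sigma_m, sigma_n = m.std(), n.std()
    sigma_mn = ((m - mu_m) * (n - mu_n)).mean()  # covariance of m and n
    return ((2 * mu_m * mu_n + c1) * (2 * sigma_mn + c2)) / \
           ((mu_m**2 + mu_n**2 + c1) * (sigma_m**2 + sigma_n**2 + c2))

img = np.random.rand(64, 64)
print(ssim_global(img, img))  # identical images -> 1.0
print(ssim_global(img, np.clip(img + 0.1 * np.random.rand(64, 64), 0, 1)))
```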
The PSNR is a widely used objective criterion for evaluating images in engineering
projects. It measures the ratio between the maximum signal and the background noise and
is commonly employed to assess the amount of information loss in a compressed image
compared with the original. The PSNR is measured in decibels (dB), with higher values
indicating better image quality (i.e., less noise). The range of PSNR values is from 0 to
positive infinity.
The PSNR is often calculated using the mean squared error (MSE). Consider a pair of monochrome images of size x × y, labeled I and J, where I signifies an unadulterated original image and J denotes a noisy approximation of I (e.g., I as the uncompressed source image and J as its compressed version). Their mean squared error is calculated as follows:

$$ \mathrm{MSE} = \frac{1}{xy} \sum_{i=1}^{x} \sum_{j=1}^{y} \big(I(i,j) - J(i,j)\big)^2. \tag{3} $$
$$ \mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right) = 20 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}_I}{\sqrt{\mathrm{MSE}}}\right), \tag{4} $$
where MAX_I is the maximum pixel value of the image and the PSNR is measured in decibels (dB). A higher PSNR value indicates better image quality. Generally speaking, PSNR values above 40 dB are considered excellent, meaning the image quality is very close to the original. PSNR values between 30 dB and 40 dB typically indicate good image quality, with detectable but acceptable distortion. PSNR values between 20 dB and 30 dB signify poor image quality, and values below 20 dB indicate unacceptable image quality.
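The following is a direct NumPy implementation of Equations (3) and (4), assuming 8-bit images so that MAX_I = 255; the identical-image case (MSE = 0) is handled explicitly, since the PSNR is then unbounded.

```python
# A direct implementation of Eqs. (3) and (4) for 8-bit images (MAX_I = 255).
import numpy as np

def psnr(i, j, max_i=255.0):
    """Peak signal-to-noise ratio in dB between reference i and distorted j."""
    mse = np.mean((i.astype(np.float64) - j.astype(np.float64)) ** 2)  # Eq. (3)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_i**2 / mse)  # Eq. (4)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"{psnr(ref, noisy):.2f} dB")  # roughly 34 dB for sigma = 5 noise
```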
where f (m, n) denotes the grayscale value of the image f corresponding to the pixel (m, n)
and B( f ) is the result of the image sharpness calculation.
The Tenengrad gradient function applies the Sobel operator to capture gradients both horizontally and vertically. Within this framework, image sharpness follows the standard Tenengrad criterion, summing the squared gradient magnitudes that exceed a threshold:

$$ B(f) = \sum_{m}\sum_{n} \left(E_m^2(m,n) + E_n^2(m,n)\right), \quad \text{for } \sqrt{E_m^2(m,n) + E_n^2(m,n)} > Q, $$

where Q is the given edge detection threshold and Em and En are the convolutions of the Sobel horizontal and vertical edge detection operators at pixel point (m, n), respectively.
The information entropy function stands as a crucial metric for assessing the information abundance within an image. As established by information theory, the information held within an image f is quantified through the information entropy B(f) associated with said image:

$$ B(f) = -\sum_{a=0}^{G-1} P_a \ln(P_a), \tag{8} $$
where Pa signifies the probability of encountering a pixel with the gray value a within
the image and G represents the total count of gray levels (typically set at 256). As per
Shannon’s information theory, maximal information is attained when entropy reaches its
peak. Applying this principle to image focusing, heightened B( f ) values correlate with
sharper images. It is noteworthy that the entropy function’s sensitivity is not particularly
pronounced, and due to image content variations, outcomes might occasionally deviate
from actual scenarios.
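The two no-reference measures above can be sketched as follows, assuming NumPy and SciPy; the edge detection threshold Q = 100 is an illustrative value, not one prescribed by the survey.

```python
# Sketches of the Tenengrad sharpness score built from Sobel gradients and the
# information entropy of Eq. (8). The threshold q is an illustrative value.
import numpy as np
from scipy import ndimage

def tenengrad(f, q=100.0):
    """Sum of squared Sobel gradient magnitudes exceeding threshold q."""
    f = f.astype(np.float64)
    em = ndimage.sobel(f, axis=0)  # horizontal edge response E_m
    en = ndimage.sobel(f, axis=1)  # vertical edge response E_n
    g = np.sqrt(em**2 + en**2)
    return np.sum(g[g > q] ** 2)

def entropy(f, levels=256):
    """Information entropy B(f) of Eq. (8) over the gray-level histogram."""
    hist, _ = np.histogram(f, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # ln(0) terms contribute nothing
    return -np.sum(p * np.log(p))

img = np.random.randint(0, 256, (64, 64))
print(tenengrad(img), entropy(img))
```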
4. Benchmark Dataset
Deep learning relies on deep neural networks that require extensive training with
substantial data samples to achieve generalizability in the final model. Consequently, the
dataset size significantly influences deep learning endeavors. To cater to the needs of
low-light image enhancement research, there are several publicly available datasets with
varying sizes and diverse scenarios. Table 2 provides a summary of different low-light image datasets. These include naturalness preserved enhancement (NPE), Vasileios Vonikakis (VV), the low-light dataset (LOL), and multi-exposure image fusion (MEF); Figures 5 and 6 provide examples from different datasets. Researchers can utilize these datasets to train and
evaluate their low-light image enhancement models, providing a range of options to suit
different research requirements.
Figure 6. Example of the paired LOL low-light dataset: (a) reference image; (b) low-light image.
the domain of low-light image processing, providing paired images with well-defined
exposure characteristics.
(5) LOL dataset
In 2018, Wei et al. [32] introduced the LOL dataset, a paired collection comprising
500 low-light and normal-light image pairs. These pairs were further divided into 485
training pairs and 15 test pairs for evaluation purposes. The low-light images in the dataset
accurately represent the noise typically encountered during the process of capturing pho-
tographs. The majority of the images depict indoor scenes and are sourced from diverse
scenes and devices, including cell phones and digital cameras. As a result, the dataset
encompasses a wide variety of objects captured within the images. Additionally, the LOL
dataset covers various low-light conditions, including twilight and indoor low-light scenar-
ios, providing a comprehensive representation of challenging lighting situations. To ensure
consistency and comparability, all raw images were adjusted to a standardized resolution of 400 × 600 pixels and converted to the portable network graphics (PNG) format. Its diverse range
of images and realistic representation of low-light conditions make it a valuable tool for
research and development in the field.
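As a usage illustration, the following is a minimal PyTorch Dataset sketch for paired collections such as LOL. The directory layout (subfolders named low and high containing matching filenames) mirrors the public LOL release, but treat it as an assumption to adapt to a local copy.

```python
# A minimal PyTorch Dataset sketch for paired low-/normal-light data such as
# LOL. The "low"/"high" subfolder layout is an assumption about the local copy.
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedLowLightDataset(Dataset):
    """Yields (low-light, normal-light) image pairs matched by filename."""
    def __init__(self, root):
        self.low_dir = os.path.join(root, "low")
        self.high_dir = os.path.join(root, "high")
        self.names = sorted(os.listdir(self.low_dir))
        self.to_tensor = transforms.ToTensor()  # HWC uint8 -> CHW float in [0, 1]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        low = Image.open(os.path.join(self.low_dir, name)).convert("RGB")
        high = Image.open(os.path.join(self.high_dir, name)).convert("RGB")
        return self.to_tensor(low), self.to_tensor(high)

# e.g., train_set = PairedLowLightDataset("LOLdataset/our485")
```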
(6) SICE dataset
In 2018, Cai et al. [34] presented the SICE dataset, an extensive collection of multi-
exposure images. The dataset creation process involved deploying multi-exposure fusion
(MEF) and high dynamic range (HDR) techniques to reconstruct reference images, yielding
heightened contrast and visibility improvements. To create the SICE dataset, the authors
employed 1200 sequences and applied 13 MEF and HDR algorithms, resulting in a total
of 15,600 fusion results (1200 sequences × 13 algorithms). The dataset comprises 589
meticulously selected high-resolution multiple-exposure sequences, consisting of a total of
4413 images. For each sequence, a set of contrast-enhanced images was produced using 13
diverse multiple-exposure image fusion techniques and a stack-based high dynamic range
imaging algorithm. Following this, subjective evaluations were carried out to determine
the optimal reference images for each scene.
(7) ExDark dataset
In 2019, Loh et al. [75] introduced the Exclusively Dark (ExDark) dataset, consisting
of 7363 low-light images. The dataset covers a range of low-light conditions from very
low-light environments to twilight, encompassing 10 different lighting conditions. The im-
ages were captured in various real-world scenarios using a diverse array of devices and
cameras, providing a wide representation of different scenes and shooting conditions.
These low-light images are subject to multiple factors, including insufficient lighting, noise,
and blur, which further contribute to the challenges associated with low-light photography.
The dataset includes annotations at both the image class level and the local object bounding
box level for 12 object classes, such as bicycle, car, cat, dog, chair, and cup. The ExDark
dataset serves as a valuable resource for researchers and practitioners in the field of low-
light image analysis. Its comprehensive collection of low-light images, diverse lighting
conditions, and detailed object annotations provide an excellent foundation for developing
and evaluating algorithms related to object recognition, localization, and other tasks in
challenging low-light environments.
(8) RELLISUR dataset
In 2021, Aakerberg et al. [76] introduced the RELLISUR dataset, a large-scale paired
dataset specifically designed for low-light and low-resolution image enhancement tasks.
The RELLISUR dataset comprises 12,750 paired images, consisting of real low-light and
low-resolution images paired with high-resolution reference images captured under normal
lighting conditions. The dataset covers a wide range of resolutions and low-light levels, enabling the exploration and training of deep learning models that can effectively enhance the quality and resolution of low-light images, bridging the gap between these two important image enhancement tasks.
(9) LLIV-Phone dataset
In 2021, Li et al. [77] introduced the LLIV-Phone dataset, a comprehensive and challeng-
ing dataset specifically designed for low-illumination image and video analysis. The LLIV-
Phone dataset comprises 120 videos and 45,148 images captured using 18 different cell
phone cameras. The dataset covers a wide range of indoor and outdoor scenes with diverse
lighting conditions, including low light, underexposure, moonlight, dusk, darkness, ex-
treme darkness, backlighting, non-uniform lighting, and colored lighting. These real-world
scenes present various challenges associated with low-light conditions. It offers a wide
variety of low-light images and videos collected from real scenes, making it suitable for
testing and comparing the performance of different enhancement algorithms. By providing
a comprehensive collection of real low-illumination images and videos, the LLIV-Phone
dataset significantly contributes to the advancement of research in low-light image and
video enhancement, enabling the development and evaluation of robust algorithms in
this domain.
Figure 5. Examples of low-light datasets: NPE, MEF, VV, and ExDark.
in this paper include the PSNR and SSIM. These quantitative evaluation metrics enable a
quantitative assessment of image enhancement algorithms, allowing for direct comparisons
and providing a basis for performance analysis.
Tables 3–5 provide a quantitative analysis of several different methods, using the
PSNR, SSIM, and NIQE as evaluation metrics. For the PSNR and SSIM, higher values indicate better results, reflecting an improvement in image quality, whereas for the NIQE, smaller values indicate better perceived quality. Upon reviewing the tables, it can be observed that the SCI-easy [49]
algorithm consistently achieved excellent results for the PSNR, SSIM, and NIQE. The algo-
rithm’s performance surpassed that of the other methods evaluated, demonstrating better
quantitative metrics in terms of image quality. These quantitative assessments provide
valuable insights into the algorithms’ performance and their ability to enhance image
quality. It is important to consider these metrics along with qualitative evaluations and
other factors to gain a comprehensive understanding of the algorithms’ effectiveness in
different aspects.
Table 3. Quantitative comparison of different deep learning low-light image enhancement algorithms
in terms of PSNR.
Table 4. Quantitative comparison of different deep learning low-light image enhancement algorithms
in terms of SSIM.
Table 5. Quantitative comparison of different deep learning low-light image enhancement algorithms
in terms of NIQE.
Author Contributions: Conceptualization, W.Z. and P.Q.; methodology, Z.T.; software, Z.T. and
J.L.; validation, Z.T. and Y.S.; formal analysis, G.L. and Z.L.; investigation, Z.T., J.L. and Y.S.;
writing—original draft preparation, Z.T. and P.Q.; writing—review and editing, Z.T. and W.Z.;
visualization, Z.T. All authors have read and agreed to the published version of the manuscript.
Funding: This work was supported in part by the Natural Science Foundation of Henan Province under Grants 232300420428 and 212300410345, the Major Special Project of Xinxiang City under Grant 21ZD003, the Key Specialized Research and Development Program of Science and Technology of Henan Province under Grants 232102111127, 232102210058, and 232102210018, and the Innovation Training Program for College Students of Henan Province under Grants 202310467015, 202310467007, and 202310467031.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are publicly available data (sources
stated in the citations). Please contact the corresponding author regarding data availability.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Zhang, W.; Zhuang, P.; Sun, H.-H.; Li, G.; Kwong, S.; Li, C. Underwater Image Enhancement via Minimal Color Loss and Locally
Adaptive Contrast Enhancement. IEEE Trans. Image Process. 2022, 31, 3997–4010. [CrossRef]
2. Zhang, Q.; Yuan, Q.; Song, M.; Yu, H.; Zhang, L. Cooperated Spectral Low-Rankness Prior and Deep Spatial Prior for HSI
Unsupervised Denoising. IEEE Trans. Image Process. 2022, 31, 6356–6368.
3. Liu, Y.; Yan, Z.; Ye, T.; Wu, A.; Li, Y. Single Nighttime Image Dehazing Based on Unified Variational Decomposition Model and
Multi-Scale Contrast Enhancement. Eng. Appl. Artif. Intell. 2022, 116, 105373.
4. Sun, H.-H.; Lee, Y.H.; Dai, Q.; Li, C.; Ow, G.; Yusof, M.L.M.; Yucel, A.C. Estimating Parameters of the Tree Root in Heterogeneous
Soil Environments via Mask-Guided Multi-Polarimetric Integration Neural Network. IEEE Trans. Geosci. Remote Sens. 2021, 60,
1–16. [CrossRef]
5. Xiong, J.; Liu, G.; Liu, Y.; Liu, M. Oracle Bone Inscriptions Information Processing Based on Multi-Modal Knowledge Graph.
Comput. Electr. Eng. 2021, 92, 107173. [CrossRef]
6. Zhang, W.; Dong, L.; Xu, W. Retinex-Inspired Color Correction and Detail Preserved Fusion for Underwater Image Enhancement.
Comput. Electron. Agric. 2022, 192, 106585.
7. Sun, H.-H.; Cheng, W.; Fan, Z. Learning to Remove Clutter in Real-World GPR Images Using Hybrid Data. IEEE Trans. Geosci.
Remote Sens. 2022, 60, 1–14.
8. Liu, Y.; Yan, Z.; Wu, A.; Ye, T.; Li, Y. Nighttime Image Dehazing Based on Variational Decomposition Model. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 640–649.
9. Zhang, W.; Li, Z.; Sun, H.-H.; Zhang, Q.; Zhuang, P.; Li, C. SSTNet: Spatial, Spectral, and Texture Aware Attention Network Using
Hyperspectral Image for Corn Variety Identification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
10. Zhou, J.; Li, B.; Zhang, D.; Yuan, J.; Zhang, W.; Cai, Z.; Shi, J. UGIF-Net: An Efficient Fully Guided Information Flow Network for
Underwater Image Enhancement. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4206117. [CrossRef]
11. Zhang, W.; Jin, S.; Zhuang, P.; Liang, Z.; Li, C. Underwater Image Enhancement via Piecewise Color Correction and Dual Prior
Optimized Contrast Enhancement. IEEE Signal Process. Lett. 2023, 30, 229–233. [CrossRef]
12. Pan, X.; Cheng, J.; Hou, F.; Lan, R.; Lu, C.; Li, L.; Feng, Z.; Wang, H.; Liang, C.; Liu, Z. SMILE: Cost-Sensitive Multi-Task Learning
for Nuclear Segmentation and Classification with Imbalanced Annotations. Med. Image Anal. 2023, 116, 102867.
13. Liu, Y.; Teng, Q.; He, X.; Ren, C.; Chen, H. Multimodal Sensors Image Fusion for Higher Resolution Remote Sensing Pan
Sharpening. IEEE Sens. J. 2022, 22, 18021–18034. [CrossRef]
14. Zhang, W.; Zhou, L.; Zhuang, P.; Li, G.; Pan, X.; Zhao, W.; Li, C. Underwater Image Enhancement via Weighted Wavelet Visual
Perception Fusion. IEEE Trans. Circuits Syst. Video Technol. 2023. [CrossRef]
15. Zhou, S.; Li, C.; Change Loy, C. Lednet: Joint Low-Light Enhancement and Deblurring in the Dark. In Proceedings of the
European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022;
pp. 573–589.
16. Chen, H.; He, X.; Yang, H.; Wu, Y.; Qing, L.; Sheriff, R.E. Self-Supervised Cycle-Consistent Learning for Scale-Arbitrary Real-World
Single Image Super-Resolution. Expert Syst. Appl. 2023, 212, 118657. [CrossRef]
17. Liu, Y.; Wang, A.; Zhou, H.; Jia, P. Single Nighttime Image Dehazing Based on Image Decomposition. Signal Process. 2021,
183, 107986. [CrossRef]
18. Yue, H.; Guo, J.; Yin, X.; Zhang, Y.; Zheng, S.; Zhang, Z.; Li, C. Salient Object Detection in Low-Light Images via Functional
Optimization-Inspired Feature Polishing. Knowl.-Based Syst. 2022, 257, 109938. [CrossRef]
19. Zhuang, P.; Wu, J.; Porikli, F.; Li, C. Underwater Image Enhancement With Hyper-Laplacian Reflectance Priors. IEEE Trans. Image
Process. 2022, 31, 5442–5455.
20. Liu, Y.; Yan, Z.; Tan, J.; Li, Y. Multi-Purpose Oriented Single Nighttime Image Haze Removal Based on Unified Variational Retinex
Model. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 1643–1657. [CrossRef]
21. Zhang, W.; Wang, Y.; Li, C. Underwater Image Enhancement by Attenuated Color Channel Correction and Detail Preserved
Contrast Enhancement. IEEE J. Ocean. Eng. 2022, 47, 718–735. [CrossRef]
22. He, J.; He, X.; Zhang, M.; Xiong, S.; Chen, H. Deep Dual-Domain Semi-Blind Network for Compressed Image Quality Enhance-
ment. Knowl.-Based Syst. 2022, 238, 107870.
23. Liu, Q.; He, X.; Teng, Q.; Qing, L.; Chen, H. BDNet: A BERT-Based Dual-Path Network for Text-to-Image Cross-Modal Person
Re-Identification. Pattern Recognit. 2023, 141, 109636. [CrossRef]
24. Huang, S.-C.; Cheng, F.-C.; Chiu, Y.-S. Efficient Contrast Enhancement Using Adaptive Gamma Correction with Weighting Distribution. IEEE Trans. Image Process. 2012, 22, 1032–1041. [CrossRef] [PubMed]
25. Wang, Q.; Fu, X.; Zhang, X.-P.; Ding, X. A Fusion-Based Method for Single Backlit Image Enhancement. In Proceedings of the
2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 4077–4081.
26. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-Revealing Low-Light Image Enhancement via Robust Retinex Model. IEEE
Trans. Image Process. 2018, 27, 2828–2841. [CrossRef] [PubMed]
27. Fu, G.; Duan, L.; Xiao, C. A Hybrid L2−Lp Variational Model for Single Low-Light Image Enhancement with Bright Channel
Prior. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September
2019; pp. 1925–1929.
28. Ren, X.; Yang, W.; Cheng, W.-H.; Liu, J. LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model. IEEE
Trans. Image Process. 2020, 29, 5862–5876. [CrossRef]
29. Ueda, Y.; Moriyama, D.; Koga, T.; Suetake, N. Histogram Specification-Based Image Enhancement for Backlit Image. In Proceedings
of the IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020;
pp. 958–962. [CrossRef]
30. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A Deep Autoencoder Approach to Natural Low-Light Image Enhancement. Pattern
Recognit. 2017, 61, 650–662. [CrossRef]
31. Li, C.; Guo, J.; Porikli, F.; Pang, Y. LightenNet: A Convolutional Neural Network for Weakly Illuminated Image Enhancement.
Pattern Recognit. Lett. 2018, 104, 15–22. [CrossRef]
32. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. arXiv 2018, arXiv:1808.04560.
33. Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-Light Image/Video Enhancement Using CNNs; BMVC: Newcastle, UK, 2018; Volume 220,
p. 4.
34. Cai, J.; Gu, S.; Zhang, L. Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images. IEEE Trans. Image Process.
2018, 27, 2049–2062. [CrossRef]
35. Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-Light Image Enhancer. In Proceedings of the 27th ACM
International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640.
36. Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Fang, C.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep Light Enhancement without Paired Supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [CrossRef]
37. Zhang, L.; Zhang, L.; Liu, X.; Shen, Y.; Zhang, S.; Zhao, S. Zero-Shot Restoration of Back-Lit Images Using Deep Internal Learning.
In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1623–1631.
38. Wang, R.; Zhang, Q.; Fu, C.-W.; Shen, X.; Zheng, W.-S.; Jia, J. Underexposed Photo Enhancement Using Deep Illumination
Estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA,
15–20 June 2019; pp. 6849–6857.
39. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-Reference Deep Curve Estimation for Low-Light Image
Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA,
13–19 June 2020; pp. 1780–1789.
40. Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light
Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA,
USA, 13–19 June 2020; pp. 3063–3072.
41. Zhu, M.; Pan, P.; Chen, W.; Yang, Y. Eemefn: Low-Light Image Enhancement via Edge-Enhanced Multi-Exposure Fusion
Network. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34,
pp. 13106–13113.
42. Lu, K.; Zhang, L. TBEFN: A Two-Branch Exposure-Fusion Network for Low-Light Image Enhancement. IEEE Trans. Multimed.
2020, 23, 4093–4105. [CrossRef]
43. Lim, S.; Kim, W. DSLR: Deep Stacked Laplacian Restorer for Low-Light Image Enhancement. IEEE Trans. Multimed. 2020, 23,
4272–4284. [CrossRef]
44. Liu, R.; Ma, L.; Zhang, J.; Fan, X.; Luo, Z. Retinex-Inspired Unrolling with Cooperative Prior Architecture Search for Low-Light
Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN,
USA, 20–25 June 2021; pp. 10561–10570.
45. Li, C.; Guo, C.; Loy, C.C. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Trans. Pattern
Anal. Mach. Intell. 2021, 44, 4225–4238. [CrossRef] [PubMed]
46. Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. Band Representation-Based Semi-Supervised Low-Light Image Enhancement:
Bridging the Gap between Signal Fidelity and Perceptual Quality. IEEE Trans. Image Process. 2021, 30, 3461–3473. [CrossRef]
47. Zhao, Z.; Xiong, B.; Wang, L.; Ou, Q.; Yu, L.; Kuang, F. RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement.
IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1076–1088. [CrossRef]
48. Li, J.; Feng, X.; Hua, Z. Low-Light Image Enhancement via Progressive-Recursive Network. IEEE Trans. Circuits Syst. Video
Technol. 2021, 31, 4227–4240. [CrossRef]
49. Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward Fast, Flexible, and Robust Low-Light Image Enhancement. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5637–5646.
50. Huang, H.; Yang, W.; Hu, Y.; Liu, J.; Duan, L.-Y. Towards Low Light Enhancement with Raw Images. IEEE Trans. Image Process.
2022, 31, 1391–1405. [CrossRef] [PubMed]
51. Wang, K.; Cui, Z.; Wu, G.; Zhuang, Y.; Qian, Y. Linear Array Network for Low-Light Image Enhancement. arXiv 2022,
arXiv:2201.08996.
52. Panagiotou, S.; Bosman, A.S. Denoising Diffusion Post-Processing for Low-Light Image Enhancement. arXiv 2023,
arXiv:2303.09627.
53. Zhang, Y.; Di, X.; Wu, J.; Fu, R.; Li, Y.; Wang, Y.; Xu, Y.; Yang, G.; Wang, C. A Fast and Lightweight Network for Low-Light Image Enhancement. arXiv 2023, arXiv:2304.02978.
54. Yang, S.; Ding, M.; Wu, Y.; Li, Z.; Zhang, J. Implicit Neural Representation for Cooperative Low-Light Image Enhancement. arXiv
2023, arXiv:2303.11722.
55. Wu, Y.; Pan, C.; Wang, G.; Yang, Y.; Wei, J.; Li, C.; Shen, H.T. Learning Semantic-Aware Knowledge Guidance for Low-Light
Image Enhancement. arXiv 2023, arXiv:2304.07039.
56. Ren, W.; Liu, S.; Ma, L.; Xu, Q.; Xu, X.; Cao, X.; Du, J.; Yang, M.-H. Low-Light Image Enhancement via a Deep Hybrid Network.
IEEE Trans. Image Process. 2019, 28, 4364–4375. [CrossRef]
57. Tao, L.; Zhu, C.; Xiang, G.; Li, Y.; Jia, H.; Xie, X. LLCNN: A Convolutional Neural Network for Low-Light Image Enhancement.
In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), IEEE, St. Petersburg, FL, USA, 10–13
December 2017; pp. 1–4.
58. Xu, K.; Yang, X.; Yin, B.; Lau, R.W. Learning to Restore Low-Light Images via Decomposition-and-Enhancement. In Proceedings
of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2281–2290.
59. Gharbi, M.; Chen, J.; Barron, J.T.; Hasinoff, S.W.; Durand, F. Deep Bilateral Learning for Real-Time Image Enhancement. ACM
Trans. Graph. (TOG) 2017, 36, 1–12. [CrossRef]
60. Shen, L.; Yue, Z.; Feng, F.; Chen, Q.; Liu, S.; Ma, J. Msr-Net: Low-Light Image Enhancement Using Deep Convolutional Network.
arXiv 2017, arXiv:1711.02488.
61. Wu, H.; Zheng, S.; Zhang, J.; Huang, K. Fast End-to-End Trainable Guided Filter. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1838–1847.
62. Wei, K.; Fu, Y.; Huang, H. 3-D Quasi-Recurrent Neural Network for Hyperspectral Image Denoising. IEEE Trans. Neural Netw.
Learn. Syst. 2020, 32, 363–375. [CrossRef]
63. Meng, Y.; Kong, D.; Zhu, Z.; Zhao, Y. From Night to Day: GANs Based Low Quality Image Enhancement. Neural Process. Lett.
2019, 50, 799–814. [CrossRef]
64. Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; Van Gool, L. Dslr-Quality Photos on Mobile Devices with Deep Convolu-
tional Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017;
pp. 3277–3285.
65. Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; Van Gool, L. Wespe: Weakly Supervised Photo Enhancer for Digital Cameras.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23
June 2018; pp. 691–700.
66. Chen, Y.-S.; Wang, Y.-C.; Kao, M.-H.; Chuang, Y.-Y. Deep Photo Enhancer: Unpaired Learning for Image Enhancement from
Photographs with Gans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT,
USA, 18–23 June 2018; pp. 6306–6314.
67. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
68. Fu, Y.; Hong, Y.; Chen, L.; You, S. LE-GAN: Unsupervised Low-Light Image Enhancement Network Using Attention Module and
Identity Invariant Loss. Knowl.-Based Syst. 2022, 240, 108010. [CrossRef]
69. Yan, L.; Fu, J.; Wang, C.; Ye, Z.; Chen, H.; Ling, H. Enhanced Network Optimized Generative Adversarial Network for Image
Enhancement. Multimed. Tools Appl. 2021, 80, 14363–14381. [CrossRef]
70. You, Q.; Wan, C.; Sun, J.; Shen, J.; Ye, H.; Yu, Q. Fundus Image Enhancement Method Based on CycleGAN. In Proceedings of
the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, Berlin,
Germany, 23–27 July 2019; pp. 4500–4503.
71. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity.
IEEE Trans. Image Process. 2004, 13, 600–612. [CrossRef]
72. Wang, S.; Zheng, J.; Hu, H.-M.; Li, B. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images.
IEEE Trans. Image Process. 2013, 22, 3538–3548. [CrossRef]
73. Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24,
3345–3356. [CrossRef] [PubMed]
74. Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to See in the Dark. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3291–3300.
75. Loh, Y.P.; Chan, C.S. Getting to Know Low-Light Images with the Exclusively Dark Dataset. Comput. Vis. Image Underst. 2019,
178, 30–42. [CrossRef]
76. Aakerberg, A.; Nasrollahi, K.; Moeslund, T.B. RELLISUR: A Real Low-Light Image Super-Resolution Dataset. In Proceedings of
the Thirty-fifth Conference on Neural Information Processing Systems-NeurIPS 2021, Online, 6–14 December 2021.
77. Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.-M.; Gu, J.; Loy, C.C. Low-Light Image and Video Enhancement Using Deep Learning:
A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 9396–9416. [CrossRef] [PubMed]
78. Chen, Z.; Jiang, Y.; Liu, D.; Wang, Z. CERL: A Unified Optimization Framework for Light Enhancement With Realistic Noise.
IEEE Trans. Image Process. 2022, 31, 4162–4172. [CrossRef]