Deep Learning Techniques for Hyperspectral Image Analysis in Agriculture: A Review
A Preprint
Mohamed Fadhlallah Guerri 1,2 , Cosimo Distante 1,2 , Paolo Spagnolo 2 , Fares Bougourzi 3 , and Abdelmalik Taleb-Ahmed 4
arXiv:2304.13880v1 [cs.CV] 26 Apr 2023
Abstract
In recent years, hyperspectral imaging (HSI) has gained considerable popularity among computer
vision researchers for its potential in solving remote sensing problems, especially in the agricultural field.
However, HSI classification is a complex task due to the high redundancy of spectral bands, limited
training samples, and the non-linear relationship between spatial position and spectral bands. Fortunately,
deep learning techniques have shown promising results in HSI analysis. This literature review
explores recent applications of deep learning approaches such as Autoencoders, Convolutional Neural
Networks (1D, 2D, and 3D), Recurrent Neural Networks, Deep Belief Networks, and Generative
Adversarial Networks in agriculture. The performance of these approaches has been evaluated
and discussed on well-known land cover datasets including Indian Pines, Salinas Valley, and Pavia
University.
Keywords Hyperspectral imaging · Deep learning · Agriculture · Convolutional Neural Network · Recurrent Neural
Network · Generative Adversarial Network
1 Introduction
In the last 20 years, there has been an increasing need to assess the quality and safeguarding of horticultural and
agricultural produce. With the advent of sophisticated agricultural technologies, this has become an indispensable aid
for farmers in managing crop health and resource utilization [1]. Traditional approaches for obtaining crop classification
results through field measurement, investigation, and statistics are time-consuming, labor-intensive, and expensive [2].
Therefore, a non-destructive, non-polluting, and quick technology such as Hyperspectral Imaging (HSI) has emerged as
a potential solution. HSI has the potential to capture multiple images across different wavelengths, enabling precise
monitoring of spatial and temporal variations in farmland. This capability facilitates rapid and accurate predictions of
crop growth [3]. HSI finds diverse applications in agriculture, ranging from crop management [4], forecasting crop
yield [5], and detecting crop diseases [6] to monitoring land usage [7], water resources [8], and soil conditions [9]. Deep
learning methods have shown promising results in many agricultural applications, enabling farmers to make crucial
decisions when needed. They offer a number of benefits over traditional Machine Learning (ML) methods, including
the ability to automatically extract highly relevant characteristics. Crop classification tasks have seen significant growth
in the use of deep learning algorithms in recent years, with several notable efforts to address this problem using
current deep learning algorithms [10].
HSI has revolutionized the way farmers approach agriculture by enabling them to make quick and informed decisions
about their crops. The integration of deep learning algorithms with HSI has enhanced the accuracy and efficiency
of crop classification and other agricultural applications, leading to a more sustainable and profitable agricultural
industry. The recent advancements in Artificial Intelligence (AI) have led to the integration of AI techniques with
various applications in research and the business world. This integration has opened up new possibilities for the
development of smart systems in horticultural, agricultural, and food domains, especially since the rise of ML, a
sub-branch of AI that deals with algorithms that learn to recognize patterns in data to make decisions [11]. The analysis
and interpretation of enormous volumes of data produced by Hyperspectral Imaging (HSI) systems present numerous
difficulties and continue to be a bottleneck in many horticultural and agricultural applications. However, AI approaches,
particularly Deep Learning (DL), can use the depth of the spectral and spatial information to identify correlations with
quality parameters when applied to HSI data [12]. DL has become increasingly important due to its superior efficiency
and quality compared to conventional machine learning models [13, 14, 15, 16]. Combining hyperspectral data with
cutting-edge AI approaches, especially DL, offers a wide range of possibilities for fresh product quality management.
Several studies have highlighted the potential of HSI in the agricultural sector, and the use of ML algorithms for data
analysis and interpretation has also been extensively explored. Table 1 summarizes the contributions of some previous
review papers on HSI in the field of agriculture.
The primary contribution of this paper is to examine the application of HSI technology and to close gaps in the study of
HSI systems. This review paper highlights the advantages of using HSI over RGB cameras, such as the substantial
quantity of data acquired in a single image, which is not visible to the human eye. Additionally, the paper discusses
the use of Deep Learning (DL), which can offer a superior efficiency and quality for product quality management. By
combining HSI data with DL techniques, this paper provides insights into new possibilities for the development of
smart systems in horticultural, agricultural, and food domains, and contributes to the advancement of the use of AI in
the industry. This paper’s main contributions can be summed up as follows:
• Firstly, this paper provides a comprehensive discussion of the general concepts and essential information
related to HSI technology and imaging methods. This discussion is intended to enhance the understanding of
the technology and its potential applications for young researchers.
• Secondly, the paper analyzes various publicly available HSI agricultural datasets, highlighting their unique
features, and how they have been utilized in agriculture.
• Thirdly, the study reviews different techniques and approaches of deep learning used for HSI, providing an
in-depth analysis of their strengths and limitations.
• Lastly, the paper presents and discusses the main applications in the field of agriculture that benefit from HSI
technology. This section provides insights into how HSI can be used to improve crop yield, quality, and safety.
Section 2 of this article provides basic information on hyperspectral imaging, its tools, and key concepts. Section 3
discusses four common methods of acquiring hyperspectral images, while Section 4 presents some publicly available
datasets. Section 5 analyzes various approaches and techniques of deep learning utilized in hyperspectral imaging
technology. In Section 6, different approaches to the application of hyperspectral imaging technology in agriculture are
discussed. Finally, the article concludes with a discussion of limitations and potential areas for future research.
Table 1: Summary of Surveys and reviews related to HSI in agriculture with comparison
2 Hyperspectral Imaging
Hyperspectral imaging (HSI or HI), also known as chemical or spectroscopic imaging, is a technique that merges
conventional imaging with spectroscopy, allowing for the simultaneous acquisition of both spatial
and spectral data from an image. Despite being originally designed for remote sensing, HSI technology’s advantages
over traditional machine vision are now evident in diverse fields, including agriculture. Optical sensing and imaging
techniques have advanced to the point where HSI is now a valuable method for technical inspection and consistency
measurement of fruits and vegetables.
The HSI system is composed of four crucial elements, namely the illumination source, primary lens, region of
interest (ROI) detector, and spectroscopic imager. Choosing an appropriate illumination source is essential for optimal
performance and reliability. Choosing an appropriate objective lens relies on the capability to focus gathered light
originating from a limited region onto the detector unit, resulting in the formation of pixels in the output image.
Achieving appropriate spatial resolution is critical in HSI systems and is determined by the optical input slot volume
with respect to wavelength and the detector component size of primary lenses [21]. The spectrograph receives the
light from the objective lenses and disperses it into separate wavelengths. This is accomplished through the use of
imaging spectrographs that rely on diffraction gratings, which consist of evenly spaced grooves and play a significant
role in scattering wavelengths [22]. In the end, the dispersed light is captured by a detector that transforms photons into
electrical signals. These signals are analyzed by a computer to determine the intensity rates of different wavelengths.
Two main types of solid-state region detectors, the charge-coupled device (CCD) and the complementary metal-oxide
semiconductor (CMOS), are utilized as image sensors [23].
HSI aims to capture the spectral range of each pixel in an image of a scene to facilitate substance identification,
target detection, and processing [24]. Images with high spectral resolution and narrow bands are typically produced
by combining imaging and spectroscopic methods, pairing 2-dimensional spatial information with 1-dimensional
spectral detail detection. Through extensive research and development, HSI has found many useful applications in the
quality assessment of precision agriculture. HSI combines spectroscopic and imaging techniques into a single device
that can obtain the spatial map of spectral variation [1].
Figure 1: Hyperspectral Image acquisition, datacube and spectral content of several pixels (each color in the 2D plot
represents a single pixel content).
HSI can be used in precision agriculture to assess the health of crops based on their distinctive signatures at different
growth stages. The spectral behavior of a scene is recorded using HSI sensors, which are space-sensing devices that
take many digital images of the same scene at once, each reflecting a narrow or continuous spectral band. When a
specific substance is subjected to a light source with a given spectral bandwidth, specific portions of the light are
emitted, absorbed, and/or reflected depending on the substance's structure. This reaction is referred to as the spectral
signature of the material [25]. This information is stored in a cubic data structure, as seen in Fig 1, where each spectral
band is "stacked" by its wavelength. The measurements of spectral responses therefore enable the classification of
distinct materials or the observation of specific compositional characteristics in biological subjects. Because the data
volume of a hyperspectral image is always very large and suffers from collinearity problems, chemometric methods are
needed to extract the crucial information.
Figure 2: Flowchart of the typical steps for analyzing hyperspectral image data.
The acquisition of high-quality images that satisfy the study objectives is a crucial first step in the analysis of HSI.
For accurate results, the proper selection of sensors and platforms is necessary, as well as the optimal spectral and
spatial resolution settings, illumination design, scan rate, frame rate, and exposure time [26]. The following stage is
image pre-processing, which includes spectral correction and calibration. The procedure consists of standardizing the
spectral and spatial axes of the hyperspectral image, assessing the precision and reproducibility of the acquired data
under various operating conditions, and removing instrumental errors and the curvature effect [27]. Image segmentation
is usually carried out as a preprocessing step before the formal spectral analysis to extract target objects from the
background or create a mask that defines the area of interest for further extraction of information [28]. The last step is
selecting a model and applying it to the data; these can be regression or classification models, depending on the goals
of the research.
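To make the calibration and segmentation steps concrete, the sketch below applies the commonly used reflectance-calibration formula, (raw - dark) / (white - dark), followed by a simple band-threshold mask. The array shapes, band index, and threshold are illustrative assumptions rather than settings from any specific study.

```python
import numpy as np

def calibrate_reflectance(raw, white_ref, dark_ref, eps=1e-8):
    """Convert a raw hyperspectral cube to relative reflectance.

    raw, white_ref, dark_ref: arrays of shape (rows, cols, bands); the
    references are typically acquired from a white tile and with the
    shutter closed, under the same exposure settings.
    """
    return (raw.astype(np.float32) - dark_ref) / (white_ref - dark_ref + eps)

def mask_background(reflectance, band_idx=50, threshold=0.15):
    """Very simple segmentation step: threshold a single band to separate
    the target object from a darker background."""
    return reflectance[..., band_idx] > threshold

# Toy usage with synthetic data (a real cube would come from the sensor SDK)
raw = np.random.rand(64, 64, 200).astype(np.float32)
white = np.ones_like(raw) * 0.9
dark = np.zeros_like(raw)
refl = calibrate_reflectance(raw, white, dark)
mask = mask_background(refl)
print(refl.shape, float(mask.mean()))
```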
3 Acquisition Modes
Based on the methods used to acquire both spectral and spatial information, hyperspectral systems are classified into
four categories: whisk-broom, push broom, staring, and snapshot, as illustrated in Fig. 3.
1. Whisk-broom
The whiskbroom imaging method is a technique used to acquire images in remote sensing applications. It
involves scanning a target area or scene one line at a time, using a sensor that moves back and forth across
the target area. In the whiskbroom imaging method, the sensor is typically mounted on a platform, such as
an airplane or satellite, and moves across the target area in a series of parallel lines. As the sensor moves
across the target area, it captures a series of narrow strips of the scene, one line at a time. These strips are
then combined to create a complete image of the target area. In HSI, the sensor captures a series of images at
different spectral wavelengths, which are combined to create a 3D hyperspectral image cube. The whiskbroom
imaging method can be used to capture each of the individual images in the hyperspectral cube, providing a
complete image of the scene at each spectral wavelength. One of the advantages of the whiskbroom imaging
method is its ability to capture images with high spatial resolution. Because the sensor moves across the target
area in a series of parallel lines, it can capture a large number of closely spaced image strips, which can be
combined to create a high-resolution image of the target area. The whiskbroom imaging method is widely used
in a variety of remote sensing applications, including environmental monitoring [29, 30], mineral exploration
[31], and defense and surveillance [32]. It is particularly useful in applications where high-resolution images
of the target area are required, or where rapid image capture is necessary, making it a useful tool for a wide range of
remote sensing applications.
2. Push broom
The push broom method is a technique used to acquire hyperspectral images, particularly in remote sensing
applications. It involves scanning a scene or target area one line at a time, using a hyperspectral camera that
remains stationary [33]. The camera captures a series of images at different spectral wavelengths, which
are combined to create a 3D hyperspectral image cube. The push broom method typically uses a line array
sensor, which consists of a linear array of pixels that captures an image one line at a time. The sensor is
mounted on a platform or satellite, which moves relative to the scene being imaged. As the platform moves,
the sensor captures a series of images at different spectral wavelengths, one line at a time. The process is
repeated until the entire target area has been scanned. Each line of the hyperspectral image cube represents
the spectral information for each pixel in that line, across all the spectral wavelengths captured by the sensor.
The resulting hyperspectral image cube contains information about the reflectance or absorption of each pixel
at each wavelength, allowing for detailed analysis and interpretation of the data. The push broom method is
preferred over other methods such as the whiskbroom method, which scans the scene using a scanning mirror
or rotating prism to acquire an image line by line. The push broom method is generally considered to be more
efficient and faster, as it requires only a linear array of pixels and does not require any mechanical components
to move the sensor. The push broom method is widely used in a variety of remote sensing applications. It
is particularly useful for monitoring large areas over time, as it allows for the detection of subtle changes in
spectral signatures that can indicate changes in vegetation health [34], mineral content [35], or environmental
conditions [36].
3. Staring
In the staring imaging method, the sensor captures a complete image of the target area or scene all at once. The
sensor can be a camera or other imaging device that uses an array of pixels to capture the image. The sensor
is typically mounted on a platform or satellite, which remains stationary during the image capture process
[37]. Unlike the push broom method, which scans the scene one line at a time, the staring imaging method
captures the entire scene at once. This can be useful in applications where a complete image of the scene is
required, such as in surveillance or mapping applications [38]. The staring imaging method is typically used
in conjunction with other techniques, such as HSI, to capture images with high spatial and spectral resolution.
In hyperspectral imaging, the sensor captures a series of images at different spectral wavelengths, which are
combined to create a 3D hyperspectral image cube. The staring imaging method can be used to capture each
of the individual images in the hyperspectral cube, providing a complete image of the scene at each spectral
wavelength. The staring imaging method is particularly useful in applications where a high-resolution image of
the entire scene is required, or where rapid image capture is necessary. Overall, the staring imaging method is
a powerful imaging technique that can be used in a variety of remote sensing applications. Its ability to capture
a complete image of the scene in a single snapshot makes it a useful tool for a wide range of applications.
4. Snapshot
The snapshot imaging method is a technique used to acquire images in a single snapshot, particularly in
optical imaging applications. It involves capturing an image of the entire field of view all at once, using
a sensor that is designed to capture a large area in a single exposure [39]. The snapshot imaging method
typically uses a specialized sensor known as a focal plane array (FPA) [40] or a detector array. The FPA is
an array of photodetectors that captures the image of the entire field of view simultaneously. The FPA can
be made up of different types of photodetectors, such as charge-coupled devices (CCDs) or complementary
metal-oxide-semiconductor (CMOS) sensors, depending on the application. One of the main advantages of the
snapshot imaging method is its ability to capture high-speed images of fast-moving objects or events. Because
the entire field of view is captured in a single snapshot, there is no need for any mechanical scanning or motion
of the sensor, which allows for rapid image capture. The snapshot imaging method is widely used in a variety
of applications, including high-speed imaging, microscopy [41], astronomy [42], and biomedical imaging
[43]. In high-speed imaging applications, the method can be used to capture images of fast-moving objects or
events, such as explosions or high-speed collisions. In microscopy, the method can be used to capture images
of small, fast-moving particles or cells. In astronomy, the method can be used to capture images of distant
stars and galaxies. Overall, the snapshot imaging method is a powerful imaging technique that can be used in a
variety of optical imaging applications. Its ability to capture high-speed images of fast-moving objects and
events makes it a useful tool in many scientific and industrial applications.
4 Hyperspectral Datasets

Table 2 lists some of the HSI datasets proposed for agriculture. As this technique develops, more HSI data can be
gathered, allowing for the availability of larger datasets. The quantity of data, spatial resolution, spectral channels, and
variety of scenarios are the most critical characteristics of the available datasets.
Numerous papers have used the Indian Pines [44] and University of Pavia [44] datasets. Both datasets were captured by
airborne hyperspectral-imaging sensors and contain pixel-level ground truth. The Indian Pines dataset, which comprises
224-band hyperspectral images, was captured by the AVIRIS sensor to target LULC in the agricultural domain.
Researchers frequently utilize this dataset to analyze LULC patterns, and typically focus on the 200 spectral bands
while excluding the water absorption bands.
Table 2: Summary of publicly available HSI datasets for agriculture

Dataset | Year | Sensor | Spatial size (pixels) | Bands | Spectral range (nm) | Labeled samples | Classes | Spatial resolution (m)
Indian Pines [44] | 1992 | NASA AVIRIS | 145 x 145 | 220 | 400-2500 | 10249 | 16 | 20
Salinas [44] | 1998 | NASA AVIRIS | 512 x 217 | 224 | 360-2500 | 54129 | 16 | 3.7
Pavia University [44] | 2001 | ROSIS-03 sensor | 610 x 610 | 115 | 430-860 | 42776 | 9 | 1.3
Botswana [44] | 2004 | NASA EO-1 | 1496 x 256 | 242 | 400-2500 | 3248 | 14 | 30
Chikusei [45] | 2014 | Headwall Hyperspec-VNIR-C imaging sensor | 2517 x 2335 | 128 | 363-1018 | 77592 | 19 | 2.5
WHU-Hi-HanChuan [46] | 2016 | Headwall Nano-Hyperspec imaging sensor | 1217 x 303 | 274 | 400-1000 | 257530 | 16 | 0.109
WHU-Hi-HongHu [46] | 2017 | Headwall Nano-Hyperspec imaging sensor | 940 x 475 | 270 | 400-1000 | 386693 | 22 | 0.043
WHU-Hi-LongKou [46] | 2018 | Headwall Nano-Hyperspec imaging sensor | 550 x 400 | 270 | 400-1000 | 204542 | 9 | 0.463
The Salinas dataset, which focuses on different agricultural classes, was captured by the same sensor used to capture
the Indian Pines dataset. These two datasets are quite similar in terms of their data types. The ROSIS airborne sensor
captured the University of Pavia dataset, producing images with 103 spectral bands. Botswana
[44] is another airborne pixel-level labeled imagery dataset used for land cover classification. The Wuhan University
RSIDEA research group has gathered and made available the WHU-Hi dataset (Wuhan UAV-borne hyperspectral
image) [46], which is used as a benchmark dataset for studies on accurate crop classification and hyperspectral image
classification. Three distinct UAV-borne hyperspectral datasets are included in the WHU-Hi dataset: WHU-Hi-LongKou,
WHU-Hi-HanChuan, and WHU-Hi-HongHu. All of the data were collected in Hubei province, China, in agricultural
areas that grew a variety of crops. The spectral classes from the Pavia University and Salinas datasets are homogeneously
distributed throughout the hyperspectral image [47]. The WHU-Hi-LongKou and Pavia University datasets have fewer
classes than the other datasets. Unmanned aerial vehicle (UAV)-borne hyperspectral systems using the Headwall
Nano-Hyperspec sensor can acquire hyperspectral imagery with a high spatial resolution [48]. In the Chikusei
dataset [45], the Hyperspec-VNIR-C imaging sensor was used to capture hyperspectral data over urban and rural regions
in Chikusei, Ibaraki, Japan. The dataset contains ground truth high-resolution color images captured by EOS 5D Mark
II for 19 classes.
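As an illustration of how these benchmark scenes are typically handled, the sketch below loads the Indian Pines cube and its ground truth, keeps only the labeled pixels, and builds a stratified train/test split. The .mat file names and variable keys follow commonly distributed copies of the dataset and should be treated as assumptions to adapt to your local files.

```python
import numpy as np
from scipy.io import loadmat
from sklearn.model_selection import train_test_split

# File and key names follow commonly distributed .mat versions of the scene.
cube = loadmat("Indian_pines_corrected.mat")["indian_pines_corrected"]  # (145, 145, 200)
gt = loadmat("Indian_pines_gt.mat")["indian_pines_gt"]                  # (145, 145)

# Keep only labeled pixels (label 0 marks unlabeled background)
X = cube.reshape(-1, cube.shape[-1]).astype(np.float32)
y = gt.reshape(-1)
X, y = X[y > 0], y[y > 0]

# Per-band standardization and a stratified train/test split
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.8, stratify=y, random_state=0)
print(X_train.shape, X_test.shape)
```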
5 Deep Learning Techniques for HSI Classification

The accuracy of ML technologies is rising steeply because of their built-in automatic capabilities such as feature
extraction, selection, and reduction of spatial-spectral and contextual features. Not only are these technologies intelligent
and cognitive, but they also achieve a high degree of precision [49]. The most recent Deep Learning (DL) techniques
for classifying hyperspectral data, including CNN, SAE, RNN, GAN, DBN, TL, and AL, are presented in Figure 6 and
elaborated upon in detail below. Table 3 illustrates some of the DL classification methods. According to Scopus
statistics, there are 109 relevant papers from 2011 to 2023 in which "hyperspectral images", "agriculture", and "deep
learning" are used as keywords (Fig. 4). It is interesting to note that since 2018 there has been a strong increase in
published papers in the agriculture sector, thanks to the greater availability of deep learning frameworks.
Figure 4: Number of published articles by year on deep learning with hyperspectral data applied in the agriculture sector,
(source: Scopus).
Figure 5: Pie-chart of related articles on DL approaches used for HSI classification, (source: Scopus).
The distribution of the reviewed studies for each of the selected DL techniques is shown in Figure 5 as a pie chart with
percentage values for each category.
Figure 6: The various categories of prominent deep learning techniques utilized for HSI classification
The most widely used neural network for classifying images is the convolutional neural network (CNN), whose primary
structural unit is the convolutional (CONV) layer. In comparison to other methods, CNN has been widely employed
for image classification [50], detection [51] and segmentation [52, 53]. Deep neural networks are capable of learning
deep feature representation for analyzing hyperspectral images and can achieve excellent classification accuracy in
various datasets. CNN has become a popular technique for LULC classification due to its exceptional ability to process
hyperspectral images effectively by extracting spectral-spatial discriminative features, which is evident in numerous
studies [54, 55]. In fact, several studies have shown that CNN outperforms traditional machine learning algorithms
such as Random Forest, Support Vector Machine, and k-Nearest Neighbors on multiple datasets [56]. In another line of
work, the authors proposed a per-superpixel model that combines a multi-scale CNN for LULC classification and
leverages high-level feature extraction; this per-superpixel multi-scale CNN approach effectively addressed
misclassification caused by the scale effect in complex LULC classes, surpassing the per-superpixel single-scale CNN
method, as evidenced by the results of [57]. Another notable advancement in this field is the introduction of a deep hybrid dilated residual
network (DHDRN) in [58], which has been evaluated on three publicly available hyperspectral datasets and compared
to state-of-the-art methods. The experimental findings have highlighted that the proposed DHDRN method achieves
superior classification accuracy and efficiency, surpassing previous methods. Specifically, the DHDRN method has
achieved impressive overall classification accuracy rates of 99.26%, 99.44%, and 97.96% on the Pavia University, Indian
Pines, and Salinas datasets, respectively. The authors have conducted ablation studies to understand the contribution of
each component of the proposed method, which has revealed that the hybrid architecture and dilated convolutional
layers play a crucial role in achieving the best classification performance. The Deep Residual Network (ResNet) [59] is
considered a significant milestone in the history of CNN. ResNet has addressed the problem of training deep CNN
models [60]. Recently, ResNet has been successfully used in hyperspectral image analysis, such as hyperspectral image
classification [61], hyperspectral image denoising [62], increasing the spatial resolution of hyperspectral images [63],
and unsupervised spectral-spatial feature learning of hyperspectral images [64]. Fig. 7 shows a graphical representation
of a typical CNN architecture.
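As a concrete illustration of the spectral-spatial CNNs discussed above, the following PyTorch sketch defines a minimal 3D CNN that classifies small patches around each pixel. It is a generic example, not the DHDRN of [58] or any other specific architecture, and all layer sizes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Minimal spectral-spatial classifier: 3-D convolutions run jointly over
    the spectral axis and a small spatial neighborhood around each pixel."""
    def __init__(self, n_bands=200, n_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.BatchNorm3d(8), nn.ReLU(),
            nn.MaxPool3d((2, 1, 1)),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.BatchNorm3d(16), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):           # x: (batch, 1, bands, patch, patch)
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Toy forward/backward pass on random 9x9 patches with 200 bands
model = Simple3DCNN()
patches = torch.randn(4, 1, 200, 9, 9)
labels = torch.randint(0, 16, (4,))
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()
print(float(loss))
```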
In 2016, [65] utilized the SAE model to obtain practical high-level features for remote sensing image classification.
This was the first application of deep learning to HSI analysis. The input data was reconstructed using auto-encoders
(AE) which are composed of the encoder and decoder. The AE was trained separately and connected to each layer of
the SAE [66]. The abstract features were extracted by rebuilding the input data layer by layer. During the unsupervised
pretraining stage, the features learned from one AE were used as input data for training the next AE in a greedy way,
thus reducing each AE’s reconstruction error. After pretraining, the parameters, such as weights and biases, of all AEs
were used as initial values for SAE. The parameters of each layer were adjusted using backpropagation of error when
the labeled data was used as the supervised signal, and the parameters of the structure were updated using the stochastic
gradient descent algorithm. [67, 68] proposed denoising auto-encoders (DAE) and stacked denoising auto-encoders
(SDAE) as other enhancement strategies. Authors in [69] presented a novel and robust approach for hyperspectral
image classification using a compact and discriminative stacked autoencoder (CDSAE). The approach consisted of two
steps, in the initial step, low-dimensional discriminative features were extracted by applying a local Fisher discriminant
regularization to each hidden layer of the SAE. In the second step, an effective classifier was integrated into the training
process. The proposed method was evaluated on three different HSI datasets, and the results demonstrated its remarkable
superiority in accurate and reliable hyperspectral image classification. Fig. 8 depicts a graphical illustration of an SAE.
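The greedy layer-wise pretraining described above can be summarized in a few lines of PyTorch: each auto-encoder is trained to reconstruct the codes produced by the previous one, and the pretrained encoders are then stacked under a classifier for supervised fine-tuning. Layer sizes, epochs, and the toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

def pretrain_layer(encoder, decoder, data, epochs=20, lr=1e-3):
    """Train one auto-encoder (encoder + decoder) to reconstruct its input."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(decoder(encoder(data)), data)
        loss.backward()
        opt.step()
    return encoder

# Greedy layer-wise pretraining: each AE is trained on the codes of the previous one
dims = [200, 128, 64, 32]                      # illustrative layer sizes
spectra = torch.rand(512, dims[0])             # toy unlabeled spectra
encoders, x = [], spectra
for d_in, d_out in zip(dims[:-1], dims[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Sequential(nn.Linear(d_out, d_in), nn.Sigmoid())
    encoders.append(pretrain_layer(enc, dec, x))
    x = encoders[-1](x).detach()               # codes feed the next AE

# Stack the pretrained encoders and add a classifier for supervised fine-tuning
sae = nn.Sequential(*encoders, nn.Linear(dims[-1], 16))
print(sae(spectra).shape)                      # (512, 16) class logits
```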
In 2006, Hinton proposed the deep belief network (DBN) [70], which uses the Restricted Boltzmann Machine (RBM) as
a learning module similar to SAE’s use of auto-encoders. However, DBN employs a symmetrical connection structure,
consisting of several RBMs, with connections between the layers rather than within the units of each layer. The output
of one layer serves as the input for the next layer. The RBM layers are initially pre-trained in an unsupervised manner
using unlabeled samples to preserve the characteristics as much as possible. The entire DBN network is fine-tuned
using a small number of labeled samples and the backpropagation algorithm [71]. The deep features extracted are used
for detection and classification tasks. This unsupervised pretraining provides a better starting point than purely random
weight initialization, enabling DBN networks to overcome the primary limitations of the backpropagation technique,
such as local optima and long training times. In [72], the authors utilized the Firefly Harmony Search Deep Belief Network (FHS-DBN) model for
LULC classification on four benchmark datasets. The DBN was trained using a hybrid approach combining the Firefly
Algorithm (FA) and the Harmony Search (HS) algorithm to obtain the FHS algorithm. This approach showed promising
results for LULC classification, demonstrating the potential of integrating multiple optimization algorithms in deep
learning models for improved performance. As DBNs have been used in hyperspectral image classification for some
time, their efficacy in the field is well-established. While DBNs have been shown to be effective for a wide range of
tasks, they also have some challenges, including:
• Lack of Transferability: DBNs are often trained on one specific dataset and are not easily transferable to other
HSI datasets, which can limit their ability to generalize to new hyperspectral images
• Over-complexity: DBNs can have a large number of parameters, which can lead to over-complex models that
are difficult to train and may not generalize well to new hyperspectral images
• Limited Ability to Handle Noisy Data: DBNs can be sensitive to noisy data, which is a common problem in
HSI due to various environmental factors such as atmospheric turbulence, clouds, and shadows
• High Computational Cost: DBNs can be computationally expensive to train, especially for large hyperspectral
images, which can limit their applicability in real-world scenarios
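To make the RBM building block of a DBN concrete, the sketch below implements one-step contrastive divergence (CD-1) for a single Bernoulli RBM; in a full DBN, several such RBMs would be stacked and pretrained layer by layer before supervised fine-tuning. Sizes, learning rate, and the binarized toy spectra are illustrative assumptions.

```python
import torch

class RBM:
    """Bernoulli RBM trained with one step of contrastive divergence (CD-1).
    In a DBN, several such RBMs are stacked and pretrained layer by layer."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = torch.randn(n_visible, n_hidden) * 0.01
        self.b_v = torch.zeros(n_visible)
        self.b_h = torch.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return torch.sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return torch.sigmoid(h @ self.W.t() + self.b_v)

    def cd1_step(self, v0):
        ph0 = self.hidden_probs(v0)
        h0 = torch.bernoulli(ph0)            # sample hidden units
        v1 = self.visible_probs(h0)          # reconstruction
        ph1 = self.hidden_probs(v1)
        # Gradient approximation: positive phase minus negative phase
        self.W += self.lr * (v0.t() @ ph0 - v1.t() @ ph1) / v0.shape[0]
        self.b_v += self.lr * (v0 - v1).mean(0)
        self.b_h += self.lr * (ph0 - ph1).mean(0)
        return torch.mean((v0 - v1) ** 2)    # reconstruction error

# Toy pretraining on binarized spectra (illustrative sizes only)
data = (torch.rand(256, 200) > 0.5).float()
rbm = RBM(n_visible=200, n_hidden=64)
for epoch in range(10):
    err = rbm.cd1_step(data)
print(float(err))
```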
The concept of a recurrent neural network (RNN) was first introduced by Williams in 1989 [73]. RNNs differ from
feed-forward neural networks by incorporating a recurrent hidden state that depends on previous steps. This enables
RNNs to recognize patterns in data sequences and temporal properties. Recently, RNNs have been used to classify
hyperspectral images, as they can efficiently analyze hyperspectral pixels as sequential data [74]. However, standard
RNN models suffer from issues such as gradient explosion or disappearance, which have been partially resolved with the
introduction of Long Short-Term Memory Networks (LSTMs) [75] and Gated Recurrent Units (GRUs) [76]. Different
LSTM models, such as LSTM-F, LSTM-S (unidirectional), and base LSTM (b-LSTM)-S (bidirectional), have been
introduced to address sliding-window segmentation and operate in various modes. The bidirectional-convolutional long
and short-term memory network (Bi-CLSTM) has been used to learn spectral-spatial characteristics automatically from
hyperspectral data, resulting in improved classification performance of about 1.5% compared to a 3D-CNN [77]. A
method for HSI classification, called Spectral-Spatial LSTM (SS-LSTM), has been proposed in [78]. This method
utilizes Convolutional Neural Networks (CNNs) and Principal Component Analysis (PCA) to extract spectral and spatial
features from the hyperspectral image, respectively. These features are then fed into separate LSTM layers to capture
the temporal dependencies and interdependencies between them. The SS-LSTM is trained on a large hyperspectral
dataset and evaluated on three benchmark datasets.
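The idea of treating a pixel's spectral bands as a sequence can be sketched with a small bidirectional LSTM in PyTorch. This is a generic illustration of the recurrent approach, not the SS-LSTM of [78] or the Bi-CLSTM of [77], and the hidden size and class count are assumptions.

```python
import torch
import torch.nn as nn

class SpectralLSTM(nn.Module):
    """Treats the spectral bands of a single pixel as a sequence and
    classifies the pixel from the final LSTM hidden states."""
    def __init__(self, n_classes=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, spectra):              # spectra: (batch, bands)
        seq = spectra.unsqueeze(-1)          # (batch, bands, 1): one value per step
        _, (h, _) = self.lstm(seq)
        h = torch.cat([h[0], h[1]], dim=1)   # concatenate both directions
        return self.fc(h)

model = SpectralLSTM()
logits = model(torch.rand(8, 200))           # 8 pixels, 200 bands
print(logits.shape)                          # torch.Size([8, 16])
```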
There are several problems associated with applying Recurrent Neural Networks (RNNs) to HSI data:
• Handling Structural Changes: RNNs can have difficulty in handling structural changes in hyperspectral images,
such as changes in atmospheric conditions, illumination, and viewpoint, which can impact their performance
on hyperspectral image classification tasks
• Data Variability: Agricultural hyperspectral images can vary significantly due to changes in weather conditions,
soil conditions, and plant growth stages, making it challenging to train RNNs effectively
• Training Difficulties: Training RNNs on HSI data can be challenging due to the sequential nature of the data,
and the need for appropriate training algorithms and techniques to handle the data effectively
The Generative Adversarial Network (GAN) was first proposed in 2014; it generates samples for a required
class label through adversarial training [79]. Generative techniques aim to identify the distribution parameters from
the data and generate new samples according to the identified models. Several improved GANs, including Deep
Convolutional GAN [80], 1-D and 3-D GAN [81], Capsule GAN [82], Cascade Conditional GAN [83], MDGAN [84],
and 3DBF-GAN [85], have been utilized for hyperspectral imaging. GANs have shown very promising results with a
small number of labeled samples by fully exploiting sufficient unlabeled samples [86]. There are several challenges in
applying Generative Adversarial Networks (GANs) to HSI data, including:
• Data heterogeneity: hyperspectral data can have heterogeneous features, making it difficult for GANs to
capture the full range of information present in the data
• Limited labeled data: In many applications of HSI, labeled data is limited, which can impact the ability of
GANs to learn effectively from the data
• Data variability: hyperspectral data can be affected by various environmental factors, such as atmospheric
conditions, which can lead to significant variability in the data
• Data imbalance: hyperspectral data often has imbalanced class distributions, which can affect the performance
of GANs
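A minimal sketch of adversarial training over 1-D spectra is given below. It is an unconditional toy GAN on random data, whereas the HSI-specific variants cited above add class conditioning, convolutional generators, or semi-supervised discriminators; all sizes and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal unconditional GAN over 1-D spectra; GAN-based HSI classifiers
# typically add class information or spatial convolutions on top of this.
n_bands, latent = 200, 32

G = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(),
                  nn.Linear(128, n_bands), nn.Sigmoid())      # generator
D = nn.Sequential(nn.Linear(n_bands, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))                          # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real_spectra = torch.rand(64, n_bands)        # toy stand-in for labeled pixels

for step in range(200):
    # Discriminator: distinguish real from generated spectra
    fake = G(torch.randn(64, latent)).detach()
    d_loss = bce(D(real_spectra), torch.ones(64, 1)) + \
             bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator: try to fool the discriminator
    g_loss = bce(D(G(torch.randn(64, latent))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(float(d_loss), float(g_loss))
```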
Active learning (AL) is a technique that shows promise in addressing the challenge of limited labeled samples in HSI. It
involves an iterative process of selecting the most informative examples from a subset of unlabeled samples based on
their uncertainty and intrinsic distribution and structure [87, 88]. AL is more efficient than traditional semi-supervised
learning methods and can train deep networks with fewer training samples [89]. Various AL approaches have been
proposed for HSI classification, including random sampling (RS) [90], maximum uncertainty sampling (MUS) [91],
multiview (MV) [92], and mutual information (MI)-based sampling [93]. Applying AL approaches to HSI data presents
several challenges, including:
• Query strategy selection: Selecting the appropriate query strategy is crucial for effective active learning in
HSI. There is a trade-off between the cost of acquiring labels and the quality of the model being learned, and
different query strategies may perform better or worse depending on the specifics of the data and task at hand
• Data variability: Agricultural data can be affected by various environmental factors, such as weather conditions
and soil variability, which can lead to significant variability in the data. This variability can make it difficult
for active learning algorithms to accurately model the underlying data distribution
• Label noise: Labeling data in agriculture can be subjective and prone to human error, leading to label noise in
the data. This can impact the performance of active learning algorithms that rely on accurate labels
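The pool-based, uncertainty-driven loop that underlies approaches such as maximum uncertainty sampling can be sketched as follows. The Random Forest base learner, query size, and synthetic spectra are assumptions chosen only to keep the example self-contained; on random labels the accuracy itself is meaningless.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def max_uncertainty_query(model, pool_X, n_query=20):
    """Pick the unlabeled samples whose top predicted class probability is
    lowest, i.e. the samples the current model is least certain about."""
    proba = model.predict_proba(pool_X)
    uncertainty = 1.0 - proba.max(axis=1)
    return np.argsort(uncertainty)[-n_query:]

# Toy pool-based loop on synthetic "spectra" (200 bands, 5 classes)
rng = np.random.default_rng(0)
X = rng.random((2000, 200))
y = rng.integers(0, 5, 2000)
labeled = list(rng.choice(len(X), 50, replace=False))
pool = [i for i in range(len(X)) if i not in set(labeled)]

for round_ in range(5):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[labeled], y[labeled])
    picked = max_uncertainty_query(clf, X[pool])
    newly_labeled = [pool[i] for i in picked]   # an oracle would label these
    labeled += newly_labeled
    pool = [i for i in pool if i not in set(newly_labeled)]
    print(f"round {round_}: labeled set size = {len(labeled)}")
```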
The application of transfer learning models to HSI analysis has proved to be successful and reliable. A few top layers of
CNN are developed using a small number of training samples, while the bottom and middle layers can be transferred
from models of other scenarios [94]. The overall classification accuracy of CNN-transfer is higher than that of a plain
CNN when training samples are scarce. Hyperspectral image super-resolution is also a challenge. In order to improve the resolution of
hyperspectral images, a novel framework is developed that makes use of information from natural images [95]. The
proposed approach utilizes transfer learning to extend the mapping between low and high-resolution images, which is
learned by a deep convolutional neural network, to the hyperspectral domain. There are several challenges in applying
transfer learning to HSI data, including:
• Domain shift: The distribution of the source and target data may differ significantly, causing a domain shift.
This can impact the effectiveness of transfer learning algorithms
• Task specificity: The target task in agriculture may be different from the source task, which can impact the
ability of transfer learning algorithms to effectively transfer knowledge
• Model compatibility: The source and target data may have different dimensionalities or structures, which can
impact the ability of transfer learning algorithms to effectively transfer knowledge
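A minimal sketch of the transfer strategy described above (reusing bottom and middle layers and retraining only the top layers on the target scene) is shown below. The backbone here is randomly initialized for brevity, so in practice its weights would come from a model trained on a source HSI scene; all layer sizes and the toy data are assumptions.

```python
import torch
import torch.nn as nn

# A small 1-D CNN acting as a "source" backbone; only the top classifier
# layer is re-trained on the target scene with few labeled samples.
backbone = nn.Sequential(                      # bottom/middle layers: transferred
    nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)
head = nn.Linear(32, 9)                        # top layer: new target classes

for p in backbone.parameters():                # freeze transferred layers
    p.requires_grad = False

model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Few labeled target samples (toy data: 32 pixels, 128 bands)
x = torch.rand(32, 1, 128)
y = torch.randint(0, 9, (32,))
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```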
Table 4: An overview of the key research conducted on soil analysis using HSI
Application | Ref. | Method | Performance
Measuring soil TAs content | [110] | DNN-CARS | R2CV = 0.69, RMSECV = 0.61, RECV = 6.56
The concentration of nitrogen in soil | [111] | DASU-based DL network | 46.6% N for g/100 g sample
Prediction of soil organic carbon (SOC) content | [112] | Fractional order derivative (FOD), Random Forest (RF) | Highest R2CV = 0.66
Prediction of heavy metal concentrations in agricultural soils | [113] | RF, SRF, RRF, GRF, HySpex VNIR-1600 | Rp2 = 0.75, RMSEp = 8.24
Detection of soil organic matter | [114] | Partial Least Squares Regression (PLSR) | R2 = 0.75, r = 0.87, RPD = 2.1
A model for estimating the content of Pb in soil | [115] | MLR, PCR | R2 = 0.724%, RMSE = 24.92%, MRE = 28.22%
Estimation of soil properties | [116] | Hybrid features, LSTM | R2 = 0.85, RMSE = 10.56
Estimating the concentration of soil heavy metals | [117] | VGG19 transfer learning | Model accuracy = 81.25%, RMSEas = 2.89, RMSEcd = 0.12, RMSEpb = 0.22
6 Applications of HSI in Agriculture

This section illustrates and outlines the most significant contributions of HSI in the different agricultural sectors.
Crop growth requires a healthy soil environment. Quick and accurate access to information on soil nutrient content is a
requirement for scientific manuring. Poor soil management threatens the quality and effectiveness of the soils, which
are a major factor in the rural generation [107]. The various factors impacting soil and soil erosion, such as bright sun
and heavy rain, are greatly influenced by the regional climate. In order to address agricultural issues, such as crop
quality and yield, soil erosion must be identified. HSI technology can help in soil analysis by capturing a large amount
of data across a wide range of the electromagnetic spectrum. The resulting images can then be analyzed to identify and
quantify various soil properties, such as soil moisture content [108], nutrient levels [109], and mineral composition [35].
This information can be used to map soil characteristics and support more informed decision-making for agriculture,
environmental management, and other soil-related applications. Researchers look for simpler, non-destructive methods
to identify soil organic matter (SOM), as it is essential in the soil-plant ecosystem. Table 4 summarizes some of the
most important research on HSI for soil analysis applications.
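As a minimal illustration of the chemometric-style regression workflows summarized in Table 4, the sketch below fits a cross-validated PLSR model to synthetic soil spectra and reports R2, RMSE, and RPD. The data, number of latent variables, and target variable are purely illustrative assumptions, not a reproduction of any study in the table.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in: 150 soil samples, 200-band reflectance spectra, and a
# continuous target such as soil organic matter content.
rng = np.random.default_rng(1)
spectra = rng.random((150, 200))
som = spectra[:, 40:60].mean(axis=1) * 10 + rng.normal(0, 0.3, 150)

pls = PLSRegression(n_components=8)            # number of latent variables
pred = cross_val_predict(pls, spectra, som, cv=5).ravel()

r2 = r2_score(som, pred)
rmse = mean_squared_error(som, pred) ** 0.5
rpd = som.std() / rmse                         # ratio of performance to deviation
print(f"R2cv = {r2:.2f}, RMSEcv = {rmse:.2f}, RPD = {rpd:.2f}")
```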
Yield prediction is one of the most significant areas of precision agriculture research. Crop management, crop supply
matching with demand, yield prediction, yield mapping, and crop supply mapping are essential for maximizing
production [118]. One of the major issues in agricultural management that can be solved most effectively by precision
farming methods is crop production estimation. HSI technology can help with crop yield estimation by providing
information about the health and vigor of crops, including data on plant chlorophyll content, water content, and nutrient
levels. This information can be used to estimate crop yield, predict crop stress and potential yield losses, and optimize
crop management practices, leading to improved yields. By capturing detailed spectral information from the visible
and near-infrared regions of the electromagnetic spectrum, HSI can also identify and map specific plant species and
vegetation types, and detect changes in the landscape over time, which are all valuable for crop yield estimation. Table
5 summarizes the most important research on HSI for crop yield estimation applications.

Table 5: An overview of the key research on the application of HSI for crop yield prediction.
One significant area of study for several agricultural applications is the identification and classification of the crop
using hyperspectral images. HSI technology can assist in agricultural crop classification by capturing detailed spectral
information of crops in the visible and near-infrared regions of the electromagnetic spectrum. This information can
be used to identify unique spectral signatures for different crop types and growth stages, which can then be used
for classification purposes. Machine learning algorithms are typically employed to analyze the hyperspectral data
and classify crops based on their spectral reflectance characteristics. This information can be used to create maps of
crop types and growth stages, which can provide valuable insights for agricultural management and decision-making.
Additionally, HSI can be used to detect crop stress [124], diagnose potential yield-limiting factors [125], and monitor
crop health, which are all important for crop management and optimization. HSI can also be used to monitor changes
in the landscape over time, allowing for the detection of crop growth and yield changes and providing a useful tool
for precision agriculture. Table 6 summarizes some of the most important research on HSI for agricultural crop
classification applications.
Estimating nutrients and biomass in crops assists in the classification of crop conditions and various soil-characterized
crop categories to promote agricultural development for farmers and other stakeholders; one example is the rapid
determination of the nutritional content of hydroponically grown lettuce cultivars. HSI technology can assist in the estimation of contaminants and
nutrients in crops by collecting detailed spectral data that can be used to identify specific chemicals and substances in
the crops and diagnose potential nutrient deficiencies [132] and can also detect contaminants, such as heavy metals,
that may be present in the crops. Additionally, HSI technology can be used to detect trends in nutrient uptake and
availability [133], which can be valuable for nutrient management and fertilization practices. The technology can also
be used to monitor the impact of contaminants on crop health and estimate potential yield losses, providing a useful
tool for precision agriculture and ensuring the safety and quality of crops for human and animal consumption. Table 7
summarizes some of the most important research on HSI for contaminant and nutrient estimation applications.
The appearance of several pests and illnesses in the crops poses serious challenges to farmers. Some of the frequent
causes of illness infections include nematodes, bacteria, viruses, and fungi. Due to unawareness of crop diseases and the
need for expert support and advice, farmers have historically avoided diagnosing or suspecting the majority of infections.
In order to prevent actual damage to the crops, disease infections should be detected early on.
Table 6: An overview of the most significant studies conducted on agricultural crop classification using HSI
Task | Ref. | Method | Performance
Corn seed variety classification | [126] | DCNN, KNN | DCNN training accuracy = 100%, testing accuracy = 94.4%, validation accuracy = 93.3%
Variety identification of coated maize kernels | [127] | LR, SVM, CNN, RNN and LSTM | Classification accuracy over 90%
Prediction of intact oranges consistency parameters | [128] | ANN | RMSECV = 0.87% (SSC), RMSECV = 0.23 (TA), RMSECV = 2.78 for MI, RMSECV = 1.11 for BrimA
The inherent uniformity of apple fruit slices is accentuated | [129] | Near-infrared HSI | Rcv2 = 0.83, TSC Rcv2 = 0.81, RPD = 2.20 and 2.39
Predict the viability of pepper seeds | [130] | HSI PLS-SVM, X-ray Ensemble-SVM | PLS-SVM OA = 88.99%, Ensemble-SVM OA = 92.51%
Improving green pepper segmentation | [131] | CVNN based on 1D Fast Fourier Transform | Accuracy = 94.89%, F1 score = 89.55%
Table 7: An overview of the most significant research on the use of HSI for estimating contaminants and nutrients.
Table 8: An overview of the most significant research conducted in the field of agricultural applications of HSI for the
monitoring of plant diseases and invasive plant species

Task | Ref. | Method | Performance
Detection of target spot and bacterial spot diseases in tomato | [140] | MLP Neural Network, Stepwise Discriminant Analysis (STDA) | Classification accuracy = 99% for both target spot (TS) and bacterial spot
Classification of the asymptomatic biotrophic phase of PLB disease | [141] | 2DCNN and 3DCNN with attention networks | Accuracy = 79.0%, F1 score = 0.83
3D deep learning for plant disease recognition | [142] | DCNN | Classification accuracy = 95.73%, F1 score = 0.87
Identification of red-berried wine grape infected with grapevine leaf-roll disease | [143] | Monte-Carlo, SVM | GLD classification accuracy = 89.93%
Aflatoxin B1 detection | [144] | Dual-branch CNN | Classification accuracy = 91.30%
Assessment of weed competitiveness in maize farmland ecosystems | [145] | 3D-CNN | RMSE = 0.106 and 0.152 using 13 feature bands
Disease detection of basal stem rot | [146] | Mask RCNN and VGG16 | Classification accuracy = 91.93%
HSI technology can assist in plant disease monitoring and invasive plant species detection by analyzing the reflectance spectra of the plants in
multiple narrow, contiguous wavelength bands. This technology can detect subtle changes in the chemical and physical
properties of plants that are not visible to the human eye [139], such as changes in chlorophyll content or leaf water
content. This information can then be used to identify plant stress and disease symptoms [33], such as discoloration
or wilting, or to distinguish between invasive plant species and native species based on differences in their spectral
signatures. This allows for early detection and monitoring of plant diseases and invasive species, which can lead to
improved management strategies and outcomes. Table 8 summarizes some of the most important research on HSI for
Plant disease monitoring and invasive plant species applications.
Although not strictly correlated with agriculture, plastic pollution has become one of the most pressing issues threatening
aquatic and terrestrial ecosystems; its implications for agriculture are therefore intrinsic.
The detection of plastic in the wild is a challenging task. Evidently, the size of the plastics that can be monitored is
related to the resolution of the sensor, i.e., the flight altitude of the UAV. Microplastics can be detected only in the laboratory, with a
distance between the target and the sensor in the range of centimeters [147]. On the other hand, macroplastics can be
detected at a higher distance; basically, many approaches make use of satellite images [148], [149]. These approaches
suffer from the main limitations of this kind of sensor: low spatial resolution, fixed time samples, rigid protocols for
data access, and no customizable acquisition campaigns. On the other hand, they can count on high-quality data in
terms of spectral resolution. The target of such approaches is large plastic waste detection, and it is performed by means
of some specific spectral indices such as Reversed Normalized Difference Vegetation Index (RNDVI), Normalized
Difference Water Index 2 (NDWI2), Plastic Index (PI), and Floating Debris Index (FDI), as reported in [150] and [151].
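To illustrate how such indices are computed from a hyperspectral cube, the sketch below selects the bands nearest to generic green, red, and NIR wavelengths and evaluates normalized-difference-style indices. The exact band positions and index definitions vary between studies, so the wavelengths and formulas here are assumptions to be checked against [150] and [151]; FDI, which also requires SWIR information, is omitted.

```python
import numpy as np

def band(cube, wavelengths, target_nm):
    """Return the band of the cube closest to a target wavelength (nm)."""
    return cube[..., int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))]

def plastic_indices(cube, wavelengths):
    green = band(cube, wavelengths, 560.0)
    red = band(cube, wavelengths, 665.0)
    nir = band(cube, wavelengths, 842.0)
    eps = 1e-8
    return {
        "RNDVI": (red - nir) / (red + nir + eps),      # reversed NDVI (assumed form)
        "NDWI2": (green - nir) / (green + nir + eps),  # McFeeters-style water index
        "PI": nir / (nir + red + eps),                 # plastic index (assumed form)
    }

# Toy cube: 50x50 pixels, 100 bands between 450 and 950 nm
wl = np.linspace(450, 950, 100)
cube = np.random.rand(50, 50, 100)
maps = plastic_indices(cube, wl)
print({k: float(v.mean()) for k, v in maps.items()})
```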
The possibility to mount a hyperspectral sensor on a UAV opens a large field of applications oriented to small plastics
detection [152]. In this scenario, the limitations of satellite data are overcome, and the increase in the spatial resolution
makes a substantial contribution to developing on-demand models able to perform path planning, data acquisition, and
data processing within a short time frame. These advantages are indisputable and contribute to the growing use of these
methodologies. In [153] and [154] authors propose an approach to detect litter in a marine environment by using limited
samples of the spectral bandwidths. The processing of acquired data can be performed by random forest classifiers
[152] or, recently, by deep learning-based approaches [155]. The use of information coming from a sensor working in
Short-Wave InfraRed (SWIR) bandwidth is proposed in [156]. This area of the spectrum is mainly used to distinguish
between different kinds of plastics (i.e., PET, PVC, and so on), while detection can also be performed in other
bandwidths; specifically, the 600-900 nm range is suitable for this purpose. In recent years, the use of plastic in agriculture has
become massive, mainly due to an increase in plastic-covered greenhouse farming areas and plastic-mulched farmlands;
this makes it necessary to develop tools and approaches for the automatic detection and monitoring of such areas, for
sustainable development of horticulture, including high-quality agricultural production, and reduced pollution [157],
[158], [159].
7 Discussion
HSI technology offers considerable advantages over conventional nondestructive testing methods. In classical detection
methods, the inaccuracy caused by manual processes, instrument usage, and the various reagent preparation steps is
unpredictable. With HSI technology, the hyperspectral data of the sample are first extracted by imaging the sample
during detection; these data can then be combined with a machine-learning algorithm to determine sample quality.
Hyperspectral detection thus delivers accurate, quick, and nondestructive results while eliminating errors caused by
external variables, including reagents, instruments, and operators. The early
utilization of hyperspectral data encountered limitations. Initially, researchers followed a conventional approach of
pre-processing (if necessary), extracting, and selecting discriminative features before applying a classifier to identify
land cover groups. Feature extraction techniques such as PCA, ICA, and wavelets were emphasized, but these classic
mathematical methods were insufficient for handling the massive amount of data involved in HSI classification. They
were not capable of accurately predicting multiclass problems and had difficulty with feature selection and storage.
Consequently, researchers faced challenges in analyzing, processing, and classifying HSIs. However, the development
of ML/DL technologies has provided researchers with new avenues for addressing these challenges. Despite this
progress, analyzing and extracting information from HSIs remains challenging due to the large number of highly
correlated bands and high spatial-spectral features embedded in the electromagnetic spectrum. Finding appropriate
technologies for classifying these interconnected, high-dimensional images requires sufficient quality-labeled data, and
unsupervised approaches often result in inaccurate results due to a lack of coherence between spectral clusters and
target regions.
8 Conclusion
This review paper comprehensively presents the individual information for each method, including their performance,
research gaps, and achievements. Additionally, it introduces a novel research methodology that distinguishes this work
from others. Through an in-depth examination of each methodology, significant inferences are drawn, enhancing the
novelty of the work. Furthermore, the paper highlights the applications of HSI technology in agriculture and details the
current research scenario on HSI classification. The paper also covers some of the recently developed techniques that
may be particularly useful in future research.
This article presents an overview of the technologies and procedures used for HSI classification from its inception to
the present day. Despite the significant challenges associated with processing high-band data, researchers have made
significant progress in this field over the last decade, improving existing techniques and developing new ones. With
advancements in technology and the introduction of machine learning, HSI classification has become more accurate
than traditional and contemporary state-of-the-art methodologies. As a result, deep learning has emerged as the most
prominent tool for HSI classification in the past decade, leading to a greater exploration of remote sensing and space
imagery features.
HSI exploration has opened up new avenues for research with numerous real-world applications. While HSI spectral
bands offer certain advantages over multispectral and RGB imaging, they also come with limitations and disadvantages.
Therefore, it is crucial to address the challenges associated with HSI analysis, some of which are listed below:
• Target detection remains a significant difficulty in HSI due to the unpredictable nature of target and background
spectra, making it challenging to develop efficient target detection algorithms.
This paper provides an in-depth analysis of popular deep learning methods used for HSI classification and evaluates
their effectiveness. The techniques discussed below offer researchers new and improved approaches to HSI analysis:
• Meta-learning, which involves creating algorithms that use the same datasets and combine predictions from
various models, is an unexplored area of research in HSI classification.
• Newer datasets, such as the RIT-18 remote sensing dataset [160] and the olive [161] and grape berry [162]
datasets, need to be used to test existing studies and improved techniques. Although RIT-18 is a
multispectral dataset, it supports the development of more reliable and versatile classification methods.
• Automatic parameter selection and optimization for DL and other approaches can benefit from an effective
evolutionary technique or genetic algorithm.
• Data cubes offer a wealth of information about a scene, but the strong correlation between bands can result in
duplicate information. Using automatic protocols to diagnose redundant information can improve classification
accuracy (a simple band-pruning sketch is given after this list).
• Developing cheap, compact, and lightweight mobile HSI setups, including hardware such as filters, sensors, and
lighting sources, is an exciting research topic.
• Recent approaches to pattern recognition can be explored to learn information from data cubes more effectively
and efficiently, as spectral signatures are obtained by extracting and analyzing data from high-dimensional
data cubes.
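As a concrete illustration of the redundancy-diagnosis point above, the sketch below drops one band from every pair
whose pixel-wise correlation exceeds a threshold. It is only an assumption-laden example: the 0.98 threshold and the
greedy keep-first ordering are arbitrary choices for illustration, not a protocol proposed in the reviewed literature.

import numpy as np

def drop_redundant_bands(cube: np.ndarray, threshold: float = 0.98) -> np.ndarray:
    # Compute the Pearson correlation between every pair of bands (each band
    # flattened to a vector of pixel values) and keep a band only if its
    # absolute correlation with every previously kept band stays below the
    # threshold.
    h, w, b = cube.shape
    bands = cube.reshape(-1, b).T              # (B, H*W): one row per band
    corr = np.abs(np.corrcoef(bands))          # (B, B) band-to-band correlation
    kept = []
    for i in range(b):
        if all(corr[i, j] < threshold for j in kept):
            kept.append(i)
    return cube[:, :, kept]

# On real HSI data, strongly correlated neighbouring bands are pruned;
# a synthetic random cube simply passes through almost unchanged.
cube = np.random.rand(64, 64, 100)
print(drop_redundant_bands(cube).shape)

More elaborate selection schemes, such as mutual-information criteria or the evolutionary search suggested above,
follow the same pattern of scoring and pruning bands before the classifier is trained.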
Acknowledgement
This work was supported by the Ministry of Enterprises and Made in Italy with the grant ENDOR "ENabling technologies
for Defence and mOnitoring of the foRests" - PON 2014-2020 FESR - CUP B82C21001750005.
References
[1] Prabira Kumar Sethy, Chanki Pandey, Yogesh Kumar Sahu, and Santi Kumari Behera. Hyperspectral imagery
applications for precision agriculture-a systemic survey. Multimedia Tools and Applications, pages 1–34, 2021.
[2] Lankapalli Ravikanth, Digvir S Jayas, Noel DG White, Paul G Fields, and Da-Wen Sun. Extraction of spectral
information from hyperspectral data and application of hyperspectral imaging for food and agricultural products.
Food and bioprocess technology, 10:1–33, 2017.
[3] Yufeng Ge, Geng Bai, Vincent Stoerger, and James C Schnable. Temporal dynamics of maize plant growth,
water use, and leaf water content using automated high throughput rgb and hyperspectral imaging. Computers
and Electronics in Agriculture, 127:625–632, 2016.
[4] Xia Zhang, Yanli Sun, Kun Shang, Lifu Zhang, and Shudong Wang. Crop classification based on feature band
set construction and object-oriented approach using hyperspectral images. IEEE Journal of Selected Topics in
Applied Earth Observations and Remote Sensing, 9(9):4117–4128, 2016.
[5] Bo Li, Xiangming Xu, Li Zhang, Jiwan Han, Chunsong Bian, Guangcun Li, Jiangang Liu, and Liping Jin.
Above-ground biomass estimation and yield prediction in potato by using uav-based rgb and hyperspectral
imaging. ISPRS Journal of Photogrammetry and Remote Sensing, 162:161–172, 2020.
[6] A-K Mahlein, T Rumpf, P Welke, H-W Dehne, L Plümer, U Steiner, and E-C Oerke. Development of spectral
indices for detecting and identifying plant diseases. Remote Sensing of Environment, 128:21–30, 2013.
[7] Thomas Selige, Jürgen Böhner, and Urs Schmidhalter. High resolution topsoil mapping using hyperspectral
image and field data in multivariate regression modeling procedures. Geoderma, 136(1-2):235–244, 2006.
[8] Ö Gürsoy, AC Birdal, F Özyonar, and E Kasaka. Determining and monitoring the water quality of kizilirmak river
of turkey: First results. The International Archives of Photogrammetry, Remote Sensing and Spatial Information
Sciences, 40(7):1469, 2015.
[9] B Weber, C Olehowski, T Knerr, J Hill, K Deutschewitz, DCJ Wessels, B Eitel, and B Büdel. A new approach for
mapping of biological soil crusts in semidesert areas with hyperspectral imagery. Remote Sensing of Environment,
112(5):2187–2201, 2008.
[10] Abdelmalek Bouguettaya, Hafed Zarzour, Ahmed Kechida, and Amine Mohammed Taberkit. Deep learning
techniques to classify agricultural crops through uav imagery: a review. Neural Computing and Applications,
pages 1–26, 2022.
[11] Jean Frederic Isingizwe Nturambirwe and Umezuruike Linus Opara. Machine learning applications to non-
destructive defect detection in horticultural products. Biosystems Engineering, 189:60–83, 2020.
[12] Alberto Signoroni, Mattia Savardi, Annalisa Baronio, and Sergio Benini. Deep learning meets hyperspectral
image analysis: A multidisciplinary review. Journal of Imaging, 5(5):52, 2019.
[13] Edoardo Vantaggiato, Emanuela Paladini, Fares Bougourzi, Cosimo Distante, Abdenour Hadid, and Abdelmalik
Taleb-Ahmed. Covid-19 recognition using ensemble-cnns in two new chest x-ray databases. Sensors, 21(5):1742,
2021.
[14] Garima Jaiswal, Arun Sharma, and Sumit Kumar Yadav. Critical insights into modern hyperspectral image
applications through deep learning. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery,
11(6):e1426, 2021.
[15] Fares Bougourzi, Cosimo Distante, Fadi Dornaika, and Abdelmalik Taleb-Ahmed. Cnr-iemn-cd and cnr-iemn-csd
approaches for covid-19 detection and covid-19 severity detection from 3d ct-scans. In Computer Vision–ECCV
2022 Workshops: Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part VII, pages 593–604. Springer, 2023.
[16] Fares Bougourzi, Fadi Dornaika, and Abdelmalik Taleb-Ahmed. Deep learning based face beauty prediction via
dynamic robust losses and ensemble regression. Knowledge-Based Systems, 242:108246, 2022.
[17] Bing Lu, Phuong D Dao, Jiangui Liu, Yuhong He, and Jiali Shang. Recent advances of hyperspectral imaging
technology and applications in agriculture. Remote Sensing, 12(16):2659, 2020.
[18] Ning Zhang, Guijun Yang, Yuchun Pan, Xiaodong Yang, Liping Chen, and Chunjiang Zhao. A review of
advanced technologies and development for hyperspectral-based plant disease detection in the past three decades.
Remote Sensing, 12(19):3188, 2020.
[19] Chunying Wang, Baohua Liu, Lipeng Liu, Yanjun Zhu, Jialin Hou, Ping Liu, and Xiang Li. A review of deep
learning used in the hyperspectral image analysis for agriculture. Artificial Intelligence Review, 54(7):5205–5253,
2021.
[20] Anton Terentev, Viktor Dolzhenko, Alexander Fedotov, and Danila Eremenko. Current state of hyperspectral
remote sensing for early plant disease detection: A review. Sensors, 22(3):757, 2022.
[21] Gerda J Edelman, Edurne Gaston, Ton G Van Leeuwen, PJ Cullen, and Maurice CG Aalders. Hyperspectral
imaging for non-contact analysis of forensic traces. Forensic science international, 223(1-3):28–39, 2012.
[22] BE Woodgate, RA Kimble, CW Bowers, S Kraemer, ME Kaiser, AC Danks, JF Grady, JJ Loiacono, M Brumfield,
L Feinberg, et al. The space telescope imaging spectrograph design. Publications of the Astronomical Society
of the Pacific, 110(752):1183, 1998.
[23] Abbas El Gamal and Helmy Eltoukhy. Cmos image sensors. IEEE Circuits and Devices Magazine, 21(3):6–20,
2005.
[24] Chein-I Chang. Hyperspectral imaging: techniques for spectral detection and classification, volume 1. Springer
Science & Business Media, 2003.
[25] Dimitris G Manolakis, Ronald B Lockwood, and Thomas W Cooley. Hyperspectral imaging remote sensing:
physics, sensors, and algorithms. Cambridge University Press, 2016.
[26] Di Wu and Da-Wen Sun. Advanced applications of hyperspectral imaging technology for food quality and safety
analysis and assessment: A review—part ii: Applications. Innovative Food Science & Emerging Technologies,
19:15–28, 2013.
[27] Maider Vidal and José Manuel Amigo. Pre-processing of hyperspectral images. essential steps before image
analysis. Chemometrics and Intelligent Laboratory Systems, 117:138–148, 2012.
[28] Jiangbo Li, Ruoyu Zhang, Jingbin Li, Zheli Wang, Hailiang Zhang, Baishao Zhan, and Yinglan Jiang. Detection
of early decayed oranges based on multispectral principal component image combining both bi-dimensional
empirical mode decomposition and watershed segmentation method. Postharvest Biology and Technology,
158:110986, 2019.
[29] A Merlaud, D Constantin, F Mingireanu, I Mocanu, C Fayt, J Maes, G Murariu, M Voiculescu, L Georgescu,
and M Van Roozendael. Small whiskbroom imager for atmospheric composition monitoring (swing) from an
unmanned aerial vehicle (uav). In Proceedings of the 21st ESA Symposium on European Rocket & Balloon
Programmes and related Research, Thun, Switzerland, pages 9–13, 2013.
[30] Alexis Merlaud, Frederik Tack, Daniel Constantin, Lucian Georgescu, Jeroen Maes, Caroline Fayt, Florin
Mingireanu, Dirk Schuettemeyer, Andreas Carlos Meier, Anja Schönardt, et al. The small whiskbroom imager
for atmospheric composition monitoring (swing) and its operations from an unmanned aerial vehicle (uav) during
the aromat campaign. Atmospheric Measurement Techniques, 11(1):551–567, 2018.
[31] Amin Beiranvand Pour and Mazlan Hashim. Aster, ali and hyperion sensors data for lithological mapping and
ore minerals exploration. SpringerPlus, 3:1–19, 2014.
[32] Young-Ran Lee, Ayman Habib, and Kyung-Ok Kim. A study on aerial triangulation from multi-sensor imagery.
Korean Journal of Remote Sensing, 19(3):255–261, 2003.
[33] Anne-Katrin Mahlein, Ulrike Steiner, Christian Hillnhütter, Heinz-Wilhelm Dehne, and Erich-Christian Oerke.
Hyperspectral imaging for small-scale analysis of symptoms caused by different sugar beet diseases. Plant
methods, 8:1–13, 2012.
[34] Rocío Hernández-Clemente, Alberto Hornero, Matti Mottus, Josep Peñuelas, Victoria González-Dugo,
JC Jiménez, L Suárez, Luis Alonso, and Pablo J Zarco-Tejada. Early diagnosis of vegetation health from
high-resolution hyperspectral and thermal imagery: Lessons learned from empirical relationships and radiative
transfer modelling. Current forestry reports, 5:169–183, 2019.
[35] Marena Manley. Near-infrared spectroscopy and hyperspectral imaging: non-destructive analysis of biological
materials. Chemical Society Reviews, 43(24):8200–8214, 2014.
[36] Stefan Thomas, Jan Behmann, Angelina Steier, Thorsten Kraska, Onno Muller, Uwe Rascher, and Anne-Katrin
Mahlein. Quantitative assessment of disease severity and rating of barley cultivars based on hyperspectral
imaging in a non-invasive, automated phenotyping platform. Plant methods, 14(1):1–12, 2018.
[37] Yuanyue Guo, Xuezhi He, and Dongjin Wang. A novel super-resolution imaging method based on stochastic
radiation radar array. Measurement Science and Technology, 24(7):074013, 2013.
[38] Zhixin Zhang, Yun Shao, Wei Tian, Qiufang Wei, Yazhou Zhang, and Qingjun Zhang. Application potential of
gf-4 images for dynamic ship monitoring. IEEE Geoscience and Remote Sensing Letters, 14(6):911–915, 2017.
[39] William R Johnson, Daniel W Wilson, Wolfgang Fink, Mark Humayun, and Greg Bearman. Snapshot hyperspec-
tral imaging in ophthalmology. Journal of biomedical optics, 12(1):014036–014036, 2007.
[40] Y Arslan, F Oguz, and C Besikci. Extended wavelength swir ingaas focal plane array: Characteristics and
limitations. Infrared Physics & Technology, 70:134–137, 2015.
[41] Liang Gao, Robert T Kester, Nathan Hagen, and Tomasz S Tkaczyk. Snapshot image mapping spectrometer
(ims) with high sampling density for hyperspectral microscopy. Optics express, 18(14):14330–14344, 2010.
[42] AR Offringa, Benjamin McKinley, Natasha Hurley-Walker, FH Briggs, RB Wayth, DL Kaplan, ME Bell,
Lu Feng, AR Neben, JD Hughes, et al. Wsclean: an implementation of a fast, generic wide-field imager for radio
astronomy. Monthly Notices of the Royal Astronomical Society, 444(1):606–619, 2014.
[43] Changben Yu, Jin Yang, Nan Song, Ci Sun, Mingjia Wang, and Shulong Feng. Microlens array snapshot
hyperspectral microscopy system for the biomedical domain. Applied Optics, 60(7):1896–1902, 2021.
[44] Hyperspectral Remote Sensing Scenes. Available online: https://www.ehu.eus/ccwintco/index.php/
Hyperspectral_Remote_Sensing_Scenes (accessed on 22 April 2020), 2020.
[45] Naoto Yokoya and Akira Iwasaki. Airborne hyperspectral data over chikusei. Space Appl. Lab., Univ. Tokyo,
Tokyo, Japan, Tech. Rep. SAL-2016-05-27, 5, 2016.
[46] Yanfei Zhong, Xinyu Wang, Yao Xu, Shaoyu Wang, Tianyi Jia, Xin Hu, Ji Zhao, Lifei Wei, and Liangpei
Zhang. Mini-uav-borne hyperspectral remote sensing: From observation and processing to applications. IEEE
Geoscience and Remote Sensing Magazine, 6(4):46–62, 2018.
[47] Himanshi Yadav, Alberto Candela, and David Wettergreen. A study of unsupervised classification techniques for
hyperspectral datasets. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium,
pages 2993–2996. IEEE, 2019.
[48] Yanfei Zhong, Xin Hu, Chang Luo, Xinyu Wang, Ji Zhao, and Liangpei Zhang. Whu-hi: Uav-borne hyperspectral
with high spatial resolution (h2) benchmark datasets and classifier for precise crop identification based on deep
convolutional neural network with crf. Remote Sensing of Environment, 250:112012, 2020.
[49] Amirhossein Hassanzadeh, Jan van Aardt, Sean Patrick Murphy, and Sarah Jane Pethybridge. Yield modeling of
snap bean based on hyperspectral sensing: a greenhouse study. Journal of Applied Remote Sensing, 14(2):024519,
2020.
[50] Thomas Blaschke, Geoffrey J Hay, Maggi Kelly, Stefan Lang, Peter Hofmann, Elisabeth Addink, Raul Queiroz
Feitosa, Freek Van der Meer, Harald Van der Werff, Frieke Van Coillie, et al. Geographic object-based image
analysis–towards a new paradigm. ISPRS journal of photogrammetry and remote sensing, 87:180–191, 2014.
[51] Qin-Qin Tao, Shu Zhan, Xiao-Hong Li, and Toru Kurihara. Robust face detection using local cnn and svm based
on kernel combination. Neurocomputing, 211:98–105, 2016.
[52] Fares Bougourzi, Cosimo Distante, Fadi Dornaika, and Abdelmalik Taleb-Ahmed. Pdatt-unet: Pyramid dual-
decoder attention unet for covid-19 infection segmentation from ct-scans. Medical Image Analysis, page 102797,
2023.
[53] Fares Bougourzi, Cosimo Distante, Fadi Dornaika, and Abdelmalik Taleb-Ahmed. D-trattunet: Dual-decoder
transformer-based attention unet architecture for binary and multi-classes covid-19 infection segmentation. arXiv
preprint arXiv:2303.15576, 2023.
[54] Wei Lin, Xiangyong Liao, Juan Deng, and Yao Liu. Land cover classification of radarsat-2 sar data using
convolutional neural network. Wuhan University Journal of Natural Sciences, 21(2):151–158, 2016.
[55] Naftaly Wambugu, Yiping Chen, Zhenlong Xiao, Mingqiang Wei, Saifullahi Aminu Bello, José Marcato Junior,
and Jonathan Li. A hybrid deep convolutional neural network for accurate land cover classification. International
Journal of Applied Earth Observation and Geoinformation, 103:102515, 2021.
[56] Manuel Carranza-García, Jorge García-Gutiérrez, and José C Riquelme. A framework for evaluating land use
and land cover classification using convolutional neural networks. Remote Sensing, 11(3):274, 2019.
[57] Yangyang Chen, Dongping Ming, and Xianwei Lv. Superpixel based land cover classification of vhr satellite
image combining multi-scale cnn and scale parameter estimation. Earth Science Informatics, 12:341–363, 2019.
[58] Feilong Cao and Wenhui Guo. Deep hybrid dilated residual networks for hyperspectral image classification.
Neurocomputing, 384:170–181, 2020.
[59] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
[60] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
[61] Zilong Zhong, Jonathan Li, Lingfei Ma, Han Jiang, and He Zhao. Deep residual networks for hyperspectral
image classification. In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages
1824–1827, 2017.
[62] Qiangqiang Yuan, Qiang Zhang, Jie Li, Huanfeng Shen, and Liangpei Zhang. Hyperspectral image denoising
employing a spatial–spectral deep residual convolutional neural network. IEEE Transactions on Geoscience and
Remote Sensing, 57(2):1205–1218, 2018.
[63] Chen Wang, Yun Liu, Xiao Bai, Wenzhong Tang, Peng Lei, and Jun Zhou. Deep residual convolutional neural
network for hyperspectral image super-resolution. In International conference on image and graphics, pages
370–380. Springer, 2017.
[64] Lichao Mou, Pedram Ghamisi, and Xiao Xiang Zhu. Unsupervised spectral–spatial feature learning via deep
residual conv–deconv network for hyperspectral image classification. IEEE Transactions on Geoscience and
Remote Sensing, 56(1):391–406, 2017.
[65] Chen Xing, Li Ma, and Xiaoquan Yang. Stacked denoise autoencoder based feature extraction and classification
for hyperspectral images. Journal of Sensors, 2016, 2016.
[66] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep
networks. Advances in neural information processing systems, 19, 2006.
[67] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing
robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine
learning, pages 1096–1103, 2008.
[68] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, Pierre-Antoine Manzagol, and Léon Bottou.
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising
criterion. Journal of machine learning research, 11(12), 2010.
[69] Peicheng Zhou, Junwei Han, Gong Cheng, and Baochang Zhang. Learning compact and discriminative stacked
autoencoder for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing,
57(7):4823–4833, 2019.
[70] Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural
computation, 18(7):1527–1554, 2006.
[71] Yushi Chen, Xing Zhao, and Xiuping Jia. Spectral–spatial classification of hyperspectral data based on deep belief
network. IEEE journal of selected topics in applied earth observations and remote sensing, 8(6):2381–2392,
2015.
[72] Anil B Gavade and Vijay S Rajpurohit. A hybrid optimization-based deep belief neural network for the
classification of vegetation area in multi-spectral satellite image. International Journal of Knowledge-based and
Intelligent Engineering Systems, 24(4):363–379, 2020.
[73] Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks.
Neural computation, 1(2):270–280, 1989.
[74] Lichao Mou, Pedram Ghamisi, and Xiao Xiang Zhu. Deep recurrent neural networks for hyperspectral image
classification. IEEE Transactions on Geoscience and Remote Sensing, 55(7):3639–3655, 2017.
[75] Zhuocheng Jiang, W David Pan, and Hongda Shen. Lstm based adaptive filtering for reduced prediction errors of
hyperspectral images. In 2018 6th IEEE international conference on wireless for space and extreme environments
(WISEE), pages 158–162. IEEE, 2018.
[76] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated
recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[77] Qingshan Liu, Feng Zhou, Renlong Hang, and Xiaotong Yuan. Bidirectional-convolutional lstm based spectral-
spatial feature learning for hyperspectral image classification. Remote Sensing, 9(12):1330, 2017.
[78] Feng Zhou, Renlong Hang, Qingshan Liu, and Xiaotong Yuan. Hyperspectral image classification using
spectral-spatial lstms. Neurocomputing, 328:39–47, 2019.
[79] I Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, and Y Bengio. Generative
adversarial nets. In Proceedings of the International Conference on Neural Information Processing Systems, 2014.
[80] FJ Chen, JM Li, and DY Yang. Hyperspectral image classification based on generative adversarial networks.
Comput Eng Appl, 55(22):172–179, 2019.
[81] Lin Zhu, Yushi Chen, Pedram Ghamisi, and Jón Atli Benediktsson. Generative adversarial networks for
hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 56(9):5046–5063,
2018.
[82] Zhixiang Xue. A general generative adversarial capsule network for hyperspectral image spectral-spatial
classification. Remote Sensing Letters, 11(1):19–28, 2020.
[83] Xiaobo Liu, Yulin Qiao, Yonghua Xiong, Zhihua Cai, and Peng Liu. Cascade conditional generative adversarial
nets for spatial-spectral hyperspectral sample generation. Science China Information Sciences, 63(4):1–16, 2020.
[84] Hongmin Gao, Dan Yao, Mingxia Wang, Chenming Li, Haiyun Liu, Zaijun Hua, and Jiawei Wang. A hyper-
spectral image classification method based on multi-discriminator generative adversarial networks. Sensors,
19(15):3269, 2019.
[85] Zhi He, Han Liu, Yiwen Wang, and Jie Hu. Generative adversarial networks-based semi-supervised learning for
hyperspectral image classification. Remote Sensing, 9(10):1042, 2017.
[86] Ying Zhan, Dan Hu, Yuntao Wang, and Xianchuan Yu. Semisupervised hyperspectral image classification based
on generative adversarial networks. IEEE Geoscience and Remote Sensing Letters, 15(2):212–216, 2017.
[87] Ying Cui, Xiaowei Ji, Kai Xu, and Liguo Wang. A double-strategy-check active learning algorithm for
hyperspectral image classification. Photogrammetric Engineering & Remote Sensing, 85(11):841–851, 2019.
[88] Zhao Lei, Yi Zeng, Peng Liu, and Xiaohui Su. Active deep learning for hyperspectral image classification with
uncertainty learning. IEEE geoscience and remote sensing letters, 19:1–5, 2021.
[89] Kaushal Bhardwaj, Arundhati Das, and Swarnajyoti Patra. Spectral-spatial active learning with superpixel
profile for classification of hyperspectral images. In 2020 6th international conference on signal processing and
communication (ICSC), pages 149–155. IEEE, 2020.
[90] Justin S Smith, Ben Nebgen, Nicholas Lubbers, Olexandr Isayev, and Adrian E Roitberg. Less is more: Sampling
chemical space with active learning. The Journal of chemical physics, 148(24):241733, 2018.
[91] Vu-Linh Nguyen, Mohammad Hossein Shaker, and Eyke Hüllermeier. How to measure uncertainty in uncertainty
sampling for active learning. Machine Learning, 111(1):89–122, 2022.
[92] Zhou Zhang, Edoardo Pasolli, and Melba M Crawford. An adaptive multiview active learning approach for
spectral–spatial classification of hyperspectral images. IEEE Transactions on Geoscience and Remote Sensing,
58(4):2557–2570, 2019.
[93] Chenying Liu, Lin He, Zhetao Li, and Jun Li. Feature-driven active learning for hyperspectral image classification.
IEEE Transactions on Geoscience and Remote Sensing, 56(1):341–354, 2017.
[94] Jingxiang Yang, Yongqiang Zhao, Jonathan Cheung-Wai Chan, and Chen Yi. Hyperspectral image classification
using two-channel deep convolutional neural network. In 2016 IEEE international geoscience and remote
sensing symposium (IGARSS), pages 5079–5082. IEEE, 2016.
[95] Yuan Yuan, Xiangtao Zheng, and Xiaoqiang Lu. Hyperspectral image superresolution by transfer learning. IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(5):1963–1974, 2017.
[96] Jaime Zabalza, Jinchang Ren, Jiangbin Zheng, Huimin Zhao, Chunmei Qing, Zhijing Yang, Peijun Du, and
Stephen Marshall. Novel segmented stacked autoencoder for effective dimensionality reduction and feature
extraction in hyperspectral imaging. Neurocomputing, 185:1–10, 2016.
[97] Simranjit Singh and Singara Singh Kasana. Efficient classification of the hyperspectral images using deep
learning. Multimedia Tools and Applications, 77:27061–27074, 2018.
[98] Mercedes E Paoletti, Juan Mario Haut, Javier Plaza, and Antonio Plaza. A new deep convolutional neural
network for fast hyperspectral image classification. ISPRS journal of photogrammetry and remote sensing,
145:120–147, 2018.
[99] Radhesyam Vaddi and Prabukumar Manoharan. Hyperspectral image classification using cnn with spectral and
spatial features integration. Infrared Physics & Technology, 107:103296, 2020.
[100] Simranjit Singh and Singara Singh Kasana. A pre-processing framework for spectral classification of hyperspec-
tral images. Multimedia Tools and Applications, 80(1):243–261, 2021.
[101] Yanting Zhan, Ke Wu, and Yanni Dong. Enhanced spectral–spatial residual attention network for hyperspectral
image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
15:7171–7186, 2022.
[102] Shivam Pande and Biplab Banerjee. Hyperloopnet: Hyperspectral image classification using multiscale self-
looping convolutional networks. ISPRS Journal of Photogrammetry and Remote Sensing, 183:422–438, 2022.
[103] Chandra Shekhar Yadav, Monoj Kumar Pradhan, Syam Machinathu Parambil Gangadharan, Jitendra Kumar
Chaudhary, Jagendra Singh, Arfat Ahmad Khan, Mohd Anul Haq, Ahmed Alhussen, Chitapong Wechtaisong,
Hazra Imran, et al. Multi-class pixel certainty active learning model for classification of land cover classes using
hyperspectral imagery. Electronics, 11(17):2799, 2022.
[104] Zhengying Li, Hong Huang, Zhen Zhang, and Guangyao Shi. Manifold-based multi-deep belief network for
feature extraction of hyperspectral image. Remote Sensing, 14(6):1484, 2022.
[105] Mohammed Q Alkhatib, Mina Al-Saad, Nour Aburaed, Saeed Almansoori, Jaime Zabalza, Stephen Marshall,
and Hussain Al-Ahmad. Tri-cnn: a three branch model for hyperspectral image classification. Remote Sensing,
15(2):316, 2023.
[106] Lijian Zhou, Xiaoyu Ma, Xiliang Wang, Siyuan Hao, Yuanxin Ye, and Kun Zhao. Shallow-to-deep spatial–
spectral feature enhancement for hyperspectral image classification. Remote Sensing, 15(1):261, 2023.
[107] Ying-Qiang Song, Xin Zhao, Hui-Yue Su, Bo Li, Yue-Ming Hu, and Xue-Sen Cui. Predicting spatial variations
in soil nutrients with hyperspectral remote sensing at regional scale. Sensors, 18(9):3086, 2018.
[108] Di Wu, Hui Shi, Songjing Wang, Yong He, Yidan Bao, and Kangsheng Liu. Rapid prediction of moisture content
of dehydrated prawns using online hyperspectral imaging system. Analytica Chimica Acta, 726:57–66, 2012.
[109] Shengyao Jia, Hongyang Li, Yanjie Wang, Renyuan Tong, and Qing Li. Hyperspectral imaging analysis for the
classification of soil types and the determination of soil total nitrogen. Sensors, 17(10):2252, 2017.
[110] Lifei Wei, Yangxi Zhang, Qikai Lu, Ziran Yuan, Haibo Li, and Qingbin Huang. Estimating the spatial distribution
of soil total arsenic in the suspected contaminated area using uav-borne hyperspectral imagery and deep learning.
Ecological Indicators, 133:108384, 2021.
[111] Ajay Kumar Patel, Jayanta Kumar Ghosh, Shivam Pande, and Sameer Usmangani Sayyad. Deep-learning-based
approach for estimation of fractional abundance of nitrogen in soil from hyperspectral data. IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing, 13:6495–6511, 2020.
[112] Yongsheng Hong, Long Guo, Songchao Chen, Marc Linderman, Abdul M Mouazen, Lei Yu, Yiyun Chen, Yaolin
Liu, Yanfang Liu, Hang Cheng, et al. Exploring the potential of airborne hyperspectral image for estimating
topsoil organic carbon: Effects of fractional-order derivative and optimal band combination algorithm. Geoderma,
365:114228, 2020.
[113] Kun Tan, Huimin Wang, Lihan Chen, Qian Du, Peijun Du, and Cencen Pan. Estimation of the spatial distribution
of heavy metal in agricultural soils using airborne hyperspectral imaging and random forest. Journal of hazardous
materials, 382:120987, 2020.
[114] Amanda Silveira Reis, Marlon Rodrigues, Glaucio Leboso Alemparte Abrantes dos Santos, Karym Mayara
de Oliveira, Renato Herrig Furlanetto, Luís Guilherme Teixeira Crusiol, Everson Cezar, and Marcos Rafael Nanni.
Detection of soil organic matter using hyperspectral imaging sensor combined with multivariate regression
modeling procedures. Remote Sensing Applications: Society and Environment, 22:100492, 2021.
[115] Shiqi Tian, Shijie Wang, Xiaoyong Bai, Dequan Zhou, Qian Lu, Mingming Wang, and Jinfeng Wang. Hy-
perspectral estimation model of soil pb content and its applicability in different soil types. Acta Geochimica,
39(3):423–433, 2020.
[116] Simranjit Singh and Singara Singh Kasana. Quantitative estimation of soil properties using hybrid features and
rnn variants. Chemosphere, 287:131889, 2022.
[117] Sangeetha Annam and Anshu Singla. Estimating the concentration of soil heavy metals in agricultural areas
from aviris hyperspectral imagery. International Journal of Intelligent Systems and Applications in Engineering,
11(2s):156–164, 2023.
[118] Khalid A Al-Gaadi, Abdalhaleem A Hassaballa, ElKamil Tola, Ahmed G Kayad, Rangaswamy Madugundu,
Bander Alblewi, and Fahad Assiri. Prediction of potato crop yield using precision agriculture techniques. PloS
one, 11(9):e0162219, 2016.
[119] Wei Yang, Tyler Nigon, Ziyuan Hao, Gabriel Dias Paiao, Fabián G Fernández, David Mulla, and Ce Yang.
Estimation of corn yield based on hyperspectral imagery and convolutional neural network. Computers and
Electronics in Agriculture, 184:106092, 2021.
[120] Luz Angelica Suarez, Andrew Robson, John McPhee, Julie O’Halloran, and Celia van Sprang. Accuracy of
carrot yield forecasting using proximal hyperspectral and satellite multispectral data. Precision Agriculture,
21(6):1304–1326, 2020.
[121] Lei Pang, Sen Men, Lei Yan, and Jiang Xiao. Rapid vitality estimation and prediction of corn seeds based on
spectra and images using deep learning and hyperspectral imaging techniques. IEEE Access, 8:123026–123036,
2020.
[122] Jakob Geipel, Anne Kjersti Bakken, Marit Jørgensen, and Audun Korsaeth. Forage yield and quality estimation
by means of uav and hyperspectral imaging. Precision Agriculture, 22(5):1437–1463, 2021.
[123] Nizom Farmonov, Khilola Amankulova, József Szatmári, Alireza Sharifi, Dariush Abbasi-Moghadam, Seyed
Mahdi Mirhoseini Nejad, and László Mucsi. Crop type classification by desis hyperspectral imagery and machine
learning algorithms. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
16:1576–1588, 2023.
[124] Christoph Römer, Mirwaes Wahabzada, Agim Ballvora, Francisco Pinto, Micol Rossini, Cinzia Panigada, Jan
Behmann, Jens Léon, Christian Thurau, Christian Bauckhage, et al. Early drought stress detection in cereals:
simplex volume maximisation for hyperspectral image analysis. Functional Plant Biology, 39(11):878–890,
2012.
[125] Nathalie Vigneau, Martin Ecarnot, Gilles Rabatel, and Pierre Roumet. Potential of field hyperspectral imaging as
a non destructive method to assess leaf nitrogen content in wheat. Field Crops Research, 122(1):25–31, 2011.
[126] Jun Zhang, Limin Dai, and Fang Cheng. Corn seed variety classification based on hyperspectral reflectance
imaging and deep convolutional neural network. Journal of Food Measurement and Characterization, 15(1):484–
494, 2021.
[127] Chu Zhang, Yiying Zhao, Tianying Yan, Xiulin Bai, Qinlin Xiao, Pan Gao, Mu Li, Wei Huang, Yidan Bao, Yong
He, et al. Application of near-infrared hyperspectral imaging for variety identification of coated maize kernels
with deep learning. Infrared Physics & Technology, 111:103550, 2020.
[128] Cecilia Riccioli, Dolores Pérez-Marín, and Ana Garrido-Varo. Optimizing spatial data reduction in hyperspectral
imaging for the prediction of quality parameters in intact oranges. Postharvest Biology and Technology,
176:111504, 2021.
[129] Weijie Lan, Benoit Jaillais, Catherine MGC Renard, Alexandre Leca, Songchao Chen, Carine Le Bourvellec,
and Sylvie Bureau. A method using near infrared hyperspectral imaging to highlight the internal quality of apple
fruit slices. Postharvest Biology and Technology, 175:111497, 2021.
[130] Suk-Ju Hong, Seongmin Park, Ahyeong Lee, Sang-Yeon Kim, Eungchan Kim, Chang-Hyup Lee, and Ghiseok
Kim. Nondestructive prediction of pepper seed viability using single and fusion information of hyperspectral
and x-ray images. Sensors and Actuators A: Physical, 350:114151, 2023.
[131] Xinzhi Liu, Jun Yu, Toru Kurihara, Congzhong Wu, Zhao Niu, and Shu Zhan. Pixelwise complex-valued neural
network based on 1d fft of hyperspectral data to improve green pepper segmentation in agriculture. Applied
Sciences, 13(4):2697, 2023.
[132] Michael S Watt, Grant D Pearse, Jonathan P Dash, Nathanael Melia, and Ellen Mae C Leonardo. Application
of remote sensing technologies to identify impacts of nutritional deficiencies on forests. ISPRS Journal of
Photogrammetry and Remote Sensing, 149:226–241, 2019.
[133] Driss Haboudane, Nicolas Tremblay, John R Miller, and Philippe Vigneault. Remote estimation of crop
chlorophyll content using spectral indices derived from hyperspectral data. IEEE Transactions on Geoscience
and Remote Sensing, 46(2):423–437, 2008.
[134] Nathalie Al Makdessi, Martin Ecarnot, Pierre Roumet, and Gilles Rabatel. A spectral correction method for
multi-scattering effects in close range hyperspectral imagery of vegetation scenes: application to nitrogen content
assessment in wheat. Precision Agriculture, 20(2):237–259, 2019.
[135] Dehua Gao, Minzan Li, Junyi Zhang, Di Song, Hong Sun, Lang Qiao, and Ruomei Zhao. Improvement of
chlorophyll content estimation on maize leaf by vein removal in hyperspectral image. Computers and Electronics
in Agriculture, 184:106077, 2021.
[136] Chen Liu, Wenqian Huang, Guiyan Yang, Qingyan Wang, Jiangbo Li, and Liping Chen. Determination of starch
content in single kernel using near-infrared hyperspectral images from two sides of corn seeds. Infrared Physics
& Technology, 110:103462, 2020.
[137] Judit Rubio-Delgado, Carlos J Pérez, and Miguel A Vega-Rodríguez. Predicting leaf nitrogen content in olive
trees using hyperspectral data for precision agriculture. Precision Agriculture, 22(1):1–21, 2021.
[138] Neelam Agrawal and Himanshu Govil. A deep residual convolutional neural network for mineral classification.
Advances in Space Research, 71(8):3186–3202, 2023.
[139] Peyman Moghadam, Daniel Ward, Ethan Goan, Srimal Jayawardena, Pavan Sikka, and Emili Hernandez. Plant
disease detection using hyperspectral imaging. In 2017 International Conference on Digital Image Computing:
Techniques and Applications (DICTA), pages 1–8. IEEE, 2017.
[140] Jaafar Abdulridha, Yiannis Ampatzidis, Sri Charan Kakarla, and Pamela Roberts. Detection of target spot
and bacterial spot diseases in tomato using uav-based and benchtop-based hyperspectral imaging techniques.
Precision Agriculture, 21(5):955–978, 2020.
[141] Chao Qi, Murilo Sandroni, Jesper Cairo Westergaard, Ea Høegh Riis Sundmark, Merethe Bagge, Erik Alexander-
sson, and Junfeng Gao. In-field classification of the asymptomatic biotrophic phase of potato late blight based
on deep learning and proximal hyperspectral imaging. Computers and Electronics in Agriculture, 205:107585,
2023.
[142] Koushik Nagasubramanian, Sarah Jones, Asheesh K Singh, Soumik Sarkar, Arti Singh, and Baskar Ganapa-
thysubramanian. Plant disease identification using explainable 3d deep learning on hyperspectral images. Plant
methods, 15(1):1–10, 2019.
[143] Zongmei Gao, Lav R Khot, Rayapati A Naidu, and Qin Zhang. Early detection of grapevine leafroll disease
in a red-berried wine grape cultivar using hyperspectral imaging. Computers and Electronics in Agriculture,
179:105807, 2020.
[144] Hongfei Zhu, Lianhe Yang, and Zhongzhi Han. Quantitative aflatoxin b1 detection and mining key wavelengths
based on deep learning and hyperspectral imaging in subpixel level. Computers and Electronics in Agriculture,
206:107561, 2023.
[145] Zhaoxia Lou, Longzhe Quan, Deng Sun, Hailong Li, and Fulin Xia. Hyperspectral remote sensing to assess
weed competitiveness in maize farmland ecosystems. Science of The Total Environment, 844:157071, 2022.
[146] Lai Zhi Yong, Siti Khairunniza-Bejo, Mahirah Jahari, and Farrah Melissa Muharam. Automatic disease detection
of basal stem rot using deep learning and hyperspectral imaging. Agriculture, 13(1):69, 2023.
[147] Chunmao Zhu, Yugo Kanaya, Masashi Tsuchiya, Ryota Nakajima, Hidetaka Nomaki, Tomo Kitahashi, and
Katsunori Fujikura. Optimization of a hyperspectral imaging system for rapid detection of microplastics down to
100 µm. MethodsX, 8:101175, 2021.
[148] Kyriacos Themistocleous, Christiana Papoutsa, Silas Michaelides, and Diofantos Hadjimitsis. Investigating
detection of floating plastic litter from space using sentinel-2 imagery. Remote Sensing, 12(16):2648, 2020.
[149] Konstantinos Topouzelis, Dimitris Papageorgiou, Alexandros Karagaitanakis, Apostolos Papakonstantinou, and
Manuel Arias Ballesteros. Remote sensing of sea surface artificial floating plastic targets with sentinel-2 and
unmanned aerial systems (plastic litter project 2019). Remote Sensing, 12(12):2013, 2020.
[150] Robert Page, Samantha Lavender, Dean Thomas, Katie Berry, Susan Stevens, Mohammed Haq, Emmanuel
Udugbezi, Gillian Fowler, Jennifer Best, and Iain Brockie. Identification of tyre and plastic waste from combined
copernicus sentinel-1 and-2 data. Remote Sensing, 12(17):2824, 2020.
[151] Lauren Biermann, Daniel Clewley, Victor Martinez-Vicente, and Konstantinos Topouzelis. Finding plastic
patches in coastal waters using optical satellite data. Scientific reports, 10(1):1–10, 2020.
[152] I Cortesi, A Masiero, G Tucci, and K Topouzelis. Uav-based river plastic detection with a multispectral camera.
International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 2022.
[153] Gil Gonçalves and Umberto Andriolo. Operational use of multispectral images for macro-litter mapping and
categorization by unmanned aerial vehicle. Marine Pollution Bulletin, 176:113431, 2022.
[154] Marco Balsi, Monica Moroni, Valter Chiarabini, and Giovanni Tanda. High-resolution aerial detection of marine
plastic litter by hyperspectral sensing. Remote Sensing, 13(8):1557, 2021.
[155] Nisha Maharjan, Hiroyuki Miyazaki, Bipun Man Pati, Matthew N Dailey, Sangam Shrestha, and Tai Nakamura.
Detection of river plastic using uav sensor data and deep learning. Remote Sensing, 14(13):3049, 2022.
[156] Jennifer Cocking, Bhavani E Narayanaswamy, Claire M Waluda, and Benjamin J Williamson. Aerial detection of
beached marine plastic using a novel, hyperspectral short-wave infrared (swir) camera. ICES Journal of Marine
Science, 79(3):648–660, 2022.
[157] Bijeesh Kozhikkodan Veettil, Dong Doan Van, Ngo Xuan Quang, and Pham Ngoc Hoai. Remote sensing of
plastic-covered greenhouses and plastic-mulched farmlands: Current trends and future perspectives. Land
Degradation & Development, 34(3):591–609, 2023.
[158] N. Levin, R. Lugassi, U. Ramon, O. Braun, and E. Ben-Dor. Remote sensing as a tool for monitoring plasticulture
in agricultural landscapes. International Journal of Remote Sensing, 28(1):183–202, 2007.
[159] Juan Jesús Roldán, Guillaume Joossen, David Sanz, Jaime Del Cerro, and Antonio Barrientos. Mini-uav based
sensory system for measuring environmental variables in greenhouses. Sensors, 15(2):3334–3350, 2015.
[160] Ronald Kemker, Carl Salvaggio, and Christopher Kanan. Algorithms for semantic segmentation of multispectral
remote sensing imagery using deep learning. ISPRS Journal of Photogrammetry and Remote Sensing, 2018.
[161] Samuel Domínguez-Cid, Julio Barbancho, Diego F Larios, FJ Molina, Ariel Gómez, and C León. In-field
hyperspectral imaging dataset of manzanilla and gordal olive varieties throughout the season. Data in Brief,
46:108812, 2023.
[162] Maxime Ryckewaert, Daphné Héran, Carole Feilhes, Fanny Prezman, Eric Serrano, Aldrig Courand, Silvia
Mas-Garcia, Maxime Metz, and Ryad Bendoula. Dataset containing spectral data from hyperspectral imaging
and sugar content measurements of grapes berries in various maturity stage. Data in Brief, 46:108822, 2023.