Synthetic Aperture Radar Image Classification Using Deep Learning
Abstract:- Powerful remote sensing tools like Synthetic Aperture Radar (SAR) can detect targets and classify land cover, as well as provide useful data for disaster monitoring and other uses. Deep learning methods have brought remarkable changes over the past few years to the precision and effectiveness of SAR image classification. To improve the precision and dependability of SAR image interpretation, this research paper presents a thorough investigation of SAR image classification utilising deep learning techniques, such as convolutional neural networks (CNNs) and recurrent models. We review the current state of the art, suggest a fresh approach, and indicate potential directions for future research. Our findings show that deep learning excels at classifying SAR images, laying the groundwork for more advanced remote sensing applications.

Keywords:- SAR, Deep Learning (DL), CNN.

I. INTRODUCTION

Synthetic Aperture Radar (SAR) has been used for remote sensing of the Earth for over 40-45 years; the first successful system was demonstrated in 1964 by the Jet Propulsion Laboratory. SAR gives high-resolution, day-and-night, weather-independent images for many applications, ranging from geographical science and climate change research to Earth system and environmental monitoring, 2-D and 3-D mapping, change detection, and security-related applications, all very different from one another. "Radar aperture" is another term for the opening of the antenna that lets radiation in; the aperture uses electromagnetic waves as its medium. "Radar" in SAR is an acronym for Radio Detection and Ranging. [1]

A. Working of SAR

SAR is a remote sensing technology that makes use of radar to create high-resolution images of the Earth's surface. It works by transmitting microwave signals towards the target area and then receiving the signals that are reflected. A simplified explanation of how SAR works is as follows:

Transmission of Radar Signals: SAR systems are typically mounted on satellites, aircraft, or other platforms. These generate microwave signals, in various bands, that are emitted towards the ground.

Signal Reflection: These radar signals interact with objects and the terrain on the Earth's surface. When the signals encounter an object or a change in the surface, such as a building, a tree, or the ground itself, some of the energy is bounced back towards the SAR system.

Receiving Signals: The SAR system has a sensitive antenna that collects the reflected signals, which are often referred to as "backscatter."

Measurement of Time Delay: SAR systems measure the time it takes for the transmitted signal to travel to the target and back. Knowing the speed of light, they calculate the distance to objects on the Earth's surface.

Synthetic Aperture: Unlike traditional radar, SAR has a synthetic aperture, which is created by moving the radar antenna along a known path. As the platform (e.g., a satellite) moves, it collects radar data from different positions. This motion effectively creates a larger antenna, allowing the formation of high-resolution images. [2]

Data Processing: The raw radar data collected from multiple positions is then processed to create a coherent image. Complex algorithms are used to account for the different positions of the antenna, the time delays, and the phase differences in the received signals.

Image Formation: The processed data is used to construct a detailed, high-resolution image of the Earth's surface. This image can reveal features such as buildings, vegetation, water bodies, and terrain with a very high level of detail. [3]

B. Applications of SAR

Because it produces high-resolution imagery, SAR is widely used across many fields. Its applications include, but are not limited to:

Environmental Monitoring: Mapping of land use and land cover. Deforestation and forest monitoring. Crop monitoring, crop classification, and yield estimation. Soil moisture and subsidence monitoring. Detection of land and coastal changes. Glacier monitoring and ice sheet studies.

Disaster Management and Response: Detection and monitoring of natural calamities, for example earthquakes, floods, and landslides. Estimation and assessment of damage done by disasters in affected areas. Monitoring of volcanic eruptions and ash cloud tracking. Detection of oil spills and monitoring of their spread.
Urban Planning and Infrastructure Monitoring: Analysis of urban growth and planning. Monitoring of infrastructure such as roads, bridges, and buildings for deformation and structural integrity. Identification of illegal construction and urban sprawl.

Coastal Applications: Detection of ships and vessels for maritime surveillance. Monitoring of sea ice, currents, and oil spills in oceans. Coastal zone management, including erosion and land subsidence monitoring.

Agriculture and Forestry: Monitoring of crop health, growth, and disease outbreaks. Forest inventory and canopy height estimation. Assessment of deforestation and illegal logging activities.

Archaeological and Cultural Heritage Preservation: Identification of archaeological sites and buried structures. Monitoring and preservation of historical monuments and cultural heritage sites.

Military and Defence: Reconnaissance and surveillance of enemy territory. Detection and tracking of moving targets, including vehicles and personnel. Mapping of terrain and assessment of the battlefield.

Geological Exploration: Detection of underground geological structures. Identification of mineral deposits and resources. Assessment of geological hazards like fault lines and sinkholes.

Scientific Research: Studies of Earth's crustal movements and tectonic plate dynamics. Monitoring of environmental changes in remote and inaccessible areas. Research on climate change and its impacts.

Search and Rescue Operations: Detection of distressed ships or aircraft at sea. Locating missing persons in wilderness or disaster-stricken areas.

Weather Forecasting: Monitoring and tracking of severe weather phenomena such as hurricanes and typhoons. [4]

Infrastructure Planning and Development: Planning of transportation networks, including road and railway construction. Monitoring of construction activities and infrastructure projects.
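The time-delay measurement described in Section A amounts to a simple relation: the slant range R follows from the round-trip delay t as R = c * t / 2. A minimal sketch of that calculation follows; the function name and the example delay value are illustrative assumptions, not taken from any SAR system in the paper.

```python
# Slant-range estimation from round-trip delay: R = c * t / 2.
# The factor of 2 accounts for the pulse travelling to the target and back.
C = 299_792_458.0  # speed of light in m/s

def slant_range(round_trip_delay_s: float) -> float:
    """Distance to the target, given the two-way travel time of the pulse."""
    return C * round_trip_delay_s / 2.0

# Illustrative value: a pulse returning after ~5.24 ms corresponds to a
# range of roughly 785 km, typical of a low-Earth-orbit SAR satellite.
delay = 5.24e-3
print(f"slant range = {slant_range(delay) / 1000:.1f} km")
```

Real SAR processors of course operate on raw phase and timing data rather than a single delay, but the geometry above is what the "Measurement of Time Delay" step relies on.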
The output of the pooling layers is then applied to one or more fully connected layers to forecast or categorise the image. [5]

Unsupervised learning techniques can be used when working with unlabeled data. Using autoencoders is one of the most popular methods for accomplishing this. This allows the computations of the initial section of the CNN to be carried out while fitting the data into a low-dimensional space. After doing this, the data must be reconstructed using additional layers that up-sample it.

Convolutional Neural Networks (CNNs) work by applying several operations and layers to the input data, typically images, to automatically learn and extract features and patterns at different levels of abstraction. Here is how CNNs work in more detail:

Input Layer: An image (or a stack of images) is the input to a CNN. A grid of pixel values serves as the image's representation.

Convolutional Layers: The convolutional layer is the foundation of a CNN. Within this layer, small filters (also called kernels) slide or "convolve" over the input image. Each filter detects specific features or patterns (edges, textures, shapes) by performing element-wise multiplications and summations.

Activation Function: After each convolution, an activation function (typically ReLU, the Rectified Linear Unit) is applied element-wise to introduce non-linearity. This helps the network learn complex patterns and relationships.

Pooling Layers: The pooling layer comes next. It shrinks, or down-samples, a given feature map. Additionally, by lowering the number of parameters the network must process, it makes processing significantly faster. The result is a pooled feature map.

Fully Connected Layers: After several convolutional and pooling layers, one or more dense, fully connected layers are typically added. These layers flatten the high-level features and perform classification or regression tasks.

Learnable Parameters: All layers of the CNN have learnable parameters. The convolutional layers have filter weights, and the fully connected layers have weights and biases. During the training phase, these parameters are learned via backpropagation and optimisation methods like gradient descent.

Loss Function: A loss function calculates the discrepancy between the predicted output and the target (ground truth).

Backpropagation: The parameters of the network are updated in the direction opposite to the gradients of the loss with respect to those parameters.

Training: CNNs are trained on a labelled dataset. The network learns to recognize features that are relevant to the task it is designed for. For example, in image classification, it learns to recognize features that distinguish different objects or categories.

Inference: After training, the CNN can make predictions on new, unseen data. It passes the input through the layers, and the final layer's output represents the network's prediction.

IV. ISSUES AND CHALLENGES

After studying and reviewing the literature on the application of SAR images, one major problem faced by many users is the limited availability of labelled SAR training data, especially at high resolution. The biggest dataset freely available is the MSTAR dataset, but it contains only a limited set of military targets. Many researchers have devised their own ways of combining available data, but that turns out to be expensive. Many techniques can be applied to increase efficiency, but a reasonably sized dataset is still required.

V. FUTURE SCOPE AND CONCLUSION

This paper surveys the most recent deep-learning-based approaches to SAR image classification. In many practical scenarios, the classification of SAR images is the crucial final step in their interpretation. Given the remarkable progress obtained using deep learning in a variety of computer vision tasks, we conducted an investigation into the latest approaches utilizing deep learning techniques for SAR image perception. This investigation shed light on the specific architectural choices made in each method, as well as the configurations and parameters employed. Nevertheless, a pressing challenge associated with SAR image classification is the occurrence of misclassifications. This issue remains unresolved, and addressing it is of paramount importance, as the misinterpretation of targets can result in misinformation in real-world applications. Our study also delved into the strengths and weaknesses of each of these approaches, providing a comprehensive overview. In the future, an expansion of the dataset with more labelled samples is required. [6]

REFERENCES

[1]. C. W. Sherwin, J. P. Ruina and R. D. Rawcliffe, "Some early developments in synthetic aperture radar systems", IRE Trans. Military Electronics, vol. MIL-6, pp. 111-115, Apr. 1962.
[2]. J. V. Evans and T. Hagfors, Radar Astronomy, New York: McGraw-Hill, 1968.
[3]. A. Gulli and S. Pal, Deep Learning with Keras, Packt Publishing Ltd, 2017.
[4]. C. Elachi, Spaceborne Radar Remote Sensing: Applications and Techniques, New York: IEEE, 1988.
[5]. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, et al., "ImageNet Large Scale Visual Recognition Challenge", International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[6]. N. Aafaq, A. Mian, W. Liu, S. Z. Gilani and M. Shah, "Video description", ACM Computing Surveys, vol. 52, no. 6, pp. 1-37, Jan. 2020.