
sensors

Article
Target Recognition of SAR Images via Matching
Attributed Scattering Centers with Binary
Target Region
Jian Tan 1,2 , Xiangtao Fan 1,2 , Shenghua Wang 3, * and Yingchao Ren 2
1 Hainan Key Laboratory of Earth Observation, Sanya 572029, China; [email protected] (J.T.);
[email protected] (X.F.)
2 Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of
Sciences, Beijing 100094, China; [email protected]
3 School of Public Administration and Mass Media, Beijing Information Science and Technology University,
Beijing 100093, China
* Correspondence: [email protected]

Received: 25 July 2018; Accepted: 5 September 2018; Published: 10 September 2018

Abstract: A target recognition method of synthetic aperture radar (SAR) images is proposed via
matching attributed scattering centers (ASCs) to binary target regions. The ASCs extracted from the
test image are predicted as binary regions. In detail, each ASC is first transformed to the image domain
based on the ASC model. Afterwards, the resulting image is converted to a binary region segmented
by a global threshold. All the predicted binary regions of individual ASCs from the test sample are
mapped to the binary target regions of the corresponding templates. Then, the matched regions
are evaluated by three scores, which are combined into a similarity measure via score-level fusion.
In the classification stage, the target label of the test sample is determined according to the fused
similarities. The proposed region matching method avoids the conventional ASC matching problem,
which involves the assignment of ASC sets. In addition, the predicted regions are more robust
than the point features. The Moving and Stationary Target Acquisition and Recognition (MSTAR)
dataset is used for performance evaluation in the experiments. According to the experimental results,
the method in this study outperforms some traditional methods reported in the literature under
several different operating conditions. Under the standard operating condition (SOC), the proposed
method achieves very good performance, with an average recognition rate of 98.34%, which is higher
than the traditional methods. Moreover, the robustness of the proposed method is also superior to the
traditional methods under different extended operating conditions (EOCs), including configuration
variants, large depression angle variation, noise contamination, and partial occlusion.

Keywords: synthetic aperture radar (SAR); target recognition; attributed scattering center (ASC);
region matching; score fusion

1. Introduction
Owing to the merits of synthetic aperture radar (SAR), interpreting high-resolution SAR images is
becoming an important task for both military and civilian applications. As a key step of SAR interpretation,
automatic target recognition (ATR) techniques are employed to decide the target label in an unknown
image [1]. Typically, a general SAR ATR method comprises two parts: feature extraction and a
decision engine. The former tries to obtain low-dimensional representations from the original images
while conveying the original discrimination capability. In addition, the high dimensionality of the original
image is reduced significantly, which helps improve the efficiency of the following classification. Different
kinds of features are adopted or designed for SAR target recognition in the previous literature.

Sensors 2018, 18, 3019; doi:10.3390/s18093019 www.mdpi.com/journal/sensors

The features describing the physical structures or shape of the target are extracted for SAR ATR, e.g., binary
target region [2–4], target outline [5,6], target’s radar shadow [7], local texture [8], etc. Park et al. [2] design
several descriptors from the binary target region for SAR target recognition. In [3], a SAR ATR method
is designed through the matching of binary target regions, where the region residuals are processed
by the binary morphological operations to enhance divergences between different classes. In [4], the
binary target region is first described by the Zernike moments, and a support vector machine (SVM) is
employed for classification afterwards. The target outline is taken as the discriminative feature in [5],
which is approached by the Elliptical Fourier Series (EFS). Then, SVM is used to classify the outline
descriptors. Yuan et al. use the local gradient ratio pattern to describe SAR images with application to
target recognition [8]. The projection features are also prevalent in SAR ATR. Principal component analysis
(PCA) [9], linear discriminant analysis (LDA) [9], and non-negative matrix factorization (NMF) [10] are
often used to extract the projection features. Based on the idea of manifold learning, other projection
features are designed to exploit the properties of the training samples [11–13]. In the high frequency area,
the total backscattering of a whole target can be regarded as the summation of several individual scattering
centers [14]. In this way, the scattering center features are discriminative for SAR target recognition. Several
SAR ATR methods have been proposed using the attributed scattering centers (ASCs) which achieve
good effectiveness and robustness [15–19]. In the classification stage, the classifiers (decision engines) are
adopted or designed according to the properties of the extracted features. For features with unified forms,
e.g., feature vectors extracted by PCA, classifiers like SVM [4,5,20,21], adaptive boosting (AdaBoost) [22],
sparse representation-based classification (SRC) [21,23,24], etc., can be directly used for classification
tasks. The deep learning method, i.e., convolution neural network (CNN), is also demonstrated to be
notably effective for image interpretation [25–29]. In CNN, the hierarchical deep features are learned by
the convolution layers with a softmax classifier to perform the multi-class regression at the end. However,
for features with no specific orders, e.g., ASCs, the former classifiers cannot be directly employed for
classification. Usually, a similarity measure between these features is defined [16–18]. Afterwards,
the target label is assigned as the template class achieving the maximum similarity.
This paper proposes an efficient and effective method for SAR ATR via matching ASCs with
binary target regions. In previous works [16–18] using ASCs for SAR ATR, a complex one-to-one
correspondence is often built for the following similarity evaluation. In [16], Chiang et al. solve the
assignment problem between two ASC sets using the Hungarian algorithm and evaluate the similarity
as the posterior probability. Ding et al. exploit the line and triangle structures in the ASC set during the
similarity evaluation based on the one-to-one correspondences between two ASC sets [17,18]. However,
it is still a difficult and complex task to precisely build the correspondence between the ASCs, for the
following reasons [30]. First, there are always missed detections or false alarms caused by the extended
operating conditions (EOCs) such as occlusion, noise, etc. Second, the ASCs cannot be extracted without
errors, and these extraction errors also cause problems. Lastly, as point features, the ASCs lack high
stability, especially because SAR images change greatly with variations in the target azimuth [31].
As a remedy, in this study, each of the extracted ASCs from the test image is represented by a binary
region. In detail, the backscattering field of the ASC is first calculated based on the ASC model,
and then transformed to the image domain. Afterwards, a global threshold is used to segment the
reconstructed images of individual ASCs as binary regions. In the image domain, the spatial positions
of the ASCs can be intuitively observed. For ASCs with higher amplitudes, they tend to produce
regions with larger areas because their images contain more pixels with high intensities. In addition,
the distributed ASCs with lengths could also maintain their attributes at proper thresholds. Hence,
the predicted binary regions actually embody the attributes of the ASCs such as the spatial positions,
relative amplitudes, and lengths. The binary regions of individual ASCs are matched to the extracted
binary target region from the corresponding template samples. The overlap and differences during
the region matching reflect the correlations between the test image and corresponding templates from
various classes. Based on the region matching results, three matching scores are defined. To combine
the strengths of different scores, a score-level fusion is performed to obtain a unified similarity. Finally,
the target label is determined according to the calculated similarities.
In the remainder of this study, we do the following: in Section 2, we introduce the extraction of
binary target region and ASCs. The main methodology of matching ASCs with the binary target region
is presented in Section 3. In Section 4, experiments are conducted on the Moving and Stationary Target
Acquisition and Recognition (MSTAR) dataset. Finally, in Section 5, we draw conclusions according to
the experimental results, and outline some future work.

2. Extraction of Binary Target Region and ASCs

2.1. Target Segmentation


We first obtain the binary target region using the target segmentation algorithm. In this study, the
detailed target segmentation algorithm consists of the following steps:
(1): Equalize the original image intensities into the range of 0 to 1 by the standard histogram
equalization algorithm [32].
(2): Perform mean filtering on the equalized image with a 3 × 3 kernel [32].
(3): Preliminarily segment the “smoothed” image using the normalized threshold of 0.8.
(4): Remove false alarms caused by noises using the Matlab “bwareaopen” function, which is
capable of removing regions with only a few pixels.
(5): Perform the binary morphological closing operation [32] to fill the possible holes and connect
the target region.
Figure 1 illustrates the target segmentation procedure on a SAR image of a BMP2 tank from the
MSTAR dataset, shown in Figure 1a. The equalized and smoothed images from Step 1 and Step 2
are displayed in Figure 1b,c, respectively. After the preliminary segmentation, the result is shown
in Figure 1d, which contains some false alarms caused by noises or clutters. The threshold of 0.8 is
chosen mainly according to repeated observations at different thresholds, as well as by referring to
previous works [22]. The pixel number for the “bwareaopen” function is set to 20; thus, isolated
regions with fewer than 20 pixels are eliminated, giving the result in Figure 1e. The morphological
closing operation is conducted using the 7 × 7 diamond structuring element shown in Figure 2.
Finally, the intact binary region is obtained, as shown in Figure 1f. The binary target region describes
the physical structures and geometrical properties of the target. Actually, it is a continuous region
connecting the images of individual scattering centers on the target. From this aspect, the binary target
region can be used as the reference for ASC matching.
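As a rough sketch, the five segmentation steps can be reproduced with NumPy/SciPy. The function name, the rank-based stand-in for histogram equalization, and the hand-built diamond element are our assumptions, not the paper's code:

```python
import numpy as np
from scipy import ndimage

def segment_target(img, thresh=0.8, min_pixels=20):
    """Sketch of the five-step target segmentation (Section 2.1)."""
    # Step 1: equalize intensities into [0, 1] (rank-based approximation
    # of histogram equalization).
    ranks = img.ravel().argsort().argsort()
    eq = (ranks / (img.size - 1)).reshape(img.shape)
    # Step 2: 3 x 3 mean filtering.
    smooth = ndimage.uniform_filter(eq, size=3)
    # Step 3: preliminary segmentation with the normalized threshold.
    binary = smooth > thresh
    # Step 4: drop isolated regions with fewer than min_pixels pixels
    # (the role of Matlab's "bwareaopen").
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    # Step 5: morphological closing with a 7 x 7 diamond structuring element.
    yy, xx = np.mgrid[-3:4, -3:4]
    diamond = (np.abs(xx) + np.abs(yy)) <= 3
    return ndimage.binary_closing(keep, structure=diamond)
```

On a synthetic image with a bright block over Rayleigh clutter, the returned mask keeps the block and discards scattered clutter pixels.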
Figure 1. Illustration of the target segmentation algorithm: (a) original SAR image of BMP2 tank;
(b) equalized image; (c) smoothed image after mean filtering; (d) preliminary segmentation result;
(e) result after the opening operation; (f) result after the closing operation.

Figure 2. The structuring element used in the closing operation.

2.2. ASC Extraction

2.2.1. ASC Model
SAR images reflect the target’s electromagnetic characteristics in the high frequency region, which
can be quantitatively modeled as a summation of local properties, i.e., scattering centers [14]. The target’s
backscattering field can be expressed as follows:

E(f, φ; θ) = Σ_{i=1}^{K} E_i(f, φ; θ_i)    (1)

In Equation (1), f and φ denote the frequency and aspect angle, respectively. K is the number of
the ASCs in the radar measurement. For a single ASC, its backscattering field can be calculated
according to the ASC model [14] as Equation (2):

E_i(f, φ; θ_i) = A_i · (j f/f_c)^{α_i} · exp(−j(4π f/c)(x_i cos φ + y_i sin φ))
              · sinc((2π f/c) L_i sin(φ − φ_i)) · exp(−2π f γ_i sin φ)    (2)
where c denotes the propagation velocity of the electromagnetic wave and θ = {θ_i} =
[A_i, α_i, x_i, y_i, L_i, φ_i, γ_i] (i = 1, 2, ..., K) represents the attribute set of all the ASCs in a SAR image.
In detail, for the ith ASC, A_i is the complex amplitude; (x_i, y_i) denote the spatial positions; α_i is
the frequency dependence; for a distributed ASC, Li and φi represent the length and orientation,
respectively; and γi denotes the aspect dependence of a localized ASC.
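To make Eq. (2) concrete, it can be evaluated numerically. The center frequency f_c = 9.6 GHz and all function/argument names are our assumptions; note NumPy's sinc is normalized (np.sinc(u) = sin(πu)/(πu)):

```python
import numpy as np

def asc_field(f, phi, A, alpha, x, y, L=0.0, phi_bar=0.0, gamma=0.0, fc=9.6e9):
    """Backscattering field of a single ASC per Eq. (2).
    f: frequencies (Hz); phi: aspect angles (rad); broadcastable arrays."""
    c = 3e8  # propagation velocity of the electromagnetic wave
    freq_dep = A * (1j * f / fc) ** alpha                     # frequency dependence
    position = np.exp(-1j * 4 * np.pi * f / c
                      * (x * np.cos(phi) + y * np.sin(phi)))  # spatial position
    # np.sinc(2 f L sin(.) / c) equals the unnormalized sinc term of Eq. (2).
    length = np.sinc(2 * f * L * np.sin(phi - phi_bar) / c)   # distributed ASC
    aspect = np.exp(-2 * np.pi * f * gamma * np.sin(phi))     # localized ASC
    return freq_dep * position * length * aspect
```

For a localized ASC (L = 0, γ = 0, α = 0) the field reduces to A times a unit-modulus phase term, so its magnitude equals |A| at every (f, φ).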

2.2.2. ASC Extraction Based on Sparse Representation


The characteristics of a single SAR image can be approximated by only a few ASCs, so the ASCs
to be extracted are actually sparse in the model-parameter domain. By discretizing the parameter
space, an overcomplete dictionary can be formed [33,34]. Therefore, sparse representation can be employed
to estimate the ASC parameters. The ASC model in Equation (1) is first expressed as Equation (3).

s = D (θ ) × σ (3)

where s is obtained by reformulating the 2-D measurement E( f , φ; θ ) into a vector; D (θ ) represents the
overcomplete dictionary. In detail, each column of D (θ ) stores the vector form of the electromagnetic
field of one element in the parameter space θ; σ denotes a sparse vector and each element in it represents
the complex amplitude A.
In practical situations, the noises and possible model errors should also be considered. Therefore,
Equation (3) is reformulated as follows:

s = D (θ ) × σ + n (4)

In Equation (4), n denotes the error term, which is modeled as a zero-mean additive white
Gaussian process. Afterwards, the attributes of the ASCs can be estimated as follows:

σ̂ = argmin_σ ‖σ‖_0, s.t. ‖s − D(θ) × σ‖_2 ≤ ε    (5)

In Equation (5), ε = ‖n‖_2 represents the noise level; ‖•‖_0 denotes the l0-norm, and σ̂ is the estimated
complex amplitudes with respect to the dictionary D(θ). As a nondeterministic polynomial-time
hard (NP-hard) problem, the sparse representation problem in Equation (5) is computationally
difficult to solve. As a remedy, some greedy methods, e.g., the orthogonal matching pursuit (OMP),
are available [33,34]. Algorithm 1 illustrates the detailed procedure of ASC extraction based on
sparse representation.

Algorithm 1 ASC Extraction based on Sparse Representation


Input: The vectorized SAR image s, noise level ε, and overcomplete dictionary D (θ ).
Initialization: Initial parameters of the ASCs θ̂ = ∅, residual r = s, counter t = 1.
1. while ‖r‖_2^2 > ε do
2. Calculate correlation: C(θ) = D^H(θ) × r, where (•)^H represents the conjugate transpose.
3. Estimate parameters: θ̂_t = argmax_θ C(θ), θ̂ = θ̂ ∪ θ̂_t.
4. Estimate amplitudes: σ̂ = D † (θ̂ ) × s, where (•)† represents the Moore-Penrose pseudo-inverse, D (θ̂ )
denotes the overcomplete dictionary from the parameter set θ̂.
5. Update residual: r = s − D (θ̂ ) × σ̂.
6. t = t + 1
Output: The estimated parameters set θ̂.
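A minimal NumPy sketch of Algorithm 1 (the function name and the `max_iter` guard are ours; a full implementation would map the selected atom indices back to ASC parameters θ̂):

```python
import numpy as np

def omp_extract(s, D, eps, max_iter=50):
    """Sketch of Algorithm 1 via orthogonal matching pursuit.
    s: vectorized measurement; D: overcomplete dictionary with unit-norm
    columns; eps: stopping level for the squared residual norm."""
    r = s.astype(complex)
    support, sigma = [], np.zeros(0, dtype=complex)
    for _ in range(max_iter):                   # guard against stalling
        if np.linalg.norm(r) ** 2 <= eps:       # step 1: stopping rule
            break
        corr = D.conj().T @ r                   # step 2: C = D^H r
        k = int(np.argmax(np.abs(corr)))        # step 3: strongest atom
        if k not in support:
            support.append(k)
        Dsub = D[:, support]
        sigma = np.linalg.pinv(Dsub) @ s        # step 4: pseudo-inverse amplitudes
        r = s - Dsub @ sigma                    # step 5: update the residual
    return support, sigma
```

With an orthonormal dictionary the selected atoms and least-squares amplitudes recover a sparse signal exactly.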

3. Matching ASCs with Binary Target Region

3.1. Region Prediction by ASC
As point features, the matching of two ASC sets is a complex and difficult task, as analyzed in
previous research [30]. As a remedy, in this study, the extracted ASCs from the test image are
represented as binary regions using a thresholding method. The backscattering field of each ASC is
first calculated based on the ASC model in Equation (2). Afterwards, the imaging process is performed
to transform the backscattering field to the image domain. In this study, the imaging process is
consistent with the MSTAR images, including zero-padding, windowing (a −35 dB Taylor window),
and the 2D fast Fourier transform (FFT). The detailed operating parameters of MSTAR SAR images can
be found in [32]. Denoting the maximum intensity of the images from individual ASCs as m, the
global threshold for region prediction is set to be m/α, where α is a scale coefficient larger than 1.
Figure 3 shows the predicted binary regions of three ASCs with different amplitudes at α = 30. The
images from ASCs with higher amplitudes tend to have higher pixel intensities, as shown in Figure 3a
(from left to right). Their predicted binary regions are shown in Figure 3b, correspondingly. It shows
that the stronger ASCs produce binary regions with larger areas. Figure 4 shows the predicted binary
region of a distributed ASC. As shown, the length of the distributed ASC can be maintained in the
predicted region at the proper threshold. Therefore, the predicted binary region can effectively convey
the discriminative attributes of the original ASC, such as spatial positions, relative amplitudes, and
lengths. Figure 5 illustrates the target’s image reconstructed by all the extracted ASCs, as well as
the predicted regions. Figure 5a shows a SAR image of a BMP2 tank. The ASCs of the original image
are extracted based on sparse representation and used to reconstruct the target’s image, as shown
in Figure 5b. The reconstruction result shows that the extracted ASCs can remove the background
interference, while the target’s characteristics can be maintained. Figure 5c shows the overlap of all
the predicted regions. Clearly, the predicted regions can convey the geometrical shape and scattering
center distribution of the original image.

Figure 3. Images and binary regions of ASCs with different amplitudes: (a) images; (b) binary regions.
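As an illustrative sketch (ignoring the Taylor windowing and zero-padding for brevity), the region prediction amounts to imaging one ASC's field and thresholding at m/α; the helper name and the reading of m as a maximum shared across the ASC images are ours:

```python
import numpy as np

def predict_region(E, m=None, alpha=30.0):
    """Image one ASC's 2-D frequency-domain field E, then threshold at m/alpha.
    m: shared maximum intensity across a target's ASC images (our reading of
    Section 3.1); if None, this image's own peak is used."""
    img = np.abs(np.fft.fftshift(np.fft.ifft2(E)))  # transform to image domain
    if m is None:
        m = img.max()
    return img > m / alpha                          # global threshold m / alpha
```

With a shared m, stronger ASCs keep more pixels above the fixed threshold, reproducing the larger-area behavior seen in Figure 3; a pure point scatterer yields a single-pixel region.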

Figure 4. Image and binary region of a distributed ASC: (a) image; (b) binary region.

Figure 5. Illustration of ASC extraction and region prediction: (a) original image; (b) reconstructed
image using ASCs; (c) overlap of all the predicted regions.

3.2. Region Matching

The predicted regions of individual ASCs are mapped to the target region from the corresponding
template samples. It is assumed that the template samples are always obtained in some cooperative
conditions. Hence, the template images contain the properties of the intact target at high signal-to-noise
ratios (SNR). The detailed steps of the region matching between the test sample and its corresponding
template sample can be summarized as follows:
Step 1: The extracted ASCs from the test sample are converted to binary regions according
to Section 3.1.
Step 2: Map each of the predicted regions onto the binary target region from the corresponding
template sample.
Step 3: The overlapped region between all the predicted regions and the binary target region
reflects the correlation between the test and template samples, while the unmatched regions represent
their differences.
Figure 6 displays the results of the region matching between the predicted regions of the BMP2
SAR image in Figure 5a and the binary target regions from the template samples of the BMP2, T72,
and BTR70 targets in the MSTAR dataset. The white regions represent the overlap between the
predicted regions of the test ASCs and the binary target region from the corresponding templates,
whereas the grey regions reflect their differences. Clearly, the region overlap with the correct class
has a much larger area than those of the incorrect classes. Three scores are defined to evaluate the
matching results, as follows.
G1 = M/N,  G2 = RM/Rt,  G3 = RM/RN    (6)

where N is the number of predicted regions, i.e., the number of all the extracted ASCs. M
denotes the number of predicted regions which are assumed to be matched with the template’s
target region. RM denotes the total area of all the matched regions; RN and Rt are the areas of
all the predicted regions and binary target region, respectively. For a predicted region, it is judged
to be matched only if the overlap between itself and the template’s binary region is larger than half
of its area.
To combine the advantages of the three scores, a linear fusion algorithm is performed to obtain
the overall similarity as Equation (7) [35].

S = ω1G1 + ω2G2 + ω3G3    (7)

where ω1, ω2, and ω3 denote the weights; S represents the fused similarity. With little prior information
on which score is more important, equal weights are assigned to the three scores in this study, i.e.,
ω1 = ω2 = ω3 = 1/3.
Figure 6. Region matching results between a BMP2 image and three template classes: (a) BMP2;
(b) T72; (c) BTR70.

3.3.
3.3.Target
TargetRecognition
Recognition
The proposed matching scheme for the extracted ASCs and binary target region is performed with application to SAR target recognition. The basic procedure of our method is illustrated in Figure 7, which can be summarized as follows.
(1) The ASCs of the test image are estimated and predicted as binary regions.
(2) The azimuth of the test image is estimated to select the corresponding template images.
(3) Extract the binary target regions of all the selected template samples.
(4) Match the predicted regions to each of the template regions and calculate the similarity.
(5) Decide the target label to be the template class which achieves the maximum similarity.
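The five steps can be sketched as follows. The helper names and data layout are illustrative stand-ins for the components described above (the ASC extraction, region prediction, and azimuth estimation of steps (1)-(2) are assumed to have been run already, and similarity() is the fused region-matching score):

```python
import numpy as np

def recognize(predicted_regions, azimuth, template_db, similarity, tol=3):
    """Steps (3)-(5): match the predicted regions of the test image against
    the binary regions of azimuth-selected templates and take the argmax.

    template_db maps each class label to a list of (azimuth, binary_region)
    pairs; tol is the half-width of the azimuth selection window in degrees.
    """
    scores = {}
    for label, samples in template_db.items():
        # (3)-(4) select templates near the estimated azimuth and match them
        sims = [similarity(predicted_regions, region)
                for az, region in samples if abs(az - azimuth) <= tol]
        scores[label] = float(np.mean(sims)) if sims else 0.0
    # (5) the class achieving the maximum average similarity gives the label
    return max(scores, key=scores.get)
```

Averaging over the selected templates of each class mirrors the paper's use of the average similarity from all candidate template samples.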
Specifically, the azimuth estimation algorithm in [22] is used, which also uses the binary target region. So, it can directly operate on the target region from Section 2 to obtain the estimated azimuth. The estimation precision of the method is about ±5°. Accordingly, in this study, the template samples with azimuths in the interval of [−3°: 1°: 3°] around the estimated one are used as the potential templates. In addition, to overcome the 180° ambiguity, the template selection is performed on the estimated azimuth and its 180° symmetric one, and the average of the similarities from all the candidate template samples is adopted as the final similarity for target recognition. The scale coefficient to determine the global threshold is set as α = 30 according to the experimental observations for parameter selection.
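The template selection described above (the [−3°: 1°: 3°] interval around the estimate plus its 180° symmetric counterpart) can be sketched as a small helper; wrapping all angles to [0, 360) is our assumption:

```python
def candidate_azimuths(estimated, step=1, halfwidth=3):
    """Return the template azimuths considered for one test image: the
    interval around the estimated azimuth and around its 180-degree
    symmetric angle, to overcome the 180-degree ambiguity."""
    offsets = range(-halfwidth, halfwidth + 1, step)
    base = [(estimated + d) % 360 for d in offsets]
    symmetric = [(estimated + 180 + d) % 360 for d in offsets]
    return base + symmetric
```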
Sensors 2018, 18, 3019

[Figure 7 flowchart: the test image undergoes ASC extraction and region prediction; the estimated azimuth selects the binary regions of Target 1, Target 2, ..., Target C from the template database; region matching yields the similarities, and the maximum similarity determines the target type.]
Figure 7. The basic procedure of target recognition.
4. Experiment on MSTAR Dataset

4.1. Experimental Setup



4.1.1. MSTAR Dataset
The widely used benchmark dataset for evaluating SAR ATR methods, i.e., the MSTAR dataset, is adopted for experimental evaluation in this paper. The dataset is collected by the Sandia National Laboratory airborne SAR sensor platform, working at X-band with HH polarization. There are ten classes of ground targets with approaching physical sizes, whose names and optic images are presented in Figure 8. The collected SAR images have resolutions of 0.3 m × 0.3 m. The detailed template and test sets are given in Table 1, where samples from 17° depression angle are adopted as the templates, whereas images at 15° are classified.

Figure 8. Optic images of the ten targets to be classified.




Table 1. The template/training and test sets for the experiments.

Template/Training Set Test Set


Target Serial Depr. Number Depr. Number
9563 17◦ 233 15◦ 195
BMP2 9566 17◦ 232 15◦ 196
c21 17◦ 233 15◦ 196
BTR70 c71 17◦ 233 15◦ 196
132 17◦ 232 15◦ 196
T72 812 17◦ 231 15◦ 195
S7 17◦ 228 15◦ 191
ZSU23/4 D08 17◦ 299 15◦ 274
ZIL131 E12 17◦ 299 15◦ 274
T62 A51 17◦ 299 15◦ 273
BTR60 k10yt7532 17◦ 256 15◦ 195
D7 92v13015 17◦ 299 15◦ 274
BDRM2 E71 17◦ 298 15◦ 274
2S1 B01 17◦ 299 15◦ 274
Depr. is abbreviation of “depression angle”; the picture of each target is given in Figure 8.

4.1.2. Reference Methods


In order to reflect the merits of the proposed method, several prevalent SAR target recognition
methods are taken as the references, as described in Table 2. For the SVM method, the classifier is
performed by the LIBSVM package [36] on the feature vectors extracted by PCA, whose dimensionality
is set to be 80 according to previous works [21,24]. In SRC, the OMP algorithm is chosen to resolve
the sparse representation tasks of the 80-dimension PCA features. The A-ConvNet is taken as a representative SAR ATR method using CNN. The network designed in [25] is used for training and
testing based on the original image intensities. The target recognition method based on ASCs in [28]
is compared, in which a similarity measure between two ASC sets is formed for target recognition.
The region matching method in [3] is also compared. The target region of the test sample is matched
with the regions from different classes of templates and the similarities are calculated to determine the
target label. All the methods are implemented on a PC with Intel i7 (Intel, Hanoi, Vietnam) 3.4 GHz
CPU and 8 GB RAM.
In the following tests, we first perform the experiment to classify the ten targets under SOC.
Then, several EOCs including the configuration variants, large depression angle variation, noise
contamination, and partial occlusion, are used for further evaluation of the performance of our method.
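The SVM baseline above can be sketched with scikit-learn, used here in place of the LIBSVM package the authors call directly (sklearn's SVC wraps the same library); flattening the image intensities before the 80-dimension PCA projection is our assumption about the feature pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def pca_svm_baseline(train_images, train_labels, test_images, n_components=80):
    """Reference SVM method: project flattened image intensities to an
    n_components-dimensional PCA feature vector and classify with an SVM."""
    X_train = np.asarray([im.ravel() for im in train_images])
    X_test = np.asarray([im.ravel() for im in test_images])
    clf = make_pipeline(PCA(n_components=n_components), SVC())
    clf.fit(X_train, train_labels)
    return clf.predict(X_test)
```

The SRC baseline differs only in the classifier applied to the same 80-dimension PCA features.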

Table 2. Methods to be compared with the proposed one.

Method Feature Classifier Reference


SVM Feature vector from PCA SVM [20]
SRC Feature vector from PCA SRC [24]
A-ConvNet Image intensities CNN [28]
ASC Matching ASCs One-to-one matching [18]
Region Matching Binary target region Region matching [3]

4.2. Experiment under SOC


At first, the recognition task is conducted under SOC based on the template and test sets in Table 1.
Specifically, for BMP2 and T72 with three configurations, only “9563” for BMP2 and “132” for T72
are used in the template samples. Table 3 displays the confusion matrix of our method on the ten
targets, in which the percentage of correct classification (PCC) of each class is illustrated. Clearly,
the PCCs of these targets are over 96%, and the average PCC is calculated to be 98.34%. Table 4
displays the average PCCs, as well as the time consumption (for classifying one MSTAR image) of all
the methods. Our method achieves the highest PCC, indicating its effectiveness under SOC. Although
CNN is demonstrated to be effective for SAR ATR, it cannot work well if the training samples are
insufficient. In this experimental setup, there are some configuration variants between the template
and test sets of BMP2 and T72. As a result, the performance of A-ConvNet cannot rival the proposed
method. Compared with the ASC Matching and Region Matching methods, our method performs
much better, indicating that the classification scheme in this study can better make use of ASCs and
target region to enhance the recognition performance. As for the time consumption, the classifiers
like SVM, SRC, and CNN perform more efficiently than the proposed method because of the unified
form of the features used in these methods. The ASC matching consumes the most time because
it involves complex one-to-one matching between ASC sets. Compared with the region matching
method in [3], the proposed method is relatively more efficient. The method in [3] needs to process the
region residuals between two binary target regions, which is more time-consuming than the proposed
region matching method.

Table 3. Confusion matrix of the proposed method on the ten targets under SOC.

Target BMP2 BTR70 T72 T62 BDRM2 BTR60 ZSU23/4 D7 ZIL131 2S1 PCC (%)
BMP2 553 6 7 0 0 3 2 0 1 2 96.44
BTR70 0 196 0 0 0 0 0 0 0 0 100
T72 6 4 562 0 0 0 2 5 3 0 96.73
T62 0 0 0 274 0 0 0 0 0 0 100
BDRM2 0 0 0 0 274 0 0 0 0 0 100
BTR60 1 0 0 0 0 193 0 1 0 0 98.94
ZSU23/4 1 0 0 1 1 0 269 0 1 1 98.16
D7 1 0 0 0 0 1 0 271 0 1 98.88
ZIL131 0 2 0 0 1 0 2 0 269 0 98.18
2S1 0 4 0 0 0 1 0 0 0 269 98.18
Average 98.34
PCC: percentage of correct classification.

Table 4. Average PCCs of all the methods under the standard operating condition.

Method Proposed SVM SRC A-ConvNet ASC Matching Region Matching
PCC (%) 98.34 95.66 94.68 97.52 95.30 94.68
Time consumption (ms) 75.8 55.3 60.5 63.2 125.3 88.6

4.3. Experiment under EOCs


The template/training samples are usually collected or simulated under some cooperative
conditions. EOCs refer to those conditions that occur in the test samples but are not included in the
template/training set, e.g., configuration variants, depression angle variance, noise contamination, etc.
It is desirable that the ATR methods work robustly under different types of
EOCs. In the following paragraphs of this subsection, we evaluate the proposed method under several
typical EOCs.

4.3.1. EOC 1-Configuration Variants


The ground military target often has different configurations. Figure 9 shows four different
configurations of a T72 tank, which have some local structural modifications. In practical
applications, the configurations of the test samples may not be included in the template set. Table 5
lists the template and test samples for the experiment under configuration variants. The configurations
of BMP2 and T72 to be classified are different from their counterparts in the template sets. Table 6 displays
the classification results of different configurations by our method. The test configurations can be

recognized with PCCs higher than 96%, and the average PCC is calculated to be 98.64%. Table 7 compares the average PCCs of different methods under configuration variants. The proposed method works most robustly under configuration variants, with the highest average PCC. Targets of different configurations share similar physical sizes and shapes with some local modifications. In this case, the target region and local descriptors can provide more robustness than global features, such as image intensities or PCA features; that is why the ASC Matching and Region Matching methods outperform the SVM, SRC, and CNN methods in this situation.

Table 5. Template and test sets with configuration variants.

Depr. BMP2 BDRM2 BTR70 T72
Template set 17° 233 (9563) 298 233 232 (132)
Test set 15°, 17° 428 (9566), 429 (c21) 0 0 426 (812), 573 (A04), 573 (A05), 573 (A07), 567 (A10)
Table 6. Classification results of different configurations of BMP2 and T72.

Target Serial BMP2 BRDM2 BTR-70 T72 PCC (%)
BMP2 9566 412 11 2 3 96.26
BMP2 c21 420 4 2 3 97.90
T72 812 18 1 0 407 95.54
T72 A04 5 8 0 560 97.73
T72 A05 1 1 0 571 99.65
T72 A07 3 2 3 565 98.60
T72 A10 7 0 2 558 98.41
Average 98.58

Figure 9. Four different configurations of T72 tank.

Table 7. PCCs of all the methods under configuration variants.

Method Proposed SVM SRC A-ConvNet ASC Matching Region Matching
PCC (%) 98.58 95.67 95.44 96.16 97.12 96.55

4.3.2. EOC 2-Large Depression Angle Variation
The platform conveying SAR sensors may operate at different heights. Consequently, the depression angle of the measured image is likely to be different from those of the template samples, which are often collected at only one or a few depression angles. The template and test sets in the present experiment are showcased in Table 8, where three targets (2S1, BDRM2, and ZSU23/4) are classified. Images at 17° are adopted as the template samples, whereas those at 30° and 45° are classified. SAR images of the 2S1 target at 17°, 30° and 45° depression angles are shown in Figure 10. It shows that the large depression angle variations notably change the appearances and scattering patterns of the target. The results from our method under large depression angle variation are displayed in Table 9. It achieves the average PCCs of 97.80% and 76.16% at 30° and 45° depression angles, respectively. The performances of all the methods under a large depression angle variation are displayed in Table 10. All the PCCs fall sharply at a 45° depression angle, mainly because the test images have significant differences from the training ones, as shown in Figure 10. In the ASC matching method, the similarity evaluation is performed based on the correspondence of two ASC sets, so some stable ASCs under large depression angle variance still help correct target recognition. Therefore, it achieves a higher average PCC than the SVM, SRC, CNN, and region matching methods at a 45° depression angle. In comparison, our method obtains the highest accuracies at both 30° and 45° depression angles, validating its highest robustness in this case.

Figure 10. SAR images from depression angles of: (a) 17°; (b) 30°; (c) 45°.

Table 8. Template and test sets with large depression angle variation.

Depr. 2S1 BDRM2 ZSU23/4
Template set 17° 299 298 299
Test set 30° 288 287 288
Test set 45° 303 303 303

Table 9. Classification results of the proposed method at 30° and 45° depression angles.

Depr. Target 2S1 BDRM2 ZSU23/4 PCC (%) Average (%)
30° 2S1 278 6 4 96.53
30° BDRM2 1 285 1 99.30 97.68
30° ZSU23/4 4 4 280 97.22
45° 2S1 229 48 26 75.58
45° BDRM2 10 242 51 79.87 75.82
45° ZSU23/4 54 31 218 71.95

Table 10. PCCs of all the methods at 30° and 45° depression angles.

Method 30° 45°
Proposed 97.68 75.82
SVM 96.87 65.05
SRC 96.24 64.32
A-ConvNet 97.16 66.27
ASC Matching 96.56 71.35
Region Matching 95.82 64.72

4.3.3. EOC 3-Noise Contamination


Noise contamination is a common situation in the practical application of SAR ATR because of
the noises from the environment or SAR sensors [37–39]. To test the performance of our method
under possible noise contamination, we first simulate noisy images by adding Gaussian noises to
the test samples in Table 1. In detail, the original SAR image is first transformed into the frequency domain. Afterwards, the complex Gaussian noises are added to the frequency spectrum according to the preset SNR. Finally, the noisy frequency data is transformed back into the image domain to obtain the noisy SAR image. Figure 11 shows the noisy SAR images with different levels of noise addition. The average PCCs of all the methods under noise contamination are plotted in Figure 12. As shown, our method achieves the highest PCC at each noise level, indicating the best robustness regarding possible noise contamination. At low SNRs, the intensity distribution changes greatly. However, the ASCs can keep their properties so that they can be precisely extracted by sparse representation. In addition, the target region still contains pixels with higher intensities than the background or shadow pixels. Then, the target region can also be segmented properly. This is also the reason why the ASC Matching and Region Matching methods perform better than SVM, SRC, and CNN.
Figure 11. Images with noise addition (SNR): (a) original image; (b) 10 dB; (c) 5 dB; (d) 0 dB; (e) −5 dB; (f) −10 dB.

Figure 12. Performance comparison of all the methods under noise contamination.
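The noise-simulation procedure described above (transform to the frequency domain, add complex Gaussian noise scaled to the preset SNR, transform back) can be sketched as follows; interpreting the SNR as the ratio of mean spectrum power to noise power, and taking the magnitude of the inverse transform, are our assumptions:

```python
import numpy as np

def add_noise(image, snr_db):
    """Simulate a noisy SAR image: add complex Gaussian noise to the
    frequency spectrum at the requested SNR (in dB), then invert."""
    spectrum = np.fft.fft2(image)
    signal_power = np.mean(np.abs(spectrum) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    # complex Gaussian noise: half the power in each of the I/Q components
    noise = np.sqrt(noise_power / 2) * (
        np.random.randn(*spectrum.shape) + 1j * np.random.randn(*spectrum.shape))
    return np.abs(np.fft.ifft2(spectrum + noise))
```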


4.3.4. EOC 4-Partial Occlusion
In fact, the target may be occluded by obstacles; thus, a certain proportion of the target may not be captured by SAR sensors. In this experiment, the occluded SAR images are generated according to the occlusion model in [40,41]; then, the performance of different methods is evaluated at different occlusion levels. In detail, a certain proportion of the binary target region from the original image is first occluded from different directions. Afterwards, the remaining target region and background are filled with the original pixels, while the occluded region is filled with randomly picked background pixels. In this way, different levels of partially occluded SAR images from different directions can be generated for target recognition. In Figure 13, some occluded images are shown, in which 20% of the target regions are occluded from different directions. Figure 14 plots the PCCs of all the methods under partial occlusion. Our method obtains the highest PCCs at different occlusion levels, indicating its highest effectiveness under partial occlusion. The predicted regions of ASCs reflect the local features of the target. Although a part of the target is occluded, the remaining parts can still keep stable. In the proposed method, the ASCs are extracted to describe the local characteristics of the original image. The predicted regions can effectively convey the discrimination of the remaining parts, which are not occluded. By matching the predicted regions with the intact target region of the template samples, the proposed method can keep robust under partial occlusion. Similar to the conditions of noise corruption, the ASC Matching and Region Matching methods perform better than the classifiers performed on the global features, i.e., SVM, SRC, and CNN.

Figure 13. Occluded images at the occlusion level of 20% from different directions being: (a) original image; (b) direction 1; (c) direction 2; (d) direction 3; (e) direction 4.
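The occlusion model described above can be sketched as follows. The exact sweep order used in [40,41] to decide which target pixels are removed from a given direction is not specified here, so the column-wise sweep below is our assumption:

```python
import numpy as np

def occlude(image, target_mask, level, direction="left"):
    """Occlude a proportion `level` of the binary target region from one
    direction and fill the occluded pixels with randomly picked
    background values, leaving the rest of the image unchanged."""
    rows, cols = np.nonzero(target_mask)
    # sweep across the target from the chosen side (assumed column-wise)
    order = np.argsort(cols if direction == "left" else -cols)
    n_occ = int(level * len(rows))
    occ_r, occ_c = rows[order[:n_occ]], cols[order[:n_occ]]
    background = image[~target_mask]          # pixels outside the target
    out = image.copy()
    out[occ_r, occ_c] = np.random.choice(background, size=n_occ)
    return out
```

Running this at several levels and directions reproduces the kind of test set used in Figure 13.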

Figure 14. Performance comparison of all the methods under partial occlusion.

5. Conclusions
In this study, we propose an effective method for SAR ATR by matching ASCs to binary target
In this study, we propose an effective method for SAR ATR by matching ASCs to binary target
region. Instead of directly matching the points features, i.e., ASCs, to the target region, each ASC is
region. predicted
Instead of directly matching the points features, i.e., ASCs, to the target region, each ASC is
as a binary region using a thresholding method. The binary regions of individual ASCs
predicted as ainbinary
vary the areasregionand using
shapes, a thresholding
which reflect theirmethod. The binary
attributes such asregions
spatial of individual
positions, ASCs vary
relative
in the areas and shapes,
amplitudes, which Afterwards,
and lengths. reflect theirthe attributes
predictedsuch as spatial
regions positions,
of the test sample relative
are mapped amplitudes,
to the and
binary target region from the corresponding templates. Finally, a
lengths. Afterwards, the predicted regions of the test sample are mapped to the binary target region similarity measure is defined
from theaccording to the region
corresponding matching Finally,
templates. results, and the target label
a similarity is determined
measure is defined according to the highest
according to the region
similarity. The MSTAR dataset is employed for experiments. Based on the experimental results,
matching results, and the target label is determined according to the highest similarity. The MSTAR
conclusions are drawn as follows.
dataset is employed for experiments.
(1) The proposed method works Based on for
effectively thetheexperimental
recognition task results, conclusions
of ten targets under SOC are drawn
as follows.
with a notably high PCC of 98.34%, which outperforms other state-of-the-art methods.
(2) Under different types of EOCs (including configuration variants, large depression angle variation, noise contamination, and partial occlusion), the proposed method performs more robustly than the reference methods, owing to the robustness of the region features as well as the designed classification scheme.
(3) Although not superior in efficiency, its higher effectiveness and robustness make the proposed method a potential way to improve SAR ATR performance under practical conditions.
Future work is as follows. First, as the basic features in the proposed target recognition method, the extraction precision of the binary target region and ASCs should be further improved by adopting or developing more robust methods. Some despeckling algorithms [42–44] can first be used to improve the quality of the original SAR images before the feature extraction. Second, the similarity measure based on the region matching results should be further improved to enhance the ATR performance, e.g., by the adaptive determination of the weights for the different scores. Third, the proposed method should be extended to an ensemble SAR ATR system to handle the condition that several targets are contained in one SAR image. Lastly, the proposed method should be tested on other available datasets from airborne or orbital SAR sensors to further validate its effectiveness and robustness.
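The score-level fusion and maximum-similarity decision summarized above can be sketched as follows. The class names, score values, and the fixed fusion weight are illustrative assumptions only (the paper itself leaves the adaptive determination of the weights to future work).

```python
def fuse_scores(region_score: float, asc_score: float, w_region: float = 0.5) -> float:
    """Weighted score-level fusion of the region-matching and ASC-matching
    scores; the fixed weight is an assumption, not the paper's setting."""
    return w_region * region_score + (1.0 - w_region) * asc_score

def classify(test_scores: dict) -> str:
    """Assign the label of the template class with the highest fused similarity."""
    return max(test_scores, key=lambda c: fuse_scores(*test_scores[c]))

# Hypothetical per-class (region_score, asc_score) pairs for one test image.
scores = {"BMP2": (0.62, 0.58), "T72": (0.91, 0.88), "BTR70": (0.47, 0.55)}
print(classify(scores))  # T72 has the highest fused similarity
```

Making `w_region` depend on the reliability of each score (e.g., on the number of matched ASCs) is one way the adaptive weighting mentioned above could be realized.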
Sensors 2018, 18, 3019 17 of 19
Author Contributions: J.T., X.F., and S.W. conceived of and worked together to achieve this work. Y.R. performed the experiments. J.T. wrote the paper.
Funding: This research was funded by Beijing Municipal Natural Science Foundation (Grant No. Z8162039),
the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDA19080100, the Hainan
Provincial Department of Science and Technology (Grant No. ZDKJ2016021), the Natural Science Foundation of
Hainan (Grant No. 20154171) and the 135 Plan Project of Chinese Academy of Sciences (Grant No. Y6SG0200CX).
The APC was funded by Beijing Municipal Natural Science Foundation (Grant No. Z8162039).
Acknowledgments: The authors thank the anonymous reviewers for their constructive suggestions.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. El-Darymli, K.; Gill, E.W.; McGuire, P.; Power, D.; Moloney, C. Automatic target recognition in synthetic
aperture radar imagery: A state-of-the-art review. IEEE Access 2016, 4, 6014–6058. [CrossRef]
2. Park, J.; Park, S.; Kim, K. New discrimination features for SAR automatic target recognition. IEEE Geosci.
Remote Sens. Lett. 2013, 10, 476–480. [CrossRef]
3. Ding, B.Y.; Wen, G.J.; Ma, C.H.; Yang, X.L. Target recognition in synthetic aperture radar images using binary
morphological operations. J. Appl. Remote Sens. 2016, 10, 046006. [CrossRef]
4. Amoon, M.; Rezai-rad, G. Automatic target recognition of synthetic aperture radar (SAR) images based on
optimal selection of Zernike moment features. IET Comput. Vis. 2014, 8, 77–85. [CrossRef]
5. Anagnostopoulos, G.C. SVM-based target recognition from synthetic aperture radar images using target
region outline descriptors. Nonlinear Anal. 2009, 71, e2934–e2939. [CrossRef]
6. Ding, B.Y.; Wen, G.J.; Ma, C.H.; Yang, X.L. Decision fusion based on physically relevant features for SAR
ATR. IET Radar Sonar Navig. 2017, 11, 682–690. [CrossRef]
7. Papson, S.; Narayanan, R.M. Classification via the shadow region in SAR imagery. IEEE Trans. Aerosp.
Electron. Syst. 2012, 48, 969–980. [CrossRef]
8. Yuan, X.; Tang, T.; Xiang, D.L.; Li, Y.; Su, Y. Target recognition in SAR imagery based on local gradient ratio
pattern. Int. J. Remote Sens. 2014, 35, 857–870. [CrossRef]
9. Mishra, A.K. Validation of PCA and LDA for SAR ATR. In Proceedings of the 2008 IEEE Region 10 Conference,
Hyderabad, India, 19–21 November 2008; pp. 1–6.
10. Cui, Z.Y.; Cao, Z.J.; Yang, J.Y.; Feng, J.L.; Ren, H.L. Target recognition in synthetic aperture radar via
non-negative matrix factorization. IET Radar Sonar Navig. 2015, 9, 1376–1385. [CrossRef]
11. Huang, Y.L.; Pei, J.F.; Yang, J.Y.; Liu, X. Neighborhood geometric center scaling embedding for SAR ATR.
IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 180–192. [CrossRef]
12. Yu, M.T.; Dong, G.G.; Fan, H.Y.; Kuang, G.Y. SAR target recognition via local sparse representation of
multi-manifold regularized low-rank approximation. Remote Sens. 2018, 10, 211. [CrossRef]
13. Liu, X.; Huang, Y.L.; Pei, J.F.; Yang, J.Y. Sample discriminant analysis for SAR ATR. IEEE Geosci. Remote
Sens. Lett. 2014, 11, 2120–2124.
14. Gerry, M.J.; Potter, L.C.; Gupta, I.J.; Merwe, A. A parametric model for synthetic aperture radar measurement.
IEEE Trans. Antennas Propag. 1999, 47, 1179–1188. [CrossRef]
15. Potter, L.C.; Moses, R.L. Attributed scattering centers for SAR ATR. IEEE Trans. Image Process. 1997, 6, 79–91.
[CrossRef] [PubMed]
16. Chiang, H.; Moses, R.L.; Potter, L.C. Model-based classification of radar images. IEEE Trans. Inf. Theor. 2000,
46, 1842–1854. [CrossRef]
17. Ding, B.Y.; Wen, G.J.; Zhong, J.R.; Ma, C.H.; Yang, X.L. Robust method for the matching of attributed
scattering centers with application to synthetic aperture radar automatic target recognition. J. Appl.
Remote Sens. 2016, 10, 016010. [CrossRef]
18. Ding, B.Y.; Wen, G.J.; Huang, X.H.; Ma, C.H.; Yang, X.L. Target recognition in synthetic aperture radar
images via matching of attributed scattering centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10,
3334–3347. [CrossRef]
19. Ding, B.Y.; Wen, G.J.; Zhong, J.R.; Ma, C.H.; Yang, X.L. A robust similarity measure for attributed scattering
center sets with application to SAR ATR. Neurocomputing 2017, 219, 130–143. [CrossRef]
20. Zhao, Q.; Principe, J.C. Support vector machines for synthetic radar automatic target recognition. IEEE Trans.
Aerosp. Electron. Syst. 2001, 37, 643–654. [CrossRef]
21. Liu, H.C.; Li, S.T. Decision fusion of sparse representation and support vector machine for SAR image target
recognition. Neurocomputing 2013, 113, 97–104. [CrossRef]
22. Sun, Y.J.; Liu, Z.P.; Todorovic, S.; Li, J. Adaptive boosting for SAR automatic target recognition. IEEE Trans.
Aerosp. Electron. Syst. 2007, 43, 112–125. [CrossRef]
23. Thiagarajan, J.J.; Ramamurthy, K.; Knee, P.P.; Spanias, A.; Berisha, V. Sparse representation for automatic
target classification in SAR images. In Proceedings of the 2010 4th Communications, Control and Signal
Processing (ISCCSP), Limassol, Cyprus, 3–5 March 2010.
24. Song, H.B.; Ji, K.F.; Zhang, Y.S.; Xing, X.W.; Zou, H.X. Sparse representation-based SAR image target
classification on the 10-class MSTAR data set. Appl. Sci. 2016, 6, 26. [CrossRef]
25. Chen, S.Z.; Wang, H.P.; Xu, F.; Jin, Y.Q. Target classification using the deep convolutional networks for SAR
images. IEEE Trans. Geosci. Remote Sens. 2016, 47, 1685–1697. [CrossRef]
26. Wagner, S.A. SAR ATR by a combination of convolutional neural network and support vector machines.
IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2861–2872. [CrossRef]
27. Ding, J.; Chen, B.; Liu, H.W.; Huang, M.Y. Convolutional neural network with data augmentation for SAR
target recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [CrossRef]
28. Du, K.N.; Deng, Y.K.; Wang, R.; Zhao, T.; Li, N. SAR ATR based on displacement-and rotation-insensitive
CNN. Remote Sens. Lett. 2016, 7, 895–904. [CrossRef]
29. Huang, Z.L.; Pan, Z.X.; Lei, B. Transfer learning with deep convolutional neural networks for SAR target
classification with limited labeled data. Remote Sens. 2017, 9, 907. [CrossRef]
30. Zhou, J.X.; Shi, Z.G.; Cheng, X.; Fu, Q. Automatic target recognition of SAR images based on global scattering
center model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3713–3729.
31. Ding, B.Y.; Wen, G.J.; Huang, X.H.; Ma, C.H.; Yang, X.L. Target recognition in SAR images by exploiting the
azimuth sensitivity. Remote Sens. Lett. 2017, 8, 821–830. [CrossRef]
32. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Englewood, NJ, USA, 2008.
33. Liu, H.W.; Jiu, B.; Li, F.; Wang, Y.H. Attributed scattering center extraction algorithm based on sparse
representation with dictionary refinement. IEEE Trans. Antennas Propag. 2017, 65, 2604–2614. [CrossRef]
34. Cong, Y.L.; Chen, B.; Liu, H.W.; Jiu, B. Nonparametric Bayesian attributed scattering center extraction for
synthetic aperture radar targets. IEEE Trans. Signal Process. 2016, 64, 4723–4736. [CrossRef]
35. Dong, G.G.; Kuang, G.Y. Classification on the monogenic scale space: Application to target recognition in
SAR image. IEEE Tran. Image Process. 2015, 24, 2527–2539. [CrossRef] [PubMed]
36. Chang, C.; Lin, C. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 389–396.
37. Doo, S.; Smith, G.; Baker, C. Target classification performance as a function of measurement uncertainty.
In Proceedings of the 5th Asia-Pacific Conference on Synthetic Aperture Radar, Singapore, 1–4 September 2015.
38. Ding, B.Y.; Wen, G.J. Target recognition of SAR images based on multi-resolution representation.
Remote Sens. Lett. 2017, 8, 1006–1014. [CrossRef]
39. Ding, B.Y.; Wen, G.J. Sparsity constraint nearest subspace classifier for target recognition of SAR images.
J. Visual Commun. Image Represent. 2018, 52, 170–176. [CrossRef]
40. Bhanu, B.; Lin, Y. Stochastic models for recognition of occluded targets. Pattern Recogn. 2003, 36, 2855–2873.
[CrossRef]
41. Ding, B.Y.; Wen, G.J. Exploiting multi-view SAR images for robust target recognition. Remote Sens. 2017,
9, 1150. [CrossRef]
42. Lopera, O.; Heremans, R.; Pizurica, A.; Dupont, Y. Filtering speckle noise in SAS images to improve detection
and identification of seafloor targets. In Proceedings of the International Waterside Security Conference,
Carrara, Italy, 3–5 November 2010.
43. Idol, T.; Haack, B.; Mahabir, R. Radar speckle reduction and derived texture measures for land cover/use
classification: A case study. Geocarto Int. 2017, 32, 18–29. [CrossRef]
44. Qiu, F.; Berglund, J.; Jensen, J.R.; Thakkar, P.; Ren, D. Speckle noise reduction in SAR imagery using a local
adaptive median filter. Gisci. Remote Sens. 2004, 41, 244–266. [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).