Article
Target Recognition of SAR Images via Matching
Attributed Scattering Centers with Binary
Target Region
Jian Tan 1,2 , Xiangtao Fan 1,2 , Shenghua Wang 3, * and Yingchao Ren 2
1 Hainan Key Laboratory of Earth Observation, Sanya 572029, China; [email protected] (J.T.);
[email protected] (X.F.)
2 Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of
Sciences, Beijing 100094, China; [email protected]
3 School of Public Administration and Mass Media, Beijing Information Science and Technology University,
Beijing 100093, China
* Correspondence: [email protected]
Received: 25 July 2018; Accepted: 5 September 2018; Published: 10 September 2018
Abstract: A target recognition method of synthetic aperture radar (SAR) images is proposed via
matching attributed scattering centers (ASCs) to binary target regions. The ASCs extracted from the
test image are predicted as binary regions. In detail, each ASC is first transformed to the image domain
based on the ASC model. Afterwards, the resulting image is converted to a binary region segmented
by a global threshold. All the predicted binary regions of individual ASCs from the test sample are
mapped to the binary target regions of the corresponding templates. Then, the matched regions
are evaluated by three scores, which are combined into a single similarity measure via score-level fusion.
In the classification stage, the target label of the test sample is determined according to the fused
similarities. The proposed region matching method avoids the conventional ASC matching problem,
which involves the assignment of ASC sets. In addition, the predicted regions are more robust
than the point features. The Moving and Stationary Target Acquisition and Recognition (MSTAR)
dataset is used for performance evaluation in the experiments. According to the experimental results,
the method in this study outperforms some traditional methods reported in the literature under
several different operating conditions. Under the standard operating condition (SOC), the proposed
method achieves very good performance, with an average recognition rate of 98.34%, which is higher
than the traditional methods. Moreover, the robustness of the proposed method is also superior to the
traditional methods under different extended operating conditions (EOCs), including configuration
variants, large depression angle variation, noise contamination, and partial occlusion.
Keywords: synthetic aperture radar (SAR); target recognition; attributed scattering center (ASC);
region matching; score fusion
1. Introduction
Owing to the merits of synthetic aperture radar (SAR), interpreting high-resolution SAR images is
becoming an important task for both military and civilian applications. As a key step of SAR interpretation,
automatic target recognition (ATR) techniques are employed to decide the target label in an unknown
image [1]. Typically, a general SAR ATR method comprises two parts: feature extraction and a
decision engine. The former aims to obtain low-dimensional representations of the original images
while preserving their discrimination capability. In this way, the high dimensionality of the original
image is reduced significantly, which helps improve the efficiency of the subsequent classification. Different
kinds of features are adopted or designed for SAR target recognition in the previous literature. The
features describing the physical structures or shape of the target are extracted for SAR ATR, e.g., binary
target region [2–4], target outline [5,6], target’s radar shadow [7], local texture [8], etc. Park et al. [2] design
several descriptors from the binary target region for SAR target recognition. In [3], a SAR ATR method
is designed through the matching of binary target regions, where the region residuals are processed
by the binary morphological operations to enhance divergences between different classes. In [4], the
binary target region is first described by the Zernike moments, and a support vector machine (SVM) is
employed for classification afterwards. The target outline is taken as the discriminative feature in [5],
which is approximated by the elliptical Fourier series (EFS). Then, SVM is used to classify the outline
descriptors. Yuan et al. use the local gradient ratio pattern to describe SAR images with application to
target recognition [8]. The projection features are also prevalent in SAR ATR. Principal component analysis
(PCA) [9], linear discriminant analysis (LDA) [9], and non-negative matrix factorization (NMF) [10] are
often used to extract the projection features. Based on the idea of manifold learning, other projection
features are designed to exploit the properties of the training samples [11–13]. In the high frequency area,
the total backscattering of a whole target can be regarded as the summation of several individual scattering
centers [14]. In this way, the scattering center features are discriminative for SAR target recognition. Several
SAR ATR methods have been proposed using the attributed scattering centers (ASCs) which achieve
good effectiveness and robustness [15–19]. In the classification stage, the classifiers (decision engines) are
adopted or designed according to the properties of the extracted features. For features with unified forms,
e.g., feature vectors extracted by PCA, classifiers like SVM [4,5,20,21], adaptive boosting (AdaBoost) [22],
sparse representation-based classification (SRC) [21,23,24], etc., can be directly used for classification
tasks. The deep learning method, i.e., the convolutional neural network (CNN), has also been demonstrated
to be notably effective for image interpretation [25–29]. In a CNN, hierarchical deep features are learned by
the convolution layers, with a softmax classifier performing the multi-class regression at the end. However,
for features with no specific orders, e.g., ASCs, the former classifiers cannot be directly employed for
classification. Usually, a similarity measure between these features is defined [16–18]. Afterwards,
the target label is assigned as the template class achieving the maximum similarity.
This paper proposes an efficient and effective method for SAR ATR via matching ASCs with
binary target regions. In previous works [16–18] using ASCs for SAR ATR, a complex one-to-one
correspondence is often built for the following similarity evaluation. In [16], Chiang et al. solve the
assignment problem between two ASC sets using the Hungarian algorithm and evaluate the similarity
as the posterior probability. Ding et al. exploit the line and triangle structures in the ASC set during the
similarity evaluation based on the one-to-one correspondences between two ASC sets [17,18]. However,
it is still a difficult and complex task to precisely build the correspondence between the ASCs for the
following reasons [30]. First, there are always missed or falsely detected ASCs caused by the extended
operating conditions (EOCs) such as occlusion, noise, etc. Second, the ASCs cannot be extracted without
errors, and these extraction errors cause further problems. Lastly, as point features, the ASCs lack high
stability, especially because SAR images change greatly with variations in the target azimuth [31].
As a remedy, in this study, each of the extracted ASCs from the test image is represented by a binary
region. In detail, the backscattering field of the ASC is first calculated based on the ASC model,
and then transformed to the image domain. Afterwards, a global threshold is used to segment the
reconstructed images of individual ASCs as binary regions. In the image domain, the spatial positions
of the ASCs can be intuitively observed. For ASCs with higher amplitudes, they tend to produce
regions with larger areas because their images contain more pixels with high intensities. In addition,
the distributed ASCs with lengths could also maintain their attributes at proper thresholds. Hence,
the predicted binary regions actually embody the attributes of the ASCs such as the spatial positions,
relative amplitudes, and lengths. The binary regions of individual ASCs are matched to the extracted
binary target region from the corresponding template samples. The overlap and differences during
the region matching reflect the correlations between the test image and corresponding templates from
various classes. Based on the region matching results, three matching scores are defined. To combine
the strengths of different scores, a score-level fusion is performed to obtain a unified similarity. Finally,
the target label is determined according to the calculated similarities.
In the remainder of this study, we do the following: in Section 2, we introduce the extraction of
binary target region and ASCs. The main methodology of matching ASCs with the binary target region
is presented in Section 3. In Section 4, experiments are conducted on the Moving and Stationary Target
Acquisition and Recognition (MSTAR) dataset. Finally, in Section 5, we draw conclusions according to
the experimental results, and outline some future work.
Figure 1. Illustration of the target segmentation algorithm: (a) original SAR image of BMP2 tank;
(b) equalized image; (c) smoothed image after mean filtering; (d) preliminary segmentation result;
(e) result after the opening operation; (f) result after the closing operation.

Figure 2. The structuring elements used in the closing operation.
2.2. ASC Extraction

2.2.1. ASC Model
SAR images reflect the target's electromagnetic characteristics in the high frequency region, which
can be quantitatively modeled as a summation of local responses, i.e., scattering centers [14]. The target's
backscattering field can be expressed as follows:

E(f, φ; θ) = ∑_{i=1}^{K} Ei(f, φ; θi) (1)

In Equation (1), f and φ denote the frequency and aspect angle, respectively. K is the number of
the ASCs in the radar measurement. For a single ASC, its backscattering field can be calculated
according to the ASC model [14] as Equation (2):

Ei(f, φ; θi) = Ai · (jf/fc)^{αi} · exp(−j4πf/c · (xi cos φ + yi sin φ)) · sinc(2πf/c · Li sin(φ − φi)) · exp(−2πf γi sin φ) (2)

where c denotes the propagation velocity of the electromagnetic wave, fc is the radar center frequency,
and θ = {θi} = [Ai, αi, xi, yi, Li, φi, γi] (i = 1, 2, ..., K) represents the attribute set of all the ASCs in a
SAR image. In detail, for the ith ASC, Ai is the complex amplitude; (xi, yi) denote the spatial position;
αi is the frequency dependence; for a distributed ASC, Li and φi represent the length and orientation,
respectively; and γi denotes the aspect dependence of a localized ASC.
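For readers who want to experiment with the model, the following is a minimal numpy sketch of Equation (2); the center frequency fc, the sampling grid, and the attribute values in the example are illustrative assumptions rather than MSTAR system parameters.

```python
import numpy as np

def asc_field(f, phi, A, alpha, x, y, L=0.0, phi_bar=0.0, gamma=0.0, fc=9.6e9):
    """Backscattering field of a single ASC, following Equation (2).

    f, phi are broadcastable grids of frequency (Hz) and aspect angle (rad);
    the remaining arguments are the attributes [A, alpha, x, y, L, phi_bar,
    gamma] of one ASC. fc is an assumed X-band center frequency.
    """
    c = 3e8  # propagation velocity of the electromagnetic wave (m/s)
    # np.sinc(u) = sin(pi*u)/(pi*u), so passing 2*f*L*sin(.)/c realizes
    # the term sinc(2*pi*f/c * L * sin(phi - phi_bar)) of Equation (2)
    return (A * (1j * f / fc) ** alpha
            * np.exp(-1j * 4 * np.pi * f / c * (x * np.cos(phi) + y * np.sin(phi)))
            * np.sinc(2 * f * L * np.sin(phi - phi_bar) / c)
            * np.exp(-2 * np.pi * f * gamma * np.sin(phi)))

# example: one localized ASC sampled on a small frequency-aspect grid
f, phi = np.meshgrid(np.linspace(9.3e9, 9.9e9, 64),
                     np.radians(np.linspace(-1.5, 1.5, 64)))
E = asc_field(f, phi, A=1.0, alpha=1.0, x=2.0, y=-1.0)
```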
Based on the ASC model, the measured backscattering field can be formulated as a sparse representation over an overcomplete dictionary:

s = D(θ) × σ (3)
where s is obtained by reformulating the 2-D measurement E( f , φ; θ ) into a vector; D (θ ) represents the
overcomplete dictionary. In detail, each column of D (θ ) stores the vector form of the electromagnetic
field of one element in the parameter space θ; σ denotes a sparse vector and each element in it represents
the complex amplitude A.
In practical situations, the noises and possible model errors should also be considered. Therefore,
Equation (3) is reformulated as follows:
s = D (θ ) × σ + n (4)
In Equation (4), n denotes the error term, which is modeled as a zero-mean additive white
Gaussian process. Afterwards, the attributes of the ASCs can be estimated as follows:

σ̂ = arg min ‖σ‖0 s.t. ‖s − D(θ)σ‖2 ≤ ε (5)
In Equation (5), ε = ‖n‖2 represents the noise level; ‖·‖0 denotes the l0-norm and σ̂ is the estimated
complex amplitudes with respect to the dictionary D (θ ). As a nondeterministic polynomial-time
hard (NP-hard) problem, the sparse representation problem in Equation (5) is computationally
difficult to solve. As a remedy, some greedy methods, e.g., the orthogonal matching pursuit (OMP),
are available [33,34]. Algorithm 1 illustrates the detailed procedure of ASC extraction based on
sparse representation.
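Algorithm 1 itself is not reproduced here, but the sketch below shows the greedy OMP idea behind it under the formulation of Equation (5); the residual-norm stopping rule and the assumption of unit-norm dictionary columns are ours, not necessarily those of [33,34].

```python
import numpy as np

def omp_extract(s, D, eps):
    """Greedy solver for Equation (5): min ||sigma||_0 s.t. ||s - D sigma||_2 <= eps.

    s is the vectorized measurement, D the overcomplete dictionary with
    unit-norm columns (one per candidate parameter set), eps the noise level.
    """
    sigma_hat = np.zeros(D.shape[1], dtype=complex)
    residual = s.astype(complex).copy()
    support, coef = [], np.zeros(0, dtype=complex)
    while np.linalg.norm(residual) > eps:
        k = int(np.argmax(np.abs(D.conj().T @ residual)))  # most correlated atom
        if k in support:
            break                       # no further progress is possible
        support.append(k)
        # least-squares refit of the amplitudes on the current support
        coef, *_ = np.linalg.lstsq(D[:, support], s, rcond=None)
        residual = s - D[:, support] @ coef
    if support:
        sigma_hat[support] = coef
    return sigma_hat                    # nonzeros give the estimated amplitudes
```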
3. Matching ASCs with Binary Target Region

3.1. Region Prediction by ASC
As point features, the matching of two ASC sets is a complex and difficult task, as analyzed in
previous research [30]. As a remedy, in this study, the extracted ASCs from the test image are
represented as binary regions using a thresholding method. The backscattering field of each ASC is
first calculated based on the ASC model in Equation (2). Afterwards, the imaging process is performed
to transform the backscattering field to the image domain. In this study, the imaging process is
consistent with that of the MSTAR images, including zero-padding, windowing (−35 dB Taylor window),
and the 2D fast Fourier transform (FFT). The detailed operating parameters of MSTAR SAR images can
be referred to in [32]. Denoting the maximum intensity of the image from an individual ASC as m, the
global threshold for region prediction is set to be m/α, where α is a scale coefficient larger than 1.
Figure 3 shows the predicted binary regions of three ASCs with different amplitudes at α = 30. The
images from ASCs with higher amplitudes tend to have higher pixel intensities, as shown in Figure 3a
(from left to right). Their predicted binary regions are shown in Figure 3b, correspondingly. It shows
that the stronger ASCs produce binary regions with larger areas. Figure 4 shows the predicted binary
region of a distributed ASC. As shown, the length of the distributed ASC can be maintained in the
predicted region at a proper threshold. Therefore, the predicted binary region can effectively convey
the discriminative attributes of the original ASC, such as spatial positions, relative amplitudes, and
lengths. Figure 5 illustrates the target's image reconstructed by all the extracted ASCs, as well as
the predicted regions. Figure 5a shows a SAR image of the BMP2 tank. The ASCs of the original image
are extracted based on sparse representation and used to reconstruct the target's image, as shown
in Figure 5b. The reconstruction result shows that the extracted ASCs can remove the background
interference while the target's characteristics are maintained. Figure 5c shows the overlap of all
the predicted regions. Clearly, the predicted regions can convey the geometrical shape and scattering
center distribution of the original image.
Figure 3. Images and binary regions of ASCs with different amplitudes: (a) images; (b) binary regions.
Figure 4. Image and binary region of a distributed ASC: (a) image; (b) binary region.
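A minimal sketch of the region prediction step of Section 3.1 follows; the 128 × 128 image size is an assumption, and a Hamming window stands in for the −35 dB Taylor window used with MSTAR data.

```python
import numpy as np

def predict_region(E, alpha=30, size=128):
    """Predict a binary region from one ASC's sampled field E (Section 3.1)."""
    # windowing: a Hamming taper stands in for the -35 dB Taylor window
    w = np.hamming(E.shape[0])[:, None] * np.hamming(E.shape[1])[None, :]
    padded = np.zeros((size, size), dtype=complex)  # zero-padding
    padded[:E.shape[0], :E.shape[1]] = E * w
    img = np.abs(np.fft.fftshift(np.fft.fft2(padded)))  # 2D FFT imaging
    m = img.max()                 # maximum intensity of this ASC's image
    return img > m / alpha        # global threshold m / alpha, alpha > 1
```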
3.2. Region Matching

The region matching results reflect the differences between classes: clearly, the region overlap with
the correct class has a much larger area than those of the incorrect classes. Three scores are defined
to evaluate the matching results, as follows:

G1 = M/N, G2 = RM/Rt, G3 = RM/RN (6)

where N is the number of predicted regions, i.e., the number of all the extracted ASCs; M denotes the
number of predicted regions which are judged to be matched with the template's target region; RM
denotes the total area of all the matched regions; and RN and Rt are the areas of all the predicted
regions and of the binary target region, respectively. A predicted region is judged to be matched only
if the overlap between itself and the template's binary region is larger than half of its area.
To combine the advantages of the three scores, a linear fusion algorithm is performed to obtain
the overall similarity as Equation (7) [35]:

S = ω1 G1 + ω2 G2 + ω3 G3 (7)

where ω1, ω2, and ω3 denote the weights and S represents the fused similarity. With little prior
information on which score is more important, equal weights are assigned to the three scores in this
study, i.e., ω1 = ω2 = ω3 = 1/3.
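The three scores and their fusion can be computed directly from boolean region masks, as in the following sketch; pred_regions and template_region are hypothetical numpy arrays, and at least one ASC region with nonzero area is assumed.

```python
import numpy as np

def fused_similarity(pred_regions, template_region, weights=(1/3, 1/3, 1/3)):
    """Matching scores of Equation (6) fused by Equation (7)."""
    N = len(pred_regions)  # number of predicted regions, i.e., extracted ASCs
    # a region matches if more than half of its area overlaps the template
    matched = [r for r in pred_regions
               if np.logical_and(r, template_region).sum() > 0.5 * r.sum()]
    M = len(matched)
    R_M = sum(int(r.sum()) for r in matched)        # total matched area
    R_N = sum(int(r.sum()) for r in pred_regions)   # area of all predicted regions
    R_t = int(template_region.sum())                # area of the template region
    G1, G2, G3 = M / N, R_M / R_t, R_M / R_N        # Equation (6)
    w1, w2, w3 = weights
    return w1 * G1 + w2 * G2 + w3 * G3              # Equation (7), equal weights
```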
3.3. Target Recognition
The proposed matching scheme for the extracted ASCs and binary target region is performed
with application to SAR target recognition. The basic procedure of our method is illustrated in Figure 7,
which can be summarized as follows.
(1) The ASCs of the test image are estimated and predicted as binary regions.
(2) The azimuth of the test image is estimated to select the corresponding template images.
(3) Extract the binary target regions of all the selected template samples.
(4) Match the predicted regions to each of the template regions and calculate the similarity.
(5) Decide the target label to be the template class which achieves the maximum similarity.
Specifically, the azimuth estimation algorithm in [22] is used, which also uses the binary target
region, so it can be directly performed on the target region from Section 2 to obtain the estimated
azimuth. The estimation precision of the method is about ±5°. Accordingly, in this study, the template
samples with azimuths in the interval of [−3°: 1°: 3°] around the estimated one are used as the potential
templates. In addition, to overcome the 180° ambiguity, the template selection is performed on both the
estimated azimuth and its 180° symmetric counterpart, and the average of the similarities from all the
candidate template samples is adopted as the final similarity for target recognition. The scale coefficient
to determine the global threshold is set as α = 30 according to the experimental observations for
parameter selection.
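Putting the steps together, a sketch of the decision loop might look as follows; extract_ascs, estimate_azimuth, target_region, and the template_db layout are hypothetical placeholders for the components described in Sections 2 and 3, and fused_similarity refers to the sketch in Section 3.2.

```python
def classify(test_image, template_db, extract_ascs, estimate_azimuth,
             predict_region, target_region):
    """Sketch of steps (1)-(5) in Figure 7; all helpers are hypothetical.

    template_db maps class label -> list of (azimuth_deg, template_image).
    """
    regions = [predict_region(a) for a in extract_ascs(test_image)]  # step (1)
    az = estimate_azimuth(test_image)                                # step (2)
    best_label, best_sim = None, -1.0
    for label, templates in template_db.items():
        sims = []
        # consider the estimated azimuth and its 180 deg symmetric counterpart
        for center in (az, (az + 180.0) % 360.0):
            for t_az, t_img in templates:
                diff = abs((t_az - center + 180.0) % 360.0 - 180.0)
                if diff <= 3.0:                                      # [-3, 3] deg window
                    # steps (3)-(4): template region extraction and matching
                    sims.append(fused_similarity(regions, target_region(t_img)))
        sim = sum(sims) / len(sims) if sims else 0.0  # average over candidates
        if sim > best_sim:
            best_label, best_sim = label, sim         # step (5): maximum similarity
    return best_label
```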
Figure 7. The basic procedure of target recognition.
4. Experiment on MSTAR Dataset

4.1.1. MSTAR Dataset

The widely used benchmark dataset for evaluating SAR ATR methods, i.e., the MSTAR dataset,
is adopted for experimental evaluation in this paper. The dataset is collected by the Sandia National
Laboratory airborne SAR sensor platform, working at X-band with HH polarization. There are ten
classes of ground targets with similar physical sizes, whose names and optical images are presented
in Figure 8. The collected SAR images have resolutions of 0.3 m × 0.3 m. The detailed template and
test sets are given in Table 1, where samples from the 17° depression angle are adopted as the templates,
whereas images at 15° are classified.
The PCCs of these targets are all over 96%, and the average PCC is calculated to be 98.34%. Table 4
displays the average PCCs, as well as the time consumption (for classifying one MSTAR image) of all
the methods. Our method achieves the highest PCC, indicating its effectiveness under SOC. Although
CNN is demonstrated to be effective for SAR ATR, it cannot work well if the training samples are
insufficient. In this experimental setup, there are some configuration variants between the template
and test sets of BMP2 and T72. As a result, the performance of A-ConvNet cannot rival the proposed
method. Compared with the ASC Matching and Region Matching methods, our method performs
much better, indicating that the classification scheme in this study can better make use of ASCs and
target region to enhance the recognition performance. As for the time consumption, the classifiers
like SVM, SRC, and CNN perform more efficiently than the proposed method because of the unified
form of the features used in these methods. The ASC matching consumes the most time because
it involves complex one-to-one matching between ASC sets. Compared with the region matching
method in [3], the proposed method is relatively more efficient. The method in [3] needs to process the
region residuals between two binary target regions, which is more time-consuming than the proposed
region matching method.
Table 3. Confusion matrix of the proposed method on the ten targets under SOC.
Target BMP2 BTR70 T72 T62 BRDM2 BTR60 ZSU23/4 D7 ZIL131 2S1 PCC (%)
BMP2 553 6 7 0 0 3 2 0 1 2 96.44
BTR70 0 196 0 0 0 0 0 0 0 0 100
T72 6 4 562 0 0 0 2 5 3 0 96.73
T62 0 0 0 274 0 0 0 0 0 0 100
BRDM2 0 0 0 0 274 0 0 0 0 0 100
BTR60 1 0 0 0 0 193 0 1 0 0 98.94
ZSU23/4 1 0 0 1 1 0 269 0 1 1 98.16
D7 1 0 0 0 0 1 0 271 0 1 98.88
ZIL131 0 2 0 0 1 0 2 0 269 0 98.18
2S1 0 4 0 0 0 1 0 0 0 269 98.18
Average 98.34
PCC: percentage of correct classification.
Table 4. Average PCCs of all the methods under the standard operating condition.

Method                 Proposed  SVM    SRC    A-ConvNet  ASC Matching  Region Matching
PCC (%)                98.34     95.66  94.68  97.52      95.30         94.68
Time consumption (ms)  75.8      55.3   60.5   63.2       125.3         88.6
As shown in Table 6, the different configurations of BMP2 and T72 are recognized with high PCCs,
and the average PCC is calculated to be 98.58%. Table 7 compares the average PCCs of different methods
under configuration variants. The proposed method works most robustly under configuration variants
with the highest average PCC. Targets of different configurations share similar physical sizes and
shapes with some local modifications. In this case, the target region and local descriptors can provide
more robustness than global features, like image intensities or PCA features; that is why the ASC
Matching and Region Matching methods outperform the SVM, SRC, and CNN methods in this situation.
Table in thisand
5. Template situation.
test sets with configuration variants.
Table 5.Depr.
Template andBMP2
test sets withBDRM2
configurationBTR70
variants. T72
Template set 17° 233 (9563) 298 233 232 (132)
Depr. BMP2 BDRM2 BTR70 T72
426 (812)
Template set 17◦ 233 (9563) 298 233 232573
(132)
(A04)
428 (9566) 426 (812)
Test set 15°, 17° 0 0 573 (A05)
429 (c21) 573573
(A04)
428 (9566) (A07)
Test set 15◦ , 17◦ 0 0 573 (A05)
429 (c21)
573567 (A10)
(A07)
567 (A10)
Table 6. Classification results of different configurations of BMP2 and T72.

Target  Serial  BMP2  BRDM2  BTR-70  T72  PCC (%)
BMP2    9566    412   11     2       3    96.26
        c21     420   4      2       3    97.90
T72     812     1     8      10      407  95.54
        A04     5     8      0       560  97.73
        A05     1     1      0       571  99.65
        A07     3     2      3       565  98.60
        A10     7     0      2       558  98.41
Average                                   98.58
Figure 9. Four different configurations of T72 tank.
Table 7. PCCs of all the methods under configuration variants.

Method   Proposed  SVM    SRC    A-ConvNet  ASC Matching  Region Matching
PCC (%)  98.58     95.67  95.44  96.16      97.12         96.55
Table 8. Template and test sets with large depression angle variation.

              Depr.  2S1  BRDM2  ZSU23/4
Template set  17°    299  298    299
Test set      30°    288  287    288
              45°    303  303    303
Table 9. Classification results at 30° and 45° depression angles.

Depr.  Target   2S1  BRDM2  ZSU23/4  PCC (%)  Average (%)
30°    2S1      278  6      4        96.53    97.68
       BRDM2    1    285    1        99.30
       ZSU23/4  4    4      280      97.22
45°    2S1      229  48     26       75.58    75.82
       BRDM2    10   242    51       79.87
       ZSU23/4  54   31     218      71.95
Table 10. PCCs of all the methods at 30° and 45° depression angles.

Method           PCC (%) at 30°  PCC (%) at 45°
Proposed         97.68           75.82
SVM              96.87           65.05
SRC              96.24           64.32
A-ConvNet        97.16           66.27
ASC Matching     96.56           71.35
Region Matching  95.82           64.72
4.3.3. EOC 3: Noise Contamination

Noise contamination is a common situation in the practical application of SAR ATR because of
the noises from the environment or SAR sensors [37–39]. To test the performance of our method under
possible noise contamination, we first simulate noisy images by adding Gaussian noises to the test
samples in Table 1. In detail, the original SAR image is first transformed into the frequency domain.
Afterwards, complex Gaussian noises are added to the frequency spectrum according to the preset SNR.
Finally, the noisy frequency data is transformed back into the image domain to obtain the noisy SAR
image. Figure 11 shows the noisy SAR images with different levels of noise addition. The average PCCs
of all the methods under noise contamination are plotted in Figure 12. As shown, our method achieves
the highest PCC at each noise level, indicating the best robustness regarding possible noise contamination.
At low SNRs, the intensity distribution changes greatly. However, the ASCs can keep their properties so
that they can be precisely extracted by sparse representation. In addition, the target region still contains
pixels with higher intensities than the background or shadow pixels. Then, the target region can also
be segmented properly. This is also the reason why the ASC Matching method and Region Matching
method perform better than SVM, SRC, and CNN.
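The noise simulation described above can be sketched as follows; defining the SNR over the frequency-domain signal power is our assumption about the exact convention.

```python
import numpy as np

def add_noise(img, snr_db):
    """Simulate noise contamination in the frequency domain (Section 4.3.3)."""
    spec = np.fft.fft2(img)                       # to the frequency domain
    sig_power = np.mean(np.abs(spec) ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))  # scale to the preset SNR
    noise = np.sqrt(noise_power / 2) * (np.random.randn(*spec.shape)
                                        + 1j * np.random.randn(*spec.shape))
    return np.abs(np.fft.ifft2(spec + noise))     # back to the image domain
```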
Figure 11. Images with noise addition (SNR): (a) original image; (b) 10 dB; (c) 5 dB; (d) 0 dB; (e) −5 dB; (f) −10 dB.
Figure 12. Performance comparison of all the methods under noise contamination.
Figure 13. Occluded images at the occlusion level of 20% from different directions: (a) original image; (b) direction 1; (c) direction 2; (d) direction 3; (e) direction 4.
5. Conclusions
In this study, we propose an effective method for SAR ATR by matching ASCs to the binary target
region. Instead of directly matching the point features, i.e., ASCs, to the target region, each ASC is
predicted as a binary region using a thresholding method. The binary regions of individual ASCs vary
in their areas and shapes, which reflect their attributes such as spatial positions, relative amplitudes,
and lengths. Afterwards, the predicted regions of the test sample are mapped to the binary target
region from the corresponding templates. Finally, a similarity measure is defined according to the
region matching results, and the target label is determined according to the highest similarity.
The MSTAR dataset is employed for experiments. Based on the experimental results, conclusions are
drawn as follows.

(1) The proposed method works effectively for the recognition task of ten targets under SOC
with a notably high PCC of 98.34%, which outperforms other state-of-the-art methods.

(2) Under different types of EOCs (including configuration variants, large depression angle
variation, noise contamination, and partial occlusion), the proposed method performs more robustly
than the reference methods owing to the robustness of the region features as well as the designed
classification scheme.

(3) Although not superior in efficiency, the higher effectiveness and robustness make the proposed
method a potential way to improve SAR ATR performance in practical conditions.

Future work is as follows. First, as basic features in the proposed target recognition method,
the extraction precision of the binary target region and ASCs should be further improved by adopting
or developing more robust methods. Some despeckling algorithms [42–44] can be first used to improve
the quality of the original SAR images before the feature extraction. Second, the similarity measure
based on the region matching results should be further improved to enhance the ATR performance,
e.g., by the adaptive determination of the weights for different scores. Third, the proposed method
should be extended to an ensemble SAR ATR system to handle the condition that several targets are
contained in a single SAR image. Lastly, the proposed method should be tested on other available
datasets from airborne or orbital SAR sensors to further validate its effectiveness and robustness.
Municipalits effectiveness
Natural and robustness.
Science Foundation (Grant No. Z8162039), the
Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDA19080100, the Hainan
Sensors 2018, 18, 3019 17 of 19
Author Contributions: J.T., X.F. and S.W. conceived and worked together to achieve this work. Y.R. performed the
experiments. J.T. wrote the paper.
Funding: This research was funded by Beijing Municipal Natural Science Foundation (Grant No. Z8162039),
the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDA19080100, the Hainan
Provincial Department of Science and Technology (Grant No. ZDKJ2016021), the Natural Science Foundation of
Hainan (Grant No. 20154171) and the 135 Plan Project of Chinese Academy of Sciences (Grant No. Y6SG0200CX).
The APC was funded by Beijing Municipal Natural Science Foundation (Grant No. Z8162039).
Acknowledgments: The authors thank the anonymous reviewers for their constructive suggestions.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. El-Darymli, K.; Gill, E.W.; McGuire, P.; Power, D.; Moloney, C. Automatic target recognition in synthetic
aperture radar imagery: A state-of-the-art review. IEEE Access 2016, 4, 6014–6058. [CrossRef]
2. Park, J.; Park, S.; Kim, K. New discrimination features for SAR automatic target recognition. IEEE Geosci.
Remote Sens. Lett. 2013, 10, 476–480. [CrossRef]
3. Ding, B.Y.; Wen, G.J.; Ma, C.H.; Yang, X.L. Target recognition in synthetic aperture radar images using binary
morphological operations. J. Appl. Remote Sens. 2016, 10, 046006. [CrossRef]
4. Amoon, M.; Rezai-rad, G. Automatic target recognition of synthetic aperture radar (SAR) images based on
optimal selection of Zernike moment features. IET Comput. Vis. 2014, 8, 77–85. [CrossRef]
5. Anagnostopulos, G.C. SVM-based target recognition from synthetic aperture radar images using target
region outline descriptors. Nonlinear Anal. 2009, 71, e2934–e2939. [CrossRef]
6. Ding, B.Y.; Wen, G.J.; Ma, C.H.; Yang, X.L. Decision fusion based on physically relevant features for SAR
ATR. IET Radar Sonar Navig. 2017, 11, 682–690. [CrossRef]
7. Papson, S.; Narayanan, R.M. Classification via the shadow region in SAR imagery. IEEE Trans. Aerosp.
Electron. Syst. 2012, 48, 969–980. [CrossRef]
8. Yuan, X.; Tang, T.; Xiang, D.L.; Li, Y.; Su, Y. Target recognition in SAR imagery based on local gradient ratio
pattern. Int. J. Remote Sens. 2014, 35, 857–870. [CrossRef]
9. Mishra, A.K. Validation of PCA and LDA for SAR ATR. In Proceedings of the 2008 IEEE Region 10 Conference,
Hyderabad, India, 19–21 November 2008; pp. 1–6.
10. Cui, Z.Y.; Cao, Z.J.; Yang, J.Y.; Feng, J.L.; Ren, H.L. Target recognition in synthetic aperture radar via
non-negative matrix factorization. IET Radar Sonar Navig. 2015, 9, 1376–1385. [CrossRef]
11. Huang, Y.L.; Pei, J.F.; Yang, J.Y.; Liu, X. Neighborhood geometric center scaling embedding for SAR ATR.
IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 180–192. [CrossRef]
12. Yu, M.T.; Dong, G.G.; Fan, H.Y.; Kuang, G.Y. SAR target recognition via local sparse representation of
multi-manifold regularized low-rank approximation. Remote Sens. 2018, 10, 211. [CrossRef]
13. Liu, X.; Huang, Y.L.; Pei, J.F.; Yang, J.Y. Sample discriminant analysis for SAR ATR. IEEE Geosci. Remote
Sens. Lett. 2014, 11, 2120–2124.
14. Gerry, M.J.; Potter, L.C.; Gupta, I.J.; van der Merwe, A. A parametric model for synthetic aperture radar measurement.
IEEE Trans. Antennas Propag. 1999, 47, 1179–1188. [CrossRef]
15. Potter, L.C.; Moses, R.L. Attributed scattering centers for SAR ATR. IEEE Trans. Image Process. 1997, 6, 79–91.
[CrossRef] [PubMed]
16. Chiang, H.; Moses, R.L.; Potter, L.C. Model-based classification of radar images. IEEE Trans. Inf. Theor. 2000,
46, 1842–1854. [CrossRef]
17. Ding, B.Y.; Wen, G.J.; Zhong, J.R.; Ma, C.H.; Yang, X.L. Robust method for the matching of attributed
scattering centers with application to synthetic aperture radar automatic target recognition. J. Appl.
Remote Sens. 2016, 10, 016010. [CrossRef]
18. Ding, B.Y.; Wen, G.J.; Huang, X.H.; Ma, C.H.; Yang, X.L. Target recognition in synthetic aperture radar
images via matching of attributed scattering centers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10,
3334–3347. [CrossRef]
19. Ding, B.Y.; Wen, G.J.; Zhong, J.R.; Ma, C.H.; Yang, X.L. A robust similarity measure for attributed scattering
center sets with application to SAR ATR. Neurocomputing 2017, 219, 130–143. [CrossRef]
20. Zhao, Q.; Principe, J.C. Support vector machines for synthetic radar automatic target recognition. IEEE Trans.
Aerosp. Electron. Syst. 2001, 37, 643–654. [CrossRef]
21. Liu, H.C.; Li, S.T. Decision fusion of sparse representation and support vector machine for SAR image target
recognition. Neurocomputing 2013, 113, 97–104. [CrossRef]
22. Sun, Y.J.; Liu, Z.P.; Todorovic, S.; Li, J. Adaptive boosting for SAR automatic target recognition. IEEE Trans.
Aerosp. Electron. Syst. 2007, 43, 112–125. [CrossRef]
23. Thiagarajan, J.J.; Ramamurthy, K.; Knee, P.P.; Spanias, A.; Berisha, V. Sparse representation for automatic
target classification in SAR images. In Proceedings of the 2010 4th Communications, Control and Signal
Processing (ISCCSP), Limassol, Cyprus, 3–5 March 2010.
24. Song, H.B.; Ji, K.F.; Zhang, Y.S.; Xing, X.W.; Zou, H.X. Sparse representation-based SAR image target
classification on the 10-class MSTAR data set. Appl. Sci. 2016, 6, 26. [CrossRef]
25. Chen, S.Z.; Wang, H.P.; Xu, F.; Jin, Y.Q. Target classification using the deep convolutional networks for SAR
images. IEEE Trans. Geosci. Remote Sens. 2016, 47, 1685–1697. [CrossRef]
26. Wagner, S.A. SAR ATR by a combination of convolutional neural network and support vector machines.
IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 2861–2872. [CrossRef]
27. Ding, J.; Chen, B.; Liu, H.W.; Huang, M.Y. Convolutional neural network with data augmentation for SAR
target recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [CrossRef]
28. Du, K.N.; Deng, Y.K.; Wang, R.; Zhao, T.; Li, N. SAR ATR based on displacement-and rotation-insensitive
CNN. Remote Sens. Lett. 2016, 7, 895–904. [CrossRef]
29. Huang, Z.L.; Pan, Z.X.; Lei, B. Transfer learning with deep convolutional neural networks for SAR target
classification with limited labeled data. Remote Sens. 2017, 9, 907. [CrossRef]
30. Zhou, J.X.; Shi, Z.G.; Cheng, X.; Fu, Q. Automatic target recognition of SAR images based on global scattering
center model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 3713–3729.
31. Ding, B.Y.; Wen, G.J.; Huang, X.H.; Ma, C.H.; Yang, X.L. Target recognition in SAR images by exploiting the
azimuth sensitivity. Remote Sens. Lett. 2017, 8, 821–830. [CrossRef]
32. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Prentice Hall: Englewood, NJ, USA, 2008.
33. Liu, H.W.; Jiu, B.; Li, F.; Wang, Y.H. Attributed scattering center extraction algorithm based on sparse
representation with dictionary refinement. IEEE Trans. Antennas Propag. 2017, 65, 2604–2614. [CrossRef]
34. Cong, Y.L.; Chen, B.; Liu, H.W.; Jiu, B. Nonparametric Bayesian attributed scattering center extraction for
synthetic aperture radar targets. IEEE Trans. Signal Process. 2016, 64, 4723–4736. [CrossRef]
35. Dong, G.G.; Kuang, G.Y. Classification on the monogenic scale space: Application to target recognition in
SAR image. IEEE Tran. Image Process. 2015, 24, 2527–2539. [CrossRef] [PubMed]
36. Chang, C.; Lin, C. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2,
389–396.
37. Doo, S.; Smith, G.; Baker, C. Target classification performance as a function of measurement uncertainty.
In Proceedings of the 5th Asia-Pacific Conference on Synthetic Aperture Radar, Singapore, 1–4 September 2015.
38. Ding, B.Y.; Wen, G.J. Target recognition of SAR images based on multi-resolution representation.
Remote Sens. Lett. 2017, 8, 1006–1014. [CrossRef]
39. Ding, B.Y.; Wen, G.J. Sparsity constraint nearest subspace classifier for target recognition of SAR images.
J. Visual Commun. Image Represent. 2018, 52, 170–176. [CrossRef]
40. Bhanu, B.; Lin, Y. Stochastic models for recognition of occluded targets. Pattern Recogn. 2003, 36, 2855–2873.
[CrossRef]
41. Ding, B.Y.; Wen, G.J. Exploiting multi-view SAR images for robust target recognition. Remote Sens. 2017,
9, 1150. [CrossRef]
42. Lopera, O.; Heremans, R.; Pizurica, A.; Dupont, Y. Filtering speckle noise in SAS images to improve detection
and identification of seafloor targets. In Proceedings of the International Waterside Security Conference,
Carrara, Italy, 3–5 November 2010.
43. Idol, T.; Haack, B.; Mahabir, R. Radar speckle reduction and derived texture measures for land cover/use
classification: A case study. Geocarto Int. 2017, 32, 18–29. [CrossRef]
44. Qiu, F.; Berglund, J.; Jensen, J.R.; Thakkar, P.; Ren, D. Speckle noise reduction in SAR imagery using a local
adaptive median filter. Gisci. Remote Sens. 2004, 41, 244–266. [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).