
drones

Article
Robust Truncated Statistics Constant False Alarm Rate Detection
of UAVs Based on Neural Networks
Wei Dong 1 and Weidong Zhang 2,*

1 School of Electronic Information Engineering, Beihang University, Beijing 100191, China; [email protected]
2 School of Cyber Science and Technology, Beihang University, Beijing 100191, China
* Correspondence: [email protected]

Abstract: With the rapid popularity of unmanned aerial vehicles (UAVs), airspace safety is facing
tougher challenges, especially for the identification of non-cooperative target UAVs. As a vital
approach for non-cooperative target identification, radar signal processing has attracted continuous
and extensive attention and research. The constant false alarm rate (CFAR) detector is widely
used in most current radar systems. However, the detection performance will sharply deteriorate
in complex and dynamic environments. In this paper, a novel truncated statistics- and neural
network-based CFAR (TSNN-CFAR) algorithm is developed. Specifically, we adopt a right-truncated
Rayleigh distribution model combined with the pattern recognition capability of a neural
network. In simulation environments with four different backgrounds, the proposed algorithm
needs no guard cells and outperforms the traditional mean level (ML) and ordered statistics
(OS) CFAR algorithms. Especially in high-density target and clutter edge environments, since
it utilizes 19 statistics obtained from the numerical calculation of two reference windows as the input
characteristics, the TSNN-CFAR algorithm has the best adaptive decision ability, accurate background
clutter modeling, a stable false alarm regulation property and superior detection performance.

Keywords: unmanned aerial vehicles (UAVs); constant false alarm rate (CFAR); quantile; truncated
statistics (TS); neural networks (NNs)

Citation: Dong, W.; Zhang, W. Robust Truncated Statistics Constant False Alarm Rate Detection of UAVs Based on Neural Networks. Drones 2024, 8, 597. https://doi.org/10.3390/drones8100597

Received: 8 September 2024; Revised: 1 October 2024; Accepted: 16 October 2024; Published: 18 October 2024

Copyright: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

With the rapid popularity and development of unmanned aerial vehicles (UAVs) comes an increasing number of airspace security threats [1]. In particular, there is a lack of effective surveillance means for non-cooperative UAVs. In order to address airspace security, more and more technical means are being proposed to counter non-cooperative targets. Non-cooperative identification (NCI) refers to the reliable identification of suspicious targets without the precondition of communications [2,3]. The practical application of radar over the past half century has proved that radar is a powerful tool in NCI systems [4].

UAV target detection is a basic function of radar before it realizes range measurement, velocity measurement and target tracking [5]. The constant false alarm rate (CFAR) algorithm is the most common and practical target detection method in radar systems [6]. The core idea of CFAR is to determine the detection threshold adaptively according to the given false alarm probability (PFA) through accurately modeling the statistical distribution of background clutter at the detection point [7].

In the process of implementation, a sliding window is usually used to accurately estimate the model parameters [8]. At present, there are many popular classical CFAR methods, such as cell averaging (CA), greatest of (GO), smallest of (SO) and ordered statistics (OS) [9,10]. Each of those algorithms performs well only in certain specific cases. The performance may degrade due to the complex and changeable practical detection background environments [11].

Drones 2024, 8, 597. https://doi.org/10.3390/drones8100597 https://www.mdpi.com/journal/drones



Hence, many researchers try to improve the CFAR algorithm and have proposed
many adaptive CFAR algorithms [12]. Adaptive variability index CFAR (VI-CFAR) is a
composite of the CA-CFAR, SO-CFAR and GO-CFAR algorithms, and can thus dynamically
select a specific background estimation algorithm to perform adaptive threshold target
detection. Although VI-CFAR works stably in common test environments, its detection
performance degrades in high-density target environments and its requirements on the
signal-to-noise ratio (SNR) are more stringent.
In fact, there are many existing studies working to improve the NCI accuracy under
different background environments. The authors of [13,14] aimed at realizing target detection
by auxiliary radar in heterogeneous clutter environments, based on the maximum
likelihood estimation equation of the covariance matrix and the asymptotic
expression of the false alarm probability obtained by solving fixed-point equations. However,
the estimation and its accuracy approximation are computationally complex and, thus, the accuracy
cannot be effectively guaranteed. The authors of [15] used clutter two-parameter logarith-
mic compression processing and cumulative amplitude average comprehensive constant
false alarm processing to cope with a variety of typical clutter environments. The authors
of [15,16] pointed out that most CFAR processing focused on specific clutter backgrounds,
but the diversity and change in clutter make CFAR processing insufficient to meet detection
requirements. Hence, the adaptive CFAR detector came into being. In [17], the variation
index (VI-CFAR) was proposed to select the appropriate detection threshold according to
the test results. Meanwhile, researchers have also paid attention to the application of modern
machine learning technologies to realize adaptive CFAR processing, such as intelligent
detection using a support vector machine in [18].
In this paper, by introducing a novel truncated statistics method combined with the
characteristics of the signal model of frequency modulation continuous wave radar, a
TS-CFAR detection method based on the right truncated Rayleigh distribution model is
firstly proposed, which does not need guard cells. Then, combined with the characteristics
of neural network pattern recognition, 19 characteristic statistics are introduced to develop
the TSNN-CFAR algorithm to improve radar target detection. The simulation results
show that the proposed TSNN-CFAR UAV detector provides a low detection loss under
a uniform background, and also achieves a stable detection performance and stable false
alarm regulation under high-density target and clutter edge environments. Application in
the field of UAV detection can effectively protect airspace safety.
The rest of this paper is organized as follows. Section 2 formulates the principle and
performance characteristics of relevant traditional CFAR and adaptive CFAR algorithms. In
Section 3, the principle and specific process of the proposed TSNN-CFAR algorithm are
introduced, and the relevant performance is analyzed theoretically. A simulation experiment
is presented in Section 4 with a discussion of the results, including visualization
analysis and comparison with other adaptive algorithms. Finally, the conclusions are
shown in Section 5.

2. Preliminaries
In this section, we firstly introduce the models of classical CFAR UAV detectors, and then
the principle and performance of the common adaptive CFAR methods are further expounded.

2.1. Typical CFAR Detector Model


In UAV detection, the principle of a constant false alarm target detection algorithm is
to obtain the power level estimation of noise and clutter by processing the sampling values
of reference cells around the detection cells, and then calculating the detection threshold of
the detection cells according to the estimated values [19].
ML-CFAR is the first CFAR algorithm proposed, and is the most important CFAR
algorithm in practical applications. The mean processing result of the cells in the reference
windows is used as the basis of noise power level estimation, and the detection threshold
is obtained by multiplying the threshold factor [20,21]. After comparison, the decision of

whether there is a target or not is finally obtained. Figure 1 shows the schematic diagram
of ML-CFAR. Assume that there are N reference cells (discretized units of space). µ̃L and
µ̃R denote the sample means on the left and right sides, respectively. Guard cells next to
the detection cell prevent the target signal from spreading into adjacent cells and affecting the
noise power estimation. α denotes the threshold factor, and Z denotes the noise power estimate
obtained by mean estimation. The detection threshold is given by T = αZ, and D is the sampling
value of the detection cell. According to the Neyman–Pearson criterion, D is judged following

$$H_0: D < T, \qquad H_1: D \ge T \tag{1}$$

Hypothesis H1 represents that there exists a target in the cell, and hypothesis H0
represents that there is no target.

Figure 1. Schematic diagram of ML-CFAR. Reference cells on either side of the detection cell (separated by guard cells) yield the sample means µ̃L and µ̃R; the noise power estimate is Z = mean(µ̃L, µ̃R) for CA, Z = max(µ̃L, µ̃R) for GO and Z = min(µ̃L, µ̃R) for SO, and the comparer tests D against T = αZ to decide H1 or H0.
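As an illustration of the decision rule in Equation (1), the following sketch slides the reference windows over a range profile and applies the CA, GO or SO noise power estimate. It is an illustrative simplification, not the authors' implementation; the window sizes and threshold factor are assumed values.

```python
def ml_cfar(x, n_ref=30, n_guard=2, alpha=4.0, mode="CA"):
    """Sliding-window ML-CFAR sketch over a range profile x of power samples."""
    half = n_ref // 2
    hits = []
    for i in range(half + n_guard, len(x) - half - n_guard):
        left = x[i - half - n_guard : i - n_guard]            # left reference window
        right = x[i + 1 + n_guard : i + 1 + n_guard + half]   # right reference window
        mu_l, mu_r = sum(left) / len(left), sum(right) / len(right)
        if mode == "GO":
            z = max(mu_l, mu_r)        # greatest of
        elif mode == "SO":
            z = min(mu_l, mu_r)        # smallest of
        else:
            z = 0.5 * (mu_l + mu_r)    # cell averaging
        if x[i] >= alpha * z:          # T = alpha * Z, decide H1
            hits.append(i)
    return hits
```

A single strong cell in otherwise uniform noise is flagged by all three variants, since the reference windows around it remain clean.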

CA-CFAR averages the sampled values of all reference cells as the basis for noise
power level estimation [22,23]. In homogeneous environments, the false alarm probability
for CA-CFAR is calculated by

$$\bar{P}_{fa} = \left(1 + \frac{\alpha_{CA}}{N}\right)^{-N} \tag{2}$$
GO-CFAR is proposed mainly to solve the problem that CA-CFAR is more likely to
cause false alarms at the clutter edge; thus, the larger average value of the two reference
windows is selected as the basis for power estimation [24,25]. In homogeneous environments,
the false alarm probability for GO-CFAR is calculated by

$$\frac{\bar{P}_{fa}}{2} = \left(1 + \frac{\alpha_{GO}}{N/2}\right)^{-N/2} - \left(2 + \frac{\alpha_{GO}}{N/2}\right)^{-N/2} \sum_{k=0}^{N/2-1} \binom{N/2-1+k}{k} \left(2 + \frac{\alpha_{GO}}{N/2}\right)^{-k} \tag{3}$$

According to Equation (3), given fixed P̄f a and N, the threshold factor αGO can be
computed and a further detection threshold can be obtained.
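Since Equation (3) has no closed-form inverse, the threshold factor must be recovered numerically. The sketch below is illustrative: it assumes Equation (3) in its standard form and exploits the fact that the false alarm probability decreases monotonically in the threshold factor.

```python
import math

def go_pfa(alpha, n_total=30):
    # evaluate Eq. (3) times two: GO-CFAR false alarm probability, n = N/2 cells per window
    n = n_total // 2
    a = alpha / n
    s = sum(math.comb(n - 1 + k, k) * (2 + a) ** (-k) for k in range(n))
    return 2.0 * ((1 + a) ** (-n) - (2 + a) ** (-n) * s)

def go_alpha(pfa=1e-4, n_total=30, lo=1e-6, hi=1e4):
    # bisection: go_pfa decreases monotonically in alpha
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if go_pfa(mid, n_total) > pfa else (lo, mid)
    return 0.5 * (lo + hi)
```

At α = 0 the threshold is zero and every cell triggers, so the formula should (and does) return a false alarm probability of one; that serves as a quick sanity check.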
SO-CFAR mainly aims at the weak-target shadowing problem of CA-CFAR. Contrary
to GO-CFAR, SO-CFAR selects the smaller average value of the two reference windows as the basis
for power estimation [25,26]. The false alarm probability for SO-CFAR is calculated by

$$\frac{\bar{P}_{fa}}{2} = \left(2 + \frac{\alpha_{SO}}{N/2}\right)^{-N/2} \sum_{k=0}^{N/2-1} \binom{N/2-1+k}{k} \left(2 + \frac{\alpha_{SO}}{N/2}\right)^{-k} \tag{4}$$

Similarly, the corresponding detection threshold can be obtained easily.


OS-CFAR uses the sorting results of the sample values in the reference windows to estimate
the power level. It first sorts all sample values in the reference windows into an
ascending sequence, and then the k-th ordered statistic is taken as the power
estimate to calculate the detection threshold [27,28]. In homogeneous environments, the
false alarm probability for OS-CFAR is calculated by

$$\bar{P}_{fa} = k \binom{N}{k} \frac{\Gamma(N - k + 1 + \alpha_{OS})\,\Gamma(k)}{\Gamma(N + \alpha_{OS} + 1)} \tag{5}$$

Since the false alarm probability is not affected by noise power, OS-CFAR can realize
the requirement of constant false alarm.
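Equation (5) is most stably evaluated in log space with the log-gamma function. The sketch below is illustrative; the reference window size N = 30 and order k = 24 are assumed example values, not prescribed by the paper.

```python
import math

def os_pfa(alpha, n=30, k=24):
    # Eq. (5): Pfa = k * C(N,k) * Gamma(N-k+1+alpha) * Gamma(k) / Gamma(N+alpha+1)
    log_c = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return k * math.exp(log_c + math.lgamma(n - k + 1 + alpha)
                        + math.lgamma(k) - math.lgamma(n + alpha + 1))
```

As with the other schemes, α = 0 corresponds to a zero threshold and a false alarm probability of one, and the probability decreases as the threshold factor grows.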
According to the existing theoretical analysis and experimental applications, the com-
putational complexity of the above methods is not high and the hardware implementations
are simple. However, in the process of target detection, the protrusion phenomenon is
common for CA-CFAR and GO-CFAR, and shadowing easily occurs in high-density target
detection. Meanwhile, with the decrease in the number of guard cells, shadowing is more
serious. In addition, CA-CFAR and GO-CFAR are prone to missing the detection of weak
clutter and false alarm at the edge of strong clutter. SO-CFAR increases the risk of false
alarm in a strong clutter area [29,30].
In summary, the classical CFAR methods described above are derived for, and applicable in,
a uniform Gaussian environment. However, when there are multiple targets in the environment
around the detection cell, or when it is located at the edge of clutter, using
uniform Gaussian statistics alone to set the detection threshold will result in false alarms and missed
detections. Therefore, adaptive CFAR algorithms are developed to obtain an ideal detection
performance in more complex environments.

2.2. VI-CFAR Detector Model


The remarkable feature of the adaptive CFAR algorithms is the adaptive selection
of CFAR processing methods and parameters according to partial characteristics of the
sampling values of reference cells, so as to ensure a better detection performance in a
specific non-homogeneous environment.
VI-CFAR is a typical adaptive CFAR algorithm. Its core idea is to dynamically select
the appropriate CFAR method through the second-order statistic VI of the reference cells
and the mean ratio (MR) of the left and right windows, so as
to ensure robustness in various environments [31]. The statistic VI is used to determine
whether the sampled values in the reference windows come from a homogeneous environment;
it is the ratio of the second-order central moment to the second-order origin moment
plus a constant, similar to a shape parameter estimate. The statistic MR is used to test
whether the mean values of the left and right reference windows are the same.
After obtaining the statistic VI for each side window, it is compared with the decision
threshold K_VI. Homogeneous and non-homogeneous environments are judged by

$$VI < K_{VI} \Rightarrow \text{homogeneous environment}, \qquad VI \ge K_{VI} \Rightarrow \text{non-homogeneous environment} \tag{6}$$

The consistency of the left and right reference windows is obtained by comparing the
statistic MR with the decision threshold K_MR, i.e.,

$$K_{MR}^{-1} < MR < K_{MR} \Rightarrow \text{same means}, \qquad MR \le K_{MR}^{-1} \ \text{or} \ MR \ge K_{MR} \Rightarrow \text{different means} \tag{7}$$

After the above two decisions, VI-CFAR selects the corresponding CFAR method
to calculate the detection threshold according to the decision results. Table 1 shows the
threshold selection scheme of VI-CFAR.

Table 1. Threshold selection schemes of VI-CFAR.

| Category | Homogeneous Decision in Left Window | Homogeneous Decision in Right Window | Mean Consistency | Adaptive Threshold for VI-CFAR | Corresponding CFAR Scheme |
|---|---|---|---|---|---|
| 1 | Yes | Yes | Yes | α_N · mean(µ̃L, µ̃R) | CA |
| 2 | Yes | Yes | No | α_{N/2} · max(µ̃L, µ̃R) | GO |
| 3 | Yes | No | - | α_{N/2} · µ̃L | CA |
| 4 | No | Yes | - | α_{N/2} · µ̃R | CA |
| 5 | No | No | - | α_{N/2} · min(µ̃L, µ̃R) | SO |
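The two decisions of Equations (6) and (7) and the selection logic of Table 1 can be sketched as follows. The exact form of the VI statistic used here (one plus the ratio of sample variance to squared mean) and the default thresholds K_VI = 4.76 and K_MR = 1.806 (the values used in Section 4) are assumptions of this illustration, not a verbatim transcription of [31].

```python
def vi_statistic(window):
    # one common form: VI = 1 + (sample variance) / (sample mean squared)
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / (n - 1)
    return 1.0 + var / (mean * mean)

def vi_cfar_select(left, right, k_vi=4.76, k_mr=1.806):
    homo_l = vi_statistic(left) < k_vi
    homo_r = vi_statistic(right) < k_vi
    mr = (sum(left) / len(left)) / (sum(right) / len(right))
    same_means = 1.0 / k_mr < mr < k_mr
    if homo_l and homo_r:
        return "CA" if same_means else "GO"   # categories 1 and 2
    if homo_l:
        return "CA-left"                      # category 3: use the left window only
    if homo_r:
        return "CA-right"                     # category 4: use the right window only
    return "SO"                               # category 5
```

Two quiet windows with equal means select CA, while a spike in one window pushes that side into the non-homogeneous branch.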

2.3. NN-CFAR Detector Model


Neural networks have provided substantial benefits for pattern recognition,
data analysis and other applications [32]. In pattern recognition in particular, the
input feature quantities are nonlinearly transformed into output category components.
The core idea of NN-CFAR is to treat the neural network
as a classifier that distinguishes background environment types and selects an appropriate CFAR
algorithm according to the background environment type, thus ensuring a better target
detection ability.
The input of NN-CFAR based on statistical characteristics consists of 8 statistical
values and 30 reference cell sampling values. The eight statistical values are the standard deviation,
mean absolute error (MAE), skewness (SKEW), kurtosis (KURT), range, information
entropy, lower quartile and median.
Order the sample sequence in the reference window from small to large, and define X̃
as the ordered sequence. The information entropy is then expressed as

$$\text{Entropy} = E\left[-\log_2 p(\tilde{X})\right] = -\sum_i p(\tilde{x}_i)\log_2 p(\tilde{x}_i), \qquad p(\tilde{x}_i) = i/N \tag{8}$$

where x̃i is the i-th sampled datum after ordering and p(x̃i) is its cumulative probability.
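The eight statistics, with the entropy computed per Equation (8), might be gathered as follows. The population-moment and quartile index conventions are assumptions of this sketch.

```python
import math

def window_statistics(w):
    """Eight statistics of one reference window (illustrative conventions)."""
    n = len(w)
    xs = sorted(w)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)   # population std
    mae = sum(abs(x - mean) for x in xs) / n
    skew = sum((x - mean) ** 3 for x in xs) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * std ** 4)
    spread = xs[-1] - xs[0]
    # Eq. (8): cumulative probabilities p_i = i/N over the ordered samples
    entropy = -sum((i / n) * math.log2(i / n) for i in range(1, n + 1))
    lower_quartile = xs[n // 4]
    median = xs[n // 2]
    return [std, mae, skew, kurt, spread, entropy, lower_quartile, median]
```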
Through training on a large amount of data, the NN-CFAR classifier is formed. In
the application process, the 8 characteristic statistics are computed from the reference
cells to be detected and input into the classifier together with the 30 sampling values. Finally,
the CFAR algorithm is selected according to Table 2.

Table 2. The selection schemes of NN-CFAR.

| Category | Background Classification | CFAR Scheme |
|---|---|---|
| 1 | Homogeneous environment | CA |
| 2 | High-density target environment | OS |
| 3 | Low-power area of clutter edge | SO |
| 4 | High-power area of clutter edge | GO |

The above two adaptive algorithms, i.e., VI-CFAR and NN-CFAR, can deal with more
complex background environments than the traditional CFAR algorithms. However, the
detection performance of VI-CFAR is poor when there are interference targets in both the
left and right windows, i.e., SO-CFAR is insufficient in high-density target environments.
Although NN-CFAR has good robustness, it fluctuates obviously in the clutter edge region
and its false alarm probability increases, because the guard cells can lead to
lagging and advanced discrimination. The performance of NN-CFAR in the marginal
region improves as the number of guard cells decreases.

3. Proposed TSNN-CFAR
In the previous sections, the working principles and performance of common CFAR
algorithms have been systematically introduced. Many researchers have proposed corresponding
solutions for the robustness of CFAR detection performance in complex environments;
among them is the TS-CFAR detector. Building on an adaptive optimization of
TS-CFAR, we propose the neural network-based TSNN-CFAR.

3.1. TS-CFAR Detector Model


The main purpose of the TS-CFAR algorithm is to estimate the parameters of the
probability distribution model by using the reference cells near the detection cells, so as to
obtain the adaptive threshold value [33].
Assuming that the sample conforms to a Rayleigh distribution and the measured value
is denoted by X, its probability density function (PDF) is

$$p_X(x) = \begin{cases} \dfrac{x}{\mu^2}\, e^{-x^2/(2\mu^2)}, & x > 0 \\ 0, & x \le 0 \end{cases} \tag{9}$$

where µ denotes the mean.


TS-CFAR first sets a truncation depth h and removes the cells in the reference windows
that are larger than the truncation depth. In this case, the remaining I noise cells obey the
right-truncated probability distribution model, which is given by

$$\tilde{p}(x) = \begin{cases} \dfrac{p(x)}{P(h)}, & 0 < x \le h \\ 0, & x > h \end{cases} \tag{10}$$

where P(h) is the cumulative probability at the truncation depth h. Hence, the mean is the only
parameter that needs to be estimated. The maximum likelihood estimator of the mean can
be obtained from the likelihood function, i.e.,

$$L = \prod_{i=1}^{I} \tilde{p}(\tilde{x}_i) = \frac{\prod_{i=1}^{I} \tilde{x}_i \, \exp\!\left(-\sum_{i=1}^{I} \tilde{x}_i^2 \,/\, (2\mu^2)\right)}{\mu^{2I}\left(1 - \exp\!\left(-h^2/(2\mu^2)\right)\right)^{I}} \tag{11}$$

where (x̃i), i = 1, ..., I, represents the amplitude values of the remaining I cells after removing the
outlier cells in the reference windows. The maximum likelihood estimation equation can
be obtained from the logarithm of the likelihood function, i.e.,

$$\frac{\partial \ln L}{\partial \hat{\mu}} = -\frac{2I}{\hat{\mu}} + \frac{1}{\hat{\mu}^3}\sum_{i=1}^{I} \tilde{x}_i^2 + \frac{I\xi}{\hat{\mu}}\,\frac{g(\xi)}{G(\xi)} = 0 \tag{12}$$

with ξ = h/µ̂, g(ξ) = ξ exp(−ξ²/2) and G(ξ) = 1 − exp(−ξ²/2). Assume

$$J(\xi) = \frac{1}{\xi}\left(\frac{2}{\xi} - \frac{g(\xi)}{G(\xi)}\right) \tag{13}$$

Combining Equation (12) with Equation (13), we have

$$J(\xi) = \frac{1}{I h^2}\sum_{i=1}^{I} \tilde{x}_i^2 \tag{14}$$

Therefore, to obtain the maximum likelihood estimate µ̂, it is necessary to first use
the amplitude values of the remaining cells to compute J(ξ) via Equation (14), and then
solve Equation (13) for ξ. Finally, the estimate is obtained according to ξ = h/µ̂.
Equation (13) shows that ξ can only be solved through time-consuming numerical
methods, while practical applications require high real-time radar performance. It
is therefore necessary to formulate the look-up table between J(ξ) and ξ in advance. A partial
look-up table is shown in Table 3.
Table 3. Partial look-up table.

| ξ | 0.000 | 0.001 | 0.002 | 0.003 | 0.004 | 0.005 | 0.006 | 0.007 | 0.008 | 0.009 |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.050 | 0.000004 | 0.000005 | 0.000006 | 0.000007 | 0.000009 | 0.000011 | 0.000013 | 0.000015 | 0.000017 | 0.000020 |
| 0.060 | 0.000024 | 0.000027 | 0.000031 | 0.000036 | 0.000041 | 0.000047 | 0.000053 | 0.000060 | 0.000067 | 0.000075 |
| 0.070 | 0.000084 | 0.000094 | 0.000104 | 0.000116 | 0.000128 | 0.000141 | 0.000155 | 0.000171 | 0.000187 | 0.000204 |
| 0.080 | 0.000223 | 0.000242 | 0.000263 | 0.000285 | 0.000309 | 0.000334 | 0.000360 | 0.000388 | 0.000417 | 0.000448 |
| 0.090 | 0.000481 | 0.000515 | 0.000550 | 0.000588 | 0.000627 | 0.000668 | 0.000711 | 0.000756 | 0.000802 | 0.000851 |
| 0.100 | 0.000902 | 0.000954 | 0.001009 | 0.001066 | 0.001125 | 0.001187 | 0.001250 | 0.001316 | 0.001384 | 0.001455 |
| 0.110 | 0.001528 | 0.001604 | 0.001682 | 0.001762 | 0.001845 | 0.001931 | 0.002019 | 0.002110 | 0.002204 | 0.002300 |
| 0.120 | 0.002400 | 0.002502 | 0.002607 | 0.002715 | 0.002826 | 0.002939 | 0.003056 | 0.003176 | 0.003299 | 0.003425 |

A more complete table is given in [34].

Note that µ̂ is the sample mean estimate plus a truncation correction. The given false
alarm probability is related to the cumulative distribution function (CDF), which can be
expressed as
$$P_{fa} = 1 - P(H) = e^{-H^2/(2\hat{\mu}^2)} \tag{15}$$

Hence, the detection threshold H is obtained according to the given false alarm probability
Pfa and the estimate µ̂.
The TS-CFAR algorithm (Algorithm 1), based on truncated statistics, has several advantages: it
is designed to accommodate multiple jamming targets in the reference windows, and it
can also control the false alarm probability very well.

Algorithm 1: TS-CFAR [33]

Input: a vector x of length N, the truncation depth h and the detection cell value D
Output: R
1: (x̃i), i = 1, ..., I = the samples of (x_i), i = 1, ..., N, with x_i ≤ h;
2: J(ξ) = (1/(I h²)) ∑_{i=1}^{I} x̃i²;
3: find the corresponding value of ξ through the look-up table method;
4: µ̂ = h/ξ;
5: H = (−2 ln Pfa)^{1/2} µ̂;
6: if H > D then R = 0 else R = D.

In the TS-CFAR algorithm, reference windows with no guard cells can be used for
TS-CFAR processing because a small amount of target amplitude appearing in the reference
region has little effect on noise power estimation [33].
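Algorithm 1 can be sketched end to end as follows. This is an illustrative reimplementation, not the authors' code; it solves Equation (13) by bisection instead of the look-up table, using the fact that J(ξ) decreases monotonically from 1/2 toward 0 as ξ grows.

```python
import math

def ts_cfar_detect(x, h, d, pfa=1e-4):
    """Sketch of Algorithm 1 (TS-CFAR): x reference samples, h truncation depth,
    d detection cell value; returns R = d on a detection, else 0."""
    kept = [v for v in x if v <= h]                 # right-truncate the window
    i = len(kept)
    j_target = sum(v * v for v in kept) / (i * h * h)   # Eq. (14)
    # invert J(xi) = (1/xi)(2/xi - g/G) by bisection (Eq. (13))
    lo, hi = 1e-3, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        e = math.exp(-mid * mid / 2.0)
        j_mid = (2.0 / mid - mid * e / (1.0 - e)) / mid
        lo, hi = (mid, hi) if j_mid > j_target else (lo, mid)
    xi = 0.5 * (lo + hi)
    mu_hat = h / xi                                 # mean estimate
    threshold = math.sqrt(-2.0 * math.log(pfa)) * mu_hat   # invert Eq. (15)
    return d if d >= threshold else 0.0
```

With unit-scale Rayleigh reference samples and Pfa = 10⁻⁴, the resulting threshold sits near 4.3, so a strong detection cell is declared a target while a noise-level one is not.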

3.2. TSNN-CFAR UAV Detector Design


The detection performance of TS-CFAR is related to the truncation depth. If the
truncation depth is too small, there will be a large deviation in parameter estimation; if
the truncation depth is too large, too many contaminated samples will be involved in parameter
estimation [35]. If the same truncation depth is used on both sides of a clutter edge, clutter
false alarms easily appear. Therefore, we focus on adaptive
judgment at the edge of clutter, and propose a truncated statistics CFAR algorithm based
on a neural network.
Figure 2 shows the four detection environments that CFAR needs to deal with. Each
square in the figure represents a distance cell, and D marks the current detection cell.
(a) shows a target located in the detection cell with uniform noise in the surrounding
distance cells. (b) represents a high-density target environment; it differs from (a) in that,
while the target is located in the detection cell, neighboring distance cells also contain valid
targets. (c,d) are low-power regions located at the edge of the clutter and, correspondingly,
(e,f) are high-power regions located at the edge of the clutter.

Figure 2. A case with no guard cells: (a) Homogeneous environment. (b) High-density target environment.
(c,d) Low-power area of clutter edge. (e,f) High-power area of clutter edge.

In order to cope with all the environments shown in Figure 2, the adaptive CFAR is
improved. In homogeneous environments, CA-CFAR is the best choice. NN-CFAR shows
lagging and advanced discrimination because of the guard cells, while TS-CFAR removes
the need for guard cells. Therefore, combining TS-CFAR with
NN-CFAR not only removes the guard cells but also optimizes the characteristic statistics,
so the optimized TSNN-CFAR identification accuracy is higher.
The schematic diagram of the proposed TSNN-CFAR is shown in Figure 3. Eight
statistics of left and right cells, as well as the second-order statistics V I and MR, are
obtained based on all the sampling values in the left and right windows. A total of
19 statistical characteristic values are used as the basis of background classification. The
output of the neural network is regarded as the basis for selection. Next, according to the
identification results combined with the classification shown in Table 4, the CFAR scheme is
selected. Meanwhile, the detection estimate and threshold factor are obtained by using the
selected CFAR algorithm. Finally, the detection threshold is obtained, and the comparator
determines whether there is a target in the detection cell.

Figure 3. Schematic diagram of TSNN-CFAR. Eight statistical characteristic quantities are calculated from each reference window; together with VI_L, VI_R and MR they feed the neural network classifier, which selects the CFAR algorithm. The comparer then tests the detection cell value D against the threshold T = α_TSNN · Z and outputs H1 or H0.

Table 4. Classification of TSNN-CFAR.

| Category | Background Classification | CFAR Scheme |
|---|---|---|
| 1 | Homogeneous environment | CA |
| 2 | High-density target environment | TS |
| 3 | Low-power area of clutter edge | SO |
| 4 | High-power area of clutter edge | GO |

As shown in Figure 4, the neural network used in this paper is a multilayer perceptron
consisting of input, hidden and output layers. Each neuron in the hidden and output layers
has an activation function; in this paper, the Sigmoid function is used to capture
the complex nonlinear relationship between input and output. The network has 19 inputs
(VI, MR, etc.) and two hidden layers, and the output layer contains four neurons corresponding
to the background environments. In the training process, the neural network uses 800,000 sets
of data generated by simulation under the four background conditions, 70% of which is used for
training, 15% for validation and the rest for testing. Radar echoes obey Rayleigh distributions.
The training algorithm is the scaled conjugate gradient method, and the cross-entropy error
is used for performance discrimination. The maximum number of iterations of the neural network
is set as 500, and the target mean square error (MSE) is configured as 10−6.
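The described topology (19 inputs, two Sigmoid hidden layers, four outputs) can be illustrated with a plain forward pass. The layer widths and random weights below are placeholders for the trained network, not its actual parameters.

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def mlp_forward(features, layers):
    # layers: list of (weight matrix, bias vector); Sigmoid on every layer
    a = features
    for w, b in layers:
        a = [sigmoid(sum(wi * xi for wi, xi in zip(row, a)) + bi)
             for row, bi in zip(w, b)]
    return a

def random_layer(n_in, n_out, rng):
    # illustrative random initialization (a trained net would supply these)
    return ([[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

rng = random.Random(0)
net = [random_layer(19, 16, rng), random_layer(16, 16, rng), random_layer(16, 4, rng)]
scores = mlp_forward([0.1] * 19, net)   # 19 statistics in, 4 class scores out
```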

Figure 4. NN structure of TSNN-CFAR. The input layer takes the 19 statistics (SKEW_L, MAE_L, ..., VI_L, MR, VI_R, ..., MAE_R, SKEW_R), followed by two hidden layers and the output layer.

By optimizing the input of the NN classifier, the TSNN classifier shows a clear improvement
over the NN classifier. The number of inputs is reduced
from 38 to 19, and the error is reduced to 2.69 × 10−4. The training rounds are also

greatly reduced and the judgment accuracy of the class is improved. Specific data are
shown in Table 5.
Table 5. Comparison of classifier performance.

| Index | NN | TSNN |
|---|---|---|
| Input layer size | 38 | 19 |
| Error | 1.99 × 10−2 | 2.69 × 10−4 |
| Convergence time | 481 | 145 |
| Optimal verification performance | 1.9 × 10−2 | 1.15 × 10−4 |
| Accuracy of category 1 | 99.5% | 100% |
| Accuracy of category 2 | 97.9% | 100% |
| Accuracy of category 3 | 96.0% | 99.9% |
| Accuracy of category 4 | 97.2% | 100% |

According to the table, the adaptive decision performance is clearly improved. However,
the analysis and evaluation of detection performance depend
on the detection probability and false alarm probability in each case. After generating
the dataset by simulation, the stratified K-fold cross-validation method is used to obtain
the test datasets. The respective test datasets are used as inputs to NN-CFAR and
TSNN-CFAR, and the confusion matrices shown in Figure 5 are obtained from the final
classification results. The four true category counts in the test set
of NN-CFAR are 26,638/26,453/13,654/13,255, with prediction correctness rates of
99.49%/97.90%/96.00%/97.20%. The four true category counts of the test set
used for TSNN-CFAR are 26,544/26,766/13,399/13,291, with 99.98%/99.98%/99.90%/99.98%
correct predictions. It is observed that TSNN-CFAR reduces the number of input parameters
while the classification correctness is improved.

Confusion matrix of NN-CFAR:

| True \ Predicted | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 26,503 | 84 | 8 | 43 |
| 2 | 140 | 25,898 | 212 | 203 |
| 3 | 24 | 446 | 13,108 | 76 |
| 4 | 49 | 297 | 25 | 12,884 |

Confusion matrix of TSNN-CFAR:

| True \ Predicted | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| 1 | 26,539 | 0 | 5 | 0 |
| 2 | 0 | 26,761 | 5 | 0 |
| 3 | 4 | 0 | 13,386 | 9 |
| 4 | 0 | 0 | 3 | 13,288 |

Figure 5. Confusion matrices of (a) NN-CFAR and (b) TSNN-CFAR on the test datasets.
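The per-class correctness rates quoted above follow directly from the confusion matrix counts; a quick check using the NN-CFAR matrix of Figure 5a:

```python
def per_class_accuracy(cm):
    # cm[i][j]: count of true class i predicted as class j
    return [row[i] / sum(row) for i, row in enumerate(cm)]

# counts taken from Figure 5a (NN-CFAR)
nn_cm = [[26503, 84, 8, 43],
         [140, 25898, 212, 203],
         [24, 446, 13108, 76],
         [49, 297, 25, 12884]]
acc = per_class_accuracy(nn_cm)   # ~ [0.9949, 0.9790, 0.9600, 0.9720]
```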

In the next section, we will further compare the detection performance and false alarm
of the proposed TSNN-CFAR algorithm with that of other existing algorithms in different
background environments.

4. Performance Evaluation
To test and verify the superiority of the proposed TSNN-CFAR, simulations of the
radar detection system are carried out in the MATLAB R2019b environment based on
the square-law detector, and the results are evaluated through Monte Carlo simulations. The default
parameters are as follows: the number of distance units is set as 200; the number of
reference cells is set as N = 30, with no guard cells; the false alarm probability is set as
Pfa = 10−4; and the decision thresholds are set as K_MR = 1.806 and K_VI = 4.76.
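The false alarm regulation of CA-CFAR under such conditions can be spot-checked with a small Monte Carlo run. The sketch below is illustrative and uses Pfa = 10⁻² rather than the paper's 10⁻⁴ so that a modest number of trials suffices; square-law detected Rayleigh noise is exponentially distributed.

```python
import random

def estimate_pfa(trials=50_000, n=30, pfa=1e-2, seed=1):
    # invert Eq. (2) for the CA threshold factor: alpha = N (Pfa^(-1/N) - 1)
    alpha = n * (pfa ** (-1.0 / n) - 1.0)
    rng = random.Random(seed)
    alarms = 0
    for _ in range(trials):
        ref = [rng.expovariate(1.0) for _ in range(n)]   # reference window
        cell = rng.expovariate(1.0)                      # noise-only test cell
        if cell >= alpha * sum(ref) / n:                 # T = alpha * Z
            alarms += 1
    return alarms / trials
```

The empirical rate should land close to the design value of 0.01, confirming the constant false alarm property in homogeneous noise.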

4.1. Comparison of Single-Target Detection Performance in Homogeneous Environments


In the simulated homogeneous environment, consider a target with the Swerling I
fluctuation model and SNR = 20 dB located in distance unit 95.
The detection and category judgment results are shown in Figures 6 and 7. It can be
seen that, with the guard cells removed, the protrusion of the traditional CA-CFAR and
GO-CFAR algorithms is more serious, and the target shadowing effect is more obvious.
Figure 7 shows the result of the adaptive selection in the homogeneous background.
Although NN-CFAR uses only eight characteristic inputs, its judgment ability is similar
to that of TSNN-CFAR. The performance of VI-CFAR is the most unstable, with
inappropriate judgments occurring more frequently.

Figure 6. Single-target detection result in the homogeneous environment (power in dB versus distance unit; curves: OPT, CA, GO, SO, OS, VI, NN, TS, TSNN).

Figure 7. Category judgment result in the homogeneous environment (class versus distance unit; curves: OPT, VI, NN, TSNN).

In Figures 8 and 9, the proposed method and the benchmarks are tested in a homogeneous
background with SNR levels ranging from 0 to 30 dB. It is clear that CA-CFAR is
the best solution in homogeneous environments. Owing to their correct category
judgment, the detection probabilities of NN-CFAR and TSNN-CFAR are equal to that of
CA-CFAR. Because of occasional decision deviations, the detection loss of VI-CFAR is larger
than that of TSNN-CFAR, but its detection performance is still better than that of OS-CFAR.
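For reference, the CA-CFAR curve in Figure 8 can be checked against the closed-form expression for a Swerling I target in exponential noise, Pd = (1 + α/(1 + SNR))−N with α = Pfa−1/N − 1 (a standard result for the square-law detector, sketched here in Python; not taken from the paper itself):

```python
def ca_cfar_pd_swerling1(num_ref, pfa, snr_db):
    """Closed-form detection probability of CA-CFAR for a
    Swerling I target in exponential noise:
    Pd = (1 + alpha/(1 + SNR))^(-N), alpha = Pfa^(-1/N) - 1."""
    alpha = pfa ** (-1.0 / num_ref) - 1.0
    snr = 10.0 ** (snr_db / 10.0)
    return (1.0 + alpha / (1.0 + snr)) ** (-num_ref)
```

With N = 30, Pfa = 10−4 and SNR = 20 dB this gives a detection probability of about 0.9, consistent with the upper portion of the simulated curves.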

[Plot: detection probability PD vs. SNR (dB); curves: OPT, CA, GO, SO, OS, VI, NN, TS, TSNN.]
Figure 8. Detection probability result in the homogeneous environment.

[Plot: PD vs. SNR (dB), zoomed to 15.5–17 dB.]
Figure 9. Corresponding local enlarged result for Figure 8.

4.2. Comparison of Detection Performance in High-Density Target Environments


In the simulated scenario, four real targets with different SNRs are located in
distance units 90, 95, 100 and 110, respectively.

The multi-target detection and category judgment results are shown in Figures 10 and 11.
Figure 10 shows that missed detections occur for CA-CFAR, GO-CFAR and SO-CFAR in
high-density target environments. TSNN-CFAR and TS-CFAR show good robustness, with
performance close to the optimum. In Figure 11, since NN-CFAR only adopts eight
characteristic inputs, its decision fluctuates wildly at the locations of dense targets.
Meanwhile, the category judgment of TSNN-CFAR is the best.
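The masking mechanism behind Figure 10 can be illustrated numerically: a single strong return inside the reference window inflates a mean-based noise estimate but barely moves an ordered-statistic one (an illustrative sketch; the k ≈ 3N/4 rank is a common textbook choice, not the paper's setting):

```python
import numpy as np

def ca_noise_estimate(ref):
    """Cell-averaging estimate: mean of all reference cells."""
    return ref.mean()

def os_noise_estimate(ref, k=None):
    """Ordered-statistic estimate: k-th smallest reference cell
    (k ~ 3N/4 is a common textbook choice)."""
    ref = np.sort(ref)
    if k is None:
        k = int(0.75 * len(ref)) - 1
    return ref[k]

clean = np.full(30, 1.0)   # homogeneous unit-power window
dirty = clean.copy()
dirty[0] = 100.0           # interfering target in one reference cell

# CA estimate is dragged up by the interferer (to 129/30 = 4.3),
# while the OS estimate stays at 1.0 and avoids target masking.
```

This is why the mean-level detectors in Figure 10 raise their thresholds near dense targets, whereas the ordered-statistic and truncated-statistic detectors do not.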

[Plot: power (dB) vs. distance unit; curves: OPT, CA, GO, SO, OS, VI, NN, TS, TSNN.]
Figure 10. Multi-target detection result in the high-density target environment.

[Plot: selected class vs. distance unit; curves: OPT, VI, NN, TSNN.]
Figure 11. Category judgment result in the high-density target environment.

With SNR levels ranging from 0 to 30 dB, an interference target with the same SNR
as the real target is added in a unilateral reference window, and the detection probabilities
of the proposed method and the benchmarks are tested in Figures 12 and 13. It can be seen
that the detection probability of TSNN-CFAR is basically consistent with that of TS-CFAR,
and its detection probability loss is minimal. Since the interference target enters the noise
power estimation of CA-CFAR and GO-CFAR, their detection thresholds are raised, leading to
a decrease in detection probability, while SO-CFAR and OS-CFAR maintain good detection
performance by eliminating the influence of the interference signal. Since VI-CFAR may
select CA-CFAR, which has a smaller detection loss than SO-CFAR, the performance of
VI-CFAR is clearly higher than that of SO-CFAR.

[Plot: PD vs. SNR (dB); curves: OPT, CA, GO, SO, OS, VI, NN, TS, TSNN.]
Figure 12. Detection probability result with an interference target in the high-density target environment.

[Plot: PD vs. SNR (dB), zoomed to 12–22 dB.]
Figure 13. Corresponding local enlarged result for Figure 12.

Further, Figures 14 and 15 show the detection probability results in the scenario
where two interference targets are added on the two sides of the reference window, respectively.
TSNN-CFAR and TS-CFAR have the best detection performance, while the other traditional
CFAR methods show obvious performance degradation due to the presence of the interference
signals. NN-CFAR is basically consistent with OS-CFAR because its category
judgment selects the OS-CFAR algorithm.

[Plot: PD vs. SNR (dB); curves: OPT, CA, GO, SO, OS, VI, NN, TS, TSNN.]
Figure 14. Detection probability result with 2 interference targets located in both sides of reference
windows in the high-density target environment.

[Plot: PD vs. SNR (dB), zoomed to 12–24 dB.]
Figure 15. Corresponding local enlarged result for Figure 14.

4.3. Comparison of False Alarm Control in Clutter Edges


In this section, the focus is on the false alarm control ability of TSNN-CFAR at the
clutter edge. Assume that the radar echo contains a clutter edge: the average power of
the weak clutter region is 20 dB, and the strong clutter region, from distance unit 100
to unit 200, has an average power of 40 dB. Other conditions are consistent with the last section.
Figures 16 and 17 show the detection result in the clutter edge situation. It is seen that
NN-CFAR and TSNN-CFAR have the best clutter edge control ability due to the removal of
the guard cells, although NN-CFAR shows slight fluctuation. TS-CFAR is prone to false
alarms in the strong clutter area due to its excessive robustness.
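The truncation idea behind the TS-based detectors can be sketched as follows: samples above a truncation depth t are discarded, and the clutter power is recovered by inverting the mean of the truncated exponential distribution. This is a moment-matching stand-in for the maximum-likelihood estimator used by TS-CFAR, written in Python for illustration:

```python
import numpy as np

def truncated_exp_mean(mu, t):
    """Mean of an exponential(mu) variable truncated at depth t:
    E[X | X < t] = mu - t * exp(-t/mu) / (1 - exp(-t/mu))."""
    r = np.exp(-t / mu)
    return mu - t * r / (1.0 - r)

def estimate_mu_from_truncated(samples, t, lo=1e-6, hi=None):
    """Recover the clutter power mu from samples kept below the
    truncation depth t, by inverting the truncated-mean relation
    with bisection (truncated_exp_mean is increasing in mu)."""
    m = samples[samples < t].mean()
    if hi is None:
        hi = 100.0 * m
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if truncated_exp_mean(mid, t) < m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because strong target or clutter returns are cut off before the power estimate is formed, a few contaminated cells have little effect — which is also why a poorly chosen depth t biases the estimate, as discussed in the conclusions.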

[Plot: power (dB) vs. distance unit; curves: OPT, CA, GO, SO, OS, VI, NN, TS, TSNN.]
Figure 16. Detection result in the clutter edge situation.

[Plot: power (dB) vs. distance unit, zoomed to units 90–140.]
Figure 17. Corresponding local enlarged result for Figure 16.

The category judgment results of the adaptive algorithms and the false alarm probabilities
of the different CFAR methods are shown in Figures 18 and 19, respectively.
Figure 18 shows that selection errors occur before VI-CFAR and NN-CFAR enter the
clutter region, and the selection oscillation of NN-CFAR is obvious after it enters the
clutter region. The TSNN-CFAR algorithm has the best selection ability and is the most
stable. Figure 19 shows that, at the clutter edge, TSNN-CFAR uses the SO-CFAR algorithm
in the weak clutter area to reduce missed detections, so its false alarm probability is close
to that of SO-CFAR; in the strong clutter region it then follows the GO-CFAR algorithm,
which reduces the peak effect in the false alarm probability. TS-CFAR is affected by the
truncation depth and has an obvious false alarm peak effect in the strong clutter region.

[Plot: selected class vs. distance unit; curves: OPT, VI, NN, TSNN.]
Figure 18. Category judgment result in the clutter edge situation.

[Plot: false alarm probability PFA (log scale, 10^0 to 10^-6) vs. distance unit; curves: OPT, CA, GO, SO, OS, VI, NN, TS, TSNN.]
Figure 19. False alarm probability in the clutter edge situation.

4.4. Comprehensive Evaluation of Robustness in Complex Environments


In order to clearly compare the robustness of the various algorithms in complex environments,
the values δPd and δPf a are introduced for the detection deviation and the false alarm
deviation. For the detection deviation, the partial difference value is set as 100 for the
best detector, and the detection deviation value δPd−i for the i-CFAR method is calculated by

δPd−i = (100 / max(µCFAR)) · µi-CFAR    (16)

where µi-CFAR represents the mean deviation between the detection result of the i-CFAR method
and the theoretically optimal detection result, calculated by

µi-CFAR = (1/N) ∑ (Ti-CFAR − TOPT)    (17)

Similarly, the false alarm deviation value δPf a−i for the i-CFAR method is calculated by

δPf a−i = 100 − (100 / max(µCFAR)) · µi-CFAR + M    (18)

where M is a constant that uniformly raises the deviation values so that the curves are
convenient to plot and compare. Note that the detection deviation value reflects the
detection performance under different SNRs, while the false alarm deviation focuses on
the false alarm performance of different distance units at the clutter edge.
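Given the definitions in Equations (16)–(18), the scoring might be implemented as follows (a sketch; the dictionary-based interface and aggregation over SNR points or distance units are illustrative assumptions):

```python
import numpy as np

def detection_deviation_scores(thresholds, t_opt):
    """delta_Pd per Eqs. (16)-(17): mean threshold deviation from the
    optimal detector, normalized so the largest deviation maps to 100.
    `thresholds` maps method name -> array of thresholds."""
    mu = {name: np.mean(t - t_opt) for name, t in thresholds.items()}
    top = max(mu.values())
    return {name: 100.0 * m / top for name, m in mu.items()}

def false_alarm_deviation_scores(thresholds, t_opt, offset=0.0):
    """delta_Pfa per Eq. (18): 100 minus the normalized deviation,
    plus the constant M (`offset`)."""
    mu = {name: np.mean(t - t_opt) for name, t in thresholds.items()}
    top = max(mu.values())
    return {name: 100.0 - 100.0 * m / top + offset
            for name, m in mu.items()}
```

For instance, a method whose thresholds sit twice as far from the optimum as another's scores twice the detection deviation and correspondingly less false alarm deviation.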
According to the above experimental results, the robustness of the different CFAR
algorithms is evaluated and scored comprehensively using the detection deviation and false
alarm deviation values, as shown in Figure 20. PD0, PD1, PD2 and PFA refer to the
homogeneous environment, the high-density target environment with one interference target,
the high-density target environment with two interference targets and the clutter edge situation,
respectively. The deviation scores are computed from the detection deviation values from
SNR = 5 dB to SNR = 10 dB and the false alarm deviation values from distance unit 85 to
distance unit 115.

Figure 20. Score of the robustness for different CFAR algorithms.

According to Figure 20, for PD0, the performance of TSNN-CFAR and CA-CFAR is
consistent and optimal. For PD1 and PD2, TSNN-CFAR and TS-CFAR have the highest
scores and the best performance. At the clutter edge, TSNN-CFAR has the highest false alarm
deviation score. This shows that TSNN-CFAR offers the best overall combination of a low
missed detection probability and strong false alarm regulation.

5. Conclusions
In order to optimize the processing capability of TS-CFAR at the clutter edge and
reduce the missed detection probability of weak targets, a TSNN-CFAR UAV detector
was proposed in this paper. TSNN-CFAR is based on TS-CFAR, for which the presence of a
small amount of target amplitude in the reference cells has almost no effect on the power
estimation of the noise and clutter. Therefore, a reference window with no guard cells can
be used for TSNN-CFAR processing.
By comparing the different CFAR algorithms, it was clearly shown that TSNN-CFAR
outperforms the traditional mean-level CFAR algorithms and the OS-CFAR algorithm when
the reference window contains no guard cells. The TSNN-CFAR algorithm had the best
selection ability and the most stable false alarm regulation in multi-target and clutter edge
environments, since 19 statistics obtained from the numerical calculation of both sides of
the window were used as the characteristic input and the guard cells were removed.
Therefore, it is expected to be suitable for systems that need to detect multiple targets and
cope with clutter.
The detection performance of TSNN-CFAR in a high-density target environment is
closely related to the truncation depth. However, the truncation depth is usually difficult
to determine due to the lack of a priori knowledge of the background environment; once it
is set improperly, the background noise parameter estimation will be highly biased.
Meanwhile, the existing experiments are carried out on simulation datasets and lack real
signal data. We will conduct further research on these issues in the future, and carry out
UAV detection experiments based on real data after building an experimental platform.

Author Contributions: Conceptualization, W.D. and W.Z.; methodology, W.D.; software, W.D.;
validation, W.D. and W.Z.; formal analysis, W.D.; investigation, W.D. and W.Z.; resources, W.D.; data
curation, W.D.; writing—original draft preparation, W.D. and W.Z.; writing—review and editing,
W.Z.; visualization, W.D. and W.Z.; supervision, W.Z. All authors have read and agreed to the
published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: The original contributions presented in the study are included in the
article; further inquiries can be directed to the corresponding author.
Conflicts of Interest: The authors declare no conflicts of interest.

Abbreviations
The following abbreviations are used in this manuscript:

UAVs Unmanned Aerial Vehicles


TSNN Truncated Statistics Neural Network
CFAR Constant False Alarm Rate
NCI Non-Cooperative Identification
MR Moving Range
TS Truncated Statistics
NN Neural Network
ML Mean Level
OS Ordered Statistics
PFA False Alarm Probability
CA Cell Averaging
GO Greatest Of
SO Smallest Of
VI Variability Index
SNR Signal-to-Noise Ratio
MAE Mean Absolute Error
SKEW Skewness
KURT Kurtosis
CDF Cumulative Distribution Function
MSE Mean Square Error


Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
