Algorithms to Antenna
Techniques
Jun 16th, 2021
This blog drills further into the discussion of applying deep learning to wireless
and radar apps by exploring how channel estimation is performed in 5G systems.
Rick Gentile
Honglei Chen, Carlos Lopez, Daniel Garcia-Alis
2. Shown is a general approach to channel estimation using known reference pilot symbols.
(©1984–2021 The MathWorks, Inc.)
You can use perfect, practical, and neural-network estimations of the same
channel model and compare their performance. 5G Toolbox has functions to
perform both perfect and practical channel estimations, which we use below in
our comparison.
To perform channel estimation using the neural network, you must interpolate
the received grid. Then split the interpolated image into its real and imaginary
parts and input these images together into the neural network as a single batch.
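The blog's version of this step uses 5G Toolbox in MATLAB; as a minimal Python/NumPy sketch of the grid preparation, assuming an illustrative resource grid of 612 subcarriers by 14 OFDM symbols:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for the interpolated received grid: complex-valued,
# 612 subcarriers x 14 OFDM symbols (illustrative sizes).
interp_grid = rng.standard_normal((612, 14)) + 1j * rng.standard_normal((612, 14))

# Split the complex grid into real and imaginary "images" and stack them
# along a channel axis, then add a leading batch dimension so the network
# receives both images together as a single batch: shape (1, 612, 14, 2).
nn_input = np.stack([interp_grid.real, interp_grid.imag], axis=-1)[np.newaxis, ...]

print(nn_input.shape)  # (1, 612, 14, 2)
```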
In Figure 4, you can see the mean squared error (MSE) of each estimation
method, including the individual channel estimations and the actual channel
realization obtained from the channel path gains and filter taps. The neural-
network estimator achieves the best results and both the practical estimator and
the neural-network estimator outperform linear interpolation.
4. Shown is channel estimation using different techniques compared to the actual channel
realization obtained from the channel path gains and filter taps. (©1984–2021 The MathWorks,
Inc.)
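The MSE metric behind this comparison is straightforward to compute; a quick Python sketch (the channel values here are made-up toy numbers, not taken from the example):

```python
import numpy as np

def channel_mse(h_est, h_actual):
    """Mean squared error between a channel estimate and the actual realization."""
    return np.mean(np.abs(h_est - h_actual) ** 2)

h_actual = np.array([1.0 + 1.0j, 0.5 - 0.2j])   # toy "actual" channel taps
h_est = np.array([0.9 + 1.1j, 0.45 - 0.25j])    # toy estimate
print(channel_mse(h_est, h_actual))              # 0.0125
```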
The signal processing techniques required to pull signal parameters out of the
noise are well-established. For example, you can model a system that performs signal
parameter estimation in a radar warning receiver. The processed output of this
type of system is shown in Figure 2. Typically, a wideband signal is received by a
phased array and signal processing is performed to determine parameters that
are critical to identifying the source, including direction and location.
2. It’s possible to model a system that performs signal parameter estimation in a radar warning
receiver.
Manually extracting features takes time and requires detailed knowledge
of the signals. On the other hand, deep-learning networks need large amounts of
data for training purposes to ensure the best results. One benefit of using a
deep-learning network is that less preprocessing work and less manual feature
extraction are required.
The good news is you can generate and label synthetic, channel-impaired
waveforms. These generated waveforms provide training data that can be used
with a range of deep-learning networks (Fig. 3).
3. Modulation identification workflow with deep learning: When using a deep-learning network,
less preprocessing work and less manual feature extraction are required.
Of course, data can also be generated from live systems, but such data may be
challenging to collect and label. Keeping track of waveforms and syncing
transmit and receive systems often results in large, difficult-to-manage data
sets. It's also a challenge to coordinate data sources that are not
geographically co-located, including tests that span a wide range of conditions.
In addition, labeling this data as it's collected (or after the fact) requires
significant work because ground truth may not always be obvious.
Let’s look at a specific example with the following mix of communications and
radar waveforms:
Radar
Rectangular
Linear frequency modulation (LFM)
Barker code
Communications
All signals are impaired with white Gaussian noise. In addition, a frequency
offset with a random carrier frequency is applied to each signal. Finally, each
signal is passed through a channel model. In this example, a multipath Rician
fading channel is used, but others could be swapped in place of the Rician
channel.
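The example itself is built in MATLAB; here is a hedged Python sketch of the AWGN and random-frequency-offset impairments (the Rician fading step is omitted for brevity, and all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def impair(sig, fs, snr_db, max_offset_hz):
    """Apply a random carrier-frequency offset, then AWGN at the given SNR."""
    n = np.arange(len(sig))
    f_off = rng.uniform(-max_offset_hz, max_offset_hz)     # random frequency offset
    out = sig * np.exp(2j * np.pi * f_off * n / fs)        # rotate by the offset
    p_sig = np.mean(np.abs(out) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)                  # noise power for target SNR
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(out.shape)
                                    + 1j * rng.standard_normal(out.shape))
    return out + noise

x = np.exp(2j * np.pi * 0.05 * np.arange(256))             # toy tone as a stand-in waveform
y = impair(x, fs=1e6, snr_db=20, max_offset_hz=5e3)
print(y.shape)                                             # (256,)
```

A fading channel (Rician or otherwise) would be applied between the offset and noise steps in the same pipeline.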
The data is labeled as it's generated, in preparation for training the network.
To improve the classification performance of learning algorithms, a common
approach is to input extracted features in place of the original signal data. The
features provide a representation of the input data that makes it easier for a
classification algorithm to discriminate across the classes. We compute a time-
frequency transform for each modulation type. The downsampled images for one
set of data are shown in Figure 4.
4. Time-frequency representations of radar and communications waveforms are illustrated.
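The companion MATLAB example uses the Wigner-Ville distribution for this transform; as a simpler stand-in, here is a NumPy-only sketch that computes a magnitude spectrogram of an LFM-like chirp and decimates it toward a CNN-friendly image size (all sizes illustrative):

```python
import numpy as np

fs = 4096.0
n = np.arange(4096)
# LFM-like chirp sweeping roughly 100 Hz to 1500 Hz over one second.
f0, f1 = 100.0, 1500.0
x = np.cos(2 * np.pi * (f0 * n / fs + (f1 - f0) * n**2 / (2 * fs * 4096)))

# Minimal magnitude spectrogram: windowed FFT frames, 256 samples, 50% overlap.
nperseg, step = 256, 128
win = np.hanning(nperseg)
frames = np.stack([x[i:i + nperseg] * win
                   for i in range(0, len(x) - nperseg + 1, step)])
Sxx = np.abs(np.fft.rfft(frames, axis=1)).T          # frequency x time image

# Downsample along frequency to shrink the image toward a CNN input size.
img = Sxx[::4, :]
print(Sxx.shape, img.shape)                          # (129, 31) (33, 31)
```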
These images are used to train a deep convolutional neural network (CNN).
From the data set, the network is trained with 80% of the data and tested with
10%. The remaining 10% is used for validation.
On average, over 85% of AM signals were correctly identified. The confusion
matrix (Fig. 5) shows that a high percentage of DSB-AM signals were
misclassified as SSB-AM and vice versa.
5. The confusion matrix with the results of the classification reveals that a high percentage of
DSB-AM signals were misclassified as SSB-AM and vice versa.
This example shows how radar and communications modulation types can be
classified by using time-frequency signal-processing techniques and a deep-
learning network.
In this example, we generate 10,000 frames for each modulation type. Again,
80% of the data is used for training, 10% for testing, and 10% for validation.
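The split can be sketched as a random partition of frame indices (Python/NumPy; the 10,000-frame count comes from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

n_frames = 10_000
idx = rng.permutation(n_frames)

# 80% train / 10% test / 10% validation.
train, test, val = np.split(idx, [int(0.8 * n_frames), int(0.9 * n_frames)])
print(len(train), len(test), len(val))   # 8000 1000 1000
```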
For digital-modulation types, eight samples are used to represent a symbol. The
network makes each decision based on single frames rather than on multiple
consecutive frames. Similar to our first example, each signal is passed through a
channel with AWGN, Rician multipath fading, and a clock offset. We then
generate channel-impaired frames for each modulation type and store the
frames with their corresponding labels.
To make the scenario more realistic, a random number of samples are removed
from the beginning of each frame to remove transients and make sure that the
frames have a random starting point with respect to the symbol boundaries. The
time and time-frequency representations of each waveform type are shown
in Figure 6.
6. Shown are examples of time representation of generated waveforms (top) and
corresponding time-frequency representations (bottom).
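Trimming a random number of leading samples can be sketched as follows (Python; the maximum trim length is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_start(frame, max_trim):
    """Drop a random number of leading samples so the frame starts at a
    random point relative to the symbol boundaries (and sheds transients)."""
    k = int(rng.integers(0, max_trim + 1))
    return frame[k:]

frame = np.zeros(1024)
trimmed = random_start(frame, max_trim=50)
print(len(trimmed))   # between 974 and 1024
```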
To train on the raw samples directly, we can use the I/Q baseband samples in rows as part of a 2D array.
Here, the convolutional layers process in-phase and quadrature components
independently. Only in the fully connected layer is information from the in-phase
and quadrature components combined. This approach yields 90% accuracy.
A variant on this approach is to use the I/Q samples as a 3D array in which the
in-phase and quadrature components are part of the third dimension (pages).
This approach mixes the information in the I and Q even in the convolutional
layers and makes better use of the phase information. The variant yields a result
with more than 95% accuracy. Representing I/Q components as pages instead of
rows can improve the accuracy of the network by about 5%.
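The two input layouts can be sketched with NumPy array shapes (the 1,024-sample frame length is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
iq = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)  # one frame of I/Q samples

# Rows: a 2 x N "image" whose rows are I and Q. Per the text, the network's
# convolutional filters process the rows independently, and the components
# are mixed only in the fully connected layer.
rows = np.vstack([iq.real, iq.imag])                             # shape (2, 1024)

# Pages: a 1 x N x 2 array with I and Q along the third dimension, so even
# the convolutional filters span both components and can exploit phase.
pages = np.stack([iq.real, iq.imag], axis=-1)[np.newaxis, ...]   # shape (1, 1024, 2)

print(rows.shape, pages.shape)
```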
In the first example, we tested our results using synthesized data. For the
second example, we employ over-the-air signals generated from two ADALM-
PLUTO Radios (shown in Figure 8, where one is used as a transmitter and one as
a receiver). The network achieves 99% overall accuracy when the two radios are
stationary and configured on a desktop. This is better than the results obtained
for synthetic data because the stationary desktop configuration presents a
relatively benign channel. However, the workflow can be extended for radar and
radio data collected in more realistic scenarios.
Figure 9 shows an app using live data from the receiver. The received waveform
is shown as an I/Q signal over time. The Estimated Modulation window in the
app reveals the probability of each type as predicted by the network.
9. Shown is the Waveform Modulation Classifier app with live data classification.
Here, the received waveform is an I/Q signal over time.
To learn more about the topics covered in this blog, see the examples below or
email me at [email protected].
Radar Waveform Classification Using Deep Learning (example): Learn how
to classify radar waveform types of generated synthetic data using the
Wigner-Ville distribution (WVD) and a deep CNN.
Radar Target Classification Using Machine Learning and Deep
Learning (example): Learn how to classify radar returns with both
machine- and deep-learning approaches.
Modulation Classification with Deep Learning (example): Learn how to use
a CNN for modulation classification. You generate synthetic, channel-
impaired waveforms. Using the generated waveforms as training data,
you train a CNN for modulation classification. Then you test the CNN
with SDR hardware and over-the-air signals.
Deep Learning for Signals (video): Learn how you can use techniques such
as time-frequency transformations and wavelet scattering networks in
conjunction with CNNs and recurrent neural networks to build predictive
models on signals.
All of the previous blogs focused on modeling and simulation frameworks to help
accelerate algorithm development. One of the goals is to make it easy for
engineers to try algorithms before building their systems. When a radar can be
used to collect data, you can move to the next stage in a project with confidence
that the algorithms are effective. Along these same lines, commercially available
radars provide a head start in working with hardware at the earliest stages of
the project.
The plot on the bottom left of Figure 2 shows the results of range-angle
processing without using a virtual array to increase angular resolution. Here,
the two reflectors look like a single object. The plot on the bottom right
of Figure 2 shows the results of creating a virtual array. Note the two reflectors
are resolved in angle. The MIMO operations are possible on the radar because
the transmitter array is spaced appropriately, and we have control of when
individual array elements transmit. The receive elements also provide spatial
awareness.
2. A virtual array is used to increase angular resolution in the radar test setup (top);
reflectors appear as a single object without MIMO operations (bottom left), and two
reflectors are resolved using MIMO processing (bottom right). (Courtesy of Ancortek
Inc.)
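The virtual-array idea can be illustrated numerically: with appropriate spacing, the pairwise sums of transmit and receive element positions fill a larger aperture. A Python sketch (the element counts and half-wavelength spacing are illustrative assumptions, not the Ancortek radar's actual geometry):

```python
import numpy as np

# Hypothetical MIMO geometry, positions in half-wavelength units:
# 2 transmit elements spaced 4 units apart, 4 receive elements at unit spacing.
tx = np.arange(2) * 4          # [0, 4]
rx = np.arange(4)              # [0, 1, 2, 3]

# Each tx/rx pair contributes one virtual element at the sum of their positions.
virtual = np.sort((tx[:, None] + rx[None, :]).ravel())
print(virtual)                 # [0 1 2 3 4 5 6 7] -> a filled 8-element aperture
```

Two physical transmitters and four receivers thus behave like an eight-element receive array, which is what resolves the two reflectors in angle.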
One more example we want to highlight from the conference is shown in Figure
3. In this case, instead of corner reflectors or pedestrians, the radar detects a
dancing dinosaur. The display on the top right shows the detected range over
time and the display on the bottom right shows angle over time. Both examples
show how the phased array as the front end of the receiver enables
beamforming and direction-of-arrival estimation (click here for the full video).
3. Radar detects a dancing dinosaur in range (top right) and angle (bottom right).
(Courtesy of Ancortek Inc.)
To learn more about the topics covered in this blog, see the examples below or
email me at [email protected]:
While previous blogs focused on data synthesis, here we look at labeling real-
world data from a radar, a radio, or instrumentation. This type of data typically
is in a complex format, with an I (in-phase) and Q (quadrature) component. The
data is usually contained in sets of large files where the only identifier is in the
file name, which might include information on when the data was collected.
Your data can be labeled in many ways. For example, you may label an entire
signal by its waveform modulation type. In other applications, labels may
indicate what type of target return (aircraft, drone, etc.) or interference is
included in the signal. These types of labels are referred to as categorical labels.
Moving to a more granular level, you may label specific characteristics of your
signals, such as pulse width, bandwidth, or pulse repetition frequency (PRF). In
other applications, labels might indicate regions of interest (ROI); for example, a
spurious interference event that occurs at a finite time interval within a larger
signal.
We will demonstrate how you can label the primary time and frequency features
of pulse radar signals using three common waveforms: linear FM, rectangular,
and stepped FM. The workflow helps you to create complete and accurate data
sets to train models used for deep learning. Note that this type of workflow
can be applied to communications signals as well.
We will start with an interactive tool that was designed for this type of
application—the Signal Labeler app—which is part of the Signal Processing
Toolbox for MATLAB. For completeness, we will discuss ways to label data
manually and automatically using the app.
Figure 1 shows the Signal Labeler app with radar signals loaded and ready to be
labeled. There’s a Label Definitions panel (top left) and panels for the key
visualizations (right). To start, we create a label definition for each of our signal
waveform types. For our example, these label values are LinearFM, Rectangular,
and SteppedFM. As part of our setup, we also create attribute label definitions
for PRF, duty cycle, pulse width, and bandwidth. These parameters will be
automatically labeled using functions that process each signal.
1. The Signal Labeler app with radar signals loaded. (©1984–2021 The MathWorks, Inc.)
We use an ROI label to capture the region in each signal spanning the initial
and final times over which the characteristic of interest occurs. Once the
labels are defined, you can upload
custom functions you author in MATLAB to the labeler. For our example, we use
functions customized to label the PRF, bandwidth, duty cycle, and pulse
width. Figure 2 shows a zoomed view of the automated functions gallery in the
labeler toolstrip.
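As a rough illustration of what such a labeling function computes, here is a generic threshold-based Python sketch (not the actual MATLAB functions used with the Signal Labeler app; the test waveform parameters are made up):

```python
import numpy as np

def pulse_attributes(x, fs, threshold=0.5):
    """Estimate pulse width, PRF, and duty cycle of a pulsed signal from a
    simple envelope threshold (illustrative sketch only)."""
    env = np.abs(x) > threshold * np.max(np.abs(x))
    edges = np.flatnonzero(np.diff(np.r_[0, env.astype(int)]) == 1)  # pulse starts
    pri = np.mean(np.diff(edges)) / fs            # pulse repetition interval (s)
    width = env.sum() / len(edges) / fs           # mean on-time per pulse (s)
    return width, 1.0 / pri, width / pri          # width, PRF, duty cycle

fs = 1e6
t = np.arange(int(1e-2 * fs))                     # 10 ms at 1 MHz
x = ((t % 1000) < 50).astype(float)               # 50-us pulses at a 1-kHz PRF
print(pulse_attributes(x, fs))                    # ~ (5e-05, 1000.0, 0.05)
```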
Note that the waveform characteristics aren’t filled in yet (Fig. 3, bottom right).
We will show how to do this labeling shortly. For the manual labeling portion
of the workflow, the labels you define (in this case, the waveform types) are
available to pick from in a dropdown menu.
Figures 4 and 5 show similar visualizations for the linear FM and stepped
FM waveforms, respectively. Note the respective waveform type label has been
added for each signal.
With the manual labeling complete for each waveform type, we can now use
functions to automatically compute and label the characteristics of each input
signal. For more information on the functions used in our example, and how you
can add and customize your own functions, please go here. The results are
shown in the bottom-right section of Figure 6. A nice feature of this workflow
is that you can author your own functions in MATLAB and add them to the
function gallery in the Signal Labeler app.
We have a small data set in this example, but you would typically have much
larger sets. You can view your labeling progress and verify the computed label
values are correct. Figure 7 (left) shows the labeling progress, which in this
example is 100%, as all signals are labeled. Figure 7 (right) shows the number of
signals with labels for each label value. The pie chart can be used to ensure
you have a balanced data set. In this case, you can also assess the accuracy of
your labeling and confirm the results are as expected.
Furthermore, the dashboard can be used to analyze signal region labels. For
example, we can see in Figure 8 that all pulse-width label values are distributed
around 5e-5, which matches our data-set ground truth.
When labeling is completed, you can export the labeled signals back into
MATLAB to train your deep-learning models.
To learn more about the topics covered in this blog and explore your own
designs, see the examples below or email me at [email protected]:
Automatically Label Radar Signals (example): Learn how to label the main
time and frequency features of pulse radar signals and create complete
and accurate data sets to train artificial-intelligence (AI) models.
AI for Radar (examples): Learn how AI techniques can be applied to radar
applications.
Deep Learning in Wireless Systems (examples): Learn how deep-learning
techniques can be applied to wireless applications.