Algorithms to Antenna

This blog discusses the application of deep learning techniques, particularly convolutional neural networks (CNNs), for channel estimation in 5G systems. It outlines the process of training CNNs using synthesized radar and communication signals to improve channel estimation accuracy, comparing it with traditional methods. The document also highlights the potential for extending these techniques to other wireless applications and modulation identification tasks.
Algorithms to Antenna: 5G Channel Estimation Using Deep-Learning Techniques
Jun 16th, 2021
This blog drills further into the discussion of applying deep learning to wireless
and radar apps by exploring how channel estimation is performed in 5G systems.
Rick Gentile
Honglei Chen, Carlos Lopez, Daniel Garcia-Alis

Related To: MathWorks

This blog is part of the Algorithms to Antenna Series

What you'll learn:

 What is channel estimation?
 How is channel estimation performed?
 The role of CNNs in 5G channel estimation.

Related posts in this series:

 Train Deep-Learning Networks with Synthesized Radar and Communications Signals
 RF Fingerprinting for Trusted Communications Links
 Develop and Test Algorithms on Commercial Radars
 Labeling Radar and Comms Signals for Deep-Learning Apps
First, some background. Transmitted RF signals in wireless communications
systems propagate through the environment to a receiver. Channel propagation
results in changes to the signal, including loss, noise, distortions, and Doppler
shifts.

To recover transmitted signals, we need to remove the effects introduced by the channel from the received signal. First, we estimate the channel characteristics. In a system like that in Figure 1, this involves estimating the characteristics of the channel represented by H. Knowing this information allows us to apply signal processing to remove the effects introduced by the channel after we demodulate and decode the signal.

1. In this wireless communications system, H represents the channel. (©1984–2021 The MathWorks, Inc.)

A general approach to channel estimation is to insert known reference pilot symbols into the transmission. Then the rest of the channel response is interpolated by using these pilot symbols, as shown in Figure 2.

2. Shown is a general approach to channel estimation using known reference pilot symbols.
(©1984–2021 The MathWorks, Inc.)
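The estimate-and-interpolate step can be sketched in a few lines. The following is a hypothetical NumPy illustration (the blog's workflow uses MATLAB and 5G Toolbox); the subcarrier count, pilot spacing, and toy channel are invented for the example:

```python
import numpy as np

# Toy setup (hypothetical values): 64 subcarriers, pilots every 8th
# subcarrier plus one at the band edge, and a smooth synthetic channel.
n_sc = 64
pilot_idx = np.append(np.arange(0, n_sc, 8), n_sc - 1)
true_h = np.exp(1j * 2 * np.pi * np.arange(n_sc) / n_sc)

tx_pilots = np.ones(pilot_idx.size)          # known reference symbols
rx_pilots = true_h[pilot_idx] * tx_pilots    # received pilots (noise-free here)

# Least-squares estimate at the pilots, then linear interpolation between
# them; real and imaginary parts are interpolated separately
h_at_pilots = rx_pilots / tx_pilots
k = np.arange(n_sc)
h_est = (np.interp(k, pilot_idx, h_at_pilots.real)
         + 1j * np.interp(k, pilot_idx, h_at_pilots.imag))

mse = float(np.mean(np.abs(h_est - true_h) ** 2))
```

In a real receiver, the received pilots also carry noise and the interpolation error grows with pilot spacing; that residual error is exactly what the CNN-based estimator described below tries to reduce.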

To perform channel estimation, we train a convolutional neural network (CNN) using data generated with 5G Toolbox. Using the trained CNN, channel estimation is performed in single-input single-output (SISO) mode, utilizing the physical downlink shared channel (PDSCH) demodulation reference signal (DM-RS). Later, we will also compare the results using the CNN with practical and perfect versions of the channel estimator.

To apply deep-learning techniques to a channel-estimation system, we represent the resource grid as a 2D image. This transforms the approach into an image-processing problem, similar to denoising or super-resolution, where CNNs are also often used.
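As a concrete (hypothetical) sketch of that representation, here is how a complex resource grid can be split into real and imaginary planes. The grid dimensions below are illustrative, and whether the two planes are fed as image channels or as a two-image batch is a framework-level choice:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical resource grid: 612 subcarriers x 14 OFDM symbols of
# complex values (e.g., interpolated channel estimates)
grid = rng.standard_normal((612, 14)) + 1j * rng.standard_normal((612, 14))

# Option A: real/imag as two channels of one image
as_channels = np.stack([grid.real, grid.imag], axis=-1)      # (612, 14, 2)

# Option B: real/imag as two single-channel images in one batch
as_batch = np.stack([grid.real, grid.imag], axis=0)          # (2, 612, 14)

# Either layout is lossless: the complex grid is fully recoverable
recovered = as_channels[..., 0] + 1j * as_channels[..., 1]
```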

We generate customized standard-compliant waveforms and channel models to use as our training data. Training data is required for the CNN to learn information about the channel. Then the trained channel-estimation CNN is used to process images that contain linearly interpolated received pilot symbols (Fig. 3). You can see how the extracted pilot symbols on the left portion are used to estimate the remaining portions of the channel.

3. The channel-estimation process using a CNN. (©1984–2021 The MathWorks, Inc.)

Our channel-estimation CNN is trained on various channel configurations, covering different delay spreads, Doppler shifts, and signal-to-noise ratios (SNRs) between 0 and 10 dB.

To simulate a PDSCH transmission, we perform the following steps:

 Generate the PDSCH DM-RS and map it into the resource grid.
 Perform orthogonal frequency-division multiplexing (OFDM) modulation.
 Send the modulated waveform through the channel model.
 Add white Gaussian noise.
 Perform perfect timing synchronization.
 Perform OFDM demodulation.
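The steps above can be sketched end to end. This is a stripped-down stand-in, not the 5G Toolbox implementation: a small QPSK grid replaces the PDSCH/DM-RS mapping, a two-tap channel replaces a standardized fading model, and perfect timing is simply assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sc, n_sym, cp = 64, 4, 16

# QPSK symbols on every subcarrier: a toy stand-in for a PDSCH grid
bits = rng.integers(0, 2, (2, n_sc, n_sym))
tx_grid = ((1 - 2 * bits[0]) + 1j * (1 - 2 * bits[1])) / np.sqrt(2)

# OFDM modulation: per-symbol IFFT plus cyclic prefix, then serialize
time_syms = np.fft.ifft(tx_grid, axis=0)
tx = np.concatenate([time_syms[-cp:], time_syms], axis=0).T.ravel()

# Two-tap multipath channel (stand-in for a standardized fading model)
h = np.array([1.0, 0.4j])
rx = np.convolve(tx, h)[: tx.size]

# Additive white Gaussian noise
rx = rx + 0.002 * (rng.standard_normal(rx.size)
                   + 1j * rng.standard_normal(rx.size))

# Perfect timing sync assumed: strip the CP and demodulate with an FFT
rx_grid = np.fft.fft(rx.reshape(n_sym, n_sc + cp)[:, cp:].T, axis=0)

# With the true frequency response, equalization recovers the QPSK grid;
# an estimator's job is to approximate h_freq from the received pilots
h_freq = np.fft.fft(h, n_sc)
evm = np.sqrt(np.mean(np.abs(rx_grid / h_freq[:, None] - tx_grid) ** 2))
```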

You can use perfect, practical, and neural-network estimations of the same channel model and compare their performance. 5G Toolbox has functions to perform both perfect and practical channel estimations, which we use below in our comparison.

To perform channel estimation using the neural network, you must interpolate
the received grid. Then split the interpolated image into its real and imaginary
parts and input these images together into the neural network as a single batch.
In Figure 4, you can see the mean squared error (MSE) of each estimation
method, including the individual channel estimations and the actual channel
realization obtained from the channel path gains and filter taps. The neural-
network estimator achieves the best results and both the practical estimator and
the neural-network estimator outperform linear interpolation.

4. Shown is channel estimation using different techniques compared to the actual channel
realization obtained from the channel path gains and filter taps. (©1984–2021 The MathWorks,
Inc.)

This type of channel estimation can be extended to other wireless applications.

 Deep Learning Data Synthesis for 5G Channel Estimation (example): Learn how to use a trained CNN to perform channel estimation in single-input single-output (SISO) mode, utilizing the physical downlink shared channel (PDSCH) demodulation reference signal (DM-RS).
 NR PDSCH Throughput (examples): Learn how to measure the physical downlink shared channel (PDSCH) throughput of a 5G New Radio (NR) link, as defined by the 3GPP NR standard.
 Deep Learning in Wireless Systems (examples): Learn how deep-learning techniques can be applied to wireless applications.
See additional 5G, radar, and EW resources, including those referenced in
previous blog posts.

Algorithms to Antenna: Train Deep-Learning Networks with Synthesized Radar and Comms Signals
Nov 20th, 2019
In this blog, we show how you can use learning techniques in cognitive radar,
software-defined radio, and efficient spectrum-management applications to
effectively identify modulation schemes.
Honglei Chen, Ethem Sozer, Rick Gentile

Modulation identification is an important function for an intelligent receiver. There are numerous applications in cognitive radar, software-defined radio (SDR), and efficient spectrum management. To identify communications and radar waveforms, it’s necessary to classify their modulation type. DARPA’s Spectrum Collaboration Challenge highlights the need to manage the demand for a shared RF spectrum. Here, we show how you can exploit learning techniques in these types of applications to effectively identify modulation schemes.

Figure 1 shows an electronic-support-measures (ESM) receiver and an RF emitter to demonstrate a simple scenario in which this type of application can be applied.
1. Shown is an electronic support measures (ESM) and RF emitter scenario.

The signal processing required to pull signal parameters out of the noise is well-established. For example, you can model a system that performs signal-parameter estimation in a radar warning receiver. The processed output of this type of system is shown in Figure 2. Typically, a wideband signal is received by a phased array, and signal processing is performed to determine parameters that are critical to identifying the source, including direction and location.
2. It’s possible to model a system that performs signal parameter estimation in a radar warning
receiver.

As an alternative approach, deep-learning techniques can be applied to obtain improved classification performance.

Applying Deep-Learning Techniques

Modulation identification is challenging because of the range of waveforms that exist in any given frequency band. In addition to the crowded spectrum, the environment can be harsh in terms of propagation conditions and non-cooperative interference sources. Adding to the challenge are questions such as:

 How will these signals present themselves to the receiver?
 How should unexpected signals, which haven’t been received before, be handled?
 How do the signals interact/interfere with each other?

Machine- and deep-learning techniques can be applied to help with the challenge. You can make tradeoffs between the time required to manually extract features to train a machine-learning algorithm and the large data sets required to train a deep-learning network.

Manually extracting features can take time and will require detailed knowledge
of the signals. On the other hand, deep-learning networks need large amounts of
data for training purposes to ensure the best results. One benefit of using a
deep-learning network is that less preprocessing work and less manual feature
extraction are required.
The good news is you can generate and label synthetic, channel-impaired
waveforms. These generated waveforms provide training data that can be used
with a range of deep-learning networks (Fig. 3).

3. Modulation identification workflow with deep learning: When using a deep-learning network,
less preprocessing work and less manual feature extraction are required.

Of course, data can also be generated from live systems, but this data may be challenging to collect and label. Keeping track of waveforms and syncing transmit and receive systems often results in difficult-to-manage large data sets. It’s also a challenge to coordinate data sources that aren’t geographically co-located, including tests that span a wide range of conditions. In addition, labeling this data as it’s collected (or after the fact) requires lots of work because ground truth may not always be obvious.

Synthesizing Radar and Communications Waveforms

Let’s look at a specific example with the following mix of communications and
radar waveforms:

Radar

 Rectangular
 Linear frequency modulation (LFM)
 Barker code

Communications

 Gaussian frequency-shift keying (GFSK)
 Continuous-phase frequency-shift keying (CPFSK)
 Broadcast frequency modulation (B-FM)
 Double-sideband amplitude modulation (DSB-AM)
 Single-sideband amplitude modulation (SSB-AM)

We programmatically generate thousands of I/Q signals for each modulation type. Each signal has unique parameters and is augmented with various impairments to increase the fidelity of the model. For each waveform, the pulse width and repetition frequency are randomly generated. For LFM waveforms, the sweep bandwidth and direction are randomly generated. For Barker waveforms, the chip width and the number of chips are randomly generated.
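As a sketch of that parameter randomization, here is a hypothetical generator for one of the waveform classes (LFM). The sample rate and parameter ranges are invented for illustration, not taken from the blog:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100e6  # illustrative sample rate

def random_lfm(fs, rng):
    """One LFM pulse with randomized width, sweep bandwidth, and direction."""
    pw = rng.uniform(1e-6, 10e-6)            # pulse width in seconds
    bw = rng.uniform(1e6, 20e6)              # sweep bandwidth in Hz
    direction = rng.choice([-1.0, 1.0])      # up- or down-chirp
    t = np.arange(int(pw * fs)) / fs
    k = direction * bw / pw                  # sweep rate, Hz per second
    return np.exp(1j * np.pi * k * t ** 2)   # unit-amplitude complex chirp

pulses = [random_lfm(fs, rng) for _ in range(3)]
```

Repeating this for each class, with impairments added afterward, builds up the labeled training set.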

All signals are impaired with white Gaussian noise. In addition, a frequency
offset with a random carrier frequency is applied to each signal. Finally, each
signal is passed through a channel model. In this example, a multipath Rician
fading channel is used, but others could be swapped in place of the Rician
channel.
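The impairment chain can be sketched as follows. A two-path channel stands in for the Rician fading model here, and all numbers (sample rate, offset range, SNR, echo delay) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100e6                                   # illustrative sample rate
n = 1024
sig = np.exp(1j * 2 * np.pi * 1e6 * np.arange(n) / fs)   # placeholder waveform

# Random carrier frequency offset
f_off = rng.uniform(-5e6, 5e6)
sig = sig * np.exp(1j * 2 * np.pi * f_off * np.arange(n) / fs)

# Two-path channel as a simple stand-in for the Rician fading model:
# a dominant direct path plus one weak delayed echo
echo = 0.2 * np.exp(1j * rng.uniform(0, 2 * np.pi))
faded = sig + echo * np.concatenate([np.zeros(5), sig[:-5]])

# White Gaussian noise at a 10-dB SNR
snr_db = 10.0
noise_std = np.sqrt(np.mean(np.abs(faded) ** 2) / 10 ** (snr_db / 10) / 2)
noise = noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rx = faded + noise
```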

The data is labeled as it’s generated in preparation for training the network.
To improve the classification performance of learning algorithms, a common
approach is to input extracted features in place of the original signal data. The
features provide a representation of the input data that makes it easier for a
classification algorithm to discriminate across the classes. We compute a time-
frequency transform for each modulation type. The downsampled images for one
set of data are shown in Figure 4.
4. Time-frequency representations of radar and communications waveforms are illustrated.
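A minimal version of that time-frequency step, assuming a plain short-time Fourier transform in place of the toolbox transform (window, hop, and chirp parameters are illustrative):

```python
import numpy as np

fs = 1e6
t = np.arange(4096) / fs
sig = np.exp(1j * np.pi * (2e5 / t[-1]) * t ** 2)   # toy up-chirp, 0 to 200 kHz

# Short-time Fourier transform: overlapping windowed frames, FFT per frame
win, hop = 128, 64
frames = np.lib.stride_tricks.sliding_window_view(sig, win)[::hop]
stft = np.fft.fftshift(np.fft.fft(frames * np.hanning(win), axis=1), axes=1)

# dB-magnitude image, frequency x time, ready to downsample for a CNN
image = 20 * np.log10(np.abs(stft).T + 1e-12)
```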

These images are used to train a deep convolutional neural network (CNN).
From the data set, the network is trained with 80% of the data and tested with
10%. The remaining 10% is used for validation.
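The 80/10/10 partition can be sketched as a shuffled index split (the per-class count below is a hypothetical value):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000                       # hypothetical: signals per modulation type
idx = rng.permutation(n)        # shuffle so the split is unbiased

n_train, n_test = int(0.8 * n), int(0.1 * n)
train_idx = idx[:n_train]                      # 80% for training
test_idx = idx[n_train:n_train + n_test]       # 10% for testing
val_idx = idx[n_train + n_test:]               # remaining 10% for validation
```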
On average, over 85% of AM signals were correctly identified. From the
confusion matrix (Fig. 5), a high percentage of DSB-AM signals were
misclassified as SSB-AM and vice versa.

5. The confusion matrix with the results of the classification reveals that a high percentage of
DSB-AM signals were misclassified as SSB-AM and vice versa.

The framework for this workflow enables you to investigate the misclassifications to gain insight into the network’s learning process. DSB-AM and SSB-AM signals have a very similar signature, explaining in part the network’s difficulty in correctly classifying these two types. Further signal processing could make the differences between these two modulation types clearer to the network and result in improved classification.

This example shows how radar and communications modulation types can be
classified by using time-frequency signal-processing techniques and a deep-
learning network.

Using Software Defined Radio

Let’s look at another example in which we use data from a software-defined radio for the testing phase. For our second approach, we generate a different data set to work with. The data set includes the following 11 modulation types (eight digital and three analog):

 Binary phase-shift keying (BPSK)
 Quadrature phase-shift keying (QPSK)
 8-ary phase-shift keying (8-PSK)
 16-ary quadrature amplitude modulation (16-QAM)
 64-ary quadrature amplitude modulation (64-QAM)
 4-ary pulse amplitude modulation (PAM4)
 Gaussian frequency-shift keying (GFSK)
 Continuous-phase frequency-shift keying (CPFSK)
 Broadcast FM (B-FM)
 Double-sideband amplitude modulation (DSB-AM)
 Single-sideband amplitude modulation (SSB-AM)

In this example, we generate 10,000 frames for each modulation type. Again,
80% of the data is used for training, 10% for testing, and 10% for validation.

For digital-modulation types, eight samples are used to represent a symbol. The
network makes each decision based on single frames rather than on multiple
consecutive frames. Similar to our first example, each signal is passed through a
channel with AWGN, Rician multipath fading, and a clock offset. We then
generate channel-impaired frames for each modulation type and store the
frames with their corresponding labels.

To make the scenario more realistic, a random number of samples is removed from the beginning of each frame to eliminate transients and ensure that the frames have a random starting point with respect to the symbol boundaries. The time and time-frequency representations of each waveform type are shown in Figure 6.
6. Shown are examples of time representation of generated waveforms (top) and
corresponding time-frequency representations (bottom).
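The random starting-offset step can be sketched as follows (frame length, samples per symbol, and offset range are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
sps, frame_len = 8, 1024     # samples per symbol and frame size (illustrative)
record = np.exp(1j * 2 * np.pi * 0.01 * np.arange(2 * frame_len))  # toy capture

# Drop a random number of leading samples so the frame skips startup
# transients and begins at an arbitrary point within a symbol
offset = int(rng.integers(0, 10 * sps))
frame = record[offset:offset + frame_len]
```

Because `offset` is generally not a multiple of `sps`, the network never sees frames that are conveniently symbol-aligned.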

In the previous example, we transformed each of the signals to an image. For this example, we apply an alternate approach in which the I/Q baseband samples are used directly without further preprocessing.

To do this, we can use the I/Q baseband samples as rows of a 2D array. Here, the convolutional layers process in-phase and quadrature components independently. Only in the fully connected layer is information from the in-phase and quadrature components combined. This yields 90% accuracy.

A variant on this approach is to use the I/Q samples as a 3D array in which the
in-phase and quadrature components are part of the third dimension (pages).
This approach mixes the information in the I and Q even in the convolutional
layers and makes better use of the phase information. The variant yields a result
with more than 95% accuracy. Representing I/Q components as pages instead of
rows can improve the accuracy of the network by about 5%.
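The two array layouts can be sketched directly. The shapes below follow a generic batch/height/width convention chosen for illustration, not a specific framework's input format:

```python
import numpy as np

rng = np.random.default_rng(6)
n_frames, spf = 32, 1024     # batch of frames, samples per frame (illustrative)
frames = (rng.standard_normal((n_frames, spf))
          + 1j * rng.standard_normal((n_frames, spf)))

# "Rows": I and Q stacked as two rows of a 2-D array per frame.
# Convolutional filters then slide along each row independently.
rows = np.stack([frames.real, frames.imag], axis=1)        # (32, 2, 1024)

# "Pages": I and Q along a third dimension, so every filter tap
# sees both components together and can exploit phase information.
pages = np.stack([frames.real, frames.imag], axis=-1)      # (32, 1024, 2)

# Both layouts are lossless rearrangements of the same samples
recovered = pages[..., 0] + 1j * pages[..., 1]
```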

As the confusion matrix in Figure 7 shows, representing I/Q components as pages instead of rows dramatically increases the ability of the network to accurately differentiate between 16-QAM and 64-QAM frames and between QPSK and 8-PSK frames.
7. The confusion matrix of results with I/Q components as pages indicates the
dramatically increased ability of the network to accurately differentiate between
16-QAM and 64-QAM frames and between QPSK and 8-PSK frames.

In the first example, we tested our results using synthesized data. For the second example, we employ over-the-air signals generated from two ADALM-PLUTO radios (shown in Figure 8, where one is used as a transmitter and the other as a receiver). The network achieves 99% overall accuracy when the two radios are stationary and configured on a desktop. This is better than the results obtained for synthetic data because of the benign test configuration. However, the workflow can be extended for radar and radio data collected in more realistic scenarios.

8. This is the SDR configuration using ADALM-PLUTO radios. One is used as a transmitter and the other as a receiver.

Figure 9 shows an app using live data from the receiver. The received waveform
is shown as an I/Q signal over time. The Estimated Modulation window in the
app reveals the probability of each type as predicted by the network.
9. Shown is the Waveform Modulation Classifier app with live data classification.
Here, the received waveform is an I/Q signal over time.

Frameworks and tools exist to automatically extract time-frequency features from signals. These features can be used to perform modulation classification with a deep-learning network. Alternate techniques to feed signals to a deep-learning network are also possible.

It’s possible to generate and label synthetic, channel-impaired waveforms that can augment or replace live data for training purposes. These types of systems can be validated with over-the-air signals from SDRs and radars.

To learn more about the topics covered in this blog, see the examples below or
email me at [email protected].
 Radar Waveform Classification Using Deep Learning (example): Learn how to classify radar waveform types from generated synthetic data using the Wigner-Ville distribution (WVD) and a deep CNN.
 Radar Target Classification Using Machine Learning and Deep
Learning (example): Learn how to classify radar returns with both
machine- and deep-learning approaches.
 Modulation Classification with Deep Learning (example): Learn how to use
a CNN for modulation classification. You generate synthetic, channel-
impaired waveforms. Using the generated waveforms as training data,
you train a CNN for modulation classification. Then you test the CNN
with SDR hardware and over-the-air signals.
 Deep Learning for Signals (video): Learn how you can use techniques such
as time-frequency transformations and wavelet scattering networks in
conjunction with CNNs and recurrent neural networks to build predictive
models on signals.

See additional 5G, radar, and EW resources, including those referenced in previous blog posts.


Algorithms to Antenna: Develop and Test Algorithms on Commercial Radars—Moving from Simulation to Hardware
Aug 19th, 2020
In this installment of Algorithms to Antenna, we describe a workflow you can use to connect to software-defined radars in MATLAB.
Rick Gentile
Honglei Chen, Michael Jian
Related To: MathWorks

In a previous blog, we reviewed a workflow that can be used to classify radar micro-Doppler signatures from pedestrians and bicyclists via deep-learning techniques. In another blog, we showed ways you can implement a virtual array to increase angular resolution in multiple-input, multiple-output (MIMO) radars. In both examples, we modeled a radar system and a set of scenarios to test the algorithms we developed. Either one is a great application to try out on commercially available radars, which brings us to our current blog topic.

Here, we shed light on a workflow that can be used to connect to software-defined radars in MATLAB. There are many ways to connect radars to MATLAB, but we will focus on the Ancortek radars because of the range of frequency bands available (2.4 GHz, 6 GHz, 10 GHz, 24 GHz, and 77 GHz), the multiple antenna and array front-end options, and the direct connectivity of these systems to MATLAB.

All of the previous blogs focused on modeling and simulation frameworks to help
accelerate algorithm development. One of the goals is to make it easy for
engineers to try algorithms before building their systems. When a radar can be
used to collect data, you can move to the next stage in a project with confidence
that the algorithms are effective. Along these same lines, commercially available
radars provide a head start in working with hardware at the earliest stages of
the project.

At the virtual IEEE International Radar Conference in May 2020, we presented how we tested the deep-learning network, previously described in this blog, with data collected from an Ancortek radar in the test setup shown in Figure 1. The network was trained completely with synthesized data and was able to recognize pedestrians in the radar field of view based on micro-Doppler signatures. Even though the radars made it easy to collect data, it would have been quite a challenge to collect as much data as we synthesized for the network training process.
1. Data is collected from an Ancortek radar. (Courtesy of Ancortek Inc.)

Another example, described in an earlier blog, related to increasing angular resolution with MIMO radars—this was shown at the conference as well. In Figure 2, you can see the setup using the hardware. The targets consist of two corner reflectors separated by 25 degrees.

The plot on the bottom left of Figure 2 shows the results of range-angle
processing without using a virtual array to increase angular resolution. Here,
the two reflectors look like a single object. The plot on the bottom right
of Figure 2 shows the results of creating a virtual array. Note the two reflectors
are resolved in angle. The MIMO operations are possible on the radar because
the transmitter array is spaced appropriately, and we have control of when
individual array elements transmit. The receive elements also provide spatial
awareness.
2. A virtual array is used to increase angular resolution in the radar test setup (top);
reflectors appear as a single object without MIMO operations (bottom left), and two
reflectors are resolved using MIMO processing (bottom right). (Courtesy of Ancortek
Inc.)
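The virtual-array idea behind that result can be sketched numerically: each transmit/receive element pair contributes a virtual element whose position is the sum of the two physical positions. The element counts and spacings below are illustrative, not the Ancortek hardware's:

```python
import numpy as np

# Illustrative geometry (positions in wavelengths):
# 2 Tx elements spaced N_rx * lambda/2 apart, 4 Rx elements at lambda/2
tx_pos = np.array([0.0, 2.0])
rx_pos = np.arange(4) * 0.5

# Each Tx/Rx pair yields a virtual element at the sum of the positions,
# here filling out an 8-element uniform lambda/2 array from 2x4 elements
virtual = np.sort((tx_pos[:, None] + rx_pos[None, :]).ravel())
```

The larger virtual aperture is what narrows the beam enough to resolve the two reflectors in angle.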

One more example we want to highlight from the conference is shown in Figure 3. In this case, instead of corner reflectors or pedestrians, the radar detects a dancing dinosaur. The display on the top right shows the detected range over time, and the display on the bottom right shows angle over time. Both examples show how a phased array as the front end of the receiver enables beamforming and direction-of-arrival estimation (click here for the full video).
3. Radar detects a dancing dinosaur in range (top right) and angle (bottom right).
(Courtesy of Ancortek Inc.)

To learn more about the topics covered in this blog, see the examples below or
email me at [email protected]:

 Increasing Angular Resolution with MIMO Radars (example): Learn how forming a virtual array in MIMO radars can help increase angular resolution. Understand how to simulate a coherent MIMO radar signal-processing chain.
 Pedestrian and Bicyclist Classification Using Deep Learning (example): Learn to classify pedestrians and bicyclists based on their micro-Doppler characteristics using a deep-learning network and time-frequency analysis.
 The Micro-Doppler Effect in Radar, Second Edition (book): Learn about micro-Doppler effects from radar returns.

See additional 5G, radar, and EW resources, including those referenced in previous blog posts.
Algorithms to Antenna: Labeling Radar and Comms Signals for Deep-Learning Apps
Apr 21st, 2021
Previous blogs on deep learning focused on applying techniques to various radar and communications applications. Here we look at labeling the real-world data gathered from a radar, radio, or instrumentation.
Rick Gentile
Honglei Chen, Frantz Bouchereau, Shrey Joshi
Related To: MathWorks
As part of this series, we have written multiple posts on applying deep-learning
techniques to radar and communications applications. The most recent posts
related to this topic include:

 Train Deep-Learning Networks with Synthesized Radar and Communications Signals
 RF Fingerprinting for Trusted Communications Links
 Develop and Test Algorithms on Commercial Radars
In these blogs, we synthesized radar and communications data to train deep-
learning networks in target and signal classification applications and, most
recently, RF fingerprinting. We showed multiple examples where networks were
trained with synthesized data and tested the networks with data collected from a
radio or radar.

While previous blogs focused on data synthesis, here we look at labeling real-
world data from a radar, a radio, or instrumentation. This type of data typically
is in a complex format, with an I (in-phase) and Q (quadrature) component. The
data is usually contained in sets of large files where the only identifier is in the
file name, which might include information on when the data was collected.

Your data can be labeled in many ways. For example, you may label an entire
signal by its waveform modulation type. In other applications, labels may
indicate what type of target return (aircraft, drone, etc.) or interference is
included in the signal. These types of labels are referred to as categorical labels.

Moving to a more granular level, you may label specific characteristics of your
signals, such as pulse width, bandwidth, or pulse repetition frequency (PRF). In
other applications, labels might indicate regions of interest (ROI); for example, a
spurious interference event that occurs at a finite time interval within a larger
signal.

We will demonstrate how you can label the primary time and frequency features
of pulse radar signals using three common waveforms: linear FM, rectangular,
and stepped FM. The workflow helps you to create complete and accurate data
sets to train models used for deep learning. Note, this type of workflow can be
similarly applied to communications signals as well.

We will start with an interactive tool that was designed for this type of
application—the Signal Labeler app—which is part of the Signal Processing
Toolbox for MATLAB. For completeness, we will discuss ways to label data
manually and automatically using the app.

In manual mode, we leverage synchronized time and time-frequency views to help identify frequency features related to the waveform modulation type. In automated mode, we use functions that identify waveform characteristics, including PRF, pulse width, duty cycle, and pulse bandwidth.
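A hypothetical labeling function of this kind, estimating PRF, duty cycle, and pulse width from an ideal rectangular pulse train, might look like the following. The app's actual functions are authored in MATLAB; this NumPy sketch only shows the measurement logic, and the sample rate and pulse parameters are invented:

```python
import numpy as np

fs = 1e6                       # illustrative sample rate
prf, pw = 1e3, 100e-6          # ground truth: 1-kHz PRF, 100-us pulse width

# Ideal rectangular pulse train, 10 ms long
n = np.arange(int(0.01 * fs))
sig = ((n % int(fs / prf)) < int(round(pw * fs))).astype(float)

# Rising edges give the pulse repetition interval, hence the PRF
on = sig > 0.5
edges = np.flatnonzero(np.diff(on.astype(int)) == 1) + 1
prf_est = fs / np.mean(np.diff(edges))

# Duty cycle is the on-fraction; pulse width follows as duty / PRF
duty_est = on.mean()
pw_est = duty_est / prf_est
```

On real captures, the hard part is thresholding against noise before edge detection; the same diff-the-threshold structure still applies.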

Figure 1 shows the Signal Labeler app with radar signals loaded and ready to be
labeled. There’s a Label Definitions panel (top left) and panels for the key
visualizations (right). To start, we create a label definition for each of our signal
waveform types. For our example, these label values are LinearFM, Rectangular,
and SteppedFM. As part of our setup, we also create attribute label definitions
for PRF, duty cycle, pulse width, and bandwidth. These parameters will be
automatically labeled using functions that process each signal.
1. The Signal Labeler app with radar signals loaded. (©1984–2021 The MathWorks, Inc.)

We use an ROI label to capture the region of each signal, spanning the initial and final times, over which the characteristic we want to label occurs. Once the labels are defined, you can upload custom functions you author in MATLAB to the labeler. For our example, we use functions customized to label the PRF, bandwidth, duty cycle, and pulse width. Figure 2 shows a zoomed view of the automated functions gallery in the labeler toolstrip.

2. Labeling functions with the Signal Labeler app. (©1984–2021 The MathWorks, Inc.)

As we noted earlier, to demonstrate the ability to manually label a signal, we first inspect the signal in the time and time-frequency domains (Fig. 3). Once we visually determine the signal type by looking at the time-frequency content, we can manually assign the label. A few notes to keep in mind: The signals we use in the example are ideal signals to make it easy to see the attributes; however, these same techniques can be used effectively on noisy signals with any number of impairments. In addition, labeling can be done automatically by applying the techniques described below.

Note that the waveform characteristics aren’t filled in yet (Fig. 3, bottom right).
We will show how to do this labeling shortly. For the manual labeling portion of
the workflow, the labels you define (in this case, the waveform types) are
available in the dropdown menu, which you can pick from.

3. Manual labeling for a rectangular waveform. (©1984–2021 The MathWorks, Inc.)

Figures 4 and 5 show the similar visualizations for the linear FM and stepped
FM waveforms, respectively. Note the respective waveform type label has been
added for each signal.

4. Manual labeling for a linear FM waveform. (©1984–2021 The MathWorks, Inc.)

[Figure 5: Manual labeling for a stepped FM waveform. (©1984–2021 The MathWorks, Inc.)]

With the manual labeling complete for each waveform type, we can now use functions to automatically compute and label the characteristics of each input signal. (For more information on the functions used in our example, and on adding and customizing your own, please go here.) The results are shown in the bottom-right section of Figure 6. A nice part of this workflow is that you can write custom functions in MATLAB and add them to the function gallery in the Signal Labeler app.

[Figure 6: Labeled waveforms with corresponding signal attributes for each ROI. (©1984–2021 The MathWorks, Inc.)]
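An automatic labeling function of the kind added to the gallery is, at heart, a small routine that measures one attribute of a region of interest. Here is a hedged Python sketch of a pulse-width measurement (the function name and the 50% threshold are our own choices, not taken from the example); it reproduces the 5e-5-s ground truth used in this data set:

```python
import numpy as np

def measure_pulse_width(x, fs, threshold=0.5):
    """Estimate pulse width as the time the envelope stays above
    `threshold` times its peak value (a simple -6-dB width)."""
    env = np.abs(x)
    above = np.flatnonzero(env >= threshold * env.max())
    return (above[-1] - above[0] + 1) / fs

# Ideal rectangular pulse: 50 us wide in a 200-us record sampled at 10 MHz.
fs = 10e6
n = int(200e-6 * fs)                  # 2000 samples
x = np.zeros(n)
x[500:500 + int(50e-6 * fs)] = 1.0    # 50-us pulse -> 500 samples

print(measure_pulse_width(x, fs))     # ~5e-5 s
```

In the app workflow, a function like this is applied to every labeled region, so each ROI picks up its pulse-width attribute automatically instead of being typed in by hand.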

We have a small data set in this example, but you would typically work with much larger ones. You can view your labeling progress and verify that the computed label values are correct. Figure 7 (left) shows the labeling progress, which in this example is 100% because all signals are labeled. Figure 7 (right) shows the number of signals carrying each label value. The pie chart helps you confirm that you have a balanced data set; you can also use it to assess the accuracy of your labeling and confirm the results are as expected.

[Figure 7: Dashboard with labeling progress and distribution. (©1984–2021 The MathWorks, Inc.)]
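The balance check behind the pie chart reduces to counting label values. A minimal, framework-free sketch in Python (the label names and the 50%-of-even-share threshold are illustrative assumptions, not part of the app):

```python
from collections import Counter

# A perfectly balanced toy label set: four signals per waveform type.
labels = ["Rectangular", "LinearFM", "SteppedFM"] * 4

counts = Counter(labels)
total = sum(counts.values())
for name, n in sorted(counts.items()):
    print(f"{name}: {n} ({100 * n / total:.0f}%)")

# Flag imbalance if any class falls far below an even share of the data set.
even_share = total / len(counts)
balanced = all(n >= 0.5 * even_share for n in counts.values())
print("balanced:", balanced)
```

A skewed class distribution caught here, before training, is far cheaper to fix (by collecting or synthesizing more examples) than a biased model discovered afterward.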

Furthermore, the dashboard can be used to analyze signal region labels. For example, we can see in Figure 8 that all pulse-width label values are distributed around 5e-5 s (50 µs), which matches the ground truth of our data set.

[Figure 8: Available SNR as a function of range. (©1984–2021 The MathWorks, Inc.)]

When labeling is completed, you can export the labeled signals back into
MATLAB to train your deep-learning models.
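After export, each labeled region effectively becomes a (signal segment, label) training pair. The data structure below is hypothetical (the example doesn't specify the exported format), but it shows the slicing step that turns region-of-interest labels into model inputs, sketched in Python:

```python
import numpy as np

# Hypothetical exported structure: each entry pairs a recorded signal with a
# region-of-interest label (sample limits plus the waveform-type label value).
dataset = [
    {"signal": np.random.randn(2000),
     "roi": {"start": 500, "stop": 1000, "type": "LinearFM"}},
]

# Slice each ROI out of its parent signal to form (example, label) pairs
# suitable for feeding a deep-learning classifier.
X = [d["signal"][d["roi"]["start"]:d["roi"]["stop"]] for d in dataset]
y = [d["roi"]["type"] for d in dataset]
print(len(X), y[0])
```

From here the pairs can be split into training and validation sets and handed to whatever network architecture you are training, in MATLAB or elsewhere.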

To learn more about the topics covered in this blog and explore your own
designs, see the examples below or email me at [email protected]:

 Automatically Label Radar Signals (example): Learn how to label the main
time and frequency features of pulse radar signals and create complete
and accurate data sets to train artificial-intelligence (AI) models.
 AI for Radar (examples): Learn how AI techniques can be applied to radar
applications.
 Deep Learning in Wireless Systems (examples): Learn how deep-learning
techniques can be applied to wireless applications.

See additional 5G, radar, and EW resources, including those referenced in previous blog posts.

Rick Gentile is Product Manager, Shrey Joshi is Senior Engineer, Frantz Bouchereau is Engineering Manager, and Honglei Chen is Principal Engineer at MathWorks.
