
This article has been accepted for publication in IEEE Access. This is the author's version, which has not been fully edited; content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2023.3282688

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2023.DOI

Resource-efficient Range-Doppler Map Generation Using Deep Learning Network for Automotive Radar Systems
TAEWON JEONG1, (Graduate Student Member, IEEE), and SEONGWOOK LEE2, (Member, IEEE)
1 School of Electronics and Information Engineering, College of Engineering, Korea Aerospace University, Deogyang-gu, Goyang-si, Gyeonggi-do 10540, Republic of Korea (e-mail: [email protected])
2 School of Electrical and Electronics Engineering, College of ICT Engineering, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Republic of Korea (e-mail: [email protected])
Corresponding author: Seongwook Lee (e-mail: [email protected]).
This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00237).

ABSTRACT In this paper, we present a deep neural network aimed at enhancing the resolution of range-Doppler (RD) maps in frequency-modulated continuous wave radar systems. The proposed deep neural network consists of a U-net-based generator and a discriminator. The low-resolution (LR) RD map is processed through the generator, resulting in a super-resolution (SR) RD map. Then, the discriminator compares the SR RD map obtained from the generator with the ground truth high-resolution (HR) RD map. Finally, the generator is trained continuously until the loss between the two RD maps is minimized. The efficacy of the proposed method has been verified through simulations and real-world measurements. When compared with the ground truth HR RD map, the SR RD map generated by the proposed method showed only a 5.24% increase in pixel-wise mean squared error and a 0.477% decrease in peak signal-to-noise ratio. Through the proposed method, target detection and tracking performance can be improved by efficiently operating radar resources.

INDEX TERMS Frequency-modulated continuous wave (FMCW), generative adversarial network (GAN),
range-Doppler (RD) map, super-resolution (SR).

I. INTRODUCTION

Recently, the frequency-modulated continuous wave (FMCW) [1] has become the most commonly used waveform in automotive radar systems. The FMCW radar system determines the maximum detectable range, velocity, range resolution, and velocity resolution based on the time and frequency resources used. In other words, the radar system's target detection performance depends on its bandwidth or the number of chirps used, which are referred to as radar resources. For example, the range resolution depends on the bandwidth used by the waveform, and the velocity resolution depends on the frame time [2]. Because the chirp duration is constant, using more chirps can lead to a longer frame time. Therefore, when more chirps are used in one frame, the velocity resolution of the target increases, but the number of frames that can be obtained is reduced. Conversely, if the frame time is shortened by reducing the number of chirps, more frames can be obtained, but the velocity resolution of the target decreases.

In this study, we propose a deep neural network to enhance the velocity resolution of the targets in range-Doppler (RD) maps obtained with FMCW radar systems. The low-resolution (LR) RD map is defined as the RD map generated using a small number of chirps. The deep learning network aims to increase the velocity resolution of these RD maps as if more chirps were used. To accomplish this, we first create a database of LR RD maps that use fewer radar resources. Meanwhile, we also create a database of ground truth high-resolution (HR) RD maps that use more radar resources. Then, we design a generative adversarial network (GAN) [3]-based network to transform the LR RD map into a super-resolution (SR) RD map. In general, the structure of the GAN consists of two main parts: a generator and a discriminator. When the generator receives input data of a noise vector or

VOLUME 10, 2022 1

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4

arbitrary form, it converts the input into the data desired by the user. On the other hand, the discriminator numerically compares the generated data with the ground truth data to determine how similar the generated data is. After all, the training objective of the GAN is to make the generated data very similar to the ground truth data, making the two indistinguishable.

Some studies have been conducted to improve the resolution of radar data by applying deep neural networks. For example, the Dense U-Net, which consists of convolution layers and skip connections, is used to enhance the resolution of weather radar data in [4]. In [5], the authors proposed a modified residual deep neural network to enhance the direction-of-arrival resolution. In [6], the authors proposed a noise-free GAN to suppress the noise and enhance the overall resolution in synthetic aperture radar images. Also, a GAN-based network for medical image translation, which consists of a CasNet generator and a patch discriminator, was used to enhance the resolution in the time-velocity plane (i.e., micro-Doppler signature) in [7] and in the range-angle plane in [8]. In [9], a radar-SRGAN using a radar coordinate transfer module and a digital beam-forming method was proposed to improve the resolution in the range-angle plane. Moreover, a deep mutual GAN consisting of two generators and one discriminator was used to enhance the angular resolution in the radar system [10].

In our study, we employed the pix2pix [11]-based SR image generation algorithm to enhance the image resolution. Previous studies in the generation of LR radar data have utilized 20-50% of the available radar resources. Our method utilized only 12.5% of the chirps in one frame to produce the LR data. This highlights that our method can achieve a comparable enhancement of the resolution by utilizing a smaller amount of radar resources. In addition, our method does not require additional processing steps when generating low-resolution images or using them as input for the deep neural network. Furthermore, the U-Net [12] structure, which can show comparable results with a relatively small dataset, was used as the pix2pix-based generator. Thus, the training time can be shortened.

Finally, the proposed network's performance is evaluated through simulations and actual radar signal measurements. The SR RD map generated by the proposed network is compared with the ground truth HR RD map generated using more radar resources. Additional experiments are conducted to verify the effectiveness of the proposed method in terms of radar resource operation. Through the proposed method, the number of frames that can be measured during the same time interval increases, so that the trajectory of the target can be tracked more effectively.

In summary, the major contributions of our work can be summarized as follows:
• Unlike the methods proposed in [8], [9], our method uses fewer radar resources and does not require additional processing steps when generating low-resolution or high-resolution radar data.
• In contrast to the approach proposed in [10], which utilizes a deep learning network to increase resolution in the range-angle plane, our method employs a simple structure to enhance resolution in the range-Doppler plane through the use of deep learning.
• Because the same effect as using many chirps can be obtained even with a smaller number of chirps, the frame time in radar can be shortened, and thus, the trajectory of a target can be identified efficiently during the same time period.

The remainder of the paper is organized as follows. In Section II, we introduce a conventional RD map generation method in the FMCW radar system. Then, the GAN-based deep learning network for generating SR RD maps is presented in Section III. Next, in Section IV, the performance of the proposed method is verified through simulations and actual measurements. Finally, we conclude this paper in Section V.

FIGURE 1. Signal transmitted from the FMCW radar system.

II. RD MAP GENERATION IN FMCW RADAR SYSTEM
In this section, we describe the basic principles of generating RD maps in FMCW radar systems. In addition, we introduce a conventional HR RD map generation method for building a ground truth HR RD map database.

A. BASIC RD MAP GENERATION IN FMCW RADAR SYSTEM
Because the FMCW can simultaneously obtain range and velocity information of targets, it has been widely used in automotive radar systems in recent years [1]. As shown in Fig. 1, a total of Nc chirps are transmitted sequentially. In each chirp, the frequency linearly increases over a constant time interval called the chirp duration. In Fig. 1, fc and B represent the carrier frequency and operating bandwidth of the waveform, respectively. In addition, the entire transmission period, which is expressed as Tf in the figure, is defined as one frame.

Fig. 2 shows the overall block diagram of the FMCW radar system. The FMCW radar system is composed of a waveform generator, a voltage-controlled oscillator (VCO), amplifiers, signal mixers, transmitting and receiving antennas (Tx and

Rx), a 90° phase shifter, frequency mixers, low-pass filters (LPFs), an analog-to-digital converter (ADC), and a digital signal processor. Let us assume that a total of L targets are located in the field of view of the radar system. Also, let Rl and vl represent the relative distance and the relative velocity of the l-th target, respectively. When the signal transmitted from the radar is reflected by the l-th target and then returned to the radar, a time delay component is added to the received signal due to the relative distance Rl. In addition, the Doppler shift, which can be expressed as fd = 2*vl*fc/c, is added to the received signal due to the relative velocity vl, where c denotes the speed of light.

As shown in Fig. 2, the received signal passes through the amplifier to compensate for the path loss in the receiving process, and is converted into a baseband signal by the frequency mixer and the LPF. Finally, the signals passed through the LPF are sampled at the ADC. The signal after passing through the ADC can be expressed as

T[s, c] = sum_{l=1}^{L} A_l * exp( j2π( (2*R_l*B)/(c*Δt) * s + f_d*T_f * c + (2*f_c*R_l)/c ) ),   (1)

where A_l denotes the amplitude of the baseband signal corresponding to the l-th target. In addition, s (s = 1, 2, ..., Ns) and c (c = 1, 2, ..., Nc) in (1) denote the index of time samples in each chirp and the index of each chirp, respectively.

Then, the time-sampled signal in (1) can be expressed as a two-dimensional (2D) matrix, as shown in Fig. 3. The ranges and the Doppler frequencies for multiple targets can be obtained by applying the Fourier transform (FT) to the time-sampled baseband signal. For example, the range information of the target can be extracted by applying the FT to the sampling axis (i.e., s-axis). In addition, the Doppler shift by the target can be estimated by applying the FT to the chirp axis (i.e., c-axis). To summarize, by applying the 2D FT to (1), the relative distance and velocity information of multiple detected targets can be obtained simultaneously [13].

The 2D FT of (1) can be expressed as

U[d, v] = (1/(Ns*Nc)) * sum_{s=0}^{Ns-1} sum_{c=0}^{Nc-1} T[s, c] * exp( -j2π( (s/Ns)*d + (c/Nc)*v ) ).   (2)

In this work, we define the absolute value of U[d, v] (i.e., |U[d, v]|) as a RD map. In general, the range resolution of the FMCW is inversely proportional to the bandwidth (Rres ∝ 1/B) [14] and the velocity resolution is inversely proportional to the frame time (Vres ∝ 1/Tf).

B. CONVENTIONAL HR RD MAP GENERATION
In the radar system, HR frequency estimation algorithms can be used to obtain HR RD maps. In this study, we use spectrum-based frequency estimation algorithms, such as the conventional beam-forming algorithm (i.e., Bartlett) [15] and the multiple signal classification (MUSIC) algorithm [16]. The Bartlett algorithm finds a weight vector of the received signal that maximizes the signal strength while keeping the noise component constant in terms of the signal-to-noise ratio (SNR). Meanwhile, the MUSIC, which is one of the subspace-based algorithms, uses the orthogonality between the signal subspace and the noise subspace.

Both of these methods use the correlation matrix of the received signal, and the correlation matrix for generating a HR RD map can be expressed as

R_n^C = (F^C(T[s, c]))_n^H (F^C(T[s, c]))_n   (3)

or

R_n^S = (F^S(T[s, c]))_n^H (F^S(T[s, c]))_n.   (4)

In (3) and (4), F(·) and (·)^H represent the one-dimensional (1D) FT and the Hermitian operator, respectively. In addition, the superscript of F(·) represents the axis to which the FT is applied, as shown in Fig. 4 (a). If the correlation matrix of (3) is used, HR target detection on the range axis is possible. On the other hand, HR target detection on the Doppler axis is possible by using the correlation matrix of (4).

FIGURE 2. Block diagram of the FMCW radar system.

FIGURE 3. The basic process of generating the RD map.
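As an illustration of the signal model in (1) and the 2D FT in (2), the sketch below simulates the sampled beat signal for two point targets and forms the RD map with a 2D FFT. The carrier frequency, bandwidth, and sample counts follow the paper; the target ranges/velocities and the simplified noiseless signal model are assumptions for illustration, not the paper's simulator.

```python
import numpy as np

c0 = 3e8                   # speed of light (m/s)
fc, B = 79e9, 3.1e9        # carrier frequency, sweep bandwidth
Ns, Nc = 256, 64           # time samples per chirp, chirps per frame
Tc = 40e-3 / Nc            # chirp duration (frame time / number of chirps)

# (range m, velocity m/s, amplitude) -- illustrative values
targets = [(10.0, 1.0, 1.0), (5.0, -0.8, 0.5)]

s = np.arange(Ns)[:, None]        # fast-time sample index
ci = np.arange(Nc)[None, :]       # slow-time (chirp) index
T = np.zeros((Ns, Nc), dtype=complex)
for R, v, A in targets:
    f_beat = 2 * R * B / (c0 * Tc)    # beat frequency from range
    f_d = 2 * v * fc / c0             # Doppler shift from velocity
    T += A * np.exp(1j * 2 * np.pi * (f_beat * s * (Tc / Ns) + f_d * ci * Tc))

U = np.fft.fft2(T) / (Ns * Nc)                  # 2D FT, as in (2)
rd_map = np.abs(np.fft.fftshift(U, axes=1))     # RD map, zero Doppler centered
print(rd_map.shape)                             # (256, 64): range x Doppler bins
```

With these assumed values, the stronger target lands near range bin 2RB/c0 ≈ 207 and, after the Doppler shift is centered, near Doppler bin 53.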


1) HR RD Map Generation Using Bartlett Method
First, we describe how to generate a HR RD map with respect to the chirp axis. Before calculating the correlation matrix as written in (4), the 1D FT is performed on the sampling axis of the data obtained in (1). Then, the pseudospectrum of the Bartlett algorithm with the n-th vector can be expressed as

P_n(v) = ( a_A^H(v) R_n a_A(v) ) / ( a_A^H(v) a_A(v) ),   (5)

where R_n is the correlation matrix obtained by (4) and a_A(v)^T = [1, e^{-jπ(2v/λ)T0}, ..., e^{-jπ(2v/λ)(Nc-1)T0}] is the steering vector considering the chirp duration. When generating a RD map using the pseudospectrum of the Bartlett, the range of the relative velocity is required. Let us assume that we divide the range of relative velocity into k intervals. By using the k velocity values, k steering vectors can be generated, as shown in Fig. 4 (b). Then we create a correlation matrix using the first vector based on the sampling axis and calculate the pseudospectrum with the k steering vectors. A k × 1 vector will be generated as a result. If this process is repeated with all of the vectors based on the sampling axis, the RD map can be generated.

FIGURE 4. (a) Visual representation of the F(·) operator. (b) The overall scheme of velocity estimation with the Bartlett algorithm.

2) HR RD Map Generation Using MUSIC Algorithm
Because we assumed L targets and Nc chirps in II-A, the correlation matrix obtained through (4) becomes an Nc × Nc matrix. In the MUSIC algorithm, an eigenvalue decomposition is applied to the correlation matrix, which separates the entire space into the signal subspace and the noise subspace. When the eigenvectors obtained through the eigenvalue decomposition are arranged in the order of magnitude of the eigenvalues, the first L eigenvectors correspond to targets, and the remaining Nc − L eigenvectors correspond to noise components:

        [ ν_{1,1}   ν_{1,2}   ...  ν_{1,L}   ...  ν_{1,Nc}  ]
    N = [ ν_{2,1}   ν_{2,2}   ...  ν_{2,L}   ...  ν_{2,Nc}  ],   (6)
        [   ...       ...     ...    ...     ...    ...     ]
        [ ν_{Nc,1}  ν_{Nc,2}  ...  ν_{Nc,L}  ...  ν_{Nc,Nc} ]

where the first L columns span the signal subspace and the remaining Nc − L columns span the noise subspace.

To calculate the pseudospectrum of the MUSIC algorithm, a matrix consisting of the eigenvectors corresponding to the noise components is used. The pseudospectrum of the MUSIC algorithm can be expressed as

P_M(v) = ( a_A(v)^H a_A(v) ) / ( a_A(v)^H E_N E_N^H a_A(v) ),   (7)

where E_N represents a matrix composed of the eigenvectors constituting the noise subspace. Finally, the velocity of the target is determined by the v that maximizes the value of the normalized pseudospectrum. In order to generate the HR RD map with respect to the sampling axis, it is necessary to use the correlation matrix of (3) and the steering vector considering the sampling interval Ts.
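A minimal sketch of the MUSIC velocity estimation in (7) on the chirp axis is given below: synthetic single-target snapshots, the correlation matrix, an eigendecomposition into signal and noise subspaces, and a search over a velocity grid. All parameter values are illustrative, and the steering vector here uses the physical per-chirp Doppler phase 2π(2v/λ)T0 (an assumption; what matters for the sketch is that the signal model and steering vector share one convention).

```python
import numpy as np

lam = 3e8 / 79e9              # wavelength at fc = 79 GHz
Nc, T0 = 64, 625e-6           # number of chirps, chirp duration
L = 1                         # assumed number of targets
v_true = 1.0                  # true relative velocity (m/s)
K = 100                       # snapshots averaged into the correlation matrix
rng = np.random.default_rng(0)
k = np.arange(Nc)

def a(v):                     # chirp-axis steering vector a_A(v)
    return np.exp(-1j * 2 * np.pi * (2 * v / lam) * k * T0)

# Snapshots: random complex amplitude on the steering vector, plus noise
X = np.stack(
    [(rng.standard_normal() + 1j * rng.standard_normal()) * a(v_true)
     + 0.1 * (rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc))
     for _ in range(K)], axis=1)                 # Nc x K

R = X @ X.conj().T / K                           # correlation matrix, cf. (4)
w, V = np.linalg.eigh(R)                         # eigenvalues in ascending order
En = V[:, :Nc - L]                               # noise subspace (smallest Nc - L)

grid = np.arange(-1.5, 1.5, 0.01)
p = [np.real((a(v).conj() @ a(v)) /
             (a(v).conj() @ En @ En.conj().T @ a(v))) for v in grid]
v_hat = grid[int(np.argmax(p))]
print(v_hat)                                     # close to v_true = 1.0
```

The pseudospectrum peaks where a_A(v) is orthogonal to the noise subspace, i.e., at the true velocity; repeating this per range bin yields the HR Doppler axis of the RD map.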

be used for the SR imaging tasks. However, the SR imaging tasks can be completed with a single algorithm if a deep neural network with an E2E structure is used. Also, most deep neural networks require large amounts of data because they use a data-driven approach rather than a rules-based one. The most significant advantage of the U-net is that it shows relatively accurate performance even with a small amount of data.

Fig. 5 shows the structure of the proposed network consisting of the generator and the discriminator. As shown in the figure, in the contracting path, a total of 8 convolution layers are used, and the kernel size and the number of strides in each convolution layer are 4 and 2, respectively. Also, all 8 convolution layers use the leaky rectified linear unit (Leaky ReLU) function as the activation function. Furthermore, the number of filters used in each of the eight layers is set to 64, 128, 256, 512, 512, 512, 512, and 512, respectively.

Next, the expansion path is a process opposite to the contracting path and likewise consists of 8 layers. To reconstruct the image size reduced after passing through 8 consecutive convolution layers, up-convolution layers (i.e., convolution transpose layers) are used. The kernel size and the number of strides of the up-convolution layers are the same as those in the contracting path, and the number of filters used in each layer is 512, 512, 512, 512, 256, 128, 64, and 3, respectively. In addition, the rectified linear unit (ReLU) function is used as the activation function for the up-convolution layers in the expansion path.

B. STRUCTURE OF THE DISCRIMINATOR
As shown in Fig. 5, the patch discriminator [18] is used. Unlike the pixel discriminator in [19], which compares every corresponding pixel of the generated image and the ground truth image, the patch discriminator determines the authenticity of a generated image in patch units of a specific size rather than over the entire image. By using a patch discriminator, determining whether an image is real or fake with fewer parameters is possible.

When training is performed by calculating the difference of each pixel, such as with the L1 loss, low frequency components in the image are well generated, while high frequency components are not. To accurately generate high frequency components in the image, it is necessary to focus on a local part of the image rather than on the entire image. Therefore, the L1 loss and the patch discriminator are used to restore the low and high frequency components in the image, respectively.

A 256 × 256 × 3 image generated through the U-net is combined with an identically sized ground truth image to create a 256 × 256 × 6 image. The created 256 × 256 × 6 image is used as input to the patch discriminator. After the input layer, it passes through three convolution layers. In each convolution layer, the number of filters increases in the order of 64, 128, and 256. In addition, the kernel size and the number of strides are 4 and 2, and the Leaky ReLU function is used as the activation function. Finally, a zero padding layer and a convolution layer with kernel size and number of strides of 4 and 1 are used twice as a pair. As a result, we get a feature map of size 30 × 30 × 1.

C. LOSS FUNCTIONS IN THE GENERATOR AND THE DISCRIMINATOR
Let I_LR^{m,n} be the input for the proposed U-net-based generator and G_unet^{m,n} be the generated output. Additionally, let GT^{m,n} be the ground truth image of the corresponding LR image (i.e., I_LR^{m,n}), where m and n represent the width and height of the image. In the generator, two loss functions are defined, which can be expressed as

L_G1 = -(1/(mn)) * sum_{m∈M} sum_{n∈N} ( O^{m,n} log(D(G_unet^{m,n})) )   (8)

FIGURE 5. The pipeline of the proposed network for enhancing the resolution of the RD map.
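As a dimensional cross-check of the architecture described in Sections III-A and III-B, the sketch below walks the stated kernel-4 settings through the contracting path, the expansion path, and the patch discriminator. The padding of 1 per side in the stride-2 convolutions is an assumption (standard in pix2pix); the paper does not state it explicitly.

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Output spatial size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

# Generator, contracting path: 8 stride-2 convolutions (Leaky ReLU)
enc_filters = [64, 128, 256, 512, 512, 512, 512, 512]
size, shapes = 256, []
for f in enc_filters:
    size = conv_out(size)
    shapes.append((size, size, f))          # 128, 64, ..., down to 1

# Generator, expansion path: 8 stride-2 up-convolutions (ReLU)
dec_filters = [512, 512, 512, 512, 256, 128, 64, 3]
for f in dec_filters:
    size *= 2                               # a stride-2 transposed conv doubles the size
    shapes.append((size, size, f))

# Patch discriminator: three stride-2 convolutions, then two
# (zero-pad by 1, kernel-4 stride-1 conv) pairs: 32 -> 31 -> 30
d = 256
for _ in range(3):
    d = conv_out(d)
for _ in range(2):
    d = conv_out(d + 2, kernel=4, stride=1, pad=0)

print(shapes[7], shapes[-1], (d, d, 1))     # bottleneck, output, patch map
```

Under these assumptions the bottleneck is 1 × 1 × 512, the generator output is 256 × 256 × 3, and the discriminator's patch map is 30 × 30 × 1, matching the sizes stated in the text.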


and

L_l1 = sum_{m∈M} sum_{n∈N} || GT^{m,n} − G_unet^{m,n} ||_1,   (9)

where D(·) and O^{m,n} denote the output of the discriminator and a matrix of size m × n in which all elements are 1, respectively. In other words, the first loss function in (8) is the binary cross-entropy (BCE) loss between D(·) and O^{m,n}. The second loss function in (9) is the L1 loss function. The loss of the entire generator is obtained through the weighted sum of the two loss functions. The total loss function of the generator is defined as

L_G = λ1 L_G1 + λ2 L_l1,   (10)

where λ1 and λ2 are the weights for each loss function. To determine the values of the two weights, the value of λ1 was fixed at 1 and the value of λ2 was gradually increased. Finally, λ1 and λ2 were set to 1 and 40, respectively.

For the loss function of the discriminator, the sum of two BCE losses is used. The two BCE loss functions can be expressed as

L_D1 = −(1/(MN)) * sum_{m∈M} sum_{n∈N} O^{m,n} log(D(GT^{m,n}))   (11)

and

L_D2 = −(1/(MN)) * sum_{m∈M} sum_{n∈N} (1 − Z^{m,n}) log(1 − D(G_unet^{m,n})),   (12)

where Z^{m,n} denotes a matrix of size m × n in which all elements are 0. Finally, the total loss function consisting of the two BCE loss functions is expressed as

L_D = L_D1 + L_D2.   (13)

These two loss functions, the BCE of the ground truth image and the BCE of the generated image, are added to each other without multiplying weights.

IV. PERFORMANCE EVALUATION

A. SR RD MAP GENERATION RESULTS FROM SIMULATIONS

1) Simulation Conditions
First, we verify the performance of the proposed network through simulations. The simulation dataset was generated using the radar parameters in Table 1 and the signal model in (1). To increase the similarity with the radar sensor data obtained in real-world measurements, the radar cross section (RCS) of the target and the signal attenuation according to the distance between the target and the radar were considered in (1). The RCS value of the target was set based on the shape of a trihedral with a side length of 20 cm, which is used in the real-world measurement. In addition, white Gaussian noise is added to (1) in consideration of the noise component generated in the experimental environment. Because the simulations were designed based on actual radar system parameters [20], the results obtained in the simulation and the actual environment show a high degree of similarity except for slight differences due to noise components. Therefore, the weights obtained through simulation can be saved (i.e., pre-trained weights) and used in the training process with the actual dataset. Using the pre-trained weights, large amounts of training data are not required, and the training time is also reduced.

In addition, the number of targets appearing in the RD map is set from 1 to 3 in the simulation. 250 RD maps were generated for each case, resulting in a total of 750 different RD maps. When generating the RD map, each target's relative distance and velocity are set randomly between 0 ∼ 25 m and −10 ∼ 10 m/s, respectively. By applying the data augmentation technique that flips the image horizontally, vertically, and diagonally based on the image's origin, a total of 3000 RD maps were defined as the training dataset. For the test dataset, 30 RD maps were generated for each case.

2) Simulation Results
In general, the frame time determines the resolution of the Doppler axis (i.e., the velocity axis). As mentioned in Section I, the chirp duration in the FMCW radar system is constant. Therefore, we varied the number of chirps to adjust the frame time. In the simulation, we verified the performance of the proposed network by changing the number of chirps for the LR and the ground truth HR RD maps. First, when the LR and the ground truth HR RD maps are generated using 4 chirps and 64 chirps, respectively, the SR RD map generated by the proposed network is shown in Fig. 6. As shown in the figure, even though the resolution of the RD map was increased through the proposed method, the target's location in the SR RD map cannot be accurately found. In addition, the target located farthest from the radar and detected with weak signal strength disappears from the generated SR RD map.

In addition, in generating the ground truth HR RD map,

TABLE 1. Parameters of the radar system.

Parameter | Value
Carrier frequency, fc | 79 GHz
Bandwidth, B | 3.1 GHz
Range resolution, Rres | 4.8 cm
The number of chirps, Nc | 8 or 64
The number of time samples in each chirp, Ns | 256
Frame time, Tf | 40 ms

FIGURE 6. Generated SR RD map with the LR RD map (Nc = 4) and the ground truth HR RD map (Nc = 64).
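The entries in Table 1 can be checked against the standard FMCW relations Rres = c/(2B) and Vres = λ/(2Tf); the velocity-resolution formula is the usual textbook relation (an assumption here, consistent with Vres ∝ 1/Tf stated in Section II), not one the paper writes out.

```python
c0 = 3e8        # speed of light (m/s)
fc = 79e9       # carrier frequency (Hz)
B = 3.1e9       # bandwidth (Hz)
Tf = 40e-3      # frame time (s)

R_res = c0 / (2 * B)             # range resolution: ~0.048 m, i.e. the 4.8 cm in Table 1
V_res = (c0 / fc) / (2 * Tf)     # velocity resolution at the full 40 ms frame time

print(f"R_res = {R_res * 100:.1f} cm, V_res = {V_res:.3f} m/s")
```

Halving the number of chirps (and hence the frame time) doubles V_res, which is exactly the resolution loss the proposed network is trained to recover.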


we performed simulations by changing the number of chirps to 32, 64, and 128, as shown in Fig. 7. As the number of chirps increases from 32 to 64, the increase in resolution is noticeable. However, there is no significant improvement in resolution when the number of chirps is increased from 64 to 128. Therefore, when generating the ground truth HR RD maps, it is appropriate to use 64 or 128 chirps instead of 32 chirps.

FIGURE 7. Generated RD maps with the different numbers of chirps used.

Moreover, we compared the outputs (i.e., generated SR RD maps) of the network trained using the ground truth HR RD maps of 64 chirps and the network trained using the ground truth HR RD maps of 128 chirps, where the LR RD map was generated from 8 chirps. As shown in Fig. 8, if the resolution of the ground truth HR RD map for training the proposed network is too high, a target with relatively weak signal strength disappears from the SR RD map generated by the proposed network. In summary, the upper and lower bounds of the number of chirps used for generating the LR and the ground truth HR RD maps are 8 and 64, respectively. Finally, Figs. 9 (a) and (b) show examples of the LR RD maps and ground truth HR RD maps when the number of targets is 1, 2, and 3, respectively.

FIGURE 8. Comparison between the generated SR RD maps and the ground truth HR RD maps.

As mentioned in the previous section, we generated a training dataset consisting of 3000 LR RD maps and ground truth HR RD maps. In addition, we also generated a test dataset consisting of 90 RD maps to validate the performance of the proposed network. When the LR RD map shown in Fig. 9 (a) is used as input and the ground truth HR RD map shown in Fig. 9 (b) is used as the ground truth image, the newly generated RD map through the proposed method is shown in Fig. 10. As shown in Fig. 10, when the LR map is used as an input, the new RD map obtained through the GAN-based network is very similar to the ground truth HR RD map.

FIGURE 9. Generated RD map with simulation: (a) LR (Nc = 8) RD map. (b) Ground truth (Nc = 64) HR RD map.

FIGURE 10. Generated RD map images with the proposed network.

In addition to a simple simulation scenario where the targets are all separated, we have also obtained data for simulations that more closely resemble real-world measurements. These include cases where targets are closely located in the RD map, making it difficult to distinguish their individual areas, as well as cases where the SNR is low. Fig. 11 (a) shows the result of the proposed method when there is a partially overlapping region in the RD map due to closely located targets. Also, Fig. 11 (b) shows the result of applying the proposed method to the RD map obtained with a lower SNR value than in the ideal simulation scenarios.

Moreover, we verified whether the targets can be successfully detected in the RD maps generated through the


proposed method in Fig. 11 by applying the constant false alarm rate (CFAR) algorithm. Figs. 12 (a) and (b) show the results of applying the CFAR to the ground truth RD maps and generated RD maps in Figs. 11 (a) and (b), respectively. In the case where the targets are partially overlapped, the CFAR algorithm can detect both targets in both the ground truth RD map and the generated RD map. On the other hand, in a noisy environment with partially overlapping targets, only one target was detected in the ground truth RD map, while both targets were detected in the generated RD map. Therefore, the proposed method can enhance the resolution of closely located targets in the RD map and effectively detect targets even when the SNR is low. However, when the targets are perfectly overlapped in the RD map and cannot be distinguished, the resolution of the targets could not be enhanced even if the proposed method was used.

We quantitatively evaluate how similar the newly generated RD map is to the ground truth HR RD map. Because the GAN is not a deep neural network for classification or prediction tasks, the accuracy score cannot be used as an evaluation measure. Instead, several image quality assessment (IQA) methods have been proposed to evaluate the similarity between images quantitatively. For example, the pixel-wise mean squared error (PMSE), peak SNR (PSNR) [21], structural similarity index measure (SSIM) [22], and visual information fidelity (VIF) [23] can be used for the IQA. Among these IQA methods, the PMSE and PSNR were used to calculate the similarity between the generated RD map and the ground truth RD map. These two measures are defined as

PMSE = Σ_{i∈{R,G,B}} [ (1/(MN)) Σ_{m∈M} Σ_{n∈N} (G_unet^{m,n} − GT^{m,n})_i² ]  (14)

and

PSNR = 10 log₁₀( (255.0)² / PMSE ).  (15)

In addition, we evaluated the similarity through the distribution of the pixels in various RD maps. In particular, we used the average and standard deviation of the pixel values.

The overall network training process can be seen in Alg. 1 below. Before training, the dataset generated by simulations and the dataset obtained through the actual experiments are required. The network is first trained on the dataset generated through simulations. When the following two conditions are satisfied simultaneously during training, training is stopped and the weight vectors are saved:

|ρ[I_LR^{m,n}, G_unet^{m,n}] − ρ[I_LR^{m,n}, GT^{m,n}]| / ρ[I_LR^{m,n}, GT^{m,n}] < E1/100  (16)

and

|ς(GT^{m,n}) − ς(G_unet^{m,n})| / ς(G_unet^{m,n}) < E1/100,  (17)

where ρ[·, ·] denotes the PMSE value between its two arguments and ς(·) denotes the PSNR value.

After the network is trained with the simulation dataset, it is retrained with the actual dataset. When training the network with the actual dataset, the saved weight vectors are used in the first epoch. From the second epoch, the weight vectors updated during the training process are used. Finally, when the two conditions mentioned above are satisfied again, the training is stopped. One thing changes, however: because the network is retrained based on the pre-trained weights, the threshold E1 is replaced with a smaller value, E2. In this paper, we set E1 and E2 to 10 and 5, respectively. After the network was trained

FIGURE 11. Generated RD map with proposed network: (a) When the targets are closely located. (b) When the targets are in a noisy environment (i.e., low SNR).

FIGURE 12. The results after applying the CFAR algorithm: (a) When the targets are closely located. (b) When the targets are closely located in the low SNR scenario.
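Eqs. (14) and (15) translate directly into a few lines of NumPy. The sketch below is our own illustration (the function names are ours), assuming the RD maps are stored as 8-bit RGB images in (M, N, 3) float arrays, as Eq. (14)'s sum over R, G, B suggests.

```python
import numpy as np

def pmse(gen, gt):
    """Eq. (14): mean squared error per color channel, summed over R, G, B.
    gen, gt: float arrays of shape (M, N, 3) with pixel values in [0, 255]."""
    m, n, _ = gt.shape
    per_channel = ((gen - gt) ** 2).sum(axis=(0, 1)) / (m * n)
    return float(per_channel.sum())

def psnr(gen, gt):
    """Eq. (15): peak SNR in dB for 8-bit images."""
    return 10.0 * np.log10(255.0 ** 2 / pmse(gen, gt))

# toy check: a constant offset of 10 in all three channels gives
# PMSE = 3 * 10^2 = 300 and PSNR = 10 log10(255^2 / 300)
a = np.zeros((64, 64, 3))
b = a + 10.0
print(pmse(b, a))             # 300.0
print(round(psnr(b, a), 2))   # 23.36
```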

Algorithm 1 Training process of the proposed network.
Require: simulation and actual datasets S(x, ŷ, y), A(x, ŷ, y)
Require: LG, LD, LG^(s), LD^(s), O^{m,n}, Z^{m,n}
Require: Epoch = n
With S(x, ŷ, y):
  for epoch ∈ n:
    if (PSNR and PMSE) > E1%:
      Obtain generated image D(G_unet^{m,n})
      Calculate loss functions LG^(s), LD^(s)
      θG^(s) ← θG^(s) + ω dLG^(s)/dθG^(s)
      θD^(s) ← θD^(s) + ω dLD^(s)/dθD^(s)  ▷ apply gradients
    else:
      Save θp; θp = (θG^(s), θD^(s))
    end
  end
With A(x, ŷ, y):
  for epoch ∈ Epochs:
    if epoch = 1:
      θG, θD ← θp  ▷ apply pretrained gradients
      Obtain generated image D(G_unet^{m,n})
      Calculate loss functions LG, LD
    else:
      if (PSNR and PMSE) > E2%:
        Obtain generated image D(G_unet^{m,n})
        Calculate loss functions LG, LD
        θG ← θG + ω dLG/dθG
        θD ← θD + ω dLD/dθD  ▷ apply gradients
      else:
        Save (PSNR and PMSE)
      end
    end
  end

FIGURE 13. Comparison between the ground truth and the generated SR RD map.
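Algorithm 1 boils down to two loops with an early-stopping test. The skeleton below is our own paraphrase, not the authors' implementation: step_fn and eval_fn are placeholder callables, where eval_fn returns the two relative errors of Eqs. (16) and (17) in percent.

```python
def train_two_stage(sim_data, real_data, step_fn, eval_fn,
                    e1=10.0, e2=5.0, max_epochs=100):
    """step_fn(dataset, weights) -> updated weights (one epoch of GAN updates);
    eval_fn(weights) -> (pmse_err, psnr_err), the relative errors of
    Eqs. (16) and (17) in percent."""
    weights = None
    # Stage 1: train on the simulation dataset until both errors fall below E1.
    for _ in range(max_epochs):
        weights = step_fn(sim_data, weights)
        pmse_err, psnr_err = eval_fn(weights)
        if pmse_err < e1 and psnr_err < e1:
            break                      # save the pretrained weights (theta_p)
    # Stage 2: retrain on the measured dataset starting from the saved
    # weights, with the tighter threshold E2.
    for _ in range(max_epochs):
        weights = step_fn(real_data, weights)
        pmse_err, psnr_err = eval_fn(weights)
        if pmse_err < e2 and psnr_err < e2:
            break
    return weights
```

With E1 = 10 and E2 = 5 as in the paper, the second stage demands a tighter match because it starts from the pre-trained weights.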

with the training dataset generated through simulations, the network was verified with the test dataset generated through simulations. As measures of the verification, the PMSE value between the LR RD map and the ground truth HR RD map (i.e., ρ[I_LR^{m,n}, GT^{m,n}]) and the PMSE value between the LR RD map and the deep learning-based SR RD map (i.e., ρ[I_LR^{m,n}, G_unet^{m,n}]) were used. In addition, as another measure of the verification, the PSNR value of the ground truth HR RD map (i.e., ς(GT^{m,n})) and the PSNR of the deep learning-based SR RD map (i.e., ς(G_unet^{m,n})) were used. As shown in Fig. 13, when comparing the deep learning-based SR RD map with the ground truth, the PMSE value increased by 7.819% and the PSNR decreased by 2.481%.

Figs. 14 (a) and (b) show the distribution of average pixel values in the test dataset. The x- and y-axes represent the range of pixel values in the RD map image and the number of pixels with each value, respectively. For pixel values below 50, the distribution of the LR RD map is more dispersed than that of the ground truth RD map. Also, the pixel distribution of the generated RD map follows the distribution of the ground truth RD map. The average pixel values of the LR RD map, ground truth RD map, and generated RD map are 42.386, 37.849, and 38.402, respectively. Also, the standard deviations are 13.972, 3.287, and 5.98. Therefore, we verified that the generated RD maps are highly similar to the ground truth RD maps.

FIGURE 14. Distribution of average pixel values in the test dataset: (a) For all pixel values. (b) Pixel values between 20 and 60.

B. SR RD MAP GENERATION RESULTS FROM ACTUAL MEASUREMENTS

1) Experimental Environments
To show the effectiveness of our proposed deep neural network, we conducted actual measurements using the AWR1642BOOST board, which was produced by Texas Instruments (TI) [20]. We used the AWR1642BOOST board connected with a DCA1000EVM, as shown in Fig. 15. The AWR-1642 radar module has two transmit antenna elements and four receiving antenna elements. The physical size of the antenna mounted on the board is 30 × 19 mm. Also, the spacing between the transmitting antenna elements, the spacing between the transmitting and receiving antenna elements, and the spacing between the receiving antenna elements are d, 4d, and 4.5d, respectively. Moreover, the 3 dB beam width in the azimuth direction is 70 degrees, and the 3 dB beam width in the elevation direction is 30 degrees. Because our goal is to enhance the resolution for target detection in the RD map, it is sufficient to acquire sensor data using only one transmit antenna and one receiving antenna. In summary, one transmit and one receiving antenna element were used among the multiple-input and multiple-output antenna system [24]. The data acquired through the AWR-1642 radar module can be saved as a binary file through the DCA1000EVM board [25]. Then, the data stored as binary files can be read through TI-provided code implemented in Matlab or Python.

Fig. 16 shows the experimental environment for the radar signal measurements. In this environment, radar sensor data were obtained through 9 different scenarios, as shown in Fig. 17. As mentioned in Section IV-A, trihedral corner reflectors with a side length of 20 cm were used as targets in the measurement. Various RD maps were obtained because the moving direction and velocity of the targets are different in each scenario. For each scenario, we obtained 128 frames of radar data, of which the first 110 frames were used as the training dataset and the remaining 18 frames were used as the test dataset.

FIGURE 16. Experimental environment for radar signal measurement.

2) Experimental Results
By changing the number of chirps used, as mentioned in Section IV-A, a total of 990 pairs of LR and ground truth HR RD maps were used as the training dataset. Also, 162 pairs of LR and ground truth HR RD maps were used as the test dataset. Fig. 18 shows, from the leftmost, the LR RD map, the ground truth HR RD map, the SR RD map generated by the proposed deep learning network, and the HR RD maps generated by applying the Bartlett and MUSIC algorithms to the Doppler axis. When qualitatively evaluating the results, the RD maps generated through the MUSIC algorithm and the proposed deep neural network show the most similarity to the ground truth image. Applying Bartlett reduces sidelobes but increases the resolution only very slightly. In addition, although the RD map from the MUSIC algorithm exhibits a high resolution comparable to the ground truth HR RD map, a target is often not detected in the RD map, as shown in Fig. 18 (c). The target disappears because the MUSIC algorithm's performance is highly sensitive to the target's SNR. In addition, a significant disadvantage of the MUSIC algorithm is that the number of targets must be less than the number of chirps used in one frame [26]. Therefore, there is a limit to applying the MUSIC algorithm when the number of targets exceeds the number of radar resources.
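For reference, the MUSIC step applied to the Doppler axis can be sketched as below. This is our own minimal implementation, not the authors' code: forward smoothing builds overlapping subarray snapshots from the single slow-time vector of one range bin, and the subarray length, grid size, and test frequency are arbitrary. It also makes the quoted constraint visible: the noise subspace exists only while the number of targets is smaller than the subarray (and hence chirp) count.

```python
import numpy as np

def music_doppler(slow_time, num_targets, sub_len=16, grid=512):
    """MUSIC pseudospectrum along the Doppler (slow-time) axis of one range
    bin. Forward smoothing turns the single slow-time vector into
    overlapping length-sub_len snapshots so a covariance can be estimated."""
    nc = len(slow_time)
    snaps = np.stack([slow_time[i:i + sub_len]
                      for i in range(nc - sub_len + 1)], axis=1)
    cov = snaps @ snaps.conj().T / snaps.shape[1]
    _, vecs = np.linalg.eigh(cov)                # eigenvalues ascending
    noise = vecs[:, :sub_len - num_targets]      # noise subspace
    freqs = np.linspace(-0.5, 0.5, grid, endpoint=False)
    steer = np.exp(2j * np.pi * np.outer(np.arange(sub_len), freqs))
    spec = 1.0 / np.linalg.norm(noise.conj().T @ steer, axis=0) ** 2
    return freqs, spec

# one noiseless Doppler tone at normalized frequency 0.2 across 32 chirps
s = np.exp(2j * np.pi * 0.2 * np.arange(32))
freqs, spec = music_doppler(s, num_targets=1)
print(freqs[np.argmax(spec)])   # peak close to 0.2
```

Note that `noise = vecs[:, :sub_len - num_targets]` is empty once num_targets reaches sub_len, which mirrors the limitation discussed above.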
FIGURE 15. AWR1642BOOST and DCA1000EVM manufactured by Texas Instruments.

Fig. 19 quantitatively compares the similarity between the RD maps generated by the existing methods and the RD maps generated based on the proposed deep neural network. Here, G_Bart^{m,n} and G_MUSIC^{m,n} represent the HR image outputs of the Bartlett and MUSIC algorithms, respectively. To compare the proposed deep neural network-based method with the existing SR imaging methods, the PMSE value between the LR RD

FIGURE 17. 9 experimental scenarios for the performance evaluation.

FIGURE 18. Comparison between generated RD map from the conventional HR imaging algorithms and the proposed deep learning-based method.

map and the ground truth HR RD map and the PSNR value of the ground truth HR RD map were set as reference values. In terms of the PMSE, the deep learning-based method, the Bartlett-based method, and the MUSIC-based method increased by 5.24%, 18.8%, and 28.9% compared to the reference value, respectively. In terms of the PSNR, the deep learning-based method, the Bartlett-based method, and the MUSIC-based method decreased by 0.477%, 1.619%, and 2.387% compared to the reference value, respectively. Furthermore, the average pixel values of the LR RD map,

ground truth RD map, and generated RD map were 45.384, 38.912, and 39.715, respectively. The standard deviations of the pixel values were 15.857, 4.291, and 6.118. Consequently, the proposed deep learning-based method enhanced the resolution closer to the ground truth image than the existing HR imaging algorithms.

FIGURE 19. Comparison between the proposed and the conventional HR imaging methods: (a) PMSE and (b) PSNR values.

C. EFFICIENT MANAGEMENT OF RADAR RESOURCES
The following experiments were conducted to emphasize the resource-efficient aspect of our proposed method. The number of chirps required to generate the range-Doppler map was reduced to 12.5%, and the period of each frame was also decreased to 25%, as shown in Fig. 20. Because the period of one frame is shortened to 25%, 4 frames can be measured within the same time when 8 chirps are used. In other words, because the measurement period for the target is reduced to 25%, it is possible to know the trajectory of the moving target more precisely. As shown in Fig. 20, when the detection result for a moving target is obtained using 64 chirps, the transition from the first frame to the second frame follows the green dotted line. At the same time, if we use 8 chirps to acquire detection results, 3 more frames can be obtained along the orange dotted line. If the RD map with the reduced velocity resolution obtained using only 8 chirps is regenerated into the SR RD map using the proposed deep neural network, the same effect as obtaining 4 frames with 64 chirps can be achieved. Finally, the red lines in the figure show the result of converting the LR RD map into the SR RD map.

V. CONCLUSION
In this paper, we proposed a deep learning-based network for enhancing the resolution of the LR RD maps in the FMCW radar system. The proposed network consists of the U-net-based generator and the discriminator. When the U-net receives an image as input, it divides the image into feature maps. Then it increases the resolution of the image in the process of reconstructing the image back to its original size. In addition, the discriminator evaluates the performance of the generator by comparing the resolution-enhanced image with the ground truth image. The performance of the proposed network was verified through simulations and actual measurements. To evaluate the similarity between the RD map generated by the proposed network and the ground truth HR RD map, the PMSE and PSNR were calculated. Compared with the conventional HR imaging algorithms (i.e., the Bartlett and MUSIC algorithms), the PMSE value decreased by 12.9% and 22.5%, respectively, and the PSNR value increased by 1.1% and 1.9%, respectively, with our proposed method. Based on these measures, we confirmed that the RD map generated by the proposed method showed a higher resemblance to the ground truth HR RD map than the RD maps generated from the conventional HR imaging algorithms. Moreover, additional experiments were conducted to verify the performance of the proposed method in terms of radar resource operation, and the target tracking performance could be improved through the proposed method. Although the proposed deep neural network-based technique was trained with the data obtained through the automotive radar system, this does not mean that the proposed technique is limited to the automotive radar system. The proposed method can be applied to all radar systems capable of obtaining RD map data to enhance the resolution of targets.

REFERENCES
[1] V. Winkler, "Range Doppler detection for automotive FMCW radars," 2007 European Radar Conference (EuRAD), Munich, Germany, October 2007, pp. 166–169.
[2] S. Rao, Introduction to mmwave Sensing: FMCW Radars, Texas Instruments, Dallas, TX, USA, [Online]. Available: https://www.ti.com/video/series/mmwave-training-series.html.
[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," 2014 Neural Information Processing Systems (NIPS), Montreal, Canada, December 2014, pp. 2672–2680.
[4] A. Geiss and J. Hardin, "Radar super resolution using a deep convolutional neural network," Journal of Atmospheric and Oceanic Technology, vol. 37, no. 12, pp. 2197–2207, November 2020.
[5] M. Alizadeh, M. Chavoshi, A. Samir, A. M. Hegazy, A. Bahri, M. Basha, and S. Safavi-Naeini, "Experimental deep learning assisted super-resolution radar imaging," 2021 European Radar Conference (EuRAD), London, United Kingdom, April 2022, pp. 153–156.
[6] F. Gu, H. Zhang, C. Wang, and F. Wu, "SAR image super-resolution based on noise-free generative adversarial network," 2019 International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, July 2019, pp. 2575–2578.
[7] K. Armanious, C. Jiang, M. Fischer, T. Küstner, T. Hepp, K. Nikolaou, S. Gatidis, and B. Yang, "MedGAN: medical image translation using GANs,"

FIGURE 20. An example of efficient radar resource management.

Computerized Medical Imaging and Graphics, vol. 79, no. 101684, pp. 1–14, January 2020.
[8] K. Armanious, S. Abdulatif, F. Aziz, U. Schneider, and B. Yang, "An adversarial super-resolution remedy for radar design trade-offs," 2019 IEEE European Signal Processing Conference (EUSIPCO), Coruna, Spain, September 2019, pp. 1–5.
[9] H.-W. Cho, W. Kim, S. Choi, M. Eo, S. Khang, and J. Kim, "Guided generative adversarial network for super resolution of imaging radar," 2020 European Radar Conference (EuRAD), Utrecht, Netherlands, January 2021, pp. 144–147.
[10] H. Xing, M. Bao, Y. Li, L. Shi, and M. Xing, "Deep mutual GAN for life-detection radar super resolution," IEEE Geoscience and Remote Sensing Letters, vol. 19, pp. 1–5, March 2022.
[11] P. Isola, J. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, Hawaii, July 2017, pp. 5967–5976.
[12] O. Ronneberger, P. Fischer, and T. Brox, "U-net: convolutional networks for biomedical image segmentation," International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, October 2015, pp. 234–241.
[13] S. Patole, M. Torlak, D. Wang, and M. Ali, "Automotive radars: a review of signal processing techniques," IEEE Signal Processing Magazine, vol. 34, no. 2, pp. 22–35, March 2017.
[14] M. N. Cohen, "An overview of high range resolution radar techniques," 1991 National Telesystems Conference Proceedings (NTC), Atlanta, GA, USA, March 1991, pp. 107–115.
[15] M. Bartlett, "Smoothing periodograms from time-series with continuous spectra," Nature, vol. 161, no. 4096, pp. 686–687, May 1948.
[16] R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276–280, March 1986.
[17] J. H. Saltzer, D. P. Reed, and D. D. Clark, "End-to-end arguments in system design," International Conference on Distributed Computing Systems (ICDCS), Paris, France, April 1981, pp. 509–512.
[18] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, June 2015, pp. 1–9.
[19] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, "Context encoders: feature learning by inpainting," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016, pp. 2536–2544.
[20] AWR1642 Single-Chip 77- and 79-GHz FMCW Radar Sensor, Texas Instruments, Dallas, TX, USA, [Online]. Available: https://www.ti.com/lit/ds/symlink/awr1642.pdf
[21] C. Yang, C. Ma, and M.-H. Yang, "Single-image super-resolution: a benchmark," 2014 European Conference on Computer Vision (ECCV), Zurich, Switzerland, September 2014, pp. 372–386.
[22] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.
[23] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430–444, February 2006.
[24] B. J. Donnet and I. D. Longstaff, "MIMO radar, techniques and opportunities," 2006 European Radar Conference (EuRAD), Manchester, United Kingdom, September 2006, pp. 112–115.
[25] DCA1000EVM Data Capture Card, Texas Instruments, Dallas, TX, USA, [Online]. Available: https://www.ti.com/tool/DCA1000EVM
[26] L. Osman, I. Sfar, and A. Gharsallah, "Comparative study of high-resolution direction-of-arrival estimation algorithms for array antenna system," International Journal of Research and Reviews in Wireless Communications, vol. 2, no. 1, pp. 72–77, March 2012.


TAEWON JEONG has been working toward the Integrated B.S./M.S. degree in Electronics and Information Engineering at Korea Aerospace University (KAU), Goyang-si, Gyeonggi-do, Republic of Korea, since February 2017. He is interested in radar signal processing, such as radar clutter suppression, deep learning-based target detection and tracking, and improved angle estimation.

SEONGWOOK LEE received the B.S. and Ph.D. degrees in electrical and computer engineering from Seoul National University (SNU), Seoul, Republic of Korea, in February 2013 and August 2018, respectively. From September 2018 to February 2020, he worked as a Staff Researcher at the Machine Learning Lab, AI & SW Research Center, Samsung Advanced Institute of Technology (SAIT), Gyeonggi-do, Republic of Korea. Thereafter, he was an Assistant Professor at the School of Electronics and Information Engineering, College of Engineering, Korea Aerospace University (KAU), Gyeonggi-do, from March 2020 to February 2023. Since March 2023, he has been working as an Assistant Professor at the School of Electrical and Electronics Engineering, College of ICT Engineering, Chung-Ang University (CAU), Seoul. His research interests include radar signal processing techniques, such as enhanced target detection and tracking, target recognition and classification, clutter suppression and mutual interference mitigation, and artificial intelligence algorithms for radar systems. He has published more than 90 papers on signal processing for radar systems.
