
Face Verification Using mmWave Radar Sensor

Eran Hof, Amichai Sanderovich, Mohammad Salama, Evyatar Hemo
Qualcomm Israel Ltd, P.O. Box 1212, Israel
[email protected], [email protected], [email protected], [email protected]

Abstract—A mmWave radar sensor with massive antenna elements is tested for the task of verifying an identity based on the radar signature of the human face. The sensor used in our study is an 802.11ad/y CMOS mass-produced networking chipset. A dataset with the faces of 200 different persons was collected for testing. A neural-network-based autoencoder, with either one or two hidden layers, is then employed to encode each face. Our preliminary study shows promising results.

Index Terms—Face recognition, mmWave.

I. INTRODUCTION

Automatic biometric authentication is becoming increasingly popular as a means of identification in front of machines [1]. The main advantage of such authentication is, of course, removing the need to remember passwords or pass phrases. This allows a non-expert or occasional user to securely operate machines with full authentication. Face verification (see [2], [3] and also [4]) is a recent and important addition to the arsenal of biometric techniques. It is non-intrusive, hands-free, and acceptable to most users. Verification is the only biometric task we test in this paper. However, additional biometric uses may be supported by the hardware setting studied in this work and are a possible focus of future research (in particular, classification, where the machine needs to decide who the person is, and duplicate detection, where the system recognizes that a person was not registered before, are two promising applications of the chipset hardware studied in this work).

Different sensors are used for face recognition. The RGB camera is one of the best-known sensors. Its main drawback is that it suffers under variable lighting conditions. In addition, an RGB camera has poor detection performance against a mask or a photograph (see [1]). To overcome these issues, RGB-D (D for depth) sensors are used for facial verification. These sensors include structured-light sensors and time-of-flight sensors (see, e.g., [5], [6], [7]).

In this work, we study the potential of face verification based only on the radar signature, as captured by a millimeter-wave (mmWave) networking chipset that can be operated in a radar fashion. That is, the chipset supports the simultaneous operation of both a transmission module and a reception module. By using a chipset that already provides networking, our scheme offers various benefits in terms of cost and power. Moreover, one of the major benefits of our scheme is that it supports privacy. An additional important advantage of operating a mmWave device is that it is indifferent to beards, since mmWave penetrates beards, paper masks, hair, etc. ([8]).

Radar systems transmit and receive electromagnetic signals. By processing the received signals, radar systems can detect the presence and the parameters of the target object reflecting the electromagnetic waves. This basic principle is well studied and dates back to the 1940s. mmWave radar systems transmit and receive electromagnetic signals in the frequency band of 30-300 GHz (corresponding to a wavelength of 1-10 mm). Two key advantages of mmWave systems are the compact size of the system components (e.g., amplifiers and antennas) and a high level of accuracy and sensitivity (a result of the short wavelengths). Over the past two decades, mmWave radars have been used in many civil applications, mainly in the automotive industry. Recent interest in mmWave radars is found in gesture recognition and related applications; see, e.g., [9], [10], [11], [12] and references therein. Scanners for security applications based on mmWave technology are adopted in airports worldwide, and ongoing research efforts focus on advanced algorithmic techniques for analyzing mmWave signals; see, e.g., [13], [14], [15], [16], [17], [18], [19], [20], [21].

IEEE 802.11ad is a Wi-Fi protocol enabling advanced wireless communication networks operating in the unlicensed 60 GHz band. The protocol provides a substantial improvement for Wi-Fi communications in terms of both data rates and latencies compared to Wi-Fi operating in the 2.4 and 5 GHz bands. The globally available, wide, and license-free bandwidth in the 60 GHz band, together with mature radio-frequency integrated-circuit technology, are the two key reasons for the dedicated efforts and interest in both standardization and industry in providing state-of-the-art, highly integrated systems operating in the 60 GHz band [22], [23].

An 802.11ad/y packet starts with a short training field, followed by a channel estimation field (CEF), a packet header, the physical layer (PHY) payload, and optional fields for gain control and additional training. The CEF is composed of Golay complementary sequences (128 symbols long) used in estimating the channel response characteristics. Complementary Golay sequences are well-studied signals in the radar community (see, e.g., [24], [25], [26]). The emerging 802.11ad technology and the suitability of its PHY for radar applications have motivated studies of opportunistic radar devices based on 802.11ad technology [27], [28], [29]. This is somewhat different from legacy frequency-modulated continuous-wave (FMCW) radars (see, e.g., [30], [31]).
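The radar-friendly property of complementary Golay pairs mentioned above is that the sidelobes of their two autocorrelations cancel exactly when summed. The following minimal Python/NumPy sketch illustrates that property with a generic, recursively constructed pair; it is an illustration only and does not use the specific 128-symbol sequences defined in the 802.11ad standard.

```python
import numpy as np

def golay_pair(m):
    """Recursively build a binary Golay complementary pair of length 2**m."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(7)  # a length-128 pair, the same length as the CEF building blocks

# Aperiodic autocorrelation of each sequence taken separately.
r_a = np.correlate(a, a, mode="full")
r_b = np.correlate(b, b, mode="full")

# Individually the sidelobes are nonzero, but their sum is an ideal spike:
# 2N at zero lag and exactly zero at every other lag.
r_sum = r_a + r_b
assert r_sum[len(a) - 1] == 2 * len(a)
assert np.allclose(np.delete(r_sum, len(a) - 1), 0.0)
print("peak:", r_sum[len(a) - 1],
      "max sidelobe:", np.abs(np.delete(r_sum, len(a) - 1)).max())
```

This exact cancellation is what makes the CEF attractive as an opportunistic radar waveform: the combined correlation output has no range sidelobes to mask weak reflections.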
In this paper we study facial verification using an 802.11ad/y networking chipset that can be operated as a radar. In particular, we operate the chipset with two RF chains, one for transmission and one for reception. These two RF chains are operated simultaneously to provide radar capabilities. By operating the chipset in this way we are able to capture the
unique radar signature of each face. The face verification takes advantage of the massive antenna array available in the chipset (the large number of antenna elements (64) is used to overcome the high path loss of Wi-Fi links), so that the radar signature is detailed enough.

Our observations for the facial verification are the Golay correlation outputs (see [32]) for each of the transmit/receive antenna element pairs. Golay sequences enable us to remove non-relevant reflections and focus the verification processing on the waves returning only from the face surface. To the best of our knowledge, no such dataset is publicly available. We therefore captured the faces with a real 802.11ad chipset [33]. When the number of antenna elements is smaller, the verification quality is worse, since the radar signature is less detailed.

The rest of this paper continues as follows: in Section II the radar system and related technology are described, the capturing process and the construction of the dataset are provided in Section III, the approach we used for the verification is detailed in Section IV, and Section V concludes the paper.
II. SENSOR DESCRIPTION

Fig. 1. A radar system with a target and the corresponding received signal.

An illustration of a radar system is depicted in Fig. 1. An electromagnetic wave is transmitted from the radar TX module and reflected back from a target object (a hand in Fig. 1). Some electromagnetic energy is reflected back to the location of the receiver, which can sample the received signal and detect the presence of a target. By estimating the time-of-flight and the angle-of-arrival, the radar device can estimate the location and the speed of the target at hand.

FMCW, also known as chirp or linear frequency modulation (LFM), is one of the most common waveforms used for mmWave radar (see, e.g., [30], [31]). In recent years there has been ongoing interest, both practical [27] and theoretical [25], [26], in more sophisticated radar waveforms such as compressed or coded pulses. Of particular interest are the complementary Golay sequences. These sequences have zero sidelobe level and thus provide good separation between waves arriving from different distances.

Our sensor is similar to the radar scheme described in [27]: we re-use an existing communication system as a 60 GHz radar sensor. 60 GHz is favorable due to the small wavelength (5 mm), the high bandwidth (13 GHz), and the small antenna size (2.5 mm). Moreover, since the system was built for 802.11 high-speed communication, it includes a low-cost, highly available CMOS chipset with mass-market support.

Fig. 2. A radar setup based on Qualcomm's 802.11ad/ay communication system. The system includes an 802.11 chip connected to two RF and antenna modules. The system is highly miniature in size, as compared to a 2 euro-cent coin.

The radar sensor based on 802.11ad/y is shown in Fig. 2. The sensor is composed of three parts: a base-band chip and two radio-frequency (RF) chips attached to the green antenna modules, which are placed on a metal fixture. The capturing and signal processing of the sensor is done in the base-band chip. Each antenna module contains 32 antenna elements that can be switched on, one element at a time. Both the M-chip and the R-chips, seen in Fig. 2, are required for operating under 802.11. In communication mode, a single RF chip operates as both a receiver and a transmitter, because the communication system operates in a time-division duplex (TDD) fashion. In order to allow radar functionality, where receiving and transmitting are carried out simultaneously, two of the RF chips are used (this unique mode is enabled by adding a special hardware feature to the base-band chipset).

The digital signal processing re-uses the Golay sequences of the 802.11ad/ay modem [34] for the radar transmission and reception processing. The sensor is connected to a computer over a PCIe link, allowing processing in MATLAB and Python.
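As an illustration of how complementary correlation turns echoes into a range profile for a single TX/RX element pair, here is a sketch under simplified assumptions: a noiseless channel, a generic Golay pair rather than the standard's sequences, and correlations computed in software rather than in the base-band hardware. The two-reflector channel is entirely hypothetical.

```python
import numpy as np

def golay_pair(m):
    """Same recursive Golay-pair construction as in the earlier sketch."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

ga, gb = golay_pair(7)          # length-128 complementary pair
N = len(ga)

# Hypothetical channel: two reflectors in different range bins
# with different reflection strengths.
L = 16
h = np.zeros(L)
h[3], h[7] = 1.0, 0.4

# Echoes of the two transmitted sequences through the same channel.
ya = np.convolve(h, ga)
yb = np.convolve(h, gb)

# Correlate each echo with its own sequence and sum the two outputs:
# the complementary property removes all range sidelobes, leaving a
# clean range profile with peaks only at the true reflector delays.
est = np.correlate(ya, ga, mode="full") + np.correlate(yb, gb, mode="full")
profile = est[N - 1:N - 1 + L] / (2 * N)
print(np.round(profile, 3))     # recovers h: 1.0 at bin 3 and 0.4 at bin 7
```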
III. FACE CAPTURING USING MMWAVE RADAR

We first describe the capturing of the dataset used in our study. Each subject placed his or her head in front of the radar sensor for two bursts, each taking 3 seconds, with about 8 seconds in between. During this time, the sensor captured 200 frames per orientation/distance. We used a plastic fixture for all subjects so that the location of the faces would remain the same across the dataset.

We scanned each person at 30 cm and 50 cm distances, and also with the person looking in 5 different orientations: 24 and 15 degrees to the left and to the right, and looking at the center.

Fig. 3. The capturing setup in action.

This can be seen in Fig. 3, where the sensor faces the subject along the plastic fixture. We used a plastic fixture rather than a metal one to reduce its effect on the radar signature.

The subjects included both men and women of various ages, including a few children. Some were wearing glasses and some had beards.

Each captured frame includes the following data. For each pair of transmit and receive antenna elements (we use 32 TX × 32 RX different antenna element pairs, summing to 1024 pairs), the Golay sequences separate the returned wave into different distances (much like a pulsed radar), so that for each 8 cm we get a single complex number representing the phase and amplitude of the sum of all the waves reflected from that distance. In total, for each distance we have 2K real numbers. For 24 cm, which is the usual face depth, we thus get 6K real numbers per captured frame. Unlike a camera image, each of these numbers represents energy that was reflected from the face; there are no "empty" pixels. This is why we believe that such a capture contains enough distinctive information to discriminate between faces. The scanned dataset can be found online at [33].
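To make the frame layout concrete, the sketch below flattens one captured frame into the roughly 6K-dimensional real vector that serves as the verifier's input. The array layout (TX × RX × range bin) and the random values are assumptions for illustration; the paper does not specify the storage format of the released dataset.

```python
import numpy as np

N_TX, N_RX, N_BINS = 32, 32, 3  # 32x32 antenna pairs; 3 range bins of 8 cm cover ~24 cm of face depth

# Hypothetical captured frame: one complex Golay-correlation output per
# (TX element, RX element, range bin). Here the values are just random.
rng = np.random.default_rng(0)
frame = (rng.standard_normal((N_TX, N_RX, N_BINS))
         + 1j * rng.standard_normal((N_TX, N_RX, N_BINS)))

# Stack real and imaginary parts to obtain the real-valued feature vector:
# 32 * 32 * 3 * 2 = 6144 numbers, i.e. the "6K real numbers per frame".
features = np.concatenate([frame.real.ravel(), frame.imag.ravel()])
print(features.shape)  # (6144,)
```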
IV. FACE VERIFICATION

In this section we demonstrate the face verification capabilities of the 802.11ad/y radar chip. This is done by analyzing the results obtained from the capturing described in Section III. We considered only the mmWave radar image for the verification. The authentication is carried out in a two-stage process, as detailed in Fig. 4: enrollment of the person, where the training data is captured, and then verification that the registered person is indeed standing in front of the radar, by comparing the captured frame to the training data collected during the enrollment stage.

Fig. 4. A two-stage biometric verification process.

A. One-Class vs Multi-Class Classification

Multi-class classification is the problem of assigning an unknown image to one class out of several possible classes. For example, with a classifier trained on apples and oranges, we can determine whether an unknown image shows an apple or an orange. However, if the picture is of a cat, the multi-class classifier cannot tell us this. One-class classification, on the other hand ([35]), trains only on positive targets and is not exposed to any negative samples. An example of such classification is to train a one-class classifier on images of apples; when it then sees a tomato or an orange, it classifies it as "not an apple", as long as the tomato or the orange is sufficiently distinct from an apple.

Such an approach is helpful when we do not have access to a large collection of representative negative data. This is the case with mmWave images, where negative captures are harder to get. An additional advantage of a one-class classifier is the ability to enroll into the system without the use of a remote dataset containing the negative data.
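To make the one-class idea concrete, here is a deliberately simple decision rule trained on positive samples only: a model of the enrolled person is fitted from that person's frames alone, and anything far from it is rejected. This is a toy illustration (a threshold on the distance to the mean of the enrolled samples), not the autoencoder-based classifier used in this paper, and all numbers are synthetic.

```python
import numpy as np

def enroll(positive_samples):
    """Fit a one-class model from positive (enrolled-person) samples only."""
    mean = positive_samples.mean(axis=0)
    # The threshold comes from the spread of the positive data itself;
    # no negative samples are needed at any point.
    dists = np.linalg.norm(positive_samples - mean, axis=1)
    return mean, dists.mean() + 3.0 * dists.std()

def verify(sample, mean, threshold):
    """Accept only if the sample is close enough to the enrolled model."""
    return np.linalg.norm(sample - mean) <= threshold

rng = np.random.default_rng(1)
enrolled = rng.normal(0.0, 1.0, size=(200, 6144))   # frames of the enrolled person
impostor = rng.normal(5.0, 1.0, size=(1, 6144))     # a clearly different signature
mean, thr = enroll(enrolled)
print(verify(enrolled[0], mean, thr), verify(impostor[0], mean, thr))  # True False
```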
B. Autoencoder Using Neural Network

An autoencoder is a technique for building a one-class classifier in which the input is encoded into a compressed representation through an encoder. The compressed representation is then decoded back into an output. A good autoencoder is one in which the input and the output are similar to each other in a minimum mean square error (MSE) sense. A common implementation of the encoder and the decoder of the autoencoder is a feed-forward artificial neural network with the same input and target output, as seen in Fig. 5 [36], [37]. A small hidden layer in an autoencoder network creates an information bottleneck, forcing the network to compress the data into a low-dimensional representation. For a simple autoencoder with a single hidden layer, the vector of hidden unit activities, h, is given by

h = f(W_e · a + bias_e)    (1)

where f is the activation function (we use the logistic sigmoid function in this work), W_e is a parameter matrix, and bias_e is a vector of bias parameters. The hidden representation of the data is then mapped back into the space of a using the decoding function

â = f(W_d · h + bias_d)    (2)

where W_d is the decoding matrix and bias_d is a vector of bias parameters. We learn the parameters of the autoencoder by performing stochastic gradient descent to minimize the reconstruction error, which is the MSE between a and â (the mean is taken over the dimensions of a):

MSE(a, â) = ‖a − â‖₂² = ‖a − f(W_d · h + bias_d)‖₂²    (3)

Fig. 5. Deep autoencoder.

When the hidden layer has fewer dimensions than a, the autoencoder learns a compressed representation of the training data. Non-linear hidden units allow the autoencoder to learn more complex encoding functions, as do additional hidden layers.
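A minimal sketch of this single-hidden-layer autoencoder follows. The paper states that processing was done in MATLAB and Python but does not name a framework, so the use of PyTorch here, as well as the hidden size, batch size, learning rate, and epoch count, are assumptions for illustration. It implements equations (1)-(3): a sigmoid encoder, a sigmoid decoder, and stochastic gradient descent on the reconstruction MSE, with inputs assumed to be scaled to [0, 1] so a sigmoid output layer can reproduce them.

```python
import torch
import torch.nn as nn

class SingleHiddenAutoencoder(nn.Module):
    def __init__(self, input_dim=6144, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)   # W_e, bias_e
        self.decoder = nn.Linear(hidden_dim, input_dim)   # W_d, bias_d

    def forward(self, a):
        h = torch.sigmoid(self.encoder(a))        # eq. (1)
        a_hat = torch.sigmoid(self.decoder(h))    # eq. (2)
        return a_hat

def train(model, frames, epochs=50, lr=0.01):
    """frames: tensor of shape (num_frames, 6144), values scaled to [0, 1]."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                        # eq. (3), averaged over dimensions
    for _ in range(epochs):
        for a in frames.split(32):                # mini-batch stochastic gradient descent
            opt.zero_grad()
            loss = loss_fn(model(a), a)
            loss.backward()
            opt.step()
    return model

def reconstruction_mse(model, frame):
    """Reconstruction error of a new frame against the enrolled model."""
    with torch.no_grad():
        return torch.mean((model(frame) - frame) ** 2).item()
```

In the enrollment stage such a model would be trained only on the enrolled person's frames; a low reconstruction MSE on a new frame then indicates the same person, which is the threshold test used in the results below.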
C. Results

Fig. 6. MSE results for an autoencoder trained on a specific person (ordinate) versus the captured frame index (abscissa).

As an example, the MSE obtained for one of the faces is shown in Fig. 6 (the MSE is computed over the 6K dimensions of the captured frame). The MSE quantifies how well the autoencoder encodes the data it was trained on. We see in this figure that the training data from a specific person resulted in an MSE of 60. When running the test data (due to the low number of data samples, we used only 10% of the data for testing), we get about the same MSE of around 60. When we ran the trained autoencoder over the negative data (other faces), we got an MSE of no less than 250, and 78% of the captured frames resulted in an MSE of over 1500. This is enough separation to distinguish between the different faces in the captured dataset. We next tuned a threshold value on the MSE to determine the receiver operating characteristic (ROC), by running the trained autoencoders over the entire ensemble of captured faces. This is shown in Fig. 7.

Fig. 7. ROC for several tested networks and inputs. The 32 transmit antennas with 32 receive antennas, with one (solid circles) or two (dashed stars) hidden layers, are the two upper curves. 10 transmit and 10 receive antennas are the two lower curves, where a 48-neuron-wide hidden layer is clearly insufficient compared to 64 neurons.

For the two-hidden-layer scheme, we observe excellent results: with a false negative rate of less than 2%, we get a false positive rate below 10⁻⁶. This is indicative of a strong distinctiveness between the radar signatures of different faces and a high correlation between samples of the same person. In Fig. 7 we also checked several configurations. We tested what happens when we reduce the dimensionality of the radar signature by a factor of 10, taking only 10 antennas out of the 32. It is seen that such a reduction significantly reduces the distinctiveness between the faces. We also checked what the right configuration for the autoencoder network is, and noticed an improvement once we increased the number of neurons in the hidden layer and when an additional hidden layer was added.
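The ROC is obtained by sweeping the MSE threshold. As a hedged illustration of that bookkeeping (not the authors' code), the following NumPy sketch computes false-negative and false-positive rates from two arrays of per-frame MSE scores; the score arrays and their values are hypothetical, merely chosen in the range reported above.

```python
import numpy as np

def roc_from_mse(genuine_mse, impostor_mse):
    """Sweep an MSE threshold and return (false_negative_rate, false_positive_rate) arrays.

    A frame is accepted as the enrolled person when its MSE is below the threshold,
    so a genuine frame above the threshold is a false negative and an impostor frame
    below it is a false positive.
    """
    thresholds = np.unique(np.concatenate([genuine_mse, impostor_mse]))
    fnr = np.array([(genuine_mse >= t).mean() for t in thresholds])
    fpr = np.array([(impostor_mse < t).mean() for t in thresholds])
    return fnr, fpr

# Illustrative scores loosely matching the numbers reported above (about 60 vs. >250).
rng = np.random.default_rng(2)
genuine = rng.normal(60, 10, 500)
impostor = rng.normal(1200, 400, 5000).clip(min=250)
fnr, fpr = roc_from_mse(genuine, impostor)
print("FPR at the smallest threshold giving FNR < 2%:", fpr[np.argmax(fnr < 0.02)])
```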
V. CONCLUSIONS

In this paper, a dataset of face signatures captured with a mmWave radar is introduced. The dataset contains captures of about 200 faces of different people. The captured faces include two genders and multiple ages; in addition, the set includes people with and without eyeglasses and beards. One of the key findings of our research is that the dataset shows distinctiveness between the faces of different persons. Moreover, our study shows that there is a correlation between different captures of the same face. In our study we trained deep autoencoders based on neural networks. With those autoencoders, we demonstrated promising results indicating the potential of using the mmWave signature as an additional modality for facial verification/recognition. mmWave can penetrate fabrics and hair and thus can provide more robust, reliable, and secure verification. By re-using a commercially available communication chipset, the solution can be low cost as well.

REFERENCES

[1] J. Wayman, A. Jain, D. Maltoni, and D. Maio, "An introduction to biometric authentication systems," in Biometric Systems, pp. 1–20, Springer, 2005.
[2] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the face recognition grand challenge," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, pp. 947–954, June 2005.
[3] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: A survey," Pattern Recognition Letters, vol. 28, no. 14, pp. 1885–1906, 2007.
[4] A. T. Tran, T. Hassner, I. Masi, and G. Medioni, "Regressing robust and discriminative 3D morphable models with a very deep neural network," in Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pp. 1493–1502, IEEE, 2017.
[5] J. Kittler, A. Hilton, M. Hamouz, and J. Illingworth, "3D assisted face recognition: A survey of 3D imaging, modelling and recognition approaches," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops (CVPRW), vol. 00, p. 114, June 2005.
[6] T. Bakirman, M. U. Gumusay, H. C. Reis, M. O. Selbesoglu, S. Yosmaoglu, M. C. Yaras, D. Z. Seker, and B. Bayram, "Comparison of low cost 3D structured light scanners for face modeling," Appl. Opt., vol. 56, pp. 985–992, Feb. 2017.
[7] C. A. Luna, C. Losada-Gutierrez, D. Fuentes-Jimenez, A. Fernandez-Rincon, M. Mazo, and J. Macias-Guarasa, "Robust people detection using depth information from an overhead time-of-flight camera," Expert Systems with Applications, vol. 71, pp. 240–256, 2017.
[8] D. R. Vizard and R. Doyle, "Invited paper: Advances in millimeter wave imaging and radar systems for civil applications," in 2006 IEEE MTT-S International Microwave Symposium Digest, pp. 94–97, June 2006.
[9] P. Molchanov, S. Gupta, K. Kim, and K. Pulli, "Short-range FMCW monopulse radar for hand-gesture sensing," in Radar Conference (RadarCon), 2015 IEEE, pp. 1491–1496, IEEE, 2015.
[10] J. Lien, N. Gillian, M. E. Karagozler, P. Amihood, C. Schwesig, E. Olson, H. Raja, and I. Poupyrev, "Soli: Ubiquitous gesture sensing with millimeter wave radar," ACM Transactions on Graphics (TOG), vol. 35, no. 4, p. 142, 2016.
[11] H.-S. Yeo, G. Flamich, P. Schrempf, D. Harris-Birtill, and A. Quigley, "RadarCat: Radar categorization for input & interaction," in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, pp. 833–841, ACM, 2016.
[12] N. Amanda, "Gesture recognition demonstration using TI mmWave sensors," TI Training. [Online]. Available: https://training.ti.com.
[13] S. Harmer, N. Bowring, D. Andrews, N. Rezgui, M. Southgate, and S. Smith, "A review of nonimaging stand-off concealed threat detection with millimeter-wave radar [application notes]," IEEE Microwave Magazine, vol. 13, no. 1, pp. 160–167, 2012.
[14] T. Sakamoto, T. Sato, P. Aubry, and A. Yarovoy, "Fast imaging method for security systems using ultrawideband radar," IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 2, pp. 658–670, 2016.
[15] B. Gonzalez-Valdes, Y. Álvarez, Y. Rodriguez-Vaqueiro, A. Arboleya-Arboleya, A. García-Pino, C. M. Rappaport, F. Las-Heras, and J. A. Martinez-Lorenzo, "Millimeter wave imaging architecture for on-the-move whole body imaging," IEEE Transactions on Antennas and Propagation, vol. 64, no. 6, pp. 2328–2338, 2016.
[16] S. López-Tapia, R. Molina, and N. P. de la Blanca, "Using machine learning to detect and localize concealed objects in passive millimeter-wave images," Engineering Applications of Artificial Intelligence, vol. 67, pp. 81–90, 2018.
[17] L. Yujiri, M. Shoucri, and P. Moffa, "Passive millimeter wave imaging," IEEE Microwave Magazine, vol. 4, no. 3, pp. 39–50, 2003.
[18] S. S. Ahmed, A. Schiessl, F. Gumbmann, M. Tiebout, S. Methfessel, and L. Schmidt, "Advanced microwave imaging," IEEE Microwave Magazine, vol. 13, no. 6, pp. 26–43, 2012.
[19] K. Noujeim, G. Malysa, A. Babveyh, and A. Arbabian, "A compact nonlinear-transmission-line-based mm-wave SFCW imaging radar," in Microwave Conference (EuMC), 2014 44th European, pp. 1766–1769, IEEE, 2014.
[20] R. Feger, A. Fischer, and A. Stelzer, "Low-cost implementation of a millimeter wave imaging system operating in W-band," in Microwave Symposium Digest (IMS), 2013 IEEE MTT-S International, pp. 1–4, IEEE, 2013.
[21] D. Oppelt, J. Adametz, J. Groh, O. Goertz, and M. Vossiek, "MIMO-SAR based millimeter-wave imaging for contactless assessment of burned skin," in Microwave Symposium (IMS), 2017 IEEE MTT-S International, pp. 1383–1386, IEEE, 2017.
[22] ABI Research, "802.11ad will vastly enhance Wi-Fi: The importance of the 60 GHz band to Wi-Fi's continued evolution," tech. rep., April 2016.
[23] P. Smulders, "Exploiting the 60 GHz band for local wireless multimedia access: Prospects and future directions," IEEE Communications Magazine, vol. 40, no. 1, pp. 140–147, 2002.
[24] H. Haderer, R. Feger, and A. Stelzer, "A comparison of phase-coded CW radar modulation schemes for integrated radar sensors," in Microwave Conference (EuMC), 2014 44th European, pp. 1896–1899, IEEE, 2014.
[25] N. Levanon, I. Cohen, and P. Itkin, "Complementary pair radar waveforms–evaluating and mitigating some drawbacks," IEEE Aerospace and Electronic Systems Magazine, vol. 32, no. 3, pp. 40–50, 2017.
[26] A. Pezeshki, A. R. Calderbank, W. Moran, and S. D. Howard, "Doppler resilient Golay complementary waveforms," IEEE Transactions on Information Theory, vol. 54, no. 9, pp. 4254–4266, 2008.
[27] P. Kumari, N. Gonzalez-Prelcic, and R. W. Heath, "Investigating the IEEE 802.11ad standard for millimeter wave automotive radar," in Vehicular Technology Conference (VTC Fall), 2015 IEEE 82nd, pp. 1–5, IEEE, 2015.
[28] P. Kumari, J. Choi, N. G. Prelcic, and R. W. Heath, "IEEE 802.11ad-based radar: An approach to joint vehicular communication-radar system," IEEE Transactions on Vehicular Technology, 2017.
[29] E. Grossi, M. Lops, L. Venturino, and A. Zappone, "Opportunistic radar in IEEE 802.11ad networks," IEEE Transactions on Signal Processing, 2018.
[30] J. Wenger, "Automotive mm-wave radar: Status and trends in system design and technology," 1998.
[31] J. Hasch, E. Topak, R. Schnabel, T. Zwick, R. Weigel, and C. Waldschmidt, "Millimeter-wave technology for automotive radar sensors in the 77 GHz frequency band," IEEE Transactions on Microwave Theory and Techniques, vol. 60, no. 3, pp. 845–860, 2012.
[32] T. Nitsche, C. Cordeiro, A. B. Flores, E. W. Knightly, E. Perahia, and J. C. Widmer, "IEEE 802.11ad: Directional 60 GHz communication for multi-gigabit-per-second Wi-Fi," IEEE Communications Magazine, vol. 52, no. 12, pp. 132–141, 2014.
[33] Anonymous, "60 GHz radar database." [Online]. Available: URL will be provided in the published version due to anonymity requirement.
[34] Y. Ghasempour, C. R. da Silva, C. Cordeiro, and E. W. Knightly, "IEEE 802.11ay: Next-generation 60 GHz communication for 100 Gb/s Wi-Fi," IEEE Communications Magazine, vol. 55, no. 12, pp. 186–192, 2017.
[35] K. Hempstalk and E. Frank, "Discriminating against new classes: One-class versus multi-class classification," in AI 2008: Advances in Artificial Intelligence (W. Wobcke and M. Zhang, eds.), pp. 325–336, 2008.
[36] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504–507, 2006.
[37] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of Machine Learning Research, vol. 11, pp. 3371–3408, 2010.
