
Sparsity-based Collaborative Sensing

in a Scalable Wireless Network


Shuimei Zhang, Ammar Ahmed, and Yimin D. Zhang

Department of Electrical and Computer Engineering, Temple University, Philadelphia, USA

ABSTRACT
In this paper, we propose a collaborative sensing scheme for source localization and imaging in an unmanned
aerial vehicle (UAV) network. A two-stage image formation approach, which combines the robust adaptive
beamforming technique and sparsity-based reconstruction strategy, is proposed to achieve accurate multi-source
localization. In order to minimize the communication traffic in the UAV network, each UAV node only transmits
the coarse-resolution image, in lieu of the large volume of raw sampled data. The proposed method maintains
the robustness in the presence of model mismatch while providing a high-resolution image.
Keywords: Source localization, robust adaptive beamforming, compressive sensing, UAV network, data fusion

1. INTRODUCTION
Autonomous unmanned aerial vehicles (UAVs) play a critical role in various civil, military, and homeland security
applications, such as disaster monitoring, border surveillance, and relay communications.1–5 For time-critical or
large-area spanning missions, it is not sufficient to use a single UAV due to its limited energy and payload. A
multi-UAV network not only provides an extended coverage, but also offers diversity gain by sensing an area of
interest from different aspect angles to increase the reliability of source localization. However, the transmission
and fusion of the high-volume data between different UAVs pose great challenges as the UAVs are equipped with
restricted on-board processing capabilities and have limited communication coverage.
In practice, wireless sources are sparsely distributed within a surveillance area.6 This fact enables sparsity-
based approaches to be applied in conventional distributed stationary detectors where the received signals ob-
served at each receiver are forwarded to the global fusion center.7 In the fusion center, the time-domain signals
corresponding to all receivers are fused using compressive sensing (CS) techniques, such as the complex multi-task
Bayesian compressive sensing algorithm and group least absolute shrinkage and selection operator (LASSO).8–10
For the underlying UAV network, however, it is impractical to have such a centralized processing scheme
because a fusion center may not exist, or the UAV nodes may not be able to directly report their observed data
to the fusion center due to the limited communication range. In addition, the overall communication traffic and
latency would drastically increase as the network size scales.
In this paper, we consider real-time multi-source localization using a scalable collaborative UAV network, and
address the sensing, transmission, information fusion, and imaging (source localization) functionalities involved
for this purpose.11, 12 In particular, we take into account model mismatches due to, e.g., errors in the look
direction and local scattering effects which may result in erroneous source localization.13–15 We propose a
two-stage source localization algorithm to robustly form images within a gridded observation area. In the
first stage, each UAV obtains beamforming images using the quadratically constrained quadratic programming
(QCQP) approach, and the resulting coarse images obtained at each UAV are fused via a simple pixel-wise product
operation. Because only the coarse image is transmitted to the next UAV node, the UAV network is scalable,
i.e., the volume of data traffic being transmitted will not explode as the number of UAV nodes increases.
In the second stage, a re-weighted l1-norm sparsity-based method is applied at the last UAV node to obtain a
high-resolution image with high fidelity.
Contact information: S. Zhang, [email protected]; A. Ahmed, [email protected], Y. D. Zhang,
[email protected]
Figure 1. The coordinate system: the UAV is located at [x_U, y_U, z_U], a ground source at [x_S, y_S, 0], and the elevation and azimuth angles are denoted θ and φ, respectively, with respect to the origin (0, 0, 0).

Notations: Lower-case (upper-case) bold characters are used to denote vectors (matrices). In particular,
I_N stands for the N × N identity matrix. (·)^*, (·)^T, and (·)^H denote the conjugate, transpose, and Hermitian
transpose operators, respectively. Diag(·) returns a diagonal matrix from a vector, whereas diag(·) yields a
vector consisting of the diagonal elements of a matrix. ⊗ denotes the Kronecker product and ◦ represents the
Hadamard (element-wise) matrix multiplication. In addition, ‖·‖_1 and ‖·‖_2 respectively represent the l1 and l2
norms of a vector.

2. SIGNAL MODEL
Consider a UAV network where each UAV is equipped with P sensors. D uncorrelated sources impinge from
far-field with their respective elevation and azimuth angles denoted as θd and φd for d = 1, · · · , D. A spherical
coordinate system is shown in Fig. 1 to represent the arrival directions of the incoming plane waves. The received
baseband signal vector of the array can be modeled as
r̄(t) = Σ_{d=1}^{D} ā(θ_d, φ_d) s_d(t) + n(t) = Ā s(t) + n(t),    (1)

where Ā = [ā(θ_1, φ_1), ā(θ_2, φ_2), · · · , ā(θ_D, φ_D)] ∈ C^{P×D} is the manifold matrix, s(t) = [s_1(t), s_2(t), · · · , s_D(t)]^T ∈ C^D represents the signal waveform vector, and n(t) ∼ CN(0, σ_n^2 I_P) denotes the vector of measurement noise with σ_n^2 representing the noise power.
In this paper, we use a uniform circular array (UCA) with one element located in the center. Compared to
a uniform linear array (ULA), a UCA provides 360◦ azimuthal coverage and obtains information on the source
elevation angles. The dth column of the manifold matrix Ā represents the steering vector of the dth source signal
and is expressed as
ā(θ_d, φ_d) = [ e^{−ζ sin(θ_d) cos(φ_d − β_0)}, e^{−ζ sin(θ_d) cos(φ_d − β_1)}, · · · , e^{−ζ sin(θ_d) cos(φ_d − β_{P−2})}, 1 ]^T,    (2)

where  = √−1 is the imaginary unit, ζ = 2πf_c r/c, β_n = 2πn/(P − 1) for n = 0, · · · , P − 2, with r denoting the
radius of the UCA, f_c the carrier frequency, and c the propagation velocity.
The covariance matrix of the array received signals r̄(t) is expressed as
R̄_{r̄r̄} = E[ r̄(t) r̄^H(t) ] = Ā B Ā^H + σ_n^2 I_P,    (3)

where B = Diag [b1 , · · · , bD ] is a diagonal matrix representing the intensity of all sources.
When the array experiences model mismatches due to, e.g., look direction error, imperfect calibration, and
local scattering, the actual array manifold matrix A may deviate from the presumed array manifold matrix Ā.
In this case, we decompose A into two terms, i.e., the presumed manifold matrix, Ā, and the model mismatch
error in the manifold matrix, E. As a result, we have
R_rr = E[ r(t) r^H(t) ] = A B A^H + σ_n^2 I_P = (Ā + E) B (Ā + E)^H + σ_n^2 I_P,    (4)

where r(t) = Σ_{d=1}^{D} a(θ_d, φ_d) s_d(t) + n(t) = A s(t) + n(t), and E = [e(θ_1, φ_1), e(θ_2, φ_2), · · · , e(θ_D, φ_D)] with e(θ_d, φ_d) = a(θ_d, φ_d) − ā(θ_d, φ_d) denoting the mismatch vector for the dth source.
In practice, the actual correlation matrix Rrr is not available and is estimated by its maximum likelihood
estimate, i.e., the sampled correlation matrix R̂rr , which is calculated as
R̂_rr = (1/T) Σ_{t=1}^{T} r(t) r^H(t),    (5)

where T is the number of available data samples.
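As a sanity check of the signal model and of Eq. (5), the following sketch (again Python/NumPy, with arbitrary assumed source powers and a fixed random seed) simulates T snapshots of r(t) = A s(t) + n(t) and forms the sample covariance matrix.

import numpy as np

def sample_covariance(A, powers, noise_power, T, rng=None):
    # Simulate r(t) = A s(t) + n(t) with uncorrelated circular Gaussian sources and noise,
    # then return R_hat = (1/T) * sum_t r(t) r(t)^H, Eq. (5).
    rng = np.random.default_rng(0) if rng is None else rng
    P, D = A.shape
    s = (rng.standard_normal((D, T)) + 1j * rng.standard_normal((D, T))) \
        * np.sqrt(np.asarray(powers, dtype=float)[:, None] / 2.0)
    n = (rng.standard_normal((P, T)) + 1j * rng.standard_normal((P, T))) \
        * np.sqrt(noise_power / 2.0)
    r = A @ s + n
    return (r @ r.conj().T) / T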


Based on the imaging technique being used, model mismatch generally yields blurring and/or displacement
in the obtained image. It is necessary to develop a robust high-resolution image formation algorithm to achieve
an accurate image for source localization.

3. PROPOSED METHOD
In this section, we elaborate on the proposed two-stage image formation algorithm. First, a robust beamforming
technique that estimates the actual steering vector is applied at each UAV to obtain coarse images in the presence of
model mismatch. Then, the coarse images from different UAVs are fused through pixel-wise multiplication. In the
second stage, a re-weighted CS method is applied on the last UAV to yield a high-resolution image with high
fidelity.

3.1 First Stage: Beamforming-based Coarse Image Formation


Consider M pixels in the observation area, where M ≫ D. The source intensity at each pixel is taken as the output power of the
beamformer. Because beamforming is performed pixel by pixel, (θ_m, φ_m) is omitted in the sequel for notational
simplicity. To address the model mismatch issue, the actual signal steering vector a(θ_m, φ_m), m = 1, · · · , M, can
be estimated by solving an optimization problem which maximizes the beamformer output power subject to a
constraint that prevents the estimated steering vector from converging to the interference steering vectors.13
Consider the output power of the Capon beamformer expressed as
P(a) = 1 / ( a^H R̂_rr^{−1} a ).    (6)

We aim to maximize the output power P(a) by searching for the optimal steering vector a, which is equivalent to
minimizing the denominator on the right-hand side of the above expression. Therefore, the problem is reformulated
as

min_e  (ā + e)^H R̂_rr^{−1} (ā + e)    (7)
s.t.  (ā + e)^H R̂_rr (ā + e) ≤ ā^H R̂_rr ā,

where a = ā + e. In order to exclude the trivial solution e = −ā, we decompose the mismatch vector e into
two components, e∥ and e⊥, which represent the components that are respectively parallel and orthogonal to
the presumed steering vector ā. Since e∥ is a scaled replica of the presumed steering vector ā, it does not affect
the output signal-to-interference-plus-noise ratio (SINR). Hence, by estimating only the orthogonal component
e⊥ , we can retain the optimal beamforming performance without falling into the undesired trivial solutions. In
this case, instead of estimating e, the optimization problem (7) can be reformulated as:
min_{e⊥}  (ā + e⊥)^H R̂_rr^{−1} (ā + e⊥)
s.t.  ā^H e⊥ = 0,    (8)
      (ā + e⊥)^H R̂_rr (ā + e⊥) ≤ ā^H R̂_rr ā,
which is a feasible quadratically constrained quadratic programming (QCQP) problem, and can be readily
solved by using convex optimization methods, e.g., using CVX.16 As a result, the estimated actual steering
vector becomes
ã = ā + e⊥ . (9)
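The steering-vector estimation in (8)-(9) can be prototyped with an off-the-shelf convex solver. The sketch below uses CVXPY (in place of the MATLAB CVX package cited above) and re-expresses the quadratic forms through Hermitian matrix square roots so that the problem is stated in a standard conic form; this reformulation and the function name are illustrative assumptions rather than the authors' implementation.

import numpy as np
import cvxpy as cp

def estimate_steering_vector(a_bar, R_hat):
    # Solve the QCQP of Eq. (8) for the orthogonal mismatch component e_perp,
    # and return the estimated steering vector a_tilde = a_bar + e_perp, Eq. (9).
    w, U = np.linalg.eigh(R_hat)                          # R_hat = U diag(w) U^H
    R_sqrt = U @ np.diag(np.sqrt(w)) @ U.conj().T         # R_hat^{1/2}
    R_isqrt = U @ np.diag(1.0 / np.sqrt(w)) @ U.conj().T  # R_hat^{-1/2}

    e_perp = cp.Variable(a_bar.size, complex=True)
    a_est = a_bar + e_perp
    objective = cp.Minimize(cp.sum_squares(R_isqrt @ a_est))         # (a_bar + e)^H R^{-1} (a_bar + e)
    constraints = [
        a_bar.conj() @ e_perp == 0,                                  # a_bar^H e_perp = 0
        cp.sum_squares(R_sqrt @ a_est)                               # (a_bar + e)^H R (a_bar + e)
        <= float(np.real(a_bar.conj() @ R_hat @ a_bar)),             # <= a_bar^H R a_bar
    ]
    cp.Problem(objective, constraints).solve()
    return a_bar + e_perp.value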

In this paper, we use the adaptive angular response (AAR) beamformer,17 which provides a high image
resolution and isotropic white noise response. Repeating the above procedure pixel by pixel, the mth pixel of
the AAR image obtained at the qth UAV, denoted as I_q, is given as

I_q[m] = [ (ã_q^m)^H (R̂_rr^q)^{−1} ã_q^m ] / [ (ã_q^m)^H (R̂_rr^q)^{−2} ã_q^m ],    (10)

for m = 1, · · · , M and q = 1, · · · , Q, where Q is the number of UAVs. It is noted that the QCQP-based AAR
beamformer not only mitigates the effects of the model mismatch, but also suppresses interference.
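Given the estimated steering vectors, the per-UAV coarse image of Eq. (10) can be evaluated pixel by pixel; the short sketch below is a direct transcription of (10) under the same assumptions as the previous snippets.

import numpy as np

def aar_image(a_tilde_list, R_hat):
    # AAR image at one UAV, Eq. (10): one value per pixel steering vector.
    R_inv = np.linalg.inv(R_hat)
    R_inv2 = R_inv @ R_inv
    image = np.empty(len(a_tilde_list))
    for m, a in enumerate(a_tilde_list):
        num = np.real(a.conj() @ R_inv @ a)    # (a^m)^H R^{-1} a^m
        den = np.real(a.conj() @ R_inv2 @ a)   # (a^m)^H R^{-2} a^m
        image[m] = num / den
    return image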
We fuse the beamforming-based images via a pixel-by-pixel multiplication scheme.18 The information passing
from the q̃th UAV to the (q̃ + 1)th UAV is described by

I^{1:q̃} = I^{1:q̃−1} ◦ I_q̃,    (11)

where I^{1:q̃}, q̃ = 2, · · · , Q, represents the fused beamforming-based image at the q̃th UAV. For the first UAV,
I^{1} = I_1.
The final image obtained from the first stage is the fused beamforming-based image at the Qth UAV, i.e.,
I_stage1 = I^{1:Q}. The mth pixel of the coarse image obtained in the first stage can be expressed as

I_stage1[m] = ∏_{q=1}^{Q} I_q[m].    (12)
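The relayed fusion of Eqs. (11)-(12) amounts to an element-wise running product; a one-function sketch (illustrative only, same Python/NumPy assumptions as above) is:

import numpy as np

def fuse_images(images):
    # Pixel-wise product fusion across UAVs, Eqs. (11)-(12); in the network this product
    # is accumulated as the coarse image is relayed from one UAV to the next.
    fused = np.ones_like(images[0])
    for I_q in images:
        fused = fused * I_q
    return fused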

3.2 Second Stage: Sparsity-based High-Resolution Image Formation


The beamforming-based imaging method may not yield the desirable imaging quality with a high resolution and
low sidelobes. In the second stage, we apply sparsity-based reconstruction to provide a high-resolution source
localization capability.19, 20 Based on the fused coarse image obtained by pixel-by-pixel image multiplication,
different schemes can be utilized to form a sparsity-based high-resolution image within the CS framework. In
this paper, we exploit the re-weighted l1 minimization approach21 that modifies the LASSO22 with weighting
factors applied to the sparse entries.
Denote I_stage2 as the high-resolution image to be obtained in the second stage, and let u_stage2 = vec(I_stage2).
Then, u_stage2 is estimated as

û_stage2 = arg min_u (1/2) ‖z_Q − Ψ̃_Q u‖_2^2 + λ ‖G u‖_1,    (13)

where z_Q = vec(R̂_rr^Q) ∈ C^{P^2}, Ψ̃_Q ∈ C^{P^2×M} denotes the dictionary matrix, λ is a regularization parameter, and
G ∈ C^{M×M} is the re-weighting diagonal matrix whose mth diagonal entry is defined as

[G]_{m,m} = min( 1 / |I_stage1[m]|^η , Ω ),    (14)
Figure 2. The simulated image, with markers indicating the projected UAV locations and the reference source locations (axes in meters).


Figure 3. Comparison of images generated by LASSO for the three UAV nodes in the presence of model mismatch: (a) UAV 1; (b) UAV 2; (c) UAV 3.

where η is the weighting parameter, and Ω is a sufficiently large real value. Note that, to mitigate the effects
of model mismatch, we utilize the estimated steering vectors ã_Q^m obtained from the first stage to replace the
presumed steering vectors. In other words, Ψ̃_Q = [ψ̃_Q^1, ψ̃_Q^2, · · · , ψ̃_Q^M] with ψ̃_Q^m = (ã_Q^m)^* ⊗ ã_Q^m.
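A compact sketch of the second-stage reconstruction is given below. It builds the dictionary columns (ã_Q^m)^* ⊗ ã_Q^m, forms the re-weighting factors of Eq. (14), and solves Eq. (13) with CVXPY. Treating the pixel intensities as real and nonnegative, and adding a tiny constant to guard against division by zero, are assumptions made for this illustration.

import numpy as np
import cvxpy as cp

def build_dictionary(steering_vectors):
    # Columns psi^m = conj(a^m) kron a^m, as defined in the text (size P^2 x M).
    return np.column_stack([np.kron(a.conj(), a) for a in steering_vectors])

def reweighted_l1_image(z_Q, Psi_Q, I_stage1, lam, eta=1.0, Omega=1e6):
    # Second-stage re-weighted l1 reconstruction, Eqs. (13)-(14).
    # The tiny additive term below is only a numerical guard against zero-valued pixels.
    g = np.minimum(1.0 / (np.abs(I_stage1) ** eta + np.finfo(float).tiny), Omega)
    u = cp.Variable(I_stage1.size, nonneg=True)          # pixel intensities (assumed real, nonnegative)
    objective = cp.Minimize(0.5 * cp.sum_squares(z_Q - Psi_Q @ u)
                            + lam * cp.norm1(cp.multiply(g, u)))
    cp.Problem(objective).solve()
    return u.value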

4. SIMULATION RESULTS
In this section, we provide simulation results to demonstrate the imaging performance of the proposed two-stage
high-resolution imaging method. Three UAVs are considered with their respective instantaneous locations at (0, 40, 120)
m, (40, 0, 120) m, and (0, −40, 120) m. Each UAV is equipped with a 5-sensor UCA with one sensor at the center.
The simulations focus on a small search area with a size of 400 m × 400 m on the ground. There are two ground
sources which are initially located at (−35, −5, 0) m and (35, −5, 0) m, respectively, as shown in Fig. 2. The grid
interval is chosen to be 5 m.
A sampled covariance matrix R̂_rr^q, q = 1, · · · , Q, is generated based on T = 30,000 data samples. The input
signal-to-noise ratio (SNR) is defined as the ratio of the average signal power and the average noise power at
each data sample, and is assumed to be 5 dB for each antenna. We consider that the steering vector of the
desired signal is distorted by a random error vector. For the dth source signal, the actual spatial signature ad is
formed as

a_d = ā_d + e_d,    (15)

where e_d denotes the mismatch vector. The detailed information of the mismatch regarding each UAV for the
two sources is listed in Table 1. Note that, because the array model mismatch may change the norm of the
steering vector, the actual steering vector for each point source is normalized as a = √P a/‖a‖_2.
Table 1. The value of ‖e_d‖_2 for the two sources

Source position    UAV 1 at (0, 40, 120) m    UAV 2 at (40, 0, 120) m    UAV 3 at (0, −40, 120) m
(−35, −5, 0) m     0.4741                     0.3324                     0.3637
(35, −5, 0) m      0.2957                     0.3580                     0.3061
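For reproducibility of this setup, one possible way to generate a mismatched steering vector with a prescribed error norm, and to apply the normalization described above, is sketched below; the choice of a circular Gaussian error direction is an assumption, since the paper only specifies the error norms in Table 1.

import numpy as np

def mismatched_steering_vector(a_bar, error_norm, rng=None):
    # Distort a presumed steering vector by a random error of given l2 norm, Eq. (15),
    # then renormalize so that ||a||_2 = sqrt(P), as described in the text.
    rng = np.random.default_rng(0) if rng is None else rng
    P = a_bar.size
    e = rng.standard_normal(P) + 1j * rng.standard_normal(P)
    e *= error_norm / np.linalg.norm(e)        # scale e_d to the value listed in Table 1
    a = a_bar + e
    return np.sqrt(P) * a / np.linalg.norm(a)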
Figure 4. Comparison of coarse images generated by different beamforming methods: (a)–(c) AAR of UAVs 1–3; (d)–(f) mismatched AAR of UAVs 1–3; (g)–(i) QCQP-based AAR of UAVs 1–3. Plots (a)–(c) are generated without model mismatch; plots (d)–(i) are generated in the presence of model mismatch.

The LASSO-based method22 is used for comparison. For the qth UAV node, the LASSO-based image is
obtained as

ũ = arg min_u (1/2) ‖z_q − Ψ̄_q u‖_2^2 + λ ‖u‖_1,    (16)

where z_q = vec(R̂_rr^q) ∈ C^{P^2}, q = 1, · · · , Q, and Ψ̄_q = [ψ̄_q^1, ψ̄_q^2, · · · , ψ̄_q^M] ∈ C^{P^2×M} is the dictionary matrix, with
ψ̄_q^m = (ā_q^m)^* ⊗ ā_q^m. In Fig. 3, the LASSO-based images obtained at the three UAVs are depicted. It is noted
that none of the UAVs could detect the actual sources since the dictionary matrix is not accurate due to model
mismatch.
In Fig. 4, coarse images are generated in three different cases, i.e., AAR in the absence of model mismatch,
AAR in the presence of model mismatch, and QCQP-based AAR in the presence of model mismatch. We refer to
these three cases as AAR, mismatched AAR, and QCQP-based AAR for brevity in the figures. Fig. 4(a) through
Fig. 4(c) are the AAR images in the absence of model mismatch. It is observed that the two sources are not
clearly resolved from Fig. 4(a) through Fig. 4(c), even without model mismatch, but the peaks correctly identify
the source locations. In the presence of model mismatch, the two actual sources are not exactly aligned with
the peaks, as shown in Fig. 4(d) through Fig. 4(f). Fig. 4(g) through Fig. 4(i) show the images obtained from
the QCQP-based beamforming technique, where the effects of model mismatch have been effectively mitigated.
Figure 5. Proposed two-stage image formation: (a) stage 1 image obtained from AAR images; (b) stage 1 image obtained from mismatched AAR images; (c) stage 1 image obtained from QCQP-based AAR images; (d) stage 2 image obtained from AAR images; (e) stage 2 image obtained from mismatched AAR images; (f) stage 2 image obtained from QCQP-based AAR images (proposed method). Plots (a) and (d) are generated without steering vector mismatch; plots (b), (c), (e), and (f) are generated in the presence of steering vector mismatch.

Fig. 5 presents the results of the two-stage image formation methods with AAR beamforming images in the
same three cases. Fig. 5(a) and Fig. 5(d) present the output images for stage 1 and stage 2 in the absence
of model mismatch. It is observed that stage 2 yields a high-resolution image which correctly locates the two
actual sources. Fig. 5(b) and Fig. 5(e) present the stage 1 output image using mismatched AAR images, and the
associated stage 2 output image, respectively. No source is correctly detected. Fig. 5(c) and Fig. 5(f) show the
output image of stage 1 and stage 2 using QCQP-based beamforming images in the presence of model mismatch.
We can see that, with the steering vectors corrected via the QCQP-based beamforming technique, both sources
are correctly detected.

5. CONCLUSION
In this paper, we proposed a two-stage high-resolution image reconstruction technique in the presence of model
mismatch. In the first stage, the QCQP-based beamforming technique is individually applied at each UAV to
robustly obtain coarse-resolution images, whereas a re-weighted l1-norm minimization is utilized at the final UAV to
obtain a high-resolution image. During the information passing, only the coarse image is transmitted through the
UAV network, so that the overall communication traffic and latency do not increase as the network size scales.

REFERENCES
[1] Ryan, A., Zennaro, M., Howell, A., Sengupta, R., and Hedrick, J. K., “An overview of emerging results in
cooperative UAV control,” in [Proceedings of 2004 43rd IEEE Conference on Decision and Control (CDC)],
1, 602–607 (2004).
[2] Palat, R. C., Annamalau, A., and Reed, J., “Cooperative relaying for ad-hoc ground networks using swarm
UAVs,” in [Proceedings of 2005 IEEE Military Communications Conference (MILCOM) ], 1588–1594 (2005).
[3] Li, X. and Zhang, Y. D., “Multi-source cooperative communications using multiple small relay UAVs,” in
[Proceedings of 2010 IEEE Globecom Workshops ], 1805–1810 (2010).
[4] Chalise, B. K., Zhang, Y. D., and Amin, M. G., “Multi-beam scheduling for unmanned aerial vehicle net-
works,” in [Proceedings of 2013 IEEE/CIC International Conference on Communications in China (ICCC) ],
442–447 (2013).
[5] Hayat, S., Yanmaz, E., and Muzaffar, R., “Survey on unmanned aerial vehicle networks for civil applications:
A communications viewpoint,” IEEE Communications Surveys & Tutorials, 18(4), 2624–2661 (2016).
[6] Zhang, Y. D., Amin, M. G., and Himed, B., “Structure-aware sparse reconstruction and applications to
passive multistatic radar,” IEEE Aerospace and Electronic Systems Magazine, 32(2), 68–78 (2017).
[7] Subedi, S., Zhang, Y. D., Amin, M. G., and Himed, B., “Group sparsity based multi-target tracking
in passive multi-static radar systems using Doppler-only measurements,” IEEE Transactions on Signal
Processing, 64(14), 3619–3634 (2016).
[8] Ji, S., Dunson, D., and Carin, L., “Multitask compressive sensing,” IEEE Transactions on Signal Process-
ing, 57, 92–106 (Jan. 2009).
[9] Wu, Q., Zhang, Y. D., Amin, M. G., and Himed, B., “Multi-task Bayesian compressive sensing exploiting
intra-task dependency,” IEEE Signal Processing Letters, 22, 430–434 (April 2015).
[10] Jacob, L., Obozinski, G., and Vert, J.-P., “Group lasso with overlap and graph lasso,” in [Proceedings of
the 26th Annual International Conference on Machine Learning ], 433–440 (2009).
[11] Ahmed, A., Zhang, S., and Zhang, Y. D., “Multi-target motion parameter estimation exploiting collaborative
UAV network,” in [ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP) ], 4459–4463 (May 2019).
[12] Mao, X., Zhang, Y. D., and Amin, M. G., “Low-complexity sparse reconstruction for high-resolution multi-
static passive SAR imaging,” EURASIP Journal on Advances in Signal Processing, 2014, 104 (Jul 2014).
[13] Gu, Y. and Leshem, A., “Robust adaptive beamforming based on interference covariance matrix recon-
struction and steering vector estimation,” IEEE Transactions on Signal Processing, 60, 3881–3885 (July
2012).
[14] Zhang, S., Gu, Y., Wang, B., and Zhang, Y. D., “Robust astronomical imaging under coexistence with
wireless communications,” in [Proceedings of 2017 51st Asilomar Conference on Signals, Systems, and Com-
puters], 1301–1305 (2017).
[15] Zhang, S., Gu, Y., and Zhang, Y. D., “Robust astronomical imaging in the presence of radio frequency
interference,” Journal of Astronomical Instrumentation, 8(1), 1940012 1–15 (2019).
[16] Grant, M. and Boyd, S., “CVX: Matlab software for disciplined convex programming.”
http://cvxr.com/cvx/ (Dec. 2018).
[17] Ben-David, C. and Leshem, A., “Parametric high resolution techniques for radio astronomical imaging,”
IEEE Journal of Selected Topics in Signal Processing 2(5), 670–684 (2008).
[18] Comite, D., Ahmad, F., Liao, D., Dogaru, T., and Amin, M. G., “Multiview imaging for low-signature target
detection in rough-surface clutter environment,” IEEE Transactions on Geoscience and Remote Sensing, 55,
5220–5229 (Sep. 2017).
[19] Zhang, S., Gu, Y., Won, C.-H., and Zhang, Y. D., “Dimension-reduced radio astronomical imaging based on
sparse reconstruction,” in [Proceedings of 2018 IEEE 10th Sensor Array and Multichannel Signal Processing
Workshop (SAM)], 470–474 (2018).
[20] Zhang, S., Gu, Y., Barott, W. C., and Zhang, Y. D., “Improved radio astronomical imaging based on sparse
reconstruction,” in [Proceeding of SPIE on Compressive Sensing VII: From Diverse Modalities to Big Data
Analytics ], 10658, 106580O (2018).
[21] Candès, E. J., Wakin, M. B., and Boyd, S. P., “Enhancing sparsity by reweighted l1 minimization,” Journal
of Fourier Analysis and Applications, 14, 877–905 (Dec. 2008).
[22] Tibshirani, R., “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society:
Series B (Methodological), 58(1), 267–288 (1996).
