IET Signal Processing - 2021 - Miron - Tensor Methods For Multisensor Signal Processing
IET Signal Processing - 2021 - Miron - Tensor Methods For Multisensor Signal Processing
Invited Paper
processing
Revised 16th November 2020
Accepted on 24th November 2020
E-First on 2nd March 2021
doi: 10.1049/iet-spr.2020.0373
www.ietdl.org
Sebastian Miron1 , Yassine Zniyed1, Rémy Boyer2, André Lima Ferrer de Almeida3, Gérard Favier4,
David Brie1, Pierre Comon5
1Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France
2Université de Lille, CNRS, CRIStAL, 59655 Lille, France
3Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza, CE 60440–900, Brazil
4Université de Côte d'Azur, CNRS, I3S Laboratory, Sophia Antipolis, France
5Université Grenoble Alpes, CNRS, GIPSA-Lab, 38000 Grenoble, France
E-mail: [email protected]
Abstract: Over the last two decades, tensor-based methods have received growing attention in the signal processing
community. In this work, the authors proposed a comprehensive overview of tensor-based models and methods for multisensor
signal processing. They presented for instance the Tucker decomposition, the canonical polyadic decomposition, the tensor-
train decomposition (TTD), the structured TTD, including nested Tucker train, as well as the associated optimisation strategies.
More precisely, they gave synthetic descriptions of state-of-the-art estimators as the alternating least square (ALS) algorithm,
the high-order singular value decomposition (HOSVD), and of more advanced algorithms as the rectified ALS, the TT-SVD/TT-
HSVD and the Joint dImensionally Reduction and Factor retrieval Estimator scheme. They illustrated the efficiency of the
introduced methodological and algorithmic concepts in the context of three important and timely signal processing-based
applications: the direction-of-arrival estimation based on sensor arrays, multidimensional harmonic retrieval and multiple-input–
multiple-output wireless communication systems.
IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709 693
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
A1, 1B A1, 2B … A1, J B Q
∏ Is
A2, 1B A2, 2B … A1, J B unfoldqT = reshape T′, Iq,
s=1
, (2)
A⊠B= , Iq
⋮ ⋮ ⋮ ⋮
AI , 1B AI , 2B … AI , J B
with
if the matrix A is of size I × J. Once the bases of linear spaces are
well defined, the tensor product between operators can be T′ = permute T, [q, 1, 2, …, q − 1, q + 1, q + 2, …, Q] ,
represented by Kronecker products [9]. The Khatri–Rao (column-
wise Kronecker) product between two matrices with the same where permute is a native MATLAB function that rearranges the
number J of columns is written as: dimensions of the Q-order tensor T.
A ⊙ B = a1 ⊠ b1, …a2 ⊠ b2, …aJ ⊠ bJ ,
3 Tensor decompositions and algorithms
if a j and b j represent the columns of A and B, respectively. We introduce hereafter the tensor decompositions (Section 3.1)
Analogously to the Frobenius norm of a matrix, the Frobenius used in the applications presented in Sections 4, 5 and 6, as well as
norm of a tensor T is defined as the square root of the sum of all some basic algorithms to compute these decompositions (Section
the squares of its elements, i.e.: 3.2).
The Tucker decomposition along with two of its variants (high-
∥ T ∥F = ∑ ∑ ∑ Ti j k . 2 order SVD and partial Tucker) are introduced in Section 3.1.1.
Tucker decomposition is a generic tensor tool that decomposes a
, ,
i j k
tensor into a set of non-structured factor matrices and a core tensor.
Contraction between two tensors A simple algorithm for computing the high-order SVD (Tucker
p decomposition with semi-unitary factor matrices) for third-order
The product ∙ between two tensors A and ℬ of size I1 × ⋯ × IQ tensors is given in Section 3.2.2. The canonical polyadic
q
decomposition (CPD) (Section 3.1.2) – probably the most widely
and J1 × ⋯ × JP, respectively, where Iq = J p, is a tensor of order
used tensor decomposition – can be seen as a Tucker
(Q + P − 2) denoted by: decomposition with a diagonal core. Its existence and uniqueness
p
issues are discussed in Section 3.1.3. The pseudo-code for the
[A ∙ ℬ]i1, , iq − 1, iq + 1, , iQ, j1, , jp − 1, jp + 1, , jP popular alternating least squares (ALSs) algorithm for estimating
q the CPD is given in Section 3.2.1. Section 3.1.4 introduces the
Iq tensor-train decomposition (TTD), a tool designed to efficiently
= ∑ [A]i [ℬ] j1, , jp − 1, k, jp + 1, , jP . handle high-order tensors (order higher than 3), by splitting them
1, , iq − 1, k , iq + 1, , iQ
k=1 into an interconnected set (‘train’) of lower-order tensors. A
particular class of TTD, the Structured Tensor-Train (STT) models,
Tensor reshaping is presented in Section 3.1.6. The tensor-train SVD (TT-SVD)
Tensor reshaping transforms a Q-order tensor T of dimensions algorithm for computing TTD is illustrated in Section 3.2.3. A link
I1 × ⋯ × IQ into a matrix T(q) having the product of the first q between the TTD and the CPD of high-order tensors (TT-CPD) is
dimensions of T, say I1⋯Iq, as the number of rows, and the developed in Section 3.2.4; the JIRAFE (Joint dImensionality
product of the remaining dimensions, Iq + 1⋯IQ, as the number of Reduction And Factors rEtrieval) method for estimating the
columns. In MATLAB, this reshaping can be obtained using the parameters of the TT-CPD model is also detailed. Two additional
tools used in MIMO application (Section 6), the least squares
native reshape function, such that
Kronecker (Section 3.2.5) and least squares Khatri–Rao (Section
q Q 3.2.6) factorisations, conclude this section.
T(q) = reshape T, ∏ Is, ∏ Is . (1)
s=1 s=q+1 3.1 Tensor decompositions
For example, let T be a fourth-order tensor of dimensions Any matrix can always be diagonalised by the congruent
2 × 2 × 2 × 2, defined by: transformation. In addition, the two linear transforms involved can
be imposed to be unitary: this is the singular value decomposition
1 3 5 7 (SVD). When we move from matrices to tensors, it is generally
T(: , : , 1, 1) = , T(: , : , 2, 1) = , impossible to use unitary transforms and obtain a diagonal tensor.
2 4 6 8 Consequently, depending on which property is desired, we end up
9 11 13 15 with two different decompositions. This is what is explained in the
T(: , : , 1, 2) = , T(: , : , 2, 2) = . next two subsections, where we limit ourselves to third-order
10 12 14 16
tensors to ease the presentation.
Matrix T(2) of dimensions 4 × 4 is then given by:
3.1.1 Tucker decomposition, HOSVD, and multilinear
1 5 9 13 rank: Given a tensor T of dimensions I × J × K, it is always
possible to find three matrices A, B and C of dimensions I × R1,
2 6 10 14
T(2) = reshape T, 4, 4 = . J × R2 and K × R3, respectively, such that
3 7 11 15
4 8 12 16 R1 R2 R3
Ti, j, k = ∑ ∑ ∑ Ai mB j nCk p Gm n p,
, , , , , (3)
The so-called flattening or unfolding operation is a special case of m=1n=1 p=1
the reshaping operation [9]. The most used flattenings are those
keeping one of the original dimensions, say Nq, as the number of where R1, R2, R3 are minimal and R1 ≤ I, R2 ≤ J, R3 ≤ K. Tensor G,
rows. Hence, the so-called qth matrix unfolding of T is of often called core tensor, is of dimensions R1 × R2 × R3. The triplet
dimension Iq × I1…Iq − 1 Iq + 1 …IQ. Compared to the flattening of minimal values of R1, R2, R3 forms the multilinear rank of T.
operation, the reshape function can be considered as a more This is referred to as the Tucker decomposition of T, and can be
general flattening operation, in the sense that written in a more compact way as:
T= A, B, C; G . (4)
694 IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
Each Rℓ is uniquely defined, and corresponds to the rank of the The uniqueness of the CPD (8) should not be confused with its
linear operator associated with the ℓth matrix unfolding of T, array representation (7). In fact, the latter is never unique, since
unfoldℓT. The point is that the Tucker decomposition is not expressing a rank-1 tensor as the tensor product of N vectors is
unique, for it is defined up to three invertible matrices subject to N − 1 scaling indeterminacies. For instance,
M1, M2, M3 , because: D = a ⊗ b ⊗ c can also be written as αa ⊗ βb ⊗ γc provided
αβγ = 1. This is precisely the difference between a tensor space
A, B, C; G = AM1−1, BM2−1, CM3−1; G′ and a product of linear spaces [9, 13, 14]. This is the reason why
the wording of ‘essential uniqueness’ is sometimes found in the
literature; it expresses the fact that there are N − 1 scaling
if G′ = M1, M2, M3; G . Yet, as in the SVD of matrices, it is
indeterminacies, and that the order of summation is subject to
possible to impose that the core tensor is obtained via semi-unitary
permutation because the addition is commutative.
transforms {U, V, W} as:
With Definition (2), the CPD can be rewritten in matrix form.
For a third-order rank-R tensor T = A, B, C; S for example, the
T = U, V, W; G , (5)
three flattened representations of the CPD are [15]:
where UHU = IR1, V HV = IR2 and W HW = IR3. Equation (5)
unfold1T = AS(C ⊙ B)T, (9)
defines the high-order SVD (HOSVD), sometimes referred to as
multilinear SVD of tensor T [10].
unfold2T = BS(C ⊙ A)T, (10)
Partial Tucker decomposition: We present next a variant of the
Tucker decomposition for an Q-order tensor X ∈ ℂI1 × ⋯ × IQ, with
unfold3T = CS(B ⊙ A)T, (11)
factor matrices A(q) ∈ ℂIq × Rq whose Q − Q1 last ones are equal to
identity matrices IIq of order Iq, for q = Q1 + 1, ⋯, Q. This so- where S denotes the diagonal matrix with entries σ(r) = Sr, r, r,
called Tucker-(Q1, Q) decomposition is compactly written as ∀1 ≤ r ≤ R.
A(1), …, A(Q1), IIQ1 + 1, …, IIQ; G , where the core tensor G is of One of the major interests in the CPD (8) lies in its uniqueness.
In particular, it allows to relax the orthogonality constraint
dimensions R1 × ⋯ × RQ1 × IQ1 + 1 × ⋯ × IQ, which induces Rq = Iq
(necessary in the case of matrices to achieve some form of
for q = Q1 + 1, …, Q. See [11] for more details. For instance, the uniqueness), which is often incompatible with physical reality.
standard Tucker2 and Tucker1 decompositions correspond to More precisely, several uniqueness conditions have been derived in
(Q1, Q) = (1, 3) and (Q1, Q) = (2, 3), respectively. the literature. We shall quote two of them. A sufficient condition is
that [16–18]:
3.1.2 Exact CP decomposition, rank, uniqueness: Of course,
there is no reason that tensor G in (5) be diagonal; this is clear by 2R ≤ krank{A} + krank{B} + krank{C} − 2, (12)
just looking at the number of degrees of freedom of both sides. On
the other hand, if the unitary constraint of matrices {U, V, W} is where krank{A} denotes the so-called Kruskal rank of A [9, 16];
relaxed, it is possible to decompose any tensor T as: by definition [A matrix M has Kruskal rank R if any subset of R
columns forms a full rank matrix. Remember that M has rank R if
T= A, B, C; S , (6) there exists at least one subset of R columns forming a full rank
matrix.], the krank is always smaller than or equal to the usual
where tensor S is diagonal. This is known as the Canonical matrix rank. If entries of A are seen as randomly drawn according
Polyadic (CP [The acronym CP also stands for Candecomp/Parafac to an absolutely continuous distribution, krank{A} can be replaced
in the literature, due to two contributions [1, 2] in which this tensor by the smallest of its two dimensions. In the latter case, the
decomposition has been independently rediscovered, years after the condition is referred to as generic [19, 20].
original publication [12].]) decomposition (CPD). Note that now, A second generic condition ensuring uniqueness of the CPD is
not only matrices A, B, C are not semi-unitary, but their number given by the bound of the so-called expected rank [21–23]:
of columns, R, may exceed the number of their rows. The minimal
value of R such that (6) holds exactly is called the rank of the R1R2R3
R≤ − 1, (13)
tensor T. The explicit writing of (6) in terms of array entries is: R1 + R2 + R3 − 2
IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709 695
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
and the TT-ranks are all equal to the canonical rank R.
1 1 1 1
ℐQ, R = IR ∙ ℐ3, R ∙ ⋯ ∙ ℐ3, R ∙ IR . (22)
2 3 Q−1 Q
Fig. 1 TT decomposition of a Q-order tensor
Substituting the above TTD into the CPD, we get
increases the rank [9], even for matrices. This being said, it has
been proved in [22] that the best non-negative low-rank tensor 1 1 1
approximation of a non-negative tensor is almost surely unique. X = (IR ∙ ℐ3, R ∙ ⋯ ∙ IR) ∙ P1… ∙ PQ (23)
2 3 Q 1 Q
One can even look at the conditions under which the best low-rank
approximation admits a unique CPD [27], but this is more by expressing the entries of X and reorganising them, we can
involved. equivalently write
To conclude, the best low-rank approximate of a tensor does not
always exist, and this is still often ignored in the literature. 1 1 1
X = (P1 IR) ∙ (ℐ3, R ∙ P2) ∙ ⋯ ∙ (IR PQT ) . (24)
2 2 3 Q
3.1.4 Tensor-train decomposition and TT-ranks: A Q-order
tensor of size I1 × ⋯ × IQ admits a decomposition into a train of By identifying the TT-cores in (24) with those in (14), we can
tensors [28] if deduce the relations (19), (20) and (21). Then, it is straightforward
to conclude that the TT-ranks are all identical and equal to the
1 1 1 1
X = G(1) ∙ G(2) ∙ G(3) ∙ … ∙ G(Q − 1) ∙ G(Q),
1
(14) canonical rank R. □
2 3 4 Q−1 Q Thus, conditionally to the knowledge of the TT-cores, it is
theoretically possible to recover the factors of the CPD by a one-to-
where the TT-cores G(1), G(q)(2 ≤ q ≤ Q − 1), and G(Q) are, one methodology. In Section 3.2.4, we present the so-called
respectively, of size I1 × R1, Rq − 1 × Iq × Rq and RQ − 1 × IQ. The JIRAFE framework, used for that aim.
Q − 1 dimensions {R1, , RQ − 1} are referred to as the TT-ranks with
boundary conditions R0 = RQ = 1. 3.1.6 Structured TT models: In this subsection, we present a
The idea behind the TTD is to transform a high Q-order tensor particular class of TT models, the so-called STT models. These
into a set of much lower third-order tensors, which allows to break models are composed of a train of third-order tensors, each tensor
the ‘curse of dimensionality’ [29]. Indeed, just like the CPD, the being represented by means of a tensor decomposition like CP,
storage cost of the TTD scales linearly with the order Q. A graph- Tucker or generalised Tucker.
based representation of the TTD of a Q-order tensor X is given in For a Q-order tensor X ∈ ℂI1 × I2⋯ × IQ, an STT model can be
Fig. 1. It is straightforward to see that the TTD is not unique since written as:
[30]
1 1 1
T
X = T(1) ∙ T(2) ∙ ⋯ ∙ T(Q − 2), (25)
[X]i1, ⋯, iQ = a1(i1) A2(i2)⋯AQ − 1(iQ − 1)aQ(iQ) (15) 3 4 Q−1
696 IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
R3 R4
tr(2)2, i3, i4 = ∑ ∑ gr (41)
(2) (2) (3)
a a
3, i3, r4 r2, r3 i4, r4
r3 = 1 r4 = 1
R1 R2
ti(3)
1, i2, r3
= ∑ ∑ gr (1) (1)
a (2)
a
1, i2, r2 i1, r1 r2, r3
(42)
r1 = 1 r2 = 1
Fig. 2 NTT(4) model for a fourth-order tensor
R4
tr(4)3, i3, i4 = ∑ gr (43)
(2) (3)
a
3, i3, r4 i4, r4
.
r4 = 1
The NTT(4) model (31) can be viewed as the nesting of the third-
order Tucker models T(3) and T(2) which share the matrix factor
A(2). It can also be interpreted as a contraction of a Tucker-(1,3)
model with a Tucker-(2,3) model, along their common mode:
1 1
Fig. 3 Nested CP model for a fourth-order tensor X = T(1) ∙ T(2) = T(3) ∙ T(4) . (44)
3 3
1 1
X = T(1) ∙ T(2) ∙ T(3) . (30) These contraction operations correspond to summing the entries of
3 4
the third-order tensors T(1) and T(2), or T(3) and T(4), along their
Comparing (29) and (30) of the STuT(5) model with (14) of the TT common mode r2, or r3, as follows:
model with Q = 5, we can conclude that the STuT model R2
corresponds to a TT model for which each tensor of the train
∑ ti (45)
(1) (2)
satisfies a Tucker-(1,3) or Tucker-(2,3) decomposition. xi1, i2, i3, i4 = t
1, i2, r2 r2, i3, i4
r2 = 1
Such an STuT model was derived for the fourth-order tensor of
received signals in a cooperative wireless MIMO communication R3
system [31], and then generalised to a Q-order tensor in the case of (46)
multi-hop MIMO relay systems [32]. This model can also be
= ∑ ti (3)
t(4)
1, i2, r3 r3, i3, i4
.
r3 = 1
interpreted as a nested Tucker train (NTT) model, with two
successive tensors sharing a common factor.
Defining the fourth-order tensor C ∈ ℂR1 × I2 × I3 × R4 such that:
Thus, in the case of a fourth-order tensor X ∈ ℂI1 × I2 × I3 × I4, as
illustrated by means of Fig. 2, with R2 R3
A(1) ∈ ℂI1 × R1, G(1) ∈ ℂR1 × I2 × R2, A(2) ∈ ℂR2 × R3, G(2) ∈ ℂR3 × I3 × R4, A(3), cr1, i2, i3, r4 = ∑ ∑ gr (1)
a(2) (2)
g
1, i2, r2 r2, r3 r3, i3, r4
, (47)
∈ ℂI4 × R4 r2 = 1 r3 = 1
the NTT(4) model can be described by means of the following
scalar equation: the NTT(4) model (31) can also be interpreted as the following
Tucker-(2,4) model [33]:
R1 R2 R3 R4
xi1, i2, i3, i4 = ∑ ∑ ∑ ∑ ai (1)
g(1) (2)
a g(2)
a (3)
1, r1 r1, i2, r2 r2, r3 r3, i3, r4 i4, r4
. (31) X = C ∙ A(1) ∙ A(3), (48)
r1 = 1 r2 = 1 r3 = 1 r4 = 1 1 4
Let us define the following third-order Tucker models: or in the scalar form:
R1 R4
T(1) = A(1), II2, IR2; G(1) (32)
xi1, i2, i3, i4 = ∑ ∑ cr 1, i2, i3, r4
ai(1) a(3) .
1, r1 i4, r4
(49)
r1 = 1 r4 = 1
(1) (1) I1 × I2 × R2
=G ∙A ∈ℂ (33)
1
The nested CP model introduced in [34], and exploited in [35] in
the context of MIMO relay systems, is a special case of the nested
(34)
(2) (2) (3) (2)
T = A , II3, A ; G Tucker model (31), defined as follows:
These Tucker models can be written in an element-wise form as: (A(1), G(1), A(2), G(2), A(3)) ↔ (A(1), G(1), A(2), G(2), A(3)) . (52)
R1
More generally, one can define a STT model (25) for which each
ti(1) ∑ gr (40)
(1) (1)
1, i2, r2
= a
1, i2, r2 i1, r1 elementary tensor of the train has a CP decomposition. Thus, for a
r1 = 1
Q-order tensor X, (25) then becomes a structured CP train (SCPT)
such that:
IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709 697
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
T
T(1) = A(1), G(1), A(2) ; ℐR1 ∈ ℂI1 × I2 × R2
T
T(q) = A(q), G(q), A(q + 1) ; ℐRq
∈ ℂRq − 1 × Iq + 1 × Rq + 1 for 2 ≤ q ≤ Q − 3
T(Q − 2) = A(Q − 2), G(Q − 2), A(Q − 1); ℐRQ − 2
∈ ℂRQ − 2 × IQ − 1 × IQ,
with the matrix factors A(1) ∈ ℂI1 × R1, A(q) ∈ ℂRq − 1 × Rq, for
2 ≤ q ≤ Q − 1, A(Q − 1) ∈ ℂIQ × RQ − 2, and G(q) ∈ ℂIq + 1 × Rq, for Fig. 4 Algorithm 1 ALS algorithm
1 ≤ q ≤ Q − 2. The SCPT model of order Q can also be written in
the following scalar form:
R1 RQ − 2
xi1, ⋯, iQ = ∑⋯ ∑ ai(1) g(1) a(2) g(2)
1, r1 i2, r1 r1, r2 i3, r2
r1 = 1 rQ − 2 = 1
× ar(3)2, r3gi(3)
4, r3
⋯gi(QQ−−12), rQ − 2ai(QQ, r−Q1)− 2 .
3.2 Algorithms
3.2.1 ALSs algorithm: There are many algorithms to compute a
CP decomposition. By far, the most used one is the ALSs. ALS
was proposed in [1, 2], and is considered today as the ‘workhorse’
for CPD computation. It is a simple algorithm that fixes, iteratively,
all but one factor, which is then updated by solving a linear least
squares problem. In Algorithm 1 (Fig. 4), we present the ALS
algorithm for a third-order tensor T ≈ A, B, C; ℐ3, R . The
generalisation for a Q-order case is straightforward.
Generally CritStop is based on the evaluation of the fitting
error ∥ T − A, B, C; ℐ3, R ∥F.
Algorithm 2 HOSVD algorithm: It is worth noting that the TT-SVD in its current state cannot be
parallelised, which is a problem when we deal with very high order
Input: Third-order tensor T, multilinear rank {R1, R2, R3} tensors. An alternative method to compute the TTD is to consider
Output: U, V, W and G different reshapings than those considered in the TT-SVD. Indeed,
(1) U ← R1 first left singular vectors of unfold1T the recently proposed TT-HSVD [30] algorithm, for tensor-train
(2) V ← R2 first left singular vectors of unfold2T hierarchical SVD, is a hierarchical algorithm for TTD, that
suggests to combine more than one mode in each dimension of the
(3) W ← R3 first left singular vectors of unfold3T reshaped matrices, i.e. instead of considering the matrix X(1) of size
(4) G = T ∙ U H ∙ V H ∙ W H I1 × I2I3I4 , we can for example use as first unfolding the matrix
1 2 3
X(2) of size I1I2 × I3I4 . Note that using this strategy allows to
parallelise the decomposition, after each SVD, across several
3.2.3 TTD computation with the TT-SVD algorithm: The TT- processors, which suits very well high order tensors. Moreover, the
SVD algorithm has been introduced in [28]. This algorithm choice of the best reshaping strategy when the order is very high is
minimises in a sequential way the following LS criterion: discussed in [30] in terms of the algorithmic complexity. Indeed,
2
[30] shows that reshapings with the ‘most square’ matrix (i.e.
1 1 1 1
ψ(X) = ∥ X − A1 ∙ A2 ∙ ⋯ ∙ AQ − 1 ∙ AQ ∥ . (53) matrices with more balanced row and column dimensions), leads to
2 3 Q−1 Q F the lower computational complexity.
Remark that from an estimation point of view, the TT-SVD and
In Fig. 5, we present in a schematic way the TT-SVD algorithm the TT-HSVD algorithms suffer from the lack of uniqueness of the
applied on a fourth-order tensor X. As illustrated, the TT-SVD TT-cores described in Section 3.1.4. Indeed, as the latent matrices
computes in a sequential way Q − 1 SVDs on matrix reshapings of M1, …, MQ − 1 are unknown, the true TT-cores remain unknown.
the Q-order original tensor. Note that at each step, we apply the In the next section, we propose a methodology to solve this
truncated SVD on the reshaped matrices V(2) (q)
of size problem in the important context where the observed tensor
follows a Q-order CPD of rank R.
RqIq + 1 × Iq + 2⋯IQ to recover matrices U (q + 1)
and V (q + 1), this
latter containing the product of the pseudo-diagonal singular values
3.2.4 JIRAFE principle for CPD: In this section, we present a
matrix and the right singular vectors matrix.
TT-based methodology [36], JIRAFE, for high-order CPD factors
698 IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
min ∥ A1 − P1 M1−1 ∥F + ∥ AQ − MQ − 1 PQ ∥F (57)
M, ℙ
Q−2
+ ∑ ϕ(Aq) , (58)
q=2
(i) Reduce the dimensionality of the original factor retrieval where Λk is the scaling ambiguity for the k-mode factor Pk.
problem by breaking the difficult multidimensional optimisation
problem into a collection of simpler optimisation problems on 3.2.5 Least squares Kronecker factorisation: Consider the
small-order tensors. This step is performed using the TT-SVD following minimisation problem
algorithm.
(ii) Design a factor retrieval strategy by exploiting (or not) the min∥ X − A ⊠ B ∥F, (61)
A, B
coupled existing structure between the first and third factors of two
consecutive TT-cores. Here, the goal is to minimise a sum of
coupled least-square (LS) criteria. where A ∈ ℂI2 × R2, B ∈ ℂI1 × R1 and X = A ⊗ B + V ∈ ℂI1I2 × R1R2,
and V represents a zero-mean uncorrelated noise term. The
In [36], the structures of the TT-cores associated with the TT-SVD solution of the problem in (62), is based on a rank-one matrix
algorithm are described and a family of estimation algorithms is approximation (via SVD) of X (a permuted version of X, the
proposed. This methodology is based on the following result. construction of which was proposed in [38]). The problem in (62)
becomes
Theorem 2: If the data tensor follows a Q-order CPD of rank R
parametrised by Q full column rank factors P1, …, PQ . The TT- min∥ X¯ − b ⊗ a ∥F, (62)
a, b
SVD algorithm recovers the TT-cores such that:
meaning to find the nearest rank-one matrix to X¯ , where
A1 = P1 M1−1, (54)
a = vec(A) ∈ ℂI2R2 × 1 and b = vec(B) ∈ ℂI1R1 × 1. In [39], the
authors proposed a solution generalising [38] to a Kronecker
Aq = ℐ3, R ∙ Mq − 1 ∙ Pq ∙ Mq− T, where 2 ≤ q ≤ Q − 1 (55) product involving N factor matrices. Let us consider the case
1 2 3
N = 3, usually encountered in practice. The problem then becomes
AQ = MQ − 1 PQT , (56)
min∥ X − A ⊠ B ⊠ C ∥F, (63)
A, B
where Mq is a non-singular R × R change of basis matrix.
This means that if a Q-order tensor admits a rank-R CPD, then where A ∈ ℂI3 × R3, B ∈ ℂI2 × R2 and C ∈ ℂI1 × R1. The problem in
its TTD involves a train of (Q − 2) third-order CPD(s) and has all (64) now becomes
identical TT-ranks such as R1 = ⋯ = RQ − 1 = R. The factors can be
derived straightforwardly from the TT-cores up to two change-of- min ∥ X̄ − c ⊗ b ⊗ a ∥F, (64)
basis matrices. a, b, c
Remark 1: Note that each TT-core for 2 ≤ q ≤ Q − 1 follows a where a = vec(A) ∈ ℂI3R3 × 1, b = vec(B) ∈ ℂI2R2 × 1,
CPD coupled with its adjacent TT-cores (see Fig. 6). c = vec(C) ∈ ℂI1R1 × 1. We have that x̄ = vec X¯ and
The above theorem and the remark allow us to derive the factor
retrieval scheme, called JIRAFE and presented in Algorithm 3 X = T x̄ ∈ ℂI1R1 × I2R2 × I3R3, where the operator T{ ⋅ } maps the
(Fig. 7) as a pseudo-code which minimises the following criterion: elements of x̄ into X̄, as follows:
IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709 699
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
x̄q1 + (q2 − 1)Q1 + (q3 − 1)Q1Q2 → X̄q1, q2, q3 (65) approximation problem R times in parallel, one for each column xr,
T{ ⋅ } r = 1, …, R.
where qi = {1, …, Qi} and Qi = IiRi, with i = {1, 2, 3}. Otherwise
4 Tensor-based DOA estimation with sensor
stated, we have X = reshape x, Q1, Q2, Q3 . Hence, finding the
matrix triplet A, B, C that solves (63) is equivalent to finding the arrays
vector triplet a, b, c that solves (64), i.e. the solution of a A fundamental problem in array processing is the estimation of the
Kronecker approximation problem can be recast as the solution to a DOAs for multiple sources impinging on an array of sensors.
rank-one tensor approximation problem, for which effective Assume that P narrow-band far-field sources are impinging on an
algorithms exist in the literature (see, e.g. [40–42]). array of L identical sensors (P < L, in general). The direction of
arrival of a source p in a Cartesian coordinate system OXYZ
3.2.6 Least squares Khatri–Rao factorisation: Consider the associated with the array is given by the unitary vector
following minimisation problem T
kp = sin θ pcos ϕp sin θ psin ϕp cos θ p , where θ and ϕ are the
elevation and azimuth angles, respectively, as illustrated on Fig. 8.
min∥ X − A ⊙ B ∥F, (66) With these notations, a snapshot of the array at time instant t is
A, B
given by:
where A ∈ ℂI × R, B ∈ ℂJ × R and X = A ⊙ B + V ∈ ℂJI × R, and V P
represents the zero-mean uncorrelated noise term. Note that y(t) = ∑ a(kp)sp(t) + b(t) = Ax(t) + b(t), (70)
problem (66) can be rewritten as p=1
Y = [y(t1), …, y(tK )]
where X¯ r ∈ ℂJ × I . Defining UrΣr BrH as the singular value
= A(k1, …, kP)[x(t1), …, x(tK )] + B (71)
decomposition (SVD) of Xr, estimates for ar ∈ ℂI and br ∈ ℂJ , T
(r = 1, …, R) can be obtained by truncating the SVD of Xr to its = A(k1, …, kP)S + B
dominant eigenmodes, i.e. [43]
with S = s1, …, sP , a K × P matrix gathering on its columns the
T
^ (1) (1)
br = σr ur and ar = σr vr , ^ (1) (1) (1) ∗
(69) K time samples for the P sources, sp = sp(t1), …, sp tK , with
p = 1, …, P and B = b(t1), …, b tK , a (L × K) noise matrix. The
where ur(1) ∈ ℂJ and vr(1) ∈ ℂI are the dominant first left and right DOA estimation problem consists in finding k1, …, kP from the
singular vectors of Ui and Vi, respectively, while σr(1) denotes the data Y.
The use of multilinear algebra to solve this problem is inspired
largest singular value of X¯ r. Hence, estimates of the full matrices A
by the principle of ESPRIT algorithm introduced by Roy et al. in
and B that minimise (67) are obtained by repeating such a rank-1 [44, 45]. The idea is to exploit the invariances in the array output
data in order to create multilinear structures. These invariances can
be intrinsic to the physics of the acquired signals (i.e. polarisation),
to the acquisition setup (i.e. spatially shifted subarrays), or
artificially created (i.e. matricisation/time–frequency transform of
1D signals).
The main idea of ESPRIT, when applied to DOA estimation is
to employ two identical subarrays, the second one being spatially
shifted compared to the first one by a known displacement vector
δ, as illustrated in Fig. 9. The outputs of the first and the second
subarrays, denoted by y1(t) and y2(t), respectively, can then be
expressed as:
j 2π
λ 1
T
k δ
e 1
Φ= ⋱ .
j 2π T
k δ
λp P
e
700 IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
y1(t) A b1(t) 4.1 CPD-based array processing
y(t) = = x(t) +
y2(t) AΦ b2(t) To overcome these drawbacks, a DOA estimation approach was
introduced in [5], capable of handling N arbitrarily displaced
is first computed as: subarray, as illustrated in Fig. 10.
It is also the first time that a tensor-based algorithm is proposed
1
K
for the DOA estimation problem. If y1(t), …, yN (t) are the outputs
K k∑
Ryy = E y(t)yH(t) ≃ y(tk)yH(tk) .
of the N subarrays and using notations similar to those used for
=1
ESPRIT, we have:
T
Then, we use the fact that the first P eigenvectors E1T, E2T of Ryy P
span the same subspace as the source steering vectors, i.e. y1(t) = ∑ a(kp)sp(t) + b (t) = Ax(t) + b (t),
1 1
p=1
E1 A AT P
j 2π kT δ
= T= , (74) y2(t) = ∑ a(kp)e λ p 2
p sp(t) + b2(t) = AΦ2 x(t) + b2(t),
E2 AΦ AΦT
p=1
P
Y= ∑ ap ⊗ sp ⊗ d p + ℬ = A, S, D; ℐ3, P + ℬ . (78)
p=1
A1 = a1(k1), …, a1(kP)
(83)
⋮
702 IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
obtaining a set of high-variance but unambiguous direction-cosine computations over the cores [61]. In the tensor network framework,
estimates. On the contrary, to achieve a practical advantage, it is Hierarchical/tree Tucker [29, 64] and tensor train (TT) [28] are two
important that the spatial displacement between any two subarrays popular representations of a high-order tensor into a graph-
of the highest level exceeds λ/2, where λ is the wavelength. This connected low-order (at most 3) tensors. In this section, we focus
will produce estimates of lower variance but with cyclic ambiguity our effort on the TT formalism for its simplicity and compactness
for the same set of direction-cosines. On the other hand, under the in terms of storage cost. Unlike the hierarchical Tucker model, TT
first assumption, the J1(kp) function is unimodal on the support is exploited in many practical and important contexts as, for
region of the DOAs. Therefore, any local optimisation procedure instance, tensor completion [65], blind source separation [66], and
should converge towards the global minimum for the criterion. machine learning [4], to mention a few. In the context of the MHR
Thus, we obtain another set of estimates, now of high-variance but problem, this strategy has at least two advantages. First, it is well-
with no cyclic ambiguity, for the DOAs, to be denoted by k∗p, 1 with known that the convergence of the ALSs algorithm becomes more
p = 1, …, P. These estimates will subsequently be used, in a and more difficult when the order increases [67–69]. To deal with
second step, as the initial point for the minimisation of this problem, applying ALS on lower-order tensors is preferable.
ℐ2 kp = J1 kp + J2 kp . As no assumption is made on the The second argument is to exploit some latent coupling properties
between the cores [4, 70] to propose new efficient estimators.
distances between the level-2 subarrays, ℐ2 kp may present more The maximum likelihood estimator (MLE) [71, 72] is the
than one local minimum. Hence, a good initial estimate is crucial optimal choice from an estimation point of view since it is
for the optimisation procedure. The estimates obtained by the statistically efficient, i.e. its mean squared error (MSE) reaches the
minimisation of ℐ2 kp , denoted by k∗p, 2, are then used for the Cramér–Rao bound (CRB) [73] in the presence of noise. The main
minimisation of ℐ3 kp = ∑n = 1 Jn kp , and so on, until the final
3
drawback of the MLE is its prohibitive complexity cost. This
estimates are obtained by the minimisation of ℐN kp . The limitation is particularly severe in the context of a high-order data
proposed algorithm can be summarised as follows: tensor. To overcome this problem, several low-complexity methods
First stage: can be found in the literature. These methods may not reach,
• Estimate A1, …, AN by CP decomposition of the data Z or z sometimes, the CRB, but they provide a significant gain in terms of
the computational cost compared to the MLE. There are essentially
[see (86) or (87)].
two main families of methods. The first one is based on the
Second stage:
factorisation of the data to estimate the well-known signal/noise
• For p = 1, …, P and for n = 1, …, N, compute
subspace such as the estimation of signal parameters via rotational
n
invariance techniques (ESPRIT) [45], the ND-ESPRIT [74], the
(90) improved multidimensional folding technique [75], and the CP-
k∗p, n = argmin ℐn kp = argmin
kp kp
∑ Ji kp .
VDM [76]. The second one is based on the uniqueness property of
i=1
the CPD. Indeed, factorising the data tensor thanks to the ALS
• Output: The estimated parameters for the P sources: algorithm [77] allows identification of the unknown parameters by
^ Vandermonde-based rectification of the factor matrices.
k p = u^ p, v^ p, w^ p = k∗p, N with p = 1, …, P.
Several CPD-based DOA estimation approaches, based on other
diversity schemes have been proposed in the literature. In [49], 5.1 Generalised Vandermonde canonical polyadic
polarisation of electromagnetic waves was exploited as an decomposition
additional diversity, by using vector-sensor arrays capable of The MH model assumes that the measurements can be modeled as
capturing the six components of the electromagnetic field. A the superposition of R undamped exponentials sampled on a Q -
coupled CPD approach for DOA estimation, using the multiple dimensional grid according to [6]
baselines in sparse arrays was introduced in [50]. High-order
statistics (cumulants) were also used to create diversity in CPD R Q
based direction-finding algorithms [51]. [X]i1iQ = ∑ αr ∏ zri q q−1
, , 1 ≤ iq ≤ Iq (91)
While CPD remains the most commonly used decomposition in r=1 q=1
tensor-based array processing, other tensor decompositions made
their way in. For example, the HOSVD has been used to develop in which the rth complex amplitude is denoted by αr and the pole is
multidimensional versions of the popular ESPRIT and MUSIC defined by zr, q = e jωr, q where ωr, q is the rth angular-frequency
algorithms (see e.g. [52, 53]). T
along the qth dimension, and we have zq = z1, q z2, q … zR, q .
Note that the tensor X is expressed as the linear combination of M
5 Tensor-based MHR rank-1 tensors, each of size I1 × × IQ (the size of the grid), and
MHR [6, 54] is a classical signal processing problem that has follows a generalised Vandermonde CPD [78]:
found several applications in spectroscopy [55], radar
communications [56], sensor array processing [5, 57], to mention a X = A ∙ V1 ∙ … ∙ VQ (92)
few. The multidimensional harmonic (MH) model can be viewed 1 2 Q
as the tensor-based generalisation of the one-dimensional harmonic
one, resulting from the sampling process over a multidimensional where A is a R × ⋯ × R diagonal tensor with [A]r, , r = αr and
regular grid. As a consequence, the Q-dimensional harmonic model
needs the estimation of a large number QR of angular-frequencies Vq = v(z1, q) ⋯ v(zR, q)
of interest. We can easily note that the number of unknown
parameters and the order of the associated data tensor grow with Q. is a Iq × R rank- R Vandermonde matrix, where
Moreover, it is likely that the joint exploitation of multi-diversity/
modality sensing technologies for data fusion [58–60] further I −1 T
v(zr, q) = 1 zr, q zr2, q … zrq, q .
increases the data tensor order. This trend is usually called the
‘curse of dimensionality’ [61–63] and the challenge here is to
reformulate a high-order tensor as a set of low-order tensors over a We define a noisy MH tensor model of order Q as:
graph. In this context, we observe an increasing interest for the
tensor network theory (see [61] and references therein). Tensor Y = X + σℰ, (93)
network provides a useful and rigorous graphical representation of
a high-order tensor into a factor graph where the nodes are low- where σℰ is the noise tensor, σ is a positive real scalar, and each
order tensors, called cores, and the edges encode their entry [ℰ]i1iQ follows an i.i.d circular Gaussian distribution
dependencies, i.e. their common dimensions, often called ‘rank’. In CN(0, 1), and X has a canonical rank equal to R. The reader may
addition, tensor network allows to perform scalable/distributed
IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709 703
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
the structure (Fig. 12). The most intuitive way to exploit the
1
Vandermonde structure is called column-averaging. Let ω = ∠zi
i
where ∠ stands for the angle function. Define the sets
f
J= v= : f ∈ ℂI (94)
[ f ]1
ℓ
1 1
(95)
ℓ i∑
Aℓ = v z = e jω̄ : ω̄ = ∠[ f ]i + 1
=1
i
5.3 VTT-RecALS
In this section, the JIRAFE algorithm and the RecALS method are
associated to solve the MHR problem. The VTT-RecALS estimator
is based on the JIRAFE principle which is composed of two main
steps.
(i) The first one is the computation of the TTD of the initial
tensor. By doing this, the initial Q-order tensor is broken down into
Q graph-connected third-order tensors, called TT-cores. This
dimensionality reduction is an efficient way to mitigate the ‘curse
Fig. 12 Algorithm 4 Rectified ALS (RecALS) of dimensionality’. To reach this goal, the TT-SVD [28] is used as
a first step.
be referred to [74] for the case of damped signals, which is not (ii) The second step is dedicated to the factorisation of the TT-
addressed in this paper. cores. Recall the main result given by Theorem 2, i.e. if the initial
tensor follows a Q-order CPD of rank R, then the TT-cores for
5.2 Algorithms 2 ≤ q ≤ Q − 1 follow coupled third-order CPD of rank R.
Consequently, the JIRAFE minimises the following criterion over
5.2.1 Vandermonde rectification of the ALS algorithm: Limit the physical quantities {V1, …, VQ} and over the latent quantities
of the ALS algorithm for structured CPD: The CPD of any order- {M1, …, MQ − 1}:
Q rank-R tensor X involves the estimation of Q factors Vq of size
Iq × R. As pointed out above, in the context of the MD-harmonic 2 2
C= G1 − V1 AM1−1 + GQ − MQ − 1VQT (96)
model, the factors Vq of the CPD are Vandermonde matrices. F F
Consider, Yq, the qth mode unfolding [9] of tensor Y, at the kth 2
Q−1
iteration with 1 ≤ k ≤ M, M denoting the maximal number of +∑ Gq − ℐ3, R ∙ Mq − 1 ∙ Vq ∙ Mq− T , (97)
iterations. The ALS algorithm solves alternatively for each of the Q q=2 1 2 3
F
dimensions the minimisation problem [23, 71]:
2
minVq Yq − VqSq where where A is a R × R diagonal matrix with [A]r, r = αr.
SqT= VQ ⊙ … ⊙ Vq + 1 ⊙ Vq − 1 ⊙ … ⊙ V1. It aims at approximating The above cost function is the sum of coupled LS criteria. The
tensor Y by a tensor of rank R, hopefully close to X. The LS aim of JIRAFE is to recover the original tensor factors using only
solution conditionally to matrix Sq is given by Vq = YqSq† where † the third-order tensors Gq. Consequently, the JIRAFE approach
stands for the pseudo-inverse. Now, remark that there is no reason adopts a straightforward sequential methodology, described in
that the above LS criterion promotes the Vandermonde structure in Fig. 13, to minimise the cost function C.
the estimated factors in the presence of noise. In other words, We present in Algorithm 5, the pseudo-code of the VTT-
ignoring the structure in the CPD leads to estimate an excessive RecALS algorithm, where RecALS3 denotes the RecALS applied
number of free parameters. This mismatched model dramatically to a third-order tensor, while RecALS2 denotes the RecALS applied
decreases the estimation performance [79]. Hence there is a need to to a third-order tensor using the knowledge of one factor. The VTT-
rectify the ALS algorithm to take into account the factor structure. RecALS algorithm is actually divided into two parts (Fig. 14). The
Rectified ALS (RecALS): The RecALS algorithm belongs to the first part is dedicated to dimensionality reduction, i.e. breaking the
family of lift-and-project algorithms [80, 81]. The optional lift step dimensionality of the high Q-order tensor into a set of third-order
computes a low rank approximation and the projection step tensors using Theorem 2. The second part is dedicated to the
performs a rectification toward the desired structure. The RecALS factors retrieval from the TT-cores using the RecALS algorithm
algorithm is based on iterated projections and split LS criteria. Its presented in the previous section. It is worth noting that the factors
algorithmic description is provided in Algorithm 4 for Q = 3. We Vq are estimated up to a trivial (common) permutation ambiguity
insist that several iterations in the while loops are necessary, since [84]. As noted in [37], since all the factors are estimated up to a
restoring the structure generally increases the rank, and computing unique column permutation matrix, the estimated angular-
the low-rank approximation via truncated SVD generally destroys frequencies are automatically paired.
704 IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
• to derive closed-form receivers under a priori knowledge of
some transmitted symbols (semi-blind systems);
• to deal with point-to-point systems as well as multi-hop systems
with relays.
IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709 705
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
developed for multi-hop relaying systems, by exploiting the nested U = C ∙ S ∈ ℂM × N × P (102)
Tucker and coupled nested Tucker models [31, 98]. 2
In this section, we make a brief presentation of three point-to-
point systems and of one relaying system to illustrate the tensor- or equivalently:
based approach in the context of wireless communication systems.
R
um, n, p = ∑ cm r psn r, (103)
6.1 Khatri–Rao ST coding r=1
, , ,
constant during the transmission, i.e. for p ∈ [1, P]. The coding m=1
(105)
(98) can be viewed as a simplified version of the Khatri–Rao M R
space–time (KRST) coding proposed in [99]. = ∑ ∑ cm r phk msn r + bk n p,
, , , , , ,
706 IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
where GRMF × JPF is a matrix unfolding of the code-allocation tensor
defined in (108). By properly designing this tensor to ensure its
right-invertibility, the channel and symbol matrices are estimated in
closed form from a LS estimation of the Kronecker product
†
S ⊠ HK × MF = XNK × JPFGRMF × JPF . (111)
6.3 TSTF coding 6.4 MIMO relaying systems with tensor coding
A more general and flexible tensor-based transceiver scheme for The benefits of TSTC have been extended to MIMO relaying
space, time and frequency transmit signalling was proposed in [86], systems in [31], where semi-blind receivers have been derived for
referred to as TSTF coding. By relying on a new class of tensor the joint estimation of the source-relay and relay-destination
models, namely, the generalised PARATUCK- N1, N and the channels, in addition to symbol estimation. Consider a one-way
Tucker- N1, N models, the TSTF system combines a fifth-order two-hop MIMO relaying communication system shown in Fig. 15,
coding tensor with a fourth-order allocation tensor. It can also be where M, and K denote the numbers of antennas at the source S ,
seen as a generalisation of three previous tensor-based and destination D nodes, respectively, while QR and QT represent
ST/TST/STF coding schemes, offering new performance/ the numbers of receive and transmit antennas at the relay,
complexity tradeoffs and space, time and frequency allocation
respectively. The source–relay channel H(SR) ∈ ℂQR × M and the
flexibility.
Herein, the transmission time is decomposed into P time-slots relay–destination channel H(RD) ∈ ℂK × QT are assumed to be flat
(data blocks) of N symbol periods, each one being composed of J fading and quasi-static, i.e. constant during the transmission
chips. At each symbol period n of the pth block, the transceiver protocol which is divided into two consecutive phases. Assume
transmits a linear combination of the nth symbols of certain data that the direct link between the source and destination is absent or
streams, using a set of transmit antennas and of sub-carriers. The negligible. During the first hop, the transmission is composed of N
coding is carried out by means of a fifth-order code tensor time-blocks associated with N symbol periods, R data streams
W ∈ ℂM × R × F × P × J whose dimensions are the numbers of transmit being transmitted per time-block. The transmitted symbol matrix
antennas M , data streams R , sub-carriers F , time blocks P , S ∈ ℂN × R contains R data streams composed of N symbols each.
During the time-block n, each antenna m of the source node
and chips J . An allocation tensor C ∈ ℂM × R × F × P determines the
transmits a combination of R information symbols
transmit antennas and the sub-carriers that are used, as well as the
sn, r, r = 1, . . . , R, to the relay, after a TSTC [31] by means of a
data streams transmitted in each block p. As an example,
cm, r, f , p = 1 means that the data stream r is transmitted using the third-order tensor C(S) ∈ ℂM × P × R, which introduces ST
transmit antenna m, with the sub-carrier f, during the time block p. redundancies, since each symbol sn, r is repeated P times over the M
The TSTF-coded signals to be transmitted are given by: transmit antennas, P being the source code length. During the
second hop, the source remains silent and the relay uses a second
U = G ∙ S, with G = W ⋆ C, (107) three dimensional coding C(R) ∈ ℂQT × J × QR before forwarding the
1 {m, r, f , p}
QR received signals to the destination using QT transmit antennas.
or, alternatively, in scalar form Not that this second tensor coding consists in repeating the signals
received at the relay J times, over the QT transmit antennas, J being
R the relay code length. The tensor T ∈ ℂQT × J × P × N of signals
um, n, f , p, j = ∑ wm r f p jsn rcm r f p,
, , , , , , , , (108) transmitted by the relay is defined as
r=1
QR M R
where G ∈ ℂM × R × F × P × J is the effective code-allocation tensor, tqT, j, p, n = ∑ ∑ ∑ cqR j q hqSRmcmS p rsn r .
( ) (
T, , R R,
) ( )
, , , (112)
given by the Hadamard product between the code tensor qR = 1 m = 1 r = 1
IET Signal Process., 2020, Vol. 14 Iss. 10, pp. 693-709 707
© The Institution of Engineering and Technology 2021
17519683, 2020, 10, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-spr.2020.0373 by Indian Institute Of Technology Guwahati, Wiley Online Library on [27/02/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
[8] Nion, D., Sidiropoulos, N.D.: ‘Tensor algebra and multidimensional harmonic
retrieval in signal processing for mimo radar’, IEEE Trans. Signal Process.,
2010, 58, (11), pp. 5693–5705
[9] Comon, P.: ‘Tensors: a brief introduction’, IEEE Signal. Proc. Mag., 2014, 31,
(3), pp. 44–53, special issue on BSS
[10] De Lathauwer, L., Moor, B.D., Vandewalle, J.: ‘A multilinear singular value
decomposition’, SIAM J. Matrix Anal. Appl., 2000, 21, (4), pp. 1253–1278
[11] Favier, G., de Almeida, A.: ‘Overview of constrained PARAFAC models’,
Fig. 16 NTT(4) model for a one-way two-hop MIMO relaying system with EURASIP J. Adv. Signal Process., 2014, 5, pp. 1–41
tensor coding [12] Hitchcock, F.L.: ‘The expression of a tensor or a polyadic as a sum of
products’, J. Math Phys., 1927, 6, (1), pp. 165–189
[13] Landsberg, J.M.: ‘Tensors: geometry and applications’, in David, Cox (Ed.):
Table 1 Summary of the tensor decompositions and ‘Graduate studies in mathematics’ (American Mathematical Society,
algorithms used in the applications Providence, Rhode Island, USA 2012), vol. 128
[14] Hackbusch, W.: ‘Tensor spaces and numerical tensor Calculus’, in Randolph
Application Tensor decompositions Algorithms E., Bank (Ed.): ‘Series in computational mathematics’ (Springer, Berlin,
DOA Canonical polyadic alternating least squares Heidelberg, 2012)
decomposition [15] Kolda, T.G., Bader, B.W.: ‘Tensor decompositions and applications’, SIAM
Rev., 2009, 51, (3), pp. 455–500
MHR generalised Vandermonde rectified ALS (RecALS), [16] Kruskal, J.B.: ‘Three-way arrays: rank and uniqueness of trilinear
CPD Vandermonde tensor-train decompositions’, Linear Algebra Appl., 1977, 18, pp. 95–138
RecALS (VTT-RecALS) [17] Sidiropoulos, N.D., Bro, R.: ‘On the uniqueness of multilinear decomposition
of N-way arrays’, J. Chemo, 2000, 14, pp. 229–239
MIMO CPD, Tucker- N1, N , Kronecker factorisation [18] Stegeman, A., Sidiropoulos, N.: ‘On Kruskal's uniqueness condition for the
nested tucker train (NTT) (KF), Khatri–Rao CP decomposition’, Linear Algebra Appl., 2007, 420, (2–3), pp. 540–552
factorisation (KRF) [19] Comon, P., Mourrain, B., Lim, L.H., et al.: ‘Genericity and rank deficiency of
high order symmetric tensors’. 2006 IEEE Int. Conf. on Acoustics Speech and
Signal Processing Proc., Toulouse, 2006
[20] Comon, P., Berge, J.M.F.T., De Lathauwer, L., et al.: ‘Generic and typical
respectively, before transmission. In this case, the noiseless ranks of multi-way arrays’, Linear. Algebra. Appl., 2009, 430, (11–12), pp.
received signal tensor at the destination node reduces to 2997–3007
[21] Chiantini, L., Ottaviani, G., Vannieuwenhoven, N.: ‘An algorithm for generic
and low-rank specific identifiability of complex tensors’, SIAM J. matrix
QT M Anal. Appl., 2014, 35, (4), pp. 1265–1287
xk, j, p, n = ∑ ∑ hkRDq cqR jhqSRmcmS psn r,
(
, T
) ( )
T,
(
R,
) ( )
, , (114) [22] Qi, Y., Comon, P., Lim, L.H.: ‘Uniqueness of nonnegative tensor
qT = 1 m = 1 approximations’, IEEE Trans. Inf. Theory, 2016, 62, (4), pp. 2170–2183,
arXiv:1410.8129
[23] Comon, P., Luciani, X., De Almeida, A.L.F.: ‘Tensor decompositions,
which corresponds to a nested CP model. This can be concluded by alternating least squares and other tales’, J. Chemometrics, 2009, 23, (7–8),
comparing (114) with (50). pp. 393–405
Assuming that the coding tensors C(S) and C(R) are known at the destination node (i.e. by the receiver), semi-blind receivers for jointly estimating the symbol matrix S and the individual channels H(SR) and H(RD) are proposed in [31]. We refer the interested reader to this reference for further details on the receiver algorithms.
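To make the multilinear structure of (114) concrete, the following MATLAB sketch builds the noiseless 4-way received tensor entry by entry from randomly drawn channel, coding and symbol matrices, then checks that, when all channels and coding matrices are known, the symbol matrix can be recovered by a simple least-squares fit of the symbol-mode unfolding. All dimensions and variable names are illustrative assumptions; this sketch is not the semi-blind receiver of [31], which also estimates the channels.

% Minimal MATLAB sketch of the noiseless received-signal model (114).
% All dimensions below are illustrative assumptions.
K = 4; J = 3; P = 3; N = 10; QT = 2; M = 2;

H_RD = randn(K, QT) + 1i*randn(K, QT);   % relay-to-destination channel
C_R  = randn(QT, J) + 1i*randn(QT, J);   % relay coding matrix
H_SR = randn(QT, M) + 1i*randn(QT, M);   % source-to-relay channel
C_S  = randn(M, P)  + 1i*randn(M, P);    % source coding matrix
S    = randn(N, M)  + 1i*randn(N, M);    % symbol matrix

% Entry-wise construction mirroring the double sum in (114)
X = zeros(K, J, P, N);
for k = 1:K
  for j = 1:J
    for p = 1:P
      for n = 1:N
        for q = 1:QT
          for m = 1:M
            X(k,j,p,n) = X(k,j,p,n) + ...
              H_RD(k,q)*C_R(q,j)*H_SR(q,m)*C_S(m,p)*S(n,m);
          end
        end
      end
    end
  end
end

% Sanity check: with known channels and coding matrices, S follows from a
% linear least-squares fit of the symbol-mode (mode-4) unfolding of X.
W = zeros(K, J, P, M);                    % effective channel/coding tensor
for m = 1:M
  for p = 1:P
    for j = 1:J
      for k = 1:K
        W(k,j,p,m) = (H_RD(k,:).*C_R(:,j).') * (H_SR(:,m)*C_S(m,p));
      end
    end
  end
end
X4 = reshape(permute(X, [4 1 2 3]), N, []);   % N x (K*J*P)
W4 = reshape(permute(W, [4 1 2 3]), M, []);   % M x (K*J*P)
S_hat = X4*pinv(W4);                          % matches S up to numerical error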
We summarised in Table 1 the different tensor decompositions and algorithms used in the three applications presented in this paper.
7 Conclusion

Data collected by multisensor systems can be naturally interpreted in a multilinear algebra framework. In this work, three important array signal processing applications were presented for which the tensor-based framework has proven useful, namely (i) array signal processing for localisation, (ii) MHR and (iii) MIMO processing for wireless communications. In this overview paper, we presented, in a systematic and synthetic way, the mathematical material necessary for a good understanding of these problems.
8 References
[1] Harshman, R.A.: 'Foundations of the PARAFAC procedure: models and conditions for an explanatory multimodal factor analysis', UCLA Work. Pap. Phonetics, 1970, 16, pp. 1–84
[2] Carroll, J.D., Chang, J.J.: 'Analysis of individual differences in multidimensional scaling via N-way generalization of Eckart-Young decomposition', Psychometrika, 1970, 35, (3), pp. 283–319
[3] Acar, E., Yener, B.: 'Unsupervised multiway data analysis: a literature survey', IEEE Trans. Knowl. Data Eng., 2008, 21, (1), pp. 6–20
[4] Sidiropoulos, N., De Lathauwer, L., Fu, X., et al.: 'Tensor decomposition for signal processing and machine learning', IEEE Trans. Signal Process., 2017, 65, pp. 3551–3582
[5] Sidiropoulos, N.D., Bro, R., Giannakis, G.B.: 'Parallel factor analysis in sensor array processing', IEEE Trans. Signal Process., 2000, 48, (8), pp. 2377–2388
[6] Jiang, T., Sidiropoulos, N.D., Berge, J.M.F.T.: 'Almost-sure identifiability of multidimensional harmonic retrieval', IEEE Trans. Signal Process., 2001, 49, (9), pp. 1849–1859
[7] Sidiropoulos, N.D., Bro, R., Giannakis, G.B.: 'Blind PARAFAC receivers for DS-CDMA systems', IEEE Trans. Signal Process., 2000, 48, (3), pp. 810–823
[10] De Lathauwer, L., De Moor, B., Vandewalle, J.: 'A multilinear singular value decomposition', SIAM J. Matrix Anal. Appl., 2000, 21, (4), pp. 1253–1278
[11] Favier, G., de Almeida, A.: 'Overview of constrained PARAFAC models', EURASIP J. Adv. Signal Process., 2014, 5, pp. 1–41
[12] Hitchcock, F.L.: 'The expression of a tensor or a polyadic as a sum of products', J. Math. Phys., 1927, 6, (1), pp. 165–189
[13] Landsberg, J.M.: 'Tensors: geometry and applications', Graduate Studies in Mathematics, vol. 128 (American Mathematical Society, Providence, RI, USA, 2012)
[14] Hackbusch, W.: 'Tensor spaces and numerical tensor calculus', Springer Series in Computational Mathematics (Springer, Berlin, Heidelberg, 2012)
[15] Kolda, T.G., Bader, B.W.: 'Tensor decompositions and applications', SIAM Rev., 2009, 51, (3), pp. 455–500
[16] Kruskal, J.B.: 'Three-way arrays: rank and uniqueness of trilinear decompositions', Linear Algebra Appl., 1977, 18, pp. 95–138
[17] Sidiropoulos, N.D., Bro, R.: 'On the uniqueness of multilinear decomposition of N-way arrays', J. Chemometrics, 2000, 14, pp. 229–239
[18] Stegeman, A., Sidiropoulos, N.: 'On Kruskal's uniqueness condition for the CP decomposition', Linear Algebra Appl., 2007, 420, (2–3), pp. 540–552
[19] Comon, P., Mourrain, B., Lim, L.H., et al.: 'Genericity and rank deficiency of high order symmetric tensors'. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, 2006
[20] Comon, P., Berge, J.M.F.T., De Lathauwer, L., et al.: 'Generic and typical ranks of multi-way arrays', Linear Algebra Appl., 2009, 430, (11–12), pp. 2997–3007
[21] Chiantini, L., Ottaviani, G., Vannieuwenhoven, N.: 'An algorithm for generic and low-rank specific identifiability of complex tensors', SIAM J. Matrix Anal. Appl., 2014, 35, (4), pp. 1265–1287
[22] Qi, Y., Comon, P., Lim, L.H.: 'Uniqueness of nonnegative tensor approximations', IEEE Trans. Inf. Theory, 2016, 62, (4), pp. 2170–2183, arXiv:1410.8129
[23] Comon, P., Luciani, X., de Almeida, A.L.F.: 'Tensor decompositions, alternating least squares and other tales', J. Chemometrics, 2009, 23, (7–8), pp. 393–405
[24] Lim, L.H., Comon, P.: 'Blind multilinear identification', IEEE Trans. Inf. Theory, 2014, 60, (2), pp. 1260–1280
[25] Sahnoun, S., Comon, P.: 'Joint source estimation and localization', IEEE Trans. Signal Process., 2015, 63, (10), pp. 2485–2495, hal-01005352
[26] Lim, L.H., Comon, P.: 'Nonnegative approximations of nonnegative tensors', J. Chemometrics, 2009, 23, pp. 432–441
[27] Qi, Y., Comon, P., Lim, L.H.: 'Semialgebraic geometry of nonnegative tensor rank', SIAM J. Matrix Anal. Appl., 2016, 37, (4), pp. 1556–1580, hal-01763832
[28] Oseledets, I.V.: 'Tensor-train decomposition', SIAM J. Sci. Comput., 2011, 33, (5), pp. 2295–2317
[29] Oseledets, I.V., Tyrtyshnikov, E.E.: 'Breaking the curse of dimensionality, or how to use SVD in many dimensions', SIAM J. Sci. Comput., 2009, 31, pp. 3744–3759
[30] Zniyed, Y., Boyer, R., de Almeida, A.L.F., et al.: 'A TT-based hierarchical framework for decomposing high-order tensors', SIAM J. Sci. Comput., 2020, 42, (2), pp. A822–A848
[31] Favier, G., Fernandes, C., de Almeida, A.: 'Nested Tucker tensor decomposition with application to MIMO relay systems using tensor space-time coding (TSTC)', Signal Process., 2016, 128, (4), pp. 318–331
[32] Rocha, D., Favier, G., Fernandes, C.: 'Closed-form receiver for multi-hop MIMO relay systems with tensor space-time coding', J. Commun. Inf. Syst., 2019, 34, (1), pp. 50–54
[33] Favier, G., da Costa, M.N., de Almeida, A., et al.: 'Tensor space-time (TST) coding for MIMO wireless communication systems', Signal Process., 2012, 92, (4), pp. 1079–1092
[34] de Almeida, A., Favier, G.: 'Double Khatri-Rao space-time-frequency coding using semi-blind PARAFAC based receiver', IEEE Signal Process. Lett., 2013, 20, (5), pp. 471–474
[35] Ximenes, L., Favier, G., de Almeida, A.: 'Closed-form semi-blind receiver for MIMO relay systems using double Khatri-Rao space-time coding', IEEE Signal Process. Lett., 2016, 23, (3), pp. 316–320
[36] Zniyed, Y., Boyer, R., de Almeida, A.L.F., et al.: 'High-order tensor estimation via trains of coupled third-order CP and Tucker decompositions', Linear Algebra Appl., 2020, 588, pp. 304–337
[37] Zniyed, Y., Boyer, R., de Almeida, A.L.F., et al.: 'High-order CPD estimation with dimensionality reduction using a tensor train model'. Proc. 26th European Signal Processing Conf. (EUSIPCO), Rome, Italy, 2018
[38] Van Loan, C.F., Pitsianis, N.: 'Approximation with Kronecker products', in Moonen, M.S., Golub, G.H., De Moor, B.L.R. (Eds.): 'Linear algebra for large scale and real-time applications' (Springer Netherlands, Dordrecht, 1993), pp. 293–314
[39] Wu, K.K., Yam, Y., Meng, H., et al.: 'Kronecker product approximation with multiple factor matrices via the tensor product algorithm'. Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC 2016), Budapest, Hungary, 2016, pp. 004277–004282
[40] Zhang, T., Golub, G.H.: 'Rank-one approximation to high order tensors', SIAM J. Matrix Anal. Appl., 2001, 23, (2), pp. 534–550
[41] Kofidis, E., Regalia, P.A.: 'On the best rank-1 approximation of higher-order supersymmetric tensors', SIAM J. Matrix Anal. Appl., 2002, 23, (3), pp. 863–884
[42] da Silva, A.P., Comon, P., de Almeida, A.L.F.: 'A finite algorithm to compute rank-1 tensor approximations', IEEE Signal Process. Lett., 2016, 23, (7), pp. 959–963
[43] Kibangou, A.Y., Favier, G.: 'Non-iterative solution for PARAFAC with a Toeplitz matrix factor'. Proc. EUSIPCO, Glasgow, Scotland, 2009
[44] Roy, R., Paulraj, A., Kailath, T.: 'ESPRIT – a subspace rotation approach to estimation of parameters of cisoids in noise', IEEE Trans. Acoust. Speech Signal Process., 1986, 34, (5), pp. 1340–1342
[45] Roy, R., Kailath, T.: 'ESPRIT – estimation of signal parameters via rotational invariance techniques', IEEE Trans. Acoust. Speech Signal Process., 1989, 37, (7), pp. 984–995
[46] Bienvenu, G., Kopp, L.: 'Principe de la goniométrie passive adaptative'. 7ème Colloque sur le traitement du signal et des images (GRETSI), 1979
[47] Schmidt, R.O.: 'A signal subspace approach to multiple emitter location and spectral estimation'. PhD thesis, Stanford University, Stanford, CA, 1981
[48] Miron, S., Song, Y., Brie, D., et al.: 'Multilinear direction finding for sensor-array with multiple scales of invariance', IEEE Trans. Aerosp. Electron. Syst., 2015, 51, (3), pp. 2057–2070
[49] Guo, X., Miron, S., Brie, D., et al.: 'A CANDECOMP/PARAFAC perspective on uniqueness of DOA estimation using a vector sensor array', IEEE Trans. Signal Process., 2011, 59, (7), pp. 3475–3481
[50] Sørensen, M., De Lathauwer, L.: 'Multiple invariance ESPRIT for nonuniform linear arrays: a coupled canonical polyadic decomposition approach', IEEE Trans. Signal Process., 2016, 64, (14), pp. 3693–3704
[51] Liang, J., Yang, S., Zhang, J., et al.: '4D near-field source localization using cumulant', EURASIP J. Adv. Signal Process., 2007, 2007, pp. 1–10
[52] Haardt, M., Roemer, F., Del Galdo, G.: 'Higher-order SVD-based subspace estimation to improve the parameter estimation accuracy in multidimensional harmonic retrieval problems', IEEE Trans. Signal Process., 2008, 56, (7), pp. 3198–3213
[53] Boizard, M., Ginolhac, G., Pascal, F., et al.: 'Numerical performance of a tensor MUSIC algorithm based on HOSVD for a mixture of polarized sources'. Proc. 21st European Signal Processing Conf. (EUSIPCO), Marrakech, Morocco, 2013
[54] Sidiropoulos, N.D.: 'Generalizing Carathéodory's uniqueness of harmonic parameterization to N dimensions', IEEE Trans. Inf. Theory, 2001, 47, pp. 1687–1690
[55] Li, Y., Razavilar, J., Liu, K.J.R.: 'A high-resolution technique for multidimensional NMR spectroscopy', IEEE Trans. Biomed. Eng., 1998, 45, pp. 78–86
[56] Nion, D., Sidiropoulos, N.D.: 'Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO radar', IEEE Trans. Signal Process., 2010, 58, pp. 5693–5705
[57] Zoltowski, M.D., Haardt, M., Mathews, C.P.: 'Closed-form 2D angle estimation with rectangular arrays in element space or beamspace via unitary ESPRIT', IEEE Trans. Signal Process., 1996, 44, pp. 316–328
[58] Acar, E., Bro, R., Smilde, A.K.: 'Data fusion in metabolomics using coupled matrix and tensor factorizations', Proc. IEEE, 2015, 103, pp. 1602–1620
[59] Farias, R.C., Cohen, J.E., Comon, P.: 'Exploring multimodal data fusion through joint decompositions with flexible couplings', IEEE Trans. Signal Process., 2016, 64, pp. 4830–4844
[60] Lahat, D., Adali, T., Jutten, C.: 'Multimodal data fusion: an overview of methods, challenges and prospects', Proc. IEEE, 2015, 103, pp. 1449–1477
[61] Cichocki, A.: 'Era of big data processing: a new approach via tensor networks and tensor decompositions'. Int. Workshop on Smart Info-Media Systems in Asia, Nagoya, Japan, 2013
[62] Phan, A., Cichocki, A., Uschmajew, A., et al.: 'Tensor networks for latent variable analysis. Part I: algorithms for tensor train decomposition', IEEE Trans. Neural Netw. Learn. Syst., 2020, 31, (11), pp. 4622–4636
[63] Vervliet, N., Debals, O., Sorber, L., et al.: 'Breaking the curse of dimensionality using decompositions of incomplete tensors: tensor-based scientific computing in big data analysis', IEEE Signal Process. Mag., 2014, 31, (5), pp. 71–79
[64] Hackbusch, W., Kuhn, S.: 'A new scheme for the tensor representation', J. Fourier Anal. Appl., 2009, 15, pp. 706–722
[65] Kressner, D., Steinlechner, M., Vandereycken, B.: 'Low-rank tensor completion by Riemannian optimization', BIT Numer. Math., 2014, 54, pp. 447–468
[66] Bousse, M., Debals, O., De Lathauwer, L.: 'A tensor-based method for large-scale blind source separation using segmentation', IEEE Trans. Signal Process., 2016, 65, pp. 346–358
[67] Roemer, F., Haardt, M.: 'A closed-form solution for parallel factor (PARAFAC) analysis'. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, USA, 2008
[68] Bro, R.: 'PARAFAC: tutorial and applications', Chemometr. Intell. Lab. Syst., 1997, 38, pp. 149–171
[69] Li, N., Kindermann, S., Navasca, C.: 'Some convergence results on the regularized alternating least-squares method for tensor decomposition', Linear Algebra Appl., 2013, 438, pp. 796–812
[70] Cichocki, A., Lee, N., Oseledets, I.V., et al.: 'Low-rank tensor networks for dimensionality reduction and large-scale optimization problems: perspectives and challenges', Found. Trends Mach. Learn., 2016, 9, pp. 249–420
[71] Kroonenberg, P.M., de Leeuw, J.: 'Principal component analysis of three-mode data by means of alternating least squares algorithms', Psychometrika, 1980, 45, pp. 69–97
[72] Clark, M.P., Scharf, L.L.: 'Two-dimensional modal analysis based on maximum likelihood', IEEE Trans. Signal Process., 1994, 42, pp. 1443–1452
[73] Boyer, R.: 'Deterministic asymptotic Cramér-Rao bound for the multidimensional harmonic model', Signal Process., 2008, 88, pp. 2869–2877
[74] Sahnoun, S., Usevich, K., Comon, P.: 'Multidimensional ESPRIT for damped and undamped signals: algorithm, computations and perturbation analysis', IEEE Trans. Signal Process., 2017, 65, (22), pp. 5897–5910, hal-01360438
[75] Liu, J., Liu, X.: 'An eigenvector-based approach for multidimensional frequency estimation with improved identifiability', IEEE Trans. Signal Process., 2006, 54, pp. 4543–4556
[76] Sorensen, M., De Lathauwer, L.: 'Blind signal separation via tensor decomposition with Vandermonde factor: canonical polyadic decomposition', IEEE Trans. Signal Process., 2013, 61, pp. 5507–5519
[77] Bro, R., Sidiropoulos, N.D., Giannakis, G.B.: 'A fast least squares algorithm for separating trilinear mixtures'. Int. Workshop on Independent Component Analysis and Blind Separation, Aussois, France, Jan. 11–15, 1999
[78] Papy, J.M., De Lathauwer, L., Van Huffel, S.: 'Exponential data fitting using multilinear algebra: the single-channel and multi-channel case', Numer. Linear Algebra Appl., 2005, 12, pp. 809–826
[79] Goulart, J.H., Boizard, M., Boyer, R., et al.: 'Tensor CP decomposition with structured factor matrices: algorithms and performance', IEEE J. Sel. Top. Signal Process., 2016, 10, (4), pp. 757–769
[80] Markovsky, I.: 'Low rank approximation: algorithms, implementation, applications' (Springer Science & Business Media, 2011)
[81] Boyle, J.P., Dykstra, R.L.: 'A method for finding projections onto the intersection of convex sets in Hilbert spaces'. Advances in Order Restricted Statistical Inference (Springer, New York, NY, USA, 1986), pp. 28–47
[82] Rife, D.C., Boorstyn, R.R.: 'Single tone parameter estimation from discrete-time observations', IEEE Trans. Inf. Theory, 1974, 20, (5), pp. 591–598
[83] Boyer, R., Comon, P.: 'Rectified ALS algorithm for multidimensional harmonic retrieval'. Proc. IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), Rio de Janeiro, Brazil, 2016
[84] Sørensen, M., De Lathauwer, L.: 'New uniqueness conditions for the canonical polyadic decomposition of third-order tensors', SIAM J. Matrix Anal. Appl., 2015, 36, pp. 1381–1403
[85] Zheng, L., Tse, D.N.C.: 'Diversity and multiplexing: a fundamental tradeoff in multiple-antenna channels', IEEE Trans. Inf. Theory, 2003, 49, (5), pp. 1073–1096
[86] Favier, G., de Almeida, A.: 'Tensor space-time-frequency coding with semiblind receivers for MIMO wireless communication systems', IEEE Trans. Signal Process., 2014, 62, (22), pp. 5987–6002
[87] Liu, K., da Costa, J.P.C., So, H., et al.: 'Semi-blind receivers for joint symbol and channel estimation in space-time-frequency MIMO-OFDM systems', IEEE Trans. Signal Process., 2013, 61, (21), pp. 5444–5457
[88] de Almeida, A., Favier, G., Mota, J.C.: 'A constrained factor decomposition with application to MIMO antenna systems', IEEE Trans. Signal Process., 2008, 56, (6), pp. 2429–2442
[89] de Almeida, A., Favier, G., Mota, J.C.: 'PARAFAC-based unified tensor modeling for wireless communication systems with application to blind multiuser equalization', Signal Process., 2007, 87, (2), pp. 337–351
[90] da Costa, M.N., Favier, G., Romano, J.M.T.: 'Tensor modelling of MIMO communication systems with performance analysis and Kronecker receivers', Signal Process., 2018, 145, (4), pp. 304–316
[91] Rong, Y., Khandaker, M.R.A., Xiang, Y.: 'Channel estimation of dual-hop MIMO relay system via parallel factor analysis', IEEE Trans. Wirel. Commun., 2012, 11, (6), pp. 2224–2233
[92] Cavalcante, I.V., de Almeida, A.L.F., Haardt, M.: 'Tensor-based approach to channel estimation in amplify-and-forward MIMO relaying systems'. Proc. IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM 2014), Coruna, Spain, 2014, pp. 445–448
[93] Roemer, F., Haardt, M.: 'Tensor-based channel estimation and iterative refinements for two-way relaying with multiple antennas and spatial reuse', IEEE Trans. Signal Process., 2010, 58, (11), pp. 5720–5735
[94] Fernandes, C.A.R., de Almeida, A.L.F., da Costa, D.B.: 'Unified tensor modeling for blind receivers in multiuser uplink cooperative systems', IEEE Signal Process. Lett., 2012, 19, (5), pp. 247–250
[95] de Almeida, A.L.F., Fernandes, C.A.R., da Costa, D.B.: 'Multiuser detection for uplink DS-CDMA amplify-and-forward relaying systems', IEEE Signal Process. Lett., 2013, 20, (7), pp. 697–700
[96] Ximenes, L.R., Favier, G., de Almeida, A.L.F.: 'Semi-blind receivers for non-regenerative cooperative MIMO communications based on nested PARAFAC modeling', IEEE Trans. Signal Process., 2015, 63, (18), pp. 4985–4998
[97] Sokal, B., de Almeida, A.L.F., Haardt, M.: 'Semi-blind receivers for MIMO multi-relaying systems via rank-one tensor approximations', Signal Process., 2020, 166, p. 107254
[98] Rocha, D.S., Fernandes, C., Favier, G.: 'MIMO multi-relay systems with tensor space-time coding based on coupled nested Tucker decomposition', Digit. Signal Process., 2019, 89, (3), pp. 170–185
[99] Sidiropoulos, N.D., Budampati, R.: 'Khatri-Rao space-time codes', IEEE Trans. Signal Process., 2002, 50, (10), pp. 2396–2407