Tensor Decomp Presentation
Overview
Matrix completion: motivation and methods
Background on tensors and basic generalizations from matrices
Overview of challenges and methods for tensor decomposition
Existing tensor completion methods
Proposed works
10/5/16
Olfat, M.
Motivation:
Challenges:
a) X in the null space of A:

e1 e1^T = [1 0 0; 0 0 0; 0 0 0]
b) The rank ball is not convex; a convex combination of two rank-1 matrices can have rank 2:

λ [1 0; 0 0] + (1 − λ) [0 0; 0 1] = [λ 0; 0 1−λ]
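The non-convexity of the set of low-rank matrices is easy to check numerically; a quick NumPy sketch (illustrative, not part of the deck):

```python
import numpy as np

# Two rank-1 projectors whose convex combination has rank 2,
# illustrating that the set of rank-<=1 matrices is not convex.
A = np.array([[1.0, 0.0], [0.0, 0.0]])  # e1 e1^T, rank 1
B = np.array([[0.0, 0.0], [0.0, 1.0]])  # e2 e2^T, rank 1

lam = 0.5
C = lam * A + (1 - lam) * B  # = diag(0.5, 0.5)

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B),
      np.linalg.matrix_rank(C))  # -> 1 1 2
```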
Recht, Fazel & Parrilo (2010) show nuclear norm ball is tightest convex relaxation of rank ball
Define the r-restricted isometry constant:
δ_r = min{ δ : (1 − δ)‖X‖_F ≤ ‖A(X)‖ ≤ (1 + δ)‖X‖_F, ∀X with rank(X) ≤ r }
Show min ‖X‖_* gives the exact solution given the Restricted Isometry Property: δ_5r < 1/10
Give cases where RIP holds with high probability
Candes & Recht (2009) build on nuclear norm relaxation specifically for matrix completion
Can give exact solution given O(r n^1.2 log n) samples for small r, or O(r n^1.25 log n) samples for any r
Constant depends on the coherence of the matrix: μ(X) = (n/r) max_{1≤i≤n} ‖P_X e_i‖^2
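Coherence can be computed directly from an orthonormal basis of the column space, since ‖P_U e_i‖^2 is just the squared norm of the i-th row of that basis. A small NumPy sketch (the function name `coherence` is mine):

```python
import numpy as np

def coherence(X, r):
    """mu(X) = (n/r) * max_i ||P_U e_i||^2, U = top-r left singular subspace."""
    n = X.shape[0]
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    U = U[:, :r]
    # ||P_U e_i||^2 = ||U^T e_i||^2 = squared norm of the i-th row of U
    row_leverage = np.sum(U**2, axis=1)
    return (n / r) * row_leverage.max()

rng = np.random.default_rng(0)
n, r = 50, 3
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-3 matrix
mu = coherence(X, r)
print(mu)  # always lies in [1, n/r]
```

Coherence near 1 (a "spread out" column space) is the favorable regime for the sample-complexity bounds above.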
Factorization methods iterate U_t, V_t to drive down ‖X − U_t V_t^T‖^2_F (note ‖X‖_* ≤ √r ‖X‖_F)
For matrix completion, t = O(log(‖X‖_F/ε)) iterations give ‖X − U_t V_t^T‖_F ≤ ε if |A| = O(μ(X)^4 r^4.5 n log n log(‖X‖_F/ε))
Hardt (2013) improves these results by a factor of r^4 μ(X)^5 for matrix completion
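A minimal alternating least-squares sketch of this factored approach in NumPy (illustrative only: the 60% sampling rate and random initialization are my choices, and the careful initialization the cited analyses require is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 truth
mask = rng.random((n, n)) < 0.6  # observed entries, standing in for A

U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
for t in range(50):
    # Fix V and solve a small least-squares problem per row of U,
    # then fix U and do the same for V.
    for i in range(n):
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], X[i, obs], rcond=None)[0]
    for j in range(n):
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], X[obs, j], rcond=None)[0]

err = np.linalg.norm(X - U @ V.T) / np.linalg.norm(X)
print(err)  # relative recovery error
```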
Tensor Decomposition
Origination:
Applications:
Signal processing
Numerical linear algebra
Image compression
Data mining
Neuroscience
Quantum physics
Full survey by Kolda & Bader (2009)
Tensor Decomposition
Some Problems:
Hillar & Lim (2012) show that finding most standard decompositions for tensors is NP-hard
Clipping smallest components in CP decomposition gives low-rank approximation, but not always the best
Finding rank is therefore NP-hard in general as well
In fact, De Silva & Lim (2008) show that finding low-rank approximations of tensors is ill-posed
However, they also show how the hyperdeterminant can be used to find tensor rank (more later)
Decomposition can depend on whether the tensor is allowed to take complex values; for the 2×2×2 tensor X with slices

X1 = [1 0; 0 1],  X2 = [0 −1; 1 0]

X ∈ R^{2×2×2} → rank(X) = 3;  X ∈ C^{2×2×2} → rank(X) = 2
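For 2×2×2 tensors, the sign of the discriminant of det(X1 + t·X2) in t (Cayley's hyperdeterminant, up to sign) separates the generic cases: a negative discriminant means real rank 3 but complex rank 2. A quick NumPy check, assuming slices X1 = I and X2 = [0 −1; 1 0]:

```python
import numpy as np

X1 = np.array([[1.0, 0.0], [0.0, 1.0]])
X2 = np.array([[0.0, -1.0], [1.0, 0.0]])

# det(X1 + t*X2) = det(X2)*t^2 + b*t + det(X1), with b the mixed term
a = np.linalg.det(X2)
c = np.linalg.det(X1)
b = (X1[0, 0] * X2[1, 1] + X2[0, 0] * X1[1, 1]
     - X1[0, 1] * X2[1, 0] - X2[0, 1] * X1[1, 0])
disc = b**2 - 4 * a * c
print(disc)  # -4.0: negative, so real rank 3 while complex rank is 2
```

Here det(X1 + t·X2) = 1 + t^2 has no real roots, which is exactly why a real rank-2 decomposition is impossible.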
Approaches:
Anandkumar et al. (2012) efficiently solve for low CP-rank orthogonal symmetric tensor via power method
Anandkumar et al. (2014) efficiently find approximations for low CP-rank incoherent tensors
These generally depend on a whitening step to make the tensor symmetric first, which is computationally expensive
Ge et al. (2015) suggest online stochastic gradient descent method for decomposition
But this only provably converges to a local solution
In the case of matrix completion, Ge et al. (2016) showed that all local optima are global optima when the set of matrices is restricted to be PSD
This raises similar questions for tensors, but PSD tensors first need to be rigorously defined…
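A NumPy sketch of the tensor power method referenced above, for an orthogonally decomposable symmetric 3-tensor (illustrative: deflation and the robustness analysis of Anandkumar et al. are omitted; all sizes and eigenvalues are my choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
V, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthonormal components
lams = np.array([5.0, 3.0, 1.0])
# T = sum_m lam_m * v_m (x) v_m (x) v_m  (symmetric, orthogonal CP rank 3)
T = sum(l * np.einsum('i,j,k->ijk', V[:, m], V[:, m], V[:, m])
        for m, l in enumerate(lams))

# Power iteration: u <- T(I, u, u) / ||T(I, u, u)||
u = rng.standard_normal(n)
u /= np.linalg.norm(u)
for _ in range(100):
    u = np.einsum('ijk,j,k->i', T, u, u)
    u /= np.linalg.norm(u)

lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # Rayleigh-quotient analogue
top = int(np.argmax(np.abs(V.T @ u)))       # which component u found
print(top, lam)
```

Generically, u converges to one of the components v_m with lam recovering lam_m; repeating after subtracting lam·u⊗u⊗u (deflation) recovers the rest.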
min rank(X)  s.t.  A(X) = b
→  min ‖A(X) − A( Σ_{i∈[r]} λ_i (a_i^(1) ⊗ ⋯ ⊗ a_i^(N)) )‖^2_F
New line of work seeks to design further relaxations of matrix rank ball via new norms or decompositions
Rauhut & Stojanac (2015) construct the θ_k-norm based on concepts from computational algebraic geometry
Specifically, they design nested sets based on relaxations of the polynomial ideal generated by the hyperdeterminant of the tensor, which provably converge to the convex hull of the original ideal
They use Gröbner bases to formulate an SDP that efficiently minimizes these norms under affine constraints
They use the θ_1-norm to recover third-order tensors, but do not provide theoretical bounds (next slide)
Nie & Wang (2014) use similar approach to recover best rank-1 approximations
Aswani (2016) also defines a new decomposition and an objective function that allows a randomized approach
Proposed Works
Conduct a more extensive empirical study of recently proposed norm-relaxation methods for tensors