Low Complexity Regularization of Inverse Problems

Gabriel Peyré

Joint work with: Samuel Vaiter, Jalal Fadili, Charles Dossal, Mohammad Golbabaee

www.numerical-tours.com
Overview

• Compressed Sensing and Inverse Problems

• Convex Regularization with Gauges

• Performance Guarantees
Single Pixel Camera (Rice)

[Figure: a scene $\tilde f$ acquired through the micro-mirror array as $x_0$.]

$y[i] = \langle x_0, \varphi_i \rangle$

$P$ measurements, $N$ micro-mirrors.

Reconstructions at undersampling ratios $P/N = 1$, $P/N = 0.16$ and $P/N = 0.02$.
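To make the acquisition concrete, here is a minimal numpy sketch of the measurement model $y[i] = \langle x_0, \varphi_i \rangle$, assuming random $\pm 1$ mirror patterns; the toy scene $x_0$ and the sizes are illustrative assumptions, not the Rice hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64 * 64              # number of micro-mirrors (assumed resolution)
P = int(0.16 * N)        # undersampling ratio P/N = 0.16

x0 = np.zeros(N)         # toy scene: a few bright pixels
x0[rng.choice(N, size=20, replace=False)] = 1.0

# Each measurement y[i] = <x0, phi_i> with a random +/-1 mirror pattern.
Phi = rng.choice([-1.0, 1.0], size=(P, N))
y = Phi @ x0             # stacked inner products: y[i] = <x0, Phi[i]>

print(y.shape)           # (P,) = one scalar per mirror configuration
```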
CS Hardware Model

CS is about designing hardware: input signals $\tilde f \in L^2(\mathbb{R}^2)$.

Physical hardware resolution limit: target resolution $f \in \mathbb{R}^N$.

Micro-mirror array at resolution $N$: $x_0 \in \mathbb{R}^N$; CS hardware output: $y \in \mathbb{R}^P$.

Operator $\Phi$: each measurement applies one mirror configuration $\varphi_i$, so that $y = \Phi x_0$.
Inverse Problems

Recovering $x_0 \in \mathbb{R}^N$ from noisy observations $y = \Phi x_0 + w \in \mathbb{R}^P$.

Examples: inpainting, super-resolution, compressed sensing.
Inverse Problem Regularization

Observations: $y = \Phi x_0 + w \in \mathbb{R}^P$.

Estimator: $x(y)$ depends only on the observations $y$ and a parameter $\lambda$.

Example: variational methods

$x(y) \in \underset{x \in \mathbb{R}^N}{\operatorname{argmin}}\ \tfrac{1}{2} \|y - \Phi x\|^2 + \lambda J(x)$   (data fidelity + regularity)

Performance analysis: criteria on $(x_0, \|w\|, \lambda)$ to ensure
– $L^2$ stability: $\|x(y) - x_0\| = O(\|w\|)$
– model stability (e.g. spike locations).
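To make the variational estimator concrete, here is a minimal numpy sketch solving $\tfrac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$ by proximal gradient descent (ISTA). ISTA is one standard solver, not the method of the talk; the step size, iteration count and toy problem sizes are illustrative assumptions.

```python
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for 0.5*||y - Phi x||^2 + lam*||x||_1."""
    tau = 1.0 / np.linalg.norm(Phi, 2) ** 2   # step size 1/||Phi||^2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)          # gradient of the data fidelity
        z = x - tau * grad
        x = np.sign(z) * np.maximum(np.abs(z) - tau * lam, 0)  # soft threshold
    return x

# Toy inverse problem: Gaussian Phi, sparse x0, small noise w.
rng = np.random.default_rng(0)
P, N = 50, 200
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N); x0[[3, 50, 120]] = [1.5, -2.0, 1.0]
w = 0.01 * rng.standard_normal(P)
x_rec = ista(Phi, Phi @ x0 + w, lam=0.05)
```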
Overview

• Compressed Sensing and Inverse Problems

• Convex Regularization with Gauges

• Performance Guarantees
Union of Linear Models for Data Processing

Union of models: $T \in \mathcal{T}$ linear spaces.

Synthesis sparsity: the image is synthesized from sparse coefficients $x$.

Structured sparsity: the coefficients $x$ are sparse by blocks.

Analysis sparsity: image $x$ with sparse gradient $D^* x$.

Low-rank: multi-spectral imaging, $x_{i,\cdot} = \sum_{j=1}^{r} A_{i,j} S_{j,\cdot}$ (each band mixes $r$ source images $S_{j,\cdot}$).
Gauges for Union of Linear Models

Gauge: $J : \mathbb{R}^N \to \mathbb{R}^+$ convex, with $\forall\, \alpha \in \mathbb{R}^+,\ J(\alpha x) = \alpha J(x)$.

Piecewise regular ball $\Longleftrightarrow$ union of linear models $(T)_{T \in \mathcal{T}}$:
– $J(x) = \|x\|_1$: $T$ = sparse vectors.
– $J(x) = |x_1| + \|x_{2,3}\|$: $T$ = block-sparse vectors.
– $J(x) = \|x\|_*$: $T$ = low-rank matrices.
– $J(x) = \|x\|_\infty$: $T$ = anti-sparse vectors.
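For concreteness, a short numpy sketch evaluating the four gauges above on toy inputs; each is convex and positively homogeneous, as the definition requires. The example vectors are arbitrary.

```python
import numpy as np

def l1(x):            # J(x) = ||x||_1  -> sparse vectors
    return np.sum(np.abs(x))

def block(x):         # J(x) = |x_1| + ||x_{2,3}||  -> block-sparse (N = 3 here)
    return np.abs(x[0]) + np.linalg.norm(x[1:3])

def nuclear(X):       # J(X) = ||X||_*  -> low-rank (sum of singular values)
    return np.sum(np.linalg.svd(X, compute_uv=False))

def linf(x):          # J(x) = ||x||_inf -> anti-sparse vectors
    return np.max(np.abs(x))

x = np.array([2.0, 0.0, -1.0])
print(l1(x), block(x), linf(x), nuclear(np.outer(x, x)))
```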
Subdifferentials and Models

$\partial J(x) = \{\eta \,:\, \forall\, y,\ J(y) \geq J(x) + \langle \eta,\, y - x \rangle\}$

Example: $J(x) = \|x\|_1$ with $I = \operatorname{supp}(x) = \{i \,:\, x_i \neq 0\}$:
$\partial \|x\|_1 = \{\eta \,:\, \eta_I = \operatorname{sign}(x_I),\ \forall\, j \notin I,\ |\eta_j| \leq 1\}$

Definition: $T_x = \operatorname{VectHull}(\partial J(x))^\perp$.

For $\ell^1$: $T_x = \{\eta \,:\, \operatorname{supp}(\eta) \subset I\}$ and $e_x = \operatorname{sign}(x)$, so that
$\eta \in \partial J(x) \;\Longrightarrow\; \operatorname{Proj}_{T_x}(\eta) = e_x$.
Examples

$\ell^1$ sparsity: $J(x) = \|x\|_1$, $e_x = \operatorname{sign}(x)$, $T_x = \{z \,:\, \operatorname{supp}(z) \subset \operatorname{supp}(x)\}$.

Structured sparsity: $J(x) = \sum_b \|x_b\|$, $e_x = (N(x_b))_{b \in B}$ with $N(a) = a/\|a\|$, $T_x = \{z \,:\, \operatorname{supp}(z) \subset \operatorname{supp}(x)\}$.

Nuclear norm: $J(x) = \|x\|_*$ with SVD $x = U \Lambda V^*$, $e_x = U V^*$, $T_x = \{z \,:\, U_\perp^* z V_\perp = 0\}$.

Anti-sparsity: $J(x) = \|x\|_\infty$ with $I = \{i \,:\, |x_i| = \|x\|_\infty\}$, $e_x = |I|^{-1} \operatorname{sign}(x)$, $T_x = \{y \,:\, y_I \propto \operatorname{sign}(x_I)\}$.
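The pairs $(e_x, T_x)$ are directly computable; below is a small numpy sketch for the $\ell^1$ and nuclear-norm cases, returning $e_x$ and the orthogonal projector onto $T_x$. The tolerance used to detect the support or rank is an assumption.

```python
import numpy as np

def model_l1(x, tol=1e-10):
    """e_x = sign(x) and Proj_{T_x}: keep coordinates on supp(x)."""
    I = np.abs(x) > tol
    e = np.sign(x) * I
    proj = lambda z: z * I          # T_x = {z : supp(z) inside supp(x)}
    return e, proj

def model_nuclear(X, tol=1e-10):
    """e_X = U V^* and Proj_{T_X} from the SVD X = U diag(s) V^*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=True)
    r = int(np.sum(s > tol))
    Ur, Vr = U[:, :r], Vt[:r, :].T
    e = Ur @ Vr.T
    def proj(Z):                    # T_X = {Z : U_perp^* Z V_perp = 0}
        PU, PV = Ur @ Ur.T, Vr @ Vr.T
        return PU @ Z + Z @ PV - PU @ Z @ PV
    return e, proj
```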
Overview

• Compressed Sensing and Inverse Problems

• Convex Regularization with Gauges

• Performance Guarantees
Dual Certificate and L2 Stability

Noiseless recovery: $\min_{\Phi x = \Phi x_0} J(x)$   $(\mathcal{P}_0)$

[Figure: the constraint set $\Phi x = \Phi x_0$ touching the ball of $J$ at $x^\star$, with a certificate $\eta \in \partial J(x_0)$.]

Proposition: $x_0$ is a solution of $(\mathcal{P}_0)$ $\Longleftrightarrow$ $\exists\, \eta \in \mathcal{D}(x_0)$.

Dual certificates: $\mathcal{D}(x_0) = \operatorname{Im}(\Phi^*) \cap \partial J(x_0)$
Tight dual certificates: $\bar{\mathcal{D}}(x_0) = \operatorname{Im}(\Phi^*) \cap \operatorname{ri}(\partial J(x_0))$

Theorem: If $\exists\, \eta \in \bar{\mathcal{D}}(x_0)$, then for $\lambda \sim \|w\|$ one has $\|x^\star - x_0\| = O(\|w\|)$.  [Fadili et al. 2013]

[Grasmair, Haltmeier, Scherzer 2010]: $J = \|\cdot\|_1$.
[Grasmair 2012]: $J(x^\star - x_0) = O(\|w\|)$.
Compressed Sensing Setting

Random matrix: $\Phi \in \mathbb{R}^{P \times N}$, $\Phi_{i,j} \sim \mathcal{N}(0, 1)$, i.i.d.

Sparse vectors: $J = \|\cdot\|_1$.  [Rudelson, Vershynin 2006] [Chandrasekaran et al. 2011]
Theorem: Let $s = \|x_0\|_0$. If $P \geq 2 s \log(N/s)$, then $\exists\, \eta \in \bar{\mathcal{D}}(x_0)$ with high probability on $\Phi$.

Low-rank matrices: $J = \|\cdot\|_*$, $x_0 \in \mathbb{R}^{N_1 \times N_2}$.  [Chandrasekaran et al. 2011]
Theorem: Let $r = \operatorname{rank}(x_0)$. If $P \geq 3 r (N_1 + N_2 - r)$, then $\exists\, \eta \in \bar{\mathcal{D}}(x_0)$ with high probability on $\Phi$.

→ Similar results for $\|\cdot\|_{1,2}$, $\|\cdot\|_\infty$.
Minimal-norm Certificate

$\eta \in \mathcal{D}(x_0) \;\Longrightarrow\; \eta = \Phi^* q$ and $\operatorname{Proj}_T(\eta) = e$, where $T = T_{x_0}$, $e = e_{x_0}$.

Minimal-norm pre-certificate: $\eta_0 = \underset{\eta = \Phi^* q,\ \eta_T = e}{\operatorname{argmin}} \|q\|$

Proposition: One has $\eta_0 = (\Phi_T^+ \Phi)^* e$.

Theorem: If $\eta_0 \in \bar{\mathcal{D}}(x_0)$ and $\lambda \sim \|w\|$, the unique solution $x^\star$ of $\mathcal{P}_\lambda(y)$ for $y = \Phi x_0 + w$ satisfies $T_{x^\star} = T_{x_0}$ and $\|x^\star - x_0\| = O(\|w\|)$.  [Vaiter et al. 2013]

[Fuchs 2004]: $J = \|\cdot\|_1$.
[Vaiter et al. 2011]: $J = \|D^* \cdot\|_1$.
[Bach 2008]: $J = \|\cdot\|_{1,2}$ and $J = \|\cdot\|_*$.
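For $J = \|\cdot\|_1$ the pre-certificate is explicit: with $\Phi_I$ the columns of $\Phi$ on the support $I$, $\eta_0 = \Phi^* (\Phi_I^+)^* \operatorname{sign}(x_{0,I})$, and $\eta_0 \in \bar{\mathcal{D}}(x_0)$ reduces to $\|\eta_{0,I^c}\|_\infty < 1$ (the Fuchs criterion). A minimal numpy check on toy data; the sizes and support are illustrative assumptions:

```python
import numpy as np

def fuchs_certificate(Phi, x0, tol=1e-10):
    """Minimal-norm pre-certificate eta_0 for J = ||.||_1 (Fuchs 2004)."""
    I = np.abs(x0) > tol
    PhiI = Phi[:, I]
    # q solves min ||q|| subject to PhiI^T q = sign(x0_I):
    q = PhiI @ np.linalg.solve(PhiI.T @ PhiI, np.sign(x0[I]))
    eta0 = Phi.T @ q
    # Nondegeneracy <=> sup-norm of eta_0 off the support is < 1.
    return eta0, np.max(np.abs(eta0[~I]))

rng = np.random.default_rng(1)
P, N = 80, 200
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N); x0[[5, 40, 100]] = [1.0, -1.0, 2.0]
eta0, sup = fuchs_certificate(Phi, x0)
print("||eta_{0,I^c}||_inf =", sup, "-> model stability" if sup < 1 else "")
```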
Compressed Sensing Setting

Random matrix: $\Phi \in \mathbb{R}^{P \times N}$, $\Phi_{i,j} \sim \mathcal{N}(0, 1)$, i.i.d.

Sparse vectors: $J = \|\cdot\|_1$.  [Wainwright 2009] [Dossal et al. 2011]
Theorem: Let $s = \|x_0\|_0$. If $P \geq 2 s \log(N)$, then $\eta_0 \in \bar{\mathcal{D}}(x_0)$ with high probability on $\Phi$.

Phase transitions: $L^2$ stability at $P \sim 2 s \log(N/s)$ vs. model stability at $P \sim 2 s \log(N)$.

→ Similar results for $\|\cdot\|_{1,2}$, $\|\cdot\|_*$, $\|\cdot\|_\infty$.
→ Not using RIP techniques (the result is non-uniform in $x_0$).
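The model-stability transition can be probed empirically by drawing Gaussian matrices $\Phi$ and testing whether $\eta_0$ is nondegenerate. A sketch reusing the `fuchs_certificate` helper from the previous snippet; the sizes and trial counts are arbitrary (here $2 s \log N \approx 53$):

```python
import numpy as np

rng = np.random.default_rng(2)
N, s, trials = 200, 5, 50
x0 = np.zeros(N); x0[:s] = 1.0                   # s-sparse signal

for P in [20, 40, 60, 80, 100]:
    ok = 0
    for _ in range(trials):
        Phi = rng.standard_normal((P, N))
        _, sup = fuchs_certificate(Phi, x0)      # from the sketch above
        ok += sup < 1                            # is eta_0 nondegenerate?
    print(f"P={P:4d}  P/(2 s log N)={P/(2*s*np.log(N)):.2f}  "
          f"success={ok/trials:.2f}")
```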
1-D Sparse Spikes Deconvolution

$\Phi x = \sum_i x_i\, \varphi(\cdot - \Delta i)$,   $J(x) = \|x\|_1$

Increasing the spacing $\Delta$: reduces correlation, reduces resolution.

[Figure: $\|\eta_{0,I^c}\|_\infty$ as a function of $\Delta$ (axis range 0–20, levels 1 and 2 marked).]

$I = \{j \,:\, x_0(j) \neq 0\}$

$\|\eta_{0,I^c}\|_\infty < 1 \;\Longleftrightarrow\; \eta_0 \in \bar{\mathcal{D}}(x_0) \;\Longleftrightarrow\;$ support recovery.
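The same test traces the deconvolution criterion numerically: build $\Phi$ whose columns are shifted copies of a kernel $\varphi$, place two spikes at spacing $\Delta$, and evaluate $\|\eta_{0,I^c}\|_\infty$. A sketch assuming a Gaussian kernel and reusing the `fuchs_certificate` helper sketched earlier; kernel width and grid size are illustrative:

```python
import numpy as np

n = 256
t = np.arange(n)
phi = lambda c: np.exp(-0.5 * ((t - c) / 4.0) ** 2)   # Gaussian kernel, width 4
Phi = np.stack([phi(c) for c in t], axis=1)           # columns = shifted kernels

for delta in [4, 8, 16, 32]:
    x0 = np.zeros(n)
    x0[n // 2] = 1.0
    x0[n // 2 + delta] = 1.0                          # two spikes, spacing delta
    _, sup = fuchs_certificate(Phi, x0)
    print(f"Delta={delta:3d}  ||eta_{{0,I^c}}||_inf = {sup:.3f}")
```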
Conclusion

Gauges: encode linear models as singular points.

Performance measures ($L^2$ error vs. model recovery) → different CS guarantees.

Specific certificate $\eta_0$.

Open problems:
– Approximate model recovery $T_{x^\star} \approx T_{x_0}$.
– CS performance with complicated gauges (e.g. TV).