2017 Unser (Slides) Biomedical Image Reconstruction

BIOMEDICAL IMAGING GROUP (BIG)
LABORATOIRE D'IMAGERIE BIOMEDICALE

EPFL LIB, Bât. BM 4.127, CH-1015 Lausanne, Switzerland
Téléphone: +41 21 693 51 85; Fax: +41 21 693 37 01
E-mail: [email protected]; Web: http://bigwww.epfl.ch

Dr. Michael Liebling
Biological Imaging Center
California Inst. of Technology
Mail Code 139-74
Pasadena, CA 91125, USA

Lausanne, August 19, 2004

Dear Dr. Liebling,

I am pleased to inform you that you were selected to receive the 2004 Research Award of the Swiss Society of Biomedical Engineering for your thesis work "On Fresnelets, interference fringes, and digital holography". The award will be presented during the general assembly of the SSBE, September 3, Zurich, Switzerland.

Please let us know if
1) you will be present to receive the award,
2) you would be willing to give a 10-minute presentation of the work during the general assembly.

The award comes with a cash prize of 1000 CHF. Would you please send your banking information to the treasurer of the SSBE, Uli Diermann (E-mail: [email protected]), so that he can transfer the cash prize to your account?

I congratulate you on your achievement.

With best regards,

Michael Unser, Professor
Chairman of the SSBE Award Committee

cc: Ralph Mueller, president of the SSBE; Uli Diermann, treasurer


Biomedical image reconstruction

Michael Unser
Biomedical Imaging Group
EPFL, Lausanne, Switzerland

Tutorial Session, European Molecular Imaging Meeting (EMIM'17), 5-7 April 2017, Köln, Germany

OUTLINE

■ 1. Imaging as an inverse problem
  ■ Basic imaging operators
  ■ Comparison of modalities
  ■ Discretization of the inverse problem
■ 2. Classical reconstruction algorithms
  ■ Backprojection
  ■ Tikhonov regularization
  ■ Wiener / LMSE solution
■ 3. Modern methods: the sparsity (re)evolution
  ■ Specific examples: magnetic resonance imaging, computed tomography, differential phase-contrast tomography
■ 4. What's next: the learning revolution?

2
Inverse problems in bio-imaging

Linear forward model: y = Hs + n (H: integral operator; n: noise)

Problem: recover s from the noisy measurements y

The easy scenario
The inverse problem is well posed if there exists c₀ > 0 such that c₀‖s‖ ≤ ‖Hs‖ for all s ∈ X
⇒ s ≈ H⁻¹y (assuming the noise is negligible)

Backprojection (poor man's solution): s ≈ Hᵀy

Basic limitations
1) Inherent noise amplification
2) Difficulty to invert H (too large or non-square)
3) All interesting inverse problems are ill-posed

Part 1:

Setting up
the problem

4
Forward imaging model (noise-free)
Unknown molecular/anatomical map: s(r), r = (x, y, z, t) 2 Rd

defined over a continuum in space-time


s 2 L2 (Rd ) (space of finite-energy functions)

Imaging operator H : s 7! y = (y1 , · · · , yM ) = H{s}

from continuum to discrete (finite dimensional)

H : L2 (Rd ) ! RM

Linearity assumption: for all s1 , s2 2 L2 (Rd ), ↵1 , ↵2 2 R

H{↵1 s1 + ↵2 s2 } = ↵1 H{s1 } + ↵2 H{s2 }

impulse response of mth detector


Z
) [y]m = ym = h⌘m , si = ⌘m (r)s(r)dr
Rd

(by the Riesz representation theorem) 5

Images are obviously made of sine waves ...

6
Basic operator: Fourier transform

F: L₂(ℝᵈ) → L₂(ℝᵈ)
f̂(ω) = F{f}(ω) = ∫_{ℝᵈ} f(x) e^{−j⟨ω,x⟩} dx

Reconstruction formula (inverse Fourier transform)
f(x) = F⁻¹{f̂}(x) = (1/(2π)ᵈ) ∫_{ℝᵈ} f̂(ω) e^{j⟨ω,x⟩} dω (a.e.)

Equivalent analysis functions: ηₘ(x) = e^{j⟨ωₘ,x⟩} (complex sinusoids)

2D Fourier reconstruction

Original image f(x); reconstruction using the N largest coefficients:

f̃(x) = (1/(2π)²) Σ_{ωₖ ∈ subset} f̂(ωₖ) e^{j⟨ωₖ,x⟩} Δω
8
Magnetic resonance imaging

Magnetic resonance: ω₀ = γB₀
Frequency encoding: a gradient field makes the resonance frequency ω vary with the position x

Linear forward model for MRI, r = (x, y, z):
ŝ(ωₘ) = ∫_{ℝ³} s(r) e^{−j⟨ωₘ,r⟩} dr (sampling of the Fourier transform)

Extended forward model with coil sensitivity w(r):
ŝ_w(ωₘ) = ∫_{ℝ³} w(r) s(r) e^{−j⟨ωₘ,r⟩} dr
9

Basic operator: Windowing

W: L₂(ℝᵈ) → L₂(ℝᵈ)
W{f}(x) = w(x) f(x)

Positive window function (continuous and bounded): w ∈ C_b(ℝᵈ), w(x) ≥ 0

Special case: modulation
w(r) = e^{j⟨ω₀,r⟩}
e^{j⟨ω₀,r⟩} f(r) ⟷(F) f̂(ω − ω₀)

Application: structured illumination microscopy (SIM)

10
Basic operator: Convolution

H: L₂(ℝᵈ) → L₂(ℝᵈ)
H{f}(x) = (h ∗ f)(x) = ∫_{ℝᵈ} h(x − y) f(y) dy

Impulse response: h(x) = H{δ}(x)

Equivalent analysis functions: ηₘ(x) = h(xₘ − x)

Frequency response: ĥ(ω) = F{h}(ω)

Convolution as a frequency-domain product: (h ∗ f)(x) ⟷(F) ĥ(ω) f̂(ω)
11

Modeling of optical systems

f(x, y) → g(x, y) = (h ∗ f)(x, y)
h(x, y): point spread function (PSF)

Diffraction-limited optics = LSI system

Aberration-free point spread function (in the focal plane), radial profile:
h(x, y) = h(r) = C · (2J₁(r)/r)²  (Airy disk)
where r = √(x² + y²) (radial distance)

Effect of misfocus: point-source output (in focus vs. defocused)


Basic operator: X-ray transform

Projection geometry: x = tθ + rθ^⊥ with θ = (cos θ, sin θ)

Radon transform (line integrals):
R_θ{s(x)}(t) = ∫_ℝ s(tθ + rθ^⊥) dr
             = ∫_{ℝ²} s(x) δ(t − ⟨x, θ⟩) dx

The measurements p_θ(t) = R_θ{s}(t), collected over all angles θ, form the sinogram.

[Figure: X-ray tomography and the Radon transform. (a) Imaging geometry. (b) 2-D reconstruction of a tomogram. (c) Its Radon transform (sinogram).]

In practice, the measurements correspond to the sampled values of the Radon transform of the absorption map s(x) at a series of points (tₘ, θₘ), m = 1, …, M. The equivalent analysis functions are
ηₘ(x) = δ(tₘ − ⟨x, θₘ⟩),
which represent a series of idealized lines in ℝ² perpendicular to θₘ = (cos θₘ, sin θₘ).

13

Discretization

For discretization purposes, we represent the absorption distribution as the weighted sum of separable B-spline-like basis functions
s(x) = Σₖ s[k] β(x − k),
with β(x) = β(x)β(y), where β is a suitable symmetric kernel (typically, a polynomial B-spline of degree n). The constraint here is that β ought to have a short support to reduce computations, which rules out the use of the sinc basis. In order to determine the system matrix, we need to compute the Radon transform of the basis functions; the properties of the Radon transform listed below are helpful for that purpose.

Central slice theorem

Measurements of line integrals (Radon transform): p_θ(t) = R_θ{f}(t)

1D and 2D Fourier transforms:
p̂_θ(ω) = F₁D{p_θ}(ω)
f̂(ω) = F₂D{f}(ω) = f̂_pol(ω, θ)

Central-slice theorem:
p̂_θ(ω) = f̂(ω cos θ, ω sin θ) = f̂_pol(ω, θ)

Proof: for θ = 0,
f̂(ω, 0) = ∫∫ f(x, y) e^{−jωx} dx dy = ∫ (∫ f(x, y) dy) e^{−jωx} dx = p̂₀(ω),
with p₀(x) = ∫ f(x, y) dy;
then use the rotation property of the Fourier transform.
14
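As a sanity check, the θ = 0 case of the central-slice theorem holds exactly for the discrete Fourier transform as well: summing a digital image along one axis and taking a 1-D FFT matches the corresponding slice of the 2-D FFT. A minimal NumPy sketch (the test image is an arbitrary random array):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))   # toy 2-D "image" f(x, y)

# Projection along y (theta = 0): p_0(x) = integral of f(x, y) dy
p0 = f.sum(axis=1)

# Central-slice theorem: the 1-D FT of p_0 equals the omega_y = 0 slice of the 2-D FT
lhs = np.fft.fft(p0)
rhs = np.fft.fft2(f)[:, 0]
assert np.allclose(lhs, rhs)
```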
Comparison of modalities (Modality / Radiation / Forward model / Variations)

Tomography
  Radiation: coherent x-ray
  Forward model: yᵢ = R_{θᵢ} x
  Variations: 2D or 3D parallel, cone beam, spiral sampling

3D deconvolution microscopy
  Radiation: fluorescence
  Forward model: y = Hx
  Variations: brightfield, confocal, light sheet

Structured illumination microscopy (SIM)
  Radiation: fluorescence
  Forward model: yᵢ = HWᵢx (H: PSF of the microscope; Wᵢ: illumination pattern)
  Variations: full 3D reconstruction, non-sinusoidal patterns

Positron emission tomography (PET)
  Radiation: gamma rays
  Forward model: yᵢ = H_{θᵢ} x
  Variations: list mode, with time-of-flight

Magnetic resonance imaging (MRI)
  Radiation: radio frequency
  Forward model: y = Fx
  Variations: uniform or non-uniform sampling in k-space

Cardiac MRI
  Radiation: radio frequency
  Forward model: y_{t,i} = FₜWᵢx (parallel, non-uniform; Wᵢ: coil sensitivity)
  Variations: gated or not, retrospective registration

Optical diffraction tomography
  Radiation: coherent light
  Forward model: yᵢ = WᵢFᵢx
  Variations: with holography or grating interferometry
Discretization: Finite-dimensional formalism

s(r) = Σ_{k∈Ω} s[k] φₖ(r)

Signal vector: s = (s[k])_{k∈Ω} of dimension K

Measurement model (image formation):
yₘ = ∫_{ℝᵈ} s(r) ηₘ(r) dr + n[m] = ⟨s, ηₘ⟩ + n[m], (m = 1, …, M)

ηₘ: sampling/imaging function (mth detector)
n[·]: additive noise

y = y₀ + n = Hs + n

(M × K) system matrix: [H]_{m,k} = ⟨ηₘ, φₖ⟩ = ∫_{ℝᵈ} ηₘ(r) φₖ(r) dr
16
Example of basis functions

Shift-invariant representation: φₖ(x) = φ(x − k)

Separable generator: φ(x) = ∏_{n=1}^{d} φ(xₙ)

Pixelated model: φ(x) = rect(x)

Bilinear model: φ(x) = (rect ∗ rect)(x) = tri(x)

Bandlimited representation: φ(x) = sinc(x)

[Plots of the rect, tri, and sinc generators]
17

Part 2:

Classical image  

reconstruction

Discretized forward model: y = Hs + n

Inverse problem: how to efficiently recover s from y?


18
Vector calculus

Scalar cost function J(v): ℝᴺ → ℝ

Vector differentiation (gradient):
∂J(v)/∂v = [∂J/∂v₁, …, ∂J/∂v_N]ᵀ = ∇J(v)

Necessary condition for an unconstrained optimum (minimum or maximum):
∂J(v)/∂v = 0 (also sufficient if J(v) is convex in v)

Useful identities
∂/∂v (aᵀv) = ∂/∂v (vᵀa) = a
∂/∂v (vᵀAv) = (A + Aᵀ) · v
∂/∂v (vᵀAv) = 2A · v if A is symmetric

19
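The differentiation identities above are easy to check against central finite differences (a NumPy sketch; the dimension and random data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
a = rng.standard_normal(N)
A = rng.standard_normal((N, N))
v = rng.standard_normal(N)

def num_grad(J, v, eps=1e-6):
    # Central finite differences, one coordinate at a time
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v); e[i] = eps
        g[i] = (J(v + e) - J(v - e)) / (2 * eps)
    return g

# d/dv (a^T v) = a
assert np.allclose(num_grad(lambda t: a @ t, v), a)
# d/dv (v^T A v) = (A + A^T) v
assert np.allclose(num_grad(lambda t: t @ A @ t, v), (A + A.T) @ v)
# ... which reduces to 2 A v when A is symmetric
S = A + A.T
assert np.allclose(num_grad(lambda t: t @ S @ t, v), 2 * S @ v)
```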

Basic reconstruction: least-squares solution

s → [imaging system] → y = Hs + n → [LS algorithm] → s̃, with ỹ = Hs̃

Least-squares fitting criterion: J_LS(s̃, y) = ‖y − Hs̃‖²
min_{s̃} ‖y − ỹ‖² = min_{s̃} J_LS(s̃, y) (maximum consistency with the data)

Formal least-squares solution
J_LS(s, y) = ‖y − Hs‖² = ‖y‖² + sᵀ(HᵀH)s − 2yᵀHs
∂J_LS(s, y)/∂s = 2HᵀHs − 2Hᵀy = 0
⇒ s_LS = arg min_s J_LS(s, y) = (HᵀH)⁻¹Hᵀy

Backprojection (poor man's solution): s ≈ Hᵀy
OK if H is unitary ⇔ H⁻¹ = Hᵀ

Basic limitations
1) Inherent noise amplification
2) Difficulty to invert H (too large or non-square)
3) All interesting inverse problems are ill-posed
20
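The normal-equation solution s_LS = (HᵀH)⁻¹Hᵀy can be sketched numerically for a small, well-posed (overdetermined) problem; the sizes and noise level below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 40, 10                      # more measurements than unknowns
H = rng.standard_normal((M, K))
s_true = rng.standard_normal(K)
y = H @ s_true + 0.01 * rng.standard_normal(M)

# Least-squares solution via the normal equations: s_LS = (H^T H)^{-1} H^T y
s_ls = np.linalg.solve(H.T @ H, H.T @ y)

# Agrees with the library solver, and is close to the ground truth
assert np.allclose(s_ls, np.linalg.lstsq(H, y, rcond=None)[0])
assert np.linalg.norm(s_ls - s_true) < 0.1
```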
Linear inverse problems (20th-century theory)

Dealing with ill-posed problems: Tikhonov regularization
R(s) = ‖Ls‖₂²: regularization (or smoothness) functional
L: regularization operator (e.g., gradient)

min_s R(s) subject to ‖y − Hs‖₂² ≤ σ²

Equivalent variational problem   [Andrey N. Tikhonov (1906-1993)]
s* = arg min_s ‖y − Hs‖₂² + λ‖Ls‖₂²
     (data consistency + regularization)

Formal linear solution: s = (HᵀH + λLᵀL)⁻¹Hᵀy = R_λ · y

Interpretation: "filtered" backprojection
21
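The closed-form Tikhonov solution can be sketched on an under-determined toy problem, where the plain normal matrix HᵀH is singular but regularization makes the system invertible (a NumPy sketch; the sizes, λ, the discrete gradient L, and the smooth test signal are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
M, K = 15, 30                    # under-determined: ill-posed without regularization
H = rng.standard_normal((M, K))
# Discrete gradient (finite differences) as the regularization operator L
L = (np.eye(K) - np.roll(np.eye(K), -1, axis=1))[:-1]
s_true = np.sin(2 * np.pi * np.arange(K) / K)   # smooth ground truth
y = H @ s_true + 0.01 * rng.standard_normal(M)

lam = 0.1
# Tikhonov solution: s = (H^T H + lam L^T L)^{-1} H^T y
s_tik = np.linalg.solve(H.T @ H + lam * L.T @ L, H.T @ y)

# H^T H alone is rank-deficient (rank M < K); the regularized system is solvable
assert np.linalg.matrix_rank(H.T @ H) == M
assert np.allclose((H.T @ H + lam * L.T @ L) @ s_tik, H.T @ y)
```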

Statistical formulation (20th century)

Linear measurement model: y = Hs + n
n: additive white Gaussian noise (i.i.d.)
s: realization of a zero-mean Gaussian process with covariance matrix E{s·sᵀ} = C_s
[Norbert Wiener (1894-1964)]

Wiener (LMMSE) solution = Gauss MMSE = Gauss MAP

s_MAP = arg min_s (1/σ²)‖y − Hs‖₂² + ‖C_s^{−1/2} s‖₂²
        (data log-likelihood + Gaussian prior likelihood)

⇔ L = C_s^{−1/2}: whitening filter

Quadratic regularization (Tikhonov)
s_Tik = arg min_s ‖y − Hs‖₂² + λR(s) with R(s) = ‖Ls‖₂²

Linear solution: s = (HᵀH + λLᵀL)⁻¹Hᵀy = R_λ · y
22
Iterative reconstruction algorithm

Generic minimization problem: s_opt = arg min_s J(s, y)

Steepest-descent solution
s^{(k+1)} = s^{(k)} − γ∇J(s^{(k)}, y)

Iterative constrained least-squares reconstruction
J_Tik(s, y) = ½‖y − Hs‖² + (λ/2)‖Ls‖²

Gradient: ∂J_Tik(s, y)/∂s = −s₀ + (HᵀH + λLᵀL)s with s₀ = Hᵀy

Steepest-descent algorithm
s^{(k+1)} = s̃^{(k)} + γ(s₀ − (HᵀH + λLᵀL)s̃^{(k)})

Positivity constraint (IC), projection on a convex set:
[s̃^{(k+1)}]ᵢ = 0 if [s^{(k+1)}]ᵢ < 0, and [s^{(k+1)}]ᵢ otherwise
23
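The projected steepest-descent iteration above can be sketched in a few lines (NumPy; the problem sizes, λ, step size, and iteration count are arbitrary assumptions, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K = 30, 20
H = rng.standard_normal((M, K))
L = np.eye(K) - np.roll(np.eye(K), -1, axis=1)   # periodic finite-difference (gradient) matrix
s_true = np.abs(rng.standard_normal(K))          # nonnegative ground truth
y = H @ s_true + 0.05 * rng.standard_normal(M)

lam = 0.1
A = H.T @ H + lam * L.T @ L                      # normal matrix plus regularization
s0 = H.T @ y
gamma = 1.0 / np.linalg.norm(A, 2)               # step size small enough for stability

s = np.zeros(K)
for _ in range(20000):
    s = s + gamma * (s0 - A @ s)                 # steepest-descent step on J_Tik
    s = np.maximum(s, 0)                         # projection onto the positive orthant

# The iterate is feasible and (numerically) a fixed point of the projected iteration
assert s.min() >= 0
assert np.linalg.norm(np.maximum(s + gamma * (s0 - A @ s), 0) - s) < 1e-6
```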

Iterative deconvolution: unregularized case

Degraded image: Gaussian blur + additive noise
[van Cittert animation vs. ground truth]
24
Effect of the regularization parameter

Degraded image: Gaussian blur + additive noise
Not enough regularization: λ = 0.02, λ = 0.2
Optimal regularization: λ = 2
Too much: λ = 20, λ = 200
Unser: Image processing 9- 25

Selecting the regularization operator

Translation-, rotation- and scale-invariant operators
Laplacian: Δs = (∇ᵀ∇)s ⟷ −‖ω‖² ŝ(ω)
Modulus of gradient: |∇s|
Fractional Laplacian: (−Δ)^{γ/2} ⟷ ‖ω‖^γ ŝ(ω)

TRS-invariant regularization functional
‖∇s‖²_{L₂(ℝᵈ)} = ‖(−Δ)^{1/2} s‖²_{L₂(ℝᵈ)} ⇒ L: discrete version of the gradient

Fractional Brownian motion field
Statistical decoupling/whitening: (−Δ)^{γ/2} s = w ⟷ 1/‖ω‖^γ spectral decay
26
Relevance of self-similarity for bio-imaging
■ Fractals and physiology

27

Designing fast reconstruction algorithms

Normal matrix: A = HᵀH (symmetric)

Formal linear solution: s = (A + λLᵀL)⁻¹Hᵀy = R_λ · y

Generic form of the iterator: s^{(k+1)} = s^{(k)} + γ(s₀ − (A + λLᵀL)s^{(k)})

Recognizing structured matrices
L: convolution matrix ⇒ LᵀL: symmetric convolution matrix
L, A: convolution matrices ⇒ (A + λLᵀL): symmetric convolution matrix

Fast implementation
Diagonalization of convolution matrices ⇒ FFT-based implementation

Applicable to:
- deconvolution microscopy (Wiener filter)
- parallel-ray computed tomography (FBP)
- MRI, including non-uniform sampling of k-space
28
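When H and L are both (circular) convolutions, the DFT diagonalizes the whole linear system, so the Tikhonov solution reduces to a pointwise division in the frequency domain. A 1-D NumPy sketch (the Gaussian blur, difference filter, λ, and test signal are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256
x = np.arange(N)
s_true = np.cos(2 * np.pi * 3 * x / N) + 0.5 * np.sin(2 * np.pi * 7 * x / N)

# Periodic Gaussian blur h and finite-difference regularization filter l
h = np.exp(-0.5 * (np.minimum(x, N - x) / 4.0) ** 2)
h /= h.sum()
l = np.zeros(N); l[0], l[1] = 1.0, -1.0

h_f, l_f = np.fft.fft(h), np.fft.fft(l)
y = np.fft.ifft(h_f * np.fft.fft(s_true)).real + 0.01 * rng.standard_normal(N)

# Tikhonov solution s = (H^T H + lam L^T L)^{-1} H^T y, diagonalized by the DFT:
lam = 1e-3
s_rec = np.fft.ifft(np.conj(h_f) * np.fft.fft(y)
                    / (np.abs(h_f) ** 2 + lam * np.abs(l_f) ** 2)).real

assert np.linalg.norm(s_rec - s_true) < 0.1 * np.linalg.norm(s_true)
```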
Part 3:

Modern image  

reconstruction

29

Linear inverse problems: The sparsity (r)evolution

(20th century: p = 2) → (21st century: p = 1)

s_rec = arg min_s ‖y − Hs‖₂² + λR(s)

Non-quadratic regularization
R(s) = ‖Ls‖²_{ℓ₂} → ‖Ls‖^p_{ℓp} → ‖Ls‖_{ℓ₁}

Total variation (Rudin-Osher, 1992)
R(s) = ‖Ls‖_{ℓ₁} with L: gradient

Wavelet-domain regularization (Figueiredo et al., Daubechies et al., 2004)
v = W⁻¹s: wavelet expansion of s (typically, sparse)
R(s) = ‖v‖_{ℓ₁}

Compressed sensing/sampling (Candès-Romberg-Tao; Donoho, 2006)
30
Sparsifying transforms

Biomedical images are well described by few basis coefficients

Prior = sparse representation: R(s) = ‖Wᵀs‖_{ℓ₁}

[Plot: normalized MSE vs. percentage of coefficients kept (0.1% to 100%) for Fourier, DCT, 8×8 block DCT, DWT (Haar), DWT (spline2), DWT (9/7), with error maps]

Advantages:
• convex
• favors sparse solutions
• fast: W-FISTA
(Guerquin-Kern, IEEE TMI 2011)
31

Theory of compressive sensing

Generalized sampling setting (after discretization)
Linear inverse problem: y = Hs + n

Sparse representation of the signal: s = Wx with ‖x‖₀ = K ≪ N_x

N_y × N_x system matrix: A = HW

Formulation of the ill-posed recovery problem when 2K < N_y ≪ N_x
(P0) min_x ‖y − Ax‖₂² subject to ‖x‖₀ ≤ K

Theoretical result
Under suitable conditions on A (e.g., restricted isometry), the solution is unique
and the recovery problem (P0) is equivalent to:
(P1) min_x ‖y − Ax‖₂² subject to ‖x‖₁ ≤ C₁
[Donoho et al., 2005; Candès-Tao, 2006, ...]
32
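The ℓ₁-relaxed recovery problem can be solved with simple proximal-gradient iterations (ISTA). The sketch below recovers a K-sparse vector from noiseless random measurements with N_y ≪ N_x; the sizes, λ, and iteration count are arbitrary assumptions (FISTA or ADMM would be the faster choices in practice):

```python
import numpy as np

rng = np.random.default_rng(5)
Ny, Nx, K = 50, 100, 5
A = rng.standard_normal((Ny, Nx)) / np.sqrt(Ny)
x_true = np.zeros(Nx)
support = rng.choice(Nx, K, replace=False)
x_true[support] = rng.uniform(1, 3, K) * rng.choice([-1.0, 1.0], K)
y = A @ x_true                                   # noise-free measurements, Ny << Nx

# ISTA: proximal-gradient iterations for  min 0.5 ||y - Ax||^2 + lam ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(Nx)
for _ in range(5000):
    z = x + step * A.T @ (y - A @ x)             # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)   # soft-threshold

# The sparse vector (and its support) is recovered up to a small lam-induced bias
assert np.linalg.norm(x - x_true) < 0.5
assert set(np.flatnonzero(np.abs(x) > 0.5)) == set(support)
```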
Compressive sensing (CS) and ℓ₁ minimization
[Donoho et al., 2005; Candès-Tao, 2006, ...]

y = Ax + "noise"

Sparse representation of the signal: s = Wx with ‖x‖₀ = K ≪ N_x

Equivalent N_y × N_x sensing matrix: A = HW

Constrained (synthesis) formulation of the recovery problem
min_x ‖x‖₁ subject to ‖y − Ax‖₂² ≤ σ²
33

Classical regularized least-squares estimator

Linear measurement model:
yₘ = ⟨hₘ, x⟩ + n[m], m = 1, …, M

System matrix: H = [h₁ ⋯ h_M]ᵀ ∈ ℝ^{M×N}

x_LS = arg min_{x∈ℝᴺ} ‖y − Hx‖₂² + λ‖x‖₂²

⇒ x_LS = (HᵀH + λI_N)⁻¹Hᵀy
        = Hᵀa = Σ_{m=1}^{M} aₘhₘ, where a = (HHᵀ + λI_M)⁻¹y

Interpretation: x_LS ∈ span{hₘ}_{m=1}^{M}

Lemma
(HᵀH + λI_N)⁻¹Hᵀ = Hᵀ(HHᵀ + λI_M)⁻¹
34
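The lemma (and the resulting span property) is easy to verify numerically (a NumPy sketch; the sizes and λ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
M, N = 8, 20                      # fewer measurements than unknowns
H = rng.standard_normal((M, N))
lam = 0.5

# (H^T H + lam I_N)^{-1} H^T  ==  H^T (H H^T + lam I_M)^{-1}
lhs = np.linalg.solve(H.T @ H + lam * np.eye(N), H.T)
rhs = H.T @ np.linalg.solve(H @ H.T + lam * np.eye(M), np.eye(M))
assert np.allclose(lhs, rhs)

# Consequently x_LS = H^T a lies in the span of the measurement vectors h_m
y = rng.standard_normal(M)
a = np.linalg.solve(H @ H.T + lam * np.eye(M), y)
assert np.allclose(lhs @ y, H.T @ a)
```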
Generalization: constrained ℓ₂ minimization

Discrete signal to reconstruct: x = (x[n])_{n∈ℤ}

Sensing operator H: ℓ₂(ℤ) → ℝᴹ
x ↦ z = H{x} = (⟨x, h₁⟩, …, ⟨x, h_M⟩) with hₘ ∈ ℓ₂(ℤ)

Closed convex set in the measurement space: C ⊂ ℝᴹ
Example: C_y = {z ∈ ℝᴹ : ‖y − z‖₂² ≤ σ²}

Representer theorem for constrained ℓ₂ minimization
(P2) min_{x∈ℓ₂(ℤ)} ‖x‖²_{ℓ₂} s.t. H{x} ∈ C

The problem (P2) has a unique solution of the form
x_LS = Σ_{m=1}^{M} aₘhₘ = H*{a}
with expansion coefficients a = (a₁, …, a_M) ∈ ℝᴹ.
(U.-Fageot-Gupta IEEE Trans. Info. Theory, Sept. 2016) 35

Constrained ℓ₁ minimization ⇒ sparsifying effect

Discrete signal to reconstruct: x = (x[n])_{n∈ℤ}

Sensing operator H: ℓ₁(ℤ) → ℝᴹ
x ↦ z = H{x} = (⟨x, h₁⟩, …, ⟨x, h_M⟩) with hₘ ∈ ℓ∞(ℤ)

Closed convex set in the measurement space: C ⊂ ℝᴹ

Representer theorem for constrained ℓ₁ minimization
(P1) V = arg min_{x∈ℓ₁(ℤ)} ‖x‖_{ℓ₁} s.t. H{x} ∈ C
is convex, weak*-compact, with extreme points of the form
x_sparse[·] = Σ_{k=1}^{K} aₖ δ[· − nₖ] with K = ‖x_sparse‖₀ ≤ M.

If the CS condition is satisfied, then the solution is unique.
(U.-Fageot-Gupta IEEE Trans. Info. Theory, Sept. 2016)
36
Controlling sparsity

Measurement model: yₘ = ⟨hₘ, x⟩ + n[m], m = 1, …, M

x_sparse = arg min_{x∈ℓ₁(ℤ)} Σ_{m=1}^{M} (yₘ − ⟨hₘ, x⟩)² + λ‖x‖_{ℓ₁}

[Plots: sparsity index K vs. λ (10⁻³ to 10²) for conventional, DCT, and CS measurements; (a) sparse model, (b) Gaussian model]

37

Geometry of ℓ₂ vs. ℓ₁ minimization

Prototypical inverse problem

min_x ‖y − Hx‖²_{ℓ₂} + λ‖x‖²_{ℓ₂} ⇔ min_x ‖x‖_{ℓ₂} subject to ‖y − Hx‖²_{ℓ₂} ≤ σ²

min_x ‖y − Hx‖²_{ℓ₂} + λ‖x‖_{ℓ₁} ⇔ min_x ‖x‖_{ℓ₁} subject to ‖y − Hx‖²_{ℓ₂} ≤ σ²

[Sketch: constraint set C = {x : y₁ = h₁ᵀx}, with
ℓ₂-ball: |x₁|² + |x₂|² = C₂
ℓ₁-ball: |x₁| + |x₂| = C₁]
38
Geometry of ℓ₂ vs. ℓ₁ minimization

Prototypical inverse problem
min_x ‖y − Hx‖²_{ℓ₂} + λ‖x‖²_{ℓ₂} ⇔ min_x ‖x‖_{ℓ₂} subject to ‖y − Hx‖²_{ℓ₂} ≤ σ²
min_x ‖y − Hx‖²_{ℓ₂} + λ‖x‖_{ℓ₁} ⇔ min_x ‖x‖_{ℓ₁} subject to ‖y − Hx‖²_{ℓ₂} ≤ σ²

[Sketch: the ℓ₁-ball |x₁| + |x₂| = C₁ touches the constraint line y₁ = h₁ᵀx at sparse extreme points, unlike the ℓ₂-ball |x₁|² + |x₂|² = C₂; configuration for a non-unique ℓ₁ solution]
39

Variational-MAP formulation of the inverse problem

Linear forward model: y = Hs + n (linear model H; noise n)

Reconstruction as an optimization problem
s_rec = arg min_s ‖y − Hs‖₂² + λ‖Ls‖ₚᵖ, p = 1, 2
        (data consistency + regularization)

regularization ∝ −log Prob(s): prior log-likelihood
40
Physical model: image formation and acquisition

yₘ = ∫_{ℝᵈ} s(x) ηₘ(x) dx + n[m] = ⟨s, ηₘ⟩ + n[m], (m = 1, …, M)

Additive white Gaussian noise scenario (AWGN); n: i.i.d. noise with pdf p_N

Spline-like reconstruction model: s(r) = Σ_{k∈Ω} s[k] φₖ(r)

Statistical innovation model
Ls = w ⇔ s = L⁻¹w
Statistical decoupling: p_S(s) ∝ p_U(Ls) ≈ ∏_{k∈Ω} p_U([Ls]ₖ)

Discretization of the reconstruction problem
s = (s[k])_{k∈Ω}, u = Ls (matrix notation)
y = y₀ + n = Hs + n

Posterior probability distribution (Bayes' rule)
p_{S|Y}(s|y) = p_{Y|S}(y|s) p_S(s) / p_Y(y) = p_N(y − Hs) p_S(s) / p_Y(y)
⇒ p_{S|Y}(s|y) ∝ exp(−‖y − Hs‖₂² / (2σ²)) ∏_{k∈Ω} p_U([Ls]ₖ)

... and then take the log and maximize ...

[Book: Michael Unser and Pouya D. Tafti, An Introduction to Sparse Stochastic Processes. Providing a novel approach to sparse stochastic processes, this comprehensive book presents the theory of stochastic processes that are ruled by stochastic differential equations and that admit a parsimonious representation in a matched wavelet-like basis. Two key themes are the statistical property of infinite divisibility, which leads to two distinct types of behavior (Gaussian and sparse), and the structural link between linear stochastic processes and spline functions, which is exploited to simplify the mathematical analysis. The core of the book is devoted to investigating sparse processes, including a complete description of their transform-domain statistics. The final part develops practical signal-processing algorithms that are based on these models, with special emphasis on biomedical image reconstruction. This is an ideal reference for graduate students and researchers with an interest in signal/image processing, compressed sensing, approximation theory, machine learning, or statistics. Michael Unser is Professor and Director of EPFL's Biomedical Imaging Group, Switzerland. He is a member of the Swiss Academy of Engineering Sciences, a fellow of EURASIP, and a fellow of the IEEE. Pouya D. Tafti is a researcher at Qlaym GmbH, Düsseldorf, and a former member of the Biomedical Imaging Group at EPFL, where he conducted research on the theory and applications of probabilistic models for data.]

41

42
p_U is part of the infinitely divisible family

General form of the MAP estimator
s_MAP = arg min_s ( ½‖y − Hs‖₂² + σ² Σₙ Φ_U([Ls]ₙ) )

Potential: Φ_U(x) = −log p_U(x)

Gaussian: p_U(x) = (1/(√(2π)σ₀)) e^{−x²/(2σ₀²)} ⇒ Φ_U(x) = x²/(2σ₀²) + C₁
Laplace: p_U(x) = (λ/2) e^{−λ|x|} ⇒ Φ_U(x) = λ|x| + C₂
Student: p_U(x) = (1/B(r, ½)) (1/(x² + 1))^{r+½} ⇒ Φ_U(x) = (r + ½) log(1 + x²) + C₃

(sparser from top to bottom)
43

Proximal operator: pointwise denoiser

[Plots: potential Φ_U(u) and the corresponding pointwise denoisers]

• linear attenuation ⟷ ℓ₂ minimization
• soft-threshold ⟷ ℓ₁ minimization
• shrinkage function ≈ ℓp relaxation for p → 0
44
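The first two denoisers have simple closed forms that follow directly from prox(y; λ) = argmin_u ½(y − u)² + λΦ(u); a minimal sketch:

```python
import numpy as np

# Proximal operators prox(y; lam) = argmin_u 0.5 (y - u)^2 + lam * Phi(u)

def prox_l2(y, lam):
    # Phi(u) = 0.5 u^2  ->  linear attenuation of the input
    return y / (1 + lam)

def prox_l1(y, lam):
    # Phi(u) = |u|  ->  soft-threshold: zeroes |y| <= lam, shrinks the rest by lam
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0)

assert np.allclose(prox_l2(2.0, 1.0), 1.0)
assert np.allclose(prox_l1(np.array([-4.0, -1.0, 0.0, 2.0]), 1.0),
                   [-3.0, 0.0, 0.0, 1.0])
```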
Maximum a posteriori (MAP) estimation

Constrained optimization formulation

Auxiliary innovation variable: u = Ls

s_MAP = arg min_{s∈ℝᴷ} ( ½‖y − Hs‖₂² + σ² Σₙ Φ_U([u]ₙ) ) subject to u = Ls

Augmented Lagrangian:
L_A(s, u, α) = ½‖y − Hs‖₂² + σ² Σₙ Φ_U([u]ₙ) + αᵀ(Ls − u) + (μ/2)‖Ls − u‖₂²

(Bostan et al., IEEE TIP 2013)
45

Alternating direction method of multipliers (ADMM)

L_A(s, u, α) = ½‖y − Hs‖₂² + σ² Σₙ Φ_U([u]ₙ) + αᵀ(Ls − u) + (μ/2)‖Ls − u‖₂²

Sequential minimization
s^{k+1} ← arg min_{s∈ℝᴺ} L_A(s, uᵏ, αᵏ)
α^{k+1} = αᵏ + μ(Ls^{k+1} − uᵏ)

Linear inverse problem: s^{k+1} = (HᵀH + μLᵀL)⁻¹(Hᵀy + z^{k+1}) with z^{k+1} = Lᵀ(μuᵏ − αᵏ)

Nonlinear denoising: u^{k+1} = prox_{Φ_U}(Ls^{k+1} + (1/μ)α^{k+1}; σ²/μ)

Proximal operator tailored to the stochastic model:
prox_Φ(y; λ) = arg min_u ½|y − u|² + λΦ(u)
46
Deconvolution of fluorescence micrographs

Physical model of a diffraction-limited microscope
g(x, y, z) = (h₃D ∗ s)(x, y, z)

[PSF panels: widefield vs. confocal]

3-D point spread function (PSF)
h₃D(x, y, z) = I₀ |p_λ(x/M, y/M, z/M²)|²

p_λ(x, y, z) = ∫_{ℝ²} P(ω₁, ω₂) exp(j2πz (ω₁² + ω₂²)/(2λf₀²)) exp(j2π (xω₁ + yω₂)/(λf₀)) dω₁ dω₂

Optical parameters
λ: wavelength (emission)
M: magnification factor
f₀: focal length
P(ω₁, ω₂) = 𝟙_{‖ω‖<R₀}: pupil function
NA = n sin θ = R₀/f₀: numerical aperture
47

2-D convolution model

Thin specimen: s(x, y) → g(x, y) = (h₂D ∗ s)(x, y)

Airy disk (radial profile): h₂D(x, y) = I₀ (2J₁(r/r₀)/(r/r₀))²
with r = √(x² + y²), r₀ = λf₀/(2πR₀), J₁(r): first-order Bessel function.

Modulation transfer function
ĥ₂D(ω) = (2/π)[arccos(‖ω‖/ω₀) − (‖ω‖/ω₀)√(1 − (‖ω‖/ω₀)²)] for 0 ≤ ‖ω‖ < ω₀,
ĥ₂D(ω) = 0 otherwise

Cut-off frequency (Rayleigh): ω₀ = 2R₀/(λf₀) ≈ 2NA/λ
48
2-D deconvolution: numerical set-up

Discretization
ω₀ ≤ π and representation in the (separable) sinc basis {sinc(x − k)}_{k∈ℤ²}

Analysis functions: ηₘ(x, y) = h₂D(x − m₁, y − m₂)

[H]_{m,k} = ⟨ηₘ, sinc(· − k)⟩
          = ⟨h₂D(· − m), sinc(· − k)⟩
          = (sinc ∗ h₂D)(m − k) = h₂D(m − k).

H and L: convolution matrices diagonalized by the discrete Fourier transform

Linear step of the ADMM algorithm implemented using the FFT:
s^{k+1} = (HᵀH + μLᵀL)⁻¹(Hᵀy + z^{k+1}) with z^{k+1} = Lᵀ(μuᵏ − αᵏ)
49

Deconvolution experiments

[Figure 10.3: Images used in deconvolution experiments. (a) Stem cells surrounded by goblet cells. (b) Nerve cells growing around fibers. (c) Artery cells.]

where sinc(x) = sin(πx)/(πx). The entries of the system matrix in (10.9) are then obtained as
[H]_{m,k} = ⟨ηₘ, sinc(· − k)⟩ = ⟨h₂D(· − m), sinc(· − k)⟩ = (sinc ∗ h₂D)(m − k) = h₂D(m − k).
In effect, this is equivalent to constructing the system matrix from the samples of the PSF, since h₂D is already band-limited as a result of the imaging physics (diffraction-limited microscope).

An important aspect for the implementation of the signal-recovery algorithm is that H is a discrete convolution matrix which is diagonalized by the discrete Fourier transform. The same is true for the regularization operator L, as well as for any linear combination, product, or inverse of such convolution matrices. This allows us to convert (10.23) to a simple Fourier-domain multiplication, which yields a fast direct implementation of the linear step of the algorithm.

Table 10.2: Deconvolution performance of MAP estimators based on different prior distributions. Estimation performance (SNR in dB):

              BSNR (dB)  Gaussian  Laplace  Student's
Stem cells       20        14.43    13.76    11.86
                 30        15.92    15.77    13.15
                 40        18.11    18.11    13.83
Nerve cells      20        13.86    15.31    14.01
                 30        15.89    18.18    15.81
                 40        18.58    20.57    16.92
Artery cells     20        14.86    15.23    13.48
                 30        16.59    17.21    14.92
                 40        18.68    19.61    15.94

The Laplace prior, on the other hand, yields the best performance for images having sharp edges with a moderate amount of texture, such as those in Figures 10.3(b)-(c).

50
2D deconvolution experiment

Astrocyte cells, bovine pulmonary artery cells, human embryonic stem cells
Disk-shaped PSF (7×7); L: gradient; optimized parameters

Deconvolution results (in dB):

                  Gaussian estimator  Laplace estimator  Student's estimator
Astrocyte cells        12.18               10.48               10.52
Pulmonary cells        16.90               19.04               18.34
Stem cells             15.81               20.19               20.50
51

3D deconvolution of a widefield stack

Maximum-intensity projections of 384×448×260 image stacks;
Leica DM 5500 widefield epifluorescence microscope with a 63× oil-immersion objective;
C. elegans embryo labeled with Hoechst, Alexa488, Alexa568;
wavelet regularization (Haar), 3 decomposition levels for X-Y, 2 decomposition levels for Z.

(Vonesch-U., IEEE TIP 2009)

52
1- 52
Magnetic resonance imaging (MRI)

Physical image-formation model (noise-free)
ŝ(ωₘ) = ∫_{ℝ²} s(r) e^{−j⟨ωₘ,r⟩} dr (sampling of the Fourier transform)

Equivalent analysis function: ηₘ(r) = e^{−j⟨ωₘ,r⟩}

Discretization in the separable sinc basis
[H]_{m,n} = ⟨ηₘ, sinc(· − n)⟩ = ⟨e^{−j⟨ωₘ,·⟩}, sinc(· − n)⟩ = e^{−j⟨ωₘ,n⟩}

Property: HᵀH is circulant (FFT-based implementation)
53

MRI: Shepp-Logan phantom

Original SL phantom; Fourier sampling pattern: 12 angles
L: gradient; optimized parameters
[Reconstructions: Laplace prior (TV), Student prior (log)]

54

MRI phantom: spiral sampling in k-space

L: gradient; optimized parameters
[Reconstructions: original phantom; Gaussian prior (Tikhonov), SER = 17.69 dB; Laplace prior (TV), SER = 21.37 dB; Student prior, SER = 27.22 dB]
(Guerquin-Kern, TMI 2012)

55

MRI reconstruction experiments

[Figure 10.4: Data used in MR reconstruction experiments. (a) Cross section of a wrist. (b) Angiography image. (c) k-space sampling pattern along 40 radial lines.]

Table 10.3: MR reconstruction performance of MAP estimators based on different prior distributions. Estimation performance (SNR in dB):

            Radial lines  Gaussian  Laplace  Student's
Wrist            20          8.82    11.8      5.97
                 40         11.30    14.69    13.81
Angiogram        20          4.30     9.01     9.40
                 40          6.31    14.48    14.97

The basic problem in MRI is then to reconstruct s(r) based on the partial knowledge of its Fourier coefficients, which are also corrupted by noise.

56
ISMRM reconstruction challenge
L2 regularization (Laplacian) vs. ℓ1 wavelet regularization

(Guerquin-Kern, IEEE TMI 2011) 57

Differential phase-contrast tomography

[Setup at the Paul Scherrer Institute (PSI), Villigen: X-ray source, phase grating, absorption grating, CCD; the intensity of the interference pattern encodes the derivative of the projections, Δx_g(y, θ)]
(Pfeiffer, Nature 2006)

Mathematical model: y = Hs
y(t, θ) = (∂/∂t) R_θ{s}(t)
[H]_{(i,j),k} = (∂/∂t) R_{θᵢ}{φₖ}(tⱼ)
58
Properties of the Radon transform

Projected translation invariance
R_θ{φ(· − x₀)}(t) = R_θ{φ}(t − ⟨x₀, θ⟩)

Pseudo-distributivity with respect to convolution
R_θ{φ₁ ∗ φ₂}(t) = (R_θ{φ₁} ∗ R_θ{φ₂})(t)

Fourier central-slice theorem
p̂_θ(ω) = ∫_ℝ R_θ{φ}(t) e^{−jωt} dt = φ̂(ω)|_{ω=ωθ} = φ̂(ω cos θ, ω sin θ)

Proposition: Consider the separable function φ(x) = φ₁(x)φ₂(y). Then,
R_θ{φ(· − x₀)}(t) = φ_θ(t − t₀),
where t₀ = ⟨x₀, θ⟩ and
φ_θ(t) = ( (1/|cos θ|) φ₁(·/cos θ) ∗ (1/|sin θ|) φ₂(·/sin θ) )(t).
59

Reducing the number of views

Rat brain reconstruction with 181 projections

[Image panels, ADMM-PCG vs. g-FBP: SSIM = .96 vs .51; .95 vs .60; .49 vs .15; .89 vs .43]

Collaboration: Prof. Marco Stampanoni, TOMCAT PSI / ETHZ

(Nilchian et al., Optics Express 2013)
60
Performance evaluation

Gold standard: high-quality iterative reconstruction with 721 views

[Plots: (a) SNR (dB) and (b) SSIM as a function of the number of directions (361, 181, 91, 46, 23) for ADMM, PCG, and FBP]

Reduction of the acquisition time by a factor of 10 (or more)?
61

Physical model + statistical model of the signal

J(x, u) = ½‖y − Hx‖₂² + λR(u) + μ‖Lx − u‖₂²
          (consistency + prior constraints + algorithmic coupling)

Schematic structure of the reconstruction algorithm:

Repeat (N_iter times, or until a stop criterion is met)
  x^{(n)} = arg min_x J(x, u^{(n−1)}): linear step (problem specific)
  u^{(n)} = arg min_u J(x^{(n)}, u): statistical or "denoising" step


Inverse problems in imaging: Current status

Higher reconstruction quality: sparsity-promoting schemes almost systematically outperform the classical linear reconstruction methods in MRI, x-ray tomography, deconvolution microscopy, etc. (Lustig et al., 2007)

Faster imaging, reduced radiation exposure: reconstruction from a smaller number of measurements, supported by compressed sensing. (Candès-Romberg-Tao; Donoho, 2006)

Increased complexity: the resolution of linear inverse problems with ℓ₁ regularization requires more sophisticated algorithms (iterative and nonlinear); efficient solutions (FISTA, ADMM) have emerged during the past decade. (Chambolle 2004; Figueiredo 2004; Beck-Teboulle 2009; Boyd 2011)

Outstanding research issues
Beyond ℓ₁ and TV: connection with statistical modeling & learning
Beyond matrix algebra: continuous-domain formulation
63

Part 4:

Short guess about the future:


The (deep) learning revolution (??)

64
Learning within the current paradigm

Data-driven tuning of parameters: λ, calibration of the forward model
  Semi-blind methods, sequential optimization

Improved decoupling/representation of the signal
  Data-driven dictionary learning (based on sparsity or statistics/ICA) ⇒ "optimal" L
  (Elad 2006, Ravishankar 2011, Mairal 2012)

Learning of non-linearities / proximal operators
  CNN-type parametrization, backpropagation ⇒ "optimal" potential
  (Chen-Pock 2015-2016, Kamilov 2016)
65
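The "learned shrinkage" idea — replacing a fixed soft-threshold by a trainable pointwise nonlinearity — can be illustrated with a least-squares fit of a piecewise-linear (spline-like) parametrization. This is a toy sketch with made-up function names, not the backpropagation-trained schemes of the cited works:

```python
import numpy as np

def relu_basis(t, knots):
    # Design matrix: [1, t, relu(t - tau_1), ..., relu(t - tau_K)]
    cols = [np.ones_like(t), t] + [np.maximum(t - tau, 0.0) for tau in knots]
    return np.stack(cols, axis=1)

def fit_pointwise_nonlinearity(t_train, phi_target, knots):
    """Least-squares fit of a piecewise-linear pointwise nonlinearity,
    i.e., a trainable stand-in for a fixed shrinkage function.
    (Hypothetical helper, for intuition only.)"""
    B = relu_basis(t_train, knots)
    c, *_ = np.linalg.lstsq(B, phi_target, rcond=None)
    return c

def apply_nonlinearity(c, t, knots):
    # Evaluate the fitted nonlinearity at new points t
    return relu_basis(t, knots) @ c
```

In the cited works the coefficients of such nonlinearities are trained end-to-end by backpropagation through unrolled iterations; here they are fitted directly to a target shrinkage for illustration.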

Recent appearance of Deep Conv Neural Nets


(Jin et al. 2016; Chen et al. 2017; ... )

CT reconstruction based on Deep ConvNets


Input: Sparse view FBP reconstruction

Training: Set of 500 high-quality full-view CT reconstructions

Architecture: U-Net with skip connection (Jin et al., arXiv:1611.03679)

66
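The training objective behind this residual (skip-connection) architecture can be caricatured as follows. As a deliberately simplified stand-in for the U-Net, a *linear* operator is trained to predict the artifact that the skip connection adds back; everything here (the linear model, the function names) is an illustrative assumption, whereas the actual method uses a deep CNN trained by backpropagation:

```python
import numpy as np

def fit_linear_residual_corrector(X_sparse, X_full):
    """Least-squares fit of a linear corrector W such that
    x_full ~ x_sparse + W @ x_sparse for each training column.
    (Toy linear stand-in for the residual CNN; for intuition only.)"""
    R = X_full - X_sparse                       # artifact/residual to be predicted
    # Solve X_sparse.T @ W.T ~ R.T in the least-squares sense
    Wt, *_ = np.linalg.lstsq(X_sparse.T, R.T, rcond=None)
    return Wt.T

def correct(W, x_sparse):
    """Apply the correction with a skip connection: output = input + W @ input."""
    return x_sparse + W @ x_sparse
```

Columns of X_sparse are sparse-view FBP reconstructions and columns of X_full the corresponding full-view gold standards, mirroring the training setup described above.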
CT data — dose reduction by 7: reconstructed from 143 views instead of 1000 (Jin et al., arXiv:1611.03679)

CT data — dose reduction by 20: reconstructed from 50 views instead of 1000 (Jin et al., arXiv:1611.03679)

µCT data — dose reduction by 14: reconstructed from 51 views instead of 721

Challenges for deep learning methods

Fundamental change of paradigm
  Requires the availability of extensive sets of representative training data,
  together with gold standards (= the desired high-quality reconstructions)

Research challenges/opportunities
  How does one assess reconstruction quality? Can we trust the results?
  Assessment should be “task-oriented”!

  Use of CNNs to correct artifacts of current methods
    Reconstruction from fewer measurements
    (trained on high-quality full-view data sets)

  Use of CNNs to emulate/speed up well-performing but “slow”
  reference reconstruction methods

  Development of more realistic simulators:
  both “ground-truth” images and the physical forward model

  True 3D CNN toolbox (still missing)

70


References

Theoretical foundations

M. Unser and P. Tafti, An Introduction to Sparse Stochastic Processes, Cambridge University Press, 2014; preprint available at http://www.sparseprocesses.org.

M. Unser, J. Fageot, H. Gupta, “Representer Theorems for Sparsity-Promoting ℓ1 Regularization,” IEEE Trans. Information Theory, vol. 62, no. 9, pp. 5167-5180, September 2016.

M. Unser, J. Fageot, J.P. Ward, “Splines Are Universal Solutions of Linear Inverse Problems with Generalized-TV Regularization,” SIAM Review (in press), arXiv:1603.01427 [math.FA].

Algorithms and imaging applications

E. Bostan, U.S. Kamilov, M. Nilchian, M. Unser, “Sparse Stochastic Processes and Discretization of Linear Inverse Problems,” IEEE Trans. Image Processing, vol. 22, no. 7, pp. 2699-2710, 2013.

C. Vonesch, M. Unser, “A Fast Multilevel Algorithm for Wavelet-Regularized Image Restoration,” IEEE Trans. Image Processing, vol. 18, no. 3, pp. 509-523, March 2009.

M. Guerquin-Kern, M. Häberlin, K.P. Pruessmann, M. Unser, “A Fast Wavelet-Based Reconstruction Method for Magnetic Resonance Imaging,” IEEE Trans. Medical Imaging, vol. 30, no. 9, pp. 1649-1660, September 2011.

M. Nilchian, C. Vonesch, S. Lefkimmiatis, P. Modregger, M. Stampanoni, M. Unser, “Constrained Regularized Reconstruction of X-Ray-DPCI Tomograms with Weighted-Norm,” Optics Express, vol. 21, no. 26, pp. 32340-32348, 2013.

M.T. McCann, M. Nilchian, M. Stampanoni, M. Unser, “Fast 3D Reconstruction Method for Differential Phase Contrast X-Ray CT,” Optics Express, vol. 24, no. 13, pp. 14564-14581, 2016.

K.H. Jin, M.T. McCann, E. Froustey, M. Unser, “Deep Convolutional Neural Network for Inverse Problems in Imaging,” arXiv:1611.03679 [cs.CV].
71

Acknowledgments
Many thanks to (former) members of
EPFL’s Biomedical Imaging Group
■ Dr. Pouya Tafti
■ Prof. Arash Amini
■ Dr. John-Paul Ward
■ Julien Fageot
■ Dr. Emrah Bostan
■ Dr. Masih Nilchian
■ Dr. Ulugbek Kamilov
■ Dr. Cédric Vonesch
■ ...
and collaborators ...
■ Prof. Demetri Psaltis
■ Prof. Marco Stampanoni
■ Prof. Carlos-Oscar Sorzano
■ Dr. Arne Seitz
■ ...
■ Preprints and demos: http://bigwww.epfl.ch/

72
General convex problems with gTV regularization

$$\mathcal{M}_L(\mathbb{R}^d) = \Big\{ s : \mathrm{gTV}(s) = \|L\{s\}\|_{\mathcal{M}} = \sup_{\|\varphi\|_\infty \le 1} \langle L\{s\}, \varphi \rangle < \infty \Big\}$$

Linear measurement operator $H : \mathcal{M}_L(\mathbb{R}^d) \to \mathbb{R}^M : f \mapsto z = H\{f\}$

$\mathcal{C}$: convex compact subset of $\mathbb{R}^M$

Finite-dimensional null space $\mathcal{N}_L = \{q \in \mathcal{M}_L(\mathbb{R}^d) : L\{q\} = 0\}$ with basis $\{p_n\}_{n=1}^{N_0}$

Admissibility of the regularization: $H\{q_1\} = H\{q_2\} \Leftrightarrow q_1 = q_2$ for all $q_1, q_2 \in \mathcal{N}_L$

Representer theorem for gTV regularization

The extremal points of the constrained minimization problem
$$\mathcal{V} = \arg\min_{f \in \mathcal{M}_L(\mathbb{R}^d)} \|L\{f\}\|_{\mathcal{M}} \quad \text{s.t.} \quad H\{f\} \in \mathcal{C}$$
are necessarily of the form
$$f(x) = \sum_{k=1}^{K} a_k\, \rho_L(x - x_k) + \sum_{n=1}^{N_0} b_n\, p_n(x), \qquad K \le M - N_0;$$
that is, non-uniform L-splines with knots at the $x_k$ and $\|L\{f\}\|_{\mathcal{M}} = \sum_{k=1}^{K} |a_k|$. The full solution set is the convex hull of those extremal points.

(Unser-Fageot-Ward, SIAM Review, in press) 73
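A concrete one-dimensional instance (not on the original slide, but a standard consequence of the theorem): take $\mathrm{L} = \mathrm{D}$, the first derivative, so that gTV reduces to the classical total variation.

```latex
% Example: d = 1, L = D (first derivative), so gTV = TV.
% Null space: N_D = span{1} (constants), hence N_0 = 1.
% Green's function of D: rho_D(x) = 1_{+}(x), the unit step.
% The theorem then states that the extreme points of
%   arg min_f TV(f)  s.t.  H{f} in C
% are piecewise-constant functions (splines of degree 0):
\[
  f(x) = b_1 + \sum_{k=1}^{K} a_k\, \mathbb{1}_{+}(x - x_k),
  \qquad K \le M - 1,
  \qquad \mathrm{TV}(f) = \sum_{k=1}^{K} |a_k|,
\]
% i.e., TV regularization favors piecewise-constant reconstructions
% with at most (M - 1) jumps, consistent with its well-known
% staircasing behavior.
```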
