2009 09 15 Chung Presentation

The document discusses numerical methods for solving large-scale ill-posed inverse problems. It covers regularization techniques for linear least squares problems, including Tikhonov regularization. Iterative methods like conjugate gradients can exhibit semi-convergence behavior for ill-posed problems where the solution initially improves but is then corrupted by noise. The document also outlines applications to problems in image processing and tomography.

Regularization for Least Squares Systems
High Performance Implementation
Polyenergetic Tomosynthesis
Concluding Remarks
Numerical Methods for Large-Scale Ill-Posed Inverse Problems
Julianne Chung
University of Maryland
Collaborators: James G. Nagy (Emory)
Eldad Haber (Emory)
Dianne O'Leary (University of Maryland)
Ioannis Sechopoulos (Emory)
Chao Yang (Lawrence Berkeley National Laboratory)
Julianne Chung Numerical Methods for Large-Scale Ill-Posed Inverse Problems
What is an inverse problem?
[Diagram: a physical system maps an input signal to an output signal via the forward model; the inverse problem recovers the input signal from the output signal.]
Application: Image Deblurring
Given: Blurred image and some information about the blurring
Goal: Compute approximation of true image
Application: Super-Resolution Imaging
Given: LR images and some information about the motion parameters
[Figures: the 1st, 8th, 15th, 22nd, and 29th low-resolution frames]
Goal: Improve parameters and approximate HR image
Application: Tomographic Imaging
Given: 2D projection images
Goal: Approximate 3D volume
What is an Ill-Posed Inverse Problem?
Hadamard (1923): A problem is ill-posed if the solution
does not exist,
is not unique, or
does not depend continuously on the data.
[Figure: forward problem vs. inverse problem. The naive inverse solution is corrupted with noise! True image, blurred & noisy image, and inverse solution shown side by side.]
Outline
1 Regularization for Least Squares Systems
2 High Performance Implementation
3 Polyenergetic Tomosynthesis
4 Concluding Remarks
The Linear Problem: b = Ax + ε
The Nonlinear Problem: b = A(y)x + ε
The Linear Problem
b = Ax + ε
where
x ∈ R^n - true data
A ∈ R^(m×n) - large, ill-conditioned matrix
ε ∈ R^m - noise, statistical properties may be known
b ∈ R^m - known, observed data
Goal: Given b and A, compute approximation of x
Regularization
Tikhonov Regularization:
min_x { ||b - Ax||_2^2 + λ^2 ||Lx||_2^2 }  ⟺  min_x || [A; λL] x - [b; 0] ||_2^2
Selecting a good regularization parameter, λ, is difficult:
Discrepancy Principle
Generalized Cross-Validation
L-curve
Difficult for large-scale problems
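The stacked form of the Tikhonov problem maps directly onto an ordinary least-squares solver. A minimal NumPy sketch (the Vandermonde test problem and the value of λ are illustrative, not from the talk):

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    # Solve min_x ||b - A x||^2 + lam^2 ||L x||^2 via the equivalent
    # stacked least-squares problem min_x || [A; lam*L] x - [b; 0] ||.
    n = A.shape[1]
    if L is None:
        L = np.eye(n)                      # standard form: L = I
    K = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    return np.linalg.lstsq(K, rhs, rcond=None)[0]

# Illustrative ill-conditioned test problem (not from the talk).
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 8)    # ill-conditioned Vandermonde
x_true = np.ones(8)
b = A @ x_true + 1e-3 * rng.standard_normal(20)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
x_reg = tikhonov(A, b, lam=1e-2)           # lam damps the noisy components
```

Increasing λ shrinks the solution norm; with L = I the regularized solution never has larger norm than the unregularized least-squares solution.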
Illustration of Semi-convergence Behavior
min_x ||b - Ax||_2
[Plot: relative error vs. CG iteration (0 to 150). Typical behavior for ill-posed problems: the solution gets better at first (iterations 0, 10, 28), then noise corrupts the iterates (iterations 85, 150).]
Either find a good stopping criterion or ...
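The semi-convergence effect can be reproduced numerically with a spectral-filtering stand-in for the CG iteration: each "iteration" admits one more singular component, and small-singular-value components are noise-dominated. A sketch on an illustrative Gaussian-blur model problem (not the talk's test case):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
t = np.linspace(0, 1, n)
# Smooth Gaussian blur kernel: a classic, severely ill-conditioned model.
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * 0.1 ** 2)) / n
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
coeffs = (U.T @ b) / s                # naive-inverse expansion coefficients
errors = []
x = np.zeros(n)
for k in range(n):                    # admit one more component per "iteration"
    x = x + coeffs[k] * Vt[k]
    errors.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
# errors first decreases (signal recovered), then blows up (noise / sigma_k).
```

The error curve has the same shape as the slide's plot: an initial decrease followed by explosive growth once noise-dominated components enter.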
Motivation to use a Hybrid Method
... avoid semi-convergence behavior altogether!
[Plot: relative error vs. iteration (0 to 150) for CGLS and HyBR; the hybrid method stabilizes the error.]
Previous Work on Hybrid Methods
Regularization embedded in iterative method:
O'Leary and Simmons, SISSC, 1981.
Björck, BIT, 1988.
Björck, Grimme, and Van Dooren, BIT, 1994.
Larsen, PhD Thesis, 1998.
Hanke, BIT, 2001.
Kilmer and O'Leary, SIMAX, 2001.
Kilmer, Hansen, Español, 2006.
Use iterative method to solve regularized problem:
Golub, Von Matt, Numer. Math., 1991.
Calvetti, Golub, Reichel, BIT, 1999.
Frommer, Maass, SISC, 1999.
Lanczos Bidiagonalization (LBD)
Given A and b, for k = 1, 2, ..., compute
W = [w_1 w_2 ... w_k w_{k+1}],  w_1 = b/||b||
Y = [y_1 y_2 ... y_k]
B = [ α_1
      β_2  α_2
           ...
           β_k  α_k
                β_{k+1} ]    (a (k+1) × k lower bidiagonal matrix)
where W and Y have orthonormal columns, and
AY = WB
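The recurrence behind AY = WB can be sketched in a few lines of NumPy (standard Golub-Kahan bidiagonalization; the reorthogonalization that practical codes add is omitted for brevity):

```python
import numpy as np

def lanczos_bidiag(A, b, k):
    # Golub-Kahan (Lanczos) bidiagonalization: after k steps,
    # A @ Y = W @ B, with orthonormal W (m x (k+1)) and Y (n x k),
    # and B ((k+1) x k) lower bidiagonal.
    m, n = A.shape
    W = np.zeros((m, k + 1))
    Y = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    W[:, 0] = b / np.linalg.norm(b)
    beta, y_prev = 0.0, np.zeros(n)
    for j in range(k):
        y = A.T @ W[:, j] - beta * y_prev
        alpha = np.linalg.norm(y)
        Y[:, j] = y / alpha
        w = A @ Y[:, j] - alpha * W[:, j]
        beta = np.linalg.norm(w)
        W[:, j + 1] = w / beta
        B[j, j], B[j + 1, j] = alpha, beta
        y_prev = Y[:, j]
    return W, Y, B

# Small random check that the relation A Y = W B holds.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
W, Y, B = lanczos_bidiag(A, b, 5)
```

For a few steps on a dense random matrix the computed columns stay numerically orthonormal even without reorthogonalization.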
The Projected Problem
After k steps of LBD, we solve the projected LS problem:
min_{x ∈ R(Y)} ||b - Ax||_2 = min_f ||W^T b - B f||_2
where x = Y f.
Remarks:
Ill-posed problem ⟹ B may be very ill-conditioned.
B is much smaller than A.
Standard techniques (e.g. GCV) can be used to find λ and a stopping point.
Lanczos Hybrid Method in Action: Satellite
[Plot: relative error vs. iteration (up to 300) for the Satellite test problem, comparing LSQR (no regularization), Tikhonov with optimal λ_k, and Tikhonov with GCV-chosen λ_k.]
A Novel Approach: Weighted GCV
min_f ||W^T b - B f||_2
GCV tends to oversmooth, so use the weighted GCV function with ω < 1:
G(λ, ω) = n ||(I - B B_λ†) W^T b||^2 / [trace(I - ω B B_λ†)]^2
New adaptive approach to select ω
MATLAB implementation:
>> x = HyBR(A, b);
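Because B is only (k+1) × k, the weighted GCV function can be evaluated directly. A sketch assuming the form above, with the Tikhonov-regularized pseudoinverse B_λ† formed explicitly (ω = 1 recovers standard GCV; the test matrix is illustrative):

```python
import numpy as np

def wgcv(B, c, lam, omega):
    # Weighted GCV value for min_f ||c - B f||^2 + lam^2 ||f||^2;
    # omega = 1 gives the standard GCV function.
    k1, k = B.shape
    B_dag = np.linalg.solve(B.T @ B + lam**2 * np.eye(k), B.T)  # B_lambda^dagger
    P = B @ B_dag                            # influence matrix B B_lambda^dagger
    resid = (np.eye(k1) - P) @ c
    return k1 * (resid @ resid) / np.trace(np.eye(k1) - omega * P) ** 2

# Illustrative projected problem: pick lam minimizing the WGCV curve.
rng = np.random.default_rng(2)
B = np.triu(rng.standard_normal((7, 6)), -1)   # small (k+1) x k test matrix
c = rng.standard_normal(7)
lams = np.logspace(-4, 1, 40)
best = min(lams, key=lambda lam: wgcv(B, c, lam, omega=0.8))
```

In a hybrid method this evaluation is repeated every iteration on the growing bidiagonal B, which is cheap compared with operations on A.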
Results for Satellite
[Plot: relative error vs. iteration (up to 300) for the Satellite problem, comparing LSQR (no regularization), Tikhonov with optimal λ_k, Tikhonov with GCV λ_k, and WGCV with adaptive ω.]
The Nonlinear Problem
b = A(y)x + ε
where
x - true data
A(y) - large, ill-conditioned matrix defined by parameters y (registration, blur, etc.)
ε - additive noise
b - known, observed data
Goal: Approximate x and improve parameters y
Mathematical Representation
We want to find x and y so that
b = A(y)x + ε
With Tikhonov regularization, solve
min_{x,y} || [A(y); λI] x - [b; 0] ||_2^2
Some Considerations:
Problem is linear in x, nonlinear in y.
y ∈ R^p, x ∈ R^n, with p ≪ n.
Separable Nonlinear Least Squares
Variable Projection Method:
Implicitly eliminate linear term.
Optimize over nonlinear term.
Some general references:
Golub and Pereyra, SINUM, 1973 (also IP, 2003)
Kaufman, BIT, 1975
Osborne, SINUM, 1975 (also ETNA, 2007)
Ruhe and Wedin, SIREV, 1980
Variable Projection Method
Instead of optimizing over both x and y:
min_{x,y} φ(x, y) = min_{x,y} || [A(y); λI] x - [b; 0] ||_2^2
Minimize the reduced cost functional:
min_y ψ(y),  where  ψ(y) = φ(x(y), y)
and x(y) is the solution of
min_x φ(x, y) = min_x || [A(y); λI] x - [b; 0] ||_2^2
Gauss-Newton Algorithm
choose initial y_0
for k = 0, 1, 2, ...
    x_k = arg min_x || [A(y_k); λ_k I] x - [b; 0] ||_2
    r_k = b - A(y_k) x_k
    d_k = arg min_d || J_ψ d - r_k ||_2
    y_{k+1} = y_k + d_k
end
Gauss-Newton Algorithm with HyBR
choose initial y_0
for k = 0, 1, 2, ...
    x_k = HyBR(A(y_k), b)    (replaces the Tikhonov subproblem x_k = arg min_x || [A(y_k); λ_k I] x - [b; 0] ||_2)
    r_k = b - A(y_k) x_k
    d_k = arg min_d || J_ψ d - r_k ||_2
    y_{k+1} = y_k + d_k
end
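The outer loop can be sketched end to end on a toy separable model (an exponential-plus-constant fit, not the talk's imaging problem): the linear part x is eliminated by an inner least-squares solve, and the scalar nonlinear parameter y is updated with a damped finite-difference Gauss-Newton step standing in for the Jacobian J_ψ:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 40)

def A_of_y(y):
    # Columns of the toy separable model: A(y) = [exp(-y t), 1]
    return np.column_stack([np.exp(-y * t), np.ones_like(t)])

def x_of_y(y, b):
    # Inner solve: x(y) = arg min_x ||A(y) x - b||  (variable projection)
    return np.linalg.lstsq(A_of_y(y), b, rcond=None)[0]

def resid(y, b):
    return b - A_of_y(y) @ x_of_y(y, b)   # reduced residual r(y)

y_true, x_true = 2.0, np.array([1.5, 0.5])
b = A_of_y(y_true) @ x_true + 1e-3 * rng.standard_normal(len(t))

y = 1.0                                    # initial guess y_0
for k in range(50):
    r = resid(y, b)
    h = 1e-6
    J = (resid(y + h, b) - r) / h          # finite-difference dr/dy
    y = y - 0.5 * (J @ r) / (J @ J)        # damped scalar Gauss-Newton step
```

Each outer step costs one inner linear solve (here a dense lstsq; in the talk's setting, HyBR plays that role for the large ill-posed subproblem).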
Numerical Results: Super-resolution
[Figures: given low-resolution frames (left); reconstructed image (right).]
Gauss-Newton Iterations:
k | Error of y_k | Relative error of x_k
0 | 0.5810 | 0.2519
1 | 0.3887 | 0.2063
2 | 0.2495 | 0.1765
3 | 0.1546 | 0.1476
4 | 0.1077 | 0.1254
5 | 0.0862 | 0.1139
6 | 0.0763 | 0.1102
7 | 0.0706 | 0.1077
8 | 0.0667 | 0.1067
Mathematical Problem
2D Data Distribution
Mathematical Model
min_x (1/2) ||Ax - b||^2
where
A = [A_1; ...; A_m],  b = [b_1; ...; b_m]
Some Applications:
Super-resolution
Tomography - Cryo-Electron Microscopy
An Application: Cryo-EM
min_x φ(x) = (1/2) Σ_{i=1}^m ||A_i x - b_i||^2
where
x ∈ R^{n^3} represents the 3-D electron density map
b_i ∈ R^{n^2} (i = 1, 2, ..., m) represents 2-D projection images
A_i = A(y_i) ∈ R^{n^2 × n^3} represents projection
y_i - translation parameters and Euler angles
Parallelization using 1D data distribution
[Diagram: EM 2D images → distribution of images and angles → independent partial reconstructions → combination (all-reduce) → back-projected volume.]
New Parallelization using 2D data distribution
Distribute images along rows.
Distribute volume along columns.
[Diagram: a 2 × 3 processor grid p_{1,1}, ..., p_{2,3}; images b_1, ..., b_6 are distributed across row groups (g_r) and volume blocks x^(1), x^(2), x^(3) across column groups (g_c).]
Forward and Back Projection on 2D Topology
A_i = [A_i^(1)  A_i^(2)  ...  A_i^(n_c)],
x = [x^(1); x^(2); ...; x^(n_c)],
x̄ = [x̄^(1); x̄^(2); ...; x̄^(n_c)]
Forward projection: A_i x = Σ_{j=1}^{n_c} A_i^(j) x^(j)    (All-reduce along rows)
Back projection: x̄^(j) = Σ_{i=1}^{m} (A_i^(j))^T r^(i)    (All-reduce along columns)
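The two block sums can be checked against the assembled global operator. A sketch in which the MPI all-reduces are simulated by plain sums over block lists (block sizes are illustrative):

```python
import numpy as np

# Each processor p_{i,j} owns block A_i^(j) and volume piece x^(j).
# Forward projection sums partial products along a processor row;
# back projection sums partial results along a processor column.
rng = np.random.default_rng(0)
m, n_c = 4, 3                      # image blocks (rows) x volume blocks (cols)
blocks = [[rng.standard_normal((10, 5)) for j in range(n_c)] for i in range(m)]
x_parts = [rng.standard_normal(5) for j in range(n_c)]

# Forward: A_i x = sum_j A_i^(j) x^(j)   ("all-reduce" along row i)
proj = [sum(blocks[i][j] @ x_parts[j] for j in range(n_c)) for i in range(m)]

# Back: xbar^(j) = sum_i (A_i^(j))^T r^(i)   ("all-reduce" along column j)
r = [rng.standard_normal(10) for i in range(m)]
xbar = [sum(blocks[i][j].T @ r[i] for i in range(m)) for j in range(n_c)]

# Assemble the global operator for a consistency check.
A_full = np.block(blocks)          # (m*10) x (n_c*5)
x_full = np.concatenate(x_parts)
```

In the actual MPI code these sums would be MPI_Allreduce calls over the row and column communicators, so no processor ever holds all of A.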
New MPI Parallel Performance
Good for very large problems
Adenovirus Data Set: 500 × 500 pixels, 959 (60) images
n_r | n_c | Wall clock (seconds) | Speedup
137 |  7  | 9635 | 1
959 |  2  | 4841 | 2
959 |  4  | 2406 | 4
959 |  8  | 1335 | 7.2
959 | 16  |  609 | 15.8
SPARX software package
Motivation
Mathematical Problem
Reconstruction Algorithms
Digital Tomosynthesis
[Images: X-ray mammography, digital tomosynthesis, and computed tomography.]
An Inverse Problem
Given: 2D projection images
Goal: Reconstruct a 3D volume
True Images
Simulated Problem
Original object: 300 × 300 × 200 voxels (7.5 × 7.5 × 5 cm)
21 projection images: 200 × 300 pixels (10 × 15 cm), -30° to 30°, every 3°
Reconstruction: 150 × 150 × 50 voxels (7.5 × 7.5 × 5 cm)
[Diagram: front view and side view of the geometry with the X-ray tube at 0°: X-ray tube, compression plate, compressed breast, support plate, detector, center of rotation, chest wall.]
Polyenergetic Model
Incident X-ray has a distribution of energies
43 energy levels: 5 keV - 26 keV
Consequences:
Beam hardening: low-energy photons are preferentially absorbed
Unnecessary radiation
Linear attenuation coefficient depends on energy
Monoenergetic Algorithm
Lange and Fessler's Convex MLEM Algorithm
Beam hardening artifacts
[Image: monoenergetic reconstruction]
Previous Methods
Methods for eliminating beam hardening artifacts:
Dual Energy Methods
Alvarez and Macovski (1976), Fessler et al (2002)
FBP + Segmentation
Joseph and Spital (1978)
Filter function based on density
De Man et al (2001), Elbakri and Fessler (2003)
A Polyenergetic Mathematical Representation
Energy-dependent attenuation coefficient (voxel j):
μ(e)^(j) = s(e) x^(j) + z(e)
where
x^(j) represents the unknown glandular fraction of the j-th voxel
s(e) and z(e) are known linear fit coefficients
Computing Image Projections
Ray trace:
∫_{L_i} μ(e) dl ≈ Σ_{j=1}^N μ(e)^(j) a^(ij)
Vector notation:
μ(e) = s(e) x + z(e)  ⟹  s(e) A x + z(e) A 1
Polyenergetic projection:
Σ_{e=1}^{n_e} ρ(e) exp(-[s(e) A x_true + z(e) A 1])
(here ρ(e) denotes the incident spectrum weight at energy level e)
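A small NumPy sketch of this polyenergetic forward projection: for each energy level, attenuate along rays via exp(-[s(e) A x + z(e) A 1]) and sum over the spectrum. The spectrum weights rho and the fit coefficients s, z below are made-up illustrative numbers, not the talk's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_vox, n_e = 6, 12, 5
A = rng.random((n_rays, n_vox)) * 0.1      # ray-trace lengths a^(ij)
x = rng.random(n_vox)                      # glandular fractions in [0, 1]
s = np.linspace(0.5, 0.2, n_e)             # energy-dependent fit coefficients
z = np.linspace(0.3, 0.1, n_e)
rho = np.full(n_e, 1.0 / n_e)              # flat illustrative spectrum weights

Ax = A @ x                                 # A x computed once, reused per energy
A1 = A @ np.ones(n_vox)
b = sum(rho[e] * np.exp(-(s[e] * Ax + z[e] * A1)) for e in range(n_e))
```

Note that A x and A 1 are computed once and reused across all energy levels, so the polyenergetic model costs little more per projection than the monoenergetic one.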
Statistical Model
Given x, define for pixel i the expected value:
b̄^(i) = Σ_{e=1}^{n_e} ρ(e) exp(-[s(e) A x + z(e) A 1]).
Let η^(i) be the statistical mean of the noise.
Then b̄^(i) + η^(i) ∈ R is the expected or average observation.
Observed Data: b^(i) ~ Poisson(b̄^(i) + η^(i))
Statistical Model
Likelihood Function:
p(b | x) = Π_{i=1}^M e^{-(b̄^(i) + η^(i))} (b̄^(i) + η^(i))^{b^(i)} / b^(i)!
Negative Log-Likelihood Function:
L(x) = -log p(b | x) = Σ_{i=1}^M (b̄^(i) + η^(i)) - b^(i) log(b̄^(i) + η^(i))
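The negative log-likelihood above (with the constant log(b!) term dropped, as on the slide) is a one-liner. The mean vector and noise mean below are made-up illustrative data:

```python
import numpy as np

def poisson_nll(bbar, eta, b):
    # L(x) = sum_i (bbar_i + eta_i) - b_i * log(bbar_i + eta_i)
    mu = bbar + eta                      # expected observation per pixel
    return np.sum(mu - b * np.log(mu))

# Illustrative data: bbar and eta are arbitrary positive values.
rng = np.random.default_rng(0)
bbar = rng.random(50) + 0.5
eta = 0.01
b = rng.poisson(bbar + eta).astype(float)
val = poisson_nll(bbar, eta, b)
```

As expected for a Poisson model, for fixed data b this function of the mean is minimized when the predicted mean equals the observed counts.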
Volume Reconstruction
Maximum Likelihood Estimate:
x_MLE = arg min_x { Σ_{θ=1}^{n_θ} L_θ(x) }    (summing over the projection views)
Numerical Optimization:
Gradient Descent: x_{k+1} = x_k - τ_k ∇L(x_k), where ∇L(x_k) = A^T v_k
Newton Approach: x_{k+1} = x_k - τ_k H_k^{-1} ∇L(x_k), where H_k = A^T W_k A
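The structure of both updates can be sketched on a simplified likelihood: modeling the mean linearly as mu = A x + eta (a stand-in for the polyenergetic mean above) gives ∇L = Aᵀv with v = 1 - b/mu and H = Aᵀ W A with W = diag(b/mu²), matching the forms on the slide. The test data and the damped step length are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 5)) + 0.1
x_star = rng.random(5) + 0.5
eta = 0.05
b = rng.poisson(A @ x_star + eta).astype(float)

def nll(x):
    mu = A @ x + eta
    return np.sum(mu - b * np.log(mu))

x = np.full(5, 1.0)
x0_val = nll(x)
for k in range(30):
    mu = A @ x + eta
    v = 1.0 - b / mu                         # grad L = A^T v
    grad = A.T @ v
    W = np.diag(b / mu**2)                   # H = A^T W A
    H = A.T @ W @ A
    x = x - 0.5 * np.linalg.solve(H, grad)   # damped Newton step (tau_k = 1/2)
```

A plain gradient-descent variant replaces the solve with x - tau * grad; the Newton iteration reaches a small gradient in far fewer steps, mirroring the iteration counts in the results table that follows.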
Numerical Results
Initial guess: 50% glandular tissue
Newton-CG inner stopping criteria:
Max 50 inner iterations
Residual tolerance < 0.1
Gradient Descent                 Newton Iteration
Iteration | Relative Error      Iteration | Relative Error | CG Iterations
 0 | 1.7691                      0 | 1.7691 | -
 1 | 1.0958                      1 | 1.1045 | 3
 5 | 0.8752                      2 | 0.8630 | 2
10 | 0.8320                      3 | 0.8403 | 2
25 | 0.8024                      4 | 0.7925 | 16
Compare Images
[Images with colorbars: True (0-100), Monoconvex (0.5-0.9), Gradient (0-100), Newton-CG (0-100).]
Some Considerations
Convexity:
Severe nonlinearities ⟹ cost function is not convex
Regularization:
x_MAP = arg min_x { L(x) + λ R(x) }
Need a good regularizer, R(x):
Huber penalty, Markov Random Fields, Total Variation
Need good methods for choosing λ
Concluding Remarks
Inverse problems arise in many imaging applications.
Hybrid methods:
efficient solvers for large-scale LS problems
effective linear solvers for nonlinear problems
Separable nonlinear LS models exploit high-level structure
High performance implementation allows reconstruction of large volumes with high resolution
Polyenergetic tomosynthesis:
Novel mathematical framework
Standard optimization made feasible
Better reconstructed images
References
Linear LS (HyBR):
Chung, Nagy, O'Leary. ETNA (2008)
http://www.cs.umd.edu/jmchung/Home/HyBR.html
Nonlinear LS:
Chung, Haber, Nagy. Inverse Problems (2006)
Chung, Nagy. Journal of Physics Conference Series (2008)
Chung, Nagy. SISC (Accepted 2009)
High Performance Computing:
Chung, Sternberg, Yang. Int. J. High Perf. Computing (Accepted 2009)
Project featured in DOE publication, DEIXIS 2009
Digital Tomosynthesis:
Chung, Nagy, Sechopoulos. (Submitted 2009)
Thank you!