Lecture 5: Wavefront Reconstruction and Prediction
Rufus Fraanje
[email protected]
TU Delft, Delft Center for Systems and Control
1 / 37
Exercise
Study the relation between the Strehl ratio S, the Fried parameter r0 and the
telescope diameter D.
Do this by means of a computer program that:
1. computes the Strehl ratio
       S = I(0,0) / I0(0,0)
   where I(0,0) and I0(0,0) are the intensities at the intersection of the optical
   axis with the image plane, with and without phase aberration respectively;
4. makes graphs of the Strehl ratio S versus the Fried parameter r0 for several
   choices of the telescope diameter D. How can you explain the results?
2 / 37
Matlab example
L0      = 10;                  % [m] outer scale of turbulence
D_list  = [2:2:20];            % diameter telescope
r0_list = [0.1:0.1:1];         % Fried parameter
nD      = length(D_list);
nr0     = length(r0_list);
Nav     = 500;                 % number of realizations for averaging
Strehl  = zeros(nr0, nD, Nav); % array for Strehl ratios
for iD = 1:nD,
  D = D_list(iD);              % telescope diameter
  for ir0 = 1:nr0,
    r0 = r0_list(ir0);         % Fried parameter
Matlab example
    Corr = vonkarmancorr_space(x, y, r0, L0); % compute corr. coeffs
    Corr_sqrt = chol(Corr);                   % Cholesky factoriz'n
    ...
    % Compute undistorted wave over circular telescope grid:
    Af = circshift(fft2(A), (p+1)*radius*[1 1]);
    % intensity at center:
    I0 = abs(Af((p+1)*radius+1, (p+1)*radius+1))^2;
    for iav = 1:Nav,
      % Generate a square random phase screen
      phi = Corr_sqrt' * randn((2*radius+1)^2, 1);
      ... % place phi at center of grid, i.e. fill Phi
4 / 37
Matlab example
      % Compute the wave over the circular telescope grid:
      Ap = A .* exp(sqrt(-1)*Phi);
      % perform the fourier transform of the distorted wave:
      Apf = circshift(fft2(Ap), (p+1)*radius*[1 1]);
      % intensity at center:
      Ip = abs(Apf((p+1)*radius+1-av:(p+1)*radius+1+av, ...
                   (p+1)*radius+1-av:(p+1)*radius+1+av)).^2;
      Strehl(ir0, iD, iav) = Ip / I0;
    end; % number of averages Nav
  end;   % number of Fried parameters nr0
end;     % number of telescope diameters nD
mean(Strehl, 3), % average over realizations
5 / 37
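The same averaging recipe can be sketched in Python with NumPy. Everything below is a simplifying assumption relative to the Matlab example (an uncorrelated phase screen with rms sigma instead of von Karman statistics, a square aperture, made-up grid sizes); the point is only the Monte-Carlo estimate of S = I(0,0)/I0(0,0):

```python
import numpy as np

# Monte-Carlo Strehl ratio for a square, uniformly illuminated aperture.
N, Nav = 32, 2000
sigma = 0.5                                   # [rad] rms wavefront error (assumed)
rng = np.random.default_rng(0)
A = np.ones((N, N))                           # undistorted aperture field
I0 = np.abs(np.fft.fft2(A)[0, 0])**2          # on-axis intensity, no aberration
S = np.empty(Nav)
for i in range(Nav):
    phi = sigma * rng.standard_normal((N, N)) # uncorrelated random phase screen
    Ap = A * np.exp(1j * phi)                 # distorted wave
    S[i] = np.abs(np.fft.fft2(Ap)[0, 0])**2 / I0
print(S.mean())
```

For small residual variance the sample mean approaches the extended Maréchal value exp(-sigma^2), which is one way to sanity-check the full von Karman simulation.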
Matlab example
[Figure: Strehl ratio versus Fried parameter r0 on log-log axes, one curve per
telescope diameter D = 2, 4, ..., 20 m]
6 / 37
Matlab example
[Figure: Strehl ratio versus Fried parameter r0 on log-log axes, one curve per
telescope diameter D = 0.10, 0.20, ..., 1.00 m]
7 / 37
Outline
Wavefront prediction;
Current research.
8 / 37
Preliminaries
Lemma (1)
    (d/dX) tr(A X B^T) = A^T B
Proof.
Note that
    tr(A X B^T) = Σ_i Σ_j Σ_k a_ki X_ij b_kj
hence
    (d/dX_ij) tr(A X B^T) = (d/dX_ij) Σ_k a_ki X_ij b_kj = a_i^T b_j
where A = [ a_1 · · · a_n ] and B = [ b_1 · · · b_m ].
9 / 37
Preliminaries
Lemma (2)
    (d/dX) tr(A X^T B^T) = B^T A
Lemma (3)
    (d/dX) tr(A X B X^T C) = A^T C^T X B^T + C A X B
Proof.
Use the product rule:
    d tr(A X B X^T C) = tr(A dX B X^T C) + tr(A X B dX^T C)
and apply Lemmas 1 and 2 to the two terms.
10 / 37
Preliminaries
Theorem (Weighted least squares)
Let V = V^T and W = W^T and
    J(X) = tr( V (C - A X B) W (C - A X B)^T )
then
    dJ(X)/dX = -2 A^T V (C - A X B) W B^T
which is zero when
    (A^T V A) X (B W B^T) = A^T V C W B^T
Proof.
The proof follows by straightforward use of the previous lemmas.
Exercise: verify this.
11 / 37
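The theorem's gradient can be verified numerically with central finite differences. The dimensions and weight matrices below are arbitrary test data, not anything prescribed by the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
r, n, m, s = 4, 3, 2, 5                      # arbitrary test dimensions
A = rng.standard_normal((r, n))
B = rng.standard_normal((m, s))
C = rng.standard_normal((r, s))
V = rng.standard_normal((r, r)); V = V + V.T # symmetric weights
W = rng.standard_normal((s, s)); W = W + W.T
X = rng.standard_normal((n, m))

def J(X):
    E = C - A @ X @ B
    return np.trace(V @ E @ W @ E.T)

E = C - A @ X @ B
grad = -2 * A.T @ V @ E @ W @ B.T            # gradient from the theorem

eps = 1e-6                                   # central finite differences
fd = np.zeros_like(X)
for i in range(n):
    for j in range(m):
        dX = np.zeros_like(X); dX[i, j] = eps
        fd[i, j] = (J(X + dX) - J(X - dX)) / (2 * eps)
print(np.max(np.abs(grad - fd)))
```

Since J is quadratic in X, central differences agree with the analytic gradient up to roundoff.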
[Figure: imaging through a lens as a Fourier transform pair F{.} / F^-1{.}
between pupil plane and image plane; coordinates x, y, z]
12 / 37
phase-retrieval problem
13 / 37
[Figure; footnote 4: Gonsalves]
14 / 37
Center of mass
The center of mass of the intensity pattern I(x,y):
    x̄ = ∫∫ x I(x,y) dx dy / ∫∫ I(x,y) dx dy
    ȳ = ∫∫ y I(x,y) dx dy / ∫∫ I(x,y) dx dy
gives the tip-tilt angles (α, β) ≈ (x̄/f, ȳ/f), and the wavefront estimate
    φ̂(x,y) = α x + β y
16 / 37
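A minimal sketch of the center-of-mass computation on a sampled intensity pattern; the Gaussian spot and the grid are made-up test data:

```python
import numpy as np

def center_of_mass(I, x, y):
    """Center of mass (xbar, ybar) of an intensity pattern I on grids x, y."""
    total = I.sum()
    xbar = (x[None, :] * I).sum() / total   # I is indexed [row=y, col=x]
    ybar = (y[:, None] * I).sum() / total
    return xbar, ybar

# a Gaussian spot displaced to (0.3, -0.2): the center of mass recovers the shift
x = np.linspace(-1, 1, 201)
y = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, y)
I = np.exp(-((X - 0.3)**2 + (Y + 0.2)**2) / 0.05)
xbar, ybar = center_of_mass(I, x, y)
print(xbar, ybar)
```

Dividing the recovered displacement by the focal length f then gives the tip-tilt angles as on the slide.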
Gerchberg-Saxton algorithm
Output: s(x,y) and Ŝ(ξ,η)
1: initialize the phase of s(x,y), with |s(x,y)| the known aperture amplitude
2: while not converged do
3:   S(ξ,η) ← F( s(x,y) )
4:   replace |S(ξ,η)| by the measured image-plane amplitude, keep the phase
5:   s(x,y) ← F^-1( S(ξ,η) )
6:   replace |s(x,y)| by the known aperture amplitude, keep the phase
7: end while
8: return Ŝ(ξ,η) = S(ξ,η)
17 / 37
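The alternating amplitude-constraint loop can be sketched as follows; the tip-tilt test phase and grid size are assumptions made only to generate a consistent pair of pupil- and focal-plane amplitudes:

```python
import numpy as np

def gerchberg_saxton(amp_pupil, amp_focal, n_iter=100, seed=0):
    """Alternate pupil-plane and focal-plane amplitude constraints (GS iteration)."""
    rng = np.random.default_rng(seed)
    s = amp_pupil * np.exp(1j * rng.uniform(0, 2 * np.pi, amp_pupil.shape))
    errs = []
    for _ in range(n_iter):
        S = np.fft.fft2(s)
        errs.append(np.linalg.norm(np.abs(S) - amp_focal))  # focal-plane mismatch
        S = amp_focal * np.exp(1j * np.angle(S))   # impose measured amplitude
        s = np.fft.ifft2(S)
        s = amp_pupil * np.exp(1j * np.angle(s))   # impose known pupil amplitude
    return s, errs

# toy data: amplitudes generated from a known tip-tilt phase (an assumption,
# made only to have a consistent |s| / |S| pair to reconstruct from)
N = 64
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
true = np.exp(1j * (2.0 * X + 1.0 * Y))        # unit amplitude, tip-tilt phase
s, errs = gerchberg_saxton(np.abs(true), np.abs(np.fft.fft2(true)))
print(errs[0], errs[-1])                       # the mismatch decreases
```

The error-reduction property of GS guarantees the mismatch sequence is non-increasing, but the limit depends on the initialization, as noted on the next slides.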
[Figure: wavefront estimate on the aperture (X-axis / Y-axis in m) and error
||S - Ŝ|| versus iteration nr.; legend: center-of-mass-determined initial
condition]
18 / 37
[Figure: three wavefront maps on the aperture (X-axis / Y-axis in m); panel
titles include "Center of mass" and "True wavefront"]
18 / 37
GS
Complexity: O(N)
Accuracy: only tip-tilt; depending on initialization
Application:
19 / 37
Slope measurements:
    t_x(i,j) = φ(i+1,j) - φ(i,j) + n_x(i,j)
    t_y(i,j) = φ(i,j+1) - φ(i,j) + n_y(i,j)
or, stacked,
    t = G φ + n
Reconstruction:
Fourier-domain reconstruction^a
Noise-free slopes
    t_x(i,j) = φ(i+1,j) - φ(i,j)
    t_y(i,j) = φ(i,j+1) - φ(i,j)
transform to
    T_x(ω_x, ω_y) = (e^{jω_x} - 1) Φ(ω_x, ω_y)
    T_y(ω_x, ω_y) = (e^{jω_y} - 1) Φ(ω_x, ω_y)
Inverting in least-squares sense gives the reconstruction filter
    Φ̂(ω_x, ω_y) = [ (e^{jω_x} - 1)* T_x(ω_x, ω_y) + (e^{jω_y} - 1)* T_y(ω_x, ω_y) ]
                  / [ 4 ( sin²(ω_x/2) + sin²(ω_y/2) ) ]
since |e^{jω_x} - 1|² + |e^{jω_y} - 1|² = 4 ( sin²(ω_x/2) + sin²(ω_y/2) ).
^a Poyneer
21 / 37
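The inverse filter can be checked on a noise-free, periodic toy phase; the sinusoidal test screen is an assumption, and the piston component (the (0,0) frequency) is zeroed since slopes do not observe it:

```python
import numpy as np

N = 32
i = np.arange(N)
phi = np.sin(2 * np.pi * i / N)[:, None] + np.cos(4 * np.pi * i / N)[None, :]
tx = np.roll(phi, -1, axis=0) - phi        # periodic first differences
ty = np.roll(phi, -1, axis=1) - phi

wx = 2 * np.pi * np.fft.fftfreq(N)[:, None]
wy = 2 * np.pi * np.fft.fftfreq(N)[None, :]
Tx = np.exp(1j * wx) - 1                   # slope transfer functions
Ty = np.exp(1j * wy) - 1
num = np.conj(Tx) * np.fft.fft2(tx) + np.conj(Ty) * np.fft.fft2(ty)
den = np.abs(Tx)**2 + np.abs(Ty)**2        # = 4(sin^2(wx/2) + sin^2(wy/2))
den[0, 0] = 1.0                            # piston is unobservable from slopes
phi_hat = np.real(np.fft.ifft2(num / den))
print(np.abs(phi_hat - phi).max())         # exact here (phi is zero-mean, periodic)
```

On a periodic grid without noise the filter is an exact inverse away from the piston mode; with real (non-periodic, noisy) data the boundary handling is the delicate part.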
Least squares
    J(φ̂) = (t - Gφ̂)^T (t - Gφ̂) = tr( (t - Gφ̂)(t - Gφ̂)^T )
Setting dJ(φ̂)/dφ̂ = 0 gives the normal equations
    G^T G φ̂ = G^T t
hence
    φ̂ = (G^T G)^{-1} G^T t
22 / 37
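A sketch of the least-squares reconstruction on a 1-D toy geometry; the difference matrix G, the random-walk "wavefront" and the noise level are made up. Since piston lies in the null space of G, the minimum-norm solution via the pseudoinverse is used rather than (G^T G)^{-1}:

```python
import numpy as np

n = 20
rng = np.random.default_rng(0)
phi = np.cumsum(rng.standard_normal(n))      # random-walk test wavefront
phi -= phi.mean()                            # remove piston (unobservable)
G = np.diff(np.eye(n), axis=0)               # (n-1) x n first-difference matrix
t = G @ phi + 1e-3 * rng.standard_normal(n - 1)

phi_hat = np.linalg.pinv(G) @ t              # minimum-norm LS solution
print(np.abs(phi_hat - phi).max())           # small, up to measurement noise
```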
φ̂ = G^+ t
with G^+ the Moore-Penrose pseudoinverse of G (Exercise: verify this.)
Moore-Penrose pseudoinverse
Let the singular value decomposition of G be given by
    G = U [ Σ  0 ; 0  0 ] V^T
then
    G^+ = V [ Σ^{-1}  0 ; 0  0 ] U^T
23 / 37
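The SVD formula can be checked against NumPy's built-in pseudoinverse; the small difference matrix is an arbitrary example:

```python
import numpy as np

G = np.diff(np.eye(6), axis=0)            # 5 x 6 first-difference matrix
U, s, Vt = np.linalg.svd(G)
s_inv = np.where(s > 1e-12, 1.0 / s, 0.0) # invert only nonzero singular values
G_pinv = Vt.T[:, :len(s)] @ np.diag(s_inv) @ U.T
print(np.abs(G_pinv - np.linalg.pinv(G)).max())
```

Thresholding the singular values is what makes the pseudoinverse well defined when G^T G is singular.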
Maximum likelihood
Assume the noise n is Gaussian with covariance R_n:
    p(n) = |2π R_n|^{-1/2} e^{-n^T R_n^{-1} n / 2}
so that
    p(t|φ) = |2π R_n|^{-1/2} e^{-(t - Gφ)^T R_n^{-1} (t - Gφ) / 2}
Maximizing the likelihood amounts to minimizing
    J(φ̂) = (t - Gφ̂)^T R_n^{-1} (t - Gφ̂) = tr( R_n^{-1} (t - Gφ̂)(t - Gφ̂)^T )
and setting dJ(φ̂)/dφ̂ = 0 gives
    G^T R_n^{-1} G φ̂ = G^T R_n^{-1} t
24 / 37
Linear estimation
Assume φ is Gaussian,
    p(φ) = |2π R_φ|^{-1/2} e^{-φ^T R_φ^{-1} φ / 2}
and independent of n.
Suppose φ̂ is given by a linear estimator
    φ̂ = L t
where L minimizes
    J(L) = E tr( (φ - φ̂)(φ - φ̂)^T )
         = tr E( [ I - LG   -L ] [ φ ; n ] [ φ ; n ]^T [ I - LG   -L ]^T )
25 / 37
    J(L) = tr E( [ I - LG   -L ] [ φ ; n ] [ φ ; n ]^T [ I - LG   -L ]^T )
         = tr( [ I - LG   -L ] [ R_φ  0 ; 0  R_n ] [ I - LG   -L ]^T )
Setting dJ(L)/dL = 0 gives
    L (G R_φ G^T + R_n) = R_φ G^T
which is equivalent to
    L = (R_φ^{-1} + G^T R_n^{-1} G)^{-1} G^T R_n^{-1}
Linear estimation
Hence φ̂ can be solved from
    (R_φ^{-1} + G^T R_n^{-1} G) φ̂ = G^T R_n^{-1} t
27 / 37
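The equivalence of the two expressions for L can be checked numerically; the exponential prior covariance, the difference-matrix G and the noise level are made-up test data:

```python
import numpy as np

n = 20
idx = np.arange(n)
R_phi = np.exp(-0.2 * np.abs(idx[:, None] - idx[None, :]))  # assumed prior cov.
G = np.diff(np.eye(n), axis=0)            # slope measurement matrix
R_n = 0.01 * np.eye(n - 1)                # measurement-noise covariance

# covariance form vs. information form of the optimal linear estimator
L1 = R_phi @ G.T @ np.linalg.inv(G @ R_phi @ G.T + R_n)
L2 = np.linalg.inv(np.linalg.inv(R_phi) + G.T @ np.linalg.inv(R_n) @ G) \
     @ G.T @ np.linalg.inv(R_n)
print(np.abs(L1 - L2).max())              # the two forms coincide
```

The information form is the one to solve in practice, since R_φ^{-1} + G^T R_n^{-1} G is sparse for local slope measurements.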
Least squares:
    φ̂ = G^+ t = (G^T G)^+ G^T t
Linear estimation:
    φ̂ = (R_φ^{-1} + G^T R_n^{-1} G)^{-1} G^T R_n^{-1} t
28 / 37
    t(k) = G φ(k) + n(k)
Time-recursive methods!
29 / 37
Wavefront prediction
Problem formulation:
Given measurements:
    t(k) = G φ(k) + n(k),   k = 1, 2, ...
with
    E( [ φ(k) ; n(k) ] [ φ(l) ; n(l) ]^T ) = [ R_φ(k-l)  0 ; 0  σ_n² I δ(k-l) ]
find φ̂(k+1|k) such that
    J_k(φ̂(k+1|k)) = E( (φ(k+1) - φ̂(k+1|k))^T (φ(k+1) - φ̂(k+1|k)) )
is minimized.
30 / 37
Wavefront prediction
Solution^a:
    φ̂(k+1|k) = A_k t(k)
where
    t(k) = [ t(1)^T  · · ·  t(k)^T ]^T,   A_k = [ A_1  · · ·  A_k ]
^a Fraanje
31 / 37
Wavefront prediction
AR predictor
Suppose an auto-regressive model of the wavefront is given:
    φ(k+1) = Σ_{i=0}^{p} A_i φ(k-i) + e(k+1)
           = [ A_0  · · ·  A_p ] [ φ(k) ; ... ; φ(k-p) ] + e(k+1)
           = A_p Φ_p(k) + e(k+1)
Multiplying by Φ_p(k)^T and taking expectations gives
    E( φ(k+1) Φ_p(k)^T ) = A_p E( Φ_p(k) Φ_p(k)^T ) = A_p Σ_p
and R_e = E e(k)e(k)^T satisfies R_e = R_φ(0) - A_p Σ_p A_p^T
32 / 37
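For p = 0 the moment identity E(φ(k+1)Φ_p(k)^T) = A_p Σ_p gives a simple estimator of the AR coefficients from sample covariances; the stable dynamics matrix and noise level below are made-up test data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 20000
A_true = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable AR(1) dynamics
phi = np.zeros((T, n))
for k in range(T - 1):
    phi[k + 1] = A_true @ phi[k] + 0.1 * rng.standard_normal(n)

# Yule-Walker for one lag: E[phi(k+1) phi(k)^T] = A E[phi(k) phi(k)^T]
R1 = phi[1:].T @ phi[:-1] / (T - 1)
R0 = phi[:-1].T @ phi[:-1] / (T - 1)
A_hat = R1 @ np.linalg.inv(R0)
phi_pred = A_hat @ phi[-1]                 # one-step prediction of the next phase
print(np.abs(A_hat - A_true).max())        # small for long records
```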
Wavefront prediction
AR predictor
Assume φ̂(k+1|k) is estimated by
    φ̂(k+1|k) = Σ_{i=0}^{p} L_i t(k-i)
             = [ L_0  · · ·  L_p ] [ t(k) ; ... ; t(k-p) ]
             = L_p [ G φ(k) + n(k) ; ... ; G φ(k-p) + n(k-p) ]
             = L_p (I_p ⊗ G) Φ_p(k) + L_p n_p(k)
33 / 37
Wavefront prediction
AR predictor
Combining
    φ(k+1) = A_p Φ_p(k) + e(k+1)
with the estimator above and minimizing the prediction-error covariance gives
    L_p = A_p Σ_p (I_p ⊗ G^T) ( (I_p ⊗ G) Σ_p (I_p ⊗ G^T) + R_{n,p} )^{-1}
34 / 37
Wavefront prediction
State-space predictors
State-space model for the wavefront phase:
    x(k+1) = A x(k) + K e(k)
    φ(k)   = C x(k) + e(k)
measurement equation:
    t(k) = G φ(k) + n(k)
Kalman one-step-ahead predictor^a:
    t̂(k|k-1)  = G C x̂(k|k-1)
    x̂(k+1|k)  = A x̂(k|k-1) + K_t ( t(k) - t̂(k|k-1) )
    φ̂(k+1|k)  = C x̂(k+1|k)
where K_t is the Kalman gain.
^a c.f.
35 / 37
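A sketch of the predictor recursion. Note this is a simplification of the innovations form on the slide: it uses a separate process noise w with covariance Q rather than a single e(k) driving both equations, and all system matrices are made-up scalars:

```python
import numpy as np

def kalman_predictor(A, C, G, Q, R, t_meas, x0, P0):
    """One-step-ahead Kalman predictor phi_hat(k+1|k) for
    x(k+1) = A x(k) + w(k), phi(k) = C x(k), t(k) = G phi(k) + n(k),
    with cov(w) = Q and cov(n) = R (a simplification of the innovations form)."""
    x, P = x0.astype(float).copy(), P0.astype(float).copy()
    H = G @ C                                 # state-to-measurement map
    preds = []
    for t_k in t_meas:
        S = H @ P @ H.T + R                   # innovation covariance
        Kt = A @ P @ H.T @ np.linalg.inv(S)   # predictor-form Kalman gain
        x = A @ x + Kt @ (t_k - H @ x)        # combined time/measurement update
        P = A @ P @ A.T + Q - Kt @ S @ Kt.T   # Riccati recursion
        preds.append(C @ x)                   # phi_hat(k+1|k)
    return np.array(preds)

# scalar toy problem: AR(1) phase observed through a noisy scalar "sensor"
rng = np.random.default_rng(0)
A = np.array([[0.95]]); C = np.array([[1.0]]); G = np.array([[1.0]])
Q = np.array([[0.1]]);  R = np.array([[0.01]])
T = 2000
x = np.zeros((T, 1))
for k in range(T - 1):
    x[k + 1] = 0.95 * x[k] + np.sqrt(0.1) * rng.standard_normal(1)
phi = x                                       # C = 1 here
t_meas = phi + np.sqrt(0.01) * rng.standard_normal((T, 1))
preds = kalman_predictor(A, C, G, Q, R, t_meas, np.zeros(1), np.eye(1))
err = phi[1:, 0] - preds[:-1, 0]              # preds[k] predicts phi[k+1]
print(np.mean(err**2), np.var(phi))           # predictor beats the prior spread
```

The steady-state gain of this recursion is the K_t of the slide; in adaptive-optics practice it is usually precomputed offline from the identified model.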
Research issues
Phase reconstruction from pixels for extended sources;
Phase reconstruction from single pixel for point sources;
Efficient distributed wavefront reconstruction / prediction (implementation on
36 / 37
Overview
Wavefront reconstruction from pixel intensities:
Gerchberg-Saxton;
Center-of-mass algorithm.
Wavefront prediction:
Time-recursive methods needed;
AR-predictors;
Kalman filter (state-space models).
37 / 37