Good Lecture For Est Theory
2. Mobile Communications
The position of a mobile terminal can be estimated using the time-of-arrival measurements received at the base stations.
3. Speech Processing
Recognition of human speech by a machine is a difficult task because
our voices change over time.
Given a human voice signal, the estimation problem is to determine the
underlying speech as accurately as possible.
4. Image Processing
Estimation of the position and orientation of an object from a camera
image is useful when using a robot to pick it up, e.g., in bomb disposal.
5. Biomedical Engineering
Estimation of the heart rate of a fetus; the difficulty is that the
measurements are corrupted by the mother's heartbeat as well.
6. Seismology
Estimation of the underground depth of an oil deposit based on
sound reflections caused by the different densities of the oil and rock layers.
2. Communications
In wired or wireless communications, we need to recover the information
sent from the transmitter at the receiver.
e.g., a binary phase shift keying (BPSK) signal consists of only two
symbols, 0 and 1. The detection problem is to decide whether each received
symbol is 0 or 1.
3. Speech Processing
Given a human speech signal, the detection problem is to decide which
word from a set of predefined words, e.g., 0, 1, ..., 9, was spoken.
[Figure: waveform of the spoken word 0]
4. Image Processing
Fingerprint authentication: given a fingerprint image whose provider
claims to be person A, we need to verify whether the claim is true.
5. Biomedical Engineering
e.g., given some X-ray slides of a patient, the detection problem is to
determine whether or not she has breast cancer.
6. Seismology
To detect whether or not there is oil in a region.
What is Estimation?
Extract or estimate some parameters from the observed signals, e.g.,
Use a voltmeter to measure a DC signal
$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$
Examples of parameters to be estimated: the DC value $A$ (a scalar); the amplitude, frequency and phase of a sinusoid (a vector).
The parameter can be constrained or unconstrained, e.g., constrained to lie in a known range.
For the DC signal model

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

three possible estimators for $A$ are

$$\hat{A}_1 = x[0], \qquad \hat{A}_2 = \frac{1}{N}\sum_{n=0}^{N-1} x[n], \qquad \hat{A}_3 = \frac{1}{N-1}\sum_{n=0}^{N-1} x[n]$$
Biased: $E\{\hat{A}\} \neq A$
Unbiased: $E\{\hat{A}\} = A$
Asymptotically unbiased: $E\{\hat{A}\} = A$ only as $N \rightarrow \infty$
Taking the expected values of $\hat{A}_1$, $\hat{A}_2$ and $\hat{A}_3$, we have

$$E\{\hat{A}_1\} = E\{x[0]\} = E\{A\} + E\{w[0]\} = A + 0 = A$$

$$E\{\hat{A}_2\} = E\left\{\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right\} = E\left\{\frac{1}{N}\sum_{n=0}^{N-1} A\right\} + E\left\{\frac{1}{N}\sum_{n=0}^{N-1} w[n]\right\} = \frac{1}{N}\,NA + \frac{1}{N}\sum_{n=0}^{N-1} E\{w[n]\} = A + 0 = A$$

$$E\{\hat{A}_3\} = \frac{1}{N-1}\sum_{n=0}^{N-1} E\{x[n]\} = \frac{N}{N-1}\,A = \frac{A}{1 - 1/N} \rightarrow A \quad \text{as } N \rightarrow \infty$$

so $\hat{A}_1$ and $\hat{A}_2$ are unbiased while $\hat{A}_3$ is only asymptotically unbiased. Note that in the absence of noise, $x[n] = A$ for all $n$, and even the geometric mean recovers $A$ exactly:

$$\sqrt[N]{x[0]\,x[1]\cdots x[N-1]} = \sqrt[N]{A \cdot A \cdots A} = \sqrt[N]{A^N} = A$$
The mean square error (MSE) and variance of an estimator $\hat{A}$ are defined as

$$\mathrm{MSE} = E\{(\hat{A} - A)^2\} \qquad (5.1)$$

$$\mathrm{var} = E\{(\hat{A} - E\{\hat{A}\})^2\} \qquad (5.2)$$

Note: If the estimator is unbiased, then MSE = var.
In general,

$$\mathrm{MSE} = E\{(\hat{A} - A)^2\} = E\{(\hat{A} - E\{\hat{A}\} + E\{\hat{A}\} - A)^2\}$$
$$= E\{(\hat{A} - E\{\hat{A}\})^2\} + E\{(E\{\hat{A}\} - A)^2\} + 2E\{(\hat{A} - E\{\hat{A}\})(E\{\hat{A}\} - A)\}$$
$$= \mathrm{var} + (E\{\hat{A}\} - A)^2 + 2\,(E\{\hat{A}\} - E\{\hat{A}\})(E\{\hat{A}\} - A)$$
$$= \mathrm{var} + (\mathrm{bias})^2 \qquad (5.3)$$
For the DC model, the MSEs of $\hat{A}_2$ and $\hat{A}_3$ are

$$E\{(\hat{A}_2 - A)^2\} = E\left\{\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n] - A\right)^2\right\} = E\left\{\left(\frac{1}{N}\sum_{n=0}^{N-1} w[n]\right)^2\right\} = \frac{\sigma_w^2}{N}$$

$$E\{(\hat{A}_3 - A)^2\} = E\left\{\left(\frac{1}{N-1}\sum_{n=0}^{N-1} x[n] - A\right)^2\right\} = \frac{N\sigma_w^2}{(N-1)^2} + \frac{A^2}{(N-1)^2}$$
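As a quick numerical check of these bias and MSE expressions, here is a minimal MATLAB Monte Carlo sketch; the values of A, the noise variance and N below are assumptions for illustration:

% Monte Carlo check of bias and MSE for the three DC estimators
% (illustrative sketch; A, s2 and N are assumed example values)
A = 1; s2 = 0.5; N = 20; trials = 1e5;
A1 = zeros(1,trials); A2 = zeros(1,trials); A3 = zeros(1,trials);
for t = 1:trials
    x = A + sqrt(s2)*randn(1,N);      % x[n] = A + w[n]
    A1(t) = x(1);                     % A_hat_1: first sample only
    A2(t) = mean(x);                  % A_hat_2: sample mean
    A3(t) = sum(x)/(N-1);             % A_hat_3: biased normalization
end
fprintf('bias: %8.4f %8.4f %8.4f\n', mean(A1)-A, mean(A2)-A, mean(A3)-A);
fprintf('MSE : %8.4f %8.4f %8.4f\n', mean((A1-A).^2), mean((A2-A).^2), mean((A3-A).^2));
% Expected: bias ~ [0, 0, A/(N-1)]; MSE ~ [s2, s2/N, (N*s2+A^2)/(N-1)^2]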
For a parameter vector $\boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_P]^T$, the Cramer-Rao lower bound (CRLB) for the $i$th parameter is

$$\mathrm{CRLB}(\theta_i) = [\mathbf{J}(\boldsymbol{\theta})]_{i,i} = [\mathbf{I}^{-1}(\boldsymbol{\theta})]_{i,i} \qquad (5.4)$$

where

$$\mathbf{I}(\boldsymbol{\theta}) = \begin{bmatrix} -E\left\{\dfrac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_1^2}\right\} & -E\left\{\dfrac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_1\,\partial\theta_2}\right\} & \cdots & -E\left\{\dfrac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_1\,\partial\theta_P}\right\} \\ \vdots & \ddots & \vdots \\ -E\left\{\dfrac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_P\,\partial\theta_1}\right\} & \cdots & -E\left\{\dfrac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_P^2}\right\} \end{bmatrix} \qquad (5.5)$$

Note that $\mathbf{I}(\boldsymbol{\theta})$ is known as the Fisher information matrix.
The Fisher information matrix is symmetric since

$$E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_i\,\partial\theta_j}\right\} = E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial\theta_j\,\partial\theta_i}\right\}$$
The Gaussian PDF for a scalar random variable $x$ with mean $\mu$ and variance $\sigma^2$ is

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \qquad (5.6)$$

We can write $x \sim \mathcal{N}(\mu, \sigma^2)$.

The Gaussian PDF for a random vector $\mathbf{x}$ of size $N$ is defined as

$$p(\mathbf{x}) = \frac{1}{(2\pi)^{N/2}\det^{1/2}(\mathbf{C})}\exp\left(-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\mathbf{C}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right) \qquad (5.7)$$

We can write $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{C})$.
where

$$\mathbf{x} = [x[0], x[1], \ldots, x[N-1]]^T, \qquad \boldsymbol{\mu} = E\{\mathbf{x}\} = [\mu_0, \mu_1, \ldots, \mu_{N-1}]^T$$

and the covariance matrix is

$$\mathbf{C} = \begin{bmatrix} E\{(x[0]-\mu_0)^2\} & E\{(x[0]-\mu_0)(x[1]-\mu_1)\} & \cdots & E\{(x[0]-\mu_0)(x[N-1]-\mu_{N-1})\} \\ \vdots & \ddots & \vdots \\ E\{(x[0]-\mu_0)(x[N-1]-\mu_{N-1})\} & \cdots & E\{(x[N-1]-\mu_{N-1})^2\} \end{bmatrix} \qquad (5.8)$$
For independent and identically distributed zero-mean samples of variance $\sigma^2$,

$$\mathbf{C} = E\{(\mathbf{x}-\boldsymbol{\mu})(\mathbf{x}-\boldsymbol{\mu})^T\} = \begin{bmatrix} \sigma^2 & 0 & \cdots & 0 \\ 0 & \sigma^2 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \sigma^2 \end{bmatrix} = \sigma^2\mathbf{I}_N$$

so that $\mathbf{C}^{-1} = \sigma^{-2}\mathbf{I}_N$ and $\det(\mathbf{C}) = (\sigma^2)^N = \sigma^{2N}$, and the PDF simplifies to

$$p(\mathbf{x}) = \frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left(-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1} x^2[n]\right) \qquad (5.9)$$
Example 5.1
Determine the PDFs of

$$x[0] = A + w[0]$$

and

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

where $w[n]$ is white Gaussian noise with variance $\sigma_w^2$:

$$p(x[0]; A) = \frac{1}{\sqrt{2\pi\sigma_w^2}}\exp\left(-\frac{1}{2\sigma_w^2}(x[0]-A)^2\right)$$

$$p(\mathbf{x}; A) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right)$$
Example 5.2
Find the CRLB for estimating $A$ based on a single measurement:

$$x[0] = A + w[0]$$

$$p(x[0]; A) = \frac{1}{\sqrt{2\pi\sigma_w^2}}\exp\left(-\frac{1}{2\sigma_w^2}(x[0]-A)^2\right)$$

$$\ln p(x[0]; A) = -\ln\left(\sqrt{2\pi\sigma_w^2}\right) - \frac{1}{2\sigma_w^2}(x[0]-A)^2$$

$$\frac{\partial \ln p(x[0]; A)}{\partial A} = \frac{1}{\sigma_w^2}(x[0]-A)$$
As a result,

$$\frac{\partial^2 \ln p(x[0]; A)}{\partial A^2} = -\frac{1}{\sigma_w^2} \quad\Rightarrow\quad I(A) = -E\left\{\frac{\partial^2 \ln p(x[0]; A)}{\partial A^2}\right\} = \frac{1}{\sigma_w^2}$$

$$J(A) = I^{-1}(A) = \sigma_w^2 \quad\Rightarrow\quad \mathrm{CRLB}(A) = \sigma_w^2$$
Example 5.3
Find the CRLB for estimating $A$ based on $N$ measurements:

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

$$p(\mathbf{x}; A) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right)$$
$$\ln p(\mathbf{x}; A) = \ln\left((2\pi\sigma_w^2)^{-N/2}\right) - \frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2$$

$$\frac{\partial \ln p(\mathbf{x}; A)}{\partial A} = -\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1} 2(x[n]-A)(-1) = \frac{\displaystyle\sum_{n=0}^{N-1}(x[n]-A)}{\sigma_w^2}$$

$$\frac{\partial^2 \ln p(\mathbf{x}; A)}{\partial A^2} = -\frac{N}{\sigma_w^2}$$
As a result,

$$I(A) = -E\left\{\frac{\partial^2 \ln p(\mathbf{x}; A)}{\partial A^2}\right\} = \frac{N}{\sigma_w^2}, \qquad J(A) = \frac{\sigma_w^2}{N} \quad\Rightarrow\quad \mathrm{CRLB}(A) = \frac{\sigma_w^2}{N}$$

so any unbiased estimator satisfies $\mathrm{var}(\hat{A}) \ge \dfrac{\sigma_w^2}{N}$.
Recall the estimators

$$\hat{A}_1 = x[0], \qquad \hat{A}_2 = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$$

Both are unbiased, and

$$E\{(\hat{A}_2 - A)^2\} = E\left\{\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n] - A\right)^2\right\} = E\left\{\left(\frac{1}{N}\sum_{n=0}^{N-1} w[n]\right)^2\right\} = \frac{\sigma_w^2}{N}$$

so the sample mean $\hat{A}_2$ attains the CRLB, whereas $\mathrm{var}(\hat{A}_1) = \sigma_w^2$ does not.
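The efficiency of the sample mean can also be verified numerically; a minimal MATLAB sketch (assumed values A = 1 and unit noise variance) compares its empirical variance with the CRLB for several data lengths:

% Empirical variance of the sample mean versus CRLB(A) = s2/N
A = 1; s2 = 1; trials = 1e5;                     % assumed example values
for N = [5 20 80]
    Ah = mean(A + sqrt(s2)*randn(trials,N), 2);  % sample mean per trial
    fprintf('N = %2d: var = %.5f, CRLB = %.5f\n', N, var(Ah), s2/N);
end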
Example 5.4
Find the CRLB for $A$ and $\sigma_w^2$ given $\{x[n]\}$:

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

The parameter vector is $\boldsymbol{\theta} = [A, \sigma_w^2]^T$ and

$$p(\mathbf{x}; \boldsymbol{\theta}) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right)$$

$$\ln p(\mathbf{x}; \boldsymbol{\theta}) = \ln\left((2\pi\sigma_w^2)^{-N/2}\right) - \frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2 = -\frac{N}{2}\ln(2\pi) - \frac{N}{2}\ln(\sigma_w^2) - \frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2$$

$$\frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A} = -\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1} 2(x[n]-A)(-1) = \frac{\displaystyle\sum_{n=0}^{N-1}(x[n]-A)}{\sigma_w^2}$$
$$\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A^2} = -\frac{N}{\sigma_w^2} \quad\Rightarrow\quad -E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A^2}\right\} = \frac{N}{\sigma_w^2}$$

$$\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A\,\partial\sigma_w^2} = -\frac{\displaystyle\sum_{n=0}^{N-1}(x[n]-A)}{\sigma_w^4} = -\frac{\displaystyle\sum_{n=0}^{N-1} w[n]}{\sigma_w^4}$$

$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A\,\partial\sigma_w^2}\right\} = \frac{\displaystyle\sum_{n=0}^{N-1} E\{w[n]\}}{\sigma_w^4} = 0$$
$$\frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \sigma_w^2} = -\frac{N}{2\sigma_w^2} + \frac{1}{2\sigma_w^4}\sum_{n=0}^{N-1}(x[n]-A)^2$$

$$\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial (\sigma_w^2)^2} = \frac{N}{2\sigma_w^4} - \frac{1}{\sigma_w^6}\sum_{n=0}^{N-1}(w[n])^2$$

$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial (\sigma_w^2)^2}\right\} = -\frac{N}{2\sigma_w^4} + \frac{1}{\sigma_w^6}\,N\sigma_w^2 = \frac{N}{2\sigma_w^4}$$

$$\mathbf{I}(\boldsymbol{\theta}) = \begin{bmatrix} \dfrac{N}{\sigma_w^2} & 0 \\ 0 & \dfrac{N}{2\sigma_w^4} \end{bmatrix}$$
$$\mathbf{J}(\boldsymbol{\theta}) = \mathbf{I}^{-1}(\boldsymbol{\theta}) = \begin{bmatrix} \dfrac{\sigma_w^2}{N} & 0 \\ 0 & \dfrac{2\sigma_w^4}{N} \end{bmatrix} \quad\Rightarrow\quad \mathrm{CRLB}(A) = \frac{\sigma_w^2}{N}, \qquad \mathrm{CRLB}(\sigma_w^2) = \frac{2\sigma_w^4}{N}$$

Note that the CRLB for $A$ is the same whether the noise power is known or unknown.
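As a numerical sanity check of both bounds, the following MATLAB sketch (assumed true values A = 1, noise variance 2) compares the empirical variances of the sample-mean and sample-variance estimators with the CRLBs:

% Compare empirical estimator variances with CRLB(A) and CRLB(sigma_w^2)
A = 1; s2 = 2; N = 50; trials = 1e5;    % assumed example values
Ah = zeros(1,trials); s2h = zeros(1,trials);
for t = 1:trials
    x = A + sqrt(s2)*randn(1,N);
    Ah(t)  = mean(x);                   % ML estimate of A
    s2h(t) = mean((x - Ah(t)).^2);      % ML estimate of the noise power
end
fprintf('var(A_hat)  = %.5f, CRLB(A)  = %.5f\n', var(Ah), s2/N);
fprintf('var(s2_hat) = %.5f, CRLB(s2) = %.5f\n', var(s2h), 2*s2^2/N);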
Example 5.5
Find the CRLB for the phase of a sinusoid in white Gaussian noise:

$$x[n] = A\cos(\omega_0 n + \phi) + w[n], \qquad n = 0, 1, \ldots, N-1$$

where $A$ and $\omega_0$ are known:

$$p(\mathbf{x}; \phi) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2\right)$$

$$\ln p(\mathbf{x};\phi) = \ln\left((2\pi\sigma_w^2)^{-N/2}\right) - \frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2$$
$$\frac{\partial \ln p(\mathbf{x};\phi)}{\partial \phi} = -\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1} 2\left(x[n]-A\cos(\omega_0 n+\phi)\right) A\sin(\omega_0 n+\phi) = -\frac{A}{\sigma_w^2}\sum_{n=0}^{N-1}\left[x[n]\sin(\omega_0 n+\phi) - \frac{A}{2}\sin(2\omega_0 n+2\phi)\right]$$

$$\frac{\partial^2 \ln p(\mathbf{x};\phi)}{\partial \phi^2} = -\frac{A}{\sigma_w^2}\sum_{n=0}^{N-1}\left[x[n]\cos(\omega_0 n+\phi) - A\cos(2\omega_0 n+2\phi)\right]$$

$$E\left\{\frac{\partial^2 \ln p(\mathbf{x};\phi)}{\partial \phi^2}\right\} = -\frac{A}{\sigma_w^2}\sum_{n=0}^{N-1}\left[A\cos(\omega_0 n+\phi)\cos(\omega_0 n+\phi) - A\cos(2\omega_0 n+2\phi)\right]$$
$$= -\frac{A^2}{\sigma_w^2}\sum_{n=0}^{N-1}\left[\cos^2(\omega_0 n+\phi) - \cos(2\omega_0 n+2\phi)\right] = -\frac{A^2}{\sigma_w^2}\sum_{n=0}^{N-1}\left[\frac{1}{2} + \frac{1}{2}\cos(2\omega_0 n+2\phi) - \cos(2\omega_0 n+2\phi)\right]$$
$$E\left\{\frac{\partial^2 \ln p(\mathbf{x};\phi)}{\partial \phi^2}\right\} = -\frac{NA^2}{2\sigma_w^2} + \frac{A^2}{2\sigma_w^2}\sum_{n=0}^{N-1}\cos(2\omega_0 n+2\phi)$$

As a result,

$$\mathrm{CRLB}(\phi) = \frac{1}{\dfrac{NA^2}{2\sigma_w^2} - \dfrac{A^2}{2\sigma_w^2}\displaystyle\sum_{n=0}^{N-1}\cos(2\omega_0 n+2\phi)} = \frac{2\sigma_w^2}{NA^2}\cdot\frac{1}{1 - \dfrac{1}{N}\displaystyle\sum_{n=0}^{N-1}\cos(2\omega_0 n+2\phi)}$$

If $N \gg 1$, then

$$\frac{1}{N}\sum_{n=0}^{N-1}\cos(2\omega_0 n+2\phi) \approx 0 \quad\Rightarrow\quad \mathrm{CRLB}(\phi) \approx \frac{2\sigma_w^2}{NA^2}$$
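To see how fast the exact bound approaches this asymptotic value, a short MATLAB sketch (all parameter values below are assumptions for illustration) evaluates both expressions:

% Exact versus asymptotic CRLB for the phase of a sinusoid
A = 1; s2 = 0.5; w0 = 0.2*pi; phi = 1;   % assumed parameter values
for N = [10 50 200]
    n = 0:N-1;
    exact  = (2*s2/(N*A^2)) / (1 - sum(cos(2*w0*n+2*phi))/N);
    approx = 2*s2/(N*A^2);
    fprintf('N = %3d: exact = %.6f, approx = %.6f\n', N, exact, approx);
end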
Example 5.6
Find the CRLB for $A$, $\omega_0$ and $\phi$ for

$$x[n] = A\cos(\omega_0 n + \phi) + w[n], \qquad n = 0, 1, \ldots, N-1, \qquad N \gg 1$$

With $\boldsymbol{\theta} = [A, \omega_0, \phi]^T$,

$$p(\mathbf{x};\boldsymbol{\theta}) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2\right)$$

$$\ln p(\mathbf{x};\boldsymbol{\theta}) = \ln\left((2\pi\sigma_w^2)^{-N/2}\right) - \frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2$$

$$\frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A} = -\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1} 2\left(x[n]-A\cos(\omega_0 n+\phi)\right)\left(-\cos(\omega_0 n+\phi)\right) = \frac{1}{\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]\cos(\omega_0 n+\phi) - A\cos^2(\omega_0 n+\phi)\right)$$
$$\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A^2} = -\frac{1}{\sigma_w^2}\sum_{n=0}^{N-1}\cos^2(\omega_0 n+\phi) = -\frac{1}{\sigma_w^2}\sum_{n=0}^{N-1}\left[\frac{1}{2} + \frac{1}{2}\cos(2\omega_0 n+2\phi)\right] \approx -\frac{N}{2\sigma_w^2}$$

$$\Rightarrow\quad -E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A^2}\right\} \approx \frac{N}{2\sigma_w^2}$$

Similarly,

$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A\,\partial\omega_0}\right\} = \frac{A}{2\sigma_w^2}\sum_{n=0}^{N-1} n\sin(2\omega_0 n+2\phi) \approx 0$$

$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A\,\partial\phi}\right\} = \frac{A}{2\sigma_w^2}\sum_{n=0}^{N-1}\sin(2\omega_0 n+2\phi) \approx 0$$
$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \omega_0^2}\right\} = \frac{A^2}{\sigma_w^2}\sum_{n=0}^{N-1} n^2\left[\frac{1}{2} - \frac{1}{2}\cos(2\omega_0 n+2\phi)\right] \approx \frac{A^2}{2\sigma_w^2}\sum_{n=0}^{N-1} n^2$$

$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \omega_0\,\partial\phi}\right\} = \frac{A^2}{\sigma_w^2}\sum_{n=0}^{N-1} n\sin^2(\omega_0 n+\phi) \approx \frac{A^2}{2\sigma_w^2}\sum_{n=0}^{N-1} n$$

$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \phi^2}\right\} = \frac{A^2}{\sigma_w^2}\sum_{n=0}^{N-1}\sin^2(\omega_0 n+\phi) \approx \frac{NA^2}{2\sigma_w^2}$$

Hence, for $N \gg 1$,

$$\mathbf{I}(\boldsymbol{\theta}) \approx \frac{1}{\sigma_w^2}\begin{bmatrix} \dfrac{N}{2} & 0 & 0 \\ 0 & \dfrac{A^2}{2}\displaystyle\sum_{n=0}^{N-1} n^2 & \dfrac{A^2}{2}\displaystyle\sum_{n=0}^{N-1} n \\ 0 & \dfrac{A^2}{2}\displaystyle\sum_{n=0}^{N-1} n & \dfrac{NA^2}{2} \end{bmatrix}$$
Inverting $\mathbf{I}(\boldsymbol{\theta})$ and taking the diagonal elements gives, with $\mathrm{SNR} = \dfrac{A^2}{2\sigma_w^2}$,

$$\mathrm{CRLB}(\omega_0) \approx \frac{12}{\mathrm{SNR}\cdot N(N^2-1)}, \qquad \mathrm{CRLB}(\phi) \approx \frac{2(2N-1)}{\mathrm{SNR}\cdot N(N+1)}$$

Note that

$$\mathrm{CRLB}(\phi) \approx \frac{2(2N-1)}{\mathrm{SNR}\cdot N(N+1)} \approx \frac{4}{\mathrm{SNR}\cdot N} > \frac{1}{\mathrm{SNR}\cdot N} = \frac{2\sigma_w^2}{NA^2}$$

i.e., the phase CRLB is about four times larger than in Example 5.5, where $A$ and $\omega_0$ are known.
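The same numbers can be obtained by building the asymptotic Fisher information matrix and inverting it; a MATLAB sketch with assumed parameter values:

% Large-N Fisher information matrix for theta = [A, w0, phi] and its inverse
A = 1; s2 = 0.5; N = 100; n = (0:N-1)';  % assumed example values
SNR = A^2/(2*s2);
I = (1/s2)*[ N/2, 0,               0;
             0,   A^2/2*sum(n.^2), A^2/2*sum(n);
             0,   A^2/2*sum(n),    N*A^2/2 ];
J = inv(I);
fprintf('CRLB(w0):  %.3e (matrix), %.3e (formula)\n', J(2,2), 12/(SNR*N*(N^2-1)));
fprintf('CRLB(phi): %.3e (matrix), %.3e (formula)\n', J(3,3), 2*(2*N-1)/(SNR*N*(N+1)));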
So far the parameter itself has been estimated directly. More generally, for a function $\alpha = g(\theta)$ of the parameter, e.g., the power $A^2$ of the DC signal $x[n] = A + w[n]$, $n = 0, 1, \ldots, N-1$, the CRLB becomes

$$\mathrm{CRLB}(\alpha) = \frac{\left(\dfrac{\partial g(\theta)}{\partial\theta}\right)^2}{-E\left\{\dfrac{\partial^2 \ln p(\mathbf{x};\theta)}{\partial\theta^2}\right\}} \qquad (5.10)$$
Example 5.7
Find the CRLB for the power of the DC value, i.e., $A^2$:

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

$$\alpha = g(A) = A^2 \quad\Rightarrow\quad \frac{\partial g(A)}{\partial A} = 2A, \qquad \left(\frac{\partial g(A)}{\partial A}\right)^2 = 4A^2$$

From Example 5.3, we have

$$-E\left\{\frac{\partial^2 \ln p(\mathbf{x}; A)}{\partial A^2}\right\} = \frac{N}{\sigma_w^2}$$

As a result,

$$\mathrm{CRLB}(A^2) = \frac{4A^2\sigma_w^2}{N}, \qquad N \gg 1$$
Example 5.8
Find the CRLB for $\alpha = c_1 + c_2 A$ from

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

$$\alpha = g(A) = c_1 + c_2 A \quad\Rightarrow\quad \frac{\partial g(A)}{\partial A} = c_2, \qquad \left(\frac{\partial g(A)}{\partial A}\right)^2 = c_2^2$$

As a result,

$$\mathrm{CRLB}(\alpha) = c_2^2\,\mathrm{CRLB}(A) = \frac{c_2^2\,\sigma_w^2}{N}$$
Maximum Likelihood Estimation

The maximum likelihood (ML) estimate is the parameter value that maximizes the likelihood of the observed data:

$$\hat{\theta} = \arg\max_{\theta}\, p(\mathbf{x};\theta) \qquad (5.11)$$
Example 5.9
Given

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

find the ML estimate of $A$. The likelihood function is

$$p(\mathbf{x}; A) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2\right)$$

Since $\arg\max_{\theta} p(\mathbf{x};\theta) = \arg\max_{\theta}\{\ln(p(\mathbf{x};\theta))\}$, taking the logarithm of $p(\mathbf{x};A)$ gives
$$\ln p(\mathbf{x}; A) = \ln\left((2\pi\sigma_w^2)^{-N/2}\right) - \frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)^2$$

Differentiating with respect to $A$:

$$\frac{\partial \ln p(\mathbf{x}; A)}{\partial A} = \frac{\displaystyle\sum_{n=0}^{N-1}(x[n]-A)}{\sigma_w^2}$$

Note that setting the derivative to zero yields the sample mean:

$$\frac{\displaystyle\sum_{n=0}^{N-1}(x[n]-\hat{A})}{\sigma_w^2} = 0 \quad\Rightarrow\quad \sum_{n=0}^{N-1}(x[n]-\hat{A}) = 0 \quad\Rightarrow\quad \hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$$
Example 5.10
Find the ML estimate for the phase of a sinusoid in white Gaussian noise:

$$x[n] = A\cos(\omega_0 n + \phi) + w[n], \qquad n = 0, 1, \ldots, N-1$$

$$p(\mathbf{x};\phi) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2\right)$$

$$\ln p(\mathbf{x};\phi) = \ln\left((2\pi\sigma_w^2)^{-N/2}\right) - \frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2$$
Maximizing $\ln p(\mathbf{x};\phi)$ is equivalent to minimizing $\sum_{n=0}^{N-1}(x[n]-A\cos(\omega_0 n+\phi))^2$. Differentiating with respect to $\phi$ and setting the result to zero:

$$\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\hat{\phi})\right) A\sin(\omega_0 n+\hat{\phi}) = 0$$

$$\Rightarrow\quad \sum_{n=0}^{N-1} x[n]\sin(\omega_0 n+\hat{\phi}) = \frac{A}{2}\sum_{n=0}^{N-1}\sin(2\omega_0 n+2\hat{\phi})$$

The ML estimate for $\phi$ is determined from the root of the above equation.
Q. Any ideas to solve the nonlinear equation?
$$\sum_{n=0}^{N-1} x[n]\sin(\omega_0 n+\hat{\phi}) = \frac{A}{2}\sum_{n=0}^{N-1}\sin(2\omega_0 n+2\hat{\phi})$$

Dividing both sides by $N$ and noting that

$$\frac{1}{N}\sum_{n=0}^{N-1} x[n]\sin(\omega_0 n+\hat{\phi}) = \frac{A}{2}\cdot\frac{1}{N}\sum_{n=0}^{N-1}\sin(2\omega_0 n+2\hat{\phi}) \approx \frac{A}{2}\cdot 0 = 0, \qquad N \gg 1$$

the equation reduces to

$$\sum_{n=0}^{N-1} x[n]\sin(\omega_0 n+\hat{\phi}) \approx 0$$

Expanding $\sin(\omega_0 n+\hat{\phi}) = \sin(\omega_0 n)\cos\hat{\phi} + \cos(\omega_0 n)\sin\hat{\phi}$ gives

$$\cos\hat{\phi}\sum_{n=0}^{N-1} x[n]\sin(\omega_0 n) = -\sin\hat{\phi}\sum_{n=0}^{N-1} x[n]\cos(\omega_0 n)$$
so that, for $N \gg 1$,

$$\hat{\phi} = -\tan^{-1}\left(\frac{\displaystyle\sum_{n=0}^{N-1} x[n]\sin(\omega_0 n)}{\displaystyle\sum_{n=0}^{N-1} x[n]\cos(\omega_0 n)}\right) \qquad (5.12)$$

To see why this is reasonable, substitute $x[n] = A\cos(\omega_0 n+\phi) + w[n]$:

$$\hat{\phi} = -\tan^{-1}\left(\frac{\displaystyle\sum_{n=0}^{N-1}\left(A\cos(\omega_0 n+\phi)+w[n]\right)\sin(\omega_0 n)}{\displaystyle\sum_{n=0}^{N-1}\left(A\cos(\omega_0 n+\phi)+w[n]\right)\cos(\omega_0 n)}\right) \approx -\tan^{-1}\left(\frac{-\dfrac{NA}{2}\sin(\phi) + \displaystyle\sum_{n=0}^{N-1} w[n]\sin(\omega_0 n)}{\dfrac{NA}{2}\cos(\phi) + \displaystyle\sum_{n=0}^{N-1} w[n]\cos(\omega_0 n)}\right)$$

$$= -\tan^{-1}\left(\frac{-\sin(\phi) + \dfrac{2}{NA}\displaystyle\sum_{n=0}^{N-1} w[n]\sin(\omega_0 n)}{\cos(\phi) + \dfrac{2}{NA}\displaystyle\sum_{n=0}^{N-1} w[n]\cos(\omega_0 n)}\right) \approx \phi, \qquad N \gg 1$$
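A minimal MATLAB sketch of the closed-form estimator (5.12), with assumed signal parameters; atan2 is used here so that the quadrant of the phase is resolved automatically:

% Large-N ML phase estimate of a noisy sinusoid using (5.12)
N = 200; n = 0:N-1;
A = 1; w0 = 0.2*pi; phi = 0.3*pi;          % assumed true values
x = A*cos(w0*n + phi) + 0.2*randn(1,N);    % observed samples
phi_hat = -atan2(sum(x.*sin(w0*n)), sum(x.*cos(w0*n)));
fprintf('true phase = %.4f, estimate = %.4f\n', phi, phi_hat);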
Example 5.11
Given $N$ samples of a white Gaussian process $w[n]$, $n = 0, 1, \ldots, N-1$, with
unknown variance $\sigma^2$, determine the power of $w[n]$ in dB.
The power in dB is related to $\sigma^2$ by

$$P = 10\log_{10}(\sigma^2)$$
$$p(\mathbf{w};\sigma^2) = \frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left(-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1} w^2[n]\right)$$

$$\ln p(\mathbf{w};\sigma^2) = -\frac{N}{2}\ln(2\pi) - \frac{N}{2}\ln(\sigma^2) - \frac{1}{2\sigma^2}\sum_{n=0}^{N-1} w^2[n]$$

$$\frac{\partial \ln p(\mathbf{w};\sigma^2)}{\partial \sigma^2} = -\frac{N}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{n=0}^{N-1} w^2[n] = 0 \quad\Rightarrow\quad \hat{\sigma}^2 = \frac{1}{N}\sum_{n=0}^{N-1} w^2[n]$$

As a result,

$$\hat{P} = 10\log_{10}(\hat{\sigma}^2) = 10\log_{10}\left(\frac{1}{N}\sum_{n=0}^{N-1} w^2[n]\right)$$
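A short MATLAB check of this dB power estimate (the true variance below is an assumed value):

% ML estimate of the noise power in dB
N = 1000; s2 = 2;                        % assumed true variance
w = sqrt(s2)*randn(1,N);
P_hat = 10*log10(mean(w.^2));            % 10*log10 of the ML variance estimate
fprintf('true P = %.3f dB, estimate = %.3f dB\n', 10*log10(s2), P_hat);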
Example 5.12
Given

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

find the ML estimates of both $A$ and $\sigma_w^2$, i.e., $\boldsymbol{\theta} = [A, \sigma_w^2]^T$:

$$\frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial A} = \frac{1}{\sigma_w^2}\sum_{n=0}^{N-1}(x[n]-A)$$

$$\frac{\partial \ln p(\mathbf{x};\boldsymbol{\theta})}{\partial \sigma_w^2} = -\frac{N}{2\sigma_w^2} + \frac{1}{2\sigma_w^4}\sum_{n=0}^{N-1}(x[n]-A)^2$$

Setting both partial derivatives to zero gives

$$\hat{\boldsymbol{\theta}} = [\hat{A}, \hat{\sigma}_w^2]^T, \qquad \hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n], \qquad \hat{\sigma}_w^2 = \frac{1}{N}\sum_{n=0}^{N-1}\left(x[n]-\hat{A}\right)^2$$
Example 5.13
From Example 5.10, the ML solution of $\phi$ is determined from

$$\sum_{n=0}^{N-1} x[n]\sin(\omega_0 n+\phi) = \frac{A}{2}\sum_{n=0}^{N-1}\sin(2\omega_0 n+2\phi)$$

Define

$$g(\phi) = \sum_{n=0}^{N-1} x[n]\sin(\omega_0 n+\phi) - \frac{A}{2}\sum_{n=0}^{N-1}\sin(2\omega_0 n+2\phi)$$

It is obvious that

$$\hat{\phi} = \text{root of } g(\phi)$$
One simple approach is a grid search over $[0, 2\pi)$; in MATLAB, with g(k) holding the value of $g(\phi)$ at $\phi = 2\pi\,\mathrm{pe}(k)$:

pe = [1:1000]/1000;   % grid of phi/(2*pi) values in (0, 1]
plot(pe,g)            % plot g(phi) against phi/(2*pi)

Note: the x-axis is $\phi/(2\pi)$.
Zooming in around the sign change locates the root more precisely:

stem(pe,g)
axis([0.14 0.16 -2 2])
A more systematic approach is the Newton-Raphson iteration:

$$\phi_{k+1} = \phi_k - \frac{g(\phi_k)}{g'(\phi_k)}, \qquad g'(\phi_k) = \left.\frac{dg(\phi)}{d\phi}\right|_{\phi=\phi_k} \qquad (5.13)$$

where

$$g(\phi) = \sum_{n=0}^{N-1} x[n]\sin(\omega_0 n+\phi) - \frac{A}{2}\sum_{n=0}^{N-1}\sin(2\omega_0 n+2\phi)$$

$$g'(\phi) = \sum_{n=0}^{N-1} x[n]\cos(\omega_0 n+\phi) - A\sum_{n=0}^{N-1}\cos(2\omega_0 n+2\phi)$$

with initial guess $\phi_0 = 0$. In MATLAB:
% Newton-Raphson iteration for the ML phase estimate
% (the setup values below are assumed so that the loop is runnable)
N = 100; n = 0:N-1;
A = 1; w = 0.2*pi;                         % known amplitude and frequency
x = A*cos(w*n + 0.3*pi) + 0.1*randn(1,N);  % observed row vector
p1 = 0;                                    % initial guess phi_0 = 0
for k=1:10
    s1 = sin(w.*n+p1);
    s2 = sin(2.*w.*n+2.*p1);
    c1 = cos(w.*n+p1);
    c2 = cos(2.*w.*n+2.*p1);
    g  = x*s1' - A/2*sum(s2);   % g(phi_k)
    g1 = x*c1' - A*sum(c2);     % g'(phi_k)
    p1 = p1 - g/g1;             % Newton-Raphson update
    p1_vector(k) = p1;
end
stem(p1_vector/(2*pi))          % iterates, normalized by 2*pi
Linear Data Model

Consider the general linear data model

$$\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w} \qquad (5.14)$$

where
$\mathbf{x}$ is the observed vector of size $N \times 1$
$\mathbf{w}$ is a Gaussian noise vector with known covariance matrix $\mathbf{C}$
$\mathbf{H}$ is a known matrix of size $N \times p$
$\boldsymbol{\theta}$ is the parameter vector of size $p \times 1$

$$p(\mathbf{x};\boldsymbol{\theta}) = \frac{1}{(2\pi)^{N/2}\det^{1/2}(\mathbf{C})}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{H}\boldsymbol{\theta})^T\mathbf{C}^{-1}(\mathbf{x}-\mathbf{H}\boldsymbol{\theta})\right)$$
Maximizing $p(\mathbf{x};\boldsymbol{\theta})$, the ML estimate for the linear model is

$$\hat{\boldsymbol{\theta}} = (\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{C}^{-1}\mathbf{x} \qquad (5.15)$$

For white noise, $\mathbf{C} = \sigma_w^2\mathbf{I}_N$, and (5.15) reduces to

$$\hat{\boldsymbol{\theta}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} \qquad (5.16)$$
Example 5.14
Given $N$ pairs of $(x, y)$ where $x$ is error-free but $y$ is subject to error:

$$y[n] = m\,x[n] + c + w[n], \qquad n = 0, 1, \ldots, N-1 \qquad (5.17)$$

find the ML estimates of $\boldsymbol{\theta} = [m\ \ c]^T$. Each measurement can be written as $y[n] = [x[n]\ \ 1]\,\boldsymbol{\theta} + w[n]$, e.g., $y[N-1] = [x[N-1]\ \ 1]\,\boldsymbol{\theta} + w[N-1]$, so the model is in linear form:

$$\mathbf{y} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}$$

where

$$\mathbf{y} = [y[0], y[1], \ldots, y[N-1]]^T, \qquad \mathbf{H} = \begin{bmatrix} x[0] & 1 \\ x[1] & 1 \\ \vdots & \vdots \\ x[N-1] & 1 \end{bmatrix}$$
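A minimal MATLAB sketch of this straight-line fit on synthetic data (the true values m = 2 and c = 1 below are assumptions for illustration):

% LS/ML straight-line fit y = m*x + c from noisy y measurements
N = 50; x = (0:N-1)';                    % error-free abscissae
m = 2; c = 1;                            % assumed true parameters
y = m*x + c + 0.5*randn(N,1);            % noisy ordinates
H = [x ones(N,1)];                       % N-by-2 observation matrix
theta = (H'*H)\(H'*y);                   % LS solution without explicit inverse
fprintf('m_hat = %.3f, c_hat = %.3f\n', theta(1), theta(2));

The backslash form (H'*H)\(H'*y) solves the normal equations directly, which is numerically preferable to forming the matrix inverse.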
Example 5.15
Find the ML estimates of $A$, $\omega_0$ and $\phi$ for

$$x[n] = A\cos(\omega_0 n+\phi) + w[n], \qquad n = 0, 1, \ldots, N-1, \qquad N \gg 1$$

With $\boldsymbol{\theta} = [A, \omega_0, \phi]^T$,

$$p(\mathbf{x};\boldsymbol{\theta}) = \frac{1}{(2\pi\sigma_w^2)^{N/2}}\exp\left(-\frac{1}{2\sigma_w^2}\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2\right)$$

and maximizing $p(\mathbf{x};\boldsymbol{\theta})$ is equivalent to minimizing

$$J(A, \omega_0, \phi) = \sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2$$
Use the re-parameterization $\alpha_1 = A\cos\phi$ and $\alpha_2 = -A\sin\phi$, so that $A\cos(\omega_0 n+\phi) = \alpha_1\cos(\omega_0 n) + \alpha_2\sin(\omega_0 n)$, with inverse mapping

$$A = \sqrt{\alpha_1^2 + \alpha_2^2}, \qquad \phi = \tan^{-1}\left(\frac{-\alpha_2}{\alpha_1}\right)$$

Let

$$\mathbf{c} = [1\ \cos(\omega_0)\ \cdots\ \cos(\omega_0(N-1))]^T, \qquad \mathbf{s} = [0\ \sin(\omega_0)\ \cdots\ \sin(\omega_0(N-1))]^T$$

We have

$$J(\alpha_1, \alpha_2, \omega_0) = (\mathbf{x} - \alpha_1\mathbf{c} - \alpha_2\mathbf{s})^T(\mathbf{x} - \alpha_1\mathbf{c} - \alpha_2\mathbf{s}) = (\mathbf{x} - \mathbf{H}\boldsymbol{\alpha})^T(\mathbf{x} - \mathbf{H}\boldsymbol{\alpha}), \qquad \boldsymbol{\alpha} = \begin{bmatrix}\alpha_1\\ \alpha_2\end{bmatrix}, \quad \mathbf{H} = [\mathbf{c}\ \ \mathbf{s}]$$
For fixed $\omega_0$, minimizing $J$ with respect to $\boldsymbol{\alpha}$ gives $\hat{\boldsymbol{\alpha}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x}$, so

$$\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\alpha}} = \left(\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\right)\mathbf{x}$$

$$J = \mathbf{x}^T\left(\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\right)^T\left(\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\right)\mathbf{x} = \mathbf{x}^T\left(\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\right)\mathbf{x} = \mathbf{x}^T\mathbf{x} - \mathbf{x}^T\mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x}$$

Minimizing $J$ over $\omega_0$ is therefore equivalent to maximizing $\mathbf{x}^T\mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x}$. For $N \gg 1$,

$$\mathbf{H}^T\mathbf{H} = \begin{bmatrix}\mathbf{c}^T\mathbf{c} & \mathbf{c}^T\mathbf{s}\\ \mathbf{s}^T\mathbf{c} & \mathbf{s}^T\mathbf{s}\end{bmatrix} \approx \begin{bmatrix} N/2 & 0\\ 0 & N/2\end{bmatrix}$$

so that

$$\mathbf{x}^T\mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} \approx \left[\mathbf{c}^T\mathbf{x}\ \ \mathbf{s}^T\mathbf{x}\right]\begin{bmatrix} N/2 & 0\\ 0 & N/2\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{c}^T\mathbf{x}\\ \mathbf{s}^T\mathbf{x}\end{bmatrix} = \frac{2}{N}\left[(\mathbf{c}^T\mathbf{x})^2 + (\mathbf{s}^T\mathbf{x})^2\right] = \frac{2}{N}\left|\sum_{n=0}^{N-1} x[n]\exp(-j\omega_0 n)\right|^2$$

As a result,

$$\hat{\omega}_0 = \arg\max_{\omega_0}\,\frac{1}{N}\left|\sum_{n=0}^{N-1} x[n]\exp(-j\omega_0 n)\right|^2$$

i.e., $\hat{\omega}_0$ is the location of the peak of the periodogram of $\{x[n]\}$.
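A MATLAB sketch of the periodogram-based frequency estimate; the parameter values are assumptions, and a zero-padded FFT provides the fine frequency grid:

% ML frequency estimation via the periodogram peak
N = 256; n = 0:N-1;
A = 1; w0 = 0.22*pi; phi = 0.4;              % assumed true values
x = A*cos(w0*n + phi) + 0.3*randn(1,N);
K = 8192;                                    % zero-padded FFT length
X = fft(x, K);
P = abs(X(1:K/2)).^2 / N;                    % periodogram over [0, pi)
[~, idx] = max(P);
w0_hat = 2*pi*(idx-1)/K;
fprintf('true w0 = %.4f, estimate = %.4f\n', w0, w0_hat);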
Variants of LS Methods
1. Standard LS
Consider the general linear data model:

$$\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}$$

where $\mathbf{H}$ and $\boldsymbol{\theta}$ are defined as in (5.14). The LS estimate minimizes the squared error between the data and the model:

$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\,\mathbf{e}^T\mathbf{e}, \qquad \mathbf{e} = \mathbf{x} - \mathbf{H}\boldsymbol{\theta} \qquad (5.18)$$

where

$$\mathbf{e} = [e(0)\ e(1)\ \cdots\ e(N-1)]^T$$

(5.18) is equivalent to

$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\,\sum_{k=0}^{N-1} e^2(k)$$

and its solution is

$$\hat{\boldsymbol{\theta}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} \qquad (5.19)$$
Example 5.16
Given

$$x[n] = A + w[n], \qquad n = 0, 1, \ldots, N-1$$

find the LS estimate of $A$. Using (5.18), differentiate $\sum_{n=0}^{N-1}(x[n]-A)^2$ with respect to $A$ and set the result to zero:

$$\sum_{n=0}^{N-1}\left(x[n]-\hat{A}\right) = 0 \quad\Rightarrow\quad \hat{A} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$$
Using (5.19), with $\mathbf{H} = [1\ 1\ \cdots\ 1]^T$:

$$\hat{A} = \left([1\ 1\ \cdots\ 1]\begin{bmatrix}1\\ 1\\ \vdots\\ 1\end{bmatrix}\right)^{-1}[1\ 1\ \cdots\ 1]\begin{bmatrix}x[0]\\ x[1]\\ \vdots\\ x[N-1]\end{bmatrix} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$$

Both (5.18) and (5.19) give the same answer, and the LS solution is the sample mean.
Example 5.17
Find the LS estimate of the coefficient vector $\mathbf{W}$ of a finite impulse response filter of length $L$ that maps the input $x[n]$ to a desired output $d[n]$, i.e., $d[n] \approx \mathbf{W}^T\mathbf{X}(n)$, $n = 0, 1, \ldots, N-1$, where $\mathbf{X}(n) = [x[n], x[n-1], \ldots, x[n-L+1]]^T$.

Using (5.19):

$$\hat{\mathbf{W}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{d}$$

where

$$\mathbf{H} = \begin{bmatrix}\mathbf{X}^T(0)\\ \mathbf{X}^T(1)\\ \vdots\\ \mathbf{X}^T(N-1)\end{bmatrix} = \begin{bmatrix} x[0] & 0 & \cdots & 0\\ x[1] & x[0] & \cdots & 0\\ \vdots & \vdots & & \vdots\\ x[N-1] & x[N-2] & \cdots & x[N-L]\end{bmatrix}$$

Writing $\mathbf{R}_{xx} = \mathbf{H}^T\mathbf{H}$ and $\mathbf{R}_{dx} = \mathbf{H}^T\mathbf{d}$, the solution has the same form as the Wiener filter, where $\mathbf{R}_{xx}$ and $\mathbf{R}_{dx}$ are not the ensemble-average versions of (3.6) but their time-averaged counterparts.
Example 5.18
Find the LS estimate of $A$ for

$$x[n] = A\cos(\omega_0 n+\phi) + w[n], \qquad n = 0, 1, \ldots, N-1, \qquad N \gg 1$$

Differentiate $\sum_{n=0}^{N-1}\left(x[n]-A\cos(\omega_0 n+\phi)\right)^2$ with respect to $A$ and set the result to 0:

$$\sum_{n=0}^{N-1}\left(x[n]-\hat{A}\cos(\omega_0 n+\phi)\right)\cos(\omega_0 n+\phi) = 0 \quad\Rightarrow\quad \hat{A} = \frac{\displaystyle\sum_{n=0}^{N-1} x[n]\cos(\omega_0 n+\phi)}{\displaystyle\sum_{n=0}^{N-1}\cos^2(\omega_0 n+\phi)}$$
2. Weighted LS
Use a general form of LS via a symmetric weighting matrix $\mathbf{W}$:

$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\,(\mathbf{x}-\mathbf{H}\boldsymbol{\theta})^T\mathbf{W}(\mathbf{x}-\mathbf{H}\boldsymbol{\theta}) \qquad (5.20)$$

such that $\mathbf{W} = \mathbf{W}^T$, with solution

$$\hat{\boldsymbol{\theta}} = (\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W}\mathbf{x} \qquad (5.21)$$
e.g., consider estimating $A$ from two measurements of unequal noise variances:

$$x_1 = A + w_1 \quad\text{and}\quad x_2 = A + w_2$$

where $w_1$ and $w_2$ are zero-mean with variances $\sigma_1^2$ and $\sigma_2^2$. Use

$$\mathbf{W} = \mathbf{C}^{-1} = \begin{bmatrix}\sigma_1^2 & 0\\ 0 & \sigma_2^2\end{bmatrix}^{-1} = \begin{bmatrix}1/\sigma_1^2 & 0\\ 0 & 1/\sigma_2^2\end{bmatrix}$$

with the model written as $\mathbf{x} = \mathbf{H}A + \mathbf{w}$, $\mathbf{H} = [1\ 1]^T$.
Using (5.21):

$$\hat{A} = (\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{C}^{-1}\mathbf{x} = \left([1\ 1]\begin{bmatrix}1/\sigma_1^2 & 0\\ 0 & 1/\sigma_2^2\end{bmatrix}\begin{bmatrix}1\\ 1\end{bmatrix}\right)^{-1}[1\ 1]\begin{bmatrix}1/\sigma_1^2 & 0\\ 0 & 1/\sigma_2^2\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}$$
As a result,

$$\hat{A} = \frac{\dfrac{x_1}{\sigma_1^2} + \dfrac{x_2}{\sigma_2^2}}{\dfrac{1}{\sigma_1^2} + \dfrac{1}{\sigma_2^2}} = \frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,x_1 + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,x_2$$
Note that
If $\sigma_2^2 > \sigma_1^2$, a larger weight is placed on $x_1$, and vice versa
If $\sigma_2^2 = \sigma_1^2$, the solution is equal to the standard sample mean
The solution will be more complicated if $w_1$ and $w_2$ are correlated
Exact values of $\sigma_1^2$ and $\sigma_2^2$ are not necessary; only their ratio is needed. Defining $\beta = \sigma_1^2/\sigma_2^2$, we have

$$\hat{A} = \frac{1}{1+\beta}\,x_1 + \frac{\beta}{1+\beta}\,x_2$$
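A quick MATLAB comparison of the weighted LS combination against the plain average (the variances below are assumed example values):

% Weighted LS fusion of two measurements with unequal noise variances
A = 5; s1 = 1; s2 = 4; trials = 1e5;       % assumed example values
x1 = A + sqrt(s1)*randn(1,trials);
x2 = A + sqrt(s2)*randn(1,trials);
A_wls = (x1/s1 + x2/s2) / (1/s1 + 1/s2);   % weighted LS estimate
A_avg = (x1 + x2)/2;                       % unweighted average
fprintf('var WLS = %.4f, var mean = %.4f\n', var(A_wls), var(A_avg));
% Expected: 1/(1/s1+1/s2) = 0.8 versus (s1+s2)/4 = 1.25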
3. Nonlinear LS
The LS cost function cannot be represented as a linear model of the form $\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}$, e.g., joint estimation of $A$, $\omega_0$ and $\phi$ in

$$\sum_{n=0}^{N-1}\left(x[n] - A\cos(\omega_0 n+\phi)\right)^2$$
4. Constrained LS
The linear LS cost function is minimized subject to constraints:

$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\,(\mathbf{x} - \mathbf{H}\boldsymbol{\theta})^T(\mathbf{x} - \mathbf{H}\boldsymbol{\theta}) \quad\text{subject to}\quad \boldsymbol{\theta}\in S \qquad (5.22)$$

e.g., $\theta_1 + \theta_2 + \theta_3 = 10$ or $\theta_1^2 + \theta_2^2 + \theta_3^2 = 100$
For linear constraints, the Lagrange multiplier technique applies:

$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\,(\mathbf{x} - \mathbf{H}\boldsymbol{\theta})^T(\mathbf{x} - \mathbf{H}\boldsymbol{\theta}) \quad\text{subject to}\quad \mathbf{A}\boldsymbol{\theta} = \mathbf{b} \qquad (5.23)$$

The constrained LS cost function is

$$J_c = (\mathbf{x} - \mathbf{H}\boldsymbol{\theta})^T(\mathbf{x} - \mathbf{H}\boldsymbol{\theta}) + \boldsymbol{\lambda}^T(\mathbf{A}\boldsymbol{\theta} - \mathbf{b}) \qquad (5.24)$$

Expanding (5.24):

$$J_c = \mathbf{x}^T\mathbf{x} - 2\boldsymbol{\theta}^T\mathbf{H}^T\mathbf{x} + \boldsymbol{\theta}^T\mathbf{H}^T\mathbf{H}\boldsymbol{\theta} + \boldsymbol{\theta}^T\mathbf{A}^T\boldsymbol{\lambda} - \boldsymbol{\lambda}^T\mathbf{b}$$

Differentiating with respect to $\boldsymbol{\theta}$ and setting the result to zero:

$$\hat{\boldsymbol{\theta}}_c = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} - \frac{1}{2}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{A}^T\boldsymbol{\lambda} = \hat{\boldsymbol{\theta}} - \frac{1}{2}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{A}^T\boldsymbol{\lambda}$$

where $\hat{\boldsymbol{\theta}}$ is the unconstrained LS solution. Putting $\hat{\boldsymbol{\theta}}_c$ into $\mathbf{A}\boldsymbol{\theta} = \mathbf{b}$:

$$\mathbf{A}\hat{\boldsymbol{\theta}}_c = \mathbf{A}\hat{\boldsymbol{\theta}} - \frac{1}{2}\mathbf{A}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{A}^T\boldsymbol{\lambda} = \mathbf{b} \quad\Rightarrow\quad \boldsymbol{\lambda} = 2\left(\mathbf{A}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{A}^T\right)^{-1}(\mathbf{A}\hat{\boldsymbol{\theta}} - \mathbf{b})$$

Putting $\boldsymbol{\lambda}$ back into $\hat{\boldsymbol{\theta}}_c$:

$$\hat{\boldsymbol{\theta}}_c = \hat{\boldsymbol{\theta}} - (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{A}^T\left(\mathbf{A}(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{A}^T\right)^{-1}(\mathbf{A}\hat{\boldsymbol{\theta}} - \mathbf{b})$$
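A MATLAB sketch of this constrained LS solution on synthetic data; the model, noise level and constraint below are assumptions for illustration:

% Constrained LS: minimize ||x - H*theta||^2 subject to A*theta = b
rng(0);
N = 40; H = randn(N,3);
theta_true = [2; 3; 5];                    % satisfies the constraint below
x = H*theta_true + 0.5*randn(N,1);
A = [1 1 1]; b = 10;                       % constraint A*theta = b
theta_ls = (H'*H)\(H'*x);                  % unconstrained LS solution
G = inv(H'*H);
theta_c = theta_ls - G*A'*((A*G*A')\(A*theta_ls - b));
fprintf('sum of constrained estimate: %.6f (target %.1f)\n', A*theta_c, b);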
5. Total LS
Motivation: noise in both $\mathbf{x}$ and $\mathbf{H}$:

$$\mathbf{x} + \mathbf{w}_1 = (\mathbf{H} + \mathbf{W}_2)\,\boldsymbol{\theta} \qquad (5.25)$$

e.g., a linear prediction model in which the clean signal satisfies $s(n) = a_0\,s(n-1) + a_1\,s(n-2)$ but only the noisy measurements $x(n) = s(n) + w(n)$, $n = 0, 1, \ldots, N-1$, are available.
Writing the prediction equations with the noisy data, e.g., $x(N-1) = a_0\,x(N-2) + a_1\,x(N-3)$, and substituting $x(n) = s(n) + w(n)$ shows that noise enters both the data vector and the data matrix:

$$\begin{bmatrix} s(2)\\ s(3)\\ \vdots\\ s(N-1)\end{bmatrix} + \begin{bmatrix} w(2)\\ w(3)\\ \vdots\\ w(N-1)\end{bmatrix} = \left(\begin{bmatrix} s(1) & s(0)\\ s(2) & s(1)\\ \vdots & \vdots\\ s(N-2) & s(N-3)\end{bmatrix} + \begin{bmatrix} w(1) & w(0)\\ w(2) & w(1)\\ \vdots & \vdots\\ w(N-2) & w(N-3)\end{bmatrix}\right)\begin{bmatrix} a_0\\ a_1\end{bmatrix}$$
6. Mixed LS
A combination of LS, weighted LS, nonlinear LS, constrained LS and/or
total LS
Examples: weighted LS with constraints, total LS with constraints, etc.
Questions for Discussion
1. Use least squares to estimate the line $y = ax$ given the data pairs $(x_i, y_i)$, $i = 1, 2, \ldots, N$, where only $\{y_i\}$ contain zero-mean noise.
2. Use least squares to estimate the line $y = ax$ in Q.1 but now only $\{x_i\}$
contain zero-mean noise.
3. In a radar system, the received signal is

$$r(n) = s(n - \tau_0) + w(n)$$

where the range $R$ of an object is related to the time delay $\tau_0$ by $\tau_0 = 2R/c$.