
DIGITAL SIGNAL PROCESSING

Laboratory exercise 2: Maximum likelihood estimation


Gerard Garcia Gros, Mario Lacueva Conesa

Abstract—In this laboratory exercise, we explore the concept of maximum likelihood estimation (MLE) applied to signal phase estimation. The signal r(n) is the received GNSS signal, consisting of a deterministic cosine component with amplitude A, frequency f0, and phase ϕ, along with additive white Gaussian noise (AWGN) with variance σ². The objective is to estimate the unknown deterministic phase ϕ using the maximum likelihood estimator. We derive the probability density function (pdf) of the signal and calculate its mean. Subsequently, we implement the MLE for phase estimation under both noiseless and noisy conditions, comparing the estimated phase to the true value. Additionally, we analyze the signal-to-noise ratio (SNR) and its impact on the estimator's accuracy. Matlab simulations are conducted to validate the theoretical results.

1. PDF AND MEAN OF r(n)


To calculate the mean of r(n), we must consider both the cosine signal and the noise. Since the cosine is deterministic, its mean at sample n is simply its value at that sample. The mean of the noise is 0, as stated in the exercise. The mean of r(n) is the sum of the expectations of both terms, so the mean of r(n) is the cosine value itself. We can see this if we develop the expectation of r(n):

E[r(n)] = E[A cos(ω0 n + ϕ) + w(n)]
        = E[A cos(ω0 n + ϕ)] + E[w(n)]
        = A E[cos(ω0 n + ϕ)] + E[w(n)]
        = A cos(ω0 n + ϕ)                                                (1)

Therefore, the mean of r(n) at a point depends on n, as the cosine component is deterministic.
To find the pdf of r(n) at a point, we need to consider that the signal r(n) is composed of two terms, where:
• A cos(2πf0 n + ϕ) is a deterministic signal with amplitude A, frequency f0, and phase ϕ,
• w(n) is additive white Gaussian noise (AWGN) with zero mean and variance σ², as stated in the exercise.

The noise w(n) follows a normal distribution with mean 0 and variance σ²:

pw(w) = (1 / √(2πσ²)) exp(−w² / (2σ²))

Since the cosine term is deterministic, the randomness of r(n) comes entirely from the noise, so the pdf of the signal is a Gaussian centered on the cosine value:

pr(r(n)) = (1 / √(2πσ²)) exp(−(r(n) − A cos(ω0 n + ϕ))² / (2σ²))

To calculate the pdf of the signal r(n), we wrote Matlab code that generates a Gaussian as defined in the exercise. We only compute the Gaussian part, which depends on the noise, since the cosine part is deterministic. One of the outputs of this simulation is Figure 1.

Fig. 1: pdf of r(n), simulated
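As a reference, a minimal Matlab sketch of this simulation could look as follows; the parameter values A, f0, ϕ, σ² and the fixed sample index n0 are illustrative assumptions, not values taken from the statement:

  % Sketch: empirical pdf of r(n0) against the theoretical Gaussian (assumed values)
  A = 1; f0 = 0.05; phi = pi/2; sigma2 = 0.1;   % illustrative parameters
  n0 = 10;                                      % fixed sample index
  K  = 1e5;                                     % number of noise realizations
  mu = A*cos(2*pi*f0*n0 + phi);                 % deterministic mean at n0
  r  = mu + sqrt(sigma2)*randn(1, K);           % r(n0) = cosine value + AWGN
  histogram(r, 100, 'Normalization', 'pdf'); hold on;
  x = linspace(mu - 4*sqrt(sigma2), mu + 4*sqrt(sigma2), 200);
  plot(x, exp(-(x - mu).^2/(2*sigma2)) / sqrt(2*pi*sigma2));
  xlabel('r(n_0)'); ylabel('pdf'); legend('simulated', 'theoretical');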
2. PDF AND MEAN OF THE VECTOR r(n)

Now the mean of the signal vector r(n) is the vector of means at each point. The mean vector µ is equal to:

µ(n) = E[r(n)] = [A cos(ω0 n0 + ϕ), A cos(ω0 n1 + ϕ), ..., A cos(ω0 n(N−1) + ϕ)]    (2)

If we consider the samples contained in the vector r(n), the pdf is the same as for a single point, except that the independent noise samples now give a product over all points. Thus, the pdf of r(n) is:

fr(r) = ∏_{n=0}^{N−1} (1 / √(2πσ²)) exp(−(r(n) − µ(n))² / (2σ²))

3. BACKGROUND ON FUNDAMENTALS OF COMMUNICATIONS

In Fundamentals of Communications, we saw that the instantaneous phase of the baseband equivalent can be expressed as:

φ_bx(t) = ∠bx(t) = arctan( qx(t) / ix(t) )

where ix(t) and qx(t) are the in-phase and quadrature components of the signal bx(t).
We can see that the expression for the estimator is quite similar, with:
• ix(t) being the factor multiplying the cosine,
• qx(t) being the factor multiplying the sine.

Thus, with the estimator ϕ̂_ML, we are estimating the phase of the signal ϕ, which is unknown but deterministic.
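For completeness, the estimator used in the next section follows from maximizing the likelihood, which for Gaussian noise is equivalent to a least-squares fit. This derivation step is not part of the original statement, but in sketch form it reads:

ϕ̂_ML = argmax_ϕ fr(r) = argmin_ϕ Σ_{n=0}^{N−1} (r(n) − A cos(2πf0 n + ϕ))²

Setting the derivative with respect to ϕ to zero and neglecting the double-frequency terms yields the closed form used below:

ϕ̂_ML = −arctan( Σ_{n=0}^{N−1} r(n) sin(2πf0 n) / Σ_{n=0}^{N−1} r(n) cos(2πf0 n) )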
4. IMPLEMENTATION OF MLE ϕ̂_ML

To test the estimator ϕ̂_ML, we create a signal r(n) with a known phase shift and use it to estimate that phase shift. If the estimator returns the same value that we set, it is working correctly. Since the signal has no noise, all the decimal places should match. As an example, we will test it for ϕ1 = 3π/2 and for ϕ2 = π/2:

• r(n) = A · cos(2πf0 n + ϕ1)
• r(n) = A · cos(2πf0 n + ϕ2)

where A = 1, f0 = 0.05 and N = 100. We substitute into the estimator:

ϕ̂_ML = −arctan( Σ_{n=0}^{N−1} r(n) sin(2πf0 n) / Σ_{n=0}^{N−1} r(n) cos(2πf0 n) )

The result of the estimator gives us the expected values:

• ϕ̂_ML1 = 4.7124 = 3π/2
• ϕ̂_ML2 = 1.5708 = π/2
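A minimal Matlab sketch of this check follows; we use atan2 to resolve the quadrant ambiguity of the arctan form, which is consistent with the reported value 4.7124, and the variable names are our own assumptions:

  % Sketch: ML phase estimation on a noiseless signal (assumed implementation)
  A = 1; f0 = 0.05; N = 100; n = 0:N-1;
  % -arctan(num/den) with the quadrant resolved by atan2, wrapped to [0, 2*pi)
  est_phi = @(r) mod(atan2(-sum(r.*sin(2*pi*f0*n)), sum(r.*cos(2*pi*f0*n))), 2*pi);
  for phi = [3*pi/2, pi/2]
      r = A*cos(2*pi*f0*n + phi);          % noiseless received signal
      fprintf('true = %.4f   estimated = %.4f\n', phi, est_phi(r));
  end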

5. IMPLEMENTATION OF MLE ϕ̂_ML WITH NOISE

We add Gaussian noise with power Pw = 0.1, as requested in the statement. To determine the SNR, we first need to calculate the power of the signal.
To obtain the power of the signal r(n) = A · cos(2πf0 n + ϕ), we use the definition of average power for periodic signals:

P = (1/T) ∫0^T |r(t)|² dt

where T is the period of the signal, given by T = 1/f0. Substituting r(n):

P = (1/T) ∫0^T (A · cos(2πf0 n + ϕ))² dn = (A²/T) ∫0^T cos²(2πf0 n + ϕ) dn

Using the identity cos²(x) = (1 + cos(2x))/2:

P = (A²/T) ∫0^T (1 + cos(4πf0 n + 2ϕ))/2 dn

The integral of the term cos(4πf0 n + 2ϕ) over one period is zero. Thus, we have:

P = (A²/T) · (1/2) ∫0^T 1 dn = (A²/T) · (T/2) = A²/2

Finally, the power of the signal is P = A²/2.
The signal-to-noise ratio (SNR) is defined as:

SNR = P_signal / P_noise

Since P_signal = A²/2 and P_noise = 0.1, for A = 1 we have P_signal = 1²/2 = 1/2. Therefore, the SNR is:

SNR = (1/2) / 0.1 = (1/2) · (10/1) = 5

To convert the SNR to decibels (dB), we use the following formula:

SNR (dB) = 10 log10(SNR) = 10 log10(5) ≈ 10 · 0.6990 ≈ 6.99 dB

Thus, the SNR in dB is approximately 7 dB.
Now let us check whether the estimator still works, this time for:

• ϕ1 = 2π/3 = 2.0944
• ϕ2 = π/2 = 1.5708

In the first example, we have:

r1(n) = A · cos(2πf0 n + ϕ1)

ϕ̂_ML1 = −arctan( Σ_{n=0}^{N−1} r1(n) sin(2πf0 n) / Σ_{n=0}^{N−1} r1(n) cos(2πf0 n) ) = 2.0411

ϕ̂_ML2 = ... = 1.5577

The results have been quite accurate, with an average absolute error of 0.0332.
As an annex, we calculate the average absolute error for SNR values from −60 dB (very noisy) to 20 dB. For each SNR, we take a total of 500 noisy realizations. The phase is π/2.

Fig. 2: Average absolute error as a function of SNR (dB)

We can see in Figure 2 that, as the SNR increases, the average absolute error decreases until it stabilizes near zero. This makes sense, since a higher SNR means a lower proportion of noise. It is noteworthy that from −40 dB downwards the error stabilizes at 0.8. This is because, at that point, the noise power is so much higher than the signal power that the signal no longer influences the estimate.
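A sketch of this annex experiment follows, assuming 500 noisy realizations per SNR point as in the statement; the loop structure and names are our own:

  % Sketch: average absolute error of the ML phase estimate vs. SNR (assumed)
  A = 1; f0 = 0.05; N = 100; n = 0:N-1; phi = pi/2; K = 500;
  est_phi = @(r) mod(atan2(-sum(r.*sin(2*pi*f0*n)), sum(r.*cos(2*pi*f0*n))), 2*pi);
  s = A*cos(2*pi*f0*n + phi);                 % clean signal, power A^2/2
  SNRdB = -60:5:20; err = zeros(size(SNRdB));
  for k = 1:numel(SNRdB)
      Pw = (A^2/2) / 10^(SNRdB(k)/10);        % noise power for this SNR
      e = zeros(1, K);
      for m = 1:K
          r = s + sqrt(Pw)*randn(1, N);       % AWGN realization of power Pw
          e(m) = abs(est_phi(r) - phi);
      end
      err(k) = mean(e);
  end
  plot(SNRdB, err); xlabel('SNR (dB)'); ylabel('Average absolute error');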
6. BIAS ACCORDING TO NUMBER OF SAMPLES

As shown in Figure 3, the bias decreases as the number of samples increases. Caution is needed with these realizations because, as a random variable, the result can vary significantly for small N, as illustrated in Figure 4.
For this reason, we calculate the bias every five values of N, specifically N = {10, 15, 20, ..., 1000}. The result is shown in Figure 5, where the clear exponential decay of the curve can be observed.
Fig. 3: Bias for N = 10, 100, 1000

Fig. 5: Bias for N = 10, 20, 30, ..., 1000

Fig. 4: Bias for N = 10, 100, 1000 (a different realization)

Fig. 6: Bias for N = 10, 11, 12, ..., 100

If we zoom into Figure 5, we see that the bias passes through zero at every increment of 10 in N. The bias function appears to be sinusoidal with a decreasing amplitude. This pattern is even more evident when testing with consecutive values of N, as shown in Figure 6. Thus, it can be observed that the bias oscillates with a period of 10 samples in N.
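A sketch of how the bias curve could be generated follows; we assume K = 1000 noisy realizations per value of N, since K is not specified in the statement:

  % Sketch: bias of the ML phase estimate as a function of N (assumed K)
  A = 1; f0 = 0.05; phi = pi/2; Pw = 0.1; K = 1000;
  Nvec = 10:5:1000; bias = zeros(size(Nvec));
  for k = 1:numel(Nvec)
      N = Nvec(k); n = 0:N-1; est = zeros(1, K);
      for m = 1:K
          r = A*cos(2*pi*f0*n + phi) + sqrt(Pw)*randn(1, N);
          est(m) = mod(atan2(-sum(r.*sin(2*pi*f0*n)), ...
                              sum(r.*cos(2*pi*f0*n))), 2*pi);
      end
      bias(k) = mean(est) - phi;              % bias = E[phi_hat] - phi
  end
  plot(Nvec, bias); xlabel('N samples'); ylabel('Bias');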
7. VARIANCE ACCORDING TO NUMBER OF SAMPLES

As in the previous exercise, we are asked to calculate a statistical parameter; in this case, we perform the same procedure as before, but now focusing on the variance.
Variance is the statistical parameter that quantifies the variability of the data and follows the formula:

var(x) = E[(x − E[x])²]

First, we observed what happens when we generate K noisy signals and calculate the phase variance while fixing the number of signal samples to N values. This calculation produces a random number that depends on the noise in all realizations.
After this, we examined the trend of the variance when varying N. Let us first see what happens if we repeat the same process but now make N a vector such that:

N = [10, 100, 1000]

The resulting 3-point plot is shown in Figure 7.
In this case, we do not observe significant variation in the result if we run the code multiple times, unlike in the previous section. This could be because the variance has a quadratic factor in its formula, which may reduce the random effect of the noise on the final variance result.
Although the plot does not vary significantly with each execution, we follow the same approach as in the previous exercise. Instead of N having 3 values, we now set N as a vector defined by:

N = [10 : 5 : 1000]

This creates a vector starting at 10 and increasing by 5 until reaching 1000, which allows us to better observe the variance trend based on the number of samples. Applying this vector, we obtain the result shown in Figure 8.
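The corresponding Matlab sketch only changes the statistic collected inside the loop, again assuming K = 1000 realizations per value of N:

  % Sketch: variance of the ML phase estimate as a function of N (assumed K)
  A = 1; f0 = 0.05; phi = pi/2; Pw = 0.1; K = 1000;
  Nvec = 10:5:1000; v = zeros(size(Nvec));
  for k = 1:numel(Nvec)
      N = Nvec(k); n = 0:N-1; est = zeros(1, K);
      for m = 1:K
          r = A*cos(2*pi*f0*n + phi) + sqrt(Pw)*randn(1, N);
          est(m) = mod(atan2(-sum(r.*sin(2*pi*f0*n)), ...
                              sum(r.*cos(2*pi*f0*n))), 2*pi);
      end
      v(k) = var(est);                        % sample variance over the K runs
  end
  plot(Nvec, v); xlabel('Number of Samples'); ylabel('Variance');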
Fig. 7: Variance of the simulated phase for different numbers of samples

Fig. 9: Bias of the simulated phase in the range 300–470 samples

Fig. 8: Variance of the simulated phase for different numbers of samples

Fig. 10: Variance of the simulated phase in the range 300–470 samples

In this case, we can see that the plot now has more than just 3 points, allowing us to observe the variance trend more clearly as the number of signal samples increases. We can clearly see how the plot tends towards zero. In fact, it tends toward the CRB, but we will examine that in more detail in the next section.
To highlight the difference in the effect of noise when calculating variance versus bias, we performed a comparison over the same range in both simulations. As shown in Figures 9 and 10, the bias and variance simulations are displayed within the same range.
We observe that, in the case of bias, there is a trend towards zero, but with greater variability. In contrast, in the variance simulation we see a similar downward trend but in a more regular manner, with less variability.

8. MSE ACCORDING TO NUMBER OF SAMPLES

Finally, in this last section, we are asked to calculate the MSE, this time using a vector for the number of samples defined as N = [10 : 5 : 1000]. To do this, we use the following expression:

MSE(θ̂_ML) = bias²(θ̂_ML) + var(θ̂_ML)    (3)

The CRB is a deterministic value that depends on the number of samples and the amplitude of the signal, and it has the approximate form:

CRB(ϕ) ≃ 2σw² / (N A²)    (4)

This has been obtained from the expression shown in the theory classes:

CRB{θ} = 1 / ( −E[ ∂² ln f(x; θ) / ∂θ² ] )    (5)
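Combining the two previous sketches, the MSE/CRB comparison of equations (3) and (4) could be simulated as follows; K = 1000 realizations per value of N is again our own assumption:

  % Sketch: empirical MSE of the ML phase estimate vs. the approximate CRB
  A = 1; f0 = 0.05; phi = pi/2; Pw = 0.1; K = 1000;
  Nvec = 10:5:1000; mse = zeros(size(Nvec));
  for k = 1:numel(Nvec)
      N = Nvec(k); n = 0:N-1; est = zeros(1, K);
      for m = 1:K
          r = A*cos(2*pi*f0*n + phi) + sqrt(Pw)*randn(1, N);
          est(m) = mod(atan2(-sum(r.*sin(2*pi*f0*n)), ...
                              sum(r.*cos(2*pi*f0*n))), 2*pi);
      end
      mse(k) = (mean(est) - phi)^2 + var(est);  % MSE = bias^2 + variance, eq. (3)
  end
  crb = 2*Pw ./ (Nvec * A^2);                   % CRB(phi) ~ 2*sigma_w^2/(N*A^2), eq. (4)
  plot(Nvec, mse, Nvec, crb); legend('MSE', 'CRB');
  xlabel('Samples'); ylabel('MSE and CRB');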
Fig. 11: MSE and CRB for 200–1000 samples

Fig. 13: MSE and CRB for 1–10000 samples, with A = 5

Fig. 12: MSE and CRB for 1–30 samples

If we simulate it in MATLAB and overlay both curves in the same plot, we obtain the result in Figure 11.
As we can see in Figure 11, the MSE tends to the CRB. This figure is shown from 200 to 1000 samples in order to observe the behaviour with enough resolution. For smaller sample counts, the difference grows, as we can see in Figure 12.
The MSE cannot be smaller than the CRB, by definition. We can observe in Section 7 that the variance tends to a value close to zero; in fact, what is really happening is that the variance is approaching the CRB. This happens when the estimator is asymptotically efficient.
As a final reflection, let us see what happens if the signal amplitude is greater than one. By analyzing the CRB formula, we deduce that the CRB will be smaller. This can be seen in Figure 13. For this simulation, we used a signal amplitude of A = 5. We can compare this figure with Figure 12 and conclude that the CRB is smaller when A is larger, and the MSE continues to follow the CRB. We can relate this result to the inverse of the SNR, since the noise power is in the numerator of the CRB and the squared amplitude in the denominator serves as a measure of signal power. It makes sense that the CRB is lower when the amplitude is higher, as there will be a greater separation between the information and the noise.

CONCLUSIONS

The maximum likelihood estimator (MLE) for phase estimation has been successfully implemented and validated through both theoretical derivations and practical simulations. The estimator demonstrated high accuracy in noiseless conditions, yielding results that closely matched the known phase values. As the presence of noise increased, the accuracy of the estimator degraded, which was quantitatively assessed through the signal-to-noise ratio (SNR).
Furthermore, the analysis of bias showed a decreasing trend as the number of samples increased. The results indicate that with a sufficient number of samples the bias stabilizes, suggesting improved reliability of the phase estimate under varying noise conditions. Overall, this laboratory exercise highlights the fact that the MLE in signal processing may be asymptotically efficient under specific conditions, and it emphasizes the importance of considering noise characteristics and sample sizes for robust estimator performance.
