Signals Filters
Signals
Signals are representations of quantities and variations of physical things,
measured with respect to something else. Examples include voltage vs. time,
pixel value vs. pixel position, and depth vs. voltage.
Analog Signals
Analog signals are continuous signals that represent time-varying quantities. They
are characterized by their ability to assume an infinite number of values over a
continuous range. These signals are a direct representation of physical quantities,
such as sound, light, temperature, or pressure.
Characteristics:
Applications:
How to store analog signals: the vinyl disc example. What are the problems with
vinyl discs?
Discrete-Time Signals
Discrete-time signals are a class of signals generated by sampling continuous
signals at specific intervals. Unlike analog signals, discrete-time signals are
defined only at discrete points in time; between these points, the signal's value
is not defined. They remain distinct from truly digital signals, however, because
their amplitude can still take on an infinite number of values.
Characteristics:
Applications:
Digital Signals
Digital signals are signals that have been both sampled and quantized. They are
defined at discrete intervals in time and have discrete amplitude levels. This
double discretization makes digital signals ideal for electronic processing and
transmission, as they are less susceptible to noise and degradation compared to
analog signals.
Characteristics:
Applications:
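A tiny sketch of this double discretization in Python (the 5 Hz sine, the 100 Hz
sampling rate, and the 8-level quantizer below are arbitrary illustrative choices,
not values taken from these notes):

import numpy as np

Fs = 100.0                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1.0/Fs)                # discrete sampling instants (sampling)
samples = np.sin(2*np.pi*5*t)                # samples of a 5 Hz signal, still real-valued

levels = 8                                   # number of quantization levels
step = 2.0/levels                            # signal range [-1, 1] split into 8 steps
digital = np.round(samples/step)*step        # quantized amplitudes (quantization)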
Comparison
Representation: Analog signals represent physical measurements directly in a
continuous form, while discrete-time signals represent these measurements at
specific time intervals without quantizing their amplitude. Digital signals
represent information in a binary form, suitable for processing by digital
systems.
Fidelity and Noise: Analog signals can offer high fidelity but are more
susceptible to noise and distortion during transmission and processing.
Discrete-time signals bridge the gap between analog and digital by allowing
for the temporal discretization of signals, retaining a high degree of fidelity
while easing the transition to digital processing. Digital signals, by their nature,
resist noise and distortion better due to their discrete representation, making
them ideal for long-distance transmission and digital processing.
The Fourier Transform is a mathematical operation that decomposes a function (in
our case, a time-domain signal) into its constituent frequencies. The DFT is
specifically designed for signals sampled at discrete intervals, making it ideal for
digital signals. The FFT is an algorithm that efficiently computes the DFT,
significantly reducing the computational complexity.
Practical Example: Signal Analysis
Consider a digital signal comprising multiple sinusoidal components with different
frequencies. In the time domain, these components might be difficult to
distinguish, especially if they overlap or if the signal is noisy. By applying the FFT,
each component can be identified as a peak in the frequency spectrum at its
respective frequency, making it easier to analyze the signal's composition, identify
noise, and apply targeted filtering.
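A short sketch of this kind of analysis using NumPy (the 50 Hz and 120 Hz
components, the noise level, and the 1 kHz sampling rate are illustrative choices,
not values from these notes):

import numpy as np

Fs = 1000.0                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1.0/Fs)                 # one second of samples
# two sinusoidal components plus random noise
x = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*120*t) + 0.2*np.random.randn(t.size)

X = np.fft.rfft(x)                            # FFT of the real-valued signal
freqs = np.fft.rfftfreq(t.size, d=1.0/Fs)     # matching frequency axis in Hz

# the two largest peaks in the spectrum sit near 50 Hz and 120 Hz
peak_bins = np.argsort(np.abs(X))[-2:]
print(freqs[peak_bins])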
Conclusion
The frequency domain representation of digital signals provides a powerful tool
for understanding and manipulating signals in ways that are not always apparent
in the time domain. Through the use of the Fourier Transform and its algorithms,
signals can be analyzed and processed efficiently, enabling advancements in
telecommunications, audio and video processing, and beyond. Whether for
filtering, compression, or spectrum analysis, the insights gained from the
frequency domain are indispensable in the field of digital signal processing.
The Fast Fourier Transform (FFT) is a powerful computational tool used in digital
signal processing (DSP) to efficiently compute the Discrete Fourier Transform
(DFT) of a sequence. The DFT is a fundamental operation that transforms a
sequence of complex or real numbers in the time domain into a sequence of
complex numbers in the frequency domain. This transformation reveals the
frequency components of the original time-domain signal, including their
amplitudes and phases. The FFT significantly reduces the computational
complexity of performing a DFT from O(N²) to O(N log N), where N is the number
of points in the time-domain signal. This efficiency is crucial for
real-time signal processing applications.
Understanding the FFT Algorithm
The FFT algorithm exploits the symmetry and periodicity properties of the DFT to
break down a large DFT problem into smaller ones. The most common FFT
algorithm is the Cooley-Tukey algorithm, which recursively divides the DFT of a
sequence into smaller DFTs, leading to a dramatic reduction in the computation
time. The algorithm is most efficient when N is a power of two, but variations
exist for other cases.
Applications:
Spectrum analysis
Image processing
#include <math.h>
#include <complex.h>

// Recursive FFT function (Cooley-Tukey; N must be a power of 2)
void fft(complex double* X, int N) {
    if (N <= 1) return;

    // Divide step: split the array into even- and odd-indexed elements
    complex double even[N/2], odd[N/2];
    for (int i = 0; i < N/2; i++) {
        even[i] = X[2*i];
        odd[i]  = X[2*i + 1];
    }

    // Conquer step: recursively transform the two halves
    fft(even, N/2);
    fft(odd, N/2);

    // Combine step: merge the halves using the twiddle factors
    for (int k = 0; k < N/2; k++) {
        complex double t = cexp(-I * 2 * M_PI * k / N) * odd[k];
        X[k]       = even[k] + t;
        X[k + N/2] = even[k] - t;
    }
}

int main() {
    int N = 512;            // Size of the input (must be a power of 2)
    complex double x[N];    // Input array to be transformed
    double Fs = 1000.0;     // Sampling rate in Hz
    double f = 15;          // Input signal frequency in Hz

    // Initialize x with a simple sinusoidal signal
    for (int i = 0; i < N; i++) {
        x[i] = sin(2 * M_PI * i * f / Fs);
    }

    // Apply the FFT in place; x now holds the frequency-domain values
    fft(x, N);

    return 0;
}
Explanation
Divide Step: The FFT function first divides the input array X into two smaller
arrays, even and odd , containing the even-indexed and odd-indexed elements,
respectively. This step is based on the principle that a DFT can be separated
into its even and odd components.
Conquer Step: It then recursively applies the FFT to these smaller arrays. This
recursion continues until the base case of a single-element array is reached,
at which point the function simply returns because the DFT of a single point is
the point itself.
Combine Step: After the recursive calls, the algorithm combines the DFTs of
the even and odd arrays using the twiddle factors ( cexp(-I * 2 * M_PI * k / N) )
to compute the final FFT result. The twiddle factors are complex exponential
functions that account for the rotation in the complex plane associated with
each frequency component.
Note
The example provided is a basic implementation meant for educational purposes
and may need modifications for practical applications, including optimizing for
specific hardware or handling non-power-of-two sequence lengths. It uses
complex numbers from the C standard library ( <complex.h> ) and demonstrates the
FFT's divide-and-conquer approach, highlighting how the FFT algorithm
efficiently computes the frequency domain representation of a signal.
Nyquist criterion: https://www.youtube.com/watch?v=yWqrx08UeUs
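A small self-contained illustration of the criterion discussed in the video (the
9 Hz tone and 10 Hz sampling rate are arbitrary choices): sampling at less than
twice the signal frequency makes a high-frequency tone indistinguishable from a
low-frequency one.

import numpy as np

f_sig = 9.0                          # signal frequency in Hz
Fs = 10.0                            # sampling rate in Hz (< 2*f_sig, so aliasing occurs)
n = np.arange(20)                    # sample indices
samples = np.sin(2*np.pi*f_sig*n/Fs)

# the 9 Hz tone folds down to |Fs - f_sig| = 1 Hz at this sampling rate
alias = np.sin(2*np.pi*(Fs - f_sig)*n/Fs)
print(np.allclose(samples, -alias))  # True: same samples up to a sign flip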
Filters
Filters are tools for modifying or analyzing signals, crucial in signal processing and
communication systems.
Basics of Filtering
Filterability of Data: Not all data is equally amenable to filtering; the choice of
filter depends on the characteristics of the data and the desired outcome.
How It Works
The Moving Average Filter calculates the average of a specific number of data
points in a dataset and then slides over to the next data point, continuing the
process throughout the data set. The specific number of data points used in each
average is called the "window size" or "period" of the filter. The result is a
smoothed version of the original signal.
2. Exponential Moving Average (EMA): Gives more weight to the most recent
data points, making it more responsive to new information. It uses a weighting
factor that decreases the weights of older data points exponentially (see the
sketch below).
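A minimal sketch of the EMA (the recursive form y[i] = alpha*x[i] + (1 - alpha)*y[i-1]
and the parameter name alpha are the usual convention, not something given in these
notes):

import numpy as np

def ema(x, alpha):
    # newer samples get weight alpha; older history decays by (1 - alpha) each step
    y = np.zeros_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha*x[i] + (1 - alpha)*y[i-1]
    return y

# example: smooth a noisy ramp
data = np.arange(50) + np.random.normal(0, 2, 50)
smoothed = ema(data, alpha=0.2)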
Applications
Noise Reduction: By averaging out random fluctuations, the filter makes it
easier to observe the true signal or trend in the data.
Advantages
Simplicity: Moving average filters are straightforward to understand and
implement, requiring no complex mathematics.
Flexibility: The window size can be adjusted based on the desired level of
smoothing or the application's specific needs.
Limitations
Lag: Moving average filters introduce a delay between the input signal and the
output signal, which can be problematic for real-time applications.
Edge Effects: At the start and end of the data set, the filter can't apply the
same window size, which might result in less accurate smoothing.
Non-Adaptive: The filter treats all data points with equal importance (in the
case of SMA and WMA), not adapting to changes in the signal's variability or
frequency content.
# header initializations
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

def movavg(inp, n):
    # simple moving average with window size n, output aligned to the window centre
    N = inp.size
    op = np.zeros(N)
    if N < n:
        return np.mean(inp)
    for i in range(N - n + 1):
        op[i + n//2] = np.mean(inp[i:i+n])   # average the n samples in the window
    return op

a = np.sin(np.array(range(70))*np.pi/100.0)        # signal
c = 0.1*np.sin(np.array(range(70))*40*np.pi/100)   # noise
t = a + c                                          # adding signal and noise
b = movavg(t, 10)

plt.plot(a, label='true signal', color='black')
plt.plot(t, label='noisy signal', color='red')
plt.plot(b, label='filtered output', color='cyan')
plt.legend(loc='upper left')
plt.show()
def wmovavg(inp, n, alpha):
    # weighted moving average over a window of size n
    N = inp.size
    op = np.zeros(N)
    s = (1 - alpha**n)/(1 - alpha)       # sum of the weights alpha**0 .. alpha**(n-1)
    if N < n:
        return np.mean(inp)
    for i in range(N - n):
        acc = 0.0
        for j in range(n):
            # exponentially decaying weights alpha**j across the window
            # (assumed weighting order; the original line was truncated)
            acc += (alpha**j) * inp[i + j]
        op[i + n//2] = acc/s             # normalize by the weight sum
    return op

a = np.sin(np.array(range(70))*np.pi/100.0)        # signal
c = 0.1*np.sin(np.array(range(70))*40*np.pi/100)   # noise
t = a + c                                          # adding signal and noise
b = wmovavg(t, 10, 0.9)

plt.plot(a, label='true signal', color='black')
plt.plot(t, label='noisy signal', color='red')
plt.plot(b, label='filtered output', color='cyan')
plt.legend(loc='upper left')
plt.show()
GH Filters
The GH Filter, often referred to as the g-h filter, is a simple yet effective approach
to estimate the state of a linear system. It's a type of predictive filter that
combines two concepts: g, the gain for updating the state estimate (position, for
instance), and h, the gain for updating the rate estimate (velocity, in motion
tracking scenarios).
Objective: To provide a simple method to estimate the current state and rate
of change for a system based on noisy measurements. It's especially useful
when the model dynamics are roughly known but subjected to random errors.
Applications: Used in scenarios where more complex filters like the Kalman
filter might be considered overkill, such as in basic tracking and sensor fusion
tasks where computational simplicity is desired.
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

def g_h_filter(z, x_est, dx, g, h, dt, first_time=0):
    # On the first sample, start the estimate from the measurement itself
    # (assumed behaviour; the original initialization code was not shown).
    if first_time:
        x_est = z
    # prediction step: project the state forward using the current rate
    x_pred = x_est + dx * dt
    # update step
    residual = z - x_pred
    dx = dx + h * residual / dt
    x_est = x_pred + g * residual
    return x_est, dx

"""
'data' contains the data to be filtered.
'x0' is the initial value for our state variable.
'dx' is the initial change rate for our state variable.
'g' is the g-h filter's g scale factor.
'h' is the g-h filter's h scale factor.
'dt' is the length of the time step.
"""
def g_h_filter_array(data, x0, dx, g, h, dt=1.):
    x_est = x0
    results = []
    k = 1
    for z in data:
        x_est, dx = g_h_filter(z, x_est, dx, g, h, dt, first_time=k)
        k = 0
        results.append(x_est)
    return np.array(results)

noise = np.random.normal(0, 0.5, 1000)
t = np.arange(1000)/1000
f = 1
sig = np.sin(2*np.pi*f*t)
noisy = sig + noise

a = g_h_filter_array(noisy, 0, 0.03, 0.1, 0.001, 1)

plt.plot(noisy, label='noisy signal')
plt.plot(a, label='filtered output')
plt.plot(sig, color='black', label='original signal')
plt.legend(loc='lower left')
plt.show()
Understanding Gaussian Noise
Gaussian noise is defined by two parameters: its mean (μ) and its variance
(σ²). The mean typically represents the noise's center, which is often zero in a
signal processing context, indicating that the noise can be both positive and
negative with equal probability. The variance indicates the noise's spread or
intensity; a higher variance means the noise can have a wider range of values and
thus a more significant impact on the signal.
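For example, zero-mean Gaussian noise with a chosen variance can be generated and
checked numerically (a small sketch; the parameter values are arbitrary):

import numpy as np

mu, sigma = 0.0, 0.5                 # mean and standard deviation (variance = sigma**2)
noise = np.random.normal(mu, sigma, 100000)

print(noise.mean())                  # close to 0
print(noise.var())                   # close to sigma**2 = 0.25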
Image Processing: In imaging, Gaussian noise can affect the quality of photos
and videos. Noise reduction algorithms, often employing Gaussian models, are
crucial for enhancing image quality.
Conclusion
The Gaussian noise model plays a pivotal role in the field of signal processing,
providing a mathematical framework for understanding and mitigating noise in
various systems. From the conversion of analog signals to digital to the
processing and transmission of these signals, addressing Gaussian noise is
essential for maintaining signal integrity and achieving high-quality results in
electronics, communication, and beyond. Whether through hardware design
considerations or software algorithms for noise reduction, the principles
underlying the Gaussian noise model are integral to the development of modern
technology.
Here we can see that the effective variance is always smaller than either of the
two individual variances: combining two measurements reduces the error. This
idea can be used to improve the g-h filter, since it suggests how to choose g
and h.
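A minimal sketch of that combination, assuming the standard inverse-variance
weighting of two independent Gaussian measurements (the numbers below are made up
for illustration):

def fuse(mu1, var1, mu2, var2):
    # combine two independent noisy measurements of the same quantity
    var = 1.0/(1.0/var1 + 1.0/var2)      # effective variance, smaller than both inputs
    mu = var*(mu1/var1 + mu2/var2)       # variance-weighted mean
    return mu, var

mu, var = fuse(10.2, 4.0, 9.7, 1.0)
print(mu, var)                           # fused variance 0.8 < min(4.0, 1.0)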
Kalman filter:
Before we start, let us consider a simple real-life system: a slider whose speed
can be controlled, with a distance sensor (perhaps an ultrasonic sensor) mounted
on one side measuring the distance to the slider with noise. We want to estimate
the position of this slider accurately.
The Kalman Filter is a powerful algorithm used for estimating the state of a linear
dynamic system from a series of noisy measurements. It is widely used in
applications such as navigation and tracking systems, economics, and
engineering. Instead of diving deep into the mathematical equations, I will
introduce the Kalman Filter conceptually and demonstrate its implementation
using Python code.
Key Concepts
State Vector: Represents the true state of the system you're trying to
estimate.
Python Implementation
Let's implement a simple Kalman Filter in Python. We'll simulate a system where
we want to estimate a position and velocity by measuring only the position with
some noise.
Step 1: Setup
First, we need to define our initial state, error covariance, process noise,
measurement noise, and other necessary matrices.
import numpy as np
# Initial state: [position, velocity] (assumed starting guess)
x = np.array([[0.], [0.]])
# Initial uncertainty
P = np.array([[1000., 0.], [0., 1000.]])
# Transition matrix
F = np.array([[1., 1.], [0., 1.]])
# Measurement matrix (we only measure position)
H = np.array([[1., 0.]])
# Process noise, measurement noise, and control input (assumed values)
Q = np.eye(2) * 0.001
R = np.array([[1.]])
u = np.zeros((2, 1))
Step 2: Prediction
The prediction step projects the current state estimate ahead in time.
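The notes do not show the body of this step, so here is a minimal sketch that
matches the predict(x, P, F, Q, u) call used in the measurement loop below;
treating u as a directly additive control term is an assumption.

def predict(x, P, F, Q, u):
    # project the state forward one step and grow the uncertainty by the process noise
    x = F @ x + u
    P = F @ P @ F.T + Q
    return x, P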
Step 3: Update
The update step incorporates a new measurement into the state estimate.
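The corresponding update function is also not shown in the notes; a minimal
sketch, assuming the measurement noise matrix R defined in Step 1 and the
hypothetical name update:

def update(x, P, Z, H, R):
    # standard Kalman update: weigh the measurement residual by the Kalman gain
    y = Z - H @ x                        # measurement residual
    S = H @ P @ H.T + R                  # residual covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return x, P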
# `measurements` is the sequence of noisy position readings
for Z in measurements:
    # Predict
    x, P = predict(x, P, F, Q, u)
    # Update with the new measurement
    x, P = update(x, P, Z, H, R)
This example demonstrates a very basic implementation of the Kalman Filter. The
actual application of the Kalman Filter involves more complexities depending on
the system dynamics, but the core concepts remain the prediction and update
steps that continuously refine the estimate of the system's state.
Conclusion
The Kalman Filter is a robust method for tracking and prediction in linear systems.
Its efficiency and effectiveness have made it a standard tool in various fields. The
Python code provided offers a basic implementation, illustrating the process of
prediction and update to achieve more accurate estimates over time. As you delve
deeper into specific applications, you'll encounter more complex models and
adaptations of the Kalman Filter, such as the Extended Kalman Filter and the
Unscented Kalman Filter for nonlinear systems.
Annexure
Angle Representation and Conversions
Roll, Pitch, Yaw, and Quaternions: Discusses the conversion between these
two formats for representing 3D orientation, vital in robotics and aerospace.
// Quaternion -> Euler angles (ZYX convention). The opening of this function was
// not shown in the notes; its signature and the intermediate terms t0-t4 are an
// assumed reconstruction of the standard conversion, mirroring toQuaternion below.
EulerianAngle Flight::toEulerAngle(QuaternionData data)
{
    EulerianAngle ans;
    // intermediate terms from the quaternion components (q0 = w, q1 = x, q2 = y, q3 = z)
    double t0 = +1.0 - 2.0 * (data.q2 * data.q2 + data.q3 * data.q3);
    double t1 = +2.0 * (data.q0 * data.q3 + data.q1 * data.q2);
    double t2 = +2.0 * (data.q0 * data.q2 - data.q1 * data.q3);
    double t3 = +2.0 * (data.q0 * data.q1 + data.q2 * data.q3);
    double t4 = +1.0 - 2.0 * (data.q1 * data.q1 + data.q2 * data.q2);
    // clamp to keep asin's argument in [-1, 1]
    t2 = t2 > 1.0 ? 1.0 : t2;
    t2 = t2 < -1.0 ? -1.0 : t2;
    ans.pitch = asin(t2);
    ans.roll = atan2(t3, t4);
    ans.yaw = atan2(t1, t0);
    return ans;
}

// Euler angles -> quaternion (ZYX convention)
QuaternionData Flight::toQuaternion(EulerianAngle data)
{
    QuaternionData ans;
    double t0 = cos(data.yaw * 0.5);
    double t1 = sin(data.yaw * 0.5);
    double t2 = cos(data.roll * 0.5);
    double t3 = sin(data.roll * 0.5);
    double t4 = cos(data.pitch * 0.5);
    double t5 = sin(data.pitch * 0.5);
    ans.q0 = t2 * t4 * t0 + t3 * t5 * t1;
    ans.q1 = t3 * t4 * t0 - t2 * t5 * t1;
    ans.q2 = t2 * t5 * t0 + t3 * t4 * t1;
    ans.q3 = t2 * t4 * t1 - t3 * t5 * t0;
    return ans;
}