
Signals, Filters

Sensors are crucial in converting physical phenomena into measurable electrical signals.

Signals
Signals are representations of quantities and variations of physical things, measured with respect to something else. Examples include voltage vs. time, pixel value vs. pixel position, depth vs. voltage, etc.

Analog Signals
Analog signals are continuous signals that represent time-varying quantities. They
are characterized by their ability to assume an infinite number of values over a
continuous range. These signals are a direct representation of physical quantities,
such as sound, light, temperature, or pressure.

Characteristics:

Continuous in both time and amplitude.

Can represent variations in physical quantities with high fidelity.

Applications:

Audio and video broadcasting, where the continuous variation of signals is analogous to the original sounds and images.

Analog electronics, including amplifiers and radio transmitters.

How to store analog signals: the vinyl disc example. What are the problems with vinyl discs?

Discrete-Time Signals
Discrete-time signals are a class of signals generated by sampling continuous
signals at specific intervals. Unlike analog signals, discrete-time signals are
defined only at discrete points in time. However, between these points, the
signal's value is not defined, making them distinct from truly digital signals. The
amplitude of discrete-time signals can still take on an infinite number of values.

Characteristics:

Defined at discrete intervals in time but not quantized in amplitude.

Can be derived from analog signals through the process of sampling without quantization.

Applications:

Digital signal processing (DSP), where signals are manipulated algorithmically for improvements or alterations.

Systems that require the analysis or processing of signals in a digital form but are not concerned with the constraints of digital systems, such as fixed amplitude levels.

Digital Signals
Digital signals are signals that have been both sampled and quantized. They are
defined at discrete intervals in time and have discrete amplitude levels. This
double discretization makes digital signals ideal for electronic processing and
transmission, as they are less susceptible to noise and degradation compared to
analog signals.

Characteristics:

Discrete in both time and amplitude.

Represented by binary digits, making them suitable for computer systems and digital electronics.

Applications:

Computing and digital electronics, where binary representation is fundamental.

Communication systems that rely on digital protocols and encoding schemes, such as cellular networks and the Internet.

Comparison
Representation: Analog signals represent physical measurements directly in a
continuous form, while discrete-time signals represent these measurements at
specific time intervals without quantizing their amplitude. Digital signals

represent information in a binary form, suitable for processing by digital
systems.

Fidelity and Noise: Analog signals can offer high fidelity but are more
susceptible to noise and distortion during transmission and processing.
Discrete-time signals bridge the gap between analog and digital by allowing
for the temporal discretization of signals, retaining a high degree of fidelity
while easing the transition to digital processing. Digital signals, by their nature,
resist noise and distortion better due to their discrete representation, making
them ideal for long-distance transmission and digital processing.

Processing: Analog signal processing is performed with analog electronic circuits, which can introduce noise. Discrete-time signal processing can utilize
algorithms that mimic digital processing techniques, offering flexibility and
precision. Digital signal processing (DSP) is performed with digital circuits and
processors, allowing for complex manipulation, error correction, and more
sophisticated analysis than is possible with analog processing.

Frequency Domain Representation of Digital Signals:


The frequency domain representation of digital signals is a fundamental concept
in signal processing that offers a complementary perspective to the time domain
representation. While the time domain focuses on how a signal changes over time,
the frequency domain reveals the signal's constituent frequencies. This
representation is invaluable for analyzing, processing, and understanding the
characteristics of digital signals, especially for tasks like filtering, compression,
and spectrum analysis.

Basics of Frequency Domain


In the frequency domain, a signal is represented by its frequency components
along with their amplitudes and phases. This transformation from time domain to
frequency domain is typically achieved using the Fourier Transform (FT) or, more
commonly in digital signal processing (DSP), the Discrete Fourier Transform (DFT)
and its efficient algorithm, the Fast Fourier Transform (FFT).

Fourier Transform and its Significance

The Fourier Transform is a mathematical operation that decomposes a function (in
our case, a time-domain signal) into its constituent frequencies. The DFT is
specifically designed for signals sampled at discrete intervals, making it ideal for
digital signals. The FFT is an algorithm that efficiently computes the DFT,
significantly reducing the computational complexity.

Applications in Signal Processing


Filter Design: Frequency domain representation is crucial for designing digital
filters, such as low-pass, high-pass, and band-pass filters. It allows engineers
to precisely tailor the filter characteristics to attenuate or amplify specific
frequency components.

Spectrum Analysis: Analyzing the spectrum of a digital signal helps in identifying the signal's frequency content, which is essential for applications like telecommunications, audio processing, and diagnostics in machinery where different frequencies carry different information.

Signal Compression: In data compression, especially for audio and image files, the frequency domain representation enables identifying and removing less perceptually significant components, leading to more efficient storage and transmission.

Vector and Frequency Domain Representation


The frequency domain can also be visualized as a vector space, where each
frequency component of the signal is represented by a vector in a
multidimensional space. This conceptualization facilitates operations like filtering
and analysis by treating them as manipulations in this vector space.

Transforming Signals: FFT in Action


The FFT algorithm is a cornerstone of digital signal processing, enabling the
practical application of frequency domain techniques. By transforming a time-
domain signal into its frequency components, the FFT provides a clear picture of
the signal's frequency makeup. This transformation is reversible, allowing for
operations to be performed in the frequency domain and then transformed back to
the time domain without losing information.

Practical Example: Signal Analysis
Consider a digital signal comprising multiple sinusoidal components with different
frequencies. In the time domain, these components might be difficult to
distinguish, especially if they overlap or if the signal is noisy. By applying the FFT,
each component can be identified as a peak in the frequency spectrum at its
respective frequency, making it easier to analyze the signal's composition, identify
noise, and apply targeted filtering.

Conclusion
The frequency domain representation of digital signals provides a powerful tool
for understanding and manipulating signals in ways that are not always apparent
in the time domain. Through the use of the Fourier Transform and its algorithms,
signals can be analyzed and processed efficiently, enabling advancements in
telecommunications, audio and video processing, and beyond. Whether for
filtering, compression, or spectrum analysis, the insights gained from the
frequency domain are indispensable in the field of digital signal processing.

Gaussian noise model:

The Gaussian noise model is a fundamental concept in the field of signal processing, particularly relevant when discussing the conversion of analog signals to digital format and the various noise models that can affect this process.
Gaussian noise, also known as normal noise, is characterized by its probability
density function following a normal distribution, which is symmetrical and bell-
shaped. This type of noise is ubiquitous in electronic systems and signal
processing due to its natural occurrence in many physical processes.

The Fast Fourier Transform (FFT) is a powerful computational tool used in digital
signal processing (DSP) to efficiently compute the Discrete Fourier Transform
(DFT) of a sequence. The DFT is a fundamental operation that transforms a
sequence of complex or real numbers in the time domain into a sequence of
complex numbers in the frequency domain. This transformation reveals the
frequency components of the original time-domain signal, including their
amplitudes and phases. The FFT significantly reduces the computational complexity of performing a DFT from O(N²) to O(N log N), where N is the number of points in the time-domain signal. This efficiency is crucial for real-time signal processing applications.
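
For reference, the DFT that the FFT computes can be written as (standard definition, consistent with the C code further below):

X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-i 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1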

Understanding the FFT Algorithm
The FFT algorithm exploits the symmetry and periodicity properties of the DFT to
break down a large DFT problem into smaller ones. The most common FFT
algorithm is the Cooley-Tukey algorithm, which recursively divides the DFT of a
sequence into smaller DFTs, leading to a dramatic reduction in the computation
time. The algorithm is most efficient when N is a power of two, but variations
exist for other cases.

Applications of the FFT


FFT is used in numerous applications including but not limited to:

Digital signal processing (audio, video, digital communications)

Spectrum analysis

Fast convolution and correlation

Image processing

Numerical solutions of partial differential equations

C Code Example for FFT Implementation


Below is a simplified C code example that demonstrates an FFT implementation.
This example focuses on clarity and understanding rather than optimization or
handling edge cases. It implements the Cooley-Tukey FFT algorithm for
sequences whose length is a power of two.

#include <stdio.h>
#include <math.h>
#include <complex.h>

// Recursive FFT function (Cooley-Tukey, N must be a power of two)
void fft(complex double* X, int N) {
    if (N <= 1) return;

    // Divide step: split the array into even- and odd-indexed elements
    complex double even[N/2], odd[N/2];
    for (int i = 0; i < N/2; i++) {
        even[i] = X[2*i];
        odd[i]  = X[2*i + 1];
    }

    // Conquer step: recursively apply the FFT to the even and odd arrays
    fft(even, N/2);
    fft(odd, N/2);

    // Combine step: merge the results of the smaller DFTs into the final result
    for (int k = 0; k < N/2; k++) {
        complex double t = cexp(-I * 2 * M_PI * k / N) * odd[k];
        X[k]       = even[k] + t;
        X[k + N/2] = even[k] - t;
    }
}

int main() {
    int N = 512;          // Size of the input (must be a power of 2)
    complex double x[N];  // Input array to be transformed
    double Fs = 1000.0;   // Sampling rate in Hz
    double f = 15;        // Input signal frequency in Hz

    // Initialize x with a simple sinusoidal signal
    for (int i = 0; i < N; i++) {
        x[i] = sin(2 * M_PI * i * f / Fs);
    }

    // Apply FFT (in place)
    fft(x, N);

    // Print the FFT result and corresponding frequencies
    for (int i = 0; i < N; i++) {
        double frequency = i * Fs / N;
        printf("X[%d] = %f + %fi, Frequency = %f Hz\n", i, creal(x[i]), cimag(x[i]), frequency);
    }

    return 0;
}

Explanation
Divide Step: The FFT function first divides the input array X into two smaller
arrays, even and odd , containing the even-indexed and odd-indexed elements,
respectively. This step is based on the principle that a DFT can be separated
into its even and odd components.

Conquer Step: It then recursively applies the FFT to these smaller arrays. This
recursion continues until the base case of a single-element array is reached,
at which point the function simply returns because the DFT of a single point is
the point itself.

Combine Step: After the recursive calls, the algorithm combines the DFTs of
the even and odd arrays using the twiddle factors ( cexp(-I * 2 * M_PI * k / N) )
to compute the final FFT result. The twiddle factors are complex exponential
functions that account for the rotation in the complex plane associated with
each frequency component.
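
In equation form, the combine step is the standard radix-2 butterfly, where E[k] and O[k] denote the DFTs of the even- and odd-indexed subsequences:

X[k] = E[k] + e^{-i 2\pi k / N} \, O[k]
X[k + N/2] = E[k] - e^{-i 2\pi k / N} \, O[k], \qquad k = 0, \ldots, N/2 - 1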

Note
The example provided is a basic implementation meant for educational purposes
and may need modifications for practical applications, including optimizing for
specific hardware or handling non-power-of-two sequence lengths. It uses
complex numbers from the C standard library ( <complex.h> ) and demonstrates the
FFT's divide-and-conquer approach, highlighting how the FFT algorithm
efficiently computes the frequency domain representation of a signal.

Nyquist criterion: https://www.youtube.com/watch?v=yWqrx08UeUs
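
As a brief summary of what the linked video covers: the Nyquist criterion states that a bandlimited signal can only be reconstructed from its samples if the sampling rate exceeds twice the highest frequency present,

f_s > 2 f_{\max}

Sampling below this rate causes aliasing, where higher frequencies fold back and appear as lower ones.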

Signals Beyond Time


Images as Signals: Treating images as 2D signals for operations like
compression, which can be achieved through Fourier transform techniques.
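
As an illustrative sketch only (not from the original notes), the same idea can be tried with NumPy's 2D FFT: transform an image, discard small-magnitude frequency coefficients, and reconstruct an approximation. The synthetic 64×64 array below merely stands in for a real image.

import numpy as np

# Synthetic "image": a smooth pattern built from two sinusoids
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = np.sin(2 * np.pi * x / 16) + 0.5 * np.cos(2 * np.pi * y / 8)

# Transform to the frequency domain
F = np.fft.fft2(img)

# Keep only the strongest 5% of coefficients (simple thresholding)
threshold = np.percentile(np.abs(F), 95)
F_compressed = np.where(np.abs(F) >= threshold, F, 0)

# Reconstruct an approximation of the image from the kept coefficients
img_approx = np.real(np.fft.ifft2(F_compressed))
print("Reconstruction error:", np.mean((img - img_approx) ** 2))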

Filters
Filters are tools for modifying or analyzing signals, crucial in signal processing and
communication systems.

Basics of Filtering
Filterability of Data: Not all data is equally amenable to filtering; the choice of
filter depends on the characteristics of the data and the desired outcome.

Filter Types and Applications


Low Pass, High Pass, Band Pass Filters: Each serves different purposes, such
as smoothing signals, removing noise, or isolating a specific frequency range.

Easy to implement smoothing filters:


Moving Average
The Moving Average Filter is a simple yet effective digital filter used in signal
processing for smoothing data or time series. It works by creating a series of
averages of different subsets of the full dataset. The main purpose of the Moving
Average Filter is to reduce noise in a dataset, making it easier to identify the true
underlying patterns. This filter is particularly useful in time series analysis, where
it can help in detecting trends over time without the interference of short-term
fluctuations.

How It Works
The Moving Average Filter calculates the average of a specific number of data
points in a dataset and then slides over to the next data point, continuing the
process throughout the data set. The specific number of data points used in each
average is called the "window size" or "period" of the filter. The result is a
smoothed version of the original signal.

Types of Moving Average Filters


1. Simple Moving Average (SMA): Each value in the window contributes equally
to the final average. Mathematically, it is the sum of the data points within the
window, divided by the number of data points.

2. Exponential Moving Average (EMA): Gives more weight to the most recent
data points, making it more responsive to new information. It uses a weighting
factor to decrease the weights of older data points exponentially.

3. Weighted Moving Average (WMA): Allows for the manual assignment of weights to each data point in the window, providing flexibility in emphasizing particular data points more than others. (The three variants are written out as formulas right after this list.)
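
A minimal sketch of the three variants, for window size n, data x_t, smoothing factor α, and user-chosen weights w_j (normalization conventions vary between sources):

\mathrm{SMA}_t = \frac{1}{n} \sum_{j=0}^{n-1} x_{t-j}

\mathrm{EMA}_t = \alpha \, x_t + (1 - \alpha) \, \mathrm{EMA}_{t-1}, \qquad 0 < \alpha < 1

\mathrm{WMA}_t = \frac{\sum_{j=0}^{n-1} w_j \, x_{t-j}}{\sum_{j=0}^{n-1} w_j}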

Applications
Noise Reduction: By averaging out random fluctuations, the filter makes it
easier to observe the true signal or trend in the data.

Trend Analysis: In financial markets, moving averages are used to identify trends in stock prices or trading volumes.

Data Preprocessing: In machine learning and data science, moving averages are used to prepare and clean time series data before analysis or modeling.

Advantages
Simplicity: Moving average filters are straightforward to understand and
implement, requiring no complex mathematics.

Effectiveness: Despite their simplicity, these filters are quite effective at smoothing data and reducing random noise.

Flexibility: The window size can be adjusted based on the desired level of
smoothing or the application's specific needs.

Limitations
Lag: Moving average filters introduce a delay between the input signal and the
output signal, which can be problematic for real-time applications.

Edge Effects: At the start and end of the data set, the filter can't apply the
same window size, which might result in less accurate smoothing.

Non-Adaptive: The filter treats all data points with equal importance (in the
case of SMA and WMA), not adapting to changes in the signal's variability or
frequency content.

# Header initializations
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

def movavg(inp, n):
    """Simple moving average with window size n, output centred on the window."""
    N = inp.size
    op = np.zeros(inp.size)
    if N < n:
        # Not enough samples for a full window: fall back to the overall mean
        return np.full(inp.size, np.mean(inp))
    else:
        for i in range(op.size - n + 1):
            # Average the n samples starting at index i, store at the window centre
            op[int(i + n/2)] = np.mean(inp[i:i+n])
        return op

a = np.sin(np.array(range(70)) * np.pi / 100.0)            # signal
c = 0.1 * np.sin(np.array(range(70)) * 40 * np.pi / 100)   # noise
t = a + c                                                   # signal plus noise
b = movavg(t, 10)
ax = plt.plot(a, label='true signal', color='black')
ax = plt.plot(t, label='noisy signal', color='red')
ay = plt.plot(b, label='filtered output', color='cyan')
plt.legend(loc='upper left')
plt.show()

def wmovavg(inp, n, alpha):
    """Weighted moving average with exponentially decaying weights alpha**j."""
    N = inp.size
    op = np.zeros(inp.size)
    s = (1 - alpha**n) / (1 - alpha)   # sum of the weights alpha**0 ... alpha**(n-1)
    if N < n:
        return np.full(inp.size, np.mean(inp))
    else:
        for i in range(op.size - n):
            for j in range(n):
                # Weight each sample in the window by alpha**j and accumulate
                op[int(i + n/2)] += (alpha**j) * inp[i + j]
            op[int(i + n/2)] /= s   # normalize by the sum of weights
        return op

a = np.sin(np.array(range(70)) * np.pi / 100.0)            # signal
c = 0.1 * np.sin(np.array(range(70)) * 40 * np.pi / 100)   # noise
t = a + c                                                   # signal plus noise
b = wmovavg(t, 10, 0.9)
ax = plt.plot(a, label='true signal', color='black')
ax = plt.plot(t, label='noisy signal', color='red')
ay = plt.plot(b, label='filtered output', color='cyan')
plt.legend(loc='upper left')
plt.show()

Advanced Filtering Techniques

GH Filters
The GH Filter, often referred to as the g-h filter, is a simple yet effective approach
to estimate the state of a linear system. It's a type of predictive filter that
combines two concepts: g, the gain for updating the state estimate (position, for
instance), and h, the gain for updating the rate estimate (velocity, in motion
tracking scenarios).

Objective: To provide a simple method to estimate the current state and rate
of change for a system based on noisy measurements. It's especially useful
when the model dynamics are roughly known but subject to random errors.

Method: The GH filter updates estimates by a blend of the previous estimate and the new measurement, weighted by the g and h parameters, respectively. The choice of g and h values influences the filter's responsiveness to measurement versus prediction. (The update equations are sketched right after this list.)

Applications: Used in scenarios where more complex filters like the Kalman filter might be considered overkill, such as in basic tracking and sensor fusion tasks where computational simplicity is desired.
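
One step of the g-h filter, written out to match the Python implementation below (time step Δt, measurement z, state estimate x̂, rate estimate ẋ):

x_{\text{pred}} = \hat{x} + \dot{x} \, \Delta t
r = z - x_{\text{pred}}
\dot{x} \leftarrow \dot{x} + h \, r / \Delta t
\hat{x} \leftarrow x_{\text{pred}} + g \, r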

%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

# Filter a single 1-d measurement

def g_h_filter(z, x_est, dx, g, h, dt, first_time=0):
    if first_time == 1:
        # On the first call, start the estimate at the first measurement
        x_est = z

    # Prediction step
    x_pred = x_est + (dx * dt)

    # Update step
    residual = z - x_pred
    dx = dx + h * residual / dt
    x_est = x_pred + g * residual
    return x_est, dx

# Filter an array of 1-d measurements

"""
'data' contains the data to be filtered.
'x0' is the initial value for our state variable
'dx' is the initial change rate for our state variable
'g' is the g-h's g scale factor
'h' is the g-h's h scale factor
'dt' is the length of the time step
"""
def g_h_filter_array(data, x0, dx, g, h, dt=1.):
    x_est = x0
    results = []
    k = 1
    for z in data:
        x_est, dx = g_h_filter(z, x_est, dx, g, h, dt, first_time=k)
        k = 0
        results.append(x_est)
    return np.array(results)

noise = np.random.normal(0, 0.5, 1000)
t = np.arange(1000) / 1000
f = 1
sig = np.sin(2 * np.pi * f * t)
noisy = sig + noise
a = g_h_filter_array(noisy, 0, 0.03, 0.1, 0.001, 1)
plt.plot(noisy, label='noisy signal')
plt.plot(a, label='filtered output')
plt.plot(sig, color='black', label='original signal')
plt.legend(loc='lower left')
plt.show()

Understanding Gaussian Noise
Gaussian noise is defined by two parameters: its mean (μ) and its variance (σ²). The mean typically represents the noise's center, which is often zero in a
signal processing context, indicating that the noise can be both positive and
negative with equal probability. The variance indicates the noise's spread or
intensity; a higher variance means the noise can have a wider range of values and
thus a more significant impact on the signal.
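
For reference, the probability density function of Gaussian noise with mean μ and variance σ² is:

p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)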

Role in Signal Processing


In the context of converting analog signals to digital, Gaussian noise can affect
both the sampling and quantization stages. During sampling, noise can introduce
errors in the timing and amplitude of the sampled points. In quantization, where
continuous amplitude values are mapped to discrete levels, Gaussian noise can
cause the signal to be mapped to incorrect levels, leading to quantization noise
that degrades the signal's quality.

Gaussian Noise Model in Practice


The Gaussian noise model is crucial for designing and analyzing systems that are
robust to noise. For instance, when engineers design analog-to-digital converters
(ADCs), understanding the nature of Gaussian noise helps in specifying the bit
depth and sampling rate required to achieve a desired signal-to-noise ratio (SNR).
Similarly, in digital signal processing (DSP), filtering techniques such as the
Moving Average Filter or more sophisticated GHK Filters are applied to mitigate
the effects of Gaussian noise, enhancing signal quality by averaging out the noise
over multiple samples or using predictive models to estimate and remove the
noise component.
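
As a commonly quoted rule of thumb (for an ideal quantizer driven by a full-scale sine wave, an assumption not stated in these notes), the quantization-limited SNR of an N-bit ADC is approximately

\mathrm{SNR} \approx 6.02\,N + 1.76 \ \text{dB}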

Applications and Implications


Gaussian noise is not only a challenge but also a tool in various applications:

Communication Systems: Gaussian noise is a primary consideration in the design of communication systems, where it can affect the transmission and reception of signals. Techniques like error correction coding are used to combat the effects of noise on data integrity.

Image Processing: In imaging, Gaussian noise can affect the quality of photos
and videos. Noise reduction algorithms, often employing Gaussian models, are
crucial for enhancing image quality.

Scientific Measurements: Many scientific instruments deal with Gaussian noise when measuring physical phenomena. Understanding and modeling this noise is essential for accurate data analysis.

Conclusion
The Gaussian noise model plays a pivotal role in the field of signal processing,
providing a mathematical framework for understanding and mitigating noise in
various systems. From the conversion of analog signals to digital to the
processing and transmission of these signals, addressing Gaussian noise is
essential for maintaining signal integrity and achieving high-quality results in
electronics, communication, and beyond. Whether through hardware design
considerations or software algorithms for noise reduction, the principles
underlying the Gaussian noise model are integral to the development of modern
technology.

Combining 2 noisy measurements

When you combine (AND) two independent random variables, you can multiply their probability distributions. If both of these measurements have Gaussian noise, a special phenomenon occurs when you combine the two random variables.

When the two distributions are multiplied, the effective variance is always smaller than either of the two individual variances. This means that when we combine two measurements, the error goes down. This concept can be used to make the g-h filter better, since it gives a principled way to set its g and h gains.
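
A minimal sketch of why this holds: multiplying two Gaussian densities N(μ₁, σ₁²) and N(μ₂, σ₂²) gives (after renormalization) another Gaussian with

\mu = \frac{\sigma_2^2 \, \mu_1 + \sigma_1^2 \, \mu_2}{\sigma_1^2 + \sigma_2^2},
\qquad
\sigma^2 = \frac{\sigma_1^2 \, \sigma_2^2}{\sigma_1^2 + \sigma_2^2}

Since σ² is smaller than both σ₁² and σ₂², the combined estimate is always more certain than either measurement alone.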

Kalman filter:
Before we start, let us consider a simple real-life system: a slider whose speed can be controlled, with a distance sensor (perhaps an ultrasonic sensor) mounted on one side measuring its distance with noise. We want to estimate the position of this slider accurately.

The Kalman Filter is a powerful algorithm used for estimating the state of a linear
dynamic system from a series of noisy measurements. It is widely used in
applications such as navigation and tracking systems, economics, and
engineering. Instead of diving deep into the mathematical equations, I will
introduce the Kalman Filter conceptually and demonstrate its implementation
using Python code.

Introduction to the Kalman Filter


The Kalman Filter operates in two main steps: prediction and update. In the
prediction step, it uses a model of the system to predict the next state and the
uncertainty of that state. In the update step, it incorporates a new measurement
into the prediction to refine the estimate. This process repeats for each time step,
with the filter using the latest estimate and measurement to produce a new
estimate.
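
In matrix form (the standard Kalman filter equations, matching the Python code further below):

Prediction:  x' = F x + u, \qquad P' = F P F^{T} + Q
Update:      y = z - H x', \qquad S = H P' H^{T} + R, \qquad K = P' H^{T} S^{-1}
             x = x' + K y, \qquad P = (I - K H) P'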

Key Concepts
State Vector: Represents the true state of the system you're trying to
estimate.

Measurement Vector: Represents the measurements or observations used to update the state estimate.

Error Covariance Matrix: Represents the uncertainty of the state estimate.

Process Noise: Represents the uncertainty in the prediction step, accounting for the unknown influences on the system.

Measurement Noise: Represents the uncertainty in the measurement.

Python Implementation
Let's implement a simple Kalman Filter in Python. We'll simulate a system where
we want to estimate a position and velocity by measuring only the position with
some noise.

Step 1: Setup
First, we need to define our initial state, error covariance, process noise,
measurement noise, and other necessary matrices.

import numpy as np

# Initial state (position and velocity)
x = np.array([[0], [0]])  # initial position and velocity

# Initial uncertainty
P = np.array([[1000, 0], [0, 1000]])

# External motion (assumed to be 0 here)
u = np.array([[0], [0]])

# Transition matrix
F = np.array([[1, 1], [0, 1]])

# Measurement matrix
H = np.array([[1, 0]])

# Measurement noise covariance
R = np.array([[1]])

# Process noise covariance (assuming it's small)
Q = np.array([[1e-4, 0], [0, 1e-4]])

Step 2: Prediction
The prediction step projects the current state estimate ahead in time.

def predict(x, P, F, Q, u):
    # Project the state and its covariance ahead one time step
    x_prime = F.dot(x) + u
    P_prime = F.dot(P).dot(F.T) + Q
    return x_prime, P_prime

Step 3: Update

The update step incorporates a new measurement into the state estimate.

def update(x_prime, P_prime, Z, H, R):
    Y = Z - H.dot(x_prime)  # Measurement residual
    S = H.dot(P_prime).dot(H.T) + R  # Residual covariance
    K = P_prime.dot(H.T).dot(np.linalg.inv(S))  # Kalman gain
    x = x_prime + K.dot(Y)
    P = (np.eye(x.shape[0]) - K.dot(H)).dot(P_prime)
    return x, P

Running the Filter


Now, we simulate some measurements and run the filter through a loop.

measurements = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # Example measurements

for Z in measurements:
    # Predict
    x, P = predict(x, P, F, Q, u)

    # Update with measurement
    x, P = update(x, P, np.array([[Z]]), H, R)

    print(f"Updated state: {x.flatten()}")

This example demonstrates a very basic implementation of the Kalman Filter. The
actual application of the Kalman Filter involves more complexities depending on
the system dynamics, but the core concepts remain the prediction and update
steps that continuously refine the estimate of the system's state.

Conclusion
The Kalman Filter is a robust method for tracking and prediction in linear systems.
Its efficiency and effectiveness have made it a standard tool in various fields. The
Python code provided offers a basic implementation, illustrating the process of

prediction and update to achieve more accurate estimates over time. As you delve
deeper into specific applications, you'll encounter more complex models and
adaptations of the Kalman Filter, such as the Extended Kalman Filter and the
Unscented Kalman Filter for nonlinear systems.

Annexure
Angle Representation and Conversions
Roll, Pitch, Yaw, and Quaternions: Discusses the conversion between these
two formats for representing 3D orientation, vital in robotics and aerospace.

EulerianAngle Flight::toEulerianAngle(QuaternionData data)
{
    EulerianAngle ans;

    double q2sqr = data.q2 * data.q2;

    double t0 = -2.0 * (q2sqr + data.q3 * data.q3) + 1.0;
    double t1 = +2.0 * (data.q1 * data.q2 + data.q0 * data.q3);
    double t2 = -2.0 * (data.q1 * data.q3 - data.q0 * data.q2);
    double t3 = +2.0 * (data.q2 * data.q3 + data.q0 * data.q1);
    double t4 = -2.0 * (data.q1 * data.q1 + q2sqr) + 1.0;

    // Clamp t2 to [-1, 1] so asin() stays well defined
    t2 = t2 > 1.0 ? 1.0 : t2;
    t2 = t2 < -1.0 ? -1.0 : t2;

    ans.pitch = asin(t2);
    ans.roll = atan2(t3, t4);
    ans.yaw = atan2(t1, t0);

    return ans;
}

QuaternionData Flight::toQuaternion(EulerianAngle data)
{
    QuaternionData ans;
    double t0 = cos(data.yaw * 0.5);
    double t1 = sin(data.yaw * 0.5);
    double t2 = cos(data.roll * 0.5);
    double t3 = sin(data.roll * 0.5);
    double t4 = cos(data.pitch * 0.5);
    double t5 = sin(data.pitch * 0.5);

    // Same yaw-pitch-roll convention as toEulerianAngle above
    ans.q0 = t2 * t4 * t0 + t3 * t5 * t1;
    ans.q1 = t3 * t4 * t0 - t2 * t5 * t1;
    ans.q2 = t2 * t5 * t0 + t3 * t4 * t1;
    ans.q3 = t2 * t4 * t1 - t3 * t5 * t0;
    return ans;
}
