0% found this document useful (0 votes)
3 views

5

The document provides comprehensive study notes on the representation of continuous and discrete-time signals, covering various types of signals, their properties, and classifications such as periodic vs. aperiodic and energy vs. power signals. It also discusses signal-to-noise ratio, correlation functions, and the characteristics of standard continuous and discrete-time signals. Additionally, it includes information on operations in continuous time signals and the implications of signal invertibility and causality.

Uploaded by

sudipmondal91
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
3 views

5

The document provides comprehensive study notes on the representation of continuous and discrete-time signals, covering various types of signals, their properties, and classifications such as periodic vs. aperiodic and energy vs. power signals. It also discusses signal-to-noise ratio, correlation functions, and the characteristics of standard continuous and discrete-time signals. Additionally, it includes information on operations in continuous time signals and the implications of signal invertibility and causality.

Uploaded by

sudipmondal91
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 62

Sudip Mondal <sudipmondal91@gmail.

com>

(no subject)
1 message

Sudip Mondal <[email protected]> Tue, Apr 20, 2021 at 7:46 PM


To: Sudip Mondal <[email protected]>

In this article, you will find the study notes on Representation of Continuous and Discrete-Time
Signals which will cover the topics such as Different types of Basic Continuous/Discrete Signals,
Energy & Power Signal.

Properties of Signals

A signal can be classified as periodic or aperiodic; discrete or continuous time; discrete of continuous-
valued; or as a power or energy signal. The following defines each of these terms. In addition, the signal-to-
noise ratio of a signal corrupted by noise is defined.
Periodic / Aperiodic:

A periodic signal repeats itself at regular intervals. In general, any signal x(t) for which

x(t) = x(t+T)

for all t is said to be periodic.

The fundamental period of the signal is the minimum positive, non-zero value of T for which above equation
is satisfied. If a signal is not periodic, then it is aperiodic.
Symmetric / Asymmetric:

There are two types of signal symmetry: odd and even. A signal x(t) has odd symmetry if and only if x(-t) = -
x(t) for all t. It has even symmetry if and only if x(-t) = x(t).
Continuous and Discrete Signals and Systems

A continuous signal is a mathematical function of an independent variable, which represents a set of real
numbers. It is required that signals are uniquely defined in except for a finite number of points.

A continuous time signal is one which is defined for all values of time. A continuous time signal does
not need to be continuous (in the mathematical sense) at all points in time. A continuous-time signal
contains values for all real numbers along the X-axis. It is denoted by x(t).
Basically, the Signals are detectable quantities which are used to convey some information about
time-varying physical phenomena. some examples of signals are human speech, temperature,
pressure, and stock prices.
Electrical signals, normally expressed in the form of voltage or current waveforms, they are some of
the easiest signals to generate and process.

Example: A rectangular wave is discontinuous at several points but it is continuous time signal.

Discrete / Continuous-Time Signals:

A continuous time signal is defined for all values of t. A discrete time signal is only defined for discrete
values of t = ..., t-1, t0, t1, ..., tn, tn+1, tn+2, ... It is uncommon for the spacing between tn and tn+1 to change
with n. The spacing is most often some constant value referred to as the sampling rate,

Ts = tn+1 - tn.

It is convenient to express discrete time signals as x(nTs)=x[n].

That is, if x(t) is a continuous-time signal, then x[n] can be considered as the nth sample of x(t).

Sampling of a continuous-time signal x(t) to yield the discrete-time signal x[n] is an important step in the
process of digitizing a signal.

Energy and Power Signal:

When the strength of a signal is measured, it is usually the signal power or signal energy that is of interest.

The signal power of x(t) is defined as

and the signal energy as

A signal for which Px is finite and non-zero is known as a power signal.


A signal for which Ex is finite and non-zero is known as an energy signal.
Px is also known as the mean-square value of the signal.
Signal power is often expressed in the units of decibels (dB).
The decibel is defined as

where P0 is a reference power level, usually equal to one squared SI unit of the signal.
For example if the signal is a voltage then the P0 is equal to one square Volt.
A Signal can be Energy Signal or a Power Signal but it can not be both. Also a signal can be neither a
Energy nor a Power Signal.
As an example, the sinusoidal test signal of amplitude A,

x(t)=Asin(ωt)

has energy Ex that tends to infinity and power ,

or in decibels (dB): 20log(A)-3

The signal is thus a power signal.


Signal to Noise Ratio:

Any measurement of a signal necessarily contains some random noise in addition to the signal. In the case
of additive noise, the measurement is

x(t)=s(t)+n(t)

where s(t) is the signal component and n(t) is the noise component.

The signal to noise ratio is defined as

or in decibels,

The signal to noise ratio is an indication of how much noise is contained in a measurement.

Standard Continuous Time Signals

Impulse Signal

where ∞ is the height of impulse signal having unit area.

and When A = 1 (unit impulse Area)


Step Signal

Unit Step Signal if A =1,

Ramp Signal

Unit Ramp Signal (A=1)


Parabolic Signal

Unit Parabolic Signal when A = 1,

Unit Pulse Signal


Sinusoidal Signal

Co-sinusoidal Signal:

Where, ω0 is the angular frequency in rad/sec

f0 = frequency in cycle/sec or Hz

T = time period in second

When

When ϕ = positive,

When ϕ = negative,

Sinusoidal Signal:

Where, Angular frequency in red/sec

f0 = frequency in cycle/sec or Hz

T = time period in second

When

When ϕ = positive,

When ϕ = negative,
Exponential Signal:

Real Exponential Signal

where, A and b are real.

Complex Exponential signal

The complex exponential signal can be represented in a complex plane by a rotating vector, which
rotates with a constant angular velocity of ω0 red/sec.
Exponentially Rising/Decaying Sinusoidal Signal

Triangular Pulse Signal


Signum Signal

SinC Signal
Gaussian Signal

Important points:

The sinusoidal and complex exponential signals are always periodic.


The sum of two periodic signals is also periodic if the ratio of their fundamental periods is a rational
number.
Ideally, an impulse signal is a signal with infinite magnitude and zero duration.
Practically, an impulse signal is a signal with large magnitude and short duration.

Classification of Continuous Time Signal: The continuous time signal can be classified as

1. Deterministic and Non-deterministic Signals:


The signal that can be completely specified by a mathematical equation is called a deterministic
signal. The step, ramp, exponential and sinusoidal signals are examples of deterministic signals.
The signal whose characteristics are random in nature is called a non-deterministic signal. The
noise signal from various sources like electronic amplifiers, oscillator etc., are examples of non-
deterministic signals.
Periodic and Non-periodic Signals
A periodic signal will have a definite pattern that repeats again and again over a certain period of
time.

x(t+T) = x(t)

2. Symmetric (even) and Anti-symmetric (odd) Signals

When a signal exhibits symmetry with respect to t = 0, then it is called an even signal.

x(-t) = x(t)

When a signal exhibits anti-symmetry with respect to t = 0, then it is called an odd signal.

x(-t) = -x(t)

Let

Where, even part of

odd part of

Discrete-Time Signals
The discrete signal is a function of a discrete independent variable. In a discrete time signal, the value of
discrete time signal and the independent variable time are discrete. The digital signal is same as discrete
signal except that the magnitude of the signal is quantized. Basically, discrete time signals can be obtained
by sampling a continuous-time signal. It is denoted as x(n).

Standard Discrete Time Signals

Digital Impulse Signal or Unit Sample Sequence

Impulse signal,

Unit Step Signal

Ramp Signal

Ramp signal,
Exponential Signal

Exponential Signal,

Discrete Time Sinusoidal Signal

A discrete-time sinusoid is periodic only if its frequency is a rational number.


Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2π are identical.

For the detailed schedule of GATE Electrical Engineering(EE) 2021 Champion Study Plan, click here

GATE Electrical Engineering(EE) 2021 Champion Study Plan

Click Here to Avail GATE EE Green Card!

The Candidates who have missed the schedule for GATE EC 2021 champion study plan can follow the
following link:

Detailed Schedule For GATE EC 2021 Champion Study Plan

Candidates can practice 150+ Mock Tests with Gradeup Green Card for exams like GATE, ESE, NIELIT
from the following link:
Click Here to Avail Electronics Engineering Green Card (150+ Mock Tests)

Get unlimited access to 24+ structured Live Courses all 150+ mock tests to boost your GATE 2021
Preparation with Gradeup Super:
Click here to avail Super for Electronics Engineering

Prep Smart. Score Better!

Thanks ko

Operations in Continuous Time Signals:


Periodic & Non-Periodic Signals:

A signal is a periodic signal if it completes a pattern within a measurable time frame, called a period
and repeats that pattern over identical subsequent periods.
The period is the smallest value of T satisfying g(t + T) = g(t) for all t. The period is defined so
because if g(t + T) = g(t) for all t, it can be verified that g(t + T') = g(t) for all t where T' = 2T, 3T, 4T,
... In essence, it's the smallest amount of time it takes for the function to repeat itself. If the period of a
function is finite, the function is called "periodic".
Functions that never repeat themselves have an infinite period, and are known as "aperiodic
functions".

Even & Odd Signals:

A function even function if it is symmetric about the y-axis. While, A signal is odd if it is inversely
symmetrical about the y-axis.

Even Signal, f(x) = f(-x)


Odd Signal, f(x) = - f(-x)

Note: Some functions are neither even nor odd. These functions can be written as a sum of even and odd
functions. A function f(x) can be expressed in terms of sum of an odd function and an even function.

Invertibility and Inverse Systems:

A system is invertible if distinct inputs results distinct outputs. As shown in the figure for the continuous-
time case, if a system is invertible, then an inverse system exists that, when cascaded with the original
system, results an output w(t) equal to the input x(t) to the first system.

An example of an invertible continuous-time system is y(t) = 2x(t),

for which the inverse system is w(t) = 1/2 y(t)

Causal System:

A system is causal if the output depends only on the input at the present time and in the past. Such
systems are often referred as non anticipative, as the system output does not anticipate future values of the
input. Similarly, if two inputs to a causal system are identical up to some point in time to or no the
corresponding outputs must also be equal up to this same time.

y1(t) = 2x(t) + x(t-1) + [x(t)]2 ⇒ Causal Signal

y1(t) = 2x(t) + x(t-1) + [x(t+2)] ⇒ Non-Causal Signal

Homogeneity (Scaling):

A system is said to be homogeneous if, for any input signal X(t), i.e. When the input signal is scaled, the
output signal is scaled by the same factor.

Time-Shifting / Time Reversal / Time Scaling

Time Shifting can be understood as shifting the signal in time. When a constant is added to the time, we
obtain the advanced signal, & when we decrease the time, we get the delayed signal.
Time Scaling:

Due to the scaling in time the output Signal may shrink or stretch it depends on the numerical value of
scaling factor.

Time Inversion:

Time Inversion referred as flipping the signal about the y-axis.

The Correlation Functions

Correlation is a mathematical operation which is similar to convolution. As convolution, in correlation two


signals are used to produce a third signal. The third signal is referred as cross-correlation of the two input
signals. When a signal is correlated with itself, the resultant signal is called the autocorrelation.

There are three basic definitions to define correlation function

(a) For an infinite duration waveform:-

It can be considered as a “power” based definition.

(b) For the finite duration waveform:- If the waveform exists only in the interval t1 ≤t ≤ t2.
It cab be considered as an “energy” based definition.

(c) For the periodic waveform:- f(t) is periodic with period T then.

for an arbitrary t0, it again can be considered as a “power” based definition.

Example:- Obtain the auto-correlation function of the square pulse which have amplitude a and duration T
as shown in figure below.

The wave form has a finite duration, and the auto-correlation function is

The auto-correlation function is developed graphically below

Properties of the Auto-correlation Function


The auto-correlation functions φff (τ ) and ρff (τ ) are even functions, that is

φff (−τ ) = φff (τ ), and ρff (−τ ) = ρff (τ )

A maximum value of ρff(τ ) (or φff(τ ) occurs at delay τ = 0,

|ρff(τ )| ≤ ρff (0), and |φff (τ )| ≤ φff (0)

and we note that is the “energy” of the waveform.

Similarly

ρff (τ) contains no phase information, and is independent of the time origin.
If f(t) is periodic with period T, φff (τ) is also periodic with period T.
If f(t) has zero mean (µ = 0), and f(t) is non-periodic, lim ρff (τ ) = 0.

Note on the relative “widths” of the Auto-correlation and Power/Energy Spectra

As in the case of Fourier analysis of waveforms, there is a general reciprocal relationship between the
width of a signal's spectrum and the width of its auto-correlation function. A narrow autocorrelation
function generally implies a “broad” spectrum

and a “broad” autocorrelation function generally implies a narrow-band waveform.

From the limit, if φff (τ)= δ(τ), then Φff (j Ω)=1, and the spectrum is said to be “white”.
The Cross-Correlation Function

The cross-correlation is referred as a measure of self-similarity between two waveforms f(t) and g(t). As in
the case of the auto-correlation functions we need two definitions:

in the case of infinite duration waveforms, and

for finite duration waveforms.

Example:- Find the cross-correlation function between the two functions as shown in figure.

It is clear in figure that g(t) is a delayed version of f(t). The cross-correlation is

where the peak occurs at τ = T2−T1 (the delay between the two signals).

Properties of the Cross-Correlation Function

φfg(τ ) = φgf (−τ), and the cross-correlation function is not necessarily an even function.
If φfg(τ ) = 0 for all τ , then f(t) and g(t) are said to be uncorrelated.
If g(t) = a.f(t−T), where a is a constant, that is g(t) is a scaled and delayed version of f(t), then φff(τ)
will have its maximum value at τ = T.
Cross-correlation is often used in optimal estimation of delay, such as in echolocation (radar, sonar),
and in GPS receivers.
The Cross-Power/Energy Spectrum

We define the cross-power/energy density spectra as the Fourier transforms of the cross-correlation
functions:
Rfg (jΩ) = (F−jΩ).G(jΩ)

Note that although Rff (jΩ) is real and even because ρff(τ) is real and even, this is not the case with the
cross-power/energy spectra,Φfg (jΩ) and Rfg(jΩ),and they are in general complex.

For the detailed schedule of GATE Electrical Engineering(EE) 2021 Champion Study Plan, click here

GATE Electrical Engineering(EE) 2021 Champion Study Plan

Click Here to Avail GATE EE Green Card!

The Candidates who have missed the schedule for GATE EC 2021 champion study plan can follow the
link:

Detailed Schedule For GATE EC 2021 Champion Study Plan

Candidates can practice 150+ Mock Tests with Gradeup Green Card for exams like GATE, ESE, NIELIT
from the following link:
Click Here to Avail Electronics Engineering Green Card (150+ Mock Tests)

Get unlimited access to 24+ structured Live Courses all 150+ mock tests to boost your GATE 2021
Preparation with Gradeup Super:
Click here to avail Super for Electronics Engineering

Thanks

Prep Smart. Score Better.

In this article, you will find the study notes on Linear Time-Invariant System, & Sampling Theorem
which will cover the topics such as LTI Systems, Convolution & the Properties of Convolution & Sampling
Theorem.

Linear Time-Invariant System:

Linear time-invariant systems (LTI systems) are a class of systems used in signals and systems that are
both linear and time-invariant. Linear systems are systems whose outputs for a linear combination of inputs
are the same as a linear combination of individual responses to those inputs. Time-invariant systems are
systems where the output does not depend on when an input was applied. These properties make LTI
systems easy to represent and understand graphically.

Linear systems have the property that the output is linearly related to the input. Changing the input in a
linear way will change the output in the same linear way. So if the input x1(t) produces the output y1(t) and
the input x2(t) produces the output y2(t), then linear combinations of those inputs will produce linear
combinations of those outputs. The input {x1(t)+x2(t)} will produce the output {y1(t)+y2(t)}. Further, the
input {a1x1(t)+a2x2(t)} will produce the output {a1y1(t)+a2y2(t)} for some constants a1 and a2.

In other words, for a system T over time t, composed of signals x1(t) and x2(t) with outputs y1(t) and y2(t) ,
Homogeneity Principle:

Superposition Principle:

Thus, the entirety of an LTI system can be described by a single function called its impulse response. This
function exists in the time domain of the system. For an arbitrary input, the output of an LTI system is
the convolution of the input signal with the system's impulse response.

Conversely, the LTI system can also be described by its transfer function. The transfer function is
the Laplace transform of the impulse response. This transformation changes the function from the time
domain to the frequency domain. This transformation is important because it turns differential
equations into algebraic equations, and turns convolution into multiplication.

In the frequency domain, the output is the product of the transfer function with the transformed input. The
shift from time to frequency is illustrated in the following image:

Homogeneity, additivity, and shift invariance may, at first, sound a bit abstract but they are very useful. To
characterize a shift-invariant linear system, we need to measure only one thing: the way
the system responds to a unit impulse. This response is called the impulse response function of the
system. Once we’ve measured this function, we can (in principle) predict how the system will
respond to any other possible stimulus.
Introduction to Convolution

Because here’s not a single answer to define what is ? In “Signals and Systems” probably we saw
convolution in connection with Linear Time Invariant Systems and the impulse response for such a
system. This multitude of interpretations and applications is somewhat like the situation with the definite
integral.

To pursue the analogy with the integral, in pretty much all applications of the integral there is a general
method at work:

Cut the problem into small pieces where it can be solved approximately.
Sum up the solution for the pieces, and pass to a limit.
Convolution Theorem

F(g∗f)(s)=Fg(s)Ff(s)

In other notation: If f(t)⇔ F(s) and g(t) ⇔ G(s) then (g∗f)(t)⇔ G(s)F(s)
In words: Convolution in the time domain corresponds to multiplication in the frequency domain.

For the Integral to make sense i.e., to be able to evaluate g(t−x) at points outside the interval from 0 to
1, we need to assume that g is periodic. it is not the issue the present case, where we assume that f(t)
and g(t) are defined for all t, so the factors in the integral

Convolution in the Frequency Domain

In Frequency Domain convolution theorem states that

F(g ∗ f)=Fg ·Ff

here we have seen that the whole thing is carried out for inverse Fourier transform, as follow:

F−1(g∗f)=F−1g·F−1f

F(gf)(s)=(Fg∗Ff)(s)

Multiplication in the time domain corresponds to convolution in the frequency domain.

By applying Duality Formula

F(Ff)(s)=f(−s) or F(Ff)=f− without the variable.

To derive the identity F(gf) = Fg∗Ff, we assume for convenience, h = Ff and k = Fg

then we can write as F(gf)=k∗h

The one thing we know is how to take the Fourier transform of a convolution, so, in the present
notation, F(k∗h)=(Fk)(Fh).

But now Fk =FFg = g−

and likewise Fh =FFf = f

So F(k∗h)=g−f− =(gf)−, or gf =F(k∗h)−

Now, finally, take the Fourier transform of both sides of this last equation

FF identity : F(gf)=F(F(k∗h)−)=k∗h =Fg∗Ff

Note: Here we are trying to prove F(gf)(s) = (Fg∗Ff)(s) rather than F(g∗f)=(Ff)(Fg) Because, it seems
more “natural” to multiply signals in the time domain and see what effect this has in the frequency domain,
so why not work with F(fg) directly? But write the integral for F(gf); there’s nothing you can do with it to get
toward Fg∗Ff.

There is also often a general method of convolutions:

Usually there’s something that has to do with smoothing and averaging,understood broadly.
You see this in both the continuous case and the discrete case.

Some of you who have seen convolution in earlier courses,you’ve probably heard the expression “flip and
drag”

Meaning of Flip & Drag: here’s the meaning of Flip & Drag is as follow

Fix a value t.The graph of the function g(x−t) has the same shape as g(x) but shifted to the right by t.
Then forming g(t − x) flips the graph (left-right) about the line x = t.
If the most interesting or important features of g(x) are near x = 0, e.g., if it’s sharply peaked there,
then those features are shifted to x = t for the function g(t − x) (but there’s the extra “flip” to keep in
mind).Multiply f(x) and g(t − x) and integrate with respect to x.

Averaging

I prefer to think of the convolution operation as using one function to smooth and average the
other. Say g is used to smooth f in g∗f. In many common applications g(x) is a positive function,
concentrated near 0, with total area 1.

Like a sharply peaked Gaussian, for example (stay tuned). Then g(t−x) is concentrated near t and still
has area 1. For a fixed t, forming the integral

The last expression is like a weighted average of the values of f(x) near x = t, weighted by the values
of (the flipped and shifted) g. That’s the averaging part of the convolution, computing the convolution
g∗f at t replaces the value f(t) by a weighted average of the values of f near t.

Smoothing

Again take the case of an averaging-type function g(t), as above. At a given value of t,( g ∗ f)(t) is a
weighted average of values of f near t.
Then Move t a little to a point t0. Then (g∗f)(t0) is a weighted average of values of f near t0, which will
include values of f that entered into the average near t.
Thus the values of the convolutions (g∗f)(t) and (g∗f)(t0) will likely be closer to each other than are the
values f(t) and f(t0). That is, (g ∗f)(t) is “smoothing” f as t varies — there’s less of a change between
values of the convolution than between values of f.

Other identities of Convolution

It’s not hard to combine the various rules we have and develop an algebra of convolutions. Such identities
can be of great use — it beats calculating integrals. Here’s an assortment. (Lower and uppercase letters
are Fourier pairs.)

(f ·g)∗(h·k)(t) ⇔ (F ∗G)·(H ∗K)(s)


{(f(t)+g(t))·(h(t)+k(t)} ⇔ {[(F + G)∗(H + K)]}(s)
f(t)·(g∗h)(t) ⇔ F ∗(G·H)(s)
Properties of Convolution

Here we are explaining the properties of convolution in both continuous and discrete domain

Associative
Commutative
Distributive properties
As a LTI system is completely specified by its impulse response, we look into the conditions on the
impulse response for the LTI system to obey properties like memory, stability, invertibility, and
causality.
According to the Convolution theorem in Continuous & Discrete time as follow:

For Discrete system .

For Continuous System

We shall now discuss the important properties of convolution for LTI systems.

1) Commutative property :

In Discrete time: x[n]*h[n] ⇔ h[n]*x[n]

Proof: Since we know that y[n] = x[n]*h[n]

Let us assume n-k = l so,

So it clear from the derived expression that ⇒ x[n]*h[n] ⇔ h[n]*x[n]


In Continuous time:

Proof

So x[t]*h[t] ⇔ h[t]*x[t]

2. Distributive Property

By this Property we will conclude that convolution is distributive over addition.

Discrete time: x[n]{α h1[n] + βh2[n]} = α {x[n] h1[n]}+ β{x[n] h2[n]} α & β are constant.
Continuous Time: x(t){α h1(t) + βh2(t)} = α{x(t)h1(t)} + β {x(t)h2(t)} α & β are constant.

3. Associative Property

Discrete Time y[n] = x[n]*h[n]*g[n]

x[n] * h1[n] * h2[n] = x[n] * (h1[n] * h2[n])

In Continuous Time:

[x(t) * h1(t)] * h2(t) = x(t) * [h1(t) * h2(t)]

If systems are connected in cascade:

∴ Overall impulse response of the system is:


4. Invertibility

A system is said to be invertible if there exist an inverse system which when connected in series with the
original system produces an output identical to input .

(x*δ)[n]= x[n]

(x*h*h-1)[n]= x[n]

(h*h-1)[n]= (δ)[n]

5. Causality

Discrete Time

Continuous Time

6. Stability

Discrete Time

Continuous Time

Sampling Theorem

The sampling process is usually described in the time domain. In this process, an analog signal is
converted into a corresponding sequence of samples that are usually spaced uniformly in time. Consider
an arbitrary signal x(t) of finite energy, which is specified for all time as shown in figure 1(a).

Suppose that we sample the signal x(t) instantaneously and at a uniform rate, once every TS second, as
shown in figure 1(b). Consequently, we obtain an infinite sequence of samples spaced TS seconds apart
and denoted by {x(nTS)}, where n takes on all possible integer values.

Thus, we define the following terms:

1. Sampling Period: The time interval between two consecutive samples is referred as sampling period.
In figure 1(b), TS is the sampling period.
2. Sampling Rate: The reciprocal of sampling period is referred as sampling rate, i.e.

fS = 1/TS
Sampling theorem provides both a method of reconstruction of the original signal from the sampled values
and also gives a precise upper bound on the sampling interval required for distortion less reconstruction. It
states that

A band-limited signal of finite energy, which has no frequency components higher that W Hertz, is
completely described by specifying the values of the signal at instants of time separated by 1/2W
seconds.
A band-limited signal of the finite energy, which has no frequency components higher that W Hertz,
may be completely recovered from a knowledge of its samples taken at the rate of 2W samples per
second.

Aliasing & Anti-aliasing

Aliasing is such an effect of violating the Nyquist-Shannon sampling theory. During sampling the base
band spectrum of the sampled signal is mirrored to every multifold of the sampling frequency. These
mirrored spectra are called alias.
The easiest way to prevent aliasing is the application of a steep sloped low-pass filter with half the
sampling frequency before the conversion. Aliasing can be avoided by keeping Fs>2Fmax.
Since the sampling rate for an analog signal must be at least two times as high as the highest
frequency in the analog signal in order to avoid aliasing.So in order to avoid this ,the analog signal is
then filtered by a low pass filter prior to being sampled, and this filter is called an anti-aliasing filter.
Sometimes the reconstruction filter after a digital-to-analog converter is also called an anti-aliasing
filter.

Explanation of Sampling Theorem

Consider a message signal m(t) bandlimited to W, i.e.

M(f) = 0 For |f| ≥ W


Then, the sampling Frequency fS, required to reconstruct the bandlimited waveform without any error, is
given by

Fs ≥ 2 W

Nyquist Rate

Nyquist rate is defined as the minimum sampling frequency allowed to reconstruct a bandlimited waveform
without error, i.e.

fN = min {fS} = 2W

Where W is the message signal bandwidth, and fS is the sampling frequency.

Nyquist Interval

The reciprocal of Nyquist rate is called the Nyquist interval (measured in seconds), i.e.

Where fN is the Nyquist rate, and W is the message signal bandwidth.

For the detailed schedule of GATE Electrical Engineering(EE) 2021 Champion Study Plan, click here

GATE Electrical Engineering(EE) 2021 Champion Study Plan

Click Here to Avail GATE EE Green Card!

The Candidates who have missed the schedule for GATE EC 2021 champion study plan can follow the
following link:

Detailed Schedule For GATE EC 2021 Champion Study Plan

Candidates can practice 150+ Mock Tests with Gradeup Green Card for exams like GATE, ESE, NIELIT
from the following link:
Click Here to Avail Electronics Engineering Green Card (150+ Mock Tests)

Get unlimited access to 24+ structured Live Courses all 150+ mock tests to boost your GATE 2021
Preparation with Gradeup Super:
Click here to avail Super for Electronics Engineering

Thanks

Prep Smart. Score Better.

In the present article, you will find the study notes on Fourier Series & Representation of Continuous
Periodic Signal which will cover the topics such as Fourier Theorem, Fourier Coefficients, Fourier Sine
Series, Fourier Cosine Series, Orthogonality Relations of Fourier Series Exponential Fourier Series
Representation, Even & Odd Symmetry
Fourier Series Representation

The Periodic functions are the functions which can define by relation f(t + P) = f(t) for all t.For example, if
f(t) is the amount of time between sunrise and sunset at a certain latitude, as a function of time t, and P is
the length of the year, then f(t + P) = f(t) for all t, since the Earth and Sun are in the same position after one
full revolution of the Earth around the Sun.
Fourier Theorem

Any arbitrary continuous time signal x(t) which is periodic with a fundamental period To, can be expressed
as a series of harmonically related sinusoids whose frequencies are multiples of fundamental frequency or
first harmonic. In other word, any periodic function of (t) can be represented by an infinite series of
sinusoids called as Fourier Series.

Periodic waveform expressed in the form of Fourier series, while non-periodic waveform may be
expressed by the Fourier transform.

The different forms of Fourier series are given as follows.

(i) Trigonometric Fourier series

(ii) Complex exponential Fourier series

(iii) Polar or harmonic form Fourier series.

Trigonometric Fourier Series

Any arbitrary periodic function x(t) with fundamental period T0 can be expressed as follows

This is referred to as trigonometric Fourier series representation of signal x(t). Here, ω0 = 2π/T0 is the
fundamental frequency of x(t) and coefficients a0, an, and bn are referred to as the trigonometric continuous
time Fourier series (CTFS) coefficients. The coefficients are calculated as follows.

Fourier Series Coefficient

From equation (ii), it is clear that coefficient a0 represents the average or mean value (also referred to as
the dc component) of signal x(t).

In these formulas, the limits of integration are either (–T0/2 to +T0/2) or (0 to T0). In general, the
integration may be taken over any one period of the signal, so the limits can be from t1 to t1 + T0,
where t1 is any time instant.
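As a sanity check, the coefficient formulas above can be evaluated numerically. The sketch below (the function name and test signal are illustrative, not from the original notes) approximates a0, an, and bn for a signal whose coefficients are known in closed form.

```python
import numpy as np

def trig_fourier_coeffs(x, T0, n_max, num=4096):
    # Sample one full period; for smooth periodic integrands the rectangle
    # rule converges extremely fast, so a plain mean approximates (1/T0)*integral.
    t = np.linspace(0.0, T0, num, endpoint=False)
    w0 = 2 * np.pi / T0
    xt = x(t)
    a0 = xt.mean()  # dc / average value, equation (ii)
    an = np.array([2 * np.mean(xt * np.cos(n * w0 * t)) for n in range(1, n_max + 1)])
    bn = np.array([2 * np.mean(xt * np.sin(n * w0 * t)) for n in range(1, n_max + 1)])
    return a0, an, bn

# x(t) = 1 + cos(2*pi*t): expect a0 = 1, a1 = 1, every other coefficient 0.
a0, an, bn = trig_fourier_coeffs(lambda t: 1 + np.cos(2 * np.pi * t), T0=1.0, n_max=3)
```

Because the test signal is already a finite Fourier series, the numerical coefficients match the closed-form ones to machine precision.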

Trigonometric Fourier Series Coefficients for Symmetrical Signals

If the periodic signal x(t) possesses some symmetry, then the continuous-time Fourier series (CTFS)
coefficients become easy to obtain. The various types of symmetry and the corresponding simplifications
of the Fourier series coefficients are discussed below.

Consider the Fourier series representation of a periodic signal x(t) defined in equation (i).

Even Symmetry: x(t) = x(–t)

If x(t) is an even function, then the product x(t) sin(nω0t) is odd and the integration in equation (iv)
becomes zero. That is, bn = 0 for all n, and the Fourier series representation reduces to

x(t) = a0 + Σ_{n=1}^{∞} an cos(nω0t)

For example, the signal x(t) shown in the figure below has even symmetry, so bn = 0 and its Fourier series
expansion contains cosine terms only. The constant a0 may or may not be zero.

Odd Symmetry: x(t) = –x(–t)

If x(t) is an odd function, then the product x(t) cos(nω0t) is also odd and the integration in equation (iii)
becomes zero, i.e. an = 0 for all n. Also, a0 = 0, because an odd symmetric function has a zero average
value. The Fourier series representation reduces to

x(t) = Σ_{n=1}^{∞} bn sin(nω0t)

For example, the signal x(t) shown in the figure below is odd symmetric, so an = a0 = 0 and its Fourier
series expansion contains sine terms only.

Fourier Sine Series

The Fourier sine series of an odd function S(x) with period 2π can be written as

S(x) = b1 sin x + b2 sin 2x + b3 sin 3x + ··· = Σ_{n=1}^{∞} bn sin nx    ------(2)

The sum S(x) inherits all three properties of its terms:

(i) Periodic: S(x + 2π) = S(x);  (ii) Odd: S(−x) = −S(x);  (iii) S(0) = S(π) = 0

Our first step is to compute from S(x) the number bk that multiplies sin kx.

Suppose S(x) = Σ bn sin nx. Multiply both sides by sin kx and integrate from 0 to π, using the sine series
in equation (2).

On the right side, all integrals are zero except for the one with n = k. Here the property of
"orthogonality" dominates: the sines make 90° angles in function space when their inner products are
integrals from 0 to π.
Orthogonality for sine Series

Condition for orthogonality:

∫_{0}^{π} sin nx sin kx dx = 0 if n ≠ k, and π/2 if n = k    ------(3)

Zero comes quickly if we integrate cos mx from 0 to π for any nonzero integer m:
∫_{0}^{π} cos mx dx = [sin mx / m] from 0 to π = 0 − 0 = 0.

Writing sin nx sin kx = (1/2) cos(n − k)x − (1/2) cos(n + k)x and integrating cos mx with m = n − k and
m = n + k proves the orthogonality of the sines.

The exception is when n = k. Then we are integrating sin^2(kx) = 1/2 − (1/2) cos 2kx:

∫_{0}^{π} S(x) sin kx dx = bk ∫_{0}^{π} sin^2(kx) dx = bk (π/2), so that bk = (2/π) ∫_{0}^{π} S(x) sin kx dx    ------(4)
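The orthogonality relation (3) is easy to verify numerically; a small sketch (the helper name `sine_inner` is ours, not from the notes):

```python
import numpy as np

# Midpoint-rule check of the integral in relation (3):
# integral_0^pi sin(nx) sin(kx) dx should be 0 for n != k and pi/2 for n = k.
N = 100_000
x = (np.arange(N) + 0.5) * np.pi / N  # midpoints of N subintervals of [0, pi]
dx = np.pi / N

def sine_inner(n, k):
    return np.sum(np.sin(n * x) * np.sin(k * x)) * dx
```

For example, `sine_inner(2, 3)` comes out essentially zero, while `sine_inner(3, 3)` comes out essentially π/2.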

Notice that S(x) sin kx is even (it has equal integrals from −π to 0 and from 0 to π).

We now consider the most important example of a Fourier sine series: the odd square wave with
SW(x) = 1 for 0 < x < π. It is an odd function with period 2π that vanishes at x = 0 and x = π.

Example:
Find the Fourier sine coefficients bk of the square wave SW(x) given above.

Solution:

For k = 1, 2, ..., using the formula (4) for the sine coefficient with S(x) = 1 between 0 and π:

bk = (2/π) ∫_{0}^{π} sin kx dx = (2/π) · (1 − cos kπ)/k

The even-numbered coefficients b2k are all zero because cos 2kπ = cos 0 = 1.
The odd-numbered coefficients bk = 4/πk decrease at the rate 1/k.
We will see that same 1/k decay rate for all functions formed from smooth pieces and jumps. Putting
the coefficients 4/πk and zero into the Fourier sine series for SW(x) gives

SW(x) = (4/π) [sin x + (sin 3x)/3 + (sin 5x)/5 + (sin 7x)/7 + ···]
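Numerically, the same coefficients come straight out of the bk integral; a short sketch (the helper name `b` is ours):

```python
import numpy as np

# b_k = (2/pi) * integral_0^pi SW(x) sin(kx) dx with SW(x) = 1 on (0, pi),
# evaluated with the midpoint rule. Closed form: 4/(pi*k) for odd k, 0 for even k.
N = 100_000
x = (np.arange(N) + 0.5) * np.pi / N  # midpoint grid on [0, pi]

def b(k):
    return (2 / np.pi) * np.sum(np.sin(k * x)) * (np.pi / N)
```

Evaluating `b(1)`, `b(2)`, `b(3)` reproduces 4/π, 0, and 4/(3π) to high accuracy.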

Fourier Cosine Series

The cosine series applies to even functions with C(−x) = C(x):

C(x) = a0 + a1 cos x + a2 cos 2x + ··· = a0 + Σ_{n=1}^{∞} an cos nx    -----(5)

Every cosine has period 2π. The figure above shows two even functions: the repeating ramp RR(x) and the
up-down train UD(x) of delta functions.

The sawtooth ramp RR is the integral of the square wave, and the delta functions in UD give the
derivative of the square wave. RR and UD will be valuable examples, one smoother than SW, one
less smooth.

First we find formulas for the cosine coefficients a0 and ak. The constant term a0 is the average value
of the function C(x):

a0 = (1/π) ∫_{0}^{π} C(x) dx = (1/2π) ∫_{−π}^{π} C(x) dx    -----(6)

We integrate the cosine series from 0 to π. On the right side, the integral of a0 is a0π (divide both
sides by π); all other integrals are zero. Equivalently, the integral over a full period, from −π to π
(or from 0 to 2π), is just doubled.
Orthogonality Relations of Fourier Series

Since, from the Fourier series representation, a periodic signal can be written as

F(x) = a0 + Σ_{n=1}^{∞} (an cos nx + bn sin nx)    -------(7)

the orthogonality conditions are as follows:

∫_{−π}^{π} cos mx cos nx dx = 0 (m ≠ n);  ∫_{−π}^{π} sin mx sin nx dx = 0 (m ≠ n);  ∫_{−π}^{π} cos mx sin nx dx = 0 (all m, n)

Proof of the orthogonality relations:

This is just a straightforward calculation using the periodicity of sine and cosine and either (or both) of
these two methods:

Energy in Function = Energy in Coefficients


There is also another important equation (the energy identity) that comes from integrating (F(x))^2. When
we square the Fourier series of F(x) and integrate from −π to π, all the "cross terms" drop out. The only
nonzero integrals come from 1^2, cos^2(kx), and sin^2(kx), multiplied by a0^2, ak^2, and bk^2:

∫_{−π}^{π} (F(x))^2 dx = 2π a0^2 + π (a1^2 + b1^2 + a2^2 + b2^2 + ···)

The energy in F(x) equals the energy in the coefficients.


The left-hand side is like the length squared of a vector, except that the vector is a function.
The right-hand side comes from an infinitely long vector of a's and b's.
The two lengths are equal, which says that the Fourier transform from function to vector is like an
orthogonal matrix.
Normalized by the constants √2π and √π, we have an orthonormal basis in function space.
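The energy identity can be checked numerically for a signal with known coefficients; a minimal sketch, assuming the test signal F(x) = 1 + cos x + sin 2x (so a0 = 1, a1 = 1, b2 = 1, all other coefficients zero):

```python
import numpy as np

# Left side: integral over [-pi, pi] of F(x)^2, computed numerically.
# Right side: 2*pi*a0^2 + pi*(sum of ak^2 + bk^2) with a0 = a1 = b2 = 1.
N = 1_000_000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
F = 1 + np.cos(x) + np.sin(2 * x)

energy_lhs = np.sum(F**2) * (2 * np.pi / N)  # rectangle rule over one period
energy_rhs = 2 * np.pi * 1**2 + np.pi * (1**2 + 1**2)
```

Both sides come out equal to 4π, as the identity predicts for this signal.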

Complex Fourier Series

In place of separate formulas for a0, ak, and bk, we may consider one formula for all the complex
coefficients ck. The function F(x) may then be complex. The Discrete Fourier Transform will be much
simpler when we use N complex exponentials for a vector.

The exponential form of the Fourier series of a periodic signal x(t) with period T0 is defined as

x(t) = Σ_{n=−∞}^{∞} cn e^{jnω0t}

where ω0 is the fundamental frequency given as ω0 = 2π/T0. The exponential Fourier series coefficients
cn are calculated from the following expression:

cn = (1/T0) ∫_{0}^{T0} x(t) e^{−jnω0t} dt

c0 = a0 is still the average of F(x), because e^0 = 1.

The orthogonality of e^{inx} and e^{ikx} can be checked by integrating:
∫_{−π}^{π} e^{inx} e^{−ikx} dx equals 2π when n = k and 0 when n ≠ k.
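That integral check can be sketched numerically as well (the helper name `exp_inner` is ours):

```python
import numpy as np

# integral over [-pi, pi] of e^{inx} * e^{-ikx} dx:
# 2*pi when n = k, and 0 otherwise.
N = 100_000
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = 2 * np.pi / N

def exp_inner(n, k):
    return np.sum(np.exp(1j * n * x) * np.exp(-1j * k * x)) * dx
```

On a uniform grid over one full period the sum of e^{imx} vanishes exactly for integer m ≠ 0, so the check is accurate to machine precision.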

Example:

Compute the Fourier series of f(t), where f(t) is the square wave with period 2π, defined over one
period by f(t) = 1 for 0 < t < π and f(t) = −1 for −π < t < 0.

The graph over several periods is shown below.


Solution:

Computing a Fourier series means computing its Fourier coefficients. We do this using the integral
formulas for the coefficients given with Fourier’s theorem in the previous note. For convenience we repeat
the theorem here.

By applying these formulas to the above waveform, we have to split the integrals into two pieces
corresponding to where f(t) is +1 and where it is −1.

Thus, for n ≠ 0, the cosine coefficients an vanish (the waveform is odd) and

bn = (1/π) [∫_{0}^{π} sin nt dt − ∫_{−π}^{0} sin nt dt] = (2/nπ)(1 − cos nπ)

For n = 0, a0 = 0, since the waveform has zero average value.

We have used the simplification cos nπ = (−1)^n to get a nice formula for the coefficients:
bn = 4/nπ for n odd, and bn = 0 for n even.

This then gives the Fourier series for f(t):

f(t) = (4/π) [sin t + (sin 3t)/3 + (sin 5t)/5 + ···]
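The convergence of this series can be watched numerically; a sketch (the helper name `partial_sum` is ours), assuming the series above:

```python
import numpy as np

# Partial sums of f(t) = (4/pi) * sum over odd n of sin(nt)/n
# for the +/-1 square wave with period 2*pi.
def partial_sum(t, n_terms):
    n = np.arange(1, 2 * n_terms, 2, dtype=float)  # odd harmonics 1, 3, 5, ...
    return (4 / np.pi) * np.sum(np.sin(n * t) / n)

# At t = pi/2 the square wave equals +1; the partial sums close in on it.
approximations = [partial_sum(np.pi / 2, k) for k in (1, 10, 100, 1000)]
```

Each added harmonic brings the partial sum closer to +1, though the overshoot near the jumps (the Gibbs phenomenon) never disappears.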

For the detailed schedule of GATE Electrical Engineering(EE) 2021 Champion Study Plan, click here

GATE Electrical Engineering(EE) 2021 Champion Study Plan

Click Here to Avail GATE EE Green Card!


Click here to avail Super for Electrical Engineering

The Candidates who have missed the schedule for GATE EC 2021 champion study plan can follow the
following link:

Detailed Schedule For GATE EC 2021 Champion Study Plan

Candidates can practice 150+ Mock Tests with Gradeup Green Card for exams like GATE, ESE, NIELIT
from the following link:
Click Here to Avail Electronics Engineering Green Card (150+ Mock Tests)

Get unlimited access to 24+ structured Live Courses all 150+ mock tests to boost your GATE 2021
Preparation with Gradeup Super:
Click here to avail Super for Electronics Engineering

Prep Smart. Score Better!

Thanks

Laplace Transform

The Laplace transform is a very important tool for analyzing any electrical circuit: it converts an
integro-differential equation into an algebraic equation by taking the given problem from the time
domain to the frequency domain.

X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt is called the bilateral or two-sided Laplace transform.

If x(t) is defined for t ≥ 0 [i.e., if x(t) is causal], then X(s) = ∫_{0}^{∞} x(t) e^{−st} dt is called the
unilateral or one-sided Laplace transform.

The following are advantages of using the Laplace transform:

Analysis of general R-L-C circuits becomes easier.

Natural and forced responses can be easily analyzed.
Circuits can be analyzed in terms of impedances.
Stability analysis becomes straightforward.

Statement of Laplace Transform

The direct Laplace transform, or Laplace integral, of a function f(t) defined for 0 ≤ t < ∞ is an
ordinary calculus integration problem for the given function f(t).
Its Laplace transform is the function, denoted F(s) = L{f}(s), defined by

F(s) = ∫_{0}^{∞} f(t) e^{−st} dt
A causal signal x(t) is said to be of exponential order if a real, positive constant σ (where σ is the real
part of s) exists such that the function e^{−σt}|x(t)| approaches zero as t approaches infinity.
For a causal signal, if lim_{t→∞} e^{−σt}|x(t)| = 0 for σ > σc and lim_{t→∞} e^{−σt}|x(t)| = ∞ for σ < σc, then σc
is called the abscissa of convergence (a point on the real axis in the s-plane).

The range of values of s for which the integral converges is called the Region of Convergence (ROC).

For a causal signal, the ROC includes all points on the s-plane to the right of the abscissa of
convergence.
For an anti-causal signal, the ROC includes all points on the s-plane to the left of the abscissa of
convergence.
For a two-sided signal, the ROC includes all points on the s-plane in the region between the two
abscissae of convergence.

Properties of the ROC

The region of convergence has the following properties

The ROC consists of strips parallel to the jω-axis in the s-plane.

The ROC does not contain any poles.
If x(t) is a finite-duration signal, x(t) ≠ 0 for t1 < t < t2, and is absolutely integrable, the ROC is the entire s-
plane.
If x(t) is a right-sided signal, x(t) = 0 for t < t1, the ROC is of the form Re{s} > max {Re{pk}}.
If x(t) is a left-sided signal, x(t) = 0 for t > t2, the ROC is of the form Re{s} < min {Re{pk}}.
If x(t) is a double-sided signal, the ROC is of the form p1 < Re{s} < p2.
If the ROC includes the jω-axis, the Fourier transform exists and the system is stable.

Inverse Laplace Transform

It is the process of finding x(t) given X(s):

x(t) = L^{−1}{X(s)}

There are two methods to obtain the inverse Laplace transform.

Inversion using Complex Line Integral

Inversion of Laplace Using Standard Laplace Transform Table.
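With a symbolic package, the table-lookup method can be automated; a sketch using SymPy (assuming it is available), for the causal exponential e^{−at}:

```python
from sympy import symbols, exp, laplace_transform, inverse_laplace_transform, Heaviside

t = symbols('t')
s, a = symbols('s a', positive=True)

# Forward transform of the causal exponential e^{-a t}:
F = laplace_transform(exp(-a * t), t, s, noconds=True)  # expect 1/(s + a)

# Inverse transform recovers the causal signal (Heaviside marks causality):
f = inverse_laplace_transform(1 / (s + a), s, t)
```

This matches the standard transform pair L{e^{−at} u(t)} = 1/(s + a), with ROC Re{s} > −a.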

Note:
A: Derivatives in t → Multiplication by s.
B: Multiplication by t → Derivatives in s.

Laplace Transform of Some Standard Signals


Some Standard Laplace Transform Pairs
Properties of Laplace Transform
Key Points

The convolution theorem of the Laplace transform says that the Laplace transform of the convolution
of two time-domain signals is given by the product of the Laplace transforms of the individual signals.
The zeros and poles are the critical complex frequencies at which a rational function of s takes the two
extreme values zero and infinity, respectively.
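The convolution theorem can be verified symbolically; a sketch using SymPy (assuming it is available) with the causal signals f(t) = e^{−t} and g(t) = e^{−2t}:

```python
from sympy import symbols, exp, integrate, laplace_transform, simplify

t, s, tau = symbols('t s tau', positive=True)

f = exp(-t)       # f(t) = e^{-t} for t >= 0
g = exp(-2 * t)   # g(t) = e^{-2t} for t >= 0

# Time-domain convolution of two causal signals:
# (f * g)(t) = integral from 0 to t of f(tau) g(t - tau) d tau
conv = integrate(exp(-tau) * exp(-2 * (t - tau)), (tau, 0, t))

L_conv = laplace_transform(conv, t, s, noconds=True)
L_f = laplace_transform(f, t, s, noconds=True)
L_g = laplace_transform(g, t, s, noconds=True)
```

Here the convolution evaluates to e^{−t} − e^{−2t}, whose transform equals 1/((s + 1)(s + 2)), the product of the individual transforms.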


In this article, you will find notes on the Z-transform, covering topics such as the Z-Transform, Inverse
Z-Transform, Region of Convergence of the Z-Transform, and Properties of the Z-Transform.
Z-Transform

The Z-transform is computed for discrete-time signals.

It enables analysis of the signal in the frequency domain.
The Z-transform takes the form of a polynomial in z^{−1}.
It enables interpretation of the signal in terms of the roots of the polynomial.
z^{−1} corresponds to a delay of one unit in the signal.

The Z-transform of a discrete-time signal x[n] is defined as

X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}, where z = r·e^{jω}

The discrete-time Fourier transform (DTFT) is obtained by evaluating the Z-transform on the unit circle, i.e. at z = e^{jω}


The Z-transform defined above has a two-sided summation; it is called the bilateral or two-sided
Z-transform.

Unilateral (one-sided) z-transform

The unilateral Z-transform of a sequence x[n] is defined as

X(z) = Σ_{n=0}^{∞} x[n] z^{−n}

Region of Convergence (ROC):

The ROC is the region where the Z-transform converges. The Z-transform is an infinite power series,
and the series does not converge for all values of z.

Significance of ROC

ROC gives an idea about values of z for which z-transform can be calculated.
ROC can be used to determine causality of the system.
ROC can be used to determine stability of the system.
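A quick numeric sketch of these ideas: for the causal sequence x[n] = a^n u[n], the Z-transform is X(z) = 1/(1 − a z^{−1}) with ROC |z| > |a|, so truncating the defining sum at a point inside the ROC should reproduce the closed form (the test values a = 0.5 and z = 2e^{j0.7} are our choices):

```python
import numpy as np

a = 0.5
z = 2.0 * np.exp(1j * 0.7)  # |z| = 2 > |a| = 0.5, inside the ROC

n = np.arange(200)
X_series = np.sum(a**n * z**(-n))  # truncated sum of x[n] z^{-n}
X_closed = 1 / (1 - a / z)         # 1 / (1 - a z^{-1})
```

Because |a/z| = 0.25 inside the ROC, the truncation error after 200 terms is negligible; outside the ROC (e.g. |z| = 0.25) the same partial sums would diverge.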

Summary of ROC of Discrete Time Signals for the sequences


Characteristic Families of Signals and Corresponding ROC
Note: X(z) = Z{x(n)}; X1(z) = Z{x1(n)}; X2(z) = Z{x2(n)}; Y(z) = Z{y(n)}

Summary of Properties of z- Transform:


Impulse Response and Location of Poles