In this article, you will find the study notes on Representation of Continuous and Discrete-Time Signals, covering topics such as the different types of basic continuous/discrete signals and energy & power signals.
Properties of Signals
A signal can be classified as periodic or aperiodic; discrete-time or continuous-time; discrete-valued or continuous-valued; or as a power or energy signal. The following defines each of these terms. In addition, the signal-to-noise ratio of a signal corrupted by noise is defined.
Periodic / Aperiodic:
A periodic signal repeats itself at regular intervals. In general, any signal x(t) for which
x(t) = x(t + T) for all t
is periodic. The fundamental period of the signal is the minimum positive, non-zero value of T for which the above equation is satisfied. If a signal is not periodic, then it is aperiodic.
Symmetric / Asymmetric:
There are two types of signal symmetry: odd and even. A signal x(t) has odd symmetry if and only if x(−t) = −x(t) for all t. It has even symmetry if and only if x(−t) = x(t).
Continuous and Discrete Signals and Systems
A continuous signal is a mathematical function of an independent variable that ranges over the real numbers. Signals are required to be uniquely defined everywhere except possibly at a finite number of points.
A continuous time signal is one which is defined for all values of time. A continuous time signal does
not need to be continuous (in the mathematical sense) at all points in time. A continuous-time signal
contains values for all real numbers along the X-axis. It is denoted by x(t).
Basically, signals are detectable quantities used to convey information about time-varying physical phenomena. Some examples of signals are human speech, temperature, pressure, and stock prices.
Electrical signals, normally expressed in the form of voltage or current waveforms, are among the easiest signals to generate and process.
Example: A rectangular wave is discontinuous at several points, but it is still a continuous-time signal.
A continuous-time signal is defined for all values of t. A discrete-time signal is only defined for discrete values of t = ..., t-1, t0, t1, ..., tn, tn+1, ... It is uncommon for the spacing between tn and tn+1 to change with n. The spacing is most often a constant value referred to as the sampling period,
Ts = tn+1 - tn.
That is, if x(t) is a continuous-time signal, then x[n] can be considered as the nth sample of x(t).
Sampling of a continuous-time signal x(t) to yield the discrete-time signal x[n] is an important step in the
process of digitizing a signal.
When the strength of a signal is measured, it is usually the signal power or signal energy that is of interest.
The signal power is often expressed in decibels, 10 log10(P/P0), where P0 is a reference power level, usually equal to one squared SI unit of the signal. For example, if the signal is a voltage, then P0 is equal to one volt squared.
A signal can be an energy signal or a power signal, but it cannot be both. A signal can also be neither an energy signal nor a power signal.
As an example, the sinusoidal test signal of amplitude A,
x(t) = A sin(ωt),
is a power signal with average power P = A²/2.
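As a quick numerical check (not part of the original notes), the A²/2 result can be verified by approximating the average-power integral with a sum over one period of samples. The amplitude, frequency, and sampling period below are arbitrary illustrative values.

```python
import math

# Illustrative sketch: approximate average power of x(t) = A*sin(2*pi*f0*t)
# from samples over one period; A, f0, Ts are made-up values.
A, f0 = 2.0, 5.0          # amplitude (volts), frequency (Hz)
Ts = 1e-4                 # sampling period (s)
T0 = 1.0 / f0             # one period of the sinusoid
samples = [A * math.sin(2 * math.pi * f0 * n * Ts)
           for n in range(int(T0 / Ts))]

# Average power over one period: P = (1/T0) * integral of |x(t)|^2 dt
power = sum(x * x for x in samples) * Ts / T0
print(round(power, 3))    # -> 2.0, i.e. A^2 / 2
```

The sum converges to A²/2 = 2.0 for this choice of A, as the formula above predicts.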
Any measurement of a signal necessarily contains some random noise in addition to the signal. In the case of additive noise, the measurement is
x(t) = s(t) + n(t)
where s(t) is the signal component and n(t) is the noise component. The signal-to-noise ratio is an indication of how much noise is contained in a measurement: SNR = Ps/Pn, or in decibels, SNR(dB) = 10 log10(Ps/Pn), where Ps and Pn are the signal and noise powers.
Impulse Signal:
The unit impulse δ(t) is zero for t ≠ 0 and satisfies ∫ δ(t) dt = 1.
Ramp Signal:
r(t) = t for t ≥ 0, and r(t) = 0 for t < 0.
Co-sinusoidal Signal:
x(t) = A cos(2πf0t + ϕ), where f0 = frequency in cycles/sec or Hz. When ϕ = 0, the positive peak occurs at t = 0; when ϕ is positive, the waveform is advanced (shifted left) in time; when ϕ is negative, it is delayed (shifted right).
Sinusoidal Signal:
x(t) = A sin(2πf0t + ϕ), where f0 = frequency in cycles/sec or Hz; the phase ϕ advances or delays the waveform in the same way.
Exponential Signal:
x(t) = A e^(st). The complex exponential signal e^(jω0t) can be represented in a complex plane by a rotating vector, which rotates with a constant angular velocity of ω0 rad/sec.
Exponentially Rising/Decaying Sinusoidal Signal:
x(t) = A e^(at) sin(ω0t), rising for a > 0 and decaying for a < 0.
Sinc Signal:
sinc(t) = sin(πt)/(πt), equal to 1 at t = 0.
Gaussian Signal:
x(t) = e^(−at²), with a > 0.
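The basic signals listed above are easy to sample numerically. The sketch below (not from the notes; all names are illustrative) evaluates a few of them on a small time grid.

```python
import math

# Illustrative sketch: sample a few of the basic signals listed above.
def unit_step(t):
    return 1.0 if t >= 0 else 0.0

def ramp(t):
    return t if t >= 0 else 0.0          # r(t) = t * u(t)

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def gaussian(t, a=1.0):
    return math.exp(-a * t * t)          # e^(-a t^2)

ts = [-2, -1, 0, 1, 2]
print([ramp(t) for t in ts])             # [0.0, 0.0, 0, 1, 2]
print([round(sinc(t), 6) for t in ts])   # ~0 at nonzero integers, 1 at t = 0
```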
Important points:
Classification of Continuous-Time Signals: A continuous-time signal can be classified as periodic or aperiodic, and as even or odd. A signal is periodic if
x(t+T) = x(t)
When a signal exhibits symmetry with respect to t = 0, it is called an even signal:
x(-t) = x(t)
When a signal exhibits anti-symmetry with respect to t = 0, it is called an odd signal:
x(-t) = -x(t)
Let x(t) be any signal. Its even part is xe(t) = [x(t) + x(−t)]/2 and its odd part is xo(t) = [x(t) − x(−t)]/2, so that x(t) = xe(t) + xo(t).
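The even/odd decomposition can be sketched in code. The example below (illustrative; the test signal is made up) builds the even and odd parts of a signal given as a function and checks their symmetry.

```python
# Sketch of the even/odd decomposition: x maps n -> x(n).
def even_odd_parts(x):
    """Return (xe, xo) with xe even, xo odd, and xe + xo = x."""
    xe = lambda n: 0.5 * (x(n) + x(-n))
    xo = lambda n: 0.5 * (x(n) - x(-n))
    return xe, xo

x = lambda n: n * n + 3 * n          # neither even nor odd
xe, xo = even_odd_parts(x)
assert all(xe(n) == xe(-n) for n in range(5))        # even part
assert all(xo(n) == -xo(-n) for n in range(5))       # odd part
assert all(xe(n) + xo(n) == x(n) for n in range(5))  # they sum to x
print(xe(2), xo(2))   # 4.0 6.0 -> the n^2 and 3n parts are recovered
```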
Discrete-Time Signals
A discrete-time signal is a function of a discrete independent variable: it is defined only at discrete instants of time. A digital signal is the same as a discrete-time signal except that the magnitude of the signal is also quantized. Basically, discrete-time signals can be obtained by sampling a continuous-time signal. It is denoted as x[n].
Impulse signal: δ[n] = 1 for n = 0, and δ[n] = 0 for n ≠ 0.
Ramp signal: r[n] = n for n ≥ 0, and r[n] = 0 for n < 0.
Exponential signal: x[n] = aⁿ, which grows for |a| > 1 and decays for |a| < 1.
For the detailed schedule of GATE Electrical Engineering(EE) 2021 Champion Study Plan, click here
The Candidates who have missed the schedule for GATE EC 2021 champion study plan can follow the
following link:
Candidates can practice 150+ Mock Tests with Gradeup Green Card for exams like GATE, ESE, NIELIT
from the following link:
Click Here to Avail Electronics Engineering Green Card (150+ Mock Tests)
Get unlimited access to 24+ structured Live Courses all 150+ mock tests to boost your GATE 2021
Preparation with Gradeup Super:
Click here to avail Super for Electronics Engineering
Thanks
A signal is a periodic signal if it completes a pattern within a measurable time frame, called a period, and repeats that pattern over identical subsequent periods.
The period is the smallest value of T satisfying g(t + T) = g(t) for all t. The period is defined so
because if g(t + T) = g(t) for all t, it can be verified that g(t + T') = g(t) for all t where T' = 2T, 3T, 4T,
... In essence, it's the smallest amount of time it takes for the function to repeat itself. If the period of a
function is finite, the function is called "periodic".
Functions that never repeat themselves have an infinite period, and are known as "aperiodic
functions".
A function is even if it is symmetric about the y-axis, while a signal is odd if it is anti-symmetric about the origin.
Note: Some functions are neither even nor odd. Nevertheless, any function f(x) can be expressed as the sum of an even function and an odd function.
A system is invertible if distinct inputs produce distinct outputs. As shown in the figure for the continuous-time case, if a system is invertible, then an inverse system exists that, when cascaded with the original system, produces an output w(t) equal to the input x(t) of the first system.
Causal System:
A system is causal if the output depends only on the input at the present time and in the past. Such systems are often referred to as non-anticipative, as the system output does not anticipate future values of the input. Consequently, if two inputs to a causal system are identical up to some point in time t0 (or n0), the corresponding outputs must also be equal up to this same time.
Homogeneity (Scaling):
A system is said to be homogeneous if, when the input signal x(t) is scaled by any factor, the output signal is scaled by the same factor: an input a·x(t) produces the output a·y(t).
Time Shifting:
Time shifting means shifting the signal in time. For t0 > 0, x(t + t0) is the advanced signal and x(t − t0) is the delayed signal.
Time Scaling:
Due to scaling in time, the signal may shrink or stretch, depending on the numerical value of the scaling factor: x(at) is compressed for |a| > 1 and stretched for |a| < 1.
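The shifting and scaling operations above can be sketched for a signal given as a function of t. The triangular pulse and the shift/scale amounts below are made-up illustrative choices.

```python
# Illustrative sketch of time shifting and scaling.
def shift(x, t0):
    # x(t - t0): t0 > 0 delays the signal, t0 < 0 advances it
    return lambda t: x(t - t0)

def scale(x, a):
    # x(a*t): |a| > 1 compresses the signal, |a| < 1 stretches it
    return lambda t: x(a * t)

x = lambda t: max(0.0, 1.0 - abs(t))    # triangular pulse centred at t = 0
delayed = shift(x, 2.0)                 # peak moves from t = 0 to t = 2
squeezed = scale(x, 2.0)                # support shrinks from [-1,1] to [-0.5,0.5]
print(delayed(2.0), squeezed(0.5))      # 1.0 0.0
```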
Time Inversion:
(b) For the finite-duration waveform: if the waveform exists only in the interval t1 ≤ t ≤ t2, the auto-correlation is φff(τ) = ∫(t1 to t2) f(t) f(t + τ) dt. It can be considered an "energy"-based definition.
(c) For the periodic waveform: if f(t) is periodic with period T, then φff(τ) = (1/T) ∫(over T) f(t) f(t + τ) dt.
Example: Obtain the auto-correlation function of the square pulse which has amplitude a and duration T, as shown in the figure below.
The wave form has a finite duration, and the auto-correlation function is
Similarly
ρff(τ) contains no phase information and is independent of the time origin.
If f(t) is periodic with period T, φff(τ) is also periodic with period T.
If f(t) has zero mean (µ = 0) and f(t) is non-periodic, then lim(τ→∞) ρff(τ) = 0.
As in the case of Fourier analysis of waveforms, there is a general reciprocal relationship between the width of a signal's spectrum and the width of its auto-correlation function: a narrow auto-correlation function generally implies a "broad" spectrum. In the limit, if φff(τ) = δ(τ), then Φff(jΩ) = 1, and the spectrum is said to be "white".
The Cross-Correlation Function
The cross-correlation is a measure of the similarity between two waveforms f(t) and g(t). As in the case of the auto-correlation function, we need two definitions:
Example:- Find the cross-correlation function between the two functions as shown in figure.
where the peak occurs at τ = T2−T1 (the delay between the two signals).
φfg(τ ) = φgf (−τ), and the cross-correlation function is not necessarily an even function.
If φfg(τ ) = 0 for all τ , then f(t) and g(t) are said to be uncorrelated.
If g(t) = a·f(t−T), where a is a constant, that is, g(t) is a scaled and delayed version of f(t), then φfg(τ) will have its maximum value at τ = T.
Cross-correlation is often used in optimal estimation of delay, such as in echolocation (radar, sonar),
and in GPS receivers.
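The delay-estimation use just mentioned can be sketched on sampled data. The pulse and delay below are made up; the peak of the discrete cross-correlation lands at the lag equal to the delay between the two signals.

```python
# Sketch of delay estimation via discrete cross-correlation.
def xcorr(f, g):
    """Return (lags, values) of the cross-correlation of sequences f and g."""
    n = len(f)
    lags = list(range(-(n - 1), n))
    vals = [sum(f[i] * g[i + k] for i in range(n) if 0 <= i + k < n)
            for k in lags]
    return lags, vals

pulse   = [0, 0, 1, 2, 1, 0, 0, 0, 0, 0]
delayed = [0, 0, 0, 0, 0, 0, 1, 2, 1, 0]   # same pulse, delayed by 4 samples
lags, vals = xcorr(pulse, delayed)
best = lags[vals.index(max(vals))]
print(best)   # 4 -> the estimated delay
```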
The Cross-Power/Energy Spectrum
We define the cross-power/energy density spectra as the Fourier transforms of the cross-correlation
functions:
Rfg(jΩ) = F(−jΩ)·G(jΩ)
Note that although Rff(jΩ) is real and even because ρff(τ) is real and even, this is not the case with the cross-power/energy spectra Φfg(jΩ) and Rfg(jΩ); they are in general complex.
In this article, you will find the study notes on Linear Time-Invariant Systems & the Sampling Theorem, covering topics such as LTI systems, convolution and its properties, and the sampling theorem.
Linear time-invariant systems (LTI systems) are a class of systems used in signals and systems that are
both linear and time-invariant. Linear systems are systems whose outputs for a linear combination of inputs
are the same as a linear combination of individual responses to those inputs. Time-invariant systems are
systems where the output does not depend on when an input was applied. These properties make LTI
systems easy to represent and understand graphically.
Linear systems have the property that the output is linearly related to the input. Changing the input in a
linear way will change the output in the same linear way. So if the input x1(t) produces the output y1(t) and
the input x2(t) produces the output y2(t), then linear combinations of those inputs will produce linear
combinations of those outputs. The input {x1(t)+x2(t)} will produce the output {y1(t)+y2(t)}. Further, the
input {a1x1(t)+a2x2(t)} will produce the output {a1y1(t)+a2y2(t)} for some constants a1 and a2.
In other words, for a system T acting on signals x1(t) and x2(t) with outputs y1(t) and y2(t):
Homogeneity Principle: T{a·x1(t)} = a·y1(t)
Superposition Principle: T{x1(t) + x2(t)} = y1(t) + y2(t)
Thus, the entirety of an LTI system can be described by a single function called its impulse response. This
function exists in the time domain of the system. For an arbitrary input, the output of an LTI system is
the convolution of the input signal with the system's impulse response.
Conversely, the LTI system can also be described by its transfer function. The transfer function is
the Laplace transform of the impulse response. This transformation changes the function from the time
domain to the frequency domain. This transformation is important because it turns differential
equations into algebraic equations, and turns convolution into multiplication.
In the frequency domain, the output is the product of the transfer function with the transformed input. The
shift from time to frequency is illustrated in the following image:
Homogeneity, additivity, and shift invariance may, at first, sound a bit abstract but they are very useful. To
characterize a shift-invariant linear system, we need to measure only one thing: the way
the system responds to a unit impulse. This response is called the impulse response function of the
system. Once we’ve measured this function, we can (in principle) predict how the system will
respond to any other possible stimulus.
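The claim above can be sketched for a discrete LTI system: once the impulse response h[n] is measured, the response to any input is the convolution x*h. The two-tap averager below is a made-up example system.

```python
# Minimal sketch: output of a discrete LTI system = input * impulse response.
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk       # superposition of scaled, shifted copies of h
    return y

h = [0.5, 0.5]                        # impulse response of a two-tap averager
x = [1.0, 3.0, 5.0]
print(convolve(x, h))                 # [0.5, 2.0, 4.0, 2.5]
```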
Introduction to Convolution
There is no single answer to the question of what convolution "is". In "Signals and Systems" we probably first saw convolution in connection with linear time-invariant systems and the impulse response for such a system. This multitude of interpretations and applications is somewhat like the situation with the definite integral.
To pursue the analogy with the integral, in pretty much all applications of the integral there is a general
method at work:
Cut the problem into small pieces where it can be solved approximately.
Sum up the solution for the pieces, and pass to a limit.
Convolution Theorem
F(g∗f)(s)=Fg(s)Ff(s)
In other notation: If f(t)⇔ F(s) and g(t) ⇔ G(s) then (g∗f)(t)⇔ G(s)F(s)
In words: Convolution in the time domain corresponds to multiplication in the frequency domain.
For the integral to make sense, i.e., to be able to evaluate g(t−x) at points outside the interval from 0 to 1, we need to assume that g is periodic. That is not an issue in the present case, where we assume that f(t) and g(t) are defined for all t, so the factors in the integral are always defined. The same result holds for the inverse Fourier transform, as follows:
F−1(g∗f)=F−1g·F−1f
F(gf)(s)=(Fg∗Ff)(s)
The one thing we know is how to take the Fourier transform of a convolution, so, in the present
notation, F(k∗h)=(Fk)(Fh).
Now, finally, take the Fourier transform of both sides of this last equation
Note: Here we are trying to prove F(gf)(s) = (Fg∗Ff)(s) rather than F(g∗f) = (Ff)(Fg), because it seems more "natural" to multiply signals in the time domain and see what effect this has in the frequency domain. So why not work with F(fg) directly? Because if you write out the integral for F(gf), there is nothing you can do with it to get toward Fg∗Ff.
Usually there is something that has to do with smoothing and averaging, understood broadly. You see this in both the continuous case and the discrete case. Those of you who have seen convolution in earlier courses have probably heard the expression "flip and drag".
Meaning of Flip & Drag:
Fix a value t. The graph of the function g(x − t) has the same shape as g(x) but is shifted to the right by t. Forming g(t − x) then flips the graph (left-right) about the line x = t. If the most interesting or important features of g(x) are near x = 0, e.g., if it is sharply peaked there, then those features are shifted to x = t for the function g(t − x) (but there is the extra "flip" to keep in mind). Multiply f(x) and g(t − x) and integrate with respect to x.
Averaging
I prefer to think of the convolution operation as using one function to smooth and average the
other. Say g is used to smooth f in g∗f. In many common applications g(x) is a positive function,
concentrated near 0, with total area 1.
Like a sharply peaked Gaussian, for example (stay tuned). Then g(t−x) is concentrated near t and still has area 1. For a fixed t, forming the integral
∫ f(x) g(t − x) dx
is like taking a weighted average of the values of f(x) near x = t, weighted by the values of (the flipped and shifted) g. That is the averaging part of the convolution: computing the convolution (g∗f) at t replaces the value f(t) by a weighted average of the values of f near t.
Smoothing
Again take the case of an averaging-type function g(t), as above. At a given value of t, (g∗f)(t) is a weighted average of values of f near t. Now move t a little to a point t0. Then (g∗f)(t0) is a weighted average of values of f near t0, which will include values of f that entered into the average near t. Thus the values of the convolutions (g∗f)(t) and (g∗f)(t0) will likely be closer to each other than are the values f(t) and f(t0). That is, (g∗f)(t) is "smoothing" f as t varies: there is less of a change between values of the convolution than between values of f.
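The smoothing reading of convolution can be sketched numerically: convolving a rapidly varying sequence with a positive, area-1 kernel concentrated near 0 makes neighbouring outputs closer together than neighbouring inputs. Both the kernel and the data below are illustrative.

```python
# Sketch: discrete convolution with an averaging kernel smooths the data.
kernel = [0.25, 0.5, 0.25]               # positive, total area 1, peaked in the middle
data = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]    # rapidly varying input

def smooth(f, g):
    m = len(g) // 2                      # centre the kernel on each sample
    return [sum(g[k] * f[n - k + m] for k in range(len(g))
                if 0 <= n - k + m < len(f))
            for n in range(len(f))]

out = smooth(data, kernel)
# neighbouring outputs differ far less than neighbouring inputs (which jump by 1.0)
print(max(abs(out[i + 1] - out[i]) for i in range(len(out) - 1)))   # 0.25
```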
It’s not hard to combine the various rules we have and develop an algebra of convolutions. Such identities
can be of great use — it beats calculating integrals. Here’s an assortment. (Lower and uppercase letters
are Fourier pairs.)
Here are the properties of convolution in both the continuous and discrete domains:
Associative: (f∗g)∗h = f∗(g∗h)
Commutative: f∗g = g∗f
Distributive: f∗(g + h) = f∗g + f∗h
As an LTI system is completely specified by its impulse response, we look into the conditions on the impulse response for the LTI system to obey properties like memory, stability, invertibility, and causality.
The convolution theorem holds in both continuous and discrete time, as follows:
We shall now discuss the important properties of convolution for LTI systems.
1) Commutative property :
Proof
So x[n]∗h[n] ⇔ h[n]∗x[n]
2. Distributive Property
Discrete time: x[n]∗{α h1[n] + β h2[n]} = α{x[n]∗h1[n]} + β{x[n]∗h2[n]}, where α & β are constants.
Continuous time: x(t)∗{α h1(t) + β h2(t)} = α{x(t)∗h1(t)} + β{x(t)∗h2(t)}, where α & β are constants.
3. Associative Property
In Continuous Time:
4. Invertibility
A system is said to be invertible if there exists an inverse system which, when connected in series with the original system, produces an output identical to the input:
(x∗δ)[n] = x[n]
(x∗h∗h⁻¹)[n] = x[n]
(h∗h⁻¹)[n] = δ[n]
5. Causality
An LTI system is causal if and only if its impulse response is zero for negative time:
Discrete time: h[n] = 0 for n < 0
Continuous time: h(t) = 0 for t < 0
6. Stability
An LTI system is BIBO stable if and only if its impulse response is absolutely summable/integrable:
Discrete time: Σ |h[n]| < ∞
Continuous time: ∫ |h(t)| dt < ∞
Sampling Theorem
The sampling process is usually described in the time domain. In this process, an analog signal is
converted into a corresponding sequence of samples that are usually spaced uniformly in time. Consider
an arbitrary signal x(t) of finite energy, which is specified for all time as shown in figure 1(a).
Suppose that we sample the signal x(t) instantaneously and at a uniform rate, once every TS seconds, as shown in figure 1(b). Consequently, we obtain an infinite sequence of samples spaced TS seconds apart and denoted by {x(nTS)}, where n takes on all possible integer values.
1. Sampling Period: The time interval between two consecutive samples is referred to as the sampling period. In figure 1(b), TS is the sampling period.
2. Sampling Rate: The reciprocal of the sampling period is referred to as the sampling rate, i.e.
fS = 1/TS
The sampling theorem provides both a method of reconstruction of the original signal from the sampled values and a precise upper bound on the sampling interval required for distortionless reconstruction. It states that:
A band-limited signal of finite energy, which has no frequency components higher than W hertz, is completely described by specifying the values of the signal at instants of time separated by 1/2W seconds.
A band-limited signal of finite energy, which has no frequency components higher than W hertz, may be completely recovered from a knowledge of its samples taken at the rate of 2W samples per second.
Aliasing is the effect of violating the Nyquist-Shannon sampling theorem. During sampling, the baseband spectrum of the sampled signal is mirrored at every multiple of the sampling frequency; these mirrored spectra are called aliases.
The easiest way to prevent aliasing is to apply a steep-sloped low-pass filter with a cutoff at half the sampling frequency before the conversion. Aliasing can be avoided by keeping Fs > 2Fmax.
Since the sampling rate for an analog signal must be at least twice as high as the highest frequency in the analog signal in order to avoid aliasing, the analog signal is filtered by a low-pass filter prior to being sampled; this filter is called an anti-aliasing filter. Sometimes the reconstruction filter after a digital-to-analog converter is also called an anti-aliasing filter.
Fs ≥ 2 W
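The Fs > 2Fmax condition can be illustrated numerically: when a sine above the Nyquist frequency is sampled, its samples are indistinguishable from those of a lower-frequency alias. The frequencies below are made-up illustrative values.

```python
import math

# Sketch of aliasing: with fs = 10 Hz, a 12 Hz sine and its 2 Hz alias
# (12 - 10 = 2) produce identical sample sequences.
fs = 10.0
ts = [n / fs for n in range(20)]
hi = [math.sin(2 * math.pi * 12.0 * t) for t in ts]   # violates fs > 2*fmax
lo = [math.sin(2 * math.pi * 2.0 * t) for t in ts]    # its alias
print(max(abs(a - b) for a, b in zip(hi, lo)) < 1e-9)  # True
```

The samples agree because sin(2π·12·n/10) = sin(2π·2·n/10 + 2πn): the extra 2πn per sample is invisible after sampling.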
Nyquist Rate
Nyquist rate is defined as the minimum sampling frequency allowed to reconstruct a bandlimited waveform
without error, i.e.
fN = min {fS} = 2W
Nyquist Interval
The reciprocal of the Nyquist rate is called the Nyquist interval (measured in seconds), i.e.
TN = 1/fN = 1/2W
In the present article, you will find the study notes on Fourier Series & Representation of Continuous Periodic Signals, covering topics such as the Fourier theorem, Fourier coefficients, Fourier sine series, Fourier cosine series, orthogonality relations of Fourier series, exponential Fourier series representation, and even & odd symmetry.
Fourier Series Representation
Periodic functions are functions defined by the relation f(t + P) = f(t) for all t. For example, if f(t) is the amount of time between sunrise and sunset at a certain latitude, as a function of time t, and P is the length of the year, then f(t + P) = f(t) for all t, since the Earth and Sun are in the same position after one full revolution of the Earth around the Sun.
Fourier Theorem
Any arbitrary continuous-time signal x(t) which is periodic with a fundamental period T0 can be expressed as a series of harmonically related sinusoids whose frequencies are multiples of the fundamental frequency or first harmonic. In other words, any periodic function of t can be represented by an infinite series of sinusoids called a Fourier series.
A periodic waveform can be expressed in the form of a Fourier series, while a non-periodic waveform may be expressed by the Fourier transform.
Any arbitrary periodic function x(t) with fundamental period T0 can be expressed as follows:
x(t) = a0 + Σ(n=1 to ∞) [an cos(nω0t) + bn sin(nω0t)] ----(i)
This is referred to as the trigonometric Fourier series representation of signal x(t). Here, ω0 = 2π/T0 is the fundamental frequency of x(t), and coefficients a0, an, and bn are referred to as the trigonometric continuous-time Fourier series (CTFS) coefficients. The coefficients are calculated as follows:
a0 = (1/T0) ∫(over T0) x(t) dt ----(ii)
an = (2/T0) ∫(over T0) x(t) cos(nω0t) dt ----(iii)
bn = (2/T0) ∫(over T0) x(t) sin(nω0t) dt ----(iv)
From equation (ii), it is clear that coefficient a0 represents the average or mean value (also referred to as the dc component) of signal x(t).
In these formulas, the limits of integration are either (−T0/2 to +T0/2) or (0 to T0). In general, the integration can be taken over any period of the signal, i.e., from t1 to t1 + T0, where t1 is any time instant.
If the periodic signal x(t) possesses some symmetry, then the continuous-time Fourier series (CTFS) coefficients become easy to obtain. The various types of symmetry and the resulting simplifications of the Fourier series coefficients are discussed below.
Consider the Fourier series representation of a periodic signal x(t) defined in the equation above.
If x(t) is an even function, then the product x(t) sin(nω0t) is odd and the integration in equation (iv) becomes zero. That is, bn = 0 for all n, and the Fourier series representation is expressed as
For example, the signal x(t) shown in below figure has even symmetry so bn = 0 and the Fourier series
expansion of x(t) is given as
The trigonometric Fourier series representation of even signals contains cosine terms only. The constant a0 may or may not be zero.
If x(t) is an odd function, then the product x(t) cos(nω0t) is odd and the integration in equation (iii) becomes zero, i.e., an = 0 for all n. Also, a0 = 0 because an odd symmetric function has zero average value. The Fourier series representation is expressed as
For example, the signal x(t) shown in below figure is odd symmetric so an = a0 = 0 and the Fourier series
expansion of x(t) is given as
Suppose S(x) = Σ bn sin(nx). Multiply both sides by sin(kx) and integrate from 0 to π, as in the sine series of equation (2). On the right side, all integrals are zero except for n = k; here the property of "orthogonality" dominates. The sines make 90° angles in function space, when their inner products are integrals from 0 to π.
Orthogonality for sine Series
------(3)
Zero comes quickly if we integrate cos(mx) from 0 to π: ∫(0 to π) cos(mx) dx = 0 − 0 = 0.
------(4)
Notice that S(x) sin(kx) is even (equal integrals from −π to 0 and from 0 to π).
We now consider the most important example of a Fourier sine series: the odd square wave with SW(x) = 1 for 0 < x < π. It is an odd function with period 2π that vanishes at x = 0 and x = π.
Example:
Finding the Fourier sine coefficients bk of the square wave SW(x) as given above .
Solution:
For k = 1, 2, ..., using the formula for the sine coefficient with S(x) = 1 between 0 and π:
bk = (2/π) ∫(0 to π) sin(kx) dx = (2/π)(1 − cos kπ)/k
The even-numbered coefficients b2k are all zero because cos 2kπ = cos 0 = 1. The odd-numbered coefficients bk = 4/πk decrease at the rate 1/k. We will see that same 1/k decay rate for all functions formed from smooth pieces and jumps. Put those coefficients 4/πk and zero into the Fourier sine series for SW(x).
-----(5)
The figure above shows two even functions of period 2π: the repeating ramp RR(x) and the up-down train UD(x) of delta functions. The sawtooth ramp RR is the integral of the square wave, and the delta functions in UD give the derivative of the square wave. RR and UD will be valuable examples, one smoother than SW, one less smooth.
First we find formulas for the cosine coefficients a0 and ak. The constant term a0 is the average value
of the function C(x):
-----(6)
We will integrate the cosine series from 0 to π. On the right side, the integral of the constant term a0 is a0π (divide both sides by π); all other integrals are zero.
Again the integral over a full period from −π to π (also 0 to 2π) is just doubled.
Orthogonality Relations of Fourier Series
From the Fourier series representation, we know that a periodic signal can be written as
-------(7)
This is just a straightforward calculation using the periodicity of sine and cosine and either (or both) of
these two methods:
In place of separate formulas for a0, ak, and bk, we may consider one formula for all the complex coefficients ck. The function F(x) will then be complex; the discrete Fourier transform will be much simpler when we use N complex exponentials for a vector.
The exponential form of the Fourier series of a periodic signal x(t) with period T0 is defined as
x(t) = Σ(n=−∞ to ∞) cn e^(jnω0t)
where ω0 is the fundamental frequency given as ω0 = 2π/T0. The exponential Fourier series coefficients cn are calculated from the following expression:
cn = (1/T0) ∫(over T0) x(t) e^(−jnω0t) dt
Example:
Compute the Fourier series of f(t), where f(t) is the square wave with period 2π, defined over one period by f(t) = 1 for 0 < t < π and f(t) = −1 for −π < t < 0.
Computing a Fourier series means computing its Fourier coefficients. We do this using the integral
formulas for the coefficients given with Fourier’s theorem in the previous note. For convenience we repeat
the theorem here.
By applying these formulas to the above waveform, we have to split the integrals into two pieces corresponding to where f(t) is +1 and where it is −1.
Thus, for n ≠ 0, an = 0 and bn = 2(1 − cos nπ)/(nπ); for n = 0, a0 = 0.
We have used the simplification cos nπ = (−1)ⁿ to get a nice formula for the coefficients bn: bn = 4/(nπ) for odd n and bn = 0 for even n.
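The coefficients above can be checked numerically. The sketch below (not from the notes) approximates the bn integral for the odd square wave of period 2π with a simple midpoint Riemann sum; the step count is an arbitrary choice.

```python
import math

# Numerical check: for the odd square wave of period 2*pi,
# b_n = 4/(n*pi) for odd n, and 0 for even n.
def f(t):
    return 1.0 if 0 < (t % (2 * math.pi)) < math.pi else -1.0

def bn(n, steps=100000):
    dt = 2 * math.pi / steps
    ts = [-math.pi + (k + 0.5) * dt for k in range(steps)]
    # b_n = (1/pi) * integral over one period of f(t) sin(n t) dt
    return sum(f(t) * math.sin(n * t) for t in ts) * dt / math.pi

print(round(bn(1), 4), round(bn(3), 4))
# b1 ~ 4/pi ~ 1.2732, b3 ~ 4/(3*pi) ~ 0.4244, while b2 ~ 0
```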
Laplace Transform
The Laplace transform is a very important tool for analyzing electrical networks: it converts integro-differential equations into algebraic equations by converting the given situation from the time domain to the frequency domain (s-domain).
The direct Laplace transform, or Laplace integral, of a function f(t) defined for 0 ≤ t < ∞ is an ordinary calculus integration problem for the given function f(t). The Laplace transform is the function, denoted F(s) = L{f}(s), defined by
F(s) = ∫(0 to ∞) f(t) e^(−st) dt
If x(t) is defined for t ≥ 0 [i.e., if x(t) is causal], then X(s) = ∫(0 to ∞) x(t) e^(−st) dt is also called the unilateral or one-sided Laplace transform.
A causal signal x(t) is said to be of exponential order if a real, positive constant σ (where σ is the real part of s) exists such that the function e^(−σt)|x(t)| approaches zero as t approaches infinity. For a causal signal, if lim(t→∞) e^(−σt)|x(t)| = 0 for σ > σc and lim(t→∞) e^(−σt)|x(t)| = ∞ for σ < σc, then σc is called the abscissa of convergence (where σc is a point on the real axis in the s-plane).
The set of values of s for which the integral converges is called the region of convergence (ROC).
For a causal signal, the ROC includes all points on the s-plane to the right of abscissa of
convergence.
For an anti-causal signal, the ROC includes all points on the s-plane to the left of abscissa of
convergence.
For a two-sided signal, the ROC includes all points on the s-plane in the region in between two
abscissa of convergence.
x(t) = L⁻¹{X(s)}
The convolution theorem of the Laplace transform says that the Laplace transform of the convolution of two time-domain signals is given by the product of the Laplace transforms of the individual signals.
The zeros and poles are the critical complex frequencies at which a rational function of s takes the two extreme values zero and infinity, respectively.
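A transform pair such as L{e^(−at)} = 1/(s + a) (valid in the ROC Re(s) > −a) can be checked numerically. The sketch below (not from the notes) approximates the Laplace integral with a crude Riemann sum; a, s, and the truncation length are arbitrary illustrative values.

```python
import math

# Numerical check of L{e^(-a t)}(s) = 1/(s + a) at a real s inside the ROC.
def laplace(f, s, T=50.0, steps=200000):
    # Midpoint Riemann sum of integral from 0 to T of f(t) e^(-s t) dt;
    # T is chosen large enough that the truncated tail is negligible.
    dt = T / steps
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt)
               for k in range(steps)) * dt

a, s = 1.0, 2.0
approx = laplace(lambda t: math.exp(-a * t), s)
print(round(approx, 6), round(1.0 / (s + a), 6))   # both ~ 0.333333
```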
In this article, you will find the study notes on the Z-Transform, covering topics such as the Z-transform, the inverse Z-transform, the region of convergence of the Z-transform, and the properties of the Z-transform.
Z-Transform
The Z-transform of a discrete-time signal x[n] is defined as
X(z) = Σ(n=−∞ to ∞) x[n] z^(−n), where z = r·e^(jω)
The ROC is the region where the Z-transform converges. The Z-transform is an infinite power series, and the series is not convergent for all values of z.
Significance of ROC
ROC gives an idea about values of z for which z-transform can be calculated.
ROC can be used to determine causality of the system.
ROC can be used to determine stability of the system.
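The convergence behaviour behind these points can be sketched numerically: for the causal signal x[n] = aⁿu[n], the Z-transform is z/(z − a) with ROC |z| > |a|. Below, a partial sum of the defining series is compared with the closed form at a point inside the ROC; a and z are made-up illustrative values.

```python
import cmath

# Sketch: partial sum of the z-transform series of x[n] = a^n u[n]
# versus the closed form z/(z - a), evaluated inside the ROC |z| > |a|.
a = 0.5
z = cmath.exp(1j * 0.7)                # |z| = 1 > |a|, inside the ROC
partial = sum((a ** n) * z ** (-n) for n in range(200))
closed = z / (z - a)
print(abs(partial - closed) < 1e-9)    # True: the series converges here
```

Outside the ROC (|z| < |a|) the terms (a/z)ⁿ grow without bound and the series diverges, which is why the ROC determines where the transform is defined.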