Module 1
The label semiconductor itself provides a hint as to its characteristics. The prefix semi- is normally
applied to a range of levels midway between two limits.
Conductor: any material that will support a generous flow of charge when a voltage source of
limited magnitude is applied across its terminals.
Insulator: a material that offers a very low level of conductivity under pressure from an applied
voltage source.
Semiconductor: a material that has a conductivity level somewhere between the extremes of an
insulator and a conductor.
Resistivity
Inversely related to the conductivity of a material is its resistance to the flow of charge, or
current. That is, the higher the conductivity level, the lower the resistance level. In tables, the
term resistivity (ρ, Greek letter rho) is often used when comparing the resistance levels of
materials.
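For a quick sense of scale, resistance follows from resistivity through R = ρ l / A. For a sample 1 cm long with a 1-cm^2 cross-section, R is numerically equal to ρ, so (using typical order-of-magnitude values rather than figures taken from Table 1.1) a copper sample of this size measures on the order of 10^-6 Ω, an intrinsic silicon sample on the order of 10^4 to 10^5 Ω, and a mica sample on the order of 10^12 Ω or more, which is why the conductor/semiconductor/insulator distinction is so sharp.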
Note in Table 1.1 the extreme range between the conductor and insulating materials for the 1-cm
length (1-cm2 area) of the material. Ge and Si have received the attention they have for a number
of reasons.
One very important consideration is the fact that they can be manufactured to a very high purity
level. In fact, recent advances have reduced impurity levels in the pure material to 1 part in 10
billion (1:10,000,000,000).
The ability to change the characteristics of the material significantly through this process, known
as "doping," is yet another reason why Ge and Si have received such wide attention. Further
reasons include the fact that their characteristics can be altered significantly through the
application of heat or light—an important consideration in the development of heat- and light-
sensitive devices.
Some of the unique qualities of Ge and Si noted above are due to their atomic structure. The
atoms of both materials form a very definite pattern that is periodic in nature (i.e., continually
repeats itself). One complete pattern is called a crystal and the periodic arrangement of the atoms
a lattice. For Ge and Si the crystal has the three-dimensional diamond structure of Fig. 1.2
Any material composed solely of repeating crystal structures of the same kind is called a single-
crystal structure. For semiconductor materials of practical application in the electronics field, this
single-crystal feature exists, and, in addition, the periodicity of the structure does not change
significantly with the addition of impurities in the doping process.
How might the structure of the atom affect the electrical characteristics of the material?
As you are aware, the atom is composed of three basic particles: the electron, the proton, and the
neutron. In the atomic lattice, the neutrons and protons form the nucleus, while the electrons
revolve around the nucleus in fixed orbits. The Bohr models of the two most commonly used
semiconductors, germanium and silicon, are shown in Fig. 1.3.
Figure 1.3 Atomic structures: (a) germanium; (b) silicon.
Figure 1.4 Covalent bonding of the silicon atom.
As indicated by Fig. 1.3a, the germanium atom has 32 orbiting electrons, while silicon has 14
orbiting electrons. In each case, there are 4 electrons in the outermost (valence) shell. The
potential (ionization potential) required to remove any one of these 4 valence electrons is lower
than that required for any other electron in the atomic structure.
ENERGY LEVELS
In the isolated atomic structure there are discrete (individual) energy levels associated with each
orbiting electron, as shown in Fig. 1.5. Each material will, in fact, have its own set of permissible
energy levels for the electrons in its atomic structure.
The more distant the electron from the nucleus, the higher the energy state, and any electron that has left
its parent atom has a higher energy state than any electron in the atomic structure.
The energy associated with each electron is measured in electron volts (eV). The unit of measure
is appropriate, since W = QV, as derived from the defining equation for voltage, V = W/Q. The charge Q is the charge
associated with a single electron. Substituting the charge of an electron and a potential difference
of 1 volt into Eq. (1.2) will result in an energy level referred to as one electron volt. Since energy
is also measured in joules and the charge of one electron = 1.6 × 10^-19 coulomb, 1 eV = 1.6 × 10^-19 J.
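For example, taking the band gap of silicon to be about 1.1 eV (a commonly quoted textbook value, used here only for illustration):
W = (1.1 eV)(1.6 × 10^-19 J/eV) ≈ 1.76 × 10^-19 J per electron.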
Figure 1.6 Conduction and valence bands of an insulator, semiconductor, and conductor.
Electrons occupy specific energy states, or levels, in the conduction and valence bands, but they
may not occupy energy states located in the band gap, which is why it is frequently called the
forbidden gap. Relative to figure 1.6, to achieve electrical conduction, electrons must transfer
from energy states in the valence band to energy states in the conduction band. The valence band
represents low energy states of the electrons, in which the electrons are tightly bound to the
atoms of the material. The forbidden band is not a physical void, but rather an energy gap. To
cross the band gap, an electron must attain energy equal to or greater than the lowest allowed
energy state in the conduction band; otherwise it cannot cross the gap.
In metals, once the temperature rises above absolute zero (0 K), electrons acquire sufficient thermal
energy to transfer from the valence band to the conduction band, thus making electrical
conduction possible in the form described by Ohm's law. In semiconductors, the term
ohmic condition is applied to this phenomenon. Atoms are ionized (electrons are torn loose), and
free (conduction) electrons are released to establish an electric current.
The forbidden gap regions associated with insulators and semiconductors represent energy levels
that electrons may not assume. The only way that an electron can move from the valence band to
the conduction band in these materials is by acquiring sufficient energy to cross the gap. Because
of the large forbidden band in insulators, the energy required is so great that the material is usually
damaged or destroyed before conduction can take place.
Example
Solution
Intrinsic semiconductor
An intrinsic semiconductor is one which is pure enough that impurities do not appreciably affect
its electrical behaviour. In this case, all carriers are created due to thermally or optically excited
electrons from the full valence band into the empty conduction band. Thus equal numbers of
electrons and holes are present in an intrinsic semiconductor. Electrons and holes flow in
opposite directions in an electric field, though they contribute to current in the same direction
since they are oppositely charged. Hole current and electron current are not necessarily equal in
an intrinsic semiconductor, however, because electrons and holes have different effective masses
(crystalline analogues to free inertial masses).
The concentration of carriers is strongly dependent on the temperature. At low temperatures, the
valence band is completely full making the material an insulator. Increasing the temperature
leads to an increase in the number of carriers and a corresponding increase in conductivity. This
characteristic shown by intrinsic semiconductor is different from the behaviour of most metals,
which tend to become less conductive at higher temperatures due to increased phonon scattering.
Negative temperature coefficient:
Semiconductor materials such as Ge and Si that show a reduction in resistance with increase in
temperature are said to have a negative temperature coefficient.
n-Type Material
Both the n- and p-type materials are formed by adding a predetermined number of impurity
atoms into a germanium or silicon base. The n-type is created by introducing those impurity
elements that have five valence electrons (pentavalent), such as antimony, arsenic, and
phosphorus. The effect of such impurity elements is indicated in Fig. 1.7 (using antimony as the
impurity in a silicon base).
p-Type Material
The p-type material is formed by doping a pure germanium or silicon crystal with impurity
atoms having three valence electrons. The elements most frequently used for this purpose are
boron, gallium, and indium. The effect of one of these elements, boron, on a base of silicon is
indicated in Fig. 1.9.
Note that there is now an insufficient number of electrons to complete the covalent bonds of the
newly formed lattice. The resulting vacancy is called a hole and is represented by a small circle
or positive sign due to the absence of a negative charge. Since the resulting vacancy will readily
accept a "free" electron:
The diffused impurities with three valence electrons are called acceptor atoms.
The resulting p-type material is electrically neutral, for the same reasons described for the n-type
material.
Electron versus Hole Flow
The effect of the hole on conduction is shown in Fig. 1.10. If a valence electron acquires
sufficient kinetic energy to break its covalent bond and fills the void created by a hole, then a
vacancy, or hole, will be created in the covalent bond that released the electron. There is,
therefore, a transfer of holes to the left and electrons to the right, as shown in Fig. 1.10. The
direction to be used in this text is that of conventional flow, which is indicated by the direction of
hole flow.
In an n-type material (Fig. 1.11a) the electron is called the majority carrier and the hole is the minority
carrier.
For the p-type material the number of holes far outweighs the number of electrons, as shown in
Fig. 1.11b. Therefore:
In a p-type material the hole is the majority carrier and the electron is the minority carrier.
When the fifth electron of a donor atom leaves the parent atom, the atom remaining acquires a
net positive charge: hence the positive sign in the donor-ion representation. For similar reasons,
the negative sign appears in the acceptor ion. The n- and p-type materials represent the basic
building blocks of semiconductor devices. We will find in the next section that the "joining" of a
single n-type material with a p-type material will result in a semiconductor element of
considerable importance in electronic systems.
The construction, characteristics, and models of semiconductor diodes were introduced in Chapter 1. The
primary goal of this chapter is to develop a working knowledge of the diode in a variety of configurations
using models appropriate for the area of application. By chapter's end, the fundamental behavior pattern of
diodes in dc and ac networks should be clearly understood. The concepts learned in this chapter will have
significant carryover in the chapters to follow. For instance, diodes are frequently employed in the description
of the basic construction of transistors and in the analysis of transistor networks in the dc and ac domains.
The content of this chapter will reveal an interesting and very positive side of the study of a field such as
electronic devices and systems: once the basic behavior of a device is understood, its function and response in
an infinite variety of configurations can be determined. The range of applications is endless, yet the
characteristics and models remain the same. The analysis will proceed from one that employs the actual diode
characteristic to one that utilizes the approximate models almost exclusively.
It is important that the role and response of various elements of an electronic system be understood without
continually having to resort to lengthy mathematical procedures. This is usually accomplished through the
approximation process, which can develop into an art itself. Although the results obtained using the actual
characteristics may be slightly different from those obtained using a series of approximations, keep in mind
that the characteristics obtained from a specification sheet may in themselves be slightly different from the
device in actual use.
LOAD-LINE ANALYSIS
The applied load will normally have an important impact on the point or region of operation of a device. If the
analysis is performed in a graphical manner, a line can be drawn on the characteristics of the device that
represents the applied load. The intersection of the load line with the characteristics will determine the point of
operation of the system. Such an analysis is, for obvious reasons, called load-line analysis.
Although the majority of the diode networks analyzed in this chapter do not employ the load-line approach, the
technique is one used quite frequently in subsequent chapters, and this introduction offers the simplest
application of the method. It also permits a validation of the approximate technique described throughout the
remainder of this chapter.
Consider the network of Fig. 2.1a employing a diode having the characteristics of Fig. 2.1b. Note in Fig. 2.1a
that the "pressure" established by the battery is to establish a current through the series circuit in the clockwise
direction. The fact that this current and the defined direction of conduction of the diode are a "match" reveals
that the diode is in the "on" state and conduction has been established. The resulting polarity across the diode
is as shown in Fig. 2.2. Applying Kirchhoff's voltage law around the series loop gives Eq. (2.1): E = VD + ID R.
If we set ID = 0 A in Eq. (2.1) and solve for VD, we have the magnitude of VD on the horizontal axis. Therefore,
with ID = 0 A, Eq. (2.1) becomes VD = E. Similarly, setting VD = 0 V in Eq. (2.1) gives the vertical-axis intercept ID = E/R.
Figure 2.2: Drawing the load line and finding the point of operation
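As a numerical sketch of the method (the supply and resistor values are assumed for illustration, not taken from Fig. 2.1): with E = 10 V and R = 0.5 kΩ, the intercepts of the load line are
VD = E = 10 V at ID = 0 A (horizontal axis)
ID = E/R = 10 V / 0.5 kΩ = 20 mA at VD = 0 V (vertical axis)
The straight line joining these two points is the load line, and its intersection with the diode characteristic fixes the operating (Q) point, VDQ and IDQ.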
SERIES DIODE CONFIGURATIONS WITH DC INPUTS
In this section the approximate model is utilized to investigate a number of series diode
configurations with dc inputs. The content will establish a foundation in diode analysis that will
carry over into the sections and chapters to follow. The procedure described can, in fact, be
applied to networks with any number of diodes in a variety of configurations. For each configuration, the
state of each diode must first be determined, and the appropriate equivalent model is then substituted before
the network is analyzed.
Figure 2.3: (a) Series diode configuration; (b) determining the state of the diode; (c) substituting the
equivalent model for the "on" diode
The resulting current direction does not match the arrow in the diode symbol, so the diode is in the
"off" state, resulting in the equivalent circuit of the above figure. Due to the open circuit, the
diode current is 0 A and the voltage across the resistor R is VR = IR R = (0 A)(R) = 0 V.
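The decision procedure just described (determine the diode state, then substitute the approximate model) can be sketched in a few lines of C++. The supply, resistance, and threshold values below are assumptions chosen only for illustration, not values taken from the figures.

#include <iostream>
using namespace std;

int main() {
    // Hypothetical series circuit: supply E, resistor R, one silicon diode.
    double E  = 8.0;     // supply voltage in volts (assumed)
    double R  = 2200.0;  // series resistance in ohms (assumed)
    double VT = 0.7;     // silicon diode threshold voltage in volts

    double ID, VD;
    if (E > VT) {        // diode "on": replace it with a fixed 0.7 V drop
        ID = (E - VT) / R;
        VD = VT;
    } else {             // diode "off": open circuit, so no current flows
        ID = 0.0;
        VD = E;          // the full supply appears across the open diode
    }
    double VR = ID * R;

    cout << "ID = " << ID * 1e3 << " mA, VD = " << VD
         << " V, VR = " << VR << " V\n";
    return 0;
}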
The process of removing one-half the input signal to establish a dc level is called half-wave
rectification. The effect of using a silicon diode with VT = 0.7 V is demonstrated in Fig. 2.10 for
the forward-bias region. The applied signal must now be at least 0.7 V before the diode can turn
"on." For levels of vi less than 0.7 V, the diode is still in an open-circuit state and vo = 0 V, as
shown in the same figure. When conducting, the difference between vo and vi is a fixed level of
VT = 0.7 V and vo = vi − VT, as shown in the figure. The net effect is a reduction in area above the
axis, which naturally reduces the resulting dc voltage level. For situations where Vm >> VT, the
following equation can be applied to determine the average value with a relatively high level of
accuracy:
Vdc ≅ 0.318(Vm − VT)
Bridge Network
The dc level obtained from a sinusoidal input can be improved 100% using a process called full-
wave rectification. The most familiar network for performing such a function appears in Fig.
2.11 with its four diodes in a bridge configuration. During the period t = 0 to T/2 the polarity of
the input is as shown in Fig. 2.12. The resulting polarities across the ideal diodes are also shown
in Fig. 2.12 to reveal that D2 and D3 are conducting while D1 and D4 are in the "off" state. The
net result is the configuration of Fig. 2.13, with its indicated current and polarity across R. Since
the diodes are ideal the load voltage is vo = vi, as shown in the same figure.
If silicon rather than ideal diodes are employed as shown in Fig. 2.16, an application of
Kirchhoff's voltage law around the conduction path would result in
vi − VT − vo − VT = 0, so that vo = vi − 2VT.
For situations where Vm>>2VT, Eq. (2.11) can be applied for the average value with a relatively
high level of accuracy.
Then again, if Vm is sufficiently greater than 2VT, then Eq. (2.10) is often applied as a first
approximation for Vdc.
transistor)
IC = βdc IB ............ (3.1)
IE = IC + IB ............ (3.2)
When the emitter circuit is opened, there is no supply of free electrons from the emitter to the collector. Even
then, there will be a small collector current, called the reverse saturation collector current
ICBO. This is due to thermally generated electron-hole pairs. Even during normal operation, ICBO is
present, but it is so small that it is usually neglected.
αdc is the fraction of the emitter current which flows to the collector. Since ICBO is very small,
αdc = IC / IE ............ (3.4)
We also have another parameter, from Eq. 3.1:
βdc = IC / IB ............ (3.5)
Dividing Eq. 3.2 by IC, we have IE/IC = 1 + IB/IC, that is, 1/αdc = 1 + 1/βdc, which rearranges to
βdc = αdc / (1 − αdc) and αdc = βdc / (βdc + 1).
Example 1: If the emitter current of a transistor is 8 mA and IB is 1/100 of IC, determine the
levels of IC and IB.
Example 2: (a) Given that αdc = 0.987, determine the corresponding value of βdc.
(b) Given βdc = 120, determine the corresponding value of α.
(c) Given that βdc =180 and IC =2.0 mA, find IE and IB.
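One way of working these examples, using Eq. 3.2 together with the relations derived above:
Example 1: Since IB = IC/100, IE = IC + IB = 1.01 IC = 8 mA, so IC ≈ 7.92 mA and IB = IC/100 ≈ 0.079 mA (79 µA).
Example 2: (a) βdc = αdc/(1 − αdc) = 0.987/0.013 ≈ 75.9. (b) αdc = βdc/(βdc + 1) = 120/121 ≈ 0.992.
(c) IB = IC/βdc = 2.0 mA/180 ≈ 11.1 µA and IE = IC + IB ≈ 2.01 mA.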
3.3 BJT CONFIGURATIONS
A transistor is a 3-terminal device. For applications such as amplifier circuits, four terminals are required
– two for input and two for output. So, one of the three terminals of transistor should be made
common for both input and output in such cases. Accordingly, we will end up with three types of
configurations:
• Common base (CB) configuration
• Common emitter (CE) configuration
• Common collector (CC) configuration
Plot of input current IE versus input voltage VEB for various values of output voltage VCB is shown in
fig 3.5.
If VCB is increased, then IE increases slightly. This is due to the increase in electric field aiding the
flow of electrons from emitter.
When no input signal is applied to the transistor circuit, and only dc voltages are supplied, the currents IC, IB
and the voltage VCE will have certain values. If these values are plotted on the transistor output
characteristics, the point we get is called the 'operating point'. It is also called the 'quiescent point' or just
Q-point.
In the above figure, the currents IBQ (the value of IB at Q), ICQ and the voltage VCEQ are plotted at point Q. In
practice, we have to choose Q-point according to our requirement. If we want to operate in the
middle of active region, we may choose Q as Q-point. For instance in the case of the so called Class
A amplifiers (to be discussed later) we want Q-point to be in the middle of active region. If we want
to operate near saturation, we may choose Q' (Q prime) as the Q-point. If we want to operate near cutoff,
we may choose Q'' as the Q-point. Note that if no biasing is used, the Q-point will be at the origin of the
graph. So, biasing is used to fix the Q-point according to our need.
Types of bias
• Fixed bias Circuit
• Voltage divider bias (Self bias)
Fixed Bias Circuit
The base resistor RB is connected to VCC (instead of a separate VBB); the negative terminal of VCC is not shown
and is assumed to be at ground.
VCC- IB RB - VBE=0
Rearranging, we get IB = (VCC − VBE) / RB.
IC is independent of β, but in order to fulfil the above condition RE must be very large. We would have to
maintain a very large VCC to keep the current at the desired value. The other possibility is to keep
RB small, but this will decrease the voltage drop across RB to the extent that it becomes less than the
voltage drop across RC, i.e., the CB junction becomes forward biased.
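A short C++ sketch of the fixed-bias calculation that follows from VCC − IB RB − VBE = 0. All component values and the value of β below are assumptions chosen only for illustration.

#include <iostream>
using namespace std;

int main() {
    // Hypothetical fixed-bias circuit values (assumed, not from the text):
    double VCC  = 12.0;    // supply voltage, V
    double RB   = 240e3;   // base resistor, ohms
    double RC   = 2.2e3;   // collector resistor, ohms
    double beta = 100.0;   // dc current gain
    double VBE  = 0.7;     // base-emitter drop for a silicon transistor, V

    double IB  = (VCC - VBE) / RB;   // from VCC - IB*RB - VBE = 0
    double IC  = beta * IB;          // IC = beta * IB
    double VCE = VCC - IC * RC;      // KVL around the collector loop

    cout << "IB  = " << IB * 1e6 << " uA\n";
    cout << "IC  = " << IC * 1e3 << " mA\n";
    cout << "VCE = " << VCE << " V\n";
    return 0;
}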
Example 7: For the emitter bias network of Figure below, determine:
(a) IB.
(b) IC.
(c) VCE.
(d) VC.
(e) VE.
(f) VB.
(g) VBC.
Figure 4.1: Junction Field Effect Transistors basic construction and their symbol
Operating characteristics of JFET
To demonstrate the i-v characteristics of a JFET, let's use the following n-channel JFET circuit
layout shown in figure 4.2. For normal operation of a JFET, the two junctions made between the
channel and the two gates should be reverse biased. As can be seen from the circuit diagram
there are two possible conditions to control the variation of channel current, either changing the
voltage level of VGG or VDD. Depending on this there are two operating conditions.
The p-channel FET is similar to the n-channel except that the voltage polarities and current
directions are reversed. Regarding response time, since electrons are more mobile than holes,
p-channel devices respond more slowly than n-channel FETs.
Therefore, the result of applying a negative bias to the gate is to reach the saturation level at a
lower level of VDS. The resulting saturation level for ID has been reduced and in fact will
continue to decrease as VGS is made more and more negative. The region to the right of the
pinch-off locus on the fig. 4.5 is the region typically employed in linear amplifiers (amplifiers
with minimum distortion of the applied signal) and is commonly referred to as the constant-current,
saturation, or linear amplification region. The region to the left of the pinch-off locus of the figure
is referred to as the ohmic or voltage-controlled resistance region. In this region the JFET can
actually be employed as a variable resistor (possibly for an automatic gain control system)
whose resistance is controlled by the applied gate-to-source voltage. In the ohmic region JFET
can be used as a variable resistor whose value is given by
rd = ro / (1 − VGS/VP)^2
where ro is the resistance of the device with VGS = 0 V.
Transfer Characteristics
In a JFET the relationship of VGS (input) and ID (output) is a little more complicated, and is given
by Shockley's equation:
ID = IDSS (1 − VGS/VP)^2
where VP is the pinch-off voltage and IDSS is the drain current measured at VGS = 0 V.
The transfer characteristics defined by Shockley's equation are unaffected by the network in
which the device is employed. The transfer curve can be obtained using Shockley's equation as
shown in Fig. 4.6.
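A small C++ sketch of how the transfer curve can be tabulated from Shockley's equation. The values of IDSS and VP used here are assumptions for illustration only.

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    // Assumed device parameters (for illustration only):
    double IDSS = 10e-3;   // drain current at VGS = 0 V, in amperes
    double VP   = -4.0;    // pinch-off voltage, in volts (negative for n-channel)

    // Tabulate ID = IDSS*(1 - VGS/VP)^2 from VGS = VP (ID = 0) to VGS = 0 V (ID = IDSS).
    for (double VGS = VP; VGS <= 0.0; VGS += 1.0) {
        double ID = IDSS * pow(1.0 - VGS / VP, 2);
        cout << "VGS = " << VGS << " V,  ID = " << ID * 1e3 << " mA\n";
    }
    return 0;
}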
VDS(sat) can also be calculated as
VDS(sat) = VGS − VP
Since VGG is a fixed dc supply, the voltage VGS is fixed in magnitude, resulting in the notation
"fixed-bias configuration." The drain current ID is then controlled by Shockley's equation:
ID = IDSS (1 − VGS/VP)^2
The level of ID is then determined by superimposing a vertical line at VGS = −VGG on the transfer
curve, as shown in figure 4.14 below.
Note from figure 4.13 that the values of the source, drain, and gate voltages with respect to
ground, in relation to VDS and VGS, are given by:
VS = 0 V,  VG = VGS = −VGG,  VD = VDS = VDD − ID RD
Replacing the capacitors (C1 and C2) with open circuits and RG with a short-circuit equivalent (since IG = 0 A)
will result in the network for dc analysis shown in figure 4.16 above.
The current through RS is the source current IS, but IS = ID, so the voltage across RS is VRS = ID RS.
Applying Kirchhoff's voltage law around the gate-source loop (with IG = 0 A) gives VGS + VRS = 0, so that
VGS = −VRS = −ID RS
Note in this case that VGS is a function of the output current ID and not fixed in magnitude as
occurred for the fixed-bias configuration.
The solution of a self-bias configuration is obtained by substituting VGS = −ID RS into Shockley's drain current
equation as follows:
ID = IDSS (1 − VGS/VP)^2 = IDSS (1 + ID RS/VP)^2
Expanding the squared term gives a quadratic equation in ID. Solving this quadratic equation will result in the
appropriate solution for ID.
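The quadratic can also be solved numerically, as in the C++ sketch below. The device and resistor values are assumptions for illustration; the root that is kept is the one lying between 0 and IDSS.

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    // Assumed values for illustration only:
    double IDSS = 8e-3;    // saturation drain current, A
    double VP   = -6.0;    // pinch-off voltage, V (negative for an n-channel JFET)
    double RS   = 1000.0;  // source resistor, ohms

    // Substituting VGS = -ID*RS into ID = IDSS*(1 - VGS/VP)^2 gives
    // a*ID^2 + b*ID + c = 0 with the coefficients below.
    double k = RS / VP;
    double a = IDSS * k * k;
    double b = 2.0 * IDSS * k - 1.0;
    double c = IDSS;

    double disc  = b * b - 4.0 * a * c;
    double root1 = (-b + sqrt(disc)) / (2.0 * a);
    double root2 = (-b - sqrt(disc)) / (2.0 * a);

    // The physically meaningful solution lies between 0 and IDSS.
    double ID  = (root1 >= 0.0 && root1 <= IDSS) ? root1 : root2;
    double VGS = -ID * RS;

    cout << "ID  = " << ID * 1e3 << " mA\n";
    cout << "VGS = " << VGS << " V\n";
    return 0;
}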
The graphical analysis can also be used to determine the operating point, which is the
intersection point of the device transfer curve and the straight line drawn from the
equation VGS = −ID RS, as shown in the following figure.
Applying Kirchhoff's voltage law to the output circuit, the level of VDS can also be determined:
VDS = VDD − ID (RD + RS)
In addition, VS = ID RS, VG = 0 V, and VD = VDS + VS.
PREPARED BY
Wogderes S.
JUNE 2023
UNIT-I
A Computer is an electronic device that stores, manipulates, and retrieves data. We can also say that a
computer processes the information supplied to it and generates data. A System is a group of
several objects with a process. For example, an educational system involves teachers and students
(objects). The teacher teaches the subject to students, i.e., teaching (process). Similarly, a computer
system can have objects and processes.
These are the 7 major components of a computer that you need to know about:
Motherboard
Central Processing Unit (CPU)
Graphical Processing Unit (GPU)
Random Access Memory (RAM)
Storage device
Input Unit
Output Unit
1. Motherboard
A motherboard is a circuit board through which all the different components of a computer
communicate and it keeps everything together. The input and output devices are plugged into the
motherboard for function.
2. Output Unit
The result of the command we provide the computer with through the input device is called the
output. The most used is the monitor since we give commands using the keyboard and after the
processing, the result or outcome is displayed on the monitor.
3. Input Unit
Computers respond to commands given to them in the form of numbers, alphabets, images, etc.
through input units or devices like – keyboards, joysticks, etc.
These inputs are then processed and converted to computer language and then the response is
the output in the language that we understand or the one we have programmed the computer
with.
4. Graphical Processing Unit (GPU)
Another vital component of the computer is the GPU. The Graphics Processing Unit or the video card
helps generate high-end visuals like the ones in video games.
Good graphics like these are also helpful for people who have to execute their work through images
like 3D modelers and others who use resource-intensive software. It generally communicates
directly with the monitor
5. The Random Access Memory (RAM)
6. Storage Unit
The computer needs to store all its data, and it has either a Hard Disk Drive (HDD) or a Solid State
Drive (SSD) for this purpose.
Hard disk drives are disks that store data and this data is read by a mechanical arm. Solid-State drives are
like SIM cards in mobile phones.
They have no moving parts and are faster than hard drives. There is no need for a mechanical arm to find
data at a physical location on the drive, and therefore access takes almost no time at all.
The Central Processing Unit (CPU) is the primary component of a computer that acts as its “control
center.” The CPU, also referred to as the “central” or “main” processor, is a complex set of
electronic circuitry that runs the machine's operating system and apps.
The computer's central processing unit (CPU) is the portion of a computer that retrieves and
executes instructions. The CPU is essentially the brain of a CAD system. It consists of an
arithmetic and logic unit (ALU), a control unit, and various registers. The CPU is often simply
referred to as the processor.
CPUs perform logic, control, arithmetic, input and output operations specified by its programming
to perform basic tasks.
b) Hardware
c) Software
Hardware: Hardware of a computer system can be referred to as anything which we can touch and
feel. Example: Keyboard and Mouse.
Input Devices(I/P)
Output Devices(O/P)
ALU: It performs the Arithmetic and Logical Operations such as
+, -, *, / (Arithmetic Operators)
CU: Every Operation such as storing, computing and retrieving the data should be governed by
the control unit.
2) Secondary Memory
Primary memory: The following are the types of memories which are treated as primary.
ROM: It represents Read Only Memory, which stores data and instructions even when the computer is turned
off. The contents of ROM can't be modified once they are written. It is used to store the
BIOS information.
RAM: It represents Random Access Memory, which stores data and instructions while the computer is
turned on. The contents of RAM can be modified any number of times by instructions. It is used to
store the programs under execution.
Cache memory: It is used to store the data and instructions referred by processor.
Secondary Memory: The following are the different kinds of memories Magnetic Storage: The
Magnetic Storage devices store information that can be read, erased and rewritten a number of
times. Example: Floppy Disks, Hard Disks, Magnetic Tapes
Optical Storage: The optical storage devices that use laser beams to read and write stored data.
Example: CD (Compact Disk),DVD(Digital Versatile Disk)
COMPUTER SOFTWARE
Software of a computer system can be referred to as anything which we can see but cannot touch. Example:
Windows, icons. Computer software is divided into two broad categories: system software and
application software. System software manages the computer resources. It provides the interface
between the hardware and the users. Application software, on the other hand, is directly responsible
for helping users solve their problems.
System Software
System software consists of programs that manage the hardware resources of a computer and
perform required information processing tasks. These programs are divided into three classes: the
operating system, system support, and system development.
Language Translators These are the programs which are used for converting the programs in
one language into machine language instructions, so that they can be executed by the computer.
An interpreter takes one statement of the program at a time, translates it into machine language
instructions and then immediately executes it before moving on to the next statement.
COMPILER                                   INTERPRETER
The compiled programs run faster.          The interpreted programs run slower.
Most of the languages use a compiler.      Very few languages use an interpreter.
The following is the procedure for turning a program written in C into machine language. The process is presented
in a straightforward, linear fashion, but you should recognize that these steps are repeated many
times during development to correct errors and make improvements to the code. The following are
the four steps in this process:
4) Executing the program
C Compilers
Step  Phase         Output             Tool                   File
1     Text Editor   Source Code        Edit, Notepad, etc.    .C
3     Linker        Executable Code    C Compiler             .EXE
4     Runner        Executable Code    C Compiler             .EXE
ALGORITHM
Algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values
may be, an algorithm terminates after executing a finite number of instructions. We represent an
algorithm using a pseudo language that is a combination of the constructs of a programming
language together with informal English statements.
•Uniqueness – results of each step are uniquely defined and only depend on the input
•Finiteness – the algorithm stops after a finite number of instructions are executed.
•Generality – the algorithm applies to a set of inputs.
FLOWCHART
Flowchart is a diagrammatic representation of an algorithm. A flowchart is very helpful in writing
a program and explaining it to others.
Symbols Used in Flowchart
Different symbols are used for different states in flowchart, For example: Input/Output and
decision making has different symbols. The table below describes all the symbols that are used in
making flowchart
KEYWORDS
C++ keywords are the words that convey a special meaning to the compiler. The keywords
cannot be used as variable names. The list of keywords is given below:
auto break case char const
continue default do double else
enum extern float for goto
if int long register return
short signed sizeof static struct
switch typedef union unsigned void
volatile while
IDENTIFIERS
Identifiers are used as the general terminology for the names of variables, functions and arrays.
These are user defined names consisting of arbitrarily long sequence of letters and digits with
either a letter or the underscore (_) as a first character. There are certain rules that should be
followed while naming C++ identifiers:
EVALUATION OF EXPRESSION
At first, the expressions within parenthesis are evaluated. If no parenthesis is present, then the
arithmetic expression is evaluated from left to right. There are two priority levels of operators in
C.
High priority: * / %
Low priority: + -
The evaluation procedure of an arithmetic expression includes two left to right passes through
the entire expression. In the first pass, the high priority operators are applied as they are
encountered and in the second pass, low priority operations are applied as they are encountered.
Suppose, we have an arithmetic expression as:
x = 9 – 12 / 3 + 3 *2 - 1
This expression is evaluated in two left to right passes as:
First Pass
Step 1: x = 9-4 + 3 * 2 – 1
Step 2: x = 9 – 4 + 6 – 1
Second Pass
Step 1: x = 5 + 6 – 1
Step 2: x = 11 – 1
Step 3: x = 10
But when parenthesis is used in the same expression, the order of evaluation gets changed.
For example,
x = 9 – 12 / (3 + 3) * (2 – 1)
When parentheses are present, the expression inside the parentheses is evaluated first, proceeding from
left to right.
First Pass
Step 1: x = 9 – 12 / 6 * (2 – 1)
Step 2: x= 9 – 12 / 6 * 1
Second Pass
Step 1: x= 9 – 2 * 1
Step 2: x = 9 – 2
Third Pass
Step 3: x= 7
There may even arise a case where nested parentheses are present (i.e. parenthesis inside
parenthesis). In such case, the expression inside the innermost set of parentheses is evaluated
first and then the outer parentheses are evaluated.
For example, we have an expression as:
x = 9 – ((12 / 3) + 3 * 2) – 1
The expression is now evaluated as:
First Pass:
Step 1: x = 9 – (4 + 3 * 2) – 1
Step 2: x= 9 – (4 + 6) – 1
Step 3: x= 9 – 10 -1
Second Pass
Step 1: x= - 1 – 1
Step 2: x = -2
Note: The number of evaluation steps is equal to the number of operators in the arithmetic
expression
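These hand evaluations can be checked directly in C++, since / and * share the same precedence and group left to right, as do + and -. A minimal sketch:

#include <iostream>
using namespace std;

int main() {
    // The three expressions worked out by hand above:
    int a = 9 - 12 / 3 + 3 * 2 - 1;       // expected 10
    int b = 9 - 12 / (3 + 3) * (2 - 1);   // expected 7
    int c = 9 - ((12 / 3) + 3 * 2) - 1;   // expected -2

    cout << a << " " << b << " " << c << "\n";   // prints: 10 7 -2
    return 0;
}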
UNIT-II
✓ Conditional Statements
➢ IF Statement
➢ Switch Statement
✓ Looping Statements
✓ Other Statements
✓ The following example demonstrates that correctly specifying the order in which the
actions execute is important.
✓ Consider the "rise-and-shine algorithm" followed by one executive for getting out of bed
and going to work:
✓ Like many other procedural languages, C++ provides different forms of statements for
different purposes.
➢ Loop statements are used for specifying computations, which need to be repeated
until a certain logical condition is satisfied.
➢ Flow control statements are used to divert the execution path to another part of
the program.
✓ Specifying the order in which statements (actions) execute in a program is called program
control.
✓ If Statement:
➢ Syntax:
if (expression)
statement;
➢ For example:
if (count != 0)
    average = sum / count;
✓ Switch Statement:
➢ Syntax:
switch (expression) {
case constant1:
    statements;
    break;
...
default:
    statements;
    break;
}
✓ Switch Statement:
➢ For example, suppose we have parsed a binary arithmetic operation into its three
components and stored these in variables operator, operand1, and operand2.
➢ The following switch statement performs the operation and stores the result in
result.
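A sketch of the switch statement being described. Because operator is a reserved word in C++, the variable is named op here, and the sample operand values are assumptions for illustration.

#include <iostream>
using namespace std;

int main() {
    // Assumed parsed components of a binary arithmetic operation:
    double operand1 = 10, operand2 = 4, result = 0;
    char op = '-';                 // 'operator' is a C++ keyword, so 'op' is used

    switch (op) {
        case '+': result = operand1 + operand2; break;
        case '-': result = operand1 - operand2; break;
        case '*': result = operand1 * operand2; break;
        case '/': result = (operand2 != 0) ? operand1 / operand2 : 0; break;
        default:  cout << "unknown operator\n"; break;
    }
    cout << "result = " << result << "\n";   // prints: result = 6
    return 0;
}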
✓ The ‘while’ Statement:
➢ The while statement (also called while loop) provides a way of repeating a
statement while a condition holds. It is one of the three flavours of iteration in C++.
➢ Syntax:
while (expression)
    statement;
➢ For example, suppose we wish to calculate the sum of all numbers from 1 to 10.
i = 1;
sum = 0;
while (i <= 10) {
    sum += i;
    i++;
}
✓ The ‘for’ Statement:
➢ The for statement (also called for loop) is similar to the while statement, but has
two additional components: an expression which is evaluated only once before
everything else(initialization ), and an expression which is evaluated once at the
end of each iteration(incremental/decremental).
➢ Syntax:
for (expression1; expression2; expression3)
    statement;
which is equivalent to:
expression1;
while (expression2) {
    statement;
    expression3;
}
➢ For example, the sum of 1 to 10 can also be written using a for loop:
sum = 0;
for (i = 1; i <= 10; ++i)
    sum += i;
➢ Any of the three expressions in a for loop may be empty.
➢ For example, removing the first and the third expression gives us something identical to a
while loop:
for (; expression; )
    something;
which is equivalent to:
while (expression)
    something;
➢ The do statement (also called do loop) is similar to the while statement, except that
its body is executed first and then the loop condition is examined.
➢ Syntax:
do
    statement;
while (expression);
or
do {
    statements;
} while (expression);
➢ For example, suppose we wish to repeatedly read a value and print its square, and stop
when the value is zero.
do {
cin >> n;
cout << n * n << '\n';    // print the square of the value just read
} while (n != 0);
➢ The continue statement terminates the current iteration of a loop and instead jumps
to the next iteration.
➢ For example, a loop which repeatedly reads in a number, processes it but ignores
negative numbers, and terminates when the number is zero, may be expressed as:
do {
    cin >> num;
    if (num < 0)
        continue;         // ignore negative numbers
    // process num here ...
} while (num != 0);
✓ The ‘break’ Statement:
➢ A break statement may appear inside a loop (while, do, or for) or a switch
statement.
➢ Like the continue statement, a break statement only applies to the loop or switch
immediately enclosing it.
➢ For example, suppose we wish to read in a user password, but would like to allow
the user a limited number of attempts:
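A sketch of such a loop. The password value, the prompt text, and the limit of three attempts are assumptions made only for this illustration.

#include <iostream>
#include <string>
using namespace std;

int main() {
    const string correct = "secret";   // assumed stored password
    const int maxAttempts = 3;         // assumed limit on attempts
    string guess;

    for (int i = 0; i < maxAttempts; ++i) {
        cout << "Enter password: ";
        cin >> guess;
        if (guess == correct) {
            cout << "Access granted\n";
            break;                     // leave the loop as soon as the password matches
        }
        cout << "Incorrect!\n";
    }
    return 0;
}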
✓ The ‘goto’ Statement:
goto label;
➢ The label should be followed by a colon and appear before a statement within the
same function as the goto statement itself.
➢ For example, the role of the break statement in the for loop in the previous section
can be emulated by a goto:
out:
//etc...
✓ The ‘return’ Statement :
return expression; where expression denotes the value returned by the function.
➢ The type of this value should match the return type of the function.
return;
➢ The only function we have discussed so far is main, whose return type is always
int.
➢ The return value of main is what the program returns to the operating system when
it completes its execution.
➢ Under UNIX, for example, it is conventional to return 0 from main when the
program executes without errors. Otherwise, a non-zero error code is returned. For
example:
return 0;
➢ When a function has a non-void return value (as in the above example), failing to
return a value will result in a compiler warning.
➢ The actual return value will be undefined in this case (i.e., it will be whatever value
which happens to be in its corresponding memory location at the time).
UNIT-III
✓ During program execution these values are accessed by using the identifier associated
with the variable in expressions etc.
✓ If only a few values were involved a different identifier could be declared for each variable,
but then a loop could not be used to enter the values.
✓ Using a loop and assuming that after a value has been entered and used no further use will
be made of it allows the following code to be written. This code enters six numbers and
outputs their sum:
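A minimal sketch of such code (the variable names are assumptions):

#include <iostream>
using namespace std;

int main() {
    int number, sum = 0;
    for (int i = 0; i < 6; i++) {   // six passes, one number per pass
        cin >> number;              // each value is used once and then overwritten
        sum += number;
    }
    cout << "Sum is " << sum << "\n";
    return 0;
}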
✓ An individual element of an array is identified by its own unique index (or subscript).
✓ The index must be an integer and indicates the position of the element in the array.
Syntax:
type arrayName[arraySize];
For example, data on the average temperature over the year in Ethiopia for each of the last
100 years could be stored in an array declared as follows:
float annual_temp[100];
The first element in an array always has the index 0, and if the array has n elements the
last element will have the index n-1.
An array element is accessed by writing the identifier of the array followed by the
subscript in square brackets.
Thus to set the 15th element of the array above to 1.5 the following assignment is used:
annual_temp[14] = 1.5; Note that since the first element is at index 0, then the ith element
is at index i-1.
loop statements are the usual means of accessing every element in an array.
Here, the first NE elements of the array annual_temp are given values from the input stream
cin.
The following code finds the average temperature recorded in the first ten elements of the
array.
sum = 0.0;
for (i = 0; i < 10; i++)
    sum += annual_temp[i];
average = sum / 10;
int x [SIZE] ;
int y [SIZE] ;
✓ Only individual elements can be assigned to using the index operator, e.g., x[1] = y[2];.
To make all elements in 'x' the same as those in 'y' (equivalent to assignment), a loop has
to be used.
Therefore a two dimensional array has two subscripts, a three dimensional array has three
subscripts, and so on.
Arrays can have any number of dimensions, although most of the arrays that you create
will likely be of one or two dimensions.
Syntax
type arrayName[size1][size2];
To initialize a multidimensional arrays , you must assign the list of values to array elements
in order, with last array subscript changing while the first subscript holds steady.
for the sake of clarity, the program could group the initializations with braces, as shown
below.
int theArray[5][3] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {10, 11, 12}, {13, 14,15} };
If a one-dimensional array is initialized, the size can be omitted as it can be found from the
number of initializing elements:
int x[] = { 1, 2, 3, 4} ;
✓ This initialization creates an array of four elements.
Note however:
String Representation and Manipulation
A string is nothing but a sequence of characters in which the last character is the null
character '\0'.
Any array of character can be converted into string type in C++ by appending this special
character at the end of the array sequence.
Syntax
char StringName[ ];
char s1[10];
The string variable s1 could hold strings of length up to nine characters since space is
needed for the final null character.
Strings can be initialized at the time of declaration just as other variables are initialized.
For example:
char s1[] = "example";            // s1 |e|x|a|m|p|l|e|\0|
char s2[20] = "another example";  // s2 |a|n|o|t|h|e|r| |e|x|a|m|p|l|e|\0|?|?|?|?|
✓ Note that the length of a string does not include the terminating null character.
✓ would print
✓ When the input stream cin is used space characters, newline etc. are used as separators
and terminators.
✓ Thus when inputting numeric data cin skips over any leading spaces and terminates
reading a value when it finds a white-space character (space, tab, newline etc. ).
✓ This same system is used for the input of strings, hence a string to be input cannot start
with leading spaces, also if it has a space character in the middle then input will be
terminated on that space character.
✓ To read a string with several words in it using cin we have to call cin once for each
word.
cin >> firstname;
cin >> lastname;
cout << firstname << " " << lastname;
✓ We have solved the problem of reading strings with embedded blanks, but what about
strings with multiple lines?
✓ It turns out that the cin.get() function can take a third argument to help out in this
situation.This argument specifies the character that tells the function to stop reading.
✓ The default value of this argument is the newline('\n')character, but if you call the
function with some other character for this argument, the default will be overridden by
the specified character.
✓ In this example, we call the function with a dollar sign ('$') as the third argument:
#include <iostream.h>
const int max = 80;              // buffer size assumed for this example
void main(){
char str[max];
cin.get(str, max, '$');          // read until a '$' (or max-1 characters) is encountered
cout << "\nYou entered:\n" << str;
}
strcpy(destination, source);
✓ strcpy copies characters from the location specified by source to the location specified
by destination.
✓ There is also another function, strncpy, which is like strcpy except that it copies only a
specified number of characters.
Concatenating strings
✓ In C++ the + operator cannot normally be used to concatenate string, as it can in some
languages such as BASIC; that is you can't say: str3 = str1 + str2;
✓ The function strcat concatenates (appends) one string to the end of another string.
strcat(destination, source);
The function strncat is like strcat except that it copies only a specified number of
characters.
strncat(destination, source, int n);
Example:
#include <iostream.h>
#include <string.h>
void main()
{
}
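A minimal sketch of what such an example might contain, using strcpy and strcat on assumed sample strings:

#include <iostream>
#include <string.h>
using namespace std;
int main()
{
    char str1[40] = "Hello, ";
    char str2[] = "world";
    char str3[40];

    strcpy(str3, str1);                    // copy str1 into str3
    strcat(str3, str2);                    // append str2 to the end of str3
    cout << str3 << "\n";                  // prints: Hello, world
    cout << "Length: " << strlen(str3);    // 12 characters; the '\0' is not counted
    return 0;
}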
UNIT IV
Explain the steps, tools and technical approaches involved in program design
STAGES / STEPS INVOLVED IN PROGRAMMING
4. Coding
In this step programmer writes the instructions in a computer language to
solve the problem.
5. Debugging
In this stage we remove all the errors in the program, because when we are
coding there is a chance that some mistakes occur. This is
done several times, until all the errors are removed from the program and the
program becomes error-free.
6. Testing
In this stage we test the program by entering dummy data (includes usual,
unusual and invalid data) to check the behavior and result of the program
towards the given data.
7. Final Output
After going through all the above stages, the program is given the TRUE
DATA. Here the programmer expects the positive results of the program and
expects full efficiency of the program.
8. Documentation
Most programmers neglect this stage, giving many reasons, but it is
very important because it will help the programmer to correct the problems
that may occur in the program.
Characteristics of Algorithm
All the instructions of algorithm should be simple.
The logic of each step must be clear.
There should be a finite number of steps for solving the problem.
UNIT V
Abstraction. Abstraction essentially contains and conceals the inner workings of object-
oriented programming code to create simpler interfaces.
Self-test
a) James Gosling
b) Charles Babbage
c) Dennis Ritchie
d) Bjarne Stroustrup
a) pascal
b) machine language
c) C
d) C#
4. Which of the following refers to the fastest, biggest and the most expensive computer?
A. Mainframe Computer
B. Supercomputer
C. Hybrid Computer
D. Micro Computer
5. Artificial Intelligence is an example of ?
A. Third Generation
B. Fourth Generation
C. Fifth Generation
a. $var_name
b. VAR_123
c. varname@
d. None of the above
a. @
b. #
c. &
d. %
a. Encapsulation
b. Inheritance
c. Polymorphism
d. All of the above
10. The programming language that has the ability to create new data types is called___.
a. Overloaded c. Reprehensible
b. Encapsulated d. Extensible
11. Which of the following refers to characteristics of an array?
a. Right to left
b. Left to right
c. Top to bottom
d. Bottom-up
rdd2
x (5)
_DATE_
A3O
Digital Logic Design
Introduction
A digital computer stores data in terms of digits (numbers) and proceeds in discrete steps from one state to the next.
The states of a digital computer typically involve binary digits which may take the form of the presence or absence of
magnetic markers in a storage medium, on-off switches or relays. In digital computers, even letters, words and whole
texts are represented digitally.
Digital Logic is the basis of electronic systems, such as computers and cell phones. Digital Logic is rooted in
binary code, a series of zeroes and ones each having an opposite value. This system facilitates the design of
electronic circuits that convey information, including logic gates. Digital Logic gate functions include and, or
and not. The value system translates input signals into specific output. Digital Logic facilitates computing,
robotics and other electronic applications.
Digital Logic Design is foundational to the fields of electrical engineering and computer engineering. Digital
Logic designers build complex electronic components that use both electrical and computational
characteristics. These characteristics may involve power, current, logical function, protocol and user input.
Digital Logic Design is used to develop hardware, such as circuit boards and microchip processors. This
hardware processes user input, system protocol and other data in computers, navigational systems, cell phones
or other high-tech systems.
The numeric system we use daily is the decimal system, but this system is not convenient for machines, since the
information is handled in coded form as on or off bits; this way of codifying leads us to the necessity of knowing
the positional calculation, which will allow us to express a number in any base where we need it.
A base of a number system or radix defines the range of values that a digit may have.
In the binary system or base 2, there can be only two values for each digit of a number, either a "0" or a "1".
In the octal system or base 8, there can be eight choices for each digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7".
In the decimal system or base 10, there are ten different values for each digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7", "8", "9".
In the hexadecimal system or base 16, there are sixteen different values for each digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C", "D", "E", and "F".
Let's think about what you do to obtain each digit. As an example, let's start with the decimal number 1234 and
extract its digits in base 10. To extract the last digit, you move the decimal point left by one digit, which means
that you divide the given number by its base 10.
1234/10 = 123 + 4/10
The remainder of 4 is the last digit. To extract the next last digit, you again move the decimal point left by one digit and
see what drops out.
123/10 = 12 + 3/10
The remainder of 3 is the next last digit. You repeat this process until there is nothing left. Then you stop. In summary,
you do the following:
Now, let's try a nontrivial example. Let's express the decimal number 1341 in hexadecimal notation. Note that the desired base is
16, so we repeatedly divide the given decimal number by 16.
Quotient Remainder
1341/16 = 83 13 ------+
83/16 = 5 3 ----+ |
5/16 = 0 5 --+ | | (Stop when the quotient is 0)
| | |
5 3 D (HEX; Base 16)
In conclusion, the easiest way to convert fixed-point numbers to any base is to convert each part separately. We begin by
separating the number into its integer and fractional parts. The integer part is converted using the remainder method, by
successive division of the number by the base until a zero quotient is obtained. At each division, the remainder is kept, and
then the new number in the base r is obtained by reading the remainders from the last remainder upwards.
The conversion of the fractional part can be obtained by successively multiplying the fraction by the base. If we iterate
this process on the remaining fraction, then we obtain successive significant digits. This method forms the basis of
the multiplication method of converting fractions between bases.
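A compact C++ sketch of the two methods just described. The sample value 3315.3, the target base 16, and the four fractional digits produced are assumptions for illustration only.

#include <iostream>
#include <string>
using namespace std;

int main() {
    const char digits[] = "0123456789ABCDEF";
    int base = 16;                 // target base (assumed for illustration)
    double value = 3315.3;         // decimal number to convert (assumed)
    int places = 4;                // fractional digits to produce

    long intPart = (long)value;
    double frac = value - intPart;

    // Remainder method: divide by the base, keep remainders, read upwards.
    string intDigits = intPart == 0 ? "0" : "";
    while (intPart > 0) {
        intDigits = digits[intPart % base] + intDigits;
        intPart /= base;
    }

    // Multiplication method: multiply the fraction by the base repeatedly.
    string fracDigits = "";
    for (int i = 0; i < places; i++) {
        frac *= base;
        int d = (int)frac;
        fracDigits += digits[d];
        frac -= d;
    }

    cout << value << " (base 10) = " << intDigits << "." << fracDigits
         << " (base " << base << ")\n";   // prints roughly CF3.4CCC
    return 0;
}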
Example. Convert the decimal number 3315 to hexadecimal notation. What about the hexadecimal equivalent of the
decimal number 3315.3?
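One possible working of this example, using the remainder and multiplication methods:
3315/16 = 207 remainder 3
 207/16 =  12 remainder 15 (F)
  12/16 =   0 remainder 12 (C)   (stop when the quotient is 0)
Reading the remainders upwards gives (3315)10 = (CF3)16. For the fractional part, 0.3 × 16 = 4.8 (digit 4),
0.8 × 16 = 12.8 (digit C), 0.8 × 16 = 12.8 (digit C), ..., so (3315.3)10 ≈ (CF3.4CC...)16.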
Let's think more carefully what a decimal number means. For example, 1234 means that there are four boxes (digits);
and there are 4 one's in the right-most box (least significant digit), 3 ten's in the next box, 2 hundred's in the next box,
and finally 1 thousand's in the left-most box (most significant digit). The total is 1234:
Original Number: 1 2 3 4
| | | |
How Many Tokens: 1 2 3 4
Digit/Token Value: 1000 100 10 1
Value: 1000 + 200 + 30 + 4 = 1234
Thus, each digit has a value: 10^0=1 for the least significant digit, increasing to 10^1=10, 10^2=100, 10^3=1000, and so
forth.
Likewise, the least significant digit in a hexadecimal number has a value of 16^0 = 1 for the least significant digit,
increasing to 16^1 = 16 for the next digit, 16^2 = 256 for the next, 16^3 = 4096 for the next, and so forth. Thus, 1234 means
that there are four boxes (digits); and there are 4 one's in the right-most box (least significant digit), 3 sixteen's in the
next box, 2 256's in the next, and 1 4096's in the left-most box (most significant digit). The total is:
4096 + 512 + 48 + 4 = 4660 (in decimal)
In summary, the conversion from any base to base 10 can be obtained from the formulae
n−1
=
d b
i
x Where b is the base, d the digit at position i, m the number of digit after the decimal point, n the number
10 i i
i=−m
of digits of the integer part and X10 is the obtained number in decimal. This form the basic of the polynomial method of
converting numbers from any base to decimal
For example, (234.14)8 = 2×8^2 + 3×8^1 + 4×8^0 + 1×8^-1 + 4×8^-2 = 2×64 + 3×8 + 4×1 + 1/8 + 4/64 = 156.1875
Example: Convert the hexadecimal number 4B3.3 to decimal.
Solution:
Original Number: 4 B 3 . 3
| | | |
How Many Tokens: 4 11 3 3
Digit/Token Value: 256 16 1 0.0625
Value: 1024 +176 + 3 + 0.1875 = 1203.1875
Example: Convert the octal number 234.14 to decimal.
Solution:
Original Number: 2 3 4 . 1 4
| | | | |
How Many Tokens: 2 3 4 1 4
Digit/Token Value: 64 8 1 0.125 0.015625
Value: 128 + 24 + 4 + 0.125 + 0.0625 = 156.1875
As demonstrated by the table below, there is a direct correspondence between the binary system and the octal system,
with three binary digits corresponding to one octal digit. Likewise, four binary digits translate directly into one
hexadecimal digit.
Binary   Octal   Hexadecimal   Decimal
0000     00      0             0
0001 01 1 1
0010 02 2 2
0011 03 3 3
0100 04 4 4
0101 05 5 5
0110 06 6 6
0111 07 7 7
1000 10 8 8
1001 11 9 9
1010 12 A 10
1011 13 B 11
1100 14 C 12
1101 15 D 13
1110 16 E 14
1111 17 F 15
With such relationship, In order to convert a binary number to octal, we partition the base 2 number into groups of three
starting from the radix point, and pad the outermost groups with 0’s as needed to form triples. Then, we convert each
triple to the octal equivalent.
Notice that the leftmost two bits are padded with a 0 on the left in order to create a full triplet.
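An illustrative conversion of this kind (the bit pattern chosen here is only an example):
10110100 (base 2) → 010 | 110 | 100 → (264)8
Grouping from the radix point leaves only two bits (10) in the leftmost group, so a 0 is padded on the left to form the triple 010.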
The conversion methods can be used to convert a number from any base to any other base, but it may not be very
intuitive to convert something like 513.03 to base 7. As an aid in performing an unnatural conversion, we can convert to
the more familiar base 10 form as an intermediate step, and then continue the conversion from base 10 to the target base.
As a general rule, we use the polynomial method when converting into base 10, and we use the remainder and
multiplication methods when converting out of base 10.
Numeric complements
The radix complement of an n-digit number y in radix b is, by definition, b^n − y. Adding this to x results in the value x +
b^n − y or x − y + b^n. Assuming y ≤ x, the result will always be greater than or equal to b^n, and dropping the initial '1' is the same as
subtracting b^n, making the result x − y + b^n − b^n or just x − y, the desired result.
The radix complement is most easily obtained by adding 1 to the diminished radix complement, which is (b^n − 1) − y.
Since (b^n − 1) is the digit b − 1 repeated n times (because b^n − 1 = b^n − 1^n = (b − 1)(b^(n−1) + b^(n−2) + ... + b + 1),
see also binomial numbers), the diminished radix complement of a number is found by complementing
each digit with respect to b − 1 (that is, subtracting each digit in y from b − 1). Adding 1 to obtain the radix complement
can be done separately, but is most often combined with the addition of x and the complement of y.
In the decimal numbering system, the radix complement is called the ten's complement and the diminished radix
complement the nines' complement.
In binary, the radix complement is called the two's complement and the diminished radix complement the ones'
complement. The naming of complements in other bases is similar.
- Decimal example
To subtract a decimal number y from another number x using the method of complements, the ten's complement of y
(nines' complement plus 1) is added to x. Typically, the nines' complement of y is first obtained by determining the
complement of each digit. The complement of a decimal digit in the nines' complement system is the number that must
be added to it to produce 9. The complement of 3 is 6, the complement of 7 is 2, and so on. Given a subtraction
problem:
873 (x)
- 218 (y)
The nines' complement of y (218) is 781. In this case, because y is three digits long, this is the same as subtracting y
from 999. (The number of 9's is equal to the number of digits of y.)
873 (x)
+ 781 (complement of y)
+ 1 (to get the ten's complement of y)
=====
1655
The first "1" digit is then dropped, giving 655, the correct answer.
If the subtrahend has fewer digits than the minuend, leading zeros must be added which will become leading nines when
the nines' complement is taken. For example:
- Binary example
The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained
by inverting each bit (changing '0' to '1' and vice versa). And adding 1 to get the two's complement can be done by
simulating a carry into the least significant bit. For example:
Dropping the initial "1" gives the answer: 01001110 (equals decimal 78)
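A small C++ sketch of this procedure on 8-bit values. The operands below are assumptions chosen so that the difference matches the result quoted above; unsigned 8-bit arithmetic discards the carry out of the most significant bit automatically.

#include <iostream>
#include <bitset>
using namespace std;

int main() {
    unsigned char x = 214;                 // minuend (assumed)
    unsigned char y = 136;                 // subtrahend (assumed)
    unsigned char onesComp = ~y;           // invert every bit of y (ones' complement)
    unsigned char twosComp = onesComp + 1; // add 1 to get the two's complement
    unsigned char diff = x + twosComp;     // the carry out of bit 7 is discarded

    cout << "x - y = " << (int)diff << "\n";   // prints: x - y = 78
    cout << bitset<8>(diff) << "\n";           // prints: 01001110
    return 0;
}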
The signed magnitude (also referred to as sign and magnitude) representation is most familiar to us as the base 10
number system. A plus or minus sign to the left of a number indicates whether the number is positive or negative as in
(+12)10 or (−12)10. In the binary signed magnitude representation, the leftmost bit is used for the sign, which takes on a
value of 0 or 1 for ‘+’ or ‘−’, respectively. The remaining bits contain the absolute magnitude.
(+12)10 = (00001100)2
(−12)10 = (10001100)2
The negative number is formed by simply changing the sign bit in the positive number from 0 to 1. Notice that there are
both positive and negative representations for zero: +0= 00000000 and -0= 10000000.
The one’s complement operation is trivial to perform: convert all of the 1’s in the number to 0’s, and all of the 0’s to 1’s.
See the fourth column in Table1 for examples. We can observe from the table that in the one’s complement
representation the leftmost bit is 0 for positive numbers and 1 for negative numbers, as it is for the signed magnitude
representation. This negation, changing 1’s to 0’s and changing 0’s to 1’s, is known as complementing the bits.
Consider again representing (+12)10 and (−12)10 in an eight-bit format, now using the one’s complement
representation:
(+12)10 = (00001100)2
(−12)10 = (11110011)2
Note again that there are representations for both +0 and −0, which are 00000000 and 11111111, respectively. As a
result, there are only 2^8 − 1 = 255 different numbers that can be represented, even though there are 2^8 different bit
patterns.
The one’s complement representation is not commonly used. This is at least partly due to the difficulty in making
comparisons when there are two representations for 0. There is also additional complexity involved in adding numbers.
The two’s complement is formed in a way similar to forming the one’s complement: complement all of the bits in the
number, but then add 1, and if that addition results in a carry-out from the most significant bit of the number, discard the
carry-out.
Examination of the fifth column of Table above shows that in the two’s complement representation, the leftmost bit is
again 0 for positive numbers and is 1 for negative numbers. However, this number format does not have the unfortunate
characteristic of signed-magnitude and one’s complement representations: it has only one representation for zero. To see
that this is true, consider forming the negative of (+0)10, which has the bit pattern (+0)10 = (00000000)2. Complementing
each bit produces (11111111)2, and adding 1 to it yields (00000000)2 again (with a carry out of the leftmost bit); thus (−0)10 = (00000000)2. The carry out of the leftmost position is discarded in two's
complement addition (except when detecting an overflow condition). Since there is only one representation for 0, and
since all bit patterns are valid, there are 28 = 256 different numbers that can be represented.
Consider again representing (+12)10 and (−12)10 in an eight-bit format, this time using the two’s complement
representation. Starting with (+12)10 = (00001100)2, complement the number, producing (11110011)2, and then add 1 to obtain (11110100)2:
(+12)10 = (00001100)2
(−12)10 = (11110100)2
There is an equal number of positive and negative numbers provided zero is considered to be a positive number, which
is reasonable because its sign bit is 0. The positive numbers start at 0, but the negative numbers start at −1, and so the
magnitude of the most negative number is one greater than the magnitude of the most positive number. The positive
number with the largest magnitude is +127, and the negative number with the largest magnitude is −128. There is thus
no positive number that can be represented that corresponds to the negative of −128. If we try to form the two’s
complement negative of −128, then we will arrive at a negative number, as shown below:
(−128)10 = (10000000)2
complement the bits:    (01111111)2
add 1:                + (00000001)2
                        —————————
                        (10000000)2 = (−128)10
The two’s complement representation is the representation most commonly used in conventional computers.
- Excess Representation
In the excess or biased representation, the number is treated as unsigned, but is “shifted” in value by subtracting the bias
from it. The concept is to assign the smallest numerical bit pattern, all zeros, to the negative of the bias, and assign the
remaining numbers in sequence as the bit patterns increase in magnitude. A convenient way to think of an excess
representation is that a number is represented as the sum of its two’s complement form and another number, which is
known as the “excess,” or “bias.” Once again, refer to Table 2.1, the rightmost column, for examples.
Consider again representing (+12)10 and (−12)10 in an eight-bit format but now using an excess 128 representation. An
excess 128 number is formed by adding 128 to the original number, and then creating the unsigned binary version. For
(+12)10, we compute (128 + 12 = 140)10 and produce the bit pattern (10001100)2. For (−12)10, we compute (128 + −12 =
116)10 and produce the bit pattern (01110100)2
(+12)10 = (10001100)2
(−12)10 = (01110100)2
Note that there is no numerical significance to the excess value: it simply has the effect of shifting the representation of
the two’s complement numbers.
There is only one excess representation for 0, since the excess representation is simply a shifted version of the two’s
complement representation. For the previous case, the excess value is chosen to have the same bit pattern as the largest
negative number, which has the effect of making the numbers appear in numerically sorted order if the numbers are
viewed in an unsigned binary representation.
Thus, the most negative number is (−128)10 = (00000000)2 and the most positive number is (+127)10 = (11111111)2.
This representation simplifies making comparisons between numbers, since the bit patterns for negative numbers have
numerically smaller values than the bit patterns for positive numbers. This is important for representing the exponents of
floating point numbers, in which exponents of two numbers are compared in order to make them equal for addition and
subtraction.
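A short sketch of the excess-128 encoding (illustrative helper name); it also shows that biased patterns compare in numerical order when treated as unsigned values.

def to_excess_128(n):
    # Valid for -128 <= n <= 127: add the bias and store as an unsigned byte.
    assert -128 <= n <= 127
    return n + 128

print(format(to_excess_128(+12), '08b'))          # 10001100
print(format(to_excess_128(-12), '08b'))          # 01110100
print(to_excess_128(-12) < to_excess_128(+12))    # True: unsigned comparison gives the right order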
choosing a bias:
The bias chosen is most often based on the number of bits (n) available for representing an integer. To get an
approximately equal distribution of values above and below 0, the bias is usually chosen as 2^(n−1) or 2^(n−1) − 1.
- Normalization
A potential problem with representing floating point numbers is that the same number can be represented in different
ways, which makes comparisons and arithmetic operations difficult. For example, the numerically equivalent forms
123.45 × 10^0 = 12.345 × 10^1 = .12345 × 10^3
all denote the same value.
In order to avoid multiple representations for the same number, floating point numbers are maintained in normalized
form. That is, the radix point is shifted to the left or to the right and the exponent is adjusted accordingly until the radix
point is to the left of the leftmost nonzero digit. So the rightmost number above is the normalized one. Unfortunately,
the number zero cannot be represented in this scheme, so to represent zero an exception is made. The exception to this
rule is that zero is represented as all 0’s in the mantissa.
If the mantissa is represented as a binary, that is, base 2, number, and if the normalization condition is that there is a
leading “1” in the normalized mantissa, then there is no need to store that “1” and in fact, most floating point formats do
not store it. Rather, it is “chopped off” before packing up the number for storage, and it is restored when unpacking the
number into exponent and mantissa. This results in having an additional bit of precision on the right of the number, due
to removing the bit on the left. This missing bit is referred to as the hidden bit, also known as a hidden 1.
For example, if the mantissa in a given format is 1.1010 after normalization, then the bit pattern that is stored is 1010—
the left-most bit is truncated, or hidden.
The way a floating-point number is stored varies from one format to another. The main design choices are:
- The number of words used (i.e. the total number of bits used)
- The representation of the mantissa (2's complement etc.)
- The representation of the exponent (biased etc.)
- The total number of bits devoted to the mantissa and the exponent
- The location of the mantissa (exponent first or mantissa first)
Because of the five points above, the number of ways in which a floating-point number may be represented is legion.
Many representations place a sign bit in the leading position, so the stored layout is:
Sign | Exponent | Mantissa
The IEEE (Institute of Electrical and Electronics Engineers) has produced a standard for floating point arithmetic
in mini- and microcomputers (ANSI/IEEE 754-1985). This standard specifies how single precision (32-bit), double
precision (64-bit) and quadruple precision (128-bit) floating point numbers are to be represented, as well as how
arithmetic should be carried out on them.
Binary floating-point numbers are stored in a sign-magnitude form where the most significant bit is the sign bit,
exponent is the biased exponent, and "fraction" is the significand without the most significant bit.
Exponent biasing
The exponent is biased by 2^(e−1) − 1, where e is the number of bits used for the exponent field (e.g. if e = 8, then 2^(8−1) −
1 = 128 − 1 = 127). Biasing is done because exponents have to be signed values in order to be able to represent both
tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder.
To solve this, the exponent is biased before being stored by adjusting its value to put it within an unsigned range
suitable for comparison.
For example, to represent a number which has an exponent of 17 in an exponent field 8 bits wide, the stored exponent is 17 + 127 = 144 = (10010000)2.
Single Precision
The IEEE single precision floating point standard representation requires a 32-bit word, which may be represented as
numbered from 0 to 31, left to right. The first bit is the sign bit, S, the next eight bits are the exponent bits, 'E', and the
final 23 bits are the fraction 'F':
S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF
(bit 0) (bits 1–8) (bits 9–31)
To convert decimal 17.15 to IEEE single format:
Convert decimal 17 to binary 10001. Convert decimal 0.15 to the repeating binary fraction 0.001001100110011… Combine
the integer and fraction to obtain the binary number 10001.001001100110011… Normalize it to obtain 1.00010010011001100110011… × 2^4.
Thus, M = 00010010011001100110011 (the hidden leading 1 is dropped and the fraction is kept to 23 bits) and E = 4 + 127 = 131 = 10000011.
The number is positive, so S = 0. Align the values for M, E, and S in the correct fields.
0 10000011 00010010011001100110011
Note that if the exponent value does not fill the whole exponent field, it is padded with leading 0's, while the mantissa
is padded with 0's at the end.
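The worked example can be checked with Python's standard struct module: the sketch below packs 17.15 as an IEEE single-precision value and splits out the sign, exponent and fraction fields, reproducing the bit pattern shown above.

import struct

bits = int.from_bytes(struct.pack('>f', 17.15), 'big')   # 32-bit IEEE single, big-endian
sign     = bits >> 31
exponent = (bits >> 23) & 0xFF
fraction = bits & 0x7FFFFF
print(sign, format(exponent, '08b'), format(fraction, '023b'))
# 0 10000011 00010010011001100110011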
Double Precision
The IEEE double precision floating point standard representation requires a 64-bit word, which may be represented as
numbered from 0 to 63, left to right. The first bit is the sign bit, S, the next eleven bits are the exponent bits, 'E', and the
final 52 bits are the fraction 'F':
S EEEEEEEEEEE FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
(bit 0) (bits 1–11) (bits 12–63)
The IEEE quad precision floating point standard representation requires a 128-bit word, which may be represented as
numbered from 0 to 127, left to right. The first bit is the sign bit, S, the next fifteen bits are the exponent bits, 'E', and the
final 112 bits are the fraction 'F':
S EEEEEEEEEEEEEEE FFFF…FFFF (112 fraction bits)
(bit 0) (bits 1–15) (bits 16–127)
Binary code
Internally, digital computers operate on binary numbers. When interfacing with humans, e.g. through pocket
calculators, communication is decimal-based. Input is entered in decimal and then converted to binary for internal
processing. For output, the result has to be converted from its internal binary representation to a decimal form. Digital
systems represent and manipulate not only binary numbers but also many other discrete elements of information.
In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each
digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for
printing or display and faster decimal calculations. Its drawbacks are the increased complexity of circuits needed to
implement mathematical operations and a relatively inefficient encoding. It occupies more space than a pure binary
representation. In BCD, a digit is usually represented by four bits which, in general, represent the
values/digits 0–9.
To BCD-encode a decimal number using the common encoding, each decimal digit is stored in a four-bit nibble.
Decimal: 0 1 2 3 4 5 6 7 8 9
BCD: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001
Thus, the BCD encoding for the number 127 would be 0001 0010 0111.
The position weights of the BCD code are 8, 4, 2, 1. Other codes (shown in the table) use position weights of 8, 4, -2, -1
and 2, 4, 2, 1.
An example of a non-weighted code is the excess-3 code, where the code for each digit is obtained from
its binary equivalent after adding 3. Thus, the code of decimal 0 is 0011, that of 6 is 1001,
etc.
It is very important to understand the difference between the conversion of a decimal number to binary and the binary
coding of a decimal number. In each case, the final result is a series of bits. The bits obtained from conversion are the
binary digits of the equivalent binary number; bits obtained from coding are combinations of 1's and 0's arranged according to the rules of the code used,
e.g. the binary conversion of 13 is 1101; the BCD coding of 13 is 00010011.
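A minimal sketch of BCD encoding (illustrative helper name), contrasting it with the pure binary conversion of 13:

def to_bcd(number):
    # One 4-bit nibble per decimal digit.
    return ' '.join(format(int(digit), '04b') for digit in str(number))

print(to_bcd(127))   # 0001 0010 0111
print(to_bcd(13))    # 0001 0011   (whereas the binary conversion of 13 is 1101)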
- Error-Detection Codes
Binary information may be transmitted through some communication medium, e.g., using wires or wireless media. A
corrupted bit will have its value changed from 0 to 1 or vice versa. To be able to detect errors at the receiver end, the
sender sends an extra bit (parity bit) with the original binary message.
A parity bit is an extra bit included with the n-bit binary message to make the total number of 1’s in this message
(including the parity bit) either odd or even. If the parity bit makes the total number of 1’s an odd (even) number, it is
called odd (even) parity. The table shows the required odd (even) parity for a 3-bit message
No error is detectable if the transmitted message has 2 bits in error since the total number of 1’s will remain even (or
odd) as in the original message.
In general, a transmitted message with even number of errors cannot be detected by the parity bit.
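A minimal sketch of how the sender can compute the parity bit for a message given as a string of bits (illustrative function name):

def parity_bit(message_bits, even=True):
    # Even parity: the bit that makes the total number of 1's even.
    bit = message_bits.count('1') % 2
    return bit if even else bit ^ 1      # odd parity is the opposite choice

print(parity_bit('101'))               # 0  (message already has an even number of 1's)
print(parity_bit('101', even=False))   # 1  (odd parity requires a third 1)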
- Gray code
The Gray code consists of 16 four-bit code words that represent the decimal numbers 0 to 15. Successive Gray
code words differ in only one bit position.
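A common way to generate the Gray code is to XOR a binary number with itself shifted right by one bit; the sketch below (illustrative function name) lists the 16 four-bit code words, each differing from the previous one in exactly one bit.

def binary_to_gray(n):
    return n ^ (n >> 1)

for i in range(16):
    print(i, format(binary_to_gray(i), '04b'))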
Character Representation
Even though many people used to think of computers as "number crunchers", people figured out long ago that it's just as
important to handle character data.
Character data isn't just alphabetic characters, but also numeric characters, punctuation, spaces, etc. Most keys on the
central part of the keyboard (except shift, caps lock) produce characters. Characters need to be represented. In particular, they
need to be represented in binary. After all, computers store and manipulate 0's and 1's (and even those 0's and 1's are just
abstractions. The implementation is typically voltages).
Unsigned binary and two's complement are used to represent unsigned and signed integer respectively, because they
have nice mathematical properties, in particular, you can add and subtract as you'd expect.
However, there aren't such properties for character data, so assigning binary codes for characters is somewhat arbitrary.
The most common character representation is ASCII, which stands for American Standard Code for Information
Interchange.
There are two reasons to use ASCII. First, we need some way to represent characters as binary numbers (or,
equivalently, as bitstring patterns). There's not much choice about this since computers represent everything in binary.
If you've noticed a common theme, it's that we need representation schemes for everything. However, most importantly,
we need representations for numbers and characters. Once you have that (and perhaps pointers), you can build up
everything you need.
The other reason we use ASCII is because of the letter "S" in ASCII, which stands for "standard". Standards are good
because they allow for common formats that everyone can agree on.
Unfortunately, there's also the letter "A", which stands for American. ASCII is clearly biased for the English language
character set. Other languages may have their own character set, even though English dominates most of the computing
world (at least, programming and software).
Even though character sets don't have mathematical properties, there are some nice aspects about ASCII. In particular,
the lowercase letters are contiguous ('a' through 'z' maps to 97 through 122 in decimal). The upper case letters are also
contiguous ('A' through 'Z' maps to 65 through 90). Finally, the digits are contiguous ('0' through '9' maps to 48
through 57).
The characters between 0 and 31 are generally not printable (control characters, etc). 32 is the space character. Also
note that there are only 128 ASCII characters. This means only 7 bits are required to represent an ASCII character.
However, since the smallest size representation on most computers is a byte, a byte is used to store an ASCII character.
The Most Significant bit(MSB) of an ASCII character is 0.
00 nul 10 dle 20 sp 30 0 40 @ 50 P 60 ` 70 p
01 soh 11 dc1 21 ! 31 1 41 A 51 Q 61 a 71 q
02 stx 12 dc2 22 " 32 2 42 B 52 R 62 b 72 r
03 etx 13 dc3 23 # 33 3 43 C 53 S 63 c 73 s
04 eot 14 dc4 24 $ 34 4 44 D 54 T 64 d 74 t
05 enq 15 nak 25 % 35 5 45 E 55 U 65 e 75 u
06 ack 16 syn 26 & 36 6 46 F 56 V 66 f 76 v
07 bel 17 etb 27 ' 37 7 47 G 57 W 67 g 77 w
08 bs 18 can 28 ( 38 8 48 H 58 X 68 h 78 x
09 ht 19 em 29 ) 39 9 49 I 59 Y 69 i 79 y
0a nl 1a sub 2a * 3a : 4a J 5a Z 6a j 7a z
0b vt 1b esc 2b + 3b ; 4b K 5b [ 6b k 7b {
0c np 1c fs 2c , 3c < 4c L 5c \ 6c l 7c |
0d cr 1d gs 2d - 3d = 4d M 5d ] 6d m 7d }
0e so 1e rs 2e . 3e > 4e N 5e ^ 6e n 7e ~
0f si 1f us 2f / 3f ? 4f O 5f _ 6f o 7f del
The difference in the ASCII code between an uppercase letter and its corresponding lowercase letter is 20 in hexadecimal (32 in decimal). This makes
it easy to convert lower to uppercase (and back) in hex (or binary).
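A small sketch of that conversion: bit 5 (value 20 hex) is cleared to go from lowercase to uppercase and set to go the other way (illustrative helper names).

def to_upper(ch):
    # Clear bit 5 (subtract 0x20) for ASCII lowercase letters.
    return chr(ord(ch) & ~0x20) if 'a' <= ch <= 'z' else ch

def to_lower(ch):
    # Set bit 5 (add 0x20) for ASCII uppercase letters.
    return chr(ord(ch) | 0x20) if 'A' <= ch <= 'Z' else ch

print(to_upper('g'), to_lower('G'), hex(ord('g') - ord('G')))   # G g 0x20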
While ASCII is still popularly used, another character representation that was used (especially at IBM) was EBCDIC,
which stands for Extended Binary Coded Decimal Interchange Code (yes, the word "code" appears twice). This
character set has mostly disappeared. EBCDIC does not store characters contiguously, so this can create problems
alphabetizing "words".
Other countries have used different solutions, in particular, using 8 bits to represent their alphabets, giving up to 256
letters, which is plenty for most alphabet-based languages (recall you also need to represent digits, punctuation, etc).
However, Asian languages, whose writing systems contain thousands of distinct characters, need far more than 8 bits can
provide. In particular, 8 bits can only represent 256 distinct values, which is far smaller than the number of characters in
such languages.
Thus, a new character set called Unicode is now becoming more prevalent. This is a 16 bit code, which allows for about
65,000 different representations. This is enough to encode the popular Asian languages (Chinese, Korean, Japanese,
etc.). It also turns out that ASCII codes are preserved. What does this mean? To convert ASCII to Unicode, take all one
byte ASCII codes, and zero-extend them to 16 bits. That should be the Unicode version of the ASCII characters.
The biggest consequence of using Unicode from ASCII is that text files double in size. The second consequence is that
endianness begins to matter again. Endianness is the ordering of individually addressable sub-units (words, bytes, or
even bits) within a longer data word stored in external memory. The most typical cases are the ordering of bytes within a
16-, 32-, or 64-bit word, where endianness is often simply referred to as byte order. The usual contrast is between most
versus least significant byte first, called big-endian and little-endian respectively.
Big-endian places the most significant bit, digit, or byte in the first, or leftmost, position. Little-endian places the most
significant bit, digit, or byte in the last, or rightmost, position. Motorola processors employ the big-endian approach,
whereas Intel processors take the little-endian approach. Table below illustrates how the decimal value 47,572 would be
expressed in hexadecimal and binary notation (two octets) and how it would be stored using these two methods.
Table: Endianness
Number                        Big-Endian              Little-Endian
Hexadecimal  B9D4             B9 D4                   D4 B9
Binary       10111001 11010100    10111001 11010100   11010100 10111001
With single bytes, there's no need to worry about endianness. However, you have to consider byte order for quantities
of two or more bytes.
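The difference is easy to see with Python's struct module, which can pack the same 16-bit value with either byte order:

import struct

value = 0xB9D4                               # decimal 47,572
print(struct.pack('>H', value).hex())        # 'b9d4'  big-endian: most significant byte first
print(struct.pack('<H', value).hex())        # 'd4b9'  little-endian: least significant byte first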
While C and C++ still primarily use ASCII, Java uses Unicode. This means that Java needs a separate byte
type, because a char in Java is no longer a single byte; instead, it is a 2-byte Unicode value.
Exercise
1. The state of a 12-bit register is 010110010111. What is its content if it represents:
i. Decimal digits in BCD code ii. Decimal digits in excess-3 code
2. The results of an experiment fall in the range −4 to +6. A scientist wishes to read the results into a computer and then
process them. He decides to use a 4-bit binary code to represent each of the possible inputs. Devise a 4-bit binary code
for representing numbers in the range −4 to 6.
3. The (r−1)'s complement of base-6 numbers is called the 5's complement. Explain the procedure for obtaining the 5's
complement of base-6 numbers. Obtain the 5's complement of (3210)6.
4. Design a three-bit code to represent each of the six digits of the base 6 number system.
5. Represent the decimal number –234.125 using the IEEE 32- bit (single) format
Introduction
Binary logic deals with variables that assume discrete values and with operators that assume logical meaning.
While each logical element or condition must always have a logic value of either "0" or "1", we also need to have
ways to combine different logical signals or conditions to provide a logical result.
For example, consider the logical statement: "If I move the switch on the wall up, the light will turn on." At first
glance, this seems to be a correct statement. However, if we look at a few other factors, we realize that there's more
to it than this. In this example, a more complete statement would be: "If I move the switch on the wall up and the
light bulb is good and the power is on, the light will turn on."
If we look at these two statements as logical expressions and use logical terminology, we can reduce the first
statement to:
Light = Switch
This means nothing more than that the light will follow the action of the switch, so that when the switch is
up/on/true/1 the light will also be on/true/1. Conversely, if the switch is down/off/false/0 the light will also be
off/false/0.
Looking at the second version of the statement, we have a slightly more complex expression:
Light = Switch AND (Bulb is good) AND (Power is on)
When we deal with logical circuits (as in computers), we not only need to deal with logical functions; we also need
some special symbols to denote these functions in a logical diagram. There are three fundamental logical
operations, from which all other functions, no matter how complex, can be derived. These functions are named
and, or, and not. Each of these has a specific symbol and a clearly-defined behavior.
AND. The AND operation is represented by a dot (·) or by the absence of an operator. E.g. x·y = z and xy = z are both read as
"x AND y = z". The logical operation AND is interpreted to mean that z = 1 if and only if x = 1 and y = 1; otherwise z = 0.
OR. The operation is represented by a + sign. For example, x + y = z is interpreted as "x OR y = z", meaning that z = 1 if x = 1 or
y = 1, or if both x = 1 and y = 1. If both x and y are 0, then z = 0.
NOT. This operation is represented by a prime or an overbar. For example, x′ = z is interpreted as "NOT x = z", meaning that z
is the complement of x: if x = 1 then z = 0, and if x = 0 then z = 1.
Although the AND and OR operations have some similarity to multiplication and addition, respectively, in binary
arithmetic, one should note that an arithmetic variable may consist of many digits, whereas a binary logic variable is
always either 0 or 1.
Basic Gate
The basic building blocks of a computer are called logical gates or just gates. Gates are basic circuits that have at least
one (and usually more) input and exactly one output. Input and output values are the logical values true and false. In
computer architecture it is common to use 0 for false and 1 for true. Gates have no memory. The value of the output
depends only on the current value of the inputs. A useful way of describing the relationship between the inputs of a gate
and its output is the truth table, which lists the output value for every combination of input values.
We usually consider three basic kinds of gates: and-gates, or-gates, and not-gates (or inverters).
The AND gate implements the AND function. For a two-input AND gate, both inputs must have logic 1 signals
applied to them in order for the output to be a logic 1. With either input at logic 0, the output will be held to logic 0.
The truth table for an and-gate with two inputs looks like this:
x y | z
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
There is no limit to the number of inputs that may be applied to an AND function, so there is no functional limit to
the number of inputs an AND gate may have. However, for practical reasons, commercial AND gates are most
commonly manufactured with 2, 3, or 4 inputs. A standard Integrated Circuit (IC) package contains 14 or 16 pins, for
practical size and handling. A standard 14-pin package can contain four 2-input gates, three 3-input gates, or two 4-
input gates, and still have room for two pins for power supply connections.
- The OR Gate
The OR gate is sort of the reverse of the AND gate. The OR function, like its verbal counterpart, allows the output to
be true (logic 1) if any one or more of its inputs are true. Verbally, we might say, "If it is raining OR if I turn on the
sprinkler, the lawn will be wet." Note that the lawn will still be wet if the sprinkler is on and it is also raining. This is
correctly reflected by the basic OR function.
In symbols, the OR function is designated with a plus sign (+). In logical diagrams, the symbol below designates the
OR gate.
The truth table for an or-gate with two inputs looks like this:
x y | z
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
As with the AND function, the OR function can have any number of inputs. However, practical commercial OR gates
are mostly limited to 2, 3, and 4 inputs, as with AND gates.
The inverter is a little different from AND and OR gates in that it always has exactly one input as well as one output.
Whatever logical state is applied to the input, the opposite state will appear at the output.
x | y
0 | 1
1 | 0
The NOT function, as it is called, is necessary in many applications and highly useful in others.
In the inverter symbol, the triangle actually denotes only an amplifier, which in digital terms means that it "cleans up"
the signal but does not change its logical sense. It is the circle at the output which denotes the logical inversion. The
circle could have been placed at the input instead, and the logical meaning would still be the same
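The behaviour of the three basic gates can be modelled directly as small functions on 0/1 values; the sketch below (illustrative names) prints a combined truth table.

def AND(x, y): return x & y
def OR(x, y):  return x | y
def NOT(x):    return 1 - x

print("x y | AND OR NOT(x)")
for x in (0, 1):
    for y in (0, 1):
        print(f"{x} {y} |  {AND(x, y)}   {OR(x, y)}    {NOT(x)}")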
Combined gates
Sometimes, it is practical to combine functions of the basic gates into more complex gates, for instance in order to save
space in circuit diagrams. In this section, we show some such combined gates together with their truth tables.
- The NAND-gate
The NAND -gate is an and-gate with an inverter on the output. So instead of drawing several gates like this:
We draw a single and-gate with a little ring on the output like this:
The nand-gate, like the and-gate can take an arbitrary number of inputs.
The truth table for the nand-gate is like the one for the and-gate, except that all output values have been inverted:
x y | z
0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0
The truth table clearly shows that the NAND operation is the complement of the AND
- The nor-gate
The nor-gate is an or-gate with an inverter on the output. So instead of drawing several gates like this:
We draw a single or-gate with a little ring on the output like this:
The nor-gate, like the or-gate can take an arbitrary number of inputs.
The truth table for the nor-gate is like the one for the or-gate, except that all output values have been inverted:
xy|z
0 0|1
0 1|0
1 0|0
1 1|0
- The exclusive-or-gate
The exclusive-or-gate is similar to an or-gate. It can have an arbitrary number of inputs; its output value is 1 if and
only if an odd number of its inputs are 1 (for two inputs, exactly one of them is 1). Otherwise, the output is 0.
The truth table for an exclusive-or-gate with two inputs looks like this:
x y | z
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
- The Exclusive-Nor-gate
The exclusive-nor-gate is the complement of the exclusive-or-gate. For two inputs, its output value is 1 if
and only if the two inputs have the same value (1 and 1, or 0 and 0). Otherwise, the output is 0.
The truth table for an exclusive-nor-gate with two inputs looks like this:
xy|z
0 0|1
0 1|0
1 0|0
1 1|1
Let us limit ourselves to gates with n inputs. The truth tables for such gates have 2^n lines. Such a gate is completely
defined by the output column in the truth table. The output column can be viewed as a string of 2^n binary digits. How
many different strings of binary digits of length 2^n are there? The answer is 2^(2^n), since there are 2^k different strings of k
binary digits, and if k = 2^n, then there are 2^(2^n) such strings. In particular, if n = 2, we can see that there are 16 different
types of gates with 2 inputs.
- Diode Logic
Diode logic gates use diodes to perform AND and OR logic functions. Diodes have the property of easily passing an
electrical current in one direction, but not the other. Thus, diodes can act as a logical switch.
Diode logic gates are very simple and inexpensive, and can be used effectively in specific situations. However, they
cannot be used extensively, as they tend to degrade digital signals rapidly. In addition, they cannot perform a NOT
function, so their usefulness is quite limited.
- Resistor-Transistor Logic (RTL)
Resistor-transistor logic gates use transistors to combine multiple input signals, which also amplify and invert the
resulting combined signal. Often an additional transistor is included to re-invert the output signal. This combination
provides clean output signals and either inversion or non-inversion as needed.
Although they are not designed for linear operation, RTL integrated circuits are sometimes used as inexpensive small-
signal amplifiers, or as interface devices between linear and digital circuits.
- Diode-Transistor Logic (DTL)
By letting diodes perform the logical AND or OR function and then amplifying the result with a transistor, we can avoid
some of the limitations of RTL. DTL takes diode logic gates and adds a transistor to the output, in order to provide logic
inversion and to restore the signal to full logic levels.
- Transistor-Transistor Logic (TTL)
The physical construction of integrated circuits made it more effective to replace all the input diodes in a DTL gate with
a transistor, built with multiple emitters. The result is transistor-transistor logic, which became the standard logic circuit
in most applications for a number of years.
As the state of the art improved, TTL integrated circuits were adapted slightly to handle a wider range of requirements,
but their basic functions remained the same. These devices comprise the 7400 family of digital ICs.
- Emitter-Coupled Logic (ECL)
Also known as Current Mode Logic (CML), ECL gates are specifically designed to operate at extremely high speeds, by
avoiding the "lag" inherent when transistors are allowed to become saturated. Because of this, however, these gates
demand substantial amounts of electrical current to operate correctly.
- CMOS Logic
One factor is common to all of the logic families we have listed above: they use significant amounts of electrical power.
Many applications, especially portable, battery-powered ones, require that the use of power be absolutely minimized. To
accomplish this, the CMOS (Complementary Metal-Oxide-Semiconductor) logic family was developed. This family
uses enhancement-mode MOSFETs as its transistors, and is so designed that it requires almost no current to operate.
CMOS gates are, however, severely limited in their speed of operation. Nevertheless, they are highly useful and
effective in a wide range of battery-powered applications.
Most logic families share a common characteristic: their inputs require a certain amount of current in order to operate
correctly. CMOS gates work a bit differently, but still represent a capacitance that must be charged or discharged when
the input changes state. The current required to drive any input must come from the output supplying the logic signal.
Therefore, we need to know how much current an input requires, and how much current an output can reliably supply,
in order to determine how many inputs may be connected to a single output.
However, making such calculations can be tedious, and can bog down logic circuit design. Therefore, we use a different
technique. Rather than working constantly with actual currents, we determine the amount of current required to drive
one standard input, and designate that as a standard load on any output. Now we can define the number of standard
loads a given output can drive, and identify it that way. Unfortunately, some inputs for specialized circuits require more
than the usual input current, and some gates, known as buffers, are deliberately designed to be able to drive more inputs
than usual. For an easy way to define input current requirements and output drive capabilities, we define two new terms:
fan-in and fan-out
Fan-in
Fan-in is a term that defines the maximum number of digital inputs that a single logic gate can accept. Most transistor-
transistor logic (TTL) gates have one or two inputs, although some have more than two. A typical logic gate has a fan-
in of 1 or 2.
Fan-out
Fan-out is a term that defines the maximum number of digital inputs that the output of a single logic gate can feed. Most
transistor-transistor logic (TTL) gates can feed up to 10 other digital gates or devices. Thus, a typical TTL gate has a
fan-out of 10.
In some digital systems, it is necessary for a single TTL logic gate to drive more than 10 other gates or devices. When
this is the case, a device called a buffer can be used between the TTL gate and the multiple devices it must drive. A
buffer of this type has a fan-out of 25 to 30. A logical inverter (also called a NOT gate) can serve this function in most
digital circuits.
Remember, fan-in and fan-out apply directly only within a given logic family. If for any reason you need to interface
between two different logic families, be careful to note and meet the drive requirements and limitations of both families,
within the interface circuitry
Boolean Algebra
One of the primary requirements when dealing with digital circuits is to find ways to make them as simple as possible.
This constantly requires that complex logical expressions be reduced to simpler expressions that nevertheless produce
the same results under all possible conditions. The simpler expression can then be implemented with a smaller, simpler
circuit, which in turn saves the price of the unnecessary gates, reduces the number of gates needed, and reduces the
power and the amount of space required by those gates.
One tool to reduce logical expressions is the mathematics of logical expressions, introduced by George Boole in 1854
and known today as Boolean Algebra. The rules of Boolean Algebra are simple and straight-forward, and can be applied
to any logical expression. The resulting reduced expression can then be readily tested with a Truth Table, to verify that
the reduction was valid.
Boolean algebra is an algebraic structure defined on a set of elements B, together with two binary operators (+, ·),
provided the following postulates are satisfied:
1. Closure: for any x, y belonging to B, x + y and x · y also belong to B.
2. Identity elements: there exist elements 0 and 1 in B such that x + 0 = x and x · 1 = x.
3. Commutativity: x + y = y + x and x · y = y · x.
4. Distributivity: x · (y + z) = x · y + x · z and x + (y · z) = (x + y) · (x + z).
5. For every element x belonging to B, there exists an element x′, called the complement of x, such that
x · x′ = 0 and x + x′ = 1.
6. There exist at least two elements x, y belonging to B such that x ≠ y.
The two-valued Boolean algebra is defined on the set B = {0, 1} with the two binary operators + and · .
Closure: from the operator tables, the result of each operation is either 0 or 1, and 0 and 1 belong to B.
Identity: from the truth tables we see that 0 is the identity element for + and 1 is the identity element for · .
Complement: from the complement table we can see that x + x′ = 1 (e.g. 1 + 0 = 1) and x · x′ = 0 (e.g. 1 · 0 = 0).
The principle of duality states that every algebraic expression which can be deduced from the postulates of Boolean
algebra remains valid if the operators and the identity elements are interchanged. This means that the dual of an
expression is obtained by changing every AND (·) to OR (+), every OR (+) to AND (·), and all 1's to 0's and vice-versa.
Postulate 5 (complement): (a) A + A′ = 1 (b) A · A′ = 0
Theorem 1 (idempotence): (a) A + A = A (b) A · A = A
Theorem 2: (a) 1 + A = 1 (b) 0 · A = 0
Theorem 3 (involution): (A′)′ = A
Postulate (commutative): (a) A + B = B + A (b) A B = B A
Theorem 4 (associative): (a) (A + B) + C = A + (B + C) (b) (A B) C = A (B C)
Postulate (distributive): (a) A (B + C) = A B + A C (b) A + (B C) = (A + B)(A + C)
Theorem 6 (absorption): (a) A + A B = A (b) A (A + B) = A
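Because the two-valued algebra has only finitely many assignments, any of these identities can be checked by brute force. The sketch below verifies the absorption law and De Morgan's theorem over all 0/1 assignments.

from itertools import product

def NOT(a):
    return 1 - a

for A, B in product((0, 1), repeat=2):
    assert (A | (A & B)) == A                    # absorption: A + AB = A
    assert NOT(A | B) == (NOT(A) & NOT(B))       # De Morgan: (A + B)' = A'B'
print("identities hold for all 0/1 assignments")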
A combinational circuit consists of logic gates whose outputs at any time are determined directly from the present
combination of inputs, without regard to previous inputs. A combinational circuit performs a specific information-
processing operation fully specified logically by a set of Boolean functions.
A combinatorial circuit is a generalized gate. In general such a circuit has m inputs and n outputs. Such a circuit can
always be constructed as n separate combinatorial circuits, each with exactly one output. For that reason, some texts
only discuss combinatorial circuits with exactly one output. In reality, however, some important sharing of intermediate
signals may take place if the entire n-output circuit is constructed at once. Such sharing can significantly reduce the
number of gates required to build the circuit.
When we build a combinatorial circuit from some kind of specification, we always try to make it as good as possible.
The only problem is that the definition of "as good as possible" may vary greatly. In some applications, we simply want
to minimize the number of gates (or the number of transistors, really). In others, we might be interested in as short a
delay (the time it takes a signal to traverse the circuit) as possible, or in as low power consumption as possible. In
general, a mixture of such criteria must be applied.
To specify the exact way in which a combinatorial circuit works, we might use different methods, such as logical
expressions or truth tables.
A truth table is a complete enumeration of all possible combinations of input values, each one with its associated output
value.
When used to describe an existing circuit, output values are (of course) either 0 or 1. Suppose for instance that we wish
to make a truth table for the following circuit:
All we need to do to establish a truth table for this circuit is to compute the output value for the circuit for each possible
combination of input values. We obtain the following truth table:
w x y | a b
0 0 0 | 0 1
0 0 1 | 0 1
0 1 0 | 1 1
0 1 1 | 1 0
1 0 0 | 1 1
1 0 1 | 1 1
1 1 0 | 1 1
1 1 1 | 1 0
When used as a specification for a circuit, a table may have some output values that are not specified, perhaps because
the corresponding combination of input values can never occur in the particular application. We can indicate such
unspecified output values with a dash -.
For instance, let us suppose we want a circuit of four inputs, interpreted as two nonnegative binary integers of two
binary digits each, and two outputs, interpreted as the nonnegative binary integer giving the quotient between the two
input numbers. Since division is not defined when the denominator is zero, we do not care what the output value is in
this case. Of the sixteen entries in the truth table, four have a zero denominator. Here is the truth table:
x1 x0 y1 y0 | z1 z0
-
0 0 0 0 |- -
0 0 0 1 |0 0
0 0 1 0 |0 0
0 0 1 1 |0 0
0 1 0 0 |- -
0 1 0 1 |0 1
0 1 1 0 |0 0
0 1 1 1 |0 0
1 0 0 0 |- -
1 0 0 1 |1 0
1 0 1 0 |0 1
1 0 1 1 |0 0
1 1 0 0 |- -
1 1 0 1 |1 1
1 1 1 0 |0 1
1 1 1 1 |0 1
Unspecified output values like this can greatly decrease the number of gates necessary to build the circuit. The reason
is simple: when we are free to choose the output value in a particular situation, we choose the one that gives the fewest
total number of gates.
Circuit minimization is a difficult problem from a complexity point of view. Computer programs that try to optimize
circuit design apply a number of heuristics to improve speed. In this course, we are not concerned with optimality. We
are therefore only going to discuss a simple method that works for all possible combinatorial circuits (but that can waste
large numbers of gates).
A separate single-output circuit is built for each output of the combinatorial circuit.
Our simple method starts with the truth table (or rather one of the acceptable truth tables, in case we have a choice). Our
circuit is going to be a two-layer circuit. The first layer of the circuit will have at most 2^n AND-gates, each with n inputs
(where n is the number of inputs of the combinatorial circuit). The second layer will have a single OR-gate with as many
inputs as there are gates in the first layer. For each line of the truth table with an output value of 1, we put down an AND-
gate with n inputs. For each input entry in the table with a 1 in it, we connect an input of the AND-gate to the
corresponding input. For each entry in the table with a 0 in it, we connect an input of the AND-gate to the corresponding
input inverted.
The output of each AND-gate of the first layer is then connected to an input of the OR-gate of the second layer.
As an example of our general method, consider the following truth table (where a - indicates that we don't care what
value is chosen):
A=x′y′z+x′yz′+xyz
B=x′y′z+xy′z′
For the first column, we get three 3-input AND-gates in the first layer, and a 3-input OR-gate in the second layer. We get
three AND -gates since there are three rows in the a column with a value of 1. Each one has 3-inputs since there are
three inputs, x, y, and z of the circuit. We get a 3-input OR-gate in the second layer since there are three AND -gates in
the first layer.
For the second column, we get two 3-input AND -gates in the first layer, and a 2-input OR-gate in the second layer. We
get two AND-gates since there are two rows in the b column with a value of 1. Each one has 3-inputs since again there
are three inputs, x, y, and z of the circuit. We get a 2-input OR-gate in the second layer since there are two AND-gates
in the first layer.
While this circuit works, it is not the one with the fewest number of gates. In fact, since both output columns have a 1 in
the row corresponding to the inputs 0 0 1, it is clear that the gate for that row can be shared between the two subcircuits:
In some cases, even smaller circuits can be obtained, if one is willing to accept more layers (and thus a higher circuit
delay).
Operations on binary variables can be described by means of an appropriate mathematical function called a Boolean function.
A Boolean function defines a mapping from a set of binary input values into a set of output values. A Boolean function is
formed with binary variables, the binary operators AND and OR, and the unary operator NOT.
For example, a Boolean function f(x1,x2,x3,……,xn) = y defines a mapping from an arbitrary combination of binary input
values (x1,x2,x3,……,xn) into a binary value y. A Boolean function with n input variables can operate on 2^n distinct input
combinations. Any such function can be described by using a truth table consisting of 2^n rows, one for each combination of
the n input variables; the table lists the value produced by the function for each such combination.
Example
x y x.y
0 0 0
0 1 0
1 0 0
1 1 1
The function f represents x·y, that is, f(x,y) = xy, which means that f = 1 if x = 1 and y = 1, and f = 0 otherwise.
For each row of the table there is a value of the function, equal to 1 or 0. The function f is equal to the sum (OR) of the
minterms of all rows whose function value is 1.
A Boolean function may be transformed from an algebraic expression into a logic diagram composed of AND, OR and
NOT gates. When a Boolean function is implemented with logic gates, each literal in the function designates an input to a
gate and each term is implemented with a logic gate, e.g.
F=xyz
F=x+y′z
Complement of a function
The complement of a function F is F′ and is obtained by interchanging 0's and 1's in the value of F.
The complement of a function may be derived algebraically through De Morgan's theorem:
(A+B+C+….)′= A′B′C′….
(ABC….)′= A′+ B′+C′……
The generalized form of De Morgan's theorem states that the complement of a function is obtained by interchanging AND
and OR and complementing each literal.
F=X′YZ′+X′Y′Z′
F′=( X′YZ′+X′Y′Z′)′
=( X′YZ′)′.( X′Y′Z′)′
=( X′′+Y′+Z′′)( X′′+Y′′+Z′′)
=( X+Y′+Z)( X+Y+Z)
A binary variable may appear either in its normal form or in its complemented form. Consider two binary variables x and y
combined with an AND operation. Since each variable may appear in either form, there are four possible combinations:
x′y′, x′y, xy′, xy. Each of these terms represents one distinct area in the Venn diagram and is called a minterm or a standard
product. With n variables, 2^n minterms can be formed.
In a similar fashion, n variables forming an OR term provide 2^n possible combinations called maxterms or standard
sums. Each maxterm is obtained from an OR term of the n variables, with each variable being primed if the
corresponding bit is 1 and un-primed if the corresponding bit is 0. Note that each maxterm is the complement of its
corresponding minterm and vice versa.
x y z | minterm | maxterm
0 0 0 | x′y′z′  | x + y + z
0 0 1 | x′y′z   | x + y + z′
0 1 0 | x′yz′   | x + y′ + z
0 1 1 | x′yz    | x + y′ + z′
1 0 0 | xy′z′   | x′ + y + z
1 0 1 | xy′z    | x′ + y + z′
1 1 0 | xyz′    | x′ + y′ + z
1 1 1 | xyz     | x′ + y′ + z′
A Boolean function may be expressed algebraically from a given truth table by forming a minterm for each combination
of the variables that produces a 1 and then taking the OR of those terms.
Similarly, the same function can be obtained by forming the maxterm for each combination of the variables that produces
a 0 and then taking the AND of those terms.
It is sometimes convenient to express a Boolean function, when it is in sum-of-minterms form, in the following notation:
F(X,Y,Z) = ∑(1,4,5,6,7). The summation symbol ∑ stands for the ORing of the terms; the numbers following it are the
minterms of the function. The letters in the parentheses following F form a list of the variables in the order taken when
the minterm is converted to an AND term.
Sometimes it is convenient to express a Boolean function as a sum of minterms. If it is not in that form, the expression is
first expanded into a sum of AND terms, and any term with a missing variable is ANDed with an expression such as x + x′,
where x is the missing variable.
To express a Boolean function as a product of maxterms, it must first be brought into a form of OR terms. This can be
done by using the distributive law x + yz = (x + y)(x + z). Then, if any variable, say x, is missing from an OR term, that
term is ORed with xx′.
For example, consider F = xy + x′z:
F = (xy + x′)(xy + z)
  = (x + x′)(y + x′)(x + z)(y + z)
  = (x′ + y)(x + z)(y + z)
Adding the missing variable in each term:
(x′ + y) = x′ + y + zz′ = (x′ + y + z)(x′ + y + z′)
(x + z) = x + z + yy′ = (x + y + z)(x + y′ + z)
(y + z) = y + z + xx′ = (x + y + z)(x′ + y + z)
Combining the maxterms and removing duplicates gives
F = (x + y + z)(x + y′ + z)(x′ + y + z)(x′ + y + z′), i.e.
F(x,y,z) = ∏(0,2,4,5)
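The same result can be checked by enumeration. Assuming the starting function reconstructed above, F = xy + x′z, the sketch below lists its minterms and maxterms and confirms F = ∑(1,3,6,7) = ∏(0,2,4,5).

from itertools import product

def F(x, y, z):
    return (x & y) | ((1 - x) & z)     # F = xy + x'z

minterms = [i for i, (x, y, z) in enumerate(product((0, 1), repeat=3)) if F(x, y, z) == 1]
maxterms = [i for i, (x, y, z) in enumerate(product((0, 1), repeat=3)) if F(x, y, z) == 0]
print(minterms, maxterms)              # [1, 3, 6, 7] [0, 2, 4, 5]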
Standard form
Another way to express a Boolean function is in standard form. Here the terms that form the function may contain one,
two, or any number of literals. There are two types of standard form: the sum of products and the product of sums.
The sum of products (SOP) is a Boolean expression containing AND terms, called product terms, of one or more literals
each. The sum denotes the ORing of these terms,
e.g. F=x+xy′+x′yz
The product of sums (POS) is a Boolean expression containing OR terms, called sum terms; each term may have any
number of literals. The product denotes the ANDing of these terms,
e.g. F= x(x+y′)(x′+y+z)
A Boolean function may also be expressed in a non-standard form. In that case, the distributive law can be used to remove
the parentheses:
F=(xy+zw)(x′y′+z′w′)
= xy(x′y′+z′w′) + zw(x′y′+z′w′)
= xyz′w′ + x′y′zw    (the products xyx′y′ and zwz′w′ equal 0 and drop out)
A Boolean expression can be reduced to a minimal number of literals by algebraic manipulation. Unfortunately, there are
no specific rules to follow that will guarantee the final answer. The only method is to use the theorems and postulates of
Boolean algebra and any other manipulation that becomes familiar.
To define what a combinatorial circuit does, we can use a logic expression or an expression for short. Such an
expression uses the two constants 0 and 1, variables such as x, y, and z (sometimes with suffixes) as names of inputs and
outputs, and the operators +, . and a horizontal bar or a prime (which stands for not). As usual, multiplication is
considered to have higher priority than addition. Parentheses are used to modify the priority.
Boolean functions in either Sum of Products or Product of Sums form can be implemented using 2-level
implementations.
For SOP forms AND gates will be in the first level and a single OR gate will be in the second level.
For POS forms OR gates will be in the first level and a single AND gate will be in the second level.
Note that using inverters to complement input variables is not counted as a level.
(X′+Y)(Y+XZ′)′+X(YZ)′
The equation is neither in sum of products nor in product of sums form. The implementation is as follows:
X1X2′X3 + X1′X2′X3 + X1′X2X3′
The equation is in sum of products form. The implementation is in 2 levels: AND gates form the first level and a single OR
gate the second level.
(X+1)(Y+0Z)
The equation is neither in sum of products nor in product of sums form. The implementation is as follows:
A valid question is: can logic expressions describe all possible combinatorial circuits? The answer is yes, and here is
why:
You can trivially convert the truth table for an arbitrary circuit into an expression. The expression will be in the form of
a sum of products of variables and their inverses. Each row with an output value of 1 in the truth table corresponds to one
term in the sum. In such a term, a variable having a 1 in the truth table will be uninverted, and a variable having a 0 in
the truth table will be inverted.
xyz|f
-
000|0
001|0
010|1
011|0
100|1
101|0
110|0
111|1
The corresponding expression is:
X′YZ′ + XY′Z′ + XYZ
Since you can describe any combinatorial circuit with a truth table, and you can describe any truth table with an
expression, you can describe any combinatorial circuit with an expression.
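The conversion is mechanical, as the following sketch shows: given the output column of a truth table, it builds the sum-of-products expression, one product term per row whose output is 1 (illustrative function name).

from itertools import product

def sop_from_truth_table(outputs, names=('x', 'y', 'z')):
    # `outputs` lists the output for rows 000, 001, ..., 111 in order.
    terms = []
    for bits, out in zip(product((0, 1), repeat=len(names)), outputs):
        if out == 1:
            terms.append(''.join(n if b else n + "'" for n, b in zip(names, bits)))
    return ' + '.join(terms)

print(sop_from_truth_table([0, 0, 1, 0, 1, 0, 0, 1]))   # x'yz' + xy'z' + xyz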
There are many logic expressions (and therefore many circuits) that correspond to a certain truth table, and therefore to a
certain function computed. For instance, the following two expressions compute the same function:
X(Y+Z) XY+XZ
The left one requires two gates, one and-gate and one or-gate. The second expression requires two and-gates and one
or-gate. It seems obvious that the first one is preferable to the second one. However, this is not always the case. It is not
always true that the number of gates is the only way, nor even the best way, to determine simplicity.
We have, for instance, assumed that gates are ideal. In reality, the signal takes some time to propagate through a gate.
We call this time the gate delay. We might be interested in circuits that minimize the total gate delay, in other words,
circuits that make the signal traverse the fewest possible gates from input to output. Such circuits are not necessarily the
same ones that require the smallest number of gates.
Circuit minimization
The complexity of the digital logic gates that implement a Boolean function is directly related to the complexity of the
algebraic expression from which the function is implemented. Although the truth table representation of a function is
unique, it can appear in many different forms when expressed algebraically.
x+x′y=(x+x′)(x+y)=x+y
simplify x′y′z+x′yz+xy′
x′y′z+x′yz+xy′=x′z(y+y′)+xy′
=x′z+xy′
Karnaugh map
The Karnaugh map, also known as a Veitch diagram or simply as a K-map, is a two-dimensional form of the truth table,
drawn in such a way that the simplification of a Boolean expression can immediately be seen from the location of 1's
in the map. The map is a diagram made up of squares, each square representing one minterm. Since any Boolean function
can be expressed as a sum of minterms, a Boolean function is recognised graphically in the map from the
area enclosed by those squares whose minterms are included in the function.
Two-variable map (columns A = 0, 1; rows B = 0, 1):
        A=0    A=1
B=0     A′B′   AB′
B=1     A′B    AB
Three-variable map (columns AB = 00, 01, 11, 10; rows C = 0, 1):
        AB=00    AB=01   AB=11   AB=10
C=0     A′B′C′   A′BC′   ABC′    AB′C′
C=1     A′B′C    A′BC    ABC     AB′C
Four-variable map (columns AB = 00, 01, 11, 10; rows CD = 00, 01, 11, 10):
         AB=00      AB=01     AB=11    AB=10
CD=00    A′B′C′D′   A′BC′D′   ABC′D′   AB′C′D′
CD=01    A′B′C′D    A′BC′D    ABC′D    AB′C′D
CD=11    A′B′CD     A′BCD     ABCD     AB′CD
CD=10    A′B′CD′    A′BCD′    ABCD′    AB′CD′
To simplify a Boolean function using a Karnaugh map, the first step is to plot all the 1's of the function's truth table on the
map. The next step is to combine adjacent 1's into groups of one, two, four, eight or sixteen. Each group of minterms
should be as large as possible: a single group of four minterms yields a simpler expression than two groups of two
minterms.
The final stage is reached when the terms for all the groups of minterms are ORed together to form the simplified sum-of-
products expression.
The Karnaugh map is not really a square or rectangle as it may appear in the diagram: the top edge is adjacent to the
bottom edge and the left-hand edge is adjacent to the right-hand edge. Consequently, two squares in a Karnaugh map are
said to be adjacent if the minterms they represent differ in only one variable.
Implicant
In Boolean logic, an implicant is a "covering" (sum term or product term) of one or more minterms in a sum of products
(or maxterms in a product of sums) of a boolean function. Formally, a product term P in a sum of products is an
implicant of the Boolean function F if P implies F. More precisely:
P implies F (and thus is an implicant of F) if F also takes the value 1 whenever P equals 1.
where
• F is a Boolean of n variables.
• P is a product term
This means that P ≤ F with respect to the natural ordering of the Boolean space. For instance, for the function
f(x,y,z,w) = xy + yz + w, each of the product terms xy, yz and w implies f and is therefore an implicant of f.
Prime implicant
A prime implicant of a function is an implicant that cannot be covered by a more general (more reduced - meaning with
fewer literals) implicant. W.V. Quine defined a prime implicant of F to be an implicant that is minimal - that is, if the
removal of any literal from P results in a non-implicant for F. Essential prime implicants are prime implicants that cover
an output of the function that no combination of other prime implicants is able to cover.
[Figure: a four-variable Karnaugh map with 1's plotted, illustrating prime implicants and a non-prime implicant.]
[Figure: a four-variable Karnaugh map with 1's plotted, illustrating essential and non-essential prime implicants.]
In simplifying a Boolean function using a Karnaugh map, all essential prime implicants must be selected; non-essential prime implicants are added only when they are needed to cover minterms left uncovered by the essential ones.
a b C M(output)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1
From the truth table, M = a′bc + ab′c + abc′ + abc. Replicating the abc term and combining it with each of the other terms gives
M = (a′bc + abc) + (ab′c + abc) + (abc′ + abc) = bc + ac + ab
The abc term was replicated and combined with the other terms.
To use a Karnaugh map we draw the following map which has a position (square) corresponding to each of the 8
possible combinations of the 3 Boolean variables. The upper left position corresponds to the 000 row of the truth table,
the lower right position corresponds to 101.
        ab=00   ab=01   ab=11   ab=10
c=0       0       0       1       0
c=1       0       1       1       1
The 1s are in the same places as they were in the original truth table. The 1 in the first row is at position 110 (a = 1, b =
1, c = 0).
The minimization is done by drawing circles around sets of adjacent 1s. Adjacency is horizontal, vertical, or both. The
circles must always contain 2^n 1's, where n is an integer.
[Figure: the same map with two adjacent 1's circled.]
We have circled two 1s. The fact that the circle spans the two possible values of a
(0 and 1) means that the a term is eliminated from the Boolean expression corresponding to this circle.
Now we have drawn circles around all the 1s. Thus the expression reduces to
bc + ac + ab
as we saw before.
What is happening? What does adjacency and grouping the 1s together have to do with minimization? Notice that the 1
at position 111 was used by all 3 circles. This 1 corresponds to the abc term that was replicated in the original algebraic
minimization. Adjacency of 2 1s means that the terms corresponding to those 1s differ in one variable only. In one case
that variable is negated and in the other it is not.
The map is easier than algebraic minimization because we just have to recognize patterns of 1s in the map instead of
using the algebraic manipulations. Adjacency also applies to the edges of the map.
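The result can also be confirmed by exhaustive checking against the original truth table:

from itertools import product

for a, b, c in product((0, 1), repeat=3):
    majority = 1 if (a + b + c) >= 2 else 0
    assert ((b & c) | (a & c) | (a & b)) == majority
print("bc + ac + ab matches the majority truth table")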
Now for 4 Boolean variables. The Karnaugh map is drawn as shown below.
         AB=00   AB=01   AB=11   AB=10
CD=00       0       0       1       0
CD=01       0       1       1       0
CD=11       0       1       1       1
CD=10       0       0       1       1
RULE: Minimization is achieved by drawing the smallest possible number of circles, each containing the largest
possible number of 1s.
[Figure: the same map with three groups of four 1's circled, corresponding to the terms BD, AC and AB.]
Q = BD + AC + AB
Other examples
1. F=A′B+AB
        A=0   A=1
B=0      0     0
B=1      1     1
F = B
2. F=A′B′C′+A′B′C+A′BC′+ABC′+ABC
3. F=AB+A′BC′D+A′BCD+AB′C′D′
         AB=00   AB=01   AB=11   AB=10
CD=00       0       0       1       1
CD=01       0       1       1       0
CD=11       0       1       1       0
CD=10       0       0       1       0
F = BD + AB + AC′D′
4. F=AC′D′+A′B′C+A′C′D+AB′D
         AB=00   AB=01   AB=11   AB=10
CD=00       0       0       1       1
CD=01       1       1       0       1
CD=11       1       0       0       1
CD=10       1       0       0       0
F = B′D + AC′D′ + A′C′D + A′B′C
Another example:
         AB=00   AB=01   AB=11   AB=10
CD=00       1       0       0       1
CD=01       0       1       1       0
CD=11       0       1       1       0
CD=10       1       0       0       1
F = BD + B′D′
F=A′B′C′D′+A′BC′D′+AB′C′D′+A′BC′D+A′B′CD′+A′BCD′+AB′CD′
         AB=00   AB=01   AB=11   AB=10
CD=00       1       1       0       1
CD=01       0       1       0       0
CD=11       0       0       0       0
CD=10       1       1       0       1
Grouping the 0's of the map gives the simplified complement F′ = AB + CD + B′C′D. Since F′′ = F, applying De Morgan's rule to F′ we obtain
F = (AB + CD + B′C′D)′ = (A′ + B′)(C′ + D′)(B + C + D′)
A B C D F
0 0 0 0 0
0 0 0 1 0
0 0 1 0 1
0 0 1 1 1
0 1 0 0 0
0 1 0 1 1
0 1 1 0 0
0 1 1 1 1
1 0 0 0 0
1 0 0 1 0
1 0 1 0 X
1 0 1 1 X
1 1 0 0 X
1 1 0 1 X
1 1 1 0 X
1 1 1 1 X
F=A′B′CD′+A′B′CD+A′BC′D+A′BCD
The X's in the table above stand for "don't care": we don't care whether a 1 or a 0 is the value for those combinations of
inputs because (in this case) those input combinations will never occur.
         AB=00   AB=01   AB=11   AB=10
CD=00       0       0       X       0
CD=01       0       1       X       0
CD=11       1       1       X       X
CD=10       1       0       X       X
F = BD + B′C
The Quine–McCluskey algorithm (or the method of prime implicants) is a method used for minimization of boolean
functions. It is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in
computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function has
been reached.
The method involves two steps:
1. Find all prime implicants of the function.
2. Use those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as
any other prime implicants that are necessary to cover the function.
ABCD f
m0 0 0 0 0 0
m1 0 0 0 1 0
m2 0 0 1 0 0
m3 0 0 1 1 0
m4 0 1 0 0 1
m5 0 1 0 1 0
m6 0 1 1 0 0
m7 0 1 1 1 0
m8 1 0 0 0 1
m9 1 0 0 1 x
m10 1 0 1 0 1
m11 1 0 1 1 1
m12 1 1 0 0 1
m13 1 1 0 1 0
m14 1 1 1 0 x
m15 1 1 1 1 1
One can easily form the canonical sum of products expression from this table, simply by summing the minterms
(leaving out don't-care terms) where the function evaluates to one:
f(A,B,C,D) = A′BC′D′ + AB′C′D′ + AB′CD′ + AB′CD + ABC′D′ + ABCD
Of course, that's certainly not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm
table. Don't-care terms are also added into this table, so they can be combined with minterms:
Number of 1s   Minterm   Binary
     1          m4       0100
                m8       1000
     2          m9       1001
                m10      1010
                m12      1100
     3          m11      1011
                m14      1110
     4          m15      1111
At this point, one can start combining minterms with other minterms. If two terms vary by only a single digit changing,
that digit can be replaced with a dash indicating that the digit doesn't matter. Terms that can't be combined any more are
marked with a "*". When going from Size 2 to Size 4, treat '-' as a third bit value. Ex: -110 and -100 or -11- can be
combined, but not -110 and 011-. (Trick: Match up the '-' first.)
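A small Python sketch of this combining step is shown below (illustrative only; terms are kept as strings of '0', '1' and '-', and a dash only matches a dash).

def combine(t1: str, t2: str):
    """Return the combined term, or None if t1 and t2 cannot be combined."""
    diff_positions = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diff_positions) != 1:
        return None
    i = diff_positions[0]
    if t1[i] == '-' or t2[i] == '-':          # dashes must line up
        return None
    return t1[:i] + '-' + t1[i + 1:]

print(combine('-110', '-100'))   # '-1-0'
print(combine('-110', '011-'))   # None, as noted in the text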
At this point, the terms marked with * can be seen as a solution. That is, the solution is
F = AB′ + AD′ + AC + BC′D′
If a Karnaugh map had been used, we would have obtained a simpler expression than this. To obtain a minimal form, we need to use the prime implicant chart.
None of the terms can be combined any further than this, so at this point we construct an essential prime implicant table. Along the side go the prime implicants that have just been generated, and along the top go the minterms of the function (the don't-care terms are not listed as columns).
                                 4    8    10   11   12   15
m(4,12)         -100  (BC′D′)    X                   X
m(8,9,10,11)    10--  (AB′)           X    X    X
m(8,10,12,14)   1--0  (AD′)           X    X         X
m(10,11,14,15)  1-1-  (AC)                 X    X         X
In the prime implicant table shown above, there is one row for each prime implicant and 6 columns, each representing one minterm of the function. An X is placed in a row to indicate the minterms contained in the prime implicant of that row. For example, the two X's in the first row indicate that minterms 4 and 12 are contained in the prime implicant represented by (-100), i.e. BC′D′.
The completed prime implicant table is inspected for columns containing only a single X. In this example, there are two minterms whose columns have a single X: 4 and 15. Minterm 4 is covered only by prime implicant BC′D′; that is, selecting prime implicant BC′D′ guarantees that minterm 4 is included in the cover. Similarly, minterm 15 is covered only by prime implicant AC. Prime implicants that cover minterms with a single X in their column are called essential prime implicants.
Next we check off each column whose minterm is covered by the selected essential prime implicants. For this example, essential prime implicant BC′D′ covers minterms 4 and 12, and essential prime implicant AC covers minterms 10, 11 and 15. Inspection of the prime implicant table shows that all the minterms are covered by the essential prime implicants except minterm 8. Any minterm not yet covered must be included by selecting one or more additional prime implicants. In this example there is only one such minterm, 8, and it can be covered by including either AB′ or AD′, since both of them contain minterm 8. We have thus found the minimum set of prime implicants whose sum gives the required minimized function:
F = BC′D′ + AD′ + AC    or    F = BC′D′ + AB′ + AC.
Both of those final equations are functionally equivalent to the original (very area-expensive) canonical equation
f = A′BC′D′ + AB′C′D′ + AB′CD′ + AB′CD + ABC′D′ + ABCD.
In addition to AND, OR, and NOT gates, other logic gates like NAND and NOR are also used in the design of digital
circuits.
The NAND gate represents the complement of the AND operation. Its name is an
abbreviation of NOT AND. The graphic symbol for the NAND gate consists of an AND symbol with a bubble on the
output, denoting that a complement operation is performed on the output of the AND gate as shown earlier
A universal gate is a gate which can implement any Boolean function without need to use any other gate type. The
NAND and NOR gates are universal gates. In practice, this is advantageous since NAND and NOR gates are
economical and easier to fabricate and are the basic gates used in all IC digital logic families. In fact, an AND gate is
typically implemented as a NAND gate followed by an inverter not the other way around.
Likewise, an OR gate is typically implemented as a NOR gate followed by an inverter not the other way around.
To prove that any Boolean function can be implemented using only NAND gates, we will show that the AND, OR, and
NOT operations can be performed using only these gates. A universal gate is a gate which can implement any Boolean
function without need to use any other gate type.
The figure shows two ways in which a NAND gate can be used as an inverter (NOT gate).
1. Connecting all NAND input pins to the input signal A gives an output of A′.
2. Connecting one NAND input pin to the input signal A while all other input pins are connected to logic 1 also gives an output of A′.
An AND gate can be replaced by NAND gates as shown in the figure (The AND is
replaced by a NAND gate with its output complemented by a NAND gate inverter).
An OR gate can be replaced by NAND gates as shown in the figure (The OR gate is replaced by a NAND gate with all
its inputs complemented by NAND gate inverters).
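As a rough illustration of NAND universality (not part of the original notes), the following Python sketch builds NOT, AND and OR entirely from a two-input NAND primitive, mirroring the constructions described above; the function names are ours.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:            # NAND with both inputs tied together
    return nand(a, a)

def and_(a: int, b: int) -> int:    # NAND followed by a NAND inverter
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:     # NAND with both inputs complemented
    return nand(not_(a), not_(b))

# Quick check against the usual truth tables
for a in (0, 1):
    for b in (0, 1):
        print(a, b, not_(a), and_(a, b), or_(a, b))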
To prove that any Boolean function can be implemented using only NOR gates, we will show that the AND, OR, and
NOT operations can be performed using only these gates.
The figure shows two ways in which a NOR gate can be used as an inverter (NOT gate).
1. Connecting all NOR input pins to the input signal A gives an output of A′.
2. Connecting one NOR input pin to the input signal A while all other input pins are connected to logic 0 also gives an output of A′.
An OR gate can be replaced by NOR gates as shown in the figure (The OR is replaced by a NOR gate with its output
complemented by a NOR gate inverter)
An AND gate can be replaced by NOR gates as shown in the figure (The AND gate is replaced by a NOR gate with all
its inputs complemented by NOR gate inverters)
Equivalent Gates:
The shown figure summarizes important cases of gate equivalence. Note that bubbles indicate a complement operation
(inverter).
Two NOT gates in series are the same as a buffer, because they cancel each other: A″ = A.
We have seen before that Boolean functions in either SOP or POS forms can be implemented using 2-Level
implementations.
For SOP forms AND gates will be in the first level and a single OR gate will be in the second level.
For POS forms OR gates will be in the first level and a single AND gate will be in the second level.
Note that using inverters to complement input variables is not counted as a level.
To implement a function using NAND gates only, it must first be simplified to a sum of products; to implement a function using NOR gates only, it must first be simplified to a product of sums.
We will show that SOP forms can be implemented using only NAND gates, while POS forms can be implemented using
only NOR gates through examples.
Example 1: Implement the following SOP function using NAND gate only
F = XZ + Y′Z + X′YZ
Introducing two successive inverters at the inputs of the OR gate results in the equivalent implementation shown, since two successive inverters on the same line have no overall effect on the logic, as was shown before.
By associating one of the inverters with the output of the first level AND gate and the other with the input of the OR
gate, it is clear that this implementation is reducible to 2-level implementation where both levels are NAND gates as
shown in Figure.
Introducing two successive inverters at the inputs of the AND gate results in the equivalent implementation shown, since two successive inverters on the same line have no overall effect on the logic, as was shown before.
By associating one of the inverters with the output of the first level OR gates and the other with the input of the AND
gate, it is clear that this implementation is reducible to 2-level implementation where both levels are NOR gates as
shown in Figure.
• NAND-AND
• AND-NOR,
• NOR-OR,
• OR-NAND
AND-NOR functions:
By complementing the output we can get F, or by using NAND-AND circuit as shown in the figure.
It can also be implemented using AND-NOR circuit as it is equivalent to NAND- AND circuit as shown in the figure.
By complementing the output we can get F, or by using NOR-OR circuit as shown in the figure.
It can also be implemented using OR-NAND circuit as it is equivalent to NOR-OR circuit as shown in the figure
The design of a combinational circuit starts from the verbal outline of the problem and ends with a logic circuit diagram or a set of Boolean functions from which the logic diagram can be easily obtained. The procedure involves stating the problem, determining the inputs and outputs, assigning letter symbols to them, deriving the truth table, obtaining the simplified Boolean functions, and drawing the logic diagram.
Adders
In electronics, an adder or summer is a digital circuit that performs addition of numbers. In modern computers adders
reside in the arithmetic logic unit (ALU) where other operations are performed. Although adders can be constructed for
many numerical representations, such as Binary-coded decimal or excess-3, the most common adders operate on binary
numbers. In cases where two's complement or one's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtractor. Other signed number representations require a more complex adder.
-Half Adder
A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum
and a carry value which are both binary digits.
A half adder has two inputs, generally labelled A and B, and two outputs, the sum S and carry C. S is the XOR of A and B, and C is the AND of A and B. Essentially, the output of a half adder is the two-bit sum of two one-bit numbers, with C being the more significant of the two output bits.
The drawback of this circuit is that in case of a multibit addition, it cannot include a carry.
A B Carry Sum
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0
Sum=A′B+AB′ Carry=AB
One can see that Sum can also be implemented using an XOR gate, as Sum = A ⊕ B.
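A minimal Python sketch of the half adder, following the equations above (Sum = A ⊕ B, Carry = AB), is given below for illustration.

def half_adder(a: int, b: int):
    s = a ^ b        # XOR of the two input bits
    c = a & b        # AND of the two input bits
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # reproduces the truth table above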
A full adder has three inputs A, B, and a carry in C, such that multiple adders can be used to add larger numbers. To
remove ambiguity between the input and output carry lines, the carry in is labelled Ci or Cin while the carry out is
labelled Co or Cout.
A full adder is a logical circuit that performs an addition operation on three binary digits. The full adder produces a sum
and carry value, which are both binary digits. It can be combined with other full adders or work on its own.
Input Output
A B Ci Co S
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1
Co = A′BCi + AB′Ci + ABCi′ + ABCi
S = A′B′Ci + A′BCi′ + AB′Ci′ + ABCi
A full adder can be trivially built using our ordinary design methods for combinatorial circuits; the resulting circuit is shown in the figure.
Note that the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting
logic. This is because the only discrepancy between OR and XOR gates occurs when both inputs are 1; for the adder
shown here, this is never possible. Using only two types of gates is convenient if one desires to implement the adder
directly using common IC chips.
A full adder can be constructed from two half adders by connecting A and B to the input of one half adder, connecting
the sum from that to an input to the second adder, connecting Ci to the other input and OR the two carry outputs.
Equivalently, S could be made the three-bit XOR of A, B, and Ci, and Co could be made the three-bit majority function of A, B, and Ci. The output of the full adder is the two-bit arithmetic sum of three one-bit numbers.
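The two-half-adder construction just described can be sketched in Python as follows; the helper names are illustrative only.

def half_adder(a, b):
    return a ^ b, a & b

def full_adder(a, b, ci):
    s1, c1 = half_adder(a, b)      # first half adder adds A and B
    s, c2 = half_adder(s1, ci)     # second adds the partial sum to Ci
    co = c1 | c2                   # the two partial carries can never both be 1
    return s, co

for a in (0, 1):
    for b in (0, 1):
        for ci in (0, 1):
            print(a, b, ci, full_adder(a, b, ci))   # matches the table above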
It is possible to create a logical circuit using multiple full adders to add N-bit numbers. Each full adder inputs a Cin,
which is the Cout of the previous adder. This kind of adder is a ripple carry adder, since each carry bit "ripples" to the
next full adder. Note that the first (and only the first) full adder may be replaced by a half adder.
The layout of ripple carry adder is simple, which allows for fast design time; however, the ripple carry adder is
relatively slow, since each full adder must wait for the carry bit to be calculated from the previous full adder. The gate
delay can easily be calculated by inspection of the full adder circuit. Following the path from Cin to Cout shows 2 gates
that must be passed through. Therefore, a 32-bit adder requires 31 carry computations and the final sum calculation for a
total of 31 * 2 + 1 = 63 gate delays.
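A Python sketch of an N-bit ripple-carry adder is shown below; representing numbers as lists of bits with the least significant bit first is an assumption made for this illustration.

def full_adder(a, b, ci):
    s = a ^ b ^ ci
    co = (a & b) | (a & ci) | (b & ci)
    return s, co

def ripple_carry_add(a_bits, b_bits):
    carry, sum_bits = 0, []
    for a, b in zip(a_bits, b_bits):          # carry "ripples" bit by bit
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry

# 0110 (6) + 0011 (3) = 1001 (9); bits are listed LSB first
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 0, 0]))   # ([1, 0, 0, 1], 0)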
Subtractor
In electronics, a subtractor can be designed using the same approach as that of an adder. The binary subtraction process
is summarized below. As with an adder, in the general case of calculations on multi-bit numbers, three bits are involved
in performing the subtraction for each bit: the minuend (Xi), subtrahend (Yi), and a borrow in from the previous (less
significant) bit order position (Bi). The outputs are the difference bit (Di) and borrow bit Bi + 1.
Half subtractor
The half-subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, X
(minuend) and Y (subtrahend) and two outputs D (difference) and B (borrow). Such a circuit is called a half-subtractor
because it enables a borrow out of the current arithmetic operation but no borrow in from a previous arithmetic
operation.
X Y D B
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
D = X′Y + XY′, i.e. D = X ⊕ Y
B = X′Y
As in the case of addition using logic gates, a full subtractor is made by combining two half-subtractors and an additional OR gate. A full subtractor has a borrow-in capability (denoted as BORIN in the diagram below) and so allows cascading, which makes multi-bit subtraction possible. In the truth table below, B acts as the minuend and A as the subtrahend, so that D = B − A − BORIN.
A B BORIN D BOROUT
0 0 0 0 0
0 0 1 1 1
0 1 0 1 0
0 1 1 0 0
1 0 0 1 1
1 0 1 0 1
1 1 0 0 0
1 1 1 1 1
For a wide range of operations many circuit elements would be required. A neater solution is to perform subtraction via addition using complementing, as was discussed in the binary arithmetic topic. In this case only adders are needed, as shown below.
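A Python sketch of subtraction via complement addition is given below; the 4-bit word width is an assumption chosen for the example.

def subtract_via_addition(x: int, y: int, n: int = 4) -> int:
    mask = (1 << n) - 1
    ones_complement_y = (~y) & mask          # complement of Y (NOT Y) ...
    result = x + ones_complement_y + 1       # ... plus 1, added to X by the adder
    return result & mask                     # discard the carry out of the top bit

print(subtract_via_addition(0b1001, 0b0011))   # 9 - 3 = 6 -> 0b0110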
The purpose of circuit minimization is to obtain an algebraic expression that, when implemented, results in a low-cost circuit. Digital circuits are constructed with integrated circuits (ICs). An IC is a small silicon semiconductor crystal, called a chip, containing the electronic components for the digital gates. The various gates are interconnected inside the chip to form the required circuit. Digital ICs are categorized according to their circuit complexity, as measured by the number of logic gates in a single package.
- Small Scale Integration (SSI). SSI devices contain fewer than 10 gates. The inputs and outputs of the gates are connected directly to the pins in the package.
- Medium Scale Integration (MSI). MSI devices have a complexity of approximately 10 to 100 gates in a single package.
- Large Scale Integration (LSI). LSI devices contain between 100 and a few thousand gates in a single package.
- Very Large Scale Integration (VLSI). VLSI devices contain thousands of gates within a single package. VLSI devices have revolutionized computer system design technology, giving the designer the capability to create structures that previously were uneconomical.
Multiplexer
A multiplexer is a combinatorial circuit that is given a certain number (usually a power of two) of data inputs, say 2^n, and n address inputs used as a binary number to select one of the data inputs. The multiplexer has a single output, which has the same value as the selected data input.
In other words, the multiplexer works like the input selector of a home music system. Only one input is selected at a
time, and the selected input is transmitted to the single output. While on the music system, the selection of the input is
made manually, the multiplexer chooses its input based on a binary number, the address input.
The truth table for a multiplexer is huge for all but the smallest values of n. We therefore use an abbreviated version of
the truth table in which some inputs are replaced by `-' to indicate that the input value does not matter.
Here is such an abbreviated truth table for n = 3. The full truth table would have 2^(3 + 2^3) = 2^11 = 2048 rows.
We can abbreviate this table even more by using a letter to indicate the value of the selected input, like this:
a2 a1 a0 | d7 d6 d5 d4 d3 d2 d1 d0 | x
---------+-------------------------+---
 0  0  0 |  -  -  -  -  -  -  -  c | c
 0  0  1 |  -  -  -  -  -  -  c  - | c
 0  1  0 |  -  -  -  -  -  c  -  - | c
 0  1  1 |  -  -  -  -  c  -  -  - | c
 1  0  0 |  -  -  -  c  -  -  -  - | c
 1  0  1 |  -  -  c  -  -  -  -  - | c
 1  1  0 |  -  c  -  -  -  -  -  - | c
 1  1  1 |  c  -  -  -  -  -  -  - | c
The same way we can simplify the truth table for the multiplexer, we can also simplify the corresponding circuit.
Indeed, our simple design method would yield a very large circuit. The simplified circuit looks like this:
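In addition to the simplified gate-level circuit, the multiplexer's behaviour can be sketched functionally in Python (an illustration only; the address bits are assumed to be given least significant first).

def multiplexer(data_inputs, address_bits):
    """data_inputs: list of 2**n bits (d0 first); address_bits: [a0, a1, ..., a(n-1)]."""
    index = sum(bit << i for i, bit in enumerate(address_bits))
    return data_inputs[index]          # the selected data input is copied to the output

# 8-to-1 example: address a2 a1 a0 = 101 (5) selects d5
d = [0, 0, 0, 0, 0, 1, 0, 0]           # only d5 is 1
print(multiplexer(d, [1, 0, 1]))        # address given as [a0, a1, a2] -> prints 1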
The demultiplexer is the inverse of the multiplexer, in that it takes a single data input and n address inputs. It has 2^n outputs. The address inputs determine which data output is going to have the same value as the data input. The other data outputs will have the value 0.
Here is an abbreviated truth table for the demultiplexer. We could have given the full table since it has only 16 rows, but
we will use the same convention as for the multiplexer where we abbreviated the values of the data inputs.
a2 a1 a0  d | x7 x6 x5 x4 x3 x2 x1 x0
------------+------------------------
 0  0  0  c |  0  0  0  0  0  0  0  c
 0  0  1  c |  0  0  0  0  0  0  c  0
 0  1  0  c |  0  0  0  0  0  c  0  0
 0  1  1  c |  0  0  0  0  c  0  0  0
 1  0  0  c |  0  0  0  c  0  0  0  0
 1  0  1  c |  0  0  c  0  0  0  0  0
 1  1  0  c |  0  c  0  0  0  0  0  0
 1  1  1  c |  c  0  0  0  0  0  0  0
Here is one possible circuit diagram for the demultiplexer:
In both the multiplexer and the demultiplexer, part of the circuit decodes the address inputs, i.e. it translates a binary number of n digits into 2^n outputs, one of which (the one that corresponds to the value of the binary number) is 1 and the others of which are 0.
It is sometimes advantageous to separate this function from the rest of the circuit, since it is useful in many other
applications. Thus, we obtain a new combinatorial circuit that we call the decoder. It has the following truth table (for n
= 3):
a2 a1 a0 | x7 x6 x5 x4 x3 x2 x1 x0
---------+------------------------
 0  0  0 |  0  0  0  0  0  0  0  1
 0  0  1 |  0  0  0  0  0  0  1  0
 0  1  0 |  0  0  0  0  0  1  0  0
 0  1  1 |  0  0  0  0  1  0  0  0
 1  0  0 |  0  0  0  1  0  0  0  0
 1  0  1 |  0  0  1  0  0  0  0  0
 1  1  0 |  0  1  0  0  0  0  0  0
 1  1  1 |  1  0  0  0  0  0  0  0
Here is the circuit diagram for the decoder:
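Besides the gate-level diagram, the decoder's behaviour can be sketched in Python as follows (an illustration; the address bits are assumed least significant first).

def decoder(address_bits):
    """address_bits: [a0, a1, ..., a(n-1)]; returns [x0, x1, ..., x(2^n - 1)]."""
    n = len(address_bits)
    index = sum(bit << i for i, bit in enumerate(address_bits))
    # exactly one output is 1: the one whose index matches the address value
    return [1 if i == index else 0 for i in range(2 ** n)]

print(decoder([1, 1, 0]))   # a2 a1 a0 = 011 -> x3 = 1, all other outputs 0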
An encoder has 2^n input lines and n output lines. The output lines generate a binary code corresponding to the input value. For example, a single-bit 4-to-2 encoder takes in 4 bits and outputs 2 bits. It is assumed that only 4 input patterns can occur: 0001, 0010, 0100, 1000.
I3 I2 I1 I0 F1 F0
0 0 0 1 0 0
0 0 1 0 0 1
0 1 0 0 1 0
1 0 0 0 1 1
4 to 2 encoder
A priority encoder is such that if two or more inputs are asserted at the same time, the input having the highest priority takes precedence. An example of a single-bit 4-to-2 priority encoder is shown.
I3 I2 I1 I0 F1 F0
0 0 0 1 0 0
0 0 1 X 0 1
0 1 X X 1 0
1 X X X 1 1
4 to 2 priority encoder
The X's designate the don't-care condition, meaning that the binary value may be either 0 or 1. For example, the input I3 has the highest priority, so regardless of the values of the other inputs, if I3 is 1 the output is F1F0 = 11 (binary 3).
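The behaviour of the 4-to-2 priority encoder in the table above can be sketched in Python as follows (an illustration only).

def priority_encoder(i3, i2, i1, i0):
    # the highest-numbered active input wins, regardless of lower-priority inputs
    if i3:
        return 1, 1        # F1 F0 = 11
    if i2:
        return 1, 0        # F1 F0 = 10
    if i1:
        return 0, 1        # F1 F0 = 01
    return 0, 0            # only I0 (or no input) active

print(priority_encoder(0, 1, 1, 0))   # I2 outranks I1 -> (1, 0)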
Exercise
2. A circuit has four inputs D, C, B, A encoded in natural binary form, where A is the least significant bit. Inputs in the range 0000 = 0 to 1011 = 11 represent the months of the year from January (0) to December (11). Inputs in the range 1100 to 1111 (i.e. 12 to 15) cannot occur. The output of the circuit is true if the month represented by the input has 31 days; otherwise the output is false. The output for inputs in the range 1100 to 1111 is undefined.
- Draw the truth table to represent the problem and obtain the function F as a sum of minterms.
- Use the Karnaugh map to obtain a simplified expression for the function F.
- Construct the circuit that implements the function using NOR gates only.
3. A circuit has four inputs P, Q, R, S, representing the natural binary numbers 0000 = 0 to 1111 = 15. P is the most significant bit. The circuit has one output, X, which is true if the number represented by the input is a prime number and false otherwise. (A prime number is a number which is divisible only by 1 and by itself. Note that zero (0000) and one (0001) are not considered prime numbers.)
i. Design a truth table for this circuit, and hence obtain an expression for X in terms of P, Q, R, S.
ii. Design a circuit diagram to implement this function using NOR gates only.
4. A combinational circuit is defined by the following three Boolean functions: F1=x’y’z’+xz F2=xy’z’+x’y
F3=x’y’z+xy Design the circuit that implements the functions
5. A circuit implements the Boolean function F=A’B’C’D’+A’BCD’+AB’C’D’+ABC’D It is found that the circuit input
combinations A’B’CD’, A’BC’D’, AB’CD’ can never occur.
i. Find a simpler expression for F using the proper don’t care condition.
ii. Design the circuit implementing the simplified expression of F
6. A combinational circuit is defined by the following three Boolean functions: F1=x’y’z’+xz F2=xy’z’+x’y
F3=x’y’z+xy Design the circuit with a decoder and external gates.
7. A circuit has four inputs P, Q, R, S, representing the natural binary numbers 0000 = 0 to 1111 = 15. P is the most significant bit. The circuit has one output, X, which is true if the number represented is divisible by three. (Regard zero as being indivisible by three.)
Design a truth table for this circuit, and hence obtain an expression for X in terms of P, Q, R, S as a product of maxterms and also as a sum of minterms.
Design a circuit diagram to implement this function.
8. Plot the following function on K map and use the K map to simplify the expression.
F = ABC + ABC + ABC + ABC + ABC + ABC F = ABC + ABC + ABC + ABC
In the previous session, we said that the output of a combinational circuit depends solely upon the input. The implication
is that combinational circuits have no memory. In order to build sophisticated digital logic circuits, including computers,
we need a more powerful model. We need circuits whose output depends upon both the input of the circuit and its
previous state. In other words, we need circuits that have memory.
It is possible to produce circuits with memory using the digital logic gates we've already seen. To do that, we need to
introduce the concept of feedback. So far, the logical flow in the circuits we've studied has been from input to output.
Such a circuit is called acyclic. Now we will introduce a circuit in which the output is fed back to the input, giving the
circuit memory. (There are other memory technologies that store electric charges or magnetic fields; these do not
depend on feedback.)
In the same way that gates are the building blocks of combinatorial circuits, latches and flip-flops are the building blocks
of sequential circuits.
While gates had to be built directly from transistors, latches can be built from gates, and flip-flops can be built from
latches. This fact will make it somewhat easier to understand latches and flip-flops.
Both latches and flip-flops are circuit elements whose output depends not only on the current inputs, but also on
previous inputs and outputs. The difference between a latch and a flip-flop is that a latch does not have a clock signal,
whereas a flip-flop always does.
Latches
How can we make a circuit out of gates that is not combinatorial? The answer is feed-back, which means that we create
loops in the circuit diagrams so that output values depend, indirectly, on themselves. If such feed-back is positive then
the circuit tends to have stable states, and if it is negative the circuit will tend to oscillate.
In order for a logical circuit to "remember" and retain its logical state even after the controlling input signal(s)
have been removed, it is necessary for the circuit to include some form of feedback. We might start with a pair of
inverters, each having its input connected to the other's output. The two outputs will always have opposite logic
levels.
The circuit shown below is a basic NAND latch. The inputs are generally designated "S" and "R" for "Set" and
"Reset" respectively. Because the NAND inputs must normally be logic 1 to avoid affecting the latching action,
the inputs are considered to be inverted in this circuit.
The outputs of any single-bit latch or memory are traditionally designated Q and Q'. In a commercial latch circuit,
either or both of these may be available for use by other circuits. In any case, the circuit itself is:
For the NAND latch circuit, both inputs should normally be at a logic 1 level. Changing an input to a logic 0 level
will force that output to a logic 1. The same logic 1 will also be applied to the second input of the other NAND gate,
allowing that output to fall to a logic 0 level. This in turn feeds back to the second input of the original gate, forcing
its output to remain at logic 1.
Applying another logic 0 input to the same gate will have no further effect on this circuit. However, applying a logic
0 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the
other way.
Note that it is forbidden to have both inputs at a logic 0 level at the same time. That state will force both outputs to a
logic 1, overriding the feedback latching action. In this condition, whichever input goes to logic 1 first will lose
control, while the other input (still at logic 0) controls the resulting state of the latch. If both inputs go to logic 1
simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.
The same functions can also be performed using NOR gates. A few adjustments must be made to allow for the
difference in the logic function, but the logic involved is quite similar.
The circuit shown below is a basic NOR latch. The inputs are generally designated "S" and "R" for "Set" and
"Reset" respectively. Because the NOR inputs must normally be logic 0 to avoid overriding the latching action, the
inputs are not inverted in this circuit. The NOR-based latch circuit is:
For the NOR latch circuit, both inputs should normally be at a logic 0 level. Changing an input to a logic 1 level will
force that output to a logic 0. The same logic 0 will also be applied to the second input of the other NOR gate,
allowing that output to rise to a logic 1 level. This in turn feeds back to the second input of the original gate, forcing
its output to remain at logic 0 even after the external input is removed.
Applying another logic 1 input to the same gate will have no further effect on this circuit. However, applying a logic
1 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the
other way.
Note that it is forbidden to have both inputs at a logic 1 level at the same time. That state will force both outputs to
a logic 0, overriding the feedback latching action. In this condition, whichever input goes to logic 0 first will lose
control, while the other input (still at logic 1) controls the resulting state of the latch. If both inputs go to logic 0
simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.
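A rough Python sketch of the cross-coupled NOR latch described above is shown below; the feedback loop is modelled by iterating the two gate equations until the outputs settle. This is an illustration of the behaviour, not a timing-accurate model.

def nor(a, b):
    return 0 if (a or b) else 1

def nor_latch(s, r, q, q_bar):
    """Settle the latch outputs given inputs S, R and the previous state (Q, Q')."""
    for _ in range(4):                 # a few iterations are enough to settle
        q_new = nor(r, q_bar)
        q_bar_new = nor(s, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

state = (0, 1)                                   # start in the reset state
state = nor_latch(1, 0, *state); print(state)    # S=1 sets the latch    -> (1, 0)
state = nor_latch(0, 0, *state); print(state)    # S=R=0 holds the state -> (1, 0)
state = nor_latch(0, 1, *state); print(state)    # R=1 resets the latch  -> (0, 1)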
Flip-flops
Latches are asynchronous, which means that the output changes very soon after the input changes. Most computers
today, on the other hand, are synchronous, which means that the outputs of all the sequential circuits change
simultaneously to the rhythm of a global clock signal.
A flip-flop circuit can be constructed from two NAND gates or two NOR gates. These flip-flops are shown in Figure 2
and Figure 3. Each flip-flop has two outputs, Q and Q′, and two inputs, set and reset. This type of flip-flop is referred to
as an SR flip-flop or SR latch. The flip-flop in Figure 2 has two useful states. When Q=1 and Q′=0, it is in the set state
(or 1-state). When Q=0 and Q′=1, it is in the clear state (or 0 -state). The outputs Q and Q′ are complements of each
other and are referred to as the normal and complement outputs, respectively. The binary state of the flip-flop is taken to
be the value of the normal output.
When a 1 is applied to both the set and reset inputs of the flip-flop in Figure 2, both Q and Q′ outputs go to 0. This
condition violates the fact that both outputs are complements of each other. In normal operation this condition must be
avoided by making sure that 1's are not applied to both inputs simultaneously.
The NAND basic flip-flop circuit in Figure 3(a) operates with inputs normally at 1 unless the state of the flip-flop has to
be changed. A 0 applied momentarily to the set input causes Q to go to 1 and Q′ to go to 0, putting the flip-flop in the set
state. When both inputs go to 0, both outputs go to 1. This condition should be avoided in normal operation.
Clocked SR Flip-Flop
The clocked SR flip-flop shown in Figure 4 consists of a basic NOR flip-flop and two AND gates. The outputs of the
two AND gates remain at 0 as long as the clock pulse (or CP) is 0, regardless of the S and R input values. When the
clock pulse goes to 1, information from the S and R inputs passes through to the basic flip-flop. With both S=1 and R=1,
the occurrence of a clock pulse causes both outputs to momentarily go to 0. When the pulse is removed, the state of the
flip-flop is indeterminate, ie., either state may result, depending on whether the set or reset input of the flip-flop remains
a 1 longer than the transition to 0 at the end of the pulse.
The D flip-flop shown in Figure 5 is a modification of the clocked SR flip-flop. The D input goes directly into the S
input and the complement of the D input goes to the R input. The D input is sampled during the occurrence of a clock
pulse. If it is 1, the flip-flop is switched to the set state (unless it was already set). If it is 0, the flip-flop switches to the
clear state.
JK Flip-Flop
A JK flip-flop is a refinement of the SR flip-flop in that the indeterminate state of the SR type is defined in the JK type.
Inputs J and K behave like inputs S and R to set and clear the flip-flop (note that in a JK flip-flop, the letter J is for set
and the letter K is for clear). When logic 1 inputs are applied to both J and K simultaneously, the flip-flop switches to its
complement state, ie., if Q=1, it switches to Q=0 and vice versa.
A clocked JK flip-flop is shown in Figure 6. Output Q is ANDed with K and CP inputs so that the flip-flop is cleared
during a clock pulse only if Q was previously 1. Similarly, output Q′ is ANDed with J and CP inputs so that the flip-flop
is set with a clock pulse only if Q′ was previously 1.
Note that because of the feedback connection in the JK flip-flop, a CP signal which remains a 1 (while J=K=1) after the
outputs have been complemented once will cause repeated and continuous transitions of the outputs. To avoid this, the
clock pulses must have a time duration less than the propagation delay through the flip-flop. The restriction on the pulse
width can be eliminated with a master-slave or edge-triggered construction. The same reasoning also applies to the T
flip-flop presented next.
T Flip-Flop
The T flip-flop is a single input version of the JK flip-flop. As shown in Figure 7, the T flip-flop is obtained from the JK
type if both inputs are tied together. The output of the T flip-flop "toggles" with each clock pulse.
Triggering of Flip-flops
The state of a flip-flop is changed by a momentary change in the input signal. This change is called a trigger and the
transition it causes is said to trigger the flip-flop. The basic circuits of Figure 2 and Figure 3 require an input trigger
defined by a change in signal level. This level must be returned to its initial level before a second trigger is applied.
Clocked flip-flops are triggered by pulses.
The feedback path between the combinational circuit and the memory elements in Figure 1 can produce instability if the outputs of the memory elements (flip-flops) are changing while the outputs of the combinational circuit that go to the flip-flop inputs are being sampled by the clock pulse.
The clock pulse goes through two signal transitions: from 0 to 1 and the return from 1 to 0. As shown in Figure 8 the
positive transition is defined as the positive edge and the negative transition as the negative edge.
The clocked flip-flops already introduced are triggered during the positive edge of the pulse, and the state transition
starts as soon as the pulse reaches the logic-1 level. If the other inputs change while the clock is still 1, a new output
state may occur. If the flip-flop is made to respond to the positive (or negative) edge transition only, instead of the entire
pulse duration, then the multiple-transition problem can be eliminated.
Master-Slave Flip-Flop
A master-slave flip-flop is constructed from two separate flip-flops. One circuit serves as a master and the other as a
slave. The logic diagram of an SR flip-flop is shown in Figure 9. The master flip-flop is enabled on the positive edge of
the clock pulse CP and the slave flip-flop is disabled by the inverter. The information at the external R and S inputs is
transmitted to the master flip-flop. When the pulse returns to 0, the master flip-flop is disabled and the slave flip-flop is
enabled. The slave flip-flop then goes to the same state as the master flip-flop.
The timing relationship is shown in Figure 10, where it is assumed that the flip-flop is in the clear state prior to the occurrence of the clock pulse. The output state of the master-slave flip-flop changes on the negative transition of the clock pulse.
Another type of flip-flop that synchronizes the state changes during a clock pulse transition is the edge-triggered flip-
flop. When the clock pulse input exceeds a specific threshold level, the inputs are locked out and the flip-flop is not
affected by further changes in the inputs until the clock pulse returns to 0 and another pulse occurs. Some edge-triggered
flip-flops cause a transition on the positive edge of the clock pulse (positive-edge-triggered), and others on the negative
edge of the pulse (negative-edge-triggered). The logic diagram of a D-type positive-edge-triggered flip-flop is shown in
Figure 11.
When using different types of flip-flops in the same circuit, one must ensure that all flip-flop outputs make their
transitions at the same time, ie., during either the negative edge or the positive edge of the clock pulse.
Direct Inputs
Flip-flops in IC packages sometimes provide special inputs for setting or clearing the flip-flop asynchronously. They are
usually called preset and clear. They affect the flip-flop without the need for a clock pulse. These inputs are useful for
bringing flip-flops to an initial state before their clocked operation. For example, after power is turned on in a digital
system, the states of the flip-flops are indeterminate. Activating the clear input clears all the flip-flops to an initial state
of 0. The graphic symbol of a JK flip-flop with an active-low clear is shown in Figure 12.
Summary
Since memory elements in sequential circuits are usually flip-flops, it is worth summarising the behaviour of various
flip-flop types before proceeding further. All flip-flops can be divided into four basic types: SR, JK, D and T. They
differ in the number of inputs and in the response invoked by different value of input signals. The four types of flip-
flops are defined in Table 1.
Table 1. The four basic flip-flop types

SR flip-flop
  Characteristic table          Excitation table
  S R | Q(next)                 Q  Q(next) | S R
  0 0 | Q                       0    0     | 0 X
  0 1 | 0                       0    1     | 1 0
  1 0 | 1                       1    0     | 0 1
  1 1 | ?                       1    1     | X 0
  Characteristic equation: Q(next) = S + R′Q   (with the constraint SR = 0)

JK flip-flop
  Characteristic table          Excitation table
  J K | Q(next)                 Q  Q(next) | J K
  0 0 | Q                       0    0     | 0 X
  0 1 | 0                       0    1     | 1 X
  1 0 | 1                       1    0     | X 1
  1 1 | Q′                      1    1     | X 0
  Characteristic equation: Q(next) = JQ′ + K′Q

D flip-flop
  Characteristic table          Excitation table
  D | Q(next)                   Q  Q(next) | D
  0 | 0                         0    0     | 0
  1 | 1                         0    1     | 1
                                1    0     | 0
                                1    1     | 1
  Characteristic equation: Q(next) = D

T flip-flop
  Characteristic table          Excitation table
  T | Q(next)                   Q  Q(next) | T
  0 | Q                         0    0     | 0
  1 | Q′                        0    1     | 1
                                1    0     | 1
                                1    1     | 0
  Characteristic equation: Q(next) = TQ′ + T′Q
The characteristic table of each flip-flop in Table 1 defines its state as a function of its inputs and its previous state. Q refers to the present state and Q(next) refers to the next state after the occurrence of the clock pulse.
The characteristic table for the SR flip-flop shows that the next state is equal to the present state when both inputs S and R are equal to 0. When R=1, the next clock pulse clears the flip-flop. When S=1, the flip-flop output Q is set to 1. The question mark (?) for the next state when S and R are both equal to 1 designates an indeterminate next state.
The characteristic table for the JK flip-flop is the same as that of the RS when J and K are replaced by S and R
respectively, except for the indeterminate case. When both J and K are equal to 1, the next state is equal to the
complement of the present state, that is, Q(next) = Q′.
The next state of the D flip-flop is completely dependent on the input D and independent of the present state.
The next state for the T flip-flop is the same as the present state Q if T=0 and complemented if T=1.
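For illustration, the four characteristic equations of Table 1 can be written directly as next-state functions in Python (the function names are ours, not standard).

def sr_next(q, s, r):
    assert not (s and r), "S = R = 1 is not allowed for the SR flip-flop"
    return s | ((1 - r) & q)                # Q(next) = S + R'Q

def jk_next(q, j, k):
    return (j & (1 - q)) | ((1 - k) & q)    # Q(next) = JQ' + K'Q

def d_next(q, d):
    return d                                # Q(next) = D

def t_next(q, t):
    return t ^ q                            # Q(next) = TQ' + T'Q

print(jk_next(1, 1, 1))   # J = K = 1 toggles: Q = 1 -> 0
print(t_next(0, 1))       # T = 1 toggles: Q = 0 -> 1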
The characteristic table is useful during the analysis of sequential circuits, when the values of the flip-flop inputs are known and we want to find the value of the flip-flop output Q after the rising edge of the clock signal. As with any other truth table, we can use the map method to derive the characteristic equation for each flip-flop; these equations are also listed in Table 1.
During the design process we usually know the transition from present state to the next state and wish to find the flip-
flop input conditions that will cause the required transition. For this reason we will need a table that lists the required
inputs for a given change of state. Such a list is called the excitation table, which is also shown in Table 1. There are four possible transitions from the present state to the next state. The required input conditions are derived from
the information available in the characteristic table. The symbol X in the table represents a "don't care" condition, that
is, it does not matter whether the input is 1 or 0.
An asynchronous system is a system whose outputs depend upon the order in which its input variables change and which can be affected at any instant of time.
Gate-type asynchronous systems are basically combinational circuits with feedback paths. Because of the feedback
among logic gates, the system may, at times, become unstable. Consequently they are not often used.
Synchronous type of system uses storage elements called flip-flops that are employed to change their binary value only
at discrete instants of time. Synchronous sequential circuits use logic gates and flip-flop storage devices. Sequential
circuits have a clock signal as one of their inputs. All state transitions in such circuits occur only when the clock value is
either 0 or 1 or happen at the rising or falling edges of the clock depending on the type of memory elements used in the
circuit. Synchronization is achieved by a timing device called a clock pulse generator. Clock pulses are distributed
throughout the system in such a way that the flip-flops are affected only with the arrival of the synchronization pulse.
Synchronous sequential circuits that use clock pulses in the inputs are called clocked-sequential circuits. They are stable
and their timing can easily be broken down into independent discrete steps, each of which is considered separately.
A clock signal is a periodic square wave that indefinitely switches from 0 to 1 and from 1 to 0 at fixed intervals. Clock
cycle time or clock period: the time interval between two consecutive rising or falling edges of the clock.
Mealy and Moore models are the basic models of state machines. A state machine which uses only Entry Actions, so
that its output depends on the state, is called a Moore model. A state machine which uses only Input Actions, so that the
output depends on the state and also on inputs, is called a Mealy model. The models selected will influence a design but
there are no general indications as to which model is better. Choice of a model depends on the application, execution
means (for instance, hardware systems are usually best realised as Moore models) and personal preferences of a
designer or programmer. In practice, mixed models with several action types are often used.
Design of Sequential Circuits
The design of a synchronous sequential circuit starts from a set of specifications and culminates in a logic diagram or a
list of Boolean functions from which a logic diagram can be obtained. In contrast to a combinational logic, which is
fully specified by a truth table, a sequential circuit requires a state table for its specification. The first step in the design
of sequential circuits is to obtain a state table or an equivalence representation, such as a state diagram.
A synchronous sequential circuit is made up of flip-flops and combinational gates. The design of the circuit consists of
choosing the flip-flops and then finding the combinational structure which, together with the flip-flops, produces a
circuit that fulfils the required specifications. The number of flip-flops is determined from the number of states needed
in the circuit.
The recommended steps for the design of sequential circuits are set out below:
We have examined a general model for sequential circuits. In this model the effect of all previous inputs on the outputs
is represented by a state of the circuit. Thus, the output of the circuit at any time depends upon its current state and the
input. These also determine the next state of the circuit. The relationship that exists among the inputs, outputs, present
states and next states can be specified by either the state table or the state diagram.
State Table
The state table representation of a sequential circuit consists of three sections labelled present state, next state and
output. The present state designates the state of flip-flops before the occurrence of a clock pulse. The next state shows
the states of flip-flops after the clock pulse, and the output section lists the value of the output variables during the
present state.
The binary number inside each circle identifies the state the circle represents. The directed lines are labelled with two
binary numbers separated by a slash (/). The input value that causes the state transition is labelled first. The number after
the slash symbol / gives the value of the output. For example, the directed line from state 00 to 01 is labelled 1/0,
meaning that, if the sequential circuit is in present state 00 and the input is 1, then the next state is 01 and the output is 0.
If it is in a present state 00 and the input is 0, it will remain in that state. A directed line connecting a circle with itself
indicates that no change of state occurs. The state diagram provides exactly the same information as the state table and
is obtained directly from the state table.
Example: Consider a sequential circuit shown in Figure 4. It has one input x, one output Z and two state variables
Q1Q2 (thus having four possible present states 00, 01, 10, 11).
Z = xQ1
D1 = x′ + Q1
D2 = xQ2′ + x′*Q1′
These equations can be used to form the state table. Suppose the present state (i.e. Q1Q2) = 00 and input x = 0. Under
these conditions, we get Z = 0, D1 = 1, and D2 = 1. Thus the next state of the circuit D1D2 = 11, and this will be the
present state after the clock pulse has been applied. The output of the circuit corresponding to the present state
Q1Q2 = 00 and x = 1 is Z = 0. This data is entered into the state table as shown in Table 2.
                     Next state            Output Z
Present state      x = 0      x = 1      x = 0   x = 1
   Q1 Q2           Q1 Q2      Q1 Q2
   0  0             1  1       0  1        0       0
   0  1             1  1       0  0        0       0
   1  0             1  0       1  1        0       1
   1  1             1  0       1  0        0       1
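For illustration, the following Python sketch reproduces this state table directly from the excitation and output equations of the example (D flip-flops, so the next state equals the excitation values).

def step(q1, q2, x):
    z  = x & q1                                    # Z  = x*Q1
    d1 = (1 - x) | q1                              # D1 = x' + Q1
    d2 = (x & (1 - q2)) | ((1 - x) & (1 - q1))     # D2 = x*Q2' + x'*Q1'
    return (d1, d2), z                             # next state, output

for q1 in (0, 1):
    for q2 in (0, 1):
        row = []
        for x in (0, 1):
            (n1, n2), z = step(q1, q2, x)
            row.append(f"x={x}: next={n1}{n2}, Z={z}")
        print(f"present {q1}{q2} | " + "  ".join(row))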
The state diagram for the sequential circuit in Figure 4 is shown in Figure 5.
[State diagrams of the SR, JK, D and T flip-flops]
You can see from the table that all four flip-flops have the same number of states and transitions. Each flip-flop is in the
set state when Q=1 and in the reset state when Q=0. Also, each flip-flop can move from one state to another, or it can re-
enter the same state. The only difference between the four types lies in the values of input signals that cause these
transitions.
A state diagram is a very convenient way to visualise the operation of a flip-flop or even of large sequential
components.
Example 1.1
Derive the state table and state diagram for the sequential circuit shown in Figure 7.
SOLUTION:
STEP 1: First we derive the Boolean expressions for the inputs of each flip-flops in the schematic, in terms of
external input Cnt and the flip-flop outputs Q1 and Q0. Since there are two D flip-flops in this example, we derive two
expressions for D1 and D0:
These Boolean expressions are called excitation equations since they represent the inputs to the flip-flops of the
sequential circuit in the next clock cycle.
STEP 2: Derive the next-state equations by converting these excitation equations into flip-flop characteristic
equations. In the case of D flip-flops, Q(next) = D. Therefore the next-state equations equal the excitation equations.
STEP 3: Now convert these next-state equations into tabular form called the next-state table.
                    Next state
Present state    Cnt = 0    Cnt = 1
   Q1 Q0          Q1 Q0      Q1 Q0
   0  0            0  0       0  1
   0  1            0  1       1  0
   1  0            1  0       1  1
   1  1            1  1       0  0
Each row corresponds to a state of the sequential circuit and each column represents one set of input values. Since
we have two flip-flops, the number of possible states is four - that is, Q1Q0 can be equal to 00, 01, 10, or 11. These are
present states as shown in the table.
Note that each entry in the next-state table indicates the values of the flip-flops in the next state if their value in the
present state is in the row header and the input values in the column header.
Each of these next-state values has been computed from the next-state equations in STEP 2.
STEP 4: The state diagram is generated directly from the next-state table, shown in Figure 8.
Each arc is labelled with the values of the input signals that cause the transition from the present state (the source of the
arc) to the next state (the destination of the arc).
Example 1.2
Derive the next state, the output table and the state diagram for the sequential circuit shown in Figure 10.
The input combinational logic in Figure 10 is the same as in Example 1.1, so the excitation and the next-state equations will be the same as in Example 1.1. The only addition is the output equation, Y = Q1Q0. As this equation shows, the output Y will equal 1 when the counter is in state Q1Q0 = 11, and it will stay 1 as long as the counter stays in that state.
                    Next state
Present state    Cnt = 0    Cnt = 1    Output Y
   Q1 Q0          Q1 Q0      Q1 Q0
   0  0            0  0       0  1        0
   0  1            0  1       1  0        0
   1  0            1  0       1  1        0
   1  1            1  1       0  0        1
State diagram:
State Reduction
Any design process must consider the problem of minimising the cost of the final circuit. The two most obvious cost
reductions are reductions in the number of flip-flops and the number of gates.
Example: Let us consider the state table of a sequential circuit shown in Table 6.
It can be seen from the table that the present state A and F both have the same next states, B (when x=0) and C (when
x=1). They also produce the same output 1 (when x=0) and 0 (when x=1). Therefore states A and F are equivalent. Thus
one of the states, A or F can be removed from the state table. For example, if we remove row F from the table and
replace all F's by A's in the columns, the state table is modified as shown in Table 7.
It is apparent that states B and E are equivalent. Removing E and replacing E's by B's results in the reduced table shown
in Table 8.
The removal of equivalent states has reduced the number of states in the circuit from six to four. Two states are
considered to be equivalent if and only if for every input sequence the circuit produces the same output sequence
irrespective of which one of the two states is the starting state.
Example 1.3
From the state diagram, we can generate the state table shown in Table 9. Note that there is no output section for this
circuit. Two flip-flops are needed to represent the four states and are designated Q0Q1. The input variable is labelled x.
We shall now derive the excitation table and the combinational structure. The table is now arranged in a different form, shown in Table 11, where the present state and input variables are arranged in the form of a truth table. Remember, the excitation table for the JK flip-flop was derived in Table 1; it is repeated here as Table 10.
Q →Q(next) JK
0→0 0X
0 →1 1X
1→0 X1
1 →1 X0
Q0 Q1  x | Q0(next) Q1(next) | J0 K0 J1 K1
0 0 0 0 0 0 X 0 X
0 0 1 0 1 0 X 1 X
0 1 0 1 0 1 X X 1
0 1 1 0 1 0 X X 0
1 0 0 1 0 X 0 0 X
1 0 1 1 1 X 0 1 X
1 1 0 1 1 X 0 X 0
1 1 1 0 0 X 1 X 1
In the first row of Table 11, we have a transition for flip-flop Q0 from 0 in the present state to 0 in the next state. In
Table 10 we find that a transition of states from 0 to 0 requires that input J = 0 and input K = X. So 0 and X are copied
in the first row under J0 and K0 respectively. Since the first row also shows a transition for the flip-flop Q1 from 0 in
the present state to 0 in the next state, 0 and X are copied in the first row under J1 and K1. This process is continued for
each row of the table and for each flip-flop, with the input conditions as specified in Table 10.
J0 = Q1*x′     K0 = Q1*x
J1 = x         K1 = Q0′*x′ + Q0*x   (the XNOR of Q0 and x)
Example 1.4 Design a sequential circuit whose state tables are specified in Table 12, using D flip-flops.
                    Next state            Output Z
Present state     x = 0      x = 1      x = 0   x = 1
   Q0 Q1          Q0 Q1      Q0 Q1
   0  0            0  0       0  1        0       0
   0  1            0  0       1  0        0       0
   1  0            1  1       1  0        0       0
   1  1            0  0       0  1        0       1
Q → Q(next) | D
0 → 0       | 0
0 → 1       | 1
1 → 0       | 0
1 → 1       | 1
Next step is to derive the excitation table for the design circuit, which is shown in Table 14. The output of the circuit is
labelled Z.
Now plot the flip-flop inputs and output functions on the Karnaugh map to derive the Boolean expressions, which is
shown in Figure 16.
D0 = Q0*Q1′ + Q0′*Q1*x
D1 = Q0′*Q1′*x + Q0*Q1*x + Q0*Q1′*x′
Z = Q0*Q1*x
When the ld input is 0, the outputs are unaffected by any clock transition. When the ld input is 1, the x inputs are stored
in the register at the next clock transition, making the y outputs into copies of the x inputs before the clock transition.
We can explain this behavior more formally with a state table. As an example, let us take a register with n = 4. The left
side of the state table contains 9 columns, labeled x0, x1, x2, x3, ld, y0, y1, y2, and y3. This means that the state table
has 512 rows. We will therefore abbreviate it. Here it is:
ld  x3 x2 x1 x0   y3 y2 y1 y0 | y3 y2 y1 y0 (after the clock transition)
 0  -- -- -- --   c3 c2 c1 c0 | c3 c2 c1 c0
 1  c3 c2 c1 c0   -- -- -- -- | c3 c2 c1 c0
As you can see, when ld is 0 (the top half of the table), the right side of the table is a copy of the values of the old
outputs, independently of the inputs. When ld is 1, the right side of the table is instead a copy of the values of the inputs,
independently of the old values of the outputs.
Registers play an important role in computers. Some of them are visible to the programmer, and are used to hold
variable values for later use. Some of them are hidden to the programmer, and are used to hold values that are internal to
the central processing unit, but nevertheless important.
Shift registers
Shift registers are a type of sequential logic circuit, mainly for storage of digital data. They are a group of flip-flops
connected in a chain so that the output from one flip-flop becomes the input of the next flip-flop. Most of the registers
possess no characteristic internal sequence of states. All the flip-flops are driven by a common clock, and all are set or
reset simultaneously.
In this section, the basic types of shift registers are studied, such as Serial In - Serial Out, Serial In - Parallel Out,
Parallel In - Serial Out, Parallel In - Parallel Out, and bidirectional shift registers. A special form of counter - the shift
register counter, is also introduced.
A basic four-bit shift register can be constructed using four D flip-flops, as shown below. The operation of the circuit is
as follows. The register is first cleared, forcing all four outputs to zero. The input data is then applied sequentially to
the D input of the first flip-flop on the left (FF0). During each clock pulse, one bit is transmitted from left to right.
Assume a data word to be 1001. The least significant bit of the data has to be shifted through the register from FF0 to
FF3.
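The shifting behaviour just described can be sketched in Python as follows (a behavioural illustration, not a gate-level model).

def shift_right(register, data_in):
    """One clock pulse: every flip-flop takes the value of its left neighbour,
    and the new data bit enters at FF0 (the leftmost position)."""
    return [data_in] + register[:-1]          # [FF0, FF1, FF2, FF3]

reg = [0, 0, 0, 0]                            # register is first cleared
for bit in [1, 0, 0, 1]:                      # shift in the word 1001, LSB first
    reg = shift_right(reg, bit)
    print(reg)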
To avoid the loss of data, an arrangement for a non-destructive reading can be done by adding two AND gates, an OR
gate and an inverter to the system. The construction of this circuit is shown below.
The data is loaded to the register when the control line is HIGH (ie WRITE). The data can be shifted out of the register
when the control line is LOW (ie READ)
For this kind of register, data bits are entered serially in the same manner as discussed in the last section. The difference
is the way in which the data bits are taken out of the register. Once the data are stored, each bit appears on its respective
output line, and all bits are available simultaneously. A construction of a four-bit serial in - parallel out register is
shown below.
A four-bit parallel in - serial out shift register is shown below. The circuit uses D flip-flops and NAND gates for
entering data (ie writing) to the register.
For parallel in - parallel out shift registers, all data bits appear on the parallel outputs immediately following the
simultaneous entry of the data bits. The following circuit is a four-bit parallel in - parallel out shift register constructed
by D flip-flops.
The D's are the parallel inputs and the Q's are the parallel outputs. Once the register is clocked, all the data at the D
inputs appear at the corresponding Q outputs simultaneously.
The registers discussed so far involved only right shift operations. Each right shift operation has the effect of
successively dividing the binary number by two. If the operation is reversed (left shift), this has the effect of
multiplying the number by two. With suitable gating arrangement a serial shift register can perform both operations.
A bidirectional, or reversible, shift register is one in which the data can be shifted either left or right. A four-bit
bidirectional shift register using D flip-flops is shown below.
Here a set of NAND gates are configured as OR gates to select data inputs from the right or left adjacent bistables, as
selected by the LEFT/RIGHT control line.
Ring Counters
A ring counter is basically a circulating shift register in which the output of the most significant stage is fed back to the
input of the least significant stage. The following is a 4-bit ring counter constructed from D flip-flops. The output of
each stage is shifted into the next stage on the positive edge of a clock pulse. If the CLEAR signal is high, all the flip-
flops except the first one FF0 are reset to 0. FF0 is preset to 1 instead.
Since the count sequence has 4 distinct states, the counter can be considered as a mod-4 counter. Only 4 of the
maximum 16 states are used, making ring counters very inefficient in terms of state usage. But the major advantage of a
ring counter over a binary counter is that it is self-decoding. No extra decoding circuit is needed to determine what state
the counter is in.
Johnson Counters
Johnson counters are a variation of standard ring counters, with the inverted output of the last stage fed back to the input
of the first stage. They are also known as twisted ring counters. An n-stage Johnson counter yields a count sequence of
length 2n, so it may be considered to be a mod-2n counter. The circuit above shows a 4-bit Johnson counter. The state
sequence for the counter is given in the table
Counters
A sequential circuit that goes through a prescribed sequence of states upon the application of input pulses is called a
counter. The input pulses, called count pulses, may be clock pulses. In a counter, the sequence of states may follow a
binary count or any other sequence of states. Counters are found in almost all equipment containing digital logic. They
are used for counting the number of occurrences of an event and are useful for generating timing sequences to control
operations in a digital system.
A counter is a sequential circuit with 0 inputs and n outputs. Thus, the value after the clock transition depends only on
old values of the outputs. For a counter, the values of the outputs are interpreted as a sequence of binary digits (see the
section on binary arithmetic).
We shall call the outputs o0, o1, ..., o(n-1). The value of the outputs for the counter after a clock transition is a binary
number which is one plus the binary number of the outputs before the clock transition.
We can explain this behavior more formally with a state table. As an example, let us take a counter with n = 4. The left
side of the state table contains 4 columns, labeled o0, o1, o2, and o3. This means that the state table has 16 rows. Here it
is in full:
0 0 0 0| 0 0 0 1
0 0 0 1| 0 0 1 0
0 0 1 0| 0 0 1 1
0 0 1 1| 0 1 0 0
0 1 0 0| 0 1 0 1
0 1 0 1| 0 1 1 0
0 1 1 0| 0 1 1 1
0 1 1 1| 1 0 0 0
1 0 0 0| 1 0 0 1
1 0 0 1| 1 0 1 0
1 0 1 0| 1 0 1 1
1 0 1 1| 1 1 0 0
1 1 0 0| 1 1 0 1
1 1 0 1| 1 1 1 0
1 1 1 0| 1 1 1 1
1 1 1 1| 0 0 0 0
As you can see, the right hand side of the table is always one plus the value of the left hand side of the table, except for
the last line, where the value is 0 for all the outputs. We say that the counter wraps around.
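For illustration, the following Python sketch generates the same state table by adding one to the present count and wrapping around after 1111.

def next_count(bits):
    """bits: [o3, o2, o1, o0] (most significant first, an assumption for this sketch);
    returns the state after one clock transition."""
    value = int("".join(str(b) for b in bits), 2)
    value = (value + 1) % 16                  # wrap around after 1111
    return [int(c) for c in format(value, "04b")]

state = [0, 0, 0, 0]
for _ in range(17):                           # one full cycle plus the wrap-around
    print(state)
    state = next_count(state)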
Design of Counters
Example 1.5 A counter is first described by a state diagram, which shows the sequence of states through which the
counter advances when it is clocked. Figure 18 shows a state diagram of a 3-bit binary counter.
The circuit has no inputs other than the clock pulse and no outputs other than its internal state (outputs are taken off each
flip-flop in the counter). The next state of the counter depends entirely on its present state, and the state transition occurs
every time the clock pulse occurs. Figure 19 shows the sequences of count after each clock pulse.
Since there are eight states, the number of flip-flops required would be three. Now we want to implement the counter
design using JK flip-flops.
Next step is to develop an excitation table from the state table, which is shown in Table 16.
Now transfer the JK states of the flip-flop inputs from the excitation table to Karnaugh maps to derive a simplified
Boolean expression for each flip-flop input. This is shown in Figure 20.
The 1s in the Karnaugh maps of Figure 20 are grouped with "don't cares" and the following expressions for the J and K
inputs of each flip-flop are obtained:
J0 = K0 = 1
J1 = K1 = Q0
J2 = K2 = Q1*Q0
The final step is to implement the combinational logic from the equations and connect the flip-flops to form the
sequential circuit. The complete logic of a 3-bit binary counter is shown in Figure 21.
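As a quick check of the derived input equations (a sketch written for these notes, not the circuit of Figure 21 itself), the following Python fragment steps a behavioural model of three JK flip-flops with J0 = K0 = 1, J1 = K1 = Q0 and J2 = K2 = Q1·Q0 and prints the resulting count sequence.

def jk_next(q, j, k):
    """Next state of a JK flip-flop: hold, reset, set or toggle."""
    if (j, k) == (0, 0): return q
    if (j, k) == (0, 1): return 0
    if (j, k) == (1, 0): return 1
    return 1 - q                      # (1, 1): toggle

q2, q1, q0 = 0, 0, 0
for _ in range(9):                    # 000 ... 111 and back to 000
    print(q2, q1, q0)
    j0 = k0 = 1
    j1 = k1 = q0
    j2 = k2 = q1 & q0
    q0, q1, q2 = jk_next(q0, j0, k0), jk_next(q1, j1, k1), jk_next(q2, j2, k2)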
Example 1.6 Design a counter specified by the state diagram in Example 1.5 using T flip-flops. The state diagram is
shown here again in Figure 22.
Now derive the excitation table from the state table, which is shown in Table 17.
The next step is to transfer the flip-flop input functions to Karnaugh maps to derive simplified Boolean expressions, which
are shown in Figure 23.
Figure 23. Karnaugh maps
T0 = 1; T1 = Q0; T2 = Q1*Q0
Exercises
1. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.1. Draw the timing diagram of the circuit.
Figure 1.1
2. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.2.
Figure 1.2
3. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.3.
4. Derive the state/output table and the state diagram for the sequential circuit shown in Figure 1.4.
Figure 1.4
5. A sequential circuit uses two D flip-flops as memory elements. The behaviour of the circuit is described by the
following equations:
D1 = Q1 + x′*Q2
D2 = x*Q1′ + x′*Q2
Z = x′*Q1*Q2 + x*Q1′*Q2′
Derive the state table and draw the state diagram of the circuit.
8. Design a mod-5 counter which has the following binary sequence: 0, 1, 2, 3, 4. Use JK flip-flops.
9. Design a counter that has the following repeated binary sequence: 0, 1, 2, 3, 4, 5, 6, 7. Use RS flip-flops.
10. Design a counter with the following binary sequence: 1, 2, 5, 7 and repeat. Use JK flip-flops.
11. Design a counter with the following repeated binary sequence: 0, 4, 2, 1, 6. Use T flip-flops.
13. The content of a 5-bit serial-in, parallel-out shift register with rotation capability is initially 11001. The register is
shifted four times to the right. What are the content and the output of the register after each shift?
With tri-state logic circuits, this is no longer true. As their names indicate, they manipulate signals that can be in one of
three states, as opposed to only 0 or 1. While this may sound confusing at first, the idea is relatively simple.
Consider a fairly common case in which there are a number of source circuits S1, S2, etc. in different parts of a chip (i.e.,
they are not physically close together). At different times, exactly one of these circuits will generate some binary value that is to
be distributed to some set of destination circuits D1, D2, etc., also in different parts of the chip. At any point in time,
exactly one source circuit can generate a value, and the value is always to be distributed to all the destination circuits.
Obviously, we have to have some signals that select which source circuit is to generate information. Assume for the
moment that we have signals s1, s2, etc for exactly that purpose. One solution to this problem is indicated in this figure:
As you can see, this solution requires that all outputs are routed to a central place. Often such solutions are impractical
or costly. Since only one of the sources is "active" at one point, we ought to be able to use a solution like this:
A tri-state circuit (combinatorial or sequential) is like an ordinary circuit, except that it has an additional input that we
shall call enable. When the enable input is 1, the circuit behaves exactly like the corresponding normal (not tri-state)
circuit. When the enable input is 0, the outputs are completely disconnected from the rest of the circuit. It is as if we
had taken an ordinary circuit and added a switch on every output, such that the switch is open when enable is 0 and
closed when enable is 1, like this:
which is pretty close to the truth. The switch is just another transistor that can be added at a very small cost.
Any circuit can exist in a tri-state version. However, as a special case, we can convert any ordinary circuit to a tri-state
circuit, by using a special tri-state combinatorial circuit that simply copies its inputs to the outputs, but that also has an
enable input. We call such a circuit a bus driver for reasons that will become evident when we discuss buses. A bus
driver with one input is drawn like this:
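The drawing referred to above is not reproduced here. As a behavioural illustration only (Python written for these notes; the name bus_value and the (enable, data) pairs are invented), the sketch below models several tri-state drivers sharing one bus line: a disabled driver contributes nothing, and exactly one driver should be enabled at a time.

def bus_value(drivers):
    """drivers is a list of (enable, data) pairs; returns the value seen on the bus."""
    active = [data for enable, data in drivers if enable]
    if len(active) == 0:
        return None                  # bus floating: no driver enabled
    if len(active) > 1:
        raise ValueError("bus contention: more than one driver enabled")
    return active[0]

# Source S1 drives a 1 onto the bus; S2 and S3 are disabled (high impedance).
print(bus_value([(1, 1), (0, 0), (0, 1)]))   # -> 1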
In general, a memory has m inputs, called the address inputs, that are used to select exactly one out of 2^m words,
each one consisting of n bits.
Furthermore, it has n bidirectional connectors called the data lines. These data lines are used both as
inputs in order to store information in a word selected by the address inputs, and as outputs in order to recall a
previously stored value. Such a solution reduces the number of required connectors by a factor of two.
Finally, it has an input called enable (see the section on tri-state logic for an explanation) that controls whether the data
lines have defined states or not, and an input called r/w that determines the direction of the data lines.
A memory with an arbitrary value of m and an arbitrary value of n can be built from memories with smaller values of
these parameters. To show how this can be done, we first show how a one-bit memory (one with m = 0 and n = 1) can
be built. Here is the circuit:
The central part of the circuit is an SR-latch that holds one bit of information. When enable is 0, the output d0 is isolated
both from the inputs to and the output from the SR-latch. Information is passed from d0 to the inputs of the latch when
enable is 1 and r/w is 1 (indicating write). Information is passed from the output x to d0 when enable is 1 and r/w is 0
(indicating read).
Now that we know how to make a one-bit memory, we must figure out how to make larger memories. First, suppose we
have n memories of 2^m words, each one consisting of a single bit. We can easily convert these to a single memory with
2^m words, each one consisting of n bits. Here is how we do it:
Next, we have to figure out how to make a memory with more words. To show that, we assume that we have two
memories each with m address inputs and n data lines. We show how we can connect them so as to obtain a single
memory with m + 1 address inputs and n data lines. Here is the circuit:
As you can see, the additional address line is combined with the enable input to select one of the two smaller memories.
Only one of them will be connected to the data lines at a time (because of the way tri-state logic works).
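A behavioural sketch of this composition is given below (Python, written for these notes; the class and function names are invented and no gate-level detail is modelled). The extra, most significant address bit, combined with enable, selects which of the two smaller memories drives the shared data lines.

M = 2                                    # address bits of each small memory

class SmallMemory:
    def __init__(self):
        self.words = [0] * (2 ** M)

    def access(self, enable, rw, address, data=None):
        if not enable:
            return None                  # data lines left undriven (tri-state)
        if rw == 1:                      # write
            self.words[address] = data
            return None
        return self.words[address]       # read

low, high = SmallMemory(), SmallMemory()

def big_access(enable, rw, address, data=None):
    """Combine two 2**M-word memories into one memory of 2**(M+1) words."""
    msb = (address >> M) & 1             # the extra address line
    rest = address & (2 ** M - 1)
    out_low = low.access(enable and msb == 0, rw, rest, data)
    out_high = high.access(enable and msb == 1, rw, rest, data)
    return out_low if out_low is not None else out_high

big_access(True, 1, 5, 0b1010)           # write word 5
print(big_access(True, 0, 5))            # read it back -> 10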
Since the contents cannot be altered, we don't have a r/w signal. Except for the enable signal, a ROM is thus like an
ordinary combinatorial circuit with m inputs and n outputs.
ROMs are usually programmable. They are often sold with contents of all 0s or all 1s. The user can then place the device in a
special programming machine and fill it with the desired contents, i.e. the ROM can be programmed. In that case, we sometimes call it
a PROM (programmable ROM).
Some varieties of PROMS can be erased and re-programmed. The way they are erased is typically with ultra-violet
light. When the PROM can be erased, we sometimes call it EPROM (erasable PROM).
The advantage of using a ROM in this way is that any conceivable function of the m inputs can be made to appear at any
of the n outputs, making this the most general-purpose combinatorial logic device available. Also, PROMs
(programmable ROMs), EPROMs (ultraviolet-erasable PROMs) and EEPROMs (electrically erasable PROMs) are
available that can be programmed using a standard PROM programmer without requiring specialised hardware or
software. However, there are several disadvantages:
• they cannot necessarily provide safe "covers" for asynchronous logic transitions so the PROM's outputs may
glitch as the inputs switch,
• Because only a small fraction of their capacity is used in any one application, they often make an inefficient use
of space.
Since most ROMs do not have input or output registers, they cannot be used stand-alone for sequential logic. An
external TTL register was often used for sequential designs such as state machines.
MROM (Masked ROM)
The very first ROMs were hard-wired devices that contained a pre-programmed set of data or instructions. These
kinds of ROMs are known as masked ROMs; they are the least expensive type of ROM.
PROM (Programmable Read-Only Memory)
A PROM is a read-only memory that can be modified only once by the user. The user buys a blank PROM and enters
the desired contents using a PROM programmer. Inside the PROM chip there are small fuses which are burnt
open during programming. It can be programmed only once and is not erasable.
EPROM (Erasable and Programmable Read-Only Memory)
An EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40 minutes; an
EPROM eraser is usually used for this purpose. During programming an electrical charge is trapped in an insulated gate
region. The charge is retained for more than ten years because it has no leakage path. To erase this
charge, ultra-violet light is passed through a quartz crystal window in the lid, and the exposure
dissipates the charge. During normal use the quartz lid is sealed with a sticker.
EEPROM (Electrically Erasable and Programmable Read-Only Memory)
Answers:
1. c  2. b  3. c  4. b  5. a  6. a  7. d  8. d  9. b  10. c
Answers:
1. c  2. b  3. a  4. a  5. d  6. b  7. c  8. d  9. d  10. d
1. The associative law for addition is normally written as
a. A + B = B + A
b. (A + B) + C = A + (B + C)
c. AB = BA
d. A + AB = A
2. The Boolean equation AB + AC = A(B + C) illustrates
a. the distributive law
b. the commutative law
c. the associative law
d. De Morgan's theorem
3. The Boolean expression A . 1 is equal to
a. A b. B
10. In VHDL code, the two main parts are called the
a. I/O and the module b. entity and the architecture
CHARGE
The most basic quantity in an electric circuit is the electric charge. We all experience the effect of
electric charge when we try to remove our wool sweater and have it stick to our body or walk across a
carpet and receive a shock.
Charge is an electrical property of the atomic particles of which matter consists, measured in
coulombs (C). Charge, positive or negative, is denoted by the letter q or Q.
We also know that the charge ‘e’ on an electron is negative and equal in magnitude to 1.602 × 10⁻¹⁹ C,
while a proton carries a positive charge of the same magnitude as the electron and the neutron has no
charge. The presence of equal numbers of protons and electrons leaves an atom neutrally charged.
Coulomb’s Law
Charles Coulomb, a French scientist, observed that when two charges are placed near each
other, they experience a force. He performed a number of experiments to study the nature and
magnitude of the force between the charged bodies. He summed up his conclusions into two laws,
known as Coulomb’s laws.
First law. This law relates to the nature of force between two charged bodies and may be stated
as under :
Like charges repel each other while unlike charges attract each other.
In other words, if two charges are of the same nature (i.e. both positive or both negative), the
force between them is repulsion. On the other hand, if one charge is positive and the other negative,
the force between them is an attraction.
Second law. This law gives the magnitude of the force between two charged bodies and may
be stated as under:
The force between two point charges is directly proportional to the product of their magnitudes
and inversely proportional to the square of the distance between their centres, i.e.
F = K q1 q2 / d²
where K is a constant whose value depends upon the medium in which the charges are placed.
One coulomb is that charge which, when placed in air at a distance of one metre from an equal and similar charge,
repels it with a force of 9 × 10⁹ N.
Electric Field
Electric field can be considered as an electric property associated with each point in the space where a
charge is present in any form. An electric field is also described as the electric force per unit charge.
The formula for the electric field is given as
E = F / Q
where F is the force experienced by the charge and Q is the magnitude of the charge.
Voltage (or potential difference) is the energy required to move a unit charge from one point to another,
measured in volts (V). Voltage is denoted by the letter v or V.
Fig. 1.2 Two common types of current: (a) direct current (DC), (b) alternating current (AC)
ENERGY
Energy is the capacity to do work, and is measured in joules (J). The energy absorbed or
supplied by an element from time 0 to t is given by
w = ∫₀ᵗ p dt
POWER
Power is the time rate of expending or absorbing energy, measured in watts (W). Power is
denoted by the letter p or P.
Mathematically,
p = dw/dt = v i
Thus, if the magnitudes of the current I and the voltage V are given, then the power can be evaluated as the
product of the two quantities and is measured in watts (W).
Sign of power:
Plus sign: Power is absorbed by the element. (Resistor, Inductor)
Minus sign: Power is supplied by the element. (Battery, Generator)
Example 1
An electric heater consumes 1.8 MJ when connected to a 250 V supply for 30 minutes. Find the power
rating of the heater and the current taken from the supply.
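A brief worked solution, using only the values stated above: the energy is W = 1.8 MJ = 1.8 × 10⁶ J and the time is t = 30 min = 1800 s, so the power rating is P = W/t = 1.8 × 10⁶ / 1800 = 1000 W = 1 kW. The current taken from the supply is then I = P/V = 1000/250 = 4 A.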
If the magnet is moved away from the coil, the galvanometer again shows deflection but in the
opposite direction. In either case, the deflection will persist so long as the magnet is in motion.
The production of e.m.f. and hence current in the coil C is due to the fact that when the magnet is
in motion (towards or away from the coil), the amount of flux linking the coil changes—the
basic requirement for inducing e.m.f. in the coil. If the movement of the magnet is stopped,
though the flux is linking the coil, there is no change in flux and hence no e.m.f. is induced in
the coil. Consequently, the deflection of the galvanometer reduces to zero.
Faraday's 1st law:
Faraday's first law of electromagnetic induction states that whenever a conductor is placed
in a varying magnetic field, an e.m.f. (called the induced e.m.f.) is induced in it; if the conductor circuit is
closed, a current (called the induced current) is also induced.
Faraday's 2nd law:
Faraday's second law of electromagnetic induction states that the induced e.m.f. is equal to the rate of change
of flux linkages (the product of the number of turns N of the coil and the flux associated with it).
Let the initial flux linkages be NΦ1 and the final flux linkages be NΦ2, so that the change in flux linkages is N(Φ2 − Φ1).
If this change occurs in time t seconds, the induced e.m.f. is
e = N(Φ2 − Φ1)/t volts, or, in differential form, e = −N dΦ/dt volts.
The negative sign indicates that the induced e.m.f. acts in accordance with Lenz's law.
(A)Self-inductance (L)
The property of a coil that opposes any change in the amount of current flowing through it is
called its self-inductance or inductance.
(B) Mutual inductance:
The property of one coil by virtue of which it opposes any change in the current flowing through a neighbouring coil (by inducing an e.m.f. in it) is called mutual inductance.
RESISTOR
Materials in general have a characteristic behavior of resisting the flow of electric charge.
This physical property, or ability to resist the flow of current, is known as resistance and is
represented by the symbol R. The Resistance is measured in ohms (Ω). The circuit element
used to model the current- resisting behavior of a material is called the resistor.
The resistance of a resistor depends on the material of which the conductor is made and
geometrical shape of the conductor. The resistance of a conductor is proportional to its length
(𝑙) and inversely proportional to its cross sectional area (A). Therefore the resistance of a
conductor can be written as,
R = ρl / A
The proportionality constant ρ is called the specific resistance or resistivity of the conductor
and its value depends on the material of which the conductor is made.
The inverse of the resistance is called the conductance and inverse of resistivity is called
specific conductance or conductivity. The symbol used to represent conductance is G and
conductivity is σ. Thus conductivity σ = 1/ρ and its units are siemens per metre.
Example 2: In the circuit shown in Fig. below, calculate the current i, the conductance G, the
power p and the energy W lost in the resistor in 2 hours.
Solution:
The voltage across the resistor is the same as the source voltage (30 V) because the resistor and the
voltage source are connected to the same pair of terminals. Hence, the current is
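The rest of the printed solution depends on the resistance value shown in the missing figure. Purely as an illustration of the calculation (Python written for these notes), the sketch below assumes a resistance of R = 5 kΩ, an assumed value not taken from the original circuit, together with the 30 V stated above.

v = 30.0          # source voltage in volts (given in the text)
R = 5_000.0       # assumed resistance in ohms (hypothetical, stands in for the missing figure)
t = 2 * 3600      # 2 hours expressed in seconds

i = v / R                 # current, by Ohm's law
G = 1 / R                 # conductance in siemens
p = v * i                 # power dissipated in watts
W = p * t                 # energy lost in joules over 2 hours

print(i, G, p, W)         # 0.006 A, 0.0002 S, 0.18 W, 1296 J for the assumed R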
INDUCTOR
A change in the magnitude of the current changes the electromagnetic field. Increase in current
expands the fields, and decrease in current reduces it.
Therefore, a change in current produces a change in the electromagnetic field, which induces a voltage
across the coil according to Faraday's law of electromagnetic induction; i.e., the voltage across the inductor
is directly proportional to the time rate of change of current:
v = L di/dt
where L is the constant of proportionality, called the inductance of the inductor. The unit of inductance
is the henry (H). We can rewrite the above equation in integral form as i = (1/L) ∫ v dt.
CAPACITOR
(a) Typical Capacitor, (b) Capacitor connected to a voltage source, (c) Circuit Symbol of capacitor
Any two conducting surfaces separated by an insulating medium exhibit the property of a capacitor.
The conducting surfaces are called electrodes, and the insulating medium is called dielectric. A
capacitor stores energy in the form of an electric field that is established by the opposite charges on
the two electrodes. The electric field is represented by lines of force between the positive and negative
charges, and is concentrated within the dielectric.
The amount of charge stored, represented by q, is directly proportional to the applied voltage v, so
that
𝑞 = 𝐶𝑣
Where C, the constant of proportionality, is known as the capacitance of the capacitor. The unit of
capacitance is the farad (F).
Although the capacitance C of a capacitor is the ratio of the charge q per plate to the applied voltage v, it
does not depend on q or v; it depends on the physical dimensions of the capacitor.
The current flowing through the capacitor is given by
i = C dv/dt
Source Conversion
An electrical source transformation (or just ”source transformation”) is a method for simplifying
circuits by replacing a voltage source with its equivalent current source, or a current source with
its equivalent voltage source. Source transformations are implemented using Thévenin’s theorem
and Norton’s theorem.
Chapter – II
Circuit Analysis
KIRCHHOFF'S LAWS
The most common and useful set of laws for solving electric circuits are the Kirchhoff’s voltage and
current laws. Several other useful relationships can be derived based on these laws. These laws are
formally known as Kirchhoff’s current law (KCL) and Kirchhoff’s voltage law (KVL).
Kirchhoff's current law (KCL) states that the algebraic sum of the currents entering a node (or a closed boundary) is zero:
∑ iₙ = 0, the sum being taken over n = 1, 2, ..., N
Where N is the number of branches connected to the node and i𝑛 is the nth current entering (or
leaving) the node. By this law, currents entering a node may be regarded as positive, while currents
leaving the node may be taken as negative or vice versa.
Alternate Statement: Sum of the currents flowing towards a junction is equal to the sum of the
currents flowing away from the junction.
This is also called Kirchhoff's second law, or Kirchhoff's loop or mesh law. Kirchhoff's second law
is based on the principle of conservation of energy.
Statement: Algebraic sum of all the voltages around a closed path or closed loop at any instant is
zero. Algebraic sum of the voltages means the magnitude and direction of the voltages; care should be
taken in assigning proper signs or polarities for voltages in different sections of the circuit.
The polarity of the voltages across active elements is fixed on its terminals. The polarity of the
voltage drop across the passive elements (Resistance in DC circuits) should be assigned with
reference to the direction of the current through the elements with the concept that the current flows
from a higher potential to lower potential. Hence, the entry point of the current through the passive
elements should be marked as the positive polarity of voltage drop across the element and the exit
point of the current as the negative polarity. The direction of currents in different branches of the
circuits is initially marked either with the known direction or assumed direction.
After assigning the polarities for the voltage drops across the different passive elements, algebraic
sum is accounted around a closed loop, either clockwise or anticlockwise, by assigning a particular
sign, say the positive sign for all rising potentials along the path of tracing and the negative sign for
all decreasing potentials. For example consider the circuit shown in Fig. 1.17
The circuit has three active elements with voltages E1, E2 and E3. The polarity of each of them is
fixed. R1, R2, R3 are three passive elements present in the circuit. Currents I1 and I3 are marked
flowing into the junction A and current I2 marked away from the junction A with known information
or assumed directions. With reference to the direction of these currents, the polarity of voltage drops
V1, V2 and V3 are marked.
Delta to Star
Star to Delta
Mesh (loop)
Circuit Terminology
Node - A point where two or more branches meet
Essential node - A node where three or more branches combine
Path - A trace of the adjacent circuit elements, where no element is included more than once.
Branch - A path that connects two nodes, and contains a single element such as voltage source or resistor
Essential Branch - Path that connects two nodes without passing through an essential node
Loop - A closed path in a circuit
Mesh - A loop that does not contain any other loops.
Mesh (loop) Analysis
Mesh analysis provides a general procedure for analyzing circuits, using mesh currents as the circuit variables. Using
mesh currents instead of element currents as circuit variables is convenient and reduces the number of equations that
must be solved simultaneously.
A loop is a closed path with no node passed more than once. A mesh is a loop that does not contain any other loop
within it. Mesh analysis applies KVL to find unknown currents.
Steps to determine mesh currents
1. Assign mesh currents i1, i2. . . in to the n meshes.
2. Apply KVL to each of the n meshes. Use Ohm’s law to express the voltages in terms of the mesh currents.
3. Solve the resulting n simultaneous equations to get the mesh currents.
Explanation by a simple Circuit
Mesh ABDA.
– I1R1 – (I1 – I2) R2 + E1 = 0
or
I1 (R1 + R2) – I2R2 = E1 ……………………… ……(i)
Mesh BCDB.
– I2R3 – E2 – (I2 – I1) R2 = 0
or
– I1R2 + (R2 + R3) I2 = – E2 ………………………………. (ii)
Solving eq. (i) and eq. (ii) simultaneously, mesh currents I1 and I2 can be found out. Once the mesh
currents are known, the branch currents can be readily obtained. The advantage of this method is that it
usually reduces the number of equations to solve a network problem.
Example 1: [Figure: a three-mesh circuit containing a 40 V source, an 8 Ω resistor (with voltage Vo across it), a 6 Ω resistor, a 20 V source, and mesh currents i1, i2 and i3]
Solution:
We have 3 meshes (loops). Applying KVL around each of the three loops, and using Ohm's law to express the voltages in
terms of the mesh currents, gives three simultaneous equations in i1, i2 and i3 (the coefficients come from the element
values in the figure). These equations can be solved directly, or written in matrix form as a 3 × 3 coefficient matrix
multiplying the vector of mesh currents equal to the vector of source voltages, and solved by elimination or by determinants.
Example 2: Calculate the current in each branch of the circuit shown below
Solution. Assign mesh currents I1, I2 and I3 to meshes ABHGA, HEFGH and BCDEHB
respectively as shown below
By determinant method
14I1 – 5I2 – 3I3 = 8
–5I1 + 10I2 – 4I3 = 7
3I1 + 4I2 – 9I3 = 0
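These three equations can be solved by determinants as stated; a numerical alternative is sketched below in Python, feeding the coefficients exactly as printed to a linear solver (if any sign in the transcription differs from the original figure, the resulting currents will differ accordingly).

import numpy as np

# Coefficient matrix and right-hand side taken directly from the three mesh
# equations printed above (signs as transcribed).
A = np.array([[14.0, -5.0, -3.0],
              [-5.0, 10.0, -4.0],
              [ 3.0,  4.0, -9.0]])
b = np.array([8.0, 7.0, 0.0])

I1, I2, I3 = np.linalg.solve(A, b)
print(I1, I2, I3)   # mesh currents in amperes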
NODAL ANALYSIS:
First we find the number of KCL equations needed (these are used to find the nodal voltages): if the circuit has
N nodes, then N − 1 equations are required.
Then we write the KCL equations for the nodes and solve them to find the respective nodal
voltages.
Once we have these nodal voltages, we can use them to further analyze the circuit.
Super node: two nodes with an independent voltage source connected between them form a super node,
and one constraint (KVL) equation is written for it.
Example Find the Nodal Voltages in the below circuit?
Thevenin’s Theorem and Norton’s theorem are two important theorems in solving
Network problems having many active and passive elements. Using these theorems the networks
can be reduced to simple equivalent circuits with one active source and one element. In circuit
analysis many a times the current through a branch is required to be found when it’s value is
changed with all other element values remaining same. In such cases finding out every time the
branch current using the conventional mesh and node analysis methods is quite awkward and
time consuming. But with the simple equivalent circuits (with one active source and one
element) obtained using these two theorems the calculations become very simple. Thevenin’s
and Norton’s theorems are dual theorems.
(a) (b)
Figure (a) shows a simple block representation of a network with several active / passive
elements with the load resistance RL connected across the terminals ‘a & b’, and figure (b) shows
the Thevenin equivalent circuit, with VTh in series with RTh connected across RL.
Main steps to find out VTh and RTh :
1. The terminals of the branch/element through which the current is to be found out are
marked as say a & b after removing the concerned branch/element.
2. Open circuit voltage VOC across these two terminals is found out using the conventional
network mesh/node analysis methods and this would be VTh .
3. The Thevenin resistance RTh is found by a method that depends on whether the
network contains dependent sources or not.
a. With dependent sources: RTh = Voc / Isc.
b. Without dependent sources: deactivate all independent sources (short-circuit ideal voltage sources and
open-circuit ideal current sources) and find the resistance looking into the open load terminals; this is RTh.
4. Replace the network with VTh in series with RTh and the concerned branch resistance (or)
load resistance across the load terminals(A&B) as shown in below fig.
Example: Find VTH, RTH and the load current and load voltage flowing through RL resistor
as shown in fig. by using Thevenin’s Theorem?
Fig.(a)
Solution:
The resistance RL is removed and the terminals of the resistance RL are marked as A & B as
shown in the fig. (1)
Fig.(1)
Calculate/measure the open-circuit voltage; this is the Thevenin voltage (VTH). We have
already removed the load resistor from fig.(a), so the circuit becomes an open circuit as shown in
fig (1). Now we have to calculate the Thevenin voltage. Because the 8 kΩ branch is open, no current flows
through it, so the 12 kΩ and 4 kΩ resistors form a series circuit and the same 3 mA flows in both.
Hence 12 V (3 mA × 4 kΩ) appears across the 4 kΩ resistor, and since there is no drop across the open
8 kΩ resistor, the same 12 V appears across the terminals A-B. So, VTH = 12 V.
Fig(2)
All voltage & current sources replaced by their internal impedances (i.e. ideal voltage sources
short circuited and ideal current sources open circuited) as shown in fig.(3)
Fig(3)
Calculate/measure the open-circuit resistance; this is the Thevenin resistance (RTH). Reducing the
48 V DC source to zero is equivalent to replacing it with a short circuit, as shown in
figure (3). We can see that the 8 kΩ resistor is in series with the parallel connection of the 4 kΩ and
12 kΩ resistors, i.e.:
8kΩ + (4k Ω || 12kΩ) ….. (|| = in parallel with)
RTH = 8kΩ + [(4kΩ x 12kΩ) / (4kΩ + 12kΩ)]
RTH = 8kΩ + 3kΩ
RTH = 11kΩ
Fig(4)
Connect the RTH in series with Voltage Source VTH and re-connect the load resistor across the
load terminals(A&B) as shown in fig (5) i.e. Thevenin circuit with load resistor. This is the
Thevenin’s equivalent circuit
RTH
VTH
Fig(5)
Now apply Ohm’s law and calculate the total load current from fig 5.
IL = VTH/ (RTH + RL)= 12V / (11kΩ + 5kΩ) = 12/16kΩ
IL= 0.75mA
And VL = ILx RL= 0.75mA x 5kΩ
VL= 3.75V
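The same calculation can be restated numerically; the Python sketch below (written for these notes) uses only the values given in the example (48 V source, 12 kΩ, 4 kΩ and 8 kΩ resistors, RL = 5 kΩ) and reproduces VTH, RTH, IL and VL.

def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

Vs, R12, R4, R8, RL = 48.0, 12e3, 4e3, 8e3, 5e3

VTH = Vs * R4 / (R12 + R4)            # open-circuit voltage across the 4 kOhm resistor
RTH = R8 + parallel(R4, R12)          # source shorted: 8 kOhm in series with 4 kOhm || 12 kOhm
IL  = VTH / (RTH + RL)
VL  = IL * RL

print(VTH, RTH, IL * 1e3, VL)         # 12 V, 11 kOhm, 0.75 mA, 3.75 V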
Norton’s Theorem Statement :
Any linear, bilateral two terminal network consisting of sources and
resistors(Impedance),can be replaced by an equivalent circuit consisting of a current source in
parallel with a resistance (Impedance),the current source being the short circuited current across
the load terminals and the resistance being the internal resistance of the source network looking
through the open circuited load terminals.
(a) (b)
Figure (a) shows a simple block representation of a network with several active / passive
elements with the load resistance RL connected across the terminals ‘a & b’ and figure (b) shows
the Norton equivalent circuit with IN connected across RN & RL .
3. Next Norton resistance RN is found out depending upon whether the network contains
dependent sources or not.
4. Replace the network with IN in parallel with RN and the concerned branch resistance
across the load terminals(A&B) as shown in below fig
Example: Find the current through the resistance RL (1.5 Ω) of the circuit shown in the
figure (a) below using Norton’s equivalent circuit.?
Fig(a)
Solution: To find the Norton equivalent circuit we have to find IN = Isc and RN = Voc / Isc.
Short the 1.5Ω load resistor as shown in (Fig 2), and Calculate / measure the Short Circuit
Current. This is the Norton Current (IN).
Fig(2)
We have shorted the A-B terminals to determine the Norton current, IN. The 6 Ω and 3 Ω resistors are then
in parallel, and this parallel combination is in series with the 2 Ω resistor. So the total
resistance of the circuit seen by the source is:
2Ω + (6Ω || 3Ω) ….. (|| = in parallel with)
RT = 2Ω + [(3Ω x 6Ω) / (3Ω + 6Ω)]
RT = 2Ω + 2Ω
RT = 4Ω
IT = V / R T
IT = 12V / 4Ω= 3A..
Now we have to find ISC = IN… Apply CDR… (Current Divider Rule)…
ISC = IN = 3A x [(6Ω / (3Ω + 6Ω)] = 2A.
ISC= IN = 2A.
Fig(3)
All voltage & current sources replaced by their internal impedances (i.e. ideal voltage sources
short circuited and ideal current sources open circuited) and Open Load Resistor. as shown in
fig.(4)
Fig(4)
Calculate/measure the open-circuit resistance; this is the Norton resistance (RN). Reducing the
12 V DC source to zero is equivalent to replacing it with a short circuit, as shown in
fig (4). We can see that the 3 Ω resistor is in series with the parallel combination of the 6 Ω and 2 Ω
resistors, i.e.:
3Ω + (6Ω || 2Ω) ….. (|| = in parallel with)
RN = 3Ω + [(6Ω x 2Ω) / (6Ω + 2Ω)]
RN = 3Ω + 1.5Ω
RN = 4.5Ω
Fig(5)
Connect the RN in Parallel with Current Source IN and re-connect the load resistor. This is
shown in fig (6) i.e. Norton Equivalent circuit with load resistor.
Fig(6)
Now apply the Ohm’s Law and calculate the load current through Load resistance across the
terminals A&B. Load Current through Load Resistor is
IL = IN × [RN / (RN + RL)]
IL = 2 A × [4.5 Ω / (4.5 Ω + 1.5 Ω)]
IL = 1.5 A
Maximum Power Transfer Theorem:
In many practical situations, a circuit is designed to provide power to a load.
While for electric utilities minimizing power losses in the process of transmission and
distribution is critical for efficiency and economic reasons, there are other applications, in areas
such as communications, where it is desirable to maximize the power delivered to a load. In
electrical applications with loads such as loudspeakers, antennas and motors, it is
required to find the condition under which maximum power is transferred from the
circuit to the load.
According to Maximum Power Transfer Theorem, for maximum power transfer from the
network to the load resistance , RL must be equal to the source resistance i.e. Network’s
Thevenin equivalent resistance RTh . i.e. RL = RTh
The load current I in the circuit shown above is given by
I = VTH / (RTH + RL)
so that the power delivered to the load is
P = I² RL = VTH² RL / (RTH + RL)²
The condition for maximum power transfer can be obtained by differentiating the above
expression for the power delivered with respect to the load resistance (since we want to find the
value of RL for maximum power transfer) and equating it to zero:
∂P/∂RL = VTH² / (RTH + RL)² − 2 VTH² RL / (RTH + RL)³ = 0
which is satisfied when RL = RTH.
Under the condition of maximum power transfer, the efficiency η of the network is then given
by:
PLOSS = [VTH / (RL + RL)]² × RTH = VTH² / (4RL)   (taking RTH = RL)
η = output / input = (VTH² / 4RL) / (VTH² / 4RL + VTH² / 4RL) = 0.50
For maximum power transfer the load resistance should be equal to the Thevenin equivalent
resistance ( or Norton equivalent resistance) of the network to which it is connected . Under the
condition of maximum power transfer the efficiency of the system is 50 %.
Example: Find the value of RL for maximum power transfer in the circuit of Fig. Find the
maximum power.?
Solution:We need to find the Thevenin resistance RTh and the Thevenin voltage VTh across the
terminals a-b. To get RTh, we use the circuit in Fig. (a)
RTh = 2 + 3 + (6 ∥ 12) = 5 + (6 × 12)/(6 + 12) = 5 + 4 = 9 Ω
i2 = −2 A,
Solving for i1, we get i1= −2/3.
Applying KVL around the outer loop to get VTh across terminals a-b, we obtain,
VTh= 22 V
For maximum power transfer, RL= RTh= 9Ω and the maximum power is,
PMAX = VTH² / (4RL) = (22 × 22) / (4 × 9) = 13.44 W
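As a numerical cross-check of this result (a sketch written for these notes), one can sweep RL and confirm that the delivered power peaks at RL = RTH = 9 Ω with roughly 13.44 W.

import numpy as np

VTH, RTH = 22.0, 9.0
RL = np.linspace(1.0, 30.0, 2901)          # load resistances from 1 to 30 ohms
P = VTH**2 * RL / (RTH + RL)**2            # power delivered to the load

print(RL[np.argmax(P)], P.max())           # approximately 9.0 ohms and 13.44 W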
Superposition Theorem:
The principle of superposition helps us analyze a linear circuit with more than
one current or voltage source. It is sometimes easier to find the voltage across, or the current in, a
branch of the circuit by considering the effect of one source at a time and replacing the other
sources with their ideal internal resistances.
In any linear, bilateral two-terminal network consisting of more than one source,
the total current or voltage in any part of the network is equal to the algebraic sum of the currents
or voltages in the required branch with each source acting individually while the other sources are
replaced by their ideal internal resistances (i.e. voltage sources by short circuits and current
sources by open circuits).
Steps to Apply Super position Principle:
1. Replace all independent sources with their internal resistances except one source. Find the
output (voltage or current) due to that active source using nodal or mesh analysis.
2. Repeat step 1 for each of the other independent sources.
3. Find the total contribution by adding algebraically all the contributions due to the
independent sources.
Example: By Using the superposition theorem find I in the circuit shown in figure?
Fig.(a)
Solution: Applying the superposition theorem, the current I2 in the resistance of 3 Ω due to the
voltage source of 20V alone, with current source of 5A open circuited [ as shown in the figure.1
below ] is given by :
Fig1
I2 = 20/(5+3) = 2.5A
Similarly the current I5 in the resistance of 3 Ω due to the current source of 5A alone with
voltage source of 20V short circuited [ as shown in the figure.2 below ] is given by :
Fig.2
Practice Problem
1. Find the Current through the 3Ω resistance connected between C and D in the below figure.
Answer : 1 A from C to D.
STEADY STATE ANALYSIS OF SINGLE PHASE AC CIRCUITS
The upper terminal of alternating voltage source is positive and lower terminal negative so that
current flows in the circuit as shown . After some time (a fraction of a second), the polarities of the
voltage source are reversed so that current now flows in the opposite direction. This is called alternating
current because the current flows in alternate directions in the circuit.
sinusoidal current can be expressed in the same way as voltage i.e. i = Im sin ωt.
Why Sine Waveform?
Although it is possible to produce alternating voltages and currents with an endless variety
of waveforms (e.g., square waves, triangular waves, rectangular waves, etc.), engineers
choose to adopt the sine waveform. The following are the technical and economical advantages of
producing sinusoidal alternating voltages and currents:
(i) The sine waveform produces the least disturbance in the electrical circuit and is the smoothest
and most efficient waveform. For example, when the current in a capacitor, in an inductor or in a
transformer is sinusoidal, the voltage across the element is also sinusoidal. This is not true of
any other waveform.
The use of sinusoidal voltages applied to appropriately designed coils results in a revolving magnetic field which has
the capacity to do work.
Fig. 11.6
A cycle can also be defined in terms of angular measure. One cycle corresponds to 360º
electrical or 2π radians. The voltage or current generated in a conductor will span 360º
electrical (or complete one cycle) when the conductor moves past successive north and south
poles.
(v) Alternation. One-half cycle of an alternating quantity is called an alternation. An alternation spans
180º electrical. Thus in Fig. 11.6, the positive or negative half of alternating voltage is the alternation.
(vi) Time period. The time taken in seconds to complete one cycle of an alternating quantity is called
its time period. It is generally represented by T.
(vii) Frequency. The number of cycles that occur in one second is called the frequency (f) of the
alternating quantity. It is measured in cycles/sec (C/s) or Hertz (Hz). One Hertz is equal to 1C/s.
(viii) Amplitude. The maximum value (positive or negative) attained by an alternating quantity is
called its amplitude or peak value. The amplitude of an alternating voltage or current is designated by Vm
(or Em) or Im.
Table of Contents
Chapter 1
1.1 Basic definitions and representation of networks
1.2. Analysis and synthesis
1.3. Network components
1.4. Types (active, passive; linear, non-linear; lumped, distributed)
1.5. Mathematical equations (time-domain and transformed)
1.7. Important definitions and mathematical representations
1.7.1. Poles and zeros of rational function (Laplace domain)
1.7.2. Partial fraction expansion and residues
1.7.3. Continued Fraction expansion
Chapter 2
2 Realizability Theory and Positive Real Functions
2.1 Realizability criteria (Passive Networks)
2.2 Positive Real Conditions
2.3 Hurwitz and Strictly-Hurwitz Polynomials
2.4 Tests for Hurwitz nature of polynomials (P(s))
2.5 Sturm's Theorem and Sturm's Test
2.6 Testing driving-point functions
2.6.1 General realizability criteria
Chapter 3
3 Elements of Realizability theory
3.1 Introduction
3.2 Hurwitz Polynomials
3.3 Positive Real Functions
3.4 Elementary Synthesis Procedure
CHAPTER 4
Two Port Network
CHAPTER 5
5 Active Networks
5.1. Active network components
5.2. Operational Amplifier Circuits
5.3. Realization of Active Networks
Chapter 1
1 Introduction
1.1 Basic definitions and representation of networks
Linear and non-linear elements:
A system is linear if the superposition theorem holds true for its input-output relationship.
Similarly, linear elements are those that have a linear response (current or voltage) to the input
(voltage or current), i.e. elements that have a linear relationship between the current through them and the
voltage across them.
One-port, two-port, multi-port networks
A pair of terminals such that the current entering one of the terminals is the same as the current leaving
the other terminal is called a port. Depending on the number of ports, networks can be classified
as 1-port, 2-port…, n-port (multi-port).
1.5. Mathematical equations (time-domain and transformed)
Network analysis and synthesis is usually done in Laplace domain so that it is important to
know mathematical equations of network elements in Laplace domain. Consider the following
general network element.
Example 1
Find the driving-point impedance and admittance functions for the following network.
Example 2
Determine Z11(s) for the following network.
Conclusion
Driving-point functions of linear, lumped and passive elements (resistors, inductors and
capacitors) are rational functions of s (in the Laplace domain) with positive and real coefficients.
Example 3
Example 4
The constants Ki are called residues of the poles.
The term Kos exists only if H(s) has a pole at infinity.
Example 5
1.7.3. Continued Fraction expansion
Example 6 Express the following impedance function using a continued fraction expansion.
Problem 2
Chapter 2
2 Realizability Theory and Positive Real Functions
2.1 Realizability criteria (Passive Networks)
Driving-point functions of networks made up of linear, lumped, passive elements (resistors,
inductors, capacitors and transformers) are rational functions of s. But not all rational functions
of s describe a realizable network of RLCM elements.
According to Otto Brune, a driving-point function (Z(s) or Y(s)) is realizable using lumped
passive (R, L, C, M) elements if it is a positive real (PR) rational function of s, i.e. if it
satisfies the following conditions.
Or
1) H(s) (Z(s) or Y(s)) is a real rational function of s in which all the coefficients (of the numerator
and denominator polynomials) are real and positive.
2) If s (which is generally complex, s = σ + jω) has a non-negative real part, then H(s) must not have
a negative real part; i.e. if Re[s] ≥ 0, then Re[H(s)] ≥ 0.
c) P(s) = (s + 2)(s² − 4s + 5)
Properties
A strictly Hurwitz polynomial has the form:
The expansion stops when remainder of the subsequent division is zero. If there
is no premature termination, denominator of the last division is a constant.
If the last denominator is not a constant, then we say there is premature
termination.
The polynomial P(s) is strictly Hurwitz if all the quotient coefficients αi are real and positive, and if
there is no premature termination.
If the coefficients are real and positive but there is premature termination, then P(s) is Hurwitz (though
not strictly Hurwitz).
Otherwise, the polynomial is non-Hurwitz.
Example 1
Test for Hurwitz nature of the following polynomial using CFE.
P(s) = (s² + 2s + 1)(s² + s + 1)(s² + 4)
= s⁶ + 3s⁵ + 8s⁴ + 15s³ + 17s² + 12s + 4
Solution:
Test for Hurwitz nature of the following polynomial using CFE
P(s) = s⁴ + s³ + 5s² + 3s + 2
Method 3– Routh-Hurwitz Array
Cases
❖ If there is no sign change in coefficients of the first column, and if there is no vanishing
row, the polynomial is strictly Hurwitz. (If there are coefficients that contain ε, evaluate sign
of the coefficients by taking the limit ε→ 0+).
❖ If there is at least one sign change, the polynomial is non-Hurwitz.
❖ If there exists a vanishing row (a root on the jω axis), the polynomial is not strictly Hurwitz,
but if there is still no sign change, the polynomial is Hurwitz; i.e. the polynomial is Hurwitz
if there is no sign change (vanishing rows are allowed).
Example 2
Test the Hurwitz nature of the following polynomial using the Routh-Hurwitz array.
a) P(s) = s⁴ + s³ + 5s² + 3s + 2
Solution:
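The worked Routh-Hurwitz array for this example appears only in the figure. Purely as a numerical cross-check (this is not the array method itself; the sketch is written for these notes), the root locations of the polynomial can be inspected directly in Python:

import numpy as np

# A polynomial is strictly Hurwitz when every root has a strictly negative real
# part, and Hurwitz (but not strictly) when roots on the j-omega axis are allowed.
coeffs = [1, 1, 5, 3, 2]                 # P(s) = s^4 + s^3 + 5s^2 + 3s + 2
roots = np.roots(coeffs)
print(roots)
print(all(r.real < 0 for r in roots))    # True if P(s) is strictly Hurwitz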
= He(s) + Ho(s)
where De(s)² − Do(s)² = D(s)D(−s).
For s = jω:
He(jω) is purely real, and
Ho(jω) is purely imaginary.
De(s)² − Do(s)² = D(s)D(−s) = D(jω)D(−jω) = |D(jω)|², which is always positive.
Ne(jω)De(jω) − No(jω)Do(jω) = P(ω²) is an even polynomial function of ω.
Therefore, Re[H(jω)] = He(jω).
The second positive real condition (condition B) states that:
For all real ω, Re[H(jω)] ≥ 0
► He(jω) ≥ 0
where He(jω) = P(ω²) / |D(jω)|²
Now condition B can be restated as:
P(ω²) ≥ 0 for all real ω, or equivalently for ω² ≥ 0.
Let x = ω²
c) P(x) = x(x² − 4x + 4)
Test for condition B is reduced to two steps
1. Check that both a0 and an are non-negative.
2. Check that P(x) has no odd-ordered zeros on the positive x-axis. This can be done in three
ways:
Method 1 – Factorization
Method 2 – Plot P(x) over a sufficient range of x
Method 3 – Sturm’s test
Sturm’s Test
Step 1: Develop a sequence of polynomials Po(x), P1(x), P2(x)… called Sturm’s
functions as follows:
Define P0(x) = P(x)
P1(x) = P’(x)
To find P2(x), divide P0(x) by P1(x) to get a two term quotient and a remainder. The remainder
is negative of P2(x).
i.e. P0(x) / P1(x) = b1x + c1 + [-P2(x)] / P1(x)
Repeat this step for P3, P4, …
Euclid Algorithm:
P0(x) = P(x)
P1(x) = P’(x)
P0(x) = q1(x)P1 + [-P2(x)]
P1(x) = q2(x)P2 + [-P3(x)]
…
…
Pk-2(x) = qk-1(x)Pk-1(x) + [−Pk(x)]
The process stops when the remainder Pk(x) becomes a constant (when k = n) or zero ( when
k ≤ n premature termination)
Step 2:
Case 1: Pk(x) = constant when k = n
Sturm’s Theorem
The number of odd-ordered zeros which P(x) has in the interval a ≤ x ≤ b is equal to |Sb − Sa|,
where Sa and Sb are the numbers of sign changes in the Sturm sequence (P0, P1, P2, …) evaluated at x = a and
x = b respectively.
Here, we are interested in the presence of odd-ordered zeros on the positive x-axis, hence we
take a = 0 and b → ∞.
Case 2: Pk(x) = 0 for some k ≤ n (Premature termination)
This shows that Pk-1(x) is a factor of P(x) so that all zeros of Pk-1 are zeros of P(x) and the
multiplicity of these zeros in P(x) is one higher than their multiplicity in Pk-1(x). The test
continues by taking the polynomials P0 to Pk-1. The zero count |Sb − Sa| in this case is the sum of the
number of odd-ordered zeros plus the multiple zeros (due to Pk-1(x)), each counted once.
Example 3
Test whether the following polynomials satisfy the condition P(x) ≥0 for all x ≥ 0.
a) p(x) = x² − 4x + 3
Solution:
Method – 1
P(x) = (x − 1)¹(x − 3)¹
Two odd-ordered zeros on the positive x-axis (x = 1 and x = 3).
Therefore, P(x) does not satisfy the condition.
Method – 2
P2(x) = - (-1) = 1
P2(x) = 1 ← constant when k = 2 = n
|Sb – Sa| = 2 ← the number of odd-ordered zeros on the positive x – axis
Condition not satisfied.
b) p(x) = x⁴ − 8x³ + 23x² − 28x + 12
Solution:
Sturm’s test n = 4
P0(x) = x⁴ − 8x³ + 23x² − 28x + 12
P1(x) = P0′(x) = 4x³ − 24x² + 46x − 28
Using the Euclid algorithm:
P2(x) = ½x² − 2x + 2
P3(x) = 2x − 4
P4(x) = 0, with k = 4 ≤ n = 4 ← premature termination
P3(x) = 2(x − 2)¹ is a factor of P(x)
The multiplicity of the zero (x − 2) in P(x) is one higher than in P3(x), so
(x − 2)¹⁺¹ = (x − 2)² is a factor of P(x).
P(x) has one double (even-ordered) zero
P(x) has 3 distinct zeros on the positive x-axis, one of which is the double zero (x − 2)².
Therefore, P(x) has 3 − 1 = 2 odd-ordered zeros.
Hence, P(x) is not ≥ 0 for all x ≥ 0 (the condition is not satisfied).
Problem 2
Check whether p(x) = x⁴ − 15x² + 10x + 24 satisfies the condition P(x) ≥ 0 for all x ≥ 0.
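A possible way to carry out Sturm's test by machine is sketched below (Python with sympy, assuming sympy is available; the helper sign_changes is written for these notes). Sa is evaluated at x = 0 and Sb via the leading coefficients, which give the signs as x grows without bound.

from sympy import symbols, sturm, Poly

x = symbols('x')
p = x**4 - 15*x**2 + 10*x + 24

seq = sturm(p)                                   # Sturm functions P0, P1, ..., Pk
at_zero = [s.subs(x, 0) for s in seq]            # signs at x = 0
at_inf = [Poly(s, x).LC() for s in seq]          # sign at large positive x equals the sign of the leading coefficient

def sign_changes(values):
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if bool(a > 0) != bool(b > 0))

# |Sb - Sa| = number of odd-ordered zeros of p(x) on the positive x-axis
print(abs(sign_changes(at_inf) - sign_changes(at_zero)))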
Solution:
1. Inspection test:
✓ Real and positive coefficients
✓ Highest degrees differ by 1 − 1 = 0 ≤ 1
✓ Lowest degrees differ by 0 − 0 = 0 ≤ 1
✓ No missing terms
✓ No imaginary-axis poles and zeros
Ok!
2. Necessary and sufficient conditions:
P(ω²) = NeDe − NoDo |s = jω = (3)(1) − (j2ω)(jω) = 2ω² + 3
P(x) = 2x + 3 ← all coefficients are positive, so P(x) ≥ 0 for all x ≥ 0, i.e. Re[Z(jω)] ≥ 0 for all real ω. Ok!
N(s) + D(s) = 3s + 4 = 3(s + 4/3) ← zero at s = −4/3
N(s) + D(s) is strictly Hurwitz.
All conditions are met, so Z(s) is positive real.
Problem 3
Test the following admittance function.
Example 5
Test positive realness of the following impedance function
Solution:
1. Inspection test.
✓ Real and positive coefficients
✓ Highest degrees differ by 3-3 = 0 ≤ 1
✓ Lowest degrees differ by 0-0 = 0 ≤ 1
✓ No missing terms
➢ Test for imaginary axis poles and zeros.
❖ This can be done using Routh-Hurwitz array; there will be vanishing row if a polynomial
has imaginary axis (jω – axis) roots.
Pk(s) = 3s² + 3 = 3(s² + 1)¹ is a factor of P(s)
P(s) has only simple roots on the jω axis.
Z(s) has a simple zero on the jω axis.
Ok!
2. Necessary and sufficient conditions
P(ω²) = NeDe − NoDo |s = jω
= (3s² + 3)(3s² + 1) − (2s³ + 2s)(s³ + 4s) |s = jω
= 2ω⁶ − ω⁴ − 4ω² + 3
P(x) = 2x³ − x² − 4x + 3
Sturm's Test
P0(x) = 2x³ − x² − 4x + 3
P1(x) = 6x² − 2x − 4
P2(x) = (25/9)x − 25/9
P3(x) = 0 ← premature termination
P2(x) = (25/9)(x − 1)¹
(x − 1)¹⁺¹ = (x − 1)² is a factor of P(x)
P(x) has one double zero.
P(x) has one zero on the positive x-axis, namely the double zero (x − 1)².
Therefore, P(x) has 1 − 1 = 0 odd-ordered zeros.
No odd-ordered zeros on the positive x-axis.
Hence, P(x) ≥ 0 for x ≥ 0.
N(s) + D(s) = 3s³ + 6s² + 6s + 4
Chapter 3
3 Elements of Realizability theory
3.1 Introduction
In the frequency domain, the stability criterion requires that the system function possess poles
in the left-half plane or on the jω axis only. Moreover, the poles on the jω axis must be
simple. As a result of the requirement of simple poles on the jω axis, if H(s) is given as a ratio of
two polynomials of degrees n (numerator) and m (denominator), then the order of the numerator n cannot
exceed the order of the denominator m by more than unity, that is, n − m ≤ 1. If n exceeded m by more than
unity, there would be a multiple pole at s = jω = ∞. To summarize, in order for a network to be stable, the
following three conditions must hold:
1. H(s) cannot have poles in the right-half plane.
2. H(s) cannot have multiple poles on the jω axis.
3. The degree of the numerator of H(s) cannot exceed the degree of the denominator by more than unity.
Finally, it should be pointed out that a rational function H(s) with poles in the left-half plane
only has an inverse transform h(t), which is zero for t < 0. In this respect, stability implies
causality. Since system functions of passive linear networks with lumped elements are rational
functions with poles in the left-half plane or on the jω axis only, causality ceases to be a problem
when we deal with system functions of this type. We are only concerned with the problem of
causality when we have to design a filter for a given amplitude characteristic such as the ideal
low-pass filter. We know we could never hope to realize exactly a filter of this type because
the impulse response would not be causal. To this extent the Paley-Wiener criterion is helpful
in defining the limits of our capability.
For a network function to be stable, its poles must therefore lie in the left-half plane or on the jω axis;
moreover, the poles on the jω axis must be simple. The
denominator polynomial of the system function belongs to a class of polynomials known as
Hurwitz polynomials. A polynomial P(s) is said to be Hurwitz if the following conditions are
satisfied:
1. P(s) is real when s is real.
2. The roots of P(s) have real parts which are zero or negative; i.e. the roots are of the form
s = −σᵢ, with σᵢ real and positive, or s = ±jωᵢ, with ωᵢ real.
where m(s) is of one higher degree than n(s). Then if we divide n(s) into m(s), we obtain a
single quotient and a remainder:
ψ(s) = q₁s + R₁(s)/n(s)        (3.9)
The degree of the term R₁(s) is one lower than the degree of n(s). Therefore, if we invert the
remainder term and divide, we have
n(s)/R₁(s) = q₂s + R₂(s)/R₁(s)        (3.10)
We see that the process of obtaining the continued fraction expansion of ψ(s) simply
involves division and inversion. At each step we obtain a quotient term qis and a remainder
term, Ri+1(s) /Ri(s). We then invert the remainder term and divide Ri+1(s) into Ri(s) to obtain
a new quotient. There is a theorem in the theory of continued fractions which states that the
continued fraction expansion of the even to odd or odd to even parts of a polynomial must
be finite in length. Another theorem states that, if the continued fraction expansion of the
odd to even or even to odd parts of a polynomial yields positive quotient terms, then the
polynomial must be Hurwitz to within a multiplicative factor W(s). That is, if we write
F(s) = W(s)F1(s) (3.12)
then F(s) is Hurwitz, if W(s) and F1(s) are Hurwitz. For example, let us test whether the
polynomial
F(s) = s⁴ + s³ + 5s² + 3s + 4        (3.13)
is Hurwitz. The even and odd parts of F(s) are
m(s) = s⁴ + 5s² + 4
n(s) = s³ + 3s        (3.14)
We now perform a continued fraction expansion of ψ(s) = m(s)/n(s) by dividing m(s) by
n(s), and then inverting and dividing again, as given by the operation
s³ + 3s ) s⁴ + 5s² + 4 ( s
          s⁴ + 3s²
          ---------------
          2s² + 4 ) s³ + 3s ( s/2
                    s³ + 2s
                    ---------------        (3.15)
                    s ) 2s² + 4 ( 2s
                        2s²
                        ---------------
                        4 ) s ( s/4
                            s
                            ---------------
                            0
so that the continued fraction expansion of ψ(s) is
ψ(s) = m(s)/n(s) = s + 1/( s/2 + 1/( 2s + 1/( s/4 ) ) )
Since all the quotient terms of the continued fraction expansion are positive, F(s) is
Hurwitz.
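The divide-and-invert procedure can also be carried out numerically. The Python sketch below (written for these notes, using numpy's polynomial division; cfe_quotients is an invented helper) collects the quotient coefficients for the even and odd parts of F(s) = s⁴ + s³ + 5s² + 3s + 4; all of them come out positive, in line with the expansion above.

import numpy as np

def cfe_quotients(m, n):
    # m, n: coefficient lists (highest power first) of the even and odd parts.
    quotients = []
    num, den = np.array(m, float), np.array(n, float)
    while len(den) > 0 and np.any(den):
        q, r = np.polydiv(num, den)                   # one step of the long division
        quotients.append(q[0])                        # coefficient of the q_i * s quotient term
        r = np.where(np.abs(r) < 1e-9, 0.0, r)        # clean up floating-point dust
        num, den = den, np.trim_zeros(r, 'f')         # invert: divisor becomes dividend
    return quotients

m = [1, 0, 5, 0, 4]          # m(s) = s^4 + 5s^2 + 4
n = [1, 0, 3, 0]             # n(s) = s^3 + 3s
print(cfe_quotients(m, n))   # [1.0, 0.5, 2.0, 0.25] -> quotient terms s, s/2, 2s, s/4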
Example 3.1. Let us test whether the polynomial
G(s) = s³ + 2s² + 3s + 6        (3.16)
is Hurwitz. The continued fraction expansion of n(s)/m(s) is obtained from the division
2s² + 6 ) s³ + 3s ( s/2
          s³ + 3s
          ---------------
          0
We see that the division has been terminated abruptly by a common factor s³ + 3s. The
polynomial can then be written as
G(s) = (s³ + 3s)(1 + 2/s)        (3.17)
We know that the term 1 + 2/s is Hurwitz, and the multiplicative factor s³ + 3s is also
Hurwitz, so G(s) is Hurwitz. The term s³ + 3s is the multiplicative factor W(s), which we referred to earlier.
Example 3.2. Next consider a case where W(s) is non-Hurwitz.
F(s) = s⁷ + 2s⁶ + 2s⁵ + s⁴ + 4s³ + 8s² + 8s + 4        (3.18)
The continued fraction expansion of F(s) is now obtained:
n(s)/m(s) = s/2 + 1/( (4/3)s + 1/( (3/2)s ) )        (3.19)
with the expansion terminating prematurely on the common factor s⁴ + 4.
We thus see that W(s) = s⁴ + 4, which can be factored into
W(s) = (s² + 2s + 2)(s² − 2s + 2)        (3.20)
It is clear that F(s) is not Hurwitz.
Example 3.3. Let us consider a more obvious non-Hurwitz polynomial
F(s) = s⁴ + s³ + 2s² + 3s + 2        (3.21)
The continued fraction expansion is
s³ + 3s ) s⁴ + 2s² + 2 ( s
          s⁴ + 3s²
          ---------------
          −s² + 2 ) s³ + 3s ( −s
                    s³ − 2s
                    ---------------
                    5s ) −s² + 2 ( −s/5
                         −s²
                         ---------------
                         2 ) 5s ( 5s/2
                             5s
                             ---------------
We see that F(s) is not Hurwitz because of negative quotients.
Example 3.4. Consider the case where F(s) is an odd or even function. It is impossible to
perform a continued fraction expansion on the function as it stands. However, we can test the
ratio of F(s) to its derivative, F'(s). If the ratio F(s)/ F'(s) gives a continued fraction expansion
with all positive coefficients, then F(s) is Hurwitz. For example, if F(s) is given as
F(s) = s⁷ + 3s⁵ + 2s³ + s        (3.22)
F′(s) = 7s⁶ + 15s⁴ + 6s² + 1        (3.23)
Without going into the details, it can be shown that the continued fraction expansion of
F(s)/F’(s) does not yield all positive quotients. Therefore F(s) is not Hurwitz.
2. The real part of F(s) is greater than or equal to zero when the real part of s is greater than
or equal to zero, that is,
Re[F(s)] ≥ 0 for Re[s] ≥ 0
Let us consider a complex plane interpretation of a p.r. function.
Consider the s plane and the F(s) plane in Fig. 10.5. If F(s) is p.r., then a point σ0 on the
positive real axis of the s plane would correspond to, or map onto, a point F(σ0) which must
be on the positive real axis of the F(s) plane. In addition, a point si in the right half of the s
plane would map onto a point F(si) in the right half of the F(s) plane.
[Fig. 10.5: the s plane and the F(s) plane, showing σ0 mapping onto F(σ0) on the positive real axis and si mapping into the right half of the F(s) plane]
Consider, for example, F(s) = K/s with K real and positive, and let s = σ + jω. Then

Re F(s) = Re [K/(σ + jω)] = Kσ/(σ^2 + ω^2) ≥ 0 for σ ≥ 0        (3.24)

Therefore, F(s) is p.r. If F(s) is an impedance function, then the corresponding element is
a capacitor of 1/K farads.
We thus see that the basic passive impedances are p.r. functions.
Similarly, it is clear that the admittances

Y(s) = K
Y(s) = Ks        (3.25)
Y(s) = K/s
are positive real if K is real and positive. We now show that all driving point immittances of
passive networks must be p.r. The proof depends upon the following assertion: for a sinusoidal
input, the average power dissipated by a passive network is nonnegative. For the passive
network in Fig. 10.6, the average power dissipated by the network is

Average power = (1/2) Re[Zin(jω)] |I|^2 ≥ 0        (3.26)
We then conclude that, for any passive network
Re Zin(jω) ≥ 0        (3.27)

[Fig. 10.6: a passive network whose driving-point impedance is Zin(s)]
Now consider the passive network whose driving-point impedance is Zin(s). Let us load the network with incidental dissipation such that if the
driving-point impedance of the uniformly loaded network is Z1(s), then

Z1(s) = Zin(s + α)        (3.28)

where α, the dissipation constant, is real and positive. Since Z1(s) is the impedance of a passive
network,

Re Z1(jω) ≥ 0        (3.29)
Re Zin(α + jω) ≥ 0        (3.30)

Since α is an arbitrary real positive quantity, it can be taken to be σ = Re s, so that Re Zin(s) ≥ 0 whenever Re s ≥ 0. Thus the theorem is proved.
Next let us consider some useful properties of p.r. functions. The proofs of these
properties are not given here.
1. If F(s) is p.r., then 1/F(s) is also p.r. This property implies that if a driving-point
impedance is p.r., then its reciprocal, the driving-point admittance, is also p.r.
2. The sum of p.r. functions is p.r. From an impedance standpoint, we see that if two
impedances are connected in series, the sum of the impedances is p.r. An analogous
situation holds for two admittances in parallel. Note that the difference of two p.r.
functions is not necessarily p.r.; for example, F(s) = s − 1/s is not p.r.
3. The poles and zeros of a p.r. function cannot have positive real parts, i.e., they cannot
be in the right half of the s plane.
4. Only simple poles with real positive residues can exist on the jω axis.
5. The poles and zeros of a p.r. function are real or occur in conjugate pairs. We know
that the poles and zeros of a network function are functions of the elements in the
network. Since the elements themselves are real, there cannot be complex poles or
zeros without conjugates because this would imply imaginary elements.
6. The highest powers of the numerator and denominator polynomials may differ at most
by unity. This condition prohibits multiple poles and zeros at s = ∞.
7. The lowest powers of the denominator and numerator polynomials may differ by at
most unity. This condition prevents the possibility of multiple poles or zeros at s = 0.
8. The necessary and sufficient conditions for a rational function with real coefficients
F(s) to be p.r. are
(a) F(s) must have no poles in the right-half plane.
(b) F(s) may have only simple poles on the jω axis, with real and positive residues.
(c) Re F(jω) ≥ 0 for all ω.
Let us compare this new definition with the original one, which requires the two conditions:
1. F(s) is real when s is real.
2. Re F(s) ≥ 0 when Re s ≥ 0.
In order to test condition 2 of the original definition, we must test every single point in
the right-half plane. In the alternate definition, condition (c) merely requires that we test the
behaviour of F(s) along the jω axis. It is apparent that testing a function for the three conditions
given by the alternate definition represents a considerable saving of effort, except in simple
cases such as F(s) = 1/s.
Let us examine the implications of each criterion of the second definition.
Condition (a) requires that we test the denominator of F(s) for roots in the right-half plane, i.e.,
we must determine whether the denominator of F(s) is Hurwitz. This is readily accomplished
through a continued fraction expansion of the odd to even or even to odd parts of the
denominator. The second requirement, condition (b), is tested by making a partial fraction
expansion of F(s) and checking whether the residues of the poles on the jω axis are positive
and real. Thus, if F(s) has a pair of poles at s = ±jω1, a partial fraction expansion gives terms
of the form

K1/(s − jω1) + K1*/(s + jω1)

If K1 is found to be positive, then F(s) satisfies the second of the three conditions.
In order to test for the third condition for positive realness, we must first find the real
part of F(jω) from the original function F(s). To do this, let us consider a function F(s) given
as a quotient of two polynomials

F(s) = P(s)/Q(s)        (3.32)

We can separate the even parts from the odd parts of P(s) and Q(s) so that F(s) is

F(s) = [M1(s) + N1(s)] / [M2(s) + N2(s)]        (3.33)
where Mi(s) is an even function and Ni(s) is an odd function. F(s) is now decomposed into even
and odd parts by multiplying both P(s) and Q(s) by (M2 − N2), so that

F(s) = [(M1 + N1)/(M2 + N2)]·[(M2 − N2)/(M2 − N2)]
     = (M1·M2 − N1·N2)/(M2^2 − N2^2) + (M2·N1 − M1·N2)/(M2^2 − N2^2)        (3.34)

We see that the products M1·M2 and N1·N2 are even functions, while M1·N2 and M2·N1 are odd
functions. Therefore, the even part of F(s) is

Ev F(s) = (M1·M2 − N1·N2)/(M2^2 − N2^2)        (3.35)

and the odd part of F(s) is

Odd F(s) = (M2·N1 − M1·N2)/(M2^2 − N2^2)        (3.36)
If we let s = jω, we see that the even part of any polynomial is real, while the odd part of the
polynomial is imaginary, so that if F(jω) is written as

F(jω) = Re F(jω) + j Im F(jω)        (3.37)

then Re F(jω) = Ev F(s)|s=jω and j Im F(jω) = Odd F(s)|s=jω. Therefore, to test for the third condition for positive realness, we determine the real part of
F(jω) by finding the even part of F(s) and then letting s = jω. We then check whether
Re F(jω) ≥ 0 for all ω.
[Figs. 10.7 and 10.8: plots of A(ω²) versus ω², showing a single real root and a double root respectively]
The denominator of Re F(jω) is

M2^2(jω) − N2^2(jω) = M2^2(ω) + N2^2(ω) ≥ 0        (3.40)

That is, there is an extra j or imaginary term in N2(jω), which, when squared, gives −1, so
that the denominator of Re F(jω) is the sum of two squared numbers and is always positive.
Therefore, our task resolves into the problem of determining whether

A(ω^2) = M1(jω)M2(jω) − N1(jω)N2(jω) ≥ 0        (3.41)

If we call the preceding function A(ω^2), we see that A(ω^2) must not have positive, real
roots of the type shown in Fig. 10.7; i.e., A(ω^2) must never have single, real roots of ω^2.
However, A(ω^2) may have double roots (Fig. 10.8), because A(ω^2) need not become negative
in this case.
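Since the test of (3.41) only needs the even and odd parts of the numerator and denominator, it is easy to automate. The sketch below is my own illustration (using sympy, with helper and symbol names of my own choosing): it forms Ev F(s) as in (3.35) and evaluates its numerator A(ω²) and denominator at s = jω.

```python
import sympy as sp

s, w = sp.symbols('s omega', real=True)

def even_part_on_jw(P, Q):
    """Numerator A and denominator of Re F(jw) for F(s) = P(s)/Q(s), per eq. (3.35)."""
    M1 = (P + P.subs(s, -s)) / 2      # even part of the numerator
    N1 = (P - P.subs(s, -s)) / 2      # odd part of the numerator
    M2 = (Q + Q.subs(s, -s)) / 2
    N2 = (Q - Q.subs(s, -s)) / 2
    A = sp.expand((M1*M2 - N1*N2).subs(s, sp.I*w))   # A(w^2), eq. (3.41)
    B = sp.expand((M2**2 - N2**2).subs(s, sp.I*w))   # always nonnegative, eq. (3.40)
    return sp.simplify(A), sp.simplify(B)

# F(s) = (s + a)/(s^2 + bs + c) of eq. (3.42), with sample positive coefficients
a, b, c = 2, 3, 4
A, B = even_part_on_jw(s + a, s**2 + b*s + c)
print(A)   # a*(c - w^2) + b*w^2  ->  8 + w^2 here, nonnegative for all w
print(B)
```

With these sample values A(ω²) never becomes negative, so the third p.r. condition is satisfied for this choice of coefficients.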
As an example, consider the requirements for

F(s) = (s + a)/(s^2 + bs + c)        (3.42)

to be p.r. First, we know that, in order for the poles and zeros to be in the left-half plane or on
the jω axis, the coefficients a, b, c must be greater than or equal to zero. Second, if b = 0, then
F(s) will possess poles on the jω axis. We can then write F(s) as

F(s) = s/(s^2 + c) + a/(s^2 + c)        (3.43)
We shall show later that the coefficient a must also be zero when b = 0. Let us proceed
with the third requirement, namely, Re F(jω) ≥ 0. From the requirement

A(ω^2) = M1(jω)M2(jω) − N1(jω)N2(jω) ≥ 0        (3.44)

we obtain A(ω^2) = a(c − ω^2) + bω^2. When b = 0, A(ω^2) becomes negative for ω^2 > c unless a = 0; moreover, with b = 0 and a ≠ 0 the residues of the poles at s = ±j√c are not real and positive. Therefore F(s) is not p.r. in that case.
Since the impedances and admittances of passive time-invariant networks are p.r.
functions, we can make use of our knowledge of impedances connected in series or parallel in
our testing for the p.r. property. For example, if Z1(s) and Z2(s) are passive impedances, then
Z1 connected in parallel with Z2 gives the overall impedance

Z(s) = Z1(s)·Z2(s) / [Z1(s) + Z2(s)]        (3.51)

Since connecting the two impedances in parallel has not affected the passivity of the
network, we know that Z(s) must also be p.r. We see that if F1(s) and F2(s) are p.r. functions,
then

F(s) = F1(s)·F2(s) / [F1(s) + F2(s)]        (3.52)

is also p.r.
If a p.r. impedance can be expressed as a sum Z(s) = Z1(s) + Z2(s) + ... + Zn(s), where each Zi(s) is itself p.r., we
could synthesize a network whose driving-point impedance is Z(s) by simply connecting all
the Zi(s) in series. However, if we were to start with Z(s) alone, how could we decompose Z(s)
to give us the individual Zi(s)? Suppose Z(s) is given in general as
Z(s) = (an·s^n + an−1·s^(n−1) + ... + a1·s + a0) / (bm·s^m + bm−1·s^(m−1) + ... + b1·s + b0) = P(s)/Q(s)        (3.61)
Consider the case where Z(s) has a pole at s = 0 (that is, b0 = 0). Let us divide P(s) by Q(s) to
give a quotient D/s and a remainder R(s), which we can denote as Z1(s) and Z2(s):

Z(s) = D/s + R(s),  D ≥ 0
     = Z1(s) + Z2(s)        (3.62)
Are Z1 and Z2 p.r.? We know that Z1 = D/s is p.r. Is Z2(s) p.r.? Consider the p.r. criteria given
previously.
1. Z2(s) must have no poles in the right-half plane.
2. Poles of Z2(s) on the imaginary axis must be simple, and their residues must be real and
positive.
3. Re[Z2(jω)] ≥ 0, for all ω.
Let us examine these criteria one by one. Criterion 1 is satisfied because the poles of Z2(s)
are also poles of Z(s). Criterion 2 is satisfied by this same argument, since a simple partial fraction
expansion does not affect the residues of the other poles. When s = jω, Re[Z1(jω)] = Re[D/(jω)] = 0.
Therefore we have

Re[Z2(jω)] = Re[Z(jω)] ≥ 0        (3.63)
From the foregoing discussion, it is seen that if Z(s) has a pole at s = 0, a partial fraction
expansion can be made such that one of the terms is of the form K/s and the other terms
combined still remain p.r.
A similar argument shows that if Z(s) has a pole at s = ∞ (that is, n − m = 1), we can divide
the numerator by the denominator to give a quotient Ls and a remainder term R(s), again denoted as
Z1(s) and Z2(s):

Z(s) = Ls + R(s) = Z1(s) + Z2(s)        (3.64)

Here Z2(s) is also p.r. If Z(s) has a pair of conjugate imaginary poles on the imaginary axis, for
example, poles at s = ±jω1, then Z(s) can be expanded into partial fractions so that

Z(s) = 2Ks/(s^2 + ω1^2) + Z2(s)        (3.65)
Here

Re [2Ks/(s^2 + ω1^2)]|s=jω = Re [j2Kω/(ω1^2 − ω^2)] = 0        (3.66)

[Fig. 10: Z(s) realized as Z1(s) in series with Z2(s)]
Consider the following p.r. function:

Z(s) = (s^2 + 2s + 6) / [s(s + 3)]        (3.68)

We see that Z(s) has a pole at s = 0. A partial fraction expansion of Z(s) yields

Z(s) = 2/s + s/(s + 3)
     = Z1(s) + Z2(s)        (3.69)

If we remove Z1(s) from Z(s), we obtain Z2(s), which can be realized as a resistor in parallel
with an inductor, as illustrated in Fig. 11.
[Fig. 11: realization of Z(s) as a capacitor (from Z1) in series with the parallel R–L combination (from Z2)]
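The pole removal of (3.69) can be reproduced symbolically. The sketch below is my own illustration using sympy (the variable names are mine); it removes the pole at s = 0 from the Z(s) of (3.68) and identifies the remainder.

```python
import sympy as sp

s = sp.symbols('s')
Z = (s**2 + 2*s + 6) / (s * (s + 3))         # the p.r. function of eq. (3.68)

# Residue of the pole at s = 0 (the constant D in Z1(s) = D/s)
D = sp.limit(s * Z, s, 0)                     # -> 2, i.e. a capacitor of 1/D = 1/2 F
Z1 = D / s
Z2 = sp.cancel(Z - Z1)                        # remainder, still p.r.
print(D, Z2)                                  # 2, s/(s + 3)

# Viewing Z2 as an admittance: Y2 = 1 + 3/s -> 1-ohm resistor in parallel with a 1/3-H inductor
print(sp.apart(1 / Z2, s))                    # 1 + 3/s
```

The printed remainder s/(s + 3) matches Z2(s) in (3.69), and its admittance 1 + 3/s corresponds to the resistor–inductor parallel combination described above.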
Example 6:
Y(s) = (7s + 2)/(2s + 4)        (3.70)

Let us synthesize the network by first removing min[Re Y(jω)]. The real part of Y(jω) is easily obtained as

Re[Y(jω)] = (8 + 14ω^2)/(16 + 4ω^2)        (3.71)
We see that the minimum of Re[Y(jω)] occurs at ω = 0 and is equal to min[Re Y(jω)] = 0.5.
Let us then remove Y1 = 0.5 mho from Y(s) and denote the remainder as Y2(s), as shown in Fig. 12.
The remainder function Y2(s) is p.r. because we have removed only the minimum real part of
Y(jω). Y2(s) is obtained as

Y2(s) = Y(s) − 0.5 = 3s/(s + 2)        (3.72)

It is readily seen that Y2(s) is made up of a 1/3-Ω resistor in series with a 3/2-farad capacitor. Thus the
final network is shown in Fig. 13.
Next consider the p.r. function Z(s) = (6s^3 + 3s^2 + 3s + 1)/(6s^3 + 3s). The real part of Z(jω) is a constant, equal to unity. Removing a constant of 1 Ω, we
obtain (Fig. 14)
[Fig. 14: removal of a 1-Ω resistor from Z(s), leaving Z1(s)]

Z1(s) = Z(s) − 1 = (3s^2 + 1)/(6s^3 + 3s)        (3.75)

The reciprocal of Z1(s) is an admittance

Y1(s) = (6s^3 + 3s)/(3s^2 + 1)        (3.76)
which has a pole at s = ∞. The pole is removed by finding the partial fraction expansion of
Y1(s):

Y1(s) = 2s + s/(3s^2 + 1)        (3.77)

and then by removing the term with the pole at s = ∞ to give a capacitor of 2 farads in parallel
with Y2(s), as shown in Fig. 15. Y2(s) is now obtained as

Y2(s) = Y1(s) − 2s = s/(3s^2 + 1)        (3.78)

[Fig. 15: the 1-Ω resistor and the 2-F shunt capacitor removed from Z(s), leaving Y2]
[Fig. 16: final network — a 1-Ω resistor in series with the parallel combination of a 2-F capacitor and a branch consisting of a 3-H inductor in series with a 1-F capacitor]
These examples are special cases of the driving-point synthesis problem. However, they do
illustrate the basic techniques involved.
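The removals of Figs. 14–16 can also be reproduced symbolically. The following sketch is my own illustration (sympy, variable names mine); it starts from the Z1(s) of (3.75) and removes the pole at infinity from its reciprocal, as in (3.76)–(3.78).

```python
import sympy as sp

s = sp.symbols('s')
Z1 = (3*s**2 + 1) / (6*s**3 + 3*s)       # eq. (3.75)

Y1 = sp.cancel(1 / Z1)                   # eq. (3.76): (6s^3 + 3s)/(3s^2 + 1), pole at s = infinity
C = sp.limit(Y1 / s, s, sp.oo)           # coefficient of the s-term -> 2 (a 2-F shunt capacitor)
Y2 = sp.cancel(Y1 - C * s)               # eq. (3.78): s/(3s^2 + 1)
Z2 = sp.apart(1 / Y2, s)                 # 3s + 1/s: a 3-H inductor in series with a 1-F capacitor
print(C, Y2, Z2)
```

The printed values reproduce the 2-F capacitor and the 3-H/1-F series branch that appear in the final network of Fig. 16.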
CHAPTER 4
Two Port Network
CHAPTER 5
5 Active Networks
5.1. Active network components
Linear, lumped, and time-invariant active network components can be classified into two
categories: basic building blocks and secondary building blocks. This classification is based on
the observation that every element in the secondary building block category can be realized by
interconnecting elements from the basic building block category. Secondary building blocks are
not fundamental, but they are useful for modelling some network components.
• Basic building blocks
This category includes passive elements such as resistors, capacitors, inductors, and active
elements such as operational amplifiers (Op-Amps).
• Secondary building blocks
o Negative impedance converter (NIC)
It is a two-port device whose input impedance equals the negative of the load impedance scaled by a constant 1/k.
Inductors can be simulated using resistors, capacitors, and active elements (for example with a generalized impedance converter, GIC). This is very important in filters, where simulated inductors replace physical ones; at low frequencies in particular, the required inductors would be physically large and are difficult to fabricate in integrated circuits.
o Frequency dependent negative resistance (FDNR)
A circuit realization of an FDNR can be obtained by terminating port 1 of the GIC with a
capacitor as shown below.
Operational amplifiers
• An op amp is a high voltage gain, DC amplifier with high input impedance, low output
impedance, and differential inputs.
• Positive input at the non-inverting input produces positive output; positive input at the
inverting input produces negative output.
• Practically, op amps are not used in an open-loop manner; feedback is included to reduce
the gain and obtain a more precise and predictable characteristic.
• If one of the input terminals is grounded in a feedback configuration, the other is virtually
grounded.
Inverting feedback
The impedance elements Zs and ZF are the impedances of one-port networks containing one or
more elements. For the ideal inverting configuration the voltage gain is Vo/Vi = −ZF/Zs.
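As a small illustration of the inverting relation (assuming the ideal gain Vo/Vi = −ZF/Zs stated above; the helper name is my own), the sketch below evaluates two common element choices symbolically.

```python
import sympy as sp

s, R, C, R1, R2 = sp.symbols('s R C R1 R2', positive=True)

def inverting_gain(Zs, Zf):
    """Voltage gain of the ideal inverting op-amp stage, Vo/Vi = -Zf/Zs."""
    return sp.simplify(-Zf / Zs)

print(inverting_gain(R, 1/(C*s)))   # -1/(C*R*s): an inverting integrator
print(inverting_gain(R1, R2))       # -R2/R1: a flat inverting amplifier
```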
5.3. Realization of Active Networks
Transfer functions can be realized using active elements (especially operational amplifiers) and
RC elements. Inductors are not common in active networks because of their large size and since
they can be simulated using active circuits. Transfer functions of simple feedback circuits have
orders not greater than 2. However, in most applications, such as filters and control systems,
higher-order transfer functions are required. The most common and easiest way to realize these
functions is to break the given transfer function into a product of first- and second-order
transfer functions, so that a cascade interconnection of the subnetworks obtained from these
functions realizes the given transfer function, provided there is no loading effect. If the transfer
function is a voltage-gain function, cascaded op-amp feedback circuits can be used, since
operational amplifiers have high input impedance and low output impedance and therefore
introduce negligible loading when stages are cascaded.
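As a rough illustration of this cascade idea (not taken from the original notes), scipy's tf2sos can split a higher-order voltage transfer function into second-order sections, each of which could then be realized by one op-amp feedback stage. The coefficient values below are example values only (a Butterworth-like 4th-order denominator).

```python
from scipy import signal

b = [1.0]                                    # numerator coefficients (example)
a = [1.0, 2.613, 3.414, 2.613, 1.0]          # denominator coefficients (example)

sos = signal.tf2sos(b, a)                    # each row: [b0 b1 b2 a0 a1 a2] of one biquad
for k, section in enumerate(sos):
    print(f"section {k}: {section}")
```

Each printed row is one second-order (or first-order) factor of the original function; cascading the corresponding op-amp stages recovers the overall transfer function when loading between stages is negligible.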
Example 1 Realize the following transfer function using op-amps and RC elements.
Example: a discrete time signal may be written as
x(n) = {0, 2, 4, 1, 3, 1},  n = 0, 1, 2, 3, 4, 5
[Stem plot of x(n) against n]
Mathematically, the functional relationship between input and output may be written as
y(t) = H[x(t)]
Symbolically, x(t) → y(t)
where
x(t) → input signal
y(t) → output signal
H → system operator
Example: audio and video amplifiers.
When a system satisfies the properties of linearity and time invariance, it is called an LTI
(Linear Time Invariant) system.
Applications of signals and systems
Signals and systems are used throughout science and engineering. Some of the applications are:
Image processing
Speech processing
Communication
Audio and video equipment
Biomedical engineering
Problems
1. Sketch the discrete time signal whose samples are x(–2) = –3, x(–1) = –1, x(0) = 2, x(1) = 3, x(2) = 4.
Solution:
x(–2) = –3, x(–1) = –1, x(0) = 2, x(1) = 3, x(2) = 4
[Stem plot of x(n): values –3, –1, 2, 3, 4 at n = –2, –1, 0, 1, 2]
3. x(n) = { n + 2, n > 0
            n + 1, n = 0
            n + 3, n < 0 }.  Draw the DT signal.
Solution:
n > 0, x(n) = n + 2
n = 1, x(1) = 3
n = 2, x(2) = 4
n = 3, x(3) = 5
n = 0, x(n) = n + 1
x(0) = 1
n < 0, x(n) = n + 3
x(–1) = 2
x(–2) = 1
x(–3) = 0
x(n) = {0, 1, 2, 1, 3, 4, 5},  n = –3, ..., 3
[Stem plot of x(n) for n = –3 to 3]
4. Sketch the continuous time signal x(t) = e–t for an interval 0 < t < 2. Sample the signal
with a sampling period T = 0.4 second and sketch the discrete time signal.
Solution:
x(t) = e–t; 0 < t < 2
Continuous time signal
Consider t = {0, 0.5, 1, 1.5, 2}
x(0) = e–0 = 1
x(0.5) = e–0.5 = 0.606
x(1) = e–1 = 0.368
x(1.5) = e–1.5 = 0.223
x(2) = e–2 = 0.135
[Plot of the continuous time signal x(t) = e^(–t) for 0 ≤ t ≤ 2]
Discrete time signal
T = 0.4 second
x(nT) = e^(–nT) = e^(–0.4n)
For choosing the range of n: 0.4n = 2 gives n = 2/0.4 = 5, i.e., n = 0 to 4.
x(n) = e^(–0.4n)
n = 0, x(0) = e0 = 1
n = 1, x(1) = e–0.4 = 0.67
n = 2, x(2) = e–0.8 = 0.449
n = 3, x(3) = e–1.2 = 0.301
n = 4, x(4) = e–1.6 = 0.201
[Stem plot of the sampled signal x(n) = e^(–0.4n), n = 0 to 4]
n = 0, x(0) = 2 sin(0) = 0
n = 1, x(1) = 2 sin(0.4π) = 1.9
n = 2, x(2) = 2 sin(0.8π) = 1.17
n = 3, x(3) = 2 sin(1.2π) = –1.17
n = 4, x(4) = 2 sin(1.6π) = –1.9
n = 5, x(5) = 2 sin(2π) = 0
[Stem plot of x(n) for n = 0 to 5]
x(t) = u(t) = { 1, t ≥ 0
                0, t < 0 }
2. Ramp signal
The ramp signal is defined as
x(t) = { At, t ≥ 0
         0,  t < 0 }
The unit ramp signal is defined as
x(t) = r(t) = { t, t ≥ 0
                0, t < 0 }
[Plots of the ramp signal and the unit ramp signal]
(or) r(t) = t·u(t)

3. Pulse (gate) signal
The unit gate signal of width τ is defined as
Π(t/τ) = { 1, |t| ≤ τ/2
           0, otherwise }

4. Triangular signal
x(t) = { 1 − |t|/a, |t| ≤ a
         0,         |t| > a }
[Plot of the triangular signal between –a and a]

5. Impulse signal
x(t) = { A, t = 0
         0, t ≠ 0 }
It has zero duration and infinite magnitude. The unit impulse signal (delta signal) is defined as
δ(t) = { 1, t = 0
         0, t ≠ 0 }
Properties of the unit impulse signal (all integrals over –∞ < t < ∞):
(i) ∫ δ(t) dt = 1
(ii) ∫ x(t) δ(t) dt = x(0)
(iii) ∫ x(t) δ(t − t0) dt = x(t0)
(iv) ∫ x(τ) δ(t − τ) dτ = x(t)
(v) δ(at) = (1/|a|) δ(t)
6. Sinusoidal signal
A continuous time sinusoidal signal is given by
x(t) = A sin(ωt + φ)
where
A → amplitude
ω → angular frequency in rad/s
φ → phase angle in radians
A sinusoidal signal is an example of a periodic signal. The time period of this signal is given by
T = 2π/ω
[Plot of the sinusoidal signal with amplitude ±A]
7. Exponential signal
The exponential signal plays an important role in signal analysis. It is classified into
(i) Real exponential signal
(ii) Complex exponential signal
The real exponential signal is x(t) = A e^(at).
[Plots: case (i) a = 0 (constant A); case (ii) a > 0 (growing); case (iii) a < 0 (decaying)]
Complex exponential signals
The complex exponential signal is represented as
x(t) = e^(st)
where s is complex, i.e., s = σ + jω.
Case (i): σ = 0. Then x(t) = e^(jωt), a purely oscillatory (sinusoidal) signal.
[Plot of the oscillatory signal]
Case (ii): ω = 0. For σ > 0 the signal grows exponentially, and for σ < 0 it decays exponentially.
[Figures: exponentially growing signal; exponentially decaying signal]
Case (iii): σ ≠ 0 and ω ≠ 0, so x(t) = e^(σt) e^(jωt). For σ < 0 the signal is sinusoidally damped (decaying), and for σ > 0 it is sinusoidally growing.
[Figures: sinusoidally exponential growing signal; sinusoidally exponential decaying signal]
8. Parabolic signal
The parabolic signal is defined as
x(t) = { At^2/2, t ≥ 0
         0,      t < 0 }
[Plot: x(t) takes the values 0.5A, 2A, 4.5A at t = 1, 2, 3]
The unit parabolic signal is defined as
P(t) = { t^2/2, t ≥ 0
         0,     t < 0 }
[Plot: P(t) takes the values 0.5, 2, 4.5 at t = 1, 2, 3]
9. Signum function [Sgn(t)]
The signum function (or signum signal) is defined as
x(t) = Sgn(t) = {  1, t > 0
                   0, t = 0
                  −1, t < 0 }
[Plot of Sgn(t)]
The sinc signal is defined as
x(t) = sinc(t) = sin t / t,  −∞ < t < ∞
[Plot of the sinc signal]
2. Prove that d[r(t)]/dt = u(t).
L.H.S.:
r(t) = { t, t ≥ 0
         0, t < 0 }
d[r(t)]/dt = { 1, t ≥ 0
               0, t < 0 }
R.H.S.:
u(t) = { 1, t ≥ 0
         0, t < 0 }
L.H.S. = R.H.S., i.e., d[r(t)]/dt = u(t). Hence proved.
1.5 Elementary Discrete time signals/Standard Discrete time signals
1. Impulse function
It is defined as
x(n) = { A, n = 0
         0, n ≠ 0 }
The unit impulse sequence (or unit sample sequence) is defined as
δ(n) = { 1, n = 0
         0, n ≠ 0 }
[Stem plots of the impulse sequence and the unit impulse sequence]
2. Step function (or step sequence)
It is defined as
x(n) = { A, n ≥ 0
         0, n < 0 }
[Stem plot of the step sequence]
3. Ramp sequence
It is defined as
x(n) = { An, n ≥ 0
         0,  n < 0 }
The unit ramp sequence is defined as
r(n) = { n, n ≥ 0
         0, n < 0 }
[Stem plots of the ramp sequence (values A, 2A, 3A) and the unit ramp sequence (values 1, 2, 3) at n = 1, 2, 3]
4. Parabolic sequence
The parabolic sequence is defined as
x(n) = { An^2/2, n ≥ 0
         0,      n < 0 }
The unit parabolic sequence is defined as
P(n) = { n^2/2, n ≥ 0
         0,     n < 0 }
[Stem plots of the parabolic sequences: values 0.5A, 2A, 4.5A and 0.5, 2, 4.5 at n = 1, 2, 3]
5. Sinusoidal sequence
The discrete time sinusoidal sequence is defined as
x(n) = A sin(ωn + φ)
where
A → amplitude
ω → angular frequency
φ → phase angle
n → integer
The period of the discrete time sinusoidal sequence is
N = (2π/ω)·m, where m is the smallest integer that makes N an integer.
6. Exponential sequence
It is classified into
(i) Real exponential sequence
(ii) Complex exponential sequence
Real exponential sequence
The real exponential sequence is represented as
x(n) = an for all n
We consider four cases depending on the value of a.
Case (i): a > 1
The sequence is said to be exponentially growing sequence.
Case (ii): 0 < a < 1
The sequence is said to be exponentially decaying sequence.
[Stem plots: exponentially growing sequence (a > 1) and exponentially decaying sequence (0 < a < 1)]
Case (iii): –1 < a < 0
The sequence decays exponentially in magnitude while alternating in sign: positive, negative, positive, and so on.
Case (iv): a < –1
The sequence grows exponentially in magnitude while alternating in sign.
[Stem plots for –1 < a < 0 and a < –1]
Complex exponential sequence
The complex exponential sequence is represented as
x(n) = a^n e^(j(ωn + φ))
We consider three cases based on a.
Case (i): a = 1, x(n) = e^(j(ωn + φ)); the sequence is said to be purely sinusoidal.
Case (ii): a > 1, the magnitude of the sequence grows exponentially.
Case (iii): a < 1, the magnitude of the sequence decays exponentially.
[Stem plots of x(n) for a > 1 and a < 1]
The discrete time unit impulse is the first difference of the discrete time unit step:
δ(n) = u(n) − u(n − 1)
Conversely, the unit step is the running sum of the unit impulse:
u(n) = Σ (m = −∞ to n) δ(m)
or, substituting k = n − m,
u(n) = Σ (k = 0 to ∞) δ(n − k)
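These two relations are easy to verify numerically on a finite window. The sketch below is my own illustration with numpy; the window length and variable names are arbitrary choices.

```python
import numpy as np

n = np.arange(-5, 6)
delta = (n == 0).astype(int)          # unit impulse delta(n)
u = (n >= 0).astype(int)              # unit step u(n)

# First difference of the step gives the impulse ...
print(np.array_equal(np.diff(u, prepend=0), delta))   # True
# ... and the running sum of the impulse gives the step
print(np.array_equal(np.cumsum(delta), u))            # True
```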
(i) Signal Addition
The sum of two signals is obtained by adding their values at every instant of time. For two
signals x1(t) and x2(t), the sum is y(t) = x1(t) + x2(t).
[Plots of x1(t), x2(t), and the sum y(t) = x1(t) + x2(t)]
(ii) Signal Multiplication
The multiplication of two signals can be obtained by multiplying their values at every instant of
time. Consider the two signals x 1(t) and x 2(t) then the multiplication of these signals
y(t) = x1(t) x2(t).
Example:
[Plots of x1(t), x2(t), and the product y(t) = x1(t)·x2(t)]
(iii)Amplitude Scaling
The amplitude scaling of a signal x(t) can be represented by
y(t) = A x(t)
where A → scaling factor
If A < 1, the signal is attenuated.
If A > 1, the signal is amplified.
Example: consider x(t) = cos t; then y(t) = 0.5 cos t for A = 0.5 and y(t) = 2 cos t for A = 2.
[Plots of x(t) = cos t, 0.5 cos t, and 2 cos t]
(iv) Time Scaling
The time scaling of a signal x(t) can be accomplished by replacing t by at in it. It is expressed as
y(t) = x(at)
If a > 1 the signal is compressed in time, and if 0 < a < 1 it is expanded.
[Plots of x(t), the expanded signal, and the compressed signal]
(v) Time Reversal
The time reversal of a signal x(t) can be obtained by folding the signal about t = 0. It is denoted
by x(–t). It is obtained by replacing the independent variable t by (–t). It is a mirror image of the
original signal x(t) with respect to the time origin t = 0.
Example:
[Plots of x(t) and its time-reversed version x(–t)]
(vi)Time shifting
The time shifting of a signal x(t) can be represented by
y(t) = x(t – t0)
If t0 > 0, the signal is shifted to the right (a positive, right-sided shift); the shifting delays the signal.
If t0 < 0, the signal is shifted to the left (a negative, left-sided shift); the shifting advances the signal.
Example:
[Plots of x(t), the delayed signal x(t − t0), and the advanced signal x(t + t0)]
Problems
1. Sketch the signal u(t) – u(t – 4).
Solution:
[Plots of u(t), u(t − 4), and the pulse u(t) − u(t − 4), which equals 1 for 0 ≤ t < 4]
2. Sketch the signal x(2t + 3) for the given signal x(t).
[Plot of x(t), nonzero for −1 ≤ t ≤ 1]
Solution:
(i) Using time shifting: x(t + 3) is nonzero for −4 ≤ t ≤ −2.
(ii) Using time scaling: x(2t + 3) is nonzero for −2 ≤ t ≤ −1.
[Plots of x(t + 3) and x(2t + 3)]
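The support of the transformed signal can be checked numerically. The sketch below is my own illustration: the triangular shape chosen for x(t) is an assumption (only its support on [−1, 1], taken from the figure, matters).

```python
import numpy as np

def x(t):
    """A stand-in triangular pulse on [-1, 1]; the exact shape of x(t) in the
    original figure is an assumption, only its support matters here."""
    return np.where(np.abs(t) <= 1, 1 - np.abs(t), 0.0)

t = np.linspace(-3, 0, 13)
print(x(2*t + 3))   # nonzero only where -1 <= 2t+3 <= 1, i.e. for -2 <= t <= -1
```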
[Stem plot of u(−n + 5), which equals 1 for n ≤ 5]
1.8 Classification of CT and DT signals
Both CT and DT signals are classified into several types
(i) Deterministic and Random signals
(ii) Periodic and Aperiodic signals
(iii) Even and odd signals
(iv) Causal and Non-causal signals
(v) Energy and power signals
1.8.1 Deterministic and Random signals
A Deterministic signal can be completely represented by a mathematical equation at any time.
The nature and amplitude of such signals at any time can be predicted.
Example: sinusoidal signal, exponential signal
[Plot of a deterministic signal, x(t) = A sin ωt]
A random signal cannot be predicted at any time. A signal whose characteristics are random in
nature is called a random signal; it cannot be represented by a mathematical expression.
Example: noise signals
[Plot of a random signal]
1.8.2 Periodic and Aperiodic signals
A continuous time signal x(t) is said to be periodic if it satisfies the condition x(t + T) = x(t) for
all t. The smallest value of T which satisfies this condition is called the fundamental period.
The fundamental frequency is
f = 1/T
The fundamental angular frequency is given by
ω0 = 2π/T = 2πf
so the fundamental period is T = 2π/ω0.
A signal is aperiodic if the condition x(t + T) = x(t) is not satisfied even for one value of t.
Similarly, a discrete time signal x(n) is called periodic if it satisfies the condition x(n + N) = x(n)
for all integers n. The smallest value of N which satisfies this condition is called the fundamental period.
A signal is aperiodic if the condition x(n + N) = x(n) is not satisfied even for one value of n.
The fundamental angular frequency is given by
ω0 = 2πm/N, where m is an integer.
Examples of continuous time periodic signals are the complex exponential and sinusoidal signals.
All singularity functions, i.e., the unit step, unit ramp and unit impulse signals, are aperiodic signals.
Problems
1. Prove whether the complex exponential signal is periodic or not.
Solution:
x(t) = A e^(jω0 t)
x(t + T) = A e^(jω0 (t + T)) = A e^(jω0 t) e^(jω0 T)
Since ω0 = 2π/T, we have ω0 T = 2π, so
x(t + T) = A e^(jω0 t) e^(j2π) = A e^(jω0 t) [cos 2π + j sin 2π] = A e^(jω0 t) [1 + j0] = A e^(jω0 t) = x(t)
x(t + T) = x(t). Hence it is periodic.
2. Prove whether the sinusoidal signal is periodic or not.
Solution:
x(t) = A sin ω0 t
x(t + T) = A sin ω0 (t + T) = A sin(ω0 t + ω0 T)
Since ω0 = 2π/T, we have ω0 T = 2π, so
x(t + T) = A sin(ω0 t + 2π) = A sin ω0 t = x(t)
x(t + T) = x(t). Hence it is periodic.
3. Prove whether the cosine signal is periodic or not.
Solution:
x(t) = A cos ω0 t
x(t + T) = A cos ω0 (t + T) = A cos(ω0 t + ω0 T)
Since ω0 = 2π/T, we have ω0 T = 2π, so
x(t + T) = A cos(ω0 t + 2π) = A cos ω0 t = x(t)
x(t + T) = x(t). Hence it is periodic.
4. Find the fundamental period of the following signals.
(i) x(t) = e^(j7πt)
(ii) x(t) = 10 sin(20πt + π/3)
(iii) x(t) = 2 cos(πt/3)
(iv) x(n) = 2 cos(πn/4) + sin(πn/8) + 2 cos(πn/2 + π/6)
Solution:
(i) x(t) = e^(j7πt)
Fundamental period T = 2π/ω0, with ω0 = 7π, so T = 2π/7π = 2/7 = 0.285 second.
(ii) x(t) = 10 sin(20πt + π/3)
ω0 = 20π, T = 2π/20π = 0.1 second.
(iii) x(t) = 2 cos(πt/3)
ω0 = π/3, T = 2π/(π/3) = 6 seconds.
(iv) x(n) = 2 cos(πn/4) + sin(πn/8) + 2 cos(πn/2 + π/6)
N1 = (2π/(π/4))·m = 8m; choose m = 1, N1 = 8
N2 = (2π/(π/8))·m = 16m; choose m = 1, N2 = 16
N3 = (2π/(π/2))·m = 4m; choose m = 1, N3 = 4
The L.C.M. of N1, N2, N3 is 16.
Fundamental period N = 16.
5. Find whether the following signals are periodic or not. If periodic, determine the fundamental period.
(i) x(t) = 2 sin 100πt + cos 250πt
(ii) x(t) = 2 sin 3t + 3 cos(4t + 1)
(iii) x(t) = sin^2 t
(iv) x(t) = 3u(t) + sin 4t
(v) x(t) = 2 cos(4πt/3) + 5 sin(2πt/3)
(vi) x(t) = 3 cos 5t + 2 sin πt
(vii) x(t) = cos(t/3) + sin(t/5)
(viii) x(t) = j e^(j5t)
(ix) x(t) = e^(j8πt)
Solution:
(i) x(t) = 2 sin 100πt + cos 250πt
Time period T1 = 2π/100π = 0.02 second
T2 = 2π/250π = 0.008 second
The ratio of the two periods, T1/T2 = 0.02/0.008 = 5/2, is a rational number, so x(t) is periodic.
Fundamental period T = 2T1 = 5T2 = 2π/50π = 0.04 second
(ii) x(t) = 2 sin 3t + 3 cos(4t + 1)
Period T1 = 2π/3 second
T2 = 2π/4 second
The ratio of the two periods, T1/T2 = (2π/3)/(2π/4) = 4/3, is a rational number, so x(t) is a periodic signal.
Fundamental period T = 3T1 = 4T2 = 3·(2π/3) = 4·(2π/4) = 2π second
(iii) x(t) = sin^2 t
x(t) = (1 − cos 2t)/2
The signal is a constant plus a cosine of frequency 2 rad/s, so it is periodic with fundamental period T = 2π/2 = π second.
(v) x(t) = 2 cos(4πt/3) + 5 sin(2πt/3)
Period T1 = 2π/(4π/3) = 3/2 second
T2 = 2π/(2π/3) = 3 second
The ratio of the two periods, T1/T2 = (3/2)/3 = 1/2, is a rational number, so x(t) is periodic.
Fundamental period T = 2T1 = T2 = 3 second
(vi) x(t) = 3 cos 5t + 2 sin πt
Period T1 = 2π/5 second
T2 = 2π/π = 2 second
The ratio of the two periods, T1/T2 = (2π/5)/2 = π/5, is an irrational number, so x(t) is not periodic.
(vii) x(t) = cos(t/3) + sin(t/5)
T1 = 2π/(1/3) = 6π second
T2 = 2π/(1/5) = 10π second
The ratio of the two periods, T1/T2 = 6π/10π = 3/5, is a rational number, so x(t) is periodic.
Fundamental period T = 5T1 = 3T2 = 30π second
Period T1 = 2π/1 = 2π second
T2 = 2π/√2 second
The ratio of the two periods, T1/T2 = √2, is an irrational number, so the signal is not periodic.
6. Check whether x(t) = t e^(sin t) is periodic.
Solution:
x(t + T) = (t + T) e^(sin(t + T)); try T = 2π:
x(t + 2π) = (t + 2π) e^(sin(t + 2π)) = (t + 2π) e^(sin t) ≠ x(t)
Hence the given x(t) is a non-periodic signal.
7. Check whether the following signals are periodic or not. If periodic, determine the fundamental period.
(i) x(n) = cos(2πn/5) + cos(2πn/7)
(ii) x(n) = e^(j7πn)
(iii) x(n) = cos(n/8) cos(πn/8)
(iv) x(n) = cos(n/4)
(v) x(n) = sin(2πn/3)
(vi) x(n) = sin(n/8)
(viii) x(n) = (3/5) e^(j3π(n + 1/2))
(ix) x(n) = cos^2(πn/8)
(x) x(n) = 1 + e^(j4πn/7) − e^(j2πn/5)
Solution:
(i) x(n) = cos(2πn/5) + cos(2πn/7)
Period N = (2π/ω0)·m
N1 = (2π/(2π/5))·m = 5m
N2 = (2π/(2π/7))·m = 7m
The ratio of the two periods, N1/N2 = 5/7, is a rational number, so the signal is periodic.
Fundamental period N = 7N1 = 5N2 = 35.
(ii) x(n) = e^(j7πn)
Fundamental period N = (2π/7π)·m = (2/7)m; choose m = 7, N = 2.
(iii) x(n) = cos(n/8) cos(πn/8)
Period N1 = (2π/(1/8))·m = 16πm, which is never an integer.
N2 = (2π/(π/8))·m = 16m; choose m = 1, N2 = 16.
The ratio of the two periods, N1/N2 = π, is not a rational number, so the signal is not periodic.
(iv) x(n) = cos(n/4)
Period N = (2π/(1/4))·m = 8πm, which is never an integer, so the signal is not periodic.
(v) x(n) = sin(2πn/3)
Period N = (2π/(2π/3))·m = 3m; choose m = 1, N = 3.
Hence the given signal is periodic with period N = 3.
(vi) x(n) = sin(n/8)
Period N = (2π/(1/8))·m = 16πm, which is never an integer, so the signal is not periodic.
(viii) x(n) = (3/5) e^(j3π(n + 1/2))
Period N = (2π/3π)·m = (2/3)m; choose m = 3, N = 2.
Hence the given signal is periodic with period N = 2.
(ix) x(n) = cos^2(πn/8)
x(n) = cos^2(πn/8) = [1 + cos(πn/4)]/2
N = (2π/(π/4))·m = 8m; choose m = 1, N = 8.
Hence the given signal is periodic with fundamental period N = 8.
(x) x(n) = 1 + e^(j4πn/7) − e^(j2πn/5)
N1 = (2π/(4π/7))·m = (7/2)m; choose m = 2, N1 = 7.
N2 = (2π/(2π/5))·m = 5m; choose m = 1, N2 = 5.
The ratio of the two periods, N1/N2 = 7/5, is a rational number, so the signal is periodic with fundamental period N = 5N1 = 7N2 = 35.
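The periodicity test used above (N = (2π/ω)·m must be an integer for some integer m) can be mechanized. The sketch below is my own illustration; the tolerance and denominator bound are arbitrary choices, so it is only a rough numerical check.

```python
from fractions import Fraction
from math import pi

def dt_sin_period(omega, max_den=1000, tol=1e-9):
    """Fundamental period N of sin(omega*n + phi): the sequence is periodic only
    if omega/(2*pi) is a rational number m/N, in which case N = (2*pi/omega)*m."""
    ratio = Fraction(omega / (2 * pi)).limit_denominator(max_den)
    if abs(float(ratio) - omega / (2 * pi)) > tol:
        return None                      # omega/(2*pi) not (nearly) rational -> aperiodic
    return ratio.denominator             # smallest integer N

print(dt_sin_period(pi / 4))      # 8
print(dt_sin_period(2 * pi / 3))  # 3
print(dt_sin_period(1 / 8))       # None (aperiodic)
```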
1.8.3 Even and Odd signals
A continuous time signal x(t) is said to be an even signal if it satisfies the condition
x(−t) = x(t) for all t
[Plot of an even signal]
A continuous time signal x(t) is said to be an odd signal if it satisfies the condition
x(−t) = −x(t) for all t
Example: x(t) = A sin ωt
[Plot of an odd signal]
Any signal x(t) can be expressed as sum of even and odd components.
i.e., x(t) = xe(t) + xo(t) ----------- (1)
where, xe(t) even component of the signal
xo(t) odd component of the signal
Replacing t by −t in equation (1),
x(−t) = xe(−t) + xo(−t)
x(−t) = xe(t) − xo(t) ----------- (2)
Adding equations (1) and (2),
x(t) + x(−t) = 2xe(t)
xe(t) = [x(t) + x(−t)]/2
Subtracting equation (2) from (1),
x(t) − x(−t) = 2xo(t)
xo(t) = [x(t) − x(−t)]/2
Similarly, a discrete time signal x(n) is even if it satisfies the condition
x(−n) = x(n) for all n
A DT signal x(n) is odd if it satisfies the condition
x(−n) = −x(n) for all n
For a DT signal the even and odd parts can be obtained from
xe(n) = [x(n) + x(−n)]/2
xo(n) = [x(n) − x(−n)]/2
Note:
Even × Even = Even
Odd × Odd = Even
Even × Odd = Odd
Problems
1. Find the even and odd components of x(t) = cost + sint.
Solution:
Given x(t) = cost + sint
x(−t) = cos(−t) + sin(−t) = cos t − sin t
xe(t) = [x(t) + x(−t)]/2 = [(cos t + sin t) + (cos t − sin t)]/2 = 2 cos t / 2 = cos t
xo(t) = [x(t) − x(−t)]/2 = [(cos t + sin t) − (cos t − sin t)]/2 = 2 sin t / 2 = sin t
2. Find the even and odd components of x(t) = 1 + 2t + 3t2.
Solution:
Given x(t) = 1 + 2t + 3t2
x(–t) = 1 – 2t + 3t2
xe(t) = [x(t) + x(−t)]/2 = [(1 + 2t + 3t^2) + (1 − 2t + 3t^2)]/2 = (2 + 6t^2)/2 = 1 + 3t^2
xo(t) = [x(t) − x(−t)]/2 = [(1 + 2t + 3t^2) − (1 − 2t + 3t^2)]/2 = 4t/2 = 2t
3. Find the even and odd components of the sequence x(n) = {5, 4, 3, 2, 1}, n = 0, 1, 2, 3, 4 (x(n) = 0 otherwise).
Solution:
The even component is xe(n) = [x(n) + x(−n)]/2:
n = 0: xe(0) = [x(0) + x(0)]/2 = (5 + 5)/2 = 5
n = 1: xe(1) = [x(1) + x(−1)]/2 = (4 + 0)/2 = 2
n = 2: xe(2) = [x(2) + x(−2)]/2 = (3 + 0)/2 = 1.5
n = 3: xe(3) = [x(3) + x(−3)]/2 = (2 + 0)/2 = 1
n = 4: xe(4) = [x(4) + x(−4)]/2 = (1 + 0)/2 = 0.5
The odd component is xo(n) = [x(n) − x(−n)]/2:
n = 0: xo(0) = [x(0) − x(0)]/2 = 0
n = 1: xo(1) = [x(1) − x(−1)]/2 = (4 − 0)/2 = 2
n = 2: xo(2) = [x(2) − x(−2)]/2 = (3 − 0)/2 = 1.5
n = 3: xo(3) = [x(3) − x(−3)]/2 = (2 − 0)/2 = 1
n = 4: xo(4) = [x(4) − x(−4)]/2 = (1 − 0)/2 = 0.5
For the sequence x(n) = {3, 2, 1, 4, −2}, n = −2, −1, 0, 1, 2:
The even component is xe(n) = [x(n) + x(−n)]/2:
n = −2: xe(−2) = [x(−2) + x(2)]/2 = (3 + (−2))/2 = 0.5
n = −1: xe(−1) = [x(−1) + x(1)]/2 = (2 + 4)/2 = 3
n = 0: xe(0) = [x(0) + x(0)]/2 = (1 + 1)/2 = 1
n = 1: xe(1) = [x(1) + x(−1)]/2 = (4 + 2)/2 = 3
n = 2: xe(2) = [x(2) + x(−2)]/2 = (−2 + 3)/2 = 0.5
The odd component is xo(n) = [x(n) − x(−n)]/2:
n = −2: xo(−2) = [x(−2) − x(2)]/2 = (3 − (−2))/2 = 2.5
n = −1: xo(−1) = [x(−1) − x(1)]/2 = (2 − 4)/2 = −1
n = 0: xo(0) = [x(0) − x(0)]/2 = 0
n = 1: xo(1) = [x(1) − x(−1)]/2 = (4 − 2)/2 = 1
n = 2: xo(2) = [x(2) − x(−2)]/2 = (−2 − 3)/2 = −2.5
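The decomposition formulas lend themselves to a one-line numerical check. The sketch below is my own illustration with numpy; it assumes an odd-length sequence centred at n = 0, and uses the second sequence from the worked problem above (as reconstructed there).

```python
import numpy as np

def even_odd_parts(x):
    """Even/odd decomposition of a finite sequence x(n) centred at n = 0,
    so that x[::-1] represents x(-n)."""
    x = np.asarray(x, dtype=float)
    xe = (x + x[::-1]) / 2
    xo = (x - x[::-1]) / 2
    return xe, xo

xe, xo = even_odd_parts([3, 2, 1, 4, -2])   # n = -2 ... 2
print(xe)   # [ 0.5  3.   1.   3.   0.5]
print(xo)   # [ 2.5 -1.   0.   1.  -2.5]
```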
6. Find the even and odd signal for the unit step signal.
Solution:
x(t) = u(t)
[Plot of u(t)]
For the even component, xe(t) = [x(t) + x(−t)]/2 = [u(t) + u(−t)]/2 = 1/2 for all t.
[Plots of x(−t) = u(−t) and xe(t) = 0.5]
For the odd component, xo(t) = [x(t) − x(−t)]/2 = [u(t) − u(−t)]/2, which equals +1/2 for t > 0 and −1/2 for t < 0.
[Plots of −x(−t) and xo(t)]
7. Find the even and odd component of the signal x(t) = ejt.
Solution:
Given x(t) = e^(jt)
xe(t) = (e^(jt) + e^(−jt))/2 = cos t
xo(t) = (e^(jt) − e^(−jt))/2 = j sin t
8. Find the even and odd component of the signal x(t) = cost sint + 2sin2t cost
Solution:
Given x(t) = cost sint + 2sin2t cost
x(−t) = cos(−t) sin(−t) + 2 sin^2(−t) cos(−t) = −cos t sin t + 2 sin^2 t cos t
xe(t) = [x(t) + x(−t)]/2 = [(cos t sin t + 2 sin^2 t cos t) + (−cos t sin t + 2 sin^2 t cos t)]/2 = 4 sin^2 t cos t / 2 = 2 sin^2 t cos t
xo(t) = [x(t) − x(−t)]/2 = [(cos t sin t + 2 sin^2 t cos t) − (−cos t sin t + 2 sin^2 t cos t)]/2 = 2 cos t sin t / 2 = cos t sin t
1.8.4 Causal and Non-causal signals
A continuous time signal x(t) is said to be causal if x(t) = 0 for t < 0, otherwise the signal is non-
causal. A continuous time signal x(t) is said to be anticausal if x(t) = 0 for t > 0.
Similarly a discrete time signal x(n) is said to be causal if x(n) = 0 for n < 0, otherwise the signal
is non-causal. A discrete time signal x(n) is said to be anticausal if x(n) = 0 for n > 0.
Problems
1. Find which of the following signals are causal or non-causal.
(i) x(t) = e3t u(t – 3)
(ii) x(t) = cos3t
(iii) x(t) = 4sinct
1.8.5 Energy and Power signals
The average power associated with a current i(t) is

P = lim(T→∞) (1/2T) ∫(−T to T) i^2(t) dt  watts

The energy E of a continuous time signal x(t) is defined as

E = lim(T→∞) ∫(−T to T) |x(t)|^2 dt = ∫(−∞ to ∞) |x(t)|^2 dt  joules

The average power of a continuous time signal x(t) is defined as

P = lim(T→∞) (1/2T) ∫(−T to T) |x(t)|^2 dt  watts

RMS value = √P
For energy signals, the energy is finite (0 < E < ∞) and the average power is zero.
For power signals, the average power is finite (0 < P < ∞) and the energy is infinite.
For discrete time signals:
If the total energy is finite and the average power is zero, the signal is said to be an energy signal, where

E = lim(N→∞) Σ(n = −N to N) |x(n)|^2

If the total energy is infinite and the average power is finite, the signal is said to be a power signal, where

P = lim(N→∞) [1/(2N + 1)] Σ(n = −N to N) |x(n)|^2

Comparison of energy and power signals:
Energy signals | Power signals
2. The normalized energy is finite and the average power is zero. | The average power is finite and the energy is infinite.
3. Non-periodic signals are energy signals. | Periodic signals are power signals.
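These definitions can be approximated numerically on a finite window. The sketch below is my own illustration (finite-window approximation only; the window lengths and helper name are arbitrary choices): for a truncated unit step the energy keeps growing with the window while the average power settles to a finite value, the signature of a power signal.

```python
import numpy as np

def energy_and_power(x):
    """Approximate energy and average power of a finite-length DT signal sample."""
    x = np.asarray(x, dtype=float)
    E = np.sum(np.abs(x) ** 2)     # energy over the given samples
    P = E / len(x)                 # average power over the same window
    return E, P

# Unit step truncated to 0 <= n < N: energy grows with N, power tends to 1
for N in (10, 100, 1000):
    print(N, energy_and_power(np.ones(N)))
```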
(iii) x(t) = rect(t/10) cos ω0 t

x(t) = rect(t/10) cos ω0 t = { cos ω0 t, −5 ≤ t ≤ 5
                               0,        otherwise }

Energy E = ∫(−∞ to ∞) |x(t)|^2 dt = ∫(−5 to 5) cos^2 ω0 t dt = ∫(−5 to 5) [1 + cos 2ω0 t]/2 dt
         = (1/2) ∫(−5 to 5) dt + (1/2) ∫(−5 to 5) cos 2ω0 t dt
         = (1/2) [t] from −5 to 5 = 10/2
E = 5 J

Power P = lim(T→∞) (1/2T) ∫(−T to T) |x(t)|^2 dt = lim(T→∞) (1/2T) ∫(−5 to 5) cos^2 ω0 t dt = lim(T→∞) (1/2T)·5 = 0
P = 0
Therefore the energy is finite and the power is zero. Hence the signal is an energy signal.
1.9 CT systems and DT systems
Continuous time (CT) systems
A system which processes a continuous time input signal and produces a continuous time output
signal is called a continuous time system.
[Block diagram: x(t) → Continuous time system → y(t)]
Mathematically, the functional relationship between input and output may be written as
y(t) = H[x(t)]
Symbolically, x(t) → y(t)
where
x(t) → continuous time input signal
y(t) → continuous time output signal
H → system operator
When a continuous time system satisfies the properties of linearity and time invariance, it
is called a Linear Time Invariant (LTI) continuous time system.
Discrete time (DT) system
A system which processes a discrete time input signal and produces a discrete time output signal
is called a discrete time system.
Mathematically, the functional relationship between input and output may be written as
y(n) = H[x(n)]
Symbolically, x(n) → y(n)
where
x(n) → discrete time input signal
y(n) → discrete time output signal
H → system operator
When a discrete time system satisfies the properties of linearity and time invariance, it is
called a linear time invariant (LTI) discrete time system.
1.10 Classifications of systems/properties of system
1. Lumped parameter and distributed parameter systems.
2. Static (memoryless) and dynamic (memory) systems.
3. Linear and Non-Linear systems.
4. Time variant and Time invariant systems.
A dynamic (memory) system is one whose output depends on past or future values of the input in addition to the present value. Examples:
y(t) = x(t) + dx(t)/dt
y(n) = x(n) + x(n + 1)
Any continuous time system described by a differential equation, or any discrete time system
described by a difference equation, is a dynamic system.
Problems
1. Check whether the following systems are static (or) dynamic.
(i) y(t) = x(t – 4)
(ii) y(n) = x2(n)
(iii) y(t) = d^2x(t)/dt^2 + x(t)
Solution:
(i) y(t) = x(t − 4): the output depends on a past value of the input; the system is dynamic.
(ii) y(n) = x^2(n): the output depends only on the present value of the input; the system is static.
(iii) y(t) = d^2x(t)/dt^2 + x(t)
The system is described by a differential equation. Therefore the system is dynamic system.
(iv) The output y(t) is the integral of the input x(t). Therefore the system is a dynamic system.
(v) y(n) = x(n) + x(n − 3)
Put n = 0: y(0) = x(0) + x(−3)
The output depends on present and past values of the input; therefore the system is a dynamic system.
(The system is also described by a difference equation, which again shows it is dynamic.)
(vi) y(n) = x(n + 5)
Put n = 0: y(0) = x(5)
The output depends on a future value of the input; therefore the system is a dynamic system.
(vii) y(t) = x(3t)
Put t = 2: y(2) = x(6)
The output depends on a future value of the input; therefore the system is a dynamic system.
1.10.3 Linear and Non-Linear systems
A system that satisfies the superposition principle is said to be a linear system. A system
that does not satisfy the superposition principle is said to be a non-linear system.
The superposition principle consist of two properties.
(i) Additive property
(ii) Scaling/Homogeneity property
Consider the two systems defined as follows.
y1(t) = H[x1(t)]
y2(t) = H[x2(t)]
Additive property is,
H[x1(t) + x2(t)] = H[x1(t)] + H[x2(t)]
= y1(t) + y2(t)
Scaling property is,
H[ax(t)] = aH[x(t)] = ay(t)
The superposition principle states that the response to a weighted sum of input signals is
equal to the weighted sum of the outputs corresponding to each of the individual input signal.
i.e., H[ax1(t) + bx2(t)] = aH[x1(t)] + bH[x2(t)]
= ay1(t) + by2(t)
H[ax1(t) + bx2(t)] = ay1(t) + by2(t)
where a, b constants
For DT systems
H[ax1(n) + bx2(n)] = aH[x1(n)] + bH[x2(n)]
= ay1(n) + by2(n)
H[ax1(n) + bx2(n)] = ay1(n) + by2(n)
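Superposition can also be probed numerically on random test inputs. The sketch below is my own illustration (the helper name, trial count, and tolerance are arbitrary choices): passing the test does not prove linearity, but a single failure proves the system is non-linear.

```python
import numpy as np

def is_linear_numerically(H, trials=100, n=32, tol=1e-9):
    """Rough check of superposition: H[a*x1 + b*x2] ?= a*H[x1] + b*H[x2],
    where H maps a length-n numpy array to a numpy array."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
        a, b = rng.standard_normal(2)
        if not np.allclose(H(a*x1 + b*x2), a*H(x1) + b*H(x2), atol=tol):
            return False
    return True

print(is_linear_numerically(lambda x: 3*x + np.roll(x, 1)))   # True  (linear)
print(is_linear_numerically(lambda x: x**2))                   # False (non-linear)
```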
Problems
1. Check whether the following system are linear (or) not.
(1) dy(t)/dt + 3t y(t) = t^2 x(t)
(2) y(n) = 2x(n) + 1/x(n − 1)
(10) d^2y(t)/dt^2 + 2 dy(t)/dt = 2x(t) + 4
(1) Given dy(t)/dt + 3t y(t) = t^2 x(t)
Consider two inputs x1(t) and x2(t) whose responses are y1(t) and y2(t):
dy1(t)/dt + 3t y1(t) = t^2 x1(t) -------------- (1)
dy2(t)/dt + 3t y2(t) = t^2 x2(t) -------------- (2)
Adding equations (1) and (2),
d[y1(t) + y2(t)]/dt + 3t [y1(t) + y2(t)] = t^2 [x1(t) + x2(t)] -------------- (3)
Substituting y(t) = y1(t) + y2(t) and x(t) = x1(t) + x2(t) in the given system,
d[y1(t) + y2(t)]/dt + 3t [y1(t) + y2(t)] = t^2 [x1(t) + x2(t)] -------------- (4)
Comparing equations (3) and (4): (3) = (4). Therefore the system is a linear system.
(2) Given y(n) = 2x(n) + 1/x(n − 1)
y1(n) = 2x1(n) + 1/x1(n − 1) -------------- (1)
y2(n) = 2x2(n) + 1/x2(n − 1) -------------- (2)
Adding,
y1(n) + y2(n) = 2x1(n) + 2x2(n) + 1/x1(n − 1) + 1/x2(n − 1) -------------- (3)
The response to the input x1(n) + x2(n) is
y(n) = 2[x1(n) + x2(n)] + 1/[x1(n − 1) + x2(n − 1)] -------------- (4)
Since (3) ≠ (4), the system is a non-linear system.
(10) dy(t)/dt + 10 y(t) = 5 x(t)
For t = −1: dy(−1)/dt + 10 y(−1) = 5 x(−1)
For t = 0: dy(0)/dt + 10 y(0) = 5 x(0)
For t = 1: dy(1)/dt + 10 y(1) = 5 x(1)
For all values of t, the output depends only on the present value of the input. Therefore the system is
causal.
1.10.6 Stable and unstable systems
A system is said to be BIBO (Bounded Input Bounded Output) stable if and only if every
bounded input produces a bounded output.
Let the input signal x(t) be finite (bounded),
i.e., |x(t)| ≤ Mx < ∞ for all t.
If the output signal y(t) is also finite (bounded),
i.e., |y(t)| ≤ My < ∞ for all t,
where Mx and My are positive real numbers, then the system is BIBO stable.
A system that gives an unbounded output for a bounded input is called an unstable system.
Condition for stability of an LTI-CT system:
∫(−∞ to ∞) |h(t)| dt < ∞
Condition for stability of an LTI-DT system:
Σ(n = −∞ to ∞) |h(n)| < ∞
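The absolute-summability condition can be illustrated numerically by watching how the partial sums of |h(n)| behave as the window grows. The sketch below is my own illustration; the two example impulse responses are choices of mine that mirror cases treated in the problems that follow.

```python
import numpy as np

# Partial sums of |h(n)| over growing windows: a bounded limit suggests BIBO
# stability, while unbounded growth shows the system cannot be stable.
for N in (10, 100, 1000):
    n = np.arange(N)
    stable_h = 0.5 ** n          # h(n) = (1/2)^n u(n): partial sums approach 2
    unstable_h = np.ones(N)      # h(n) = u(n): partial sums grow like N
    print(N, np.sum(np.abs(stable_h)), np.sum(np.abs(unstable_h)))
```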
(9) h(t) = (R/L) e^(−tR/L) u(t)
Solution:
(1) Given y(t) = t x(t)
Even if x(t) is bounded, the factor t grows without limit as t → ∞, so the output is unbounded. The system is unstable.
(2) y(t) = 5x(t) + 3
A bounded x(t) produces a bounded output, so the system is stable.
(3) h(t) = e^(−4|t|)
∫(−∞ to ∞) |h(t)| dt = ∫(−∞ to ∞) e^(−4|t|) dt
= ∫(−∞ to 0) e^(4t) dt + ∫(0 to ∞) e^(−4t) dt
= [e^(4t)/4] from −∞ to 0 + [−e^(−4t)/4] from 0 to ∞
= 1/4 + 1/4 = 1/2 < ∞
Hence the system is stable.
(4) h(n) = u(n)
Σ(n = −∞ to ∞) |h(n)| = Σ(n = 0 to ∞) 1 = 1 + 1 + ... = ∞
The sum is unbounded, so the system is unstable.
(5) h(t) = e^(3t) u(t)
∫(−∞ to ∞) |h(t)| dt = ∫(0 to ∞) e^(3t) dt = [e^(3t)/3] from 0 to ∞ = ∞
The integral is unbounded. Hence the system is unstable.
(6) h(n) = a^n u(n)
Σ(n = −∞ to ∞) |h(n)| = Σ(n = 0 to ∞) |a|^n = 1/(1 − |a|) for |a| < 1
The sum is finite for |a| < 1, so the system is stable for |a| < 1 (and unstable for |a| ≥ 1).
(7) h(n) = 3^n u(n − 2)
Σ(n = −∞ to ∞) |h(n)| = Σ(n = 2 to ∞) 3^n = 3^2 + 3^3 + ... = ∞
The sum is unbounded. Hence the system is unstable.
(8) h(t) = t e^(−t) u(t − 1)
∫(−∞ to ∞) |h(t)| dt = ∫(1 to ∞) t e^(−t) dt
Integrating by parts with u = t and dv = e^(−t) dt, so that du = dt and v = −e^(−t):
∫(1 to ∞) t e^(−t) dt = [−t e^(−t)] from 1 to ∞ + ∫(1 to ∞) e^(−t) dt = [−t e^(−t) − e^(−t)] from 1 to ∞
= 0 + e^(−1) + e^(−1) = 2e^(−1) < ∞
Hence the integral is bounded and the system is stable.