Module 1

Chapter one

Basic semiconductor theory

1.1 Semiconductor Materials

The label semiconductor itself provides a hint as to its characteristics. The prefix semi- is normally
applied to a range of levels midway between two limits.
Conductor: any material that will support a generous flow of charge when a
voltage source of limited magnitude is applied across its terminals.
Insulator: a material that offers a very low level of conductivity under pressure from an
applied voltage source.
Semiconductor: a material that has a conductivity level somewhere between
the extremes of an insulator and a conductor.
Resistivity
Inversely related to the conductivity of a material is its resistance to the flow of charge, or
current. That is, the higher the conductivity level, the lower the resistance level. In tables, the
term resistivity (ρ, Greek letter rho) is often used when comparing the resistance levels of
materials.

Figure 1.1 defining the metric units of resistivity

Note in Table 1.1 the extreme range between the conductor and insulator materials for a 1-cm
length (1-cm² cross-sectional area) of the material. Ge and Si have received the attention they
have for a number of reasons.
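The resistance comparison above can be made concrete with a short sketch. The resistivity figures below are representative textbook values in Ω·cm and are assumptions for illustration, not values copied from Table 1.1:

```python
# Resistance of a uniform bar: R = rho * L / A. For the 1-cm sample discussed
# above, L = 1 cm and A = 1 cm^2, so R equals the resistivity numerically.
def resistance(rho_ohm_cm, length_cm=1.0, area_cm2=1.0):
    """Return the resistance in ohms of a uniform bar of material."""
    return rho_ohm_cm * length_cm / area_cm2

# Representative resistivities in ohm-cm (illustrative assumptions).
samples = {
    "copper (conductor)": 1.7e-6,
    "germanium": 50.0,
    "silicon": 50e3,
    "mica (insulator)": 1e12,
}

for name, rho in samples.items():
    print(f"{name:20s} R = {resistance(rho):.3g} ohm")
```

The span from micro-ohms to teraohms for the same 1-cm sample is what places semiconductors "midway between two limits."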
One very important consideration is the fact that they can be manufactured to a very high purity
level. In fact, recent advances have reduced impurity levels in the pure material to 1 part in 10
billion (1:10,000,000,000).
The ability to change the characteristics of the material significantly through this process, known
as "doping," is yet another reason why Ge and Si have received such wide attention. Further
reasons include the fact that their characteristics can be altered significantly through the
application of heat or light—an important consideration in the development of heat- and light-
sensitive devices.

Compiled by Wondimagegn M. Page 1


1.2 Atomic theory

Some of the unique qualities of Ge and Si noted above are due to their atomic structure. The
atoms of both materials form a very definite pattern that is periodic in nature (i.e., continually
repeats itself). One complete pattern is called a crystal and the periodic arrangement of the atoms
a lattice. For Ge and Si the crystal has the three-dimensional diamond structure of Fig. 1.2

Figure 1.2 Ge and Si (Single-crystal structure Silicon Lattice)

Any material composed solely of repeating crystal structures of the same kind is called a single-
crystal structure. For semiconductor materials of practical application in the electronics field, this
single-crystal feature exists, and, in addition, the periodicity of the structure does not change
significantly with the addition of impurities in the doping process.

How might the structure of the atom affect the electrical characteristics of the material?

As you are aware, the atom is composed of three basic particles: the electron, the proton, and the
neutron. In the atomic lattice, the neutrons and protons form the nucleus, while the electrons
revolve around the nucleus in fixed orbits. The Bohr models of the two most commonly used
semiconductors, germanium and silicon, are shown in Fig. 1.3.

Figure 1.3 Atomic structures: (a) germanium; (b) silicon
Figure 1.4 Covalent bonding of the silicon atom

As indicated by Fig. 1.3a, the germanium atom has 32 orbiting electrons, while silicon has 14
orbiting electrons. In each case, there are 4 electrons in the outermost (valence) shell. The
potential (ionization potential) required to remove any one of these 4 valence electrons is lower



than that required for any other electron in the structure. In a pure germanium or silicon crystal
these 4 valence electrons are bonded to 4 adjoining atoms, as shown in Fig. 1.4 for silicon. Both
Ge and Si are referred to as tetravalent atoms because they each have four valence electrons.

A bonding of atoms, strengthened by the sharing of electrons, is called covalent bonding.

ENERGY LEVELS

In the isolated atomic structure there are discrete (individual) energy levels associated with each
orbiting electron, as shown in Fig. 1.5. Each material will, in fact, have its own set of permissible
energy levels for the electrons in its atomic structure.
The more distant the electron from the nucleus, the higher the energy state, and any electron that has left
its parent atom has a higher energy state than any electron in the atomic structure.

Figure 1.5 Energy levels: discrete levels in isolated atomic structures;

The energy associated with each electron is measured in electron volts (eV). The unit of measure
is appropriate, since

W = QV                                                (1.2)

as derived from the defining equation for voltage, V = W/Q. The charge Q is the charge
associated with a single electron. Substituting the charge of an electron and a potential difference
of 1 volt into Eq. (1.2) will result in an energy level referred to as one electron volt. Since energy
is also measured in joules and the charge of one electron is 1.6 × 10⁻¹⁹ coulomb,

1 eV = 1.6 × 10⁻¹⁹ J
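As a quick numerical sketch of the conversion just described, using only the charge value given above:

```python
# One electron volt is the energy gained by one electronic charge
# (Q = 1.6e-19 C) moved through a potential difference of 1 V: W = Q * V.
ELECTRON_CHARGE = 1.6e-19  # coulombs

def ev_to_joules(ev):
    """Convert an energy in electron volts to joules."""
    return ev * ELECTRON_CHARGE

def joules_to_ev(joules):
    """Convert an energy in joules to electron volts."""
    return joules / ELECTRON_CHARGE

print(ev_to_joules(1.0), "J")   # 1 eV expressed in joules
```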



Energy Band Model

One method of characterizing an electrical material is based upon a diagram that represents
electron energy in that material. In the general case, electronic energy is divided among three
bands that are designated as the valence band (bonding electrons with lowest energy), forbidden
gap or band (where electrons do not occupy energy states), and conduction band (conduction
electrons with highest energy). In metallic conductors the forbidden gap is absent. In insulators,
the forbidden gap is very large, and in semiconductors it is relatively small. Energy band
diagrams for these three classes of materials are illustrated in Figure 1.6.

Figure 1.6 Conduction and valence bands of an insulator, semiconductor, and conductor.

Electrons occupy specific energy states, or levels, in the conduction and valence bands, but they
may not occupy energy states located in the band gap, which is why it is frequently called the
forbidden gap. Relative to figure 1.6, to achieve electrical conduction, electrons must transfer
from energy states in the valence band to energy states in the conduction band. The valence band
represents low energy states of the electrons, in which the electrons are tightly bound to the
atoms of the material. The forbidden band is not a physical void, but rather an energy gap. To
cross the band gap, an electron must attain energy equal to or greater than the lowest allowed
energy state in the conduction band; otherwise it cannot cross the gap.
In metals, once the temperature rises above absolute zero (0 K), electrons acquire sufficient
thermal energy to transfer from the valence band to the conduction band, thus making electrical
conduction possible of the form described by Ohm's law. In semiconductors, the term
ohmic condition is applied to this phenomenon. Atoms are ionized (electrons are torn loose), and
free (conduction) electrons are released to establish an electric current.
The forbidden gap regions associated with insulators and semiconductors represent energy levels
that electrons may not assume. The only way that an electron can move from the valence band to
the conduction band in these materials is by acquiring sufficient energy to cross the gap. Because
of the large forbidden gap in insulators, the energy required is so great that the material is usually
damaged or destroyed before appreciable conduction occurs.



In the pure (intrinsic) state, semiconducting materials manifest a forbidden gap that is less than
that found in insulators. These materials are basically insulators, but not particularly good ones.
When intrinsic semiconductors are modified by the addition of certain impurities, new (allowed)
electron energy states are created high in the forbidden gap, so that electrons can jump
relatively easily into the conduction band.
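A rough way to see why gap size matters so much is the Boltzmann factor exp(−Eg/2kT), which scales the population of thermally excited carriers. This sketch is an illustration added here, not part of the text; the gap values are common literature figures (Ge ≈ 0.67 eV, Si ≈ 1.1 eV, SiO₂ ≈ 9 eV):

```python
import math

# Relative likelihood of thermally exciting an electron across the gap,
# roughly exp(-Eg / (2*k*T)): a small gap (semiconductor) yields vastly
# more carriers than a large gap (insulator).
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def excitation_factor(eg_ev, temp_k=300.0):
    """Boltzmann factor for a band gap eg_ev (in eV) at temperature temp_k."""
    return math.exp(-eg_ev / (2 * K_BOLTZMANN_EV * temp_k))

gaps = {"Ge": 0.67, "Si": 1.1, "SiO2 (insulator)": 9.0}
for material, eg in gaps.items():
    print(f"{material:18s} Eg = {eg:4.2f} eV  factor = {excitation_factor(eg):.3e}")
```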

Example

1. How much energy in joules is required to move a charge of 6 C through a difference in
potential of 3 V?

Solution
W = QV = (6 C)(3 V) = 18 J

2. If 48 eV of energy is required to move a charge through a potential difference of 12 V,
determine the charge involved.

Solution
W = 48 eV = (48)(1.6 × 10⁻¹⁹ J) = 7.68 × 10⁻¹⁸ J
Q = W/V = (7.68 × 10⁻¹⁸ J)/(12 V) = 6.4 × 10⁻¹⁹ C (the charge of four electrons)
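The two worked examples can be checked numerically with W = QV:

```python
ELECTRON_CHARGE = 1.6e-19  # coulombs

# Example 1: W = Q * V, with Q = 6 C and V = 3 V
w = 6 * 3
print(w, "J")   # 18 J

# Example 2: Q = W / V, with W given in eV and converted to joules first
w_joules = 48 * ELECTRON_CHARGE
q = w_joules / 12
print(q, "C")                          # 6.4e-19 C
print(q / ELECTRON_CHARGE, "electrons")  # the charge of 4 electrons
```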

Intrinsic semiconductor
An intrinsic semiconductor is one which is pure enough that impurities do not appreciably affect
its electrical behaviour. In this case, all carriers are created due to thermally or optically excited
electrons from the full valence band into the empty conduction band. Thus equal numbers of
electrons and holes are present in an intrinsic semiconductor. Electrons and holes flow in
opposite directions in an electric field, though they contribute to current in the same direction
since they are oppositely charged. Hole current and electron current are not necessarily equal in
an intrinsic semiconductor, however, because electrons and holes have different effective masses
(crystalline analogues to free inertial masses).
The concentration of carriers is strongly dependent on the temperature. At low temperatures, the
valence band is completely full making the material an insulator. Increasing the temperature
leads to an increase in the number of carriers and a corresponding increase in conductivity. This
characteristic shown by intrinsic semiconductor is different from the behaviour of most metals,
which tend to become less conductive at higher temperatures due to increased phonon scattering.
Negative temperature coefficient:
Semiconductor materials such as Ge and Si that show a reduction in resistance with increase in
temperature are said to have a negative temperature coefficient.



Extrinsic semiconductor
The characteristics of semiconductor materials can be altered significantly by the addition of
certain impurity atoms into the relatively pure semiconductor material. These impurities,
although added at perhaps only 1 part in 10 million, can alter the band structure sufficiently to
totally change the electrical properties of the material.
A semiconductor material that has been subjected to the doping process is called an extrinsic material.
There are two extrinsic materials of immeasurable importance to semiconductor device
fabrication: n-type and p-type. Each will be described in some detail in the following paragraphs.

n-Type Material
Both the n- and p-type materials are formed by adding a predetermined number of impurity
atoms into a germanium or silicon base. The n-type is created by introducing those impurity
elements that have five valence electrons (pentavalent), such as antimony, arsenic, and
phosphorus. The effect of such impurity elements is indicated in Fig. 1.7 (using antimony as the
impurity in a silicon base).

Figure 1.7 Antimony impurity in n-type material.


Note that the four covalent bonds are still present. There is, however, an additional fifth electron
due to the impurity atom, which is not associated with any particular covalent bond. This
remaining electron, loosely bound to its parent (antimony) atom, is relatively free to move within
the newly formed n-type material. Since the inserted impurity atom has donated a relatively
"free" electron to the structure:
Diffused impurities with five valence electrons are called donor atoms.
It is important to realize that even though a large number of "free" carriers have been established
in the n-type material, it is still electrically neutral since ideally the number of positively charged
protons in the nuclei is still equal to the number of "free" and orbiting negatively charged
electrons in the structure.
The effect of this doping process on the relative conductivity can best be described through the
use of the energy-band diagram of Fig. 1.8. Note that a discrete energy level (called the donor
level) appears in the forbidden band with an Eg significantly less than that of the intrinsic
material. Those "free" electrons due to the added impurity sit at this energy level and have less
difficulty absorbing a sufficient measure of thermal energy to move into the conduction band at
room temperature. The result is that at room temperature there are a large number of carriers
(electrons) in the conduction level, and the conductivity of the material increases significantly. At
room temperature in an intrinsic Si material there is about one free electron for every 10¹² atoms
(1 for every 10⁹ atoms in Ge). If our dosage level were 1 in 10 million (1 in 10⁷), the ratio
10¹²/10⁷ = 10⁵ indicates that the carrier concentration has increased by a ratio of 100,000 : 1.
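The carrier-concentration arithmetic above can be verified directly:

```python
# Carrier-concentration bookkeeping from the text: intrinsic Si has about one
# free electron per 10**12 atoms; doping at 1 impurity atom per 10**7 host
# atoms (each donating one free electron) raises the free-carrier ratio.
intrinsic_ratio = 1 / 10**12   # free electrons per atom, intrinsic Si
doped_ratio     = 1 / 10**7    # donor electrons per atom after doping

increase = doped_ratio / intrinsic_ratio
print(f"carrier concentration increased by {increase:,.0f} : 1")
```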



Figure 1.8 Effect of donor impurities on the energy band structure.

p-Type Material
The p-type material is formed by doping a pure germanium or silicon crystal with impurity
atoms having three valence electrons. The elements most frequently used for this purpose are
boron, gallium, and indium. The effect of one of these elements, boron, on a base of silicon is
indicated in Fig. 1.9.

Figure 1.9 Boron impurity in p-type material.

Note that there is now an insufficient number of electrons to complete the covalent bonds of the
newly formed lattice. The resulting vacancy is called a hole and is represented by a small circle
or positive sign due to the absence of a negative charge. Since the resulting vacancy will readily
accept a "free" electron:
The diffused impurities with three valence electrons are called acceptor atoms.
The resulting p-type material is electrically neutral, for the same reasons described for the n-type
material.
Electron versus Hole Flow
The effect of the hole on conduction is shown in Fig. 1.10. If a valence electron acquires
sufficient kinetic energy to break its covalent bond and fills the void created by a hole, then a
vacancy, or hole, will be created in the covalent bond that released the electron. There is,
therefore, a transfer of holes to the left and electrons to the right, as shown in Fig. 1.10. The
direction to be used in this text is that of conventional flow, which is indicated by the direction of
hole flow.



Figure 1.10 Electron versus Hole flow.

Diffusion and drift current


Diffusion is a flow of charge carriers from a region of high concentration to a region of low
concentration, driven by the nonuniform distribution of the carriers. The resulting transport of
charge in a semiconductor is called diffusion current.
Drift is charged particle motion in response to an applied electric field. When an electric field is
applied across a semiconductor, the carriers start moving, producing a current. The positively
charged holes move with the electric field, whereas the negatively charged electrons move
against the electric field.

Majority and Minority Carriers


In the intrinsic state, the number of free electrons in Ge or Si is due only to those few electrons in
the valence band that have acquired sufficient energy from thermal or light sources to break the
covalent bond or to the few impurities that could not be removed. The vacancies left behind in
the covalent bonding structure represent our very limited supply of holes. In an n-type material,
the number of holes has not changed significantly from this intrinsic level. The net result,
therefore, is that the number of electrons far outweighs the number of holes. For this reason:

In an n-type material (Fig. 1.11a) the electron is called the majority carrier and the hole is the minority
carrier.

For the p-type material the number of holes far outweighs the number of electrons, as shown in
Fig. 1.11b. Therefore:
In a p-type material the hole is the majority carrier and the electron is the minority carrier.

When the fifth electron of a donor atom leaves the parent atom, the atom remaining acquires a
net positive charge: hence the positive sign in the donor-ion representation. For similar reasons,
the negative sign appears in the acceptor ion. The n- and p-type materials represent the basic
building blocks of semiconductor devices. We will find in the next section that the "joining" of a
single n-type material with a p-type material will result in a semiconductor element of
considerable importance in electronic systems.



Figure 1.11 (a) n-type material; (b) p-type material.



Chapter Two

Semiconductor diodes and their applications


Introduction

The construction, characteristics, and models of semiconductor diodes were introduced in Chapter 1. The
primary goal of this chapter is to develop a working knowledge of the diode in a variety of configurations
using models appropriate for the area of application. By chapter's end, the fundamental behavior pattern of
diodes in dc and ac networks should be clearly understood. The concepts learned in this chapter will have
significant carryover in the chapters to follow. For instance, diodes are frequently employed in the description
of the basic construction of transistors and in the analysis of transistor networks in the dc and ac domains.
The content of this chapter will reveal an interesting and very positive side of the study of a field such as
electronic devices and systems: once the basic behavior of a device is understood, its function and response in
an infinite variety of configurations can be determined. The range of applications is endless, yet the
characteristics and models remain the same. The analysis will proceed from one that employs the actual diode
characteristic to one that utilizes the approximate models almost exclusively.
It is important that the role and response of various elements of an electronic system be understood without
continually having to resort to lengthy mathematical procedures. This is usually accomplished through the
approximation process, which can develop into an art itself. Although the results obtained using the actual
characteristics may be slightly different from those obtained using a series of approximations, keep in mind
that the characteristics obtained from a specification sheet may in themselves be slightly different from the
device in actual use.

LOAD-LINE ANALYSIS
The applied load will normally have an important impact on the point or region of operation of a device. If the
analysis is performed in a graphical manner, a line can be drawn on the characteristics of the device that
represents the applied load. The intersection of the load line with the characteristics will determine the point of
operation of the system. Such an analysis is, for obvious reasons, called load-line analysis.
Although the majority of the diode networks analyzed in this chapter do not employ the load-line approach, the
technique is one used quite frequently in subsequent chapters, and this introduction offers the simplest
application of the method. It also permits a validation of the approximate technique described throughout the
remainder of this chapter.
Consider the network of Fig. 2.1a employing a diode having the characteristics of Fig. 2.1b. Note in Fig. 2.1a
that the "pressure" established by the battery is to establish a current through the series circuit in the clockwise
direction. The fact that this current and the defined direction of conduction of the diode are a "match" reveals
that the diode is in the "on" state and conduction has been established. The resulting polarity across the diode



will be as shown, and the first quadrant (VD and ID positive) of Fig. 2.1b will be the region of interest: the
forward-bias region. Applying Kirchhoff's voltage law to the series circuit of Fig. 2.1a results in

E − VD − ID·R = 0    or    E = VD + ID·R        (2.1)
Figure2.1: Series diode configuration: (a) circuit; (b) characteristics


The two variables of Eq. (2.1) (VD and ID) are the same as the diode axis variables of Fig. 2.1b.
This similarity permits a plotting of Eq. (2.1) on the same characteristics of Fig. 2.1b.
The intersections of the load line on the characteristics can easily be determined if one simply
employs the fact that anywhere on the horizontal axis ID = 0 A and anywhere on the vertical axis
VD =0 V.
If we set VD = 0 V in Eq. (2.1) and solve for ID, we have the magnitude of ID on the vertical axis.
Therefore, with VD = 0 V, Eq. (2.1) becomes

ID = E/R (at VD = 0 V)        (2.2)

as shown in Fig. 2.2. If we set ID = 0 A in Eq. (2.1) and solve for VD, we have the magnitude of
VD on the horizontal axis. Therefore, with ID = 0 A, Eq. (2.1) becomes

VD = E (at ID = 0 A)        (2.3)



as shown in Fig. 2.2. A straight line drawn between the two points will define the load line as
depicted in Fig. 2.2. Change the level of R (the load) and the intersection on the vertical axis will
change. The result will be a change in the slope of the load line and a different point of
intersection between the load line and the device characteristics.
We now have a load line defined by the network and a characteristic curve defined by the device.
The point of intersection between the two is the point of operation for this circuit. By simply
drawing a line down to the horizontal axis the diode voltage VDQ can be determined, whereas a
horizontal line from the point of intersection to the vertical axis will provide the level of IDQ. The
current ID is actually the current through the entire series configuration of Fig. 2.1a. The point of
operation is usually called the quiescent point (abbreviated "Q-pt.") to reflect its "still,
unmoving" qualities as defined by a dc network.

Figure2.2: Drawing the load line and finding the point of operation
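The graphical load-line procedure can also be carried out numerically. The sketch below assumes a Shockley-equation model for the diode curve with illustrative parameter values (Is, n·VT, E, and R are assumptions, not values taken from Fig. 2.1b), and locates the Q-point by bisection:

```python
import math

# Numerical load-line analysis: find the point where the load line
# E = VD + ID*R crosses an assumed Shockley diode characteristic.
E, R = 10.0, 1000.0            # supply (V) and series resistance (ohm), assumed
I_S, N_VT = 1e-12, 0.02585     # saturation current (A) and n*VT (V), assumed

def diode_current(vd):
    """Shockley diode equation."""
    return I_S * (math.exp(vd / N_VT) - 1.0)

def load_line_current(vd):
    """Current dictated by the series network for a given diode voltage."""
    return (E - vd) / R

# Bisection on the difference of the two curves over 0..E.
lo, hi = 0.0, E
for _ in range(100):
    mid = (lo + hi) / 2
    if diode_current(mid) > load_line_current(mid):
        hi = mid
    else:
        lo = mid

vdq = (lo + hi) / 2
idq = load_line_current(vdq)
print(f"Q-point: VDQ = {vdq:.3f} V, IDQ = {idq*1000:.2f} mA")
```

The printed Q-point plays the same role as the intersection read off the graph in Fig. 2.2.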
SERIES DIODE CONFIGURATIONS WITH DC INPUTS
In this section the approximate model is utilized to investigate a number of series diode
configurations with dc inputs. The content will establish a foundation in diode analysis that will
carry over into the sections and chapters to follow. The procedure described can, in fact, be
applied to networks with any number of diodes in a variety of configurations. For each



configuration the state of each diode must first be determined. Which diodes are "on" and which
are "off"? Once determined, the appropriate equivalent as defined in Section 2.3 can be
substituted and the remaining parameters of the network determined.
In general, a diode is in the "on" state if the current established by the applied sources is such
that its direction matches that of the arrow in the diode symbol, and VD = 0.7 V for silicon and
VD = 0.3 V for germanium.
For each configuration, mentally replace the diodes with resistive elements and note the resulting
current direction as established by the applied voltages ("pressure"). If the resulting direction is a
"match" with the arrow in the diode symbol, conduction through the diode will occur and the
device is in the "on" state. The description above is, of course, contingent on the supply having a
voltage greater than the "turn-on" voltage (VT) of each diode.
If a diode is in the "on" state, one can either place a 0.7-V drop across the element, or the
network can be redrawn with the VT equivalent circuit. In time the preference will probably
simply be to include the 0.7-V drop across each "on" diode and draw a line through each diode
in the "off" or open state. Initially, however, the substitution method will be utilized to ensure
that the proper voltage and current levels are determined.
The series circuit of Fig. 2.3a, described in some detail in Section 2.2, will be used to demonstrate
the approach described in the paragraphs above. The state of the diode is first determined by
mentally replacing the diode with a resistive element as shown in Fig. 2.3b. The resulting
direction of I is a match with the arrow in the diode symbol, and since E > VT the diode is in the
"on" state. The network is then redrawn as shown in Fig. 2.3c with the appropriate equivalent
model for the forward-biased silicon diode. Note for future reference that the polarity of VD is the
same as would result if in fact the diode were a resistive element. The resulting voltage and
current levels are the following:



(a) (b)

(c)
Figure2.3: (a) Series diode configuration (b) Determining the state of the diode (c) Substituting
the equivalent model for the "on" diode



Figure2.4: Reversing the diode; determining the state of the diode and substituting the equivalent
model for the "off" diode

The resulting current direction does not match the arrow in the diode symbol. The diode is in the
"off" state, resulting in the equivalent circuit of the above figure. Due to the open circuit, the
diode current is 0 A and the voltage across the resistor R is VR = ID·R = (0 A)·R = 0 V.

Example 1: Determine Vo and ID for the series circuit given below.
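Since the circuit values for Example 1 are in a figure that does not reproduce here, the sketch below uses hypothetical values (E = 8 V, R = 2.2 kΩ, one silicon diode in the "on" state) to show the approximate-model arithmetic:

```python
# Hypothetical series diode circuit: E = 8 V source, R = 2.2 kohm, one
# forward-biased silicon diode modeled as a fixed 0.7 V drop.
E, R, VT = 8.0, 2.2e3, 0.7

ID = (E - VT) / R   # series current once the 0.7 V diode drop is subtracted
Vo = ID * R         # output across the resistor: Vo = E - VT
print(f"ID = {ID*1000:.2f} mA, Vo = {Vo:.2f} V")
```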



PARALLEL DIODE CONFIGURATION
The methods applied to series diode configurations can be extended to the analysis of parallel
configurations. For each area of application, simply apply the same sequential series of steps used
for series diode configurations.
Example 2: Determine the current I for the network shown below.



SINUSOIDAL INPUTS; HALF-WAVE RECTIFICATION
The diode analysis will now be expanded to include time-varying functions such as the
sinusoidal waveform and the square wave. There is no question that the degree of difficulty will
increase, but once a few fundamental maneuvers are understood, the analysis will be fairly direct
and follow a common thread.
The simplest of networks to examine with a time-varying signal appears in Fig. 2.5. For the
moment we will use the ideal model (note the absence of the Si or Ge label to denote an ideal diode)
to ensure that the approach is not clouded by additional mathematical complexity.

Figure2.5: Half-wave rectifier


Over one full cycle, defined by the period T, the average value (the algebraic sum of
the areas above and below the axis) of the sinusoidal input is zero. The circuit of Fig. 2.5, called
a half-wave rectifier, will generate a waveform vo that has an average value of particular use in
the ac-to-dc conversion process. When employed in the rectification process, a diode is typically referred to



as a rectifier. Its power and current ratings are typically much higher than those of diodes
employed in other applications, such as computers and communication systems. During the
interval t = 0 → T/2 in Fig. 2.5 the polarity of the applied voltage vi is such as to establish
"pressure" in the direction indicated and turn on the diode with the polarity appearing above the
diode. Substituting the short-circuit equivalence for the ideal diode will result in the equivalent
circuit of Fig. 2.6, where it is fairly obvious that the output signal is an exact replica of the
applied signal. The two terminals defining the output voltage are connected directly to the
applied signal via the short-circuit equivalence of the diode.

Figure2.6: Conduction region (0→T/2)


For the period T/2 → T, the polarity of the input vi is as shown in Fig. 2.7 and the resulting
polarity across the ideal diode produces an "off" state with an open-circuit equivalent. The result
is the absence of a path for charge to flow, and vo = iR = (0)R = 0 V for the period T/2 → T. The
input vi and the output vo are sketched together in Fig. 2.8 for comparison purposes. The output
signal vo now has a net positive area above the axis over a full period and an average value
determined by

Vdc = 0.318·Vm        (half-wave)
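The half-wave dc level Vdc = Vm/π ≈ 0.318·Vm can be confirmed by averaging a rectified sine wave numerically (the peak value Vm below is an arbitrary illustrative choice):

```python
import math

# Average the output of an ideal half-wave rectifier over one full period.
Vm = 20.0        # illustrative peak value
N = 100_000      # samples over one period

total = 0.0
for k in range(N):
    v = Vm * math.sin(2 * math.pi * k / N)
    total += max(v, 0.0)   # ideal diode: the negative half-cycle is blocked

v_dc = total / N
print(f"numeric Vdc = {v_dc:.4f} V, formula 0.318*Vm = {0.318*Vm:.4f} V")
```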

Figure2.7: Nonconduction region (T/2→T)



Figure2.8: Half-wave rectified signal

The process of removing one-half the input signal to establish a dc level is called half-wave
rectification. The effect of using a silicon diode with VT = 0.7 V is demonstrated in Fig. 2.9 for
the forward-bias region. The applied signal must now be at least 0.7 V before the diode can turn
"on." For levels of vi less than 0.7 V, the diode is still in an open-circuit state and vo = 0 V, as
shown in the same figure. When conducting, the difference between vo and vi is a fixed level of
VT = 0.7 V and vo = vi − VT, as shown in the figure. The net effect is a reduction in area above the
axis, which naturally reduces the resulting dc voltage level. For situations where Vm >> VT, the
following equation can be applied to determine the average value with a relatively high level of
accuracy:

Vdc ≅ 0.318·(Vm − VT)



Figure2.9: Effect of VT on half-wave rectified signal
Example 3:
(a) Sketch the output vo and determine the dc level of the output for the network of figure shown
below.
(b) Repeat part (a) if the ideal diode is replaced by a silicon diode.
(c) Repeat parts (a) and (b) if Vm is increased to 200 V and compare solutions using the above
two equations.
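Example 3's figure is not reproduced here, so the sketch below assumes Vm = 20 V for parts (a) and (b) and repeats the computation with Vm = 200 V for part (c):

```python
# Half-wave dc levels for an ideal diode and for a silicon diode (VT = 0.7 V),
# under the assumption that the input peak in the missing figure is 20 V.
VT = 0.7

def vdc_ideal(vm):
    """Half-wave dc level for an ideal diode."""
    return 0.318 * vm

def vdc_silicon(vm):
    """Half-wave dc level for silicon, valid when Vm >> VT."""
    return 0.318 * (vm - VT)

for vm in (20.0, 200.0):
    print(f"Vm = {vm:5.1f} V: ideal {vdc_ideal(vm):7.3f} V, "
          f"silicon {vdc_silicon(vm):7.3f} V")
```

Note how the 0.7-V drop matters at Vm = 20 V but is nearly negligible at Vm = 200 V, which is the point of comparing the two equations.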



FULL-WAVE RECTIFICATION

Bridge Network

The dc level obtained from a sinusoidal input can be improved 100% using a process called full-
wave rectification. The most familiar network for performing such a function appears in Fig.
2.10 with its four diodes in a bridge configuration. During the period t = 0 to T/2 the polarity of
the input is as shown in Fig. 2.11. The resulting polarities across the ideal diodes are also shown
in Fig. 2.11 to reveal that D2 and D3 are conducting while D1 and D4 are in the "off" state. The
net result is the configuration of Fig. 2.12, with its indicated current and polarity across R. Since
the diodes are ideal, the load voltage is vo = vi, as shown in the same figure.

Figure2.10: Full-wave bridge rectifier

Figure2.11: Network for the period 0 → T/2 of the input voltage vi



Figure2.12: Conduction path for the positive region of vi
For the negative region of the input the conducting diodes are D1 and D4, resulting in the
configuration of Fig. 2.13. The important result is that the polarity across the load resistor R is
the same as in Fig. 2.11, establishing a second positive pulse, as shown in Fig. 2.13. Over one
full cycle the input and output voltages will appear as shown in Fig. 2.14.

Figure2.13: Conduction path for the negative region of vi

Figure2.14: Input and output waveforms for a full-wave rectifier



Since the area above the axis for one full cycle is now twice that obtained for a half-wave
system, the dc level has also been doubled:

Vdc = 2(0.318·Vm) = 0.636·Vm        (2.10)

If silicon rather than ideal diodes are employed as shown in Fig. 2.15, an application of
Kirchhoff's voltage law around the conduction path results in

vi − VT − vo − VT = 0    and    vo = vi − 2VT

The peak value of the output voltage vo is therefore

Vo(max) = Vm − 2VT

For situations where Vm >> 2VT, Eq. (2.11) can be applied for the average value with a relatively
high level of accuracy:

Vdc ≅ 0.636·(Vm − 2VT)        (2.11)

Figure2.15: Determining Vomax for silicon diodes in the bridge configuration

Then again, if Vm is sufficiently greater than 2VT, then Eq. (2.10) is often applied as a first
approximation for Vdc.
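A quick comparison of the ideal and silicon-bridge dc levels, using an arbitrary illustrative peak voltage:

```python
# Full-wave bridge dc levels: 0.636*Vm for ideal diodes, and roughly
# 0.636*(Vm - 2*VT) when two silicon diodes conduct on each half-cycle.
Vm, VT = 100.0, 0.7   # Vm is an illustrative value, not from the text

v_ideal   = 0.636 * Vm
v_silicon = 0.636 * (Vm - 2 * VT)
print(f"ideal: {v_ideal:.2f} V, silicon bridge: {v_silicon:.2f} V")
```

With Vm this far above 2VT, the two results differ by only about 1%, which is why Eq. (2.10) serves as a first approximation.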



CHAPTER THREE
BIPOLAR JUNCTION TRANSISTORS
3.1 INTRODUCTION
Transistors are three-terminal, three-layered, two-junction electronic devices whose voltage-
current relationship is controlled by a third voltage or current. We may regard a transistor as a
controlled voltage or current source.
They were first demonstrated by a team of scientists at Bell Laboratories in 1947, and their
introduction brought an end to the era of vacuum-tube devices.
Advantages of transistors over vacuum tubes:
• Smaller size, light weight
• No heating elements required
• Low power consumption
• Low operating voltages
Areas of application:
Used in applications such as signal amplifiers, electronic switches, oscillators, digital logic
design, memory circuits, etc.
Physical structure of transistors:
According to the physics of the device, we can classify transistors into two main classes:
1. Bipolar junction transistors (BJT)
2. Field effect transistors (FET) (to be discussed in the next chapter).

BJT: definition, construction, normal operating condition & symbolic representation


A BJT is a diode-based (two p-n junction) device which is normally blocked unless its control
junction is forward biased. The controlling agent is thus a current, and the BJT is a current
amplifier by nature. Two types of construction exist, namely:
• Thin layer of n-type material sandwiched between two p-type materials (called a PNP
transistor).



• Thin layer of p-type material sandwiched between two n-type materials (called NPN

transistor)

Fig 3.1 physical structure and the terminals of BJT


Emitter (E) is heavily doped – it supplies charge carriers.
Base (B) is lightly doped – it allows most of the charge carriers to pass through it.
Collector (C) is moderately doped – it collects the charge carriers.
We can also see that there are two junctions shared between the three terminals, the Emitter-base
junction and Collector-base junction.
The normal operating condition prevails if E-B junction is forward biased and C-B junction is reverse
biased.
The following figure shows the symbolic representation of the PNP and NPN transistors. In each
case the arrowhead represents the direction of current through the emitter.

Fig. 3.2: Transistor symbolic representation


3.2 PRINCIPLE OF OPERATION AND CHARACTERISTICS
The working principle of NPN transistor is discussed here and that of PNP transistor is similar
except the fact that roles of free electrons and holes are interchanged and current directions are
reversed.



Principle of operation of an NPN BJT transistor
• EB diode is forward biased. So, depletion region at EB junction is narrow.
• CB diode is reverse biased. So, depletion region at CB junction is wide.
• Free electrons from the emitter region cross the junction and reach the base region, repelled by
the negative potential at the emitter terminal.
• Some of these free electrons combine with the holes in the base region; these move towards
the base terminal and form the base current.
• Very few holes are available in the base. Therefore most electrons (say, about 99%) coming
from the emitter do not combine with holes; they fall down the potential gradient and enter the
collector region, attracted by the positive potential at the collector terminal. In this way, the
small number of holes in the base controls the flow from emitter to collector.
• So the emitter emits electrons, acting as the source of electrons, and the collector collects these
electrons; the terminal names come from this fact.
Directions of three currents are shown in figure 3.3.

Fig. 3.3: Transistor operation & direction of currents


Characteristics of BJT
We are still considering an NPN transistor, and by convention, current directions are
opposite to electron-flow directions. From the statements above, we use a small base current to
induce a large collector current. This large collector current is proportional to the base current
because of the role of the small number of holes in the base. So,

IC = βdc IB………………………………………………………………………………………………..3.1



Where, IE is emitter current, IB is base current, IC is collector current.
Current relationship simply obtained from the current directions stated in fig 3.3:

IE = IC + IB…………………………………………………………………………………………….3.2
When emitter circuit is opened, there is no supply of free electrons from emitter to collector. Even
then, there will be small collector current called reverse saturation collector current

ICBO. This is due to thermally generated electron-hole pairs. Even during normal operation, ICBO is

present. We define another equation adding the effect of ICBO.

IC = αdc IE + ICBO………………………………………………………………………………………3.3

αdc is the fraction of emitter current which flows to the collector. Since ICBO is very small,
αdc ≈ IC / IE…………………………………………………………………………………………………3.4
we have also another parameter from eq. 3.1

βdc = IC / IB………………………………………………………………………………………………….3.5
Dividing Eq. 3.2 by IC gives IE/IC = 1 + IB/IC, so that

1/αdc = 1 + 1/βdc and αdc = βdc /(1 + βdc)………………………………………………….3.6

Example 1: If the emitter current of a transistor is 8 mA and IB is 1/100 of IC, determine the
levels of IC and IB.
Example 2: (a) Given that αdc = 0.987, determine the corresponding value of βdc.
(b) Given βdc = 120, determine the corresponding value of α.
(c) Given that βdc =180 and IC =2.0 mA, find IE and IB.
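The α–β relations and Examples 1 and 2 can be checked with a short sketch (all numeric values come from the examples themselves):

```python
# Conversions between alpha and beta (Eqs. 3.4-3.6), checked against
# the worked examples in the text.
def beta_from_alpha(a):
    return a / (1 - a)

def alpha_from_beta(b):
    return b / (1 + b)

# Example 1: IE = 8 mA and IB = IC/100, so IE = IC + IC/100 = 1.01*IC
IE = 8.0e-3
IC = IE / 1.01
IB = IC / 100
print(IC * 1e3, IB * 1e6)                 # about 7.92 mA and 79.2 uA

# Example 2(a): alpha = 0.987
print(round(beta_from_alpha(0.987), 1))   # about 75.9
# Example 2(b): beta = 120
print(round(alpha_from_beta(120), 4))     # about 0.9917
```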
3.3 BJT CONFIGURATIONS
A transistor is a three-terminal device. For applications such as amplifier circuits, four terminals
are required – two for input and two for output. So one of the three terminals of the transistor
must be made common to both input and output. Accordingly, we end up with three types of
configurations:
• Common base (CB) configuration
• Common emitter (CE) configuration
• Common collector (CC) configuration



3.3.1 Common base configuration

Fig. 3.4: Common Base configuration


Base is common, emitter is input terminal, and collector is output terminal. We will consider two
characteristics: input characteristics and output characteristics
Input characteristics:

Plot of input current IE versus input voltage VEB for various values of output voltage VCB is shown in
fig 3.5.

As VEB is increased, IE increases similar to diode characteristics

If VCB is increased, then IE increases slightly. This is due to the increase in electric field aiding the
flow of electrons from emitter.

Fig 3.5: CB Input and Output characteristics



Example 3: (a) Using the characteristics of the following figure, determine IC if VCB = 10 V and
VBE =800 mV.
(b) Determine VBE if IC =5 mA and VCB =10 V.

3.3.2 Common emitter configuration


Emitter is common (the reference) to both the base and collector terminals; base is the input
terminal and collector is the output terminal.
Again we get two characteristics: input characteristics and output characteristics.

Fig. 3.6: Common-Emitter configuration



Example 4: (a) Using the characteristics of the above figure, determine IC at IB =30 µA and VCE
= 10 V.
(b) Using the characteristics of the above figure, determine IC at VBE =0.7 V and VCE = 15 V.

3.4 BIASING METHODS


Introduction
One of the most common applications of the transistor is in amplifier circuits. For faithful
amplification, the transistor must operate in the active region throughout the duration of the
input signal. To ensure this, proper dc voltages must be applied; this is called biasing. But first
of all, how can we observe and measure the effect of biasing?
Operating point

When no input signal is applied to the transistor circuit and only dc voltages are supplied, the
currents IC, IB and the voltage VCE will have certain values. If these values are plotted on the
transistor output characteristics, the point we get is called the 'operating point'. It is also called
the 'quiescent point' or simply the Q-point.



Fig. 3.8: Quiescent point

In the above figure, the currents IBQ (the value of IB at Q), ICQ and the voltage VCEQ are plotted at
point Q. In practice, we choose the Q-point according to our requirement. If we want to operate
in the middle of the active region, we may choose Q as the Q-point; for instance, in so-called
Class A amplifiers (to be discussed later) the Q-point is placed in the middle of the active region.
If we want to operate near saturation, we may choose Q′ (Q prime) as the Q-point, and if we want
to operate near cutoff, we may choose Q″. Note that if no biasing is used, the Q-point will be at
the origin of the graph. So biasing is used to fix the Q-point according to our need.
Types of bias
• Fixed bias Circuit
• Voltage divider bias (Self bias)
Fixed Bias Circuit
The base resistor RB is connected to VCC (instead of a separate base supply VBB). The negative
terminal of VCC is not shown; it is assumed to be at ground.

Fig. 3.9: Fixed bias circuit



Applying KVL to the input side, we get

VCC- IB RB - VBE=0
Rearranging, we get

IB = (VCC − VBE) / RB …………………………………………………….3.8


VCC is constant and VBE is almost constant (0.7 V for silicon). So by selecting a proper RB, we
can fix IB as required.
Applying KVL to output side we get:

VCC - IC RC- VCE =0


VCC - IC RC = VCE……………………………………………………….3.9
IC is related to IB by β.
Example 5: Determine the following for the fixed-bias configuration of figure shown below.
(a) IBQ and ICQ.
(b) VCEQ.
(c) VB and VC.
(d) VBC.
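Example 5 refers to a figure whose component values are not reproduced here; the following sketch applies Eqs. 3.8 and 3.9 to an assumed set of values (VCC = 12 V, RB = 240 kΩ, RC = 2.2 kΩ, β = 50 are assumptions for illustration only):

```python
# Fixed-bias dc analysis (Eqs. 3.8-3.9). All component values below are
# assumed for illustration -- they are not the values of the text's figure.
VCC, VBE, beta = 12.0, 0.7, 50.0    # assumed supply, Si drop, assumed beta
RB, RC = 240e3, 2.2e3               # assumed resistor values

IB = (VCC - VBE) / RB               # Eq. 3.8
IC = beta * IB
VCE = VCC - IC * RC                 # Eq. 3.9

print(round(IB * 1e6, 1))   # base current in uA      -> 47.1
print(round(IC * 1e3, 2))   # collector current in mA -> 2.35
print(round(VCE, 2))        # C-E voltage in V        -> 6.82
```

With these values the Q-point sits roughly mid-way between saturation and cutoff, which is the usual design goal.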



Fixed base biasing with emitter resistor
In fixed biasing we have seen that IC varies with β; this implies that replacing a defective
transistor results in a different β, and the Q-point fluctuates. One way of improving this is to
connect a resistor RE to the emitter terminal of the circuit, as shown in the figure below.



Taking KVL from ground to ground, we have

VCC − IB RB − IE RE − VBE = 0

But IE = (β + 1) IB, so

VCC = IB [RB + (β + 1) RE] + VBE
IB = (VCC − VBE) / [RB + (1 + β) RE] ≈ (VCC − VBE) / (RB + β RE)
IC = β IB = (VCC − VBE) / (RB/β + RE) ≈ (VCC − VBE) / RE, with the condition RE >> RB/β

Under this condition IC is independent of β, but RE must be very large, and we would then have to
maintain a very large VCC to keep the current at the desired value. The other possibility is to keep
RB small, but this decreases the voltage drop across RB to the extent that it becomes less than the
voltage drop across RC, i.e. the CB junction becomes forward biased.
Example 7: For the emitter bias network of Figure below, determine:
(a) IB.
(b) IC.
(c) VCE.
(d) VC.
(e) VE.
(f) VB.
(g) VBC.
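Example 7's figure is likewise not reproduced; a minimal sketch of the emitter-stabilized analysis with assumed values (VCC = 20 V, RB = 430 kΩ, RC = 2 kΩ, RE = 1 kΩ, β = 50 are illustrative assumptions, not the figure's values):

```python
# Emitter-stabilized bias: IB = (VCC - VBE) / (RB + (beta + 1)*RE).
# All component values are assumed for illustration.
VCC, VBE = 20.0, 0.7
RB, RC, RE = 430e3, 2e3, 1e3
beta = 50.0

IB = (VCC - VBE) / (RB + (beta + 1) * RE)
IC = beta * IB
IE = (beta + 1) * IB
VCE = VCC - IC * RC - IE * RE      # KVL around the output loop

print(round(IB * 1e6, 2))   # -> 40.12 uA
print(round(IC * 1e3, 3))   # -> 2.006 mA
print(round(VCE, 2))        # -> 13.94 V
```

Because (β + 1)RE appears in the denominator alongside RB, a change in β moves IB in the opposite direction, partially cancelling the effect on IC — the stabilization discussed above.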



CHAPTER FOUR
FIELD EFFECT TRANSISTOR
4.1. Introduction
Field Effect Transistors (FETs) are three terminal electronic devices used for variety of
applications, mostly similar to BJTs, such as amplifiers, electronic switches and impedance
matching circuits. However, the field effect transistor differs from bipolar junction transistor in
the following important characteristics.
 In FETs an electric field is established to control the conduction path of the output
device, without the need for direct contact between the controlling and controlled
quantities.
 Their operation depends upon the flow of majority carriers only (one polarity); hence the
FET is a unipolar device.
 They exhibit high input impedance, typically many megaohms.
 FETs are less sensitive to temperature variations and, because of their construction,
they are more easily integrated on ICs.
 FETs are also generally more sensitive to static electricity than BJTs.

4.2. Types of FET


There are two types of field effect transistors. These are the Junction Field Effect Transistor (JFET)
and the Metal-Oxide-Semiconductor Field Effect Transistor (MOSFET).
4.2.1. Junction Field Effect Transistors (JFET)
The basic Structure of junction field effect transistor is formed from a bar of n/p semiconductor
material called channel with a region of p/n material embedded in each side. The n-channel JFET
is constructed by the n-type material that forms the channel between the embedded layers of p-
type material. The top of the n-type channel is connected through an ohmic contact to a terminal
referred to as the drain (D), while the lower end of the same material is connected through an
ohmic contact to a terminal referred to as the source (S). The two p-type materials are connected
together and to the gate (G) terminal as shown in figure 4.1(a). In the same way, the p-channel
JFET is constructed by the p-type material that forms the channel between the embedded layers
of n-type material as shown in figure 4.1(b). In practice, the channel is always more lightly doped
than the gate. In the absence of any applied potentials the JFET has two p-n junctions under no-bias



conditions. The result is a depletion region at each junction that resembles the same region of a
diode under no-bias conditions.

Figure 4.1: Junction Field Effect Transistors basic construction and their symbol
Operating characteristics of JFET
To demonstrate the i-v characteristics of JFET lets use the following n-channel JFET circuit
layout shown in figure 4.2. For normal operation of JFET the two junctions made between the
channel and the two gates should be reverse biased. As can be seen from the circuit diagram
there are two possible conditions to control the variation of channel current, either changing the
voltage level of VGG or VDD. Depending on this there are two operating conditions.
The p-channel FET is similar to the n-channel except that the voltage polarities and current
directions are reversed. Regarding response time, since electrons are more mobile than holes,
p-channel FETs respond noticeably more slowly than n-channel FETs.

Figure 4.2: n-channel JFET circuit connection


Case 1: VGS = 0, VDS increasing to some positive value



A positive voltage VDS has been applied across the channel and the gate has been connected
directly to the source to establish the condition VGS = 0 V. The instant the voltage VDS is applied,
the electrons will be drawn to the drain terminal, establishing the conventional current ID with
the defined direction. The path of charge flow clearly reveals that the drain and source currents
are equivalent (ID = IS). For the first few volts of VDS, the current increases as determined by
Ohm's law (see fig. 4.3a). Further increase in VDS, however, begins to make the depletion region
near the drain wider than at the source end, because the relative voltage level near the drain is
greater. This causes the channel resistance to increase (see fig. 4.3(b)). As VDS increases and
becomes large enough for the two depletion regions to touch near the drain, pinch-off occurs and
there is no further increase in ID. At this point ID maintains the saturation level, defined as IDSS,
and the corresponding voltage is called the pinch-off voltage VP. In this region the JFET can act
as a constant-current source (see fig. 4.3(c)).

Case 2: VGS < 0 (VGS varying)


As VGS becomes more negative, the width of the depletion region increases uniformly across the
channel causing an increase in channel resistance. See fig. 4.4 (b). At this condition, the effect of
varying VDS is to establish depletion regions similar to those obtained with VGS=0V but a lower



level of VDS is required to reach the saturation level. If VGS is made negative enough that the
two depletion regions pinch off the channel completely, the device turns off and any change in VDS
produces no current. See fig. 4.4 (c).

Therefore, the result of applying a negative bias to the gate is to reach the saturation level at a
lower level of VDS. The resulting saturation level for ID has been reduced and in fact will
continue to decrease as VGS is made more and more negative. The region to the right of the
pinch-off locus on the fig. 4.5 is the region typically employed in linear amplifiers (amplifiers
with min distortion of the applied signal) and is commonly referred to as the constant-current,
saturation, or linear amplification region. The region to the left of the pinch-off locus of the figure
is referred to as the ohmic or voltage-controlled resistance region. In this region the JFET can
actually be employed as a variable resistor (possibly for an automatic gain control system)
whose resistance is controlled by the applied gate-to-source voltage. In the ohmic region JFET
can be used as variable resistors of value given as

rd = ro / (1 − VGS/VP)²



Where ro is the resistance of the channel before applying VGS (VGS=0) and Vp is the pinch-off
voltage.
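A small sketch of this variable-resistance relation; the values of ro and VP below are assumed for illustration:

```python
# Ohmic-region channel resistance: rd = ro / (1 - VGS/VP)^2.
# ro = 1 kOhm (resistance at VGS = 0) and VP = -6 V are assumed values.
def rd(ro, VGS, VP):
    return ro / (1 - VGS / VP) ** 2

ro, VP = 1e3, -6.0
print(rd(ro, 0.0, VP))           # VGS = 0 gives rd = ro      -> 1000.0
print(rd(ro, -3.0, VP))          # halfway to pinch-off: 4*ro -> 4000.0
```

Note how the resistance rises rapidly as VGS approaches VP, which is what makes the ohmic-region JFET useful as a voltage-controlled resistor.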

Figure 4.5: current voltage relationships curve

Transfer Characteristics
In a JFET the relationship between VGS (input) and ID (output) is a little more complicated, and is
given by Shockley's equation:

ID = IDSS (1 − VGS/VP)²

where VP is the pinch-off voltage.

The transfer characteristics defined by Shockley‘s equation are unaffected by the network in
which the device is employed. The transfer curve can be obtained using Shockley‘s equation as
shown in Fig. 4.6.

Figure 4.6: Transfer characteristics curve


4.2.2. Metal Oxide Field Effect Transistors (MOSFETs)
MOSFETs have characteristics similar to those of JFETs, with the added feature that their
characteristics extend into the region of opposite VGS polarity, which makes them very useful.
There are two types of MOSFET: the Depletion-Type and the Enhancement-Type
MOSFET.
4.2.2.1. Depletion-Type MOSFET
The basic construction of the n-channel depletion-type MOSFET is provided in Fig.4.7. The
Drain (D) and Source (S) are connected to the n-doped regions. These n-doped regions are
connected via an n-channel. This n-channel is connected to the Gate (G) via a thin insulating
layer of SiO2. The n-doped material lies on a p-doped substrate that may have an additional
terminal connection called SS.

Figure 4.7: Construction of n channel Depletion type MOSFET and symbol


Operational Characteristics of Depletion-type MOSFET
If VGS is set to zero and VDS is made to increase, the effect will be to establish a current
similar to that established through the channel of the JFET. But if VGS is made increasingly
negative, it will tend to push electrons toward the p-type substrate and attract holes from the p-type
substrate as shown in Fig. 4.8. Depending on the magnitude of the negative bias established by
VGS, a level of recombination between electrons and holes will occur that will reduce the number
of free electrons in the n-channel available for conduction. The more negative the bias, the
higher the rate of recombination. The resulting level of drain current is therefore reduced with
increasing negative bias for VGS as shown. This is called depletion mode operation.
For positive values of VGS, the positive gate will draw additional electrons (free carriers) from
the p-type substrate due to the reverse leakage current, and new carriers are established through
the collisions between accelerating particles. The drain current therefore increases as the
gate-to-source voltage continues to increase in the positive direction. This is called enhancement
mode operation.



Figure 4.8: Operation of depletion-type MOSFETs
Figure 4.9: Operating characteristics of n-channel depletion-type MOSFETs
4.2.2.2. Enhancement-Type MOSFET
Although there are some similarities in construction and mode of operation between depletion-
type and enhancement-type MOSFETs, the characteristics of the enhancement-type MOSFET
are quite different from anything obtained thus far.
The Drain (D) and Source (S) connect to the n-doped regions. The Gate (G) connects to the p-
doped substrate via a thin insulating layer of SiO2. There is no channel. The n-doped material
lies on a p-doped substrate that may have an additional terminal connection called SS.

Figure 4.10: n-Channel enhancement-type MOSFET and symbol


Basic Operation
If VGS is set at 0 V and a voltage applied between the drain and source of the device, the absence
of an n-channel (with its generous number of free carriers) will result in a current of effectively
zero amperes—quite different from the depletion-type MOSFET and JFET where ID=IDSS.



The enhancement-type MOSFET operates only in the enhancement mode. Hence VGS is always
positive, and as VGS increases, ID increases. But if VGS is kept constant and VDS is increased,
then ID saturates once the saturation level VDSsat is reached.

Figure 4.11: Drain characteristics of an n-channel enhancement-type MOSFET


To determine ID for a given VGS:

ID = k (VGS − VT)²

where VT is the threshold voltage, i.e. the voltage at which the MOSFET turns on, and k is a
constant that can be determined by using the formula

k = ID(on) / (VGS(on) − VT)²

in which (ID(on), VGS(on)) is a specified operating point from the device data. VDSsat can also be
calculated as

VDSsat = VGS − VT
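A minimal numeric sketch of these enhancement-MOSFET relations; the threshold voltage and the (ID(on), VGS(on)) point used to find k are assumed values, not taken from the text:

```python
# Enhancement MOSFET sketch: ID = k*(VGS - VT)^2 for VGS > VT.
# VT and the datasheet-style point (ID_on at VGS_on) are assumed values.
VT = 2.0                      # assumed threshold voltage (V)
ID_on, VGS_on = 4e-3, 6.0     # assumed operating point used to find k

k = ID_on / (VGS_on - VT) ** 2   # k = 4 mA / (4 V)^2 = 0.25 mA/V^2

def ID(VGS):
    return 0.0 if VGS <= VT else k * (VGS - VT) ** 2

VGS = 5.0
print(k)                 # constant in A/V^2, about 2.5e-4
print(ID(VGS) * 1e3)     # drain current at VGS = 5 V, about 2.25 mA
print(VGS - VT)          # VDSsat = 3 V
```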

Example1: Given IDSS = 9 mA and VP=3.5 V, determine ID when:


(a) VGS = 0 V.
(b) VGS=-2 V.
(c) VGS=-3.5 V.
(d) VGS=-5 V.
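Example 1 can be worked directly with Shockley's equation. Note that for this n-channel device the pinch-off voltage enters the equation as VP = −3.5 V (so that ID = 0 at VGS = −3.5 V), and ID is taken as zero beyond pinch-off:

```python
# Shockley's equation applied to Example 1: IDSS = 9 mA, pinch-off at
# VGS = -3.5 V for this n-channel JFET. Beyond pinch-off ID is zero.
IDSS, VP = 9e-3, -3.5

def ID(VGS):
    if VGS <= VP:            # at or beyond pinch-off: channel closed
        return 0.0
    return IDSS * (1 - VGS / VP) ** 2

for vgs in (0.0, -2.0, -3.5, -5.0):
    print(vgs, round(ID(vgs) * 1e3, 3))   # ID in mA
# (a) 9.0 mA  (b) about 1.653 mA  (c) 0 mA  (d) 0 mA
```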
4.3. Biasing techniques
There are different biasing techniques for FET circuits; some commonly used ones are fixed bias,
self-bias and voltage-divider bias.
4.3.1. Fixed-bias configuration
Consider the following simplest biasing configuration circuit for the n-channel JFET,

Figure 4.12: Fixed-bias configuration. Figure 4.13: Network for dc analysis.



The coupling capacitors (C1 and C2) are open circuits for the dc analysis, as shown in figure
4.13; they would be short circuits for the ac analysis. For the dc analysis, since IG ≈ 0 A there is
no voltage drop across RG, which is why RG is replaced by a short circuit in the figure.


Applying Kirchhoff's voltage law in the clockwise direction of the indicated loop of Fig. 4.13
results in

−VGG − VGS = 0,  so  VGS = −VGG

Since VGG is a fixed dc supply, the voltage VGS is fixed in magnitude, resulting in the name
"fixed-bias configuration." The drain current ID is then controlled by Shockley's equation:

ID = IDSS (1 − VGS/VP)²
The level of ID is then determined graphically: the fixed level of VGS is superimposed as a
vertical line at VGS = −VGG, as shown in figure 4.14 below.

Figure 4.14: Finding the solution for the fixed-bias configuration
Hence, the solution for a fixed-bias configuration is the intersection of the two curves in the
above figure, commonly referred to as the quiescent point or simply Q-point. Note in the figure
that the Q-point level of ID is found by drawing a horizontal line from the intersection of the two
curves across to the ID axis.
The drain-to-source voltage of the output section can be determined by applying Kirchhoff's
voltage law:

VDD − ID RD − VDS = 0,  so  VDS = VDD − ID RD

Note from figure 4.13 that, with the source grounded, the source, drain, and gate voltages with
respect to ground are

VS = 0 V
VD = VDS + VS = VDS = VDD − ID RD
VG = VGS + VS = VGS


Example 2: Determine the following for the network of Figure shown below.
(a) VGSQ.
(b) IDQ.
(c) VDS.
(d) VD.
(e) VG.
(f) VS.



4.3.2. Self-bias configuration
Here a resistor RS is introduced in the source leg of the configuration, which is used to determine
the controlling gate-to-source voltage (VGS). This is shown in the following figure.



Figure 4.15: JFET self-bias configuration. Figure 4.16: DC analysis of the self-bias
configuration.

Replacing the capacitors (C1 and C2) with open circuits and RG with a short circuit (since
IG = 0 A, there is no drop across RG) results in the network for dc analysis shown in figure 4.16
above.
The current through RS is the source current IS; but IS = ID, and therefore

VRS = ID RS
For the indicated loop of figure 4.16,

−VGS − VRS = 0,  so  VGS = −VRS = −ID RS

Note in this case that VGS is a function of the output current ID and is not fixed in magnitude as
occurred for the fixed-bias configuration.
The solution of a self-bias configuration is obtained by substituting VGS = −ID RS into the drain
current equation:

ID = IDSS (1 − VGS/VP)² = IDSS (1 + ID RS/VP)²

Solving this quadratic equation in ID gives the appropriate (physical) solution.
The graphical analysis can also be used to determine the operating point, which is the
intersection of the device characteristic curve and the straight line drawn from the
equation VGS = −ID RS, as shown in the following figure.

Figure 4.17: Finding the solution for the self-bias configuration

Applying Kirchhoff's voltage law to the output circuit, the level of VDS can also be determined:

VDS = VDD − ID (RD + RS)

In addition, VS = ID RS, VG = 0 V, and VD = VDS + VS.



Example 3: Determine the following for the network of Figure shown below.
(a) VGSQ.
(b) IDQ.
(c) VDS.
(d) VS.
(e) VG.
(f) VD.
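Example 3's figure values are not reproduced here; the following sketch solves the self-bias quadratic for an assumed device (IDSS = 8 mA, VP = −6 V) with an assumed RS = 1 kΩ:

```python
# Self-bias solution: substituting VGS = -ID*RS into Shockley's equation,
#   IDSS*(1 + ID*RS/VP)^2 = ID,
# gives a quadratic a*ID^2 + b*ID + c = 0. The physical root satisfies
# 0 <= ID <= IDSS. Device and resistor values are assumed for illustration.
import math

IDSS, VP, RS = 8e-3, -6.0, 1e3

a = IDSS * (RS / VP) ** 2
b = 2 * IDSS * RS / VP - 1
c = IDSS

disc = b * b - 4 * a * c
ID = (-b - math.sqrt(disc)) / (2 * a)   # smaller root is the physical one
VGS = -ID * RS

print(round(ID * 1e3, 3))   # about 2.588 mA
print(round(VGS, 2))        # about -2.59 V
```

The larger root of the quadratic exceeds IDSS and is discarded; the smaller root is exactly the graphical intersection of the transfer curve with the line VGS = −ID RS.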



BULE HORA UNIVERSITY

COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING

Module for introduction to computing

PREPARED BY

Wogderes S.

BULE HORA, ETHIOPIA

JUNE 2023

UNIT-I

INTRODUCTION TO COMPUTERS AND COMPUTER SYSTEMS

A computer is an electronic device that stores, manipulates, and retrieves data. We can also say
that a computer processes the information supplied to it and generates data. A system is a group of
several objects with a process. For example, an educational system involves teachers and students
(objects); the teacher teaches the subject to the students, i.e., teaching (process). Similarly, a computer
system has objects and processes.

What are the main components of a computer?

These are the seven major components of a computer that you need to know about:

Motherboard
Central Processing Unit (CPU)
Graphical Processing Unit (GPU)
Random Access Memory (RAM)
Storage device
Input Unit
Output Unit

1. Motherboard
A motherboard is a circuit board through which all the different components of a computer
communicate, and it keeps everything together. The input and output devices are plugged into the
motherboard in order to function.

2. Output Unit

The result of the command we provide the computer with through the input device is called the
output. The most used is the monitor since we give commands using the keyboard and after the
processing, the result or outcome is displayed on the monitor.

3. Input Unit

Computers respond to commands given to them in the form of numbers, alphabets, images, etc.
through input units or devices like – keyboards, joysticks, etc.
These inputs are then processed and converted to computer language and then the response is
the output in the language that we understand or the one we have programmed the computer
with.

4. Graphics Processing Unit (GPU)

Another vital component of the computer is the GPU. The Graphics Processing Unit, or video card,
helps generate high-end visuals like the ones in video games.
Good graphics like these are also helpful for people who execute their work through images,
like 3D modelers and other users of resource-intensive software. It generally communicates
directly with the monitor.

5. The Random Access Memory (RAM)

RAM is one of the most commonly discussed components in a computer.

The RAM is also known as volatile memory, since its contents are erased every time the computer
restarts.
It stores the data of frequently accessed programs and of programs currently being processed.
It helps programs start up and close quickly.

6. Storage Unit

Computers need to store all their data, and they have either a Hard Disk Drive (HDD) or a Solid
State Drive (SSD) for this purpose.
Hard disk drives are disks that store data which is read by a mechanical arm. Solid-state drives
use flash memory, similar to the memory cards in mobile phones.
They have no moving parts and are faster than hard drives, since no mechanical arm has to seek
out the data's physical location on the drive.

7. CENTRAL PROCESSING UNIT

The Central Processing Unit (CPU) is the primary component of a computer that acts as its “control
center.” The CPU, also referred to as the “central” or “main” processor, is a complex set of
electronic circuitry that runs the machine's operating system and apps.

The computer's central processing unit (CPU) is the portion of a computer that retrieves and
executes instructions. The CPU is essentially the brain of the computer. It consists of an
arithmetic and logic unit (ALU), a control unit, and various registers. The CPU is often simply
referred to as the processor.

CPUs perform the logic, control, arithmetic, input, and output operations specified by their
programming in order to carry out basic tasks.

The following are the objects of the computer system:

a) User ( A person who uses the computer)

b) Hardware

c) Software

Hardware: The hardware of a computer system can be referred to as anything which we can touch
and feel. Example: keyboard and mouse.

The hardware of a computer system can be classified as

Input Devices(I/P)

Processing Devices (CPU)

Output Devices(O/P)

ALU: It performs the Arithmetic and Logical Operations such as

+, -, *, / (Arithmetic Operators)

&&, || (Logical Operators)

CU: Every Operation such as storing, computing and retrieving the data should be governed by
the control unit.

MU: The Memory unit is used for storing the data.

The Memory unit is classified into two types.

They are 1) Primary Memory

2) Secondary Memory

Primary memory: The following types of memory are treated as primary.

ROM: Read Only Memory stores data and instructions even when the computer is turned off. The
contents of ROM cannot be modified once they are written. It is used to store the BIOS information.

RAM: Random Access Memory stores data and instructions while the computer is turned on. The
contents of RAM can be modified any number of times by instructions. It is used to store the
programs under execution.

Cache memory: It is used to store the data and instructions most recently referenced by the processor.

Secondary Memory: The following are the different kinds of secondary storage.

Magnetic Storage: Magnetic storage devices store information that can be read, erased and
rewritten a number of times. Example: floppy disks, hard disks, magnetic tapes.

Optical Storage: Optical storage devices use laser beams to read and write stored data.
Example: CD (Compact Disk), DVD (Digital Versatile Disk)

COMPUTER SOFTWARE

The software of a computer system refers to what we can see in operation but cannot touch.
Example: Windows, icons. Computer software is divided into two broad categories: system software
and application software. System software manages the computer's resources and provides the
interface between the hardware and the users. Application software, on the other hand, is directly
responsible for helping users solve their problems.

System Software

System software consists of programs that manage the hardware resources of a computer and
perform required information processing tasks. These programs are divided into three classes: the
operating system, system support, and system development.

Difference between Machine, Assembly, and High-Level Languages

Feature            Machine                  Assembly                  High Level
Form               0's and 1's              Mnemonic codes            Normal English
Machine dependent  Dependent                Dependent                 Independent
Translator         Not needed               Needed (Assembler)        Needed (Compiler)
Execution time     Less                     Less                      High
Languages          Only one                 Different for different   Different languages
                                            manufacturers
Nature             Difficult                Difficult                 Easy
Memory space       Less                     Less                      More

Language Translators: These are the programs which are used for converting programs written in
one language into machine language instructions, so that they can be executed by the computer.

1) Compiler: It is a program which is used to convert high-level language programs into
machine language.

2) Assembler: It is a program which is used to convert assembly-level language programs into
machine language.

3) Interpreter: It is a program that takes one statement of a high-level language program at a
time, translates it into a machine language instruction, immediately executes the resulting
instruction, and then continues in this way statement by statement.

Comparison between a Compiler and Interpreter

COMPILER                                              INTERPRETER

A compiler is used to compile an entire program,      An interpreter is used to translate each line of
and an executable program is generated through        the program code immediately as it is entered.
the object program.

The executable program is stored on a disk for        The executable program is generated in RAM,
future use or to run it on another computer.          and the interpreter is required for each run of
                                                      the program.

Compiled programs run faster.                         Interpreted programs run slower.

Most languages use compilers.                         Very few languages use interpreters.

CREATING AND RUNNING PROGRAMS

The following is the procedure for turning a program written in C into machine language. The process
is presented in a straightforward, linear fashion, but you should recognize that these steps are
repeated many times during development to correct errors and make improvements to the code. The
four steps in this process are:

1) Writing and editing the program

2) Compiling the program

3) Linking the program with the required modules

4) Executing the program

Sl. No.   Phase         Name of Code      Tools                         File Extension

1         Text Editor   Source Code       Editors (Edit, Notepad etc.)  .C
2         Compiler      Object Code       C Compiler                    .OBJ
3         Linker        Executable Code   C Compiler                    .EXE
4         Runner        Executable Code   C Compiler                    .EXE

ALGORITHM

Algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values
may be, an algorithm terminates after executing a finite number of instructions. We represent an
algorithm using a pseudo language that is a combination of the constructs of a programming
language together with informal English statements.

The ordered set of instructions required to solve a problem is known as an algorithm.

The characteristics of a good algorithm are:

•Precision – the steps are precisely stated (defined).

•Uniqueness – results of each step are uniquely defined and only depend on the input

and the result of the preceding steps.

•Finiteness – the algorithm stops after a finite number of instructions are executed.

•Input – the algorithm receives input.

•Output – the algorithm produces output.

•Generality – the algorithm applies to a set of inputs.

FLOWCHART
Flowchart is a diagrammatic representation of an algorithm. Flowchart is very helpful in writing
program and explaining program to others.
Symbols Used in Flowchart
Different symbols are used for different states in a flowchart; for example, input/output and
decision making have different symbols. The table below describes all the symbols that are used in
making a flowchart.

KEYWORDS
C++ keywords are words that convey a special meaning to the compiler. Keywords
cannot be used as variable names. A partial list of C++ keywords (shared with C) is given below:
auto break case char const
continue default do double else
enum extern float for goto
if int long register return
short signed sizeof static struct
switch typedef union unsigned void
volatile while

IDENTIFIERS
Identifiers are used as the general terminology for the names of variables, functions and arrays.
These are user defined names consisting of arbitrarily long sequence of letters and digits with
either a letter or the underscore (_) as a first character. There are certain rules that should be
followed while naming C++ identifiers:

➢ They must begin with a letter or underscore (_).


➢ They must consist of only letters, digits, or underscore. No other special character is
allowed.
➢ It should not be a keyword.
➢ It must not contain white space.

EVALUATION OF EXPRESSION
At first, the expressions within parenthesis are evaluated. If no parenthesis is present, then the
arithmetic expression is evaluated from left to right. There are two priority levels of operators in
C.
High priority: * / %
Low priority: + -
The evaluation procedure of an arithmetic expression includes two left to right passes through
the entire expression. In the first pass, the high priority operators are applied as they are
encountered and in the second pass, low priority operations are applied as they are encountered.
Suppose, we have an arithmetic expression as:
x = 9 – 12 / 3 + 3 * 2 – 1
This expression is evaluated in two left to right passes as:
First Pass
Step 1: x = 9-4 + 3 * 2 – 1
Step 2: x = 9 – 4 + 6 – 1
Second Pass
Step 1: x = 5 + 6 – 1
Step 2: x = 11 – 1
Step 3: x = 10
But when parenthesis is used in the same expression, the order of evaluation gets changed.
For example,
x = 9 – 12 / (3 + 3) * (2 – 1)
When parentheses are present then the expression inside the parenthesis are evaluated first from

left to right. The expression is now evaluated in three passes as:

First Pass

Step 1: x = 9 – 12 / 6 * (2 – 1)

Step 2: x= 9 – 12 / 6 * 1

Second Pass

Step 1: x= 9 – 2 * 1

Step 2: x = 9 – 2

Third Pass

Step 1: x = 7
There may even arise a case where nested parentheses are present (i.e. parenthesis inside
parenthesis). In such case, the expression inside the innermost set of parentheses is evaluated
first and then the outer parentheses are evaluated.
For example, we have an expression as:
x = 9 – ((12 / 3) + 3 * 2) – 1
The expression is now evaluated as:
First Pass:
Step 1: x = 9 – (4 + 3 * 2) – 1
Step 2: x= 9 – (4 + 6) – 1
Step 3: x= 9 – 10 -1
Second Pass
Step 1: x= - 1 – 1
Step 2: x = -2
Note: The number of evaluation steps is equal to the number of operators in the arithmetic
expression

UNIT-II

CONTROL STRUCTURES, ARRAYS, AND STRINGS

✓ Conditional Statements

➢ IF Statement

➢ Switch Statement

✓ Looping Statements

➢ The ‘while’ Statement

➢ The ‘for’ Statement

➢ The ‘do…while’ Statement

✓ Other Statements

➢ The ‘continue’ Statement

➢ The ‘break’ Statement

➢ The ‘goto’ Statement

➢ The ‘return’ Statement

✓ The following example demonstrates that correctly specifying the order in which the
actions execute is important.

✓ Consider the "rise-and-shine algorithm" followed by one executive for getting out of bed
and going to work:

1) get out of bed;

2) take off pyjamas;

3) take a shower;

4) get dressed;

5) eat breakfast;

6) go to work.

✓ Like many other procedural languages, C++ provides different forms of statements for
different purposes.

➢ Declaration statements are used for defining variables.

➢ Assignment statements are used for simple, algebraic computations.

➢ Branching statements are used for specifying alternate paths of execution,


depending on the outcome of a logical condition.

➢ Loop statements are used for specifying computations, which need to be repeated
until a certain logical condition is satisfied.

➢ Flow control statements are used to divert the execution path to another part of
the program.

✓ Specifying the order in which statements (actions) execute in a program is called program
control.

✓ Program control or control statement can be :

➢ Conditional (Selection) Statement (if, if…else, switch)

➢ Looping (Repetition )Statement(while, do…while, for-loop)

➢ Branching Statement(break, continue, return)

✓ If Statement:

➢ It is sometimes desirable to make the execution of a statement dependent upon a


condition being satisfied.

➢ Syntax:

if (expression)

statement;

➢ For example:

if (count != 0)

average = sum / count;

✓ Switch Statement:

➢ The switch statement provides a way of choosing between a set of alternatives,


based on the value of an expression.

➢ Syntax:

switch (expression) {

case constant1:

    statements;

...

default:

    statements;
}


➢ For example, suppose we have parsed a binary arithmetic operation into its three
components and stored these in variables operator, operand1, and operand2.

➢ The following switch statement performs the operation and stores the result in
result.

✓ The ‘while’ Statement:

➢ The while statement (also called while loop) provides a way of repeating a
statement while a condition holds. It is one of the three flavours of iteration in C++.

➢ The general form is:

while (expression)              or    while (expression) {
    statement;                            statement1;
                                          statement2;
                                      }


➢ For example, suppose we wish to calculate the sum of all numbers from 1 to 10.

➢ This can be expressed as:

i = 1;

sum = 0;

while (i <= 10) {

    sum += i;

    ++i;   // without this increment the loop would never terminate

}

✓ The ‘for’ Statement:

➢ The for statement (also called for loop) is similar to the while statement, but has
two additional components: an expression which is evaluated only once before
everything else (initialization), and an expression which is evaluated once at the
end of each iteration (increment/decrement).

➢ The general form is:

for (exp1; exp2; exp3)          or    for (exp1; exp2; exp3) {
    statement;                            statements;
                                      }

➢ The general for loop is equivalent to the following while loop:

expression1;

while (expression2) {

statement;

expression3; }

➢ for example, calculates the sum of all integers from 1 to n.

sum = 0;

for (i = 1; i <= n; ++i)

sum += i;

➢ Any of the three expressions in a for loop may be empty.

➢ For example, removing the first and the third expression gives us something identical to a
while loop:

for (; i != 0;) // is equivalent to: while (i != 0)

something; something;

➢ Removing all the expressions gives us an infinite loop.

➢ This loop's condition is assumed to be always true:

for (;;) // infinite loop

something;

✓ The ‘do…while’ Statement:

➢ The do statement (also called do loop) is similar to the while statement, except that
its body is executed first and then the loop condition is examined.

➢ Syntax:

do                              or    do {
    statement;                            statements;
while (expression);                   } while (expression);

➢ For example, suppose we wish to repeatedly read a value and print its square, and stop
when the value is zero.

➢ This can be expressed as the following loop:

do {

cin >> n;

cout << n * n << '\n';

} while (n != 0);

✓ The ‘continue’ Statement:

➢ The continue statement terminates the current iteration of a loop and instead jumps
to the next iteration.

➢ It applies to the loop immediately enclosing the continue statement.

➢ It is an error to use the continue statement outside a loop.


➢ For example, a loop which repeatedly reads in a number, processes it but ignores
negative numbers, and terminates when the number is zero, may be expressed as:

do {

cin >> num;

if (num < 0) continue;

// process num here...

} while (num != 0);

✓ The ‘break’ Statement:

➢ A break statement may appear inside a loop (while, do, or for) or a switch
statement.

➢ It causes a jump out of these constructs, and hence terminates them.

➢ Like the continue statement, a break statement only applies to the loop or switch
immediately enclosing it.

➢ It is an error to use the break statement outside a loop or a switch.


➢ For example, suppose we wish to read in a user password, but would like to allow
the user a limited number of attempts:

for (i = 0; i < attempts; ++i) {

    cout << "Please enter your password: ";

    cin >> password;

    if (password == 123) // check password for correctness

        break; // drop out of the loop

    cout << "Incorrect!\n";
}

✓ The ‘goto’ Statement:

➢ The goto statement provides the lowest-level of jumping.

➢ It has the general form:

goto label;

where label is an identifier which marks the jump destination of goto.

➢ The label should be followed by a colon and appear before a statement within the
same function as the goto statement itself.

➢ For example, the role of the break statement in the for loop in the previous section
can be emulated by a goto:

for (i = 0; i < attempts; ++i) {

    cout << "Please enter your password: ";

    cin >> password;

    if (Verify(password)) // check password for correctness

        goto out; // drop out of the loop

    cout << "Incorrect!\n";
}

out:

//etc...

✓ The ‘return’ Statement :

➢ The return statement enables a function to return a value to its caller.

➢ It has the general form:

return expression; where expression denotes the value returned by the function.

➢ The type of this value should match the return type of the function.


➢ For a function whose return type is void, expression should be empty:

return;

➢ The only function we have discussed so far is main, whose return type is always
int.

➢ The return value of main is what the program returns to the operating system when
it completes its execution.


➢ Under UNIX, for example, it is conventional to return 0 from main when the
program executes without errors. Otherwise, a non-zero error code is returned. For
example:

int main (void)
{

    cout << "Hello World\n";

    return 0;
}


➢ When a function has a non-void return value (as in the above example), failing to
return a value will result in a compiler warning.

➢ The actual return value will be undefined in this case (i.e., it will be whatever value
which happens to be in its corresponding memory location at the time).

UNIT-III

Array and Strings in C++

✓ Variables in a program have values associated with them.

✓ During program execution these values are accessed by using the identifier associated
with the variable in expressions etc.

✓ If only a few values were involved, a different identifier could be declared for each variable,
but then a loop could not be used to enter the values.

✓ Using a loop, and assuming that after a value has been entered and used no further use will
be made of it, allows the following code to be written. This code enters six numbers and
outputs their sum:

✓ An array is a data structure which allows a collective name to be given to a group of


elements which all have the same type.

✓ An individual element of an array is identified by its own unique index (or subscript).

✓ The index must be an integer and indicates the position of the element in the array.

✓ Thus the elements of an array are ordered by the index.

✓ Types of array one dimensional and multi dimensional array.

 An array declaration is very similar to a variable declaration.

 Syntax :

Type Array_name[Number_of_elements];

 The number of elements must be an integer.

 For example data on the average temperature over the year in Ethiopia for each of the last
100 years could be stored in an array declared as follows:

float annual_temp [100];

 The first element in an array always has the index 0, and if the array has n elements the
last element will have the index n-1.

 An array element is accessed by writing the identifier of the array followed by the
subscript in square brackets.

 Thus to set the 15th element of the array above to 1.5 the following assignment is used:

annual_temp[14] = 1.5; Note that since the first element is at index 0, then the ith element
is at index i-1.

 Hence in the above the 15th element has index 14.

 loop statements are the usual means of accessing every element in an array.

 Here, the first NE elements of the array annual_temp are given values from the input stream
cin.

for (i = 0; i < NE; i++) cin >> annual_temp[i];

 The following code finds the average temperature recorded in the first ten elements of the
array.

sum = 0.0;

for (i = 0; i <10; i++)

sum += annual_temp[i];

av1 = sum / 10;

 The assignment operator cannot be applied to array variables:

const int SIZE = 10;

int x [SIZE] ;

int y [SIZE] ;

x=y; // Error - Illegal

✓ Only individual elements can be assigned to using the index operator, e.g., x[1] = y[2];.

 To make all elements in 'x' the same as those in 'y' (equivalent to assignment), a loop has
to be used.

 An array may have more than one dimension.

 Each dimension is represented as a subscript in the array.

 Therefore a two dimensional array has two subscripts, a three dimensional array has three
subscripts, and so on.

 Arrays can have any number of dimensions, although most of the arrays that you create
will likely be of one or two dimensions.

 A chess board is a good example of a two-dimensional array. One dimension represents


the eight rows, the other dimension represents the eight columns.

 Syntax

Type Multi_arrayName [size1][size2];

 To initialize a multidimensional array, you must assign the list of values to array elements
in order, with the last array subscript changing fastest while the first subscript holds steady.

 The program initializes this array by writing

int theArray[5][3] ={ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};

 for the sake of clarity, the program could group the initializations with braces, as shown
below.

int theArray[5][3] = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9}, {10, 11, 12}, {13, 14,15} };

 If a one-dimensional array is initialized, the size can be omitted as it can be found from the
number of initializing elements:

int x[] = { 1, 2, 3, 4} ;

✓ This initialization creates an array of four elements.

 Note however:

int x[][] = { {1,2}, {3,4} }; // error – not allowed (only the first dimension may be omitted)

✓ and must be written

int x[2][2] = { {1,2}, {3,4} } ;

String Representation and Manipulation

 String is nothing but a sequence of characters in which the last character is the null
character ‘\0’.

 The null character indicates the end of the string.

 Any array of character can be converted into string type in C++ by appending this special
character at the end of the array sequence.

 Syntax

char StringName[size];

 For example a string variable s1 could be declared as follows:

char s1[10];

 The string variable s1 could hold strings of length up to nine characters since space is
needed for the final null character.

 Strings can be initialized at the time of declaration just as other variables are initialized.
For example:

char s1[] = "example";

char s2[20] = "another example";

✓ would store the two strings as follows:

s1 |e|x|a|m|p|l|e|\0|

s2 |a|n|o|t|h|e|r| |e|x|a|m|p|l|e|\0|?|?|?|?|

✓ Note that the length of a string does not include the terminating null character.

✓ A string is output by sending it to an output stream, for example:

cout << "The string s1 is " << s1 << endl;

✓ would print

The string s1 is example

✓ When the input stream cin is used space characters, newline etc. are used as separators
and terminators.

✓ Thus when inputting numeric data cin skips over any leading spaces and terminates
reading a value when it finds a white-space character (space, tab, newline etc. ).

✓ This same system is used for the input of strings, hence a string to be input cannot start
with leading spaces, also if it has a space character in the middle then input will be
terminated on that space character.

✓ To read a string with several words in it using cin we have to call cin once for each
word.

✓ For example to read in a name in the form of a First_name followed by a Last_name we


might use code as follows:

char firstname [12], lastname[12];

cout << "Enter name ";

cin >> firstname ;

cin >> lastname;

cout << "The name entered was "

<< firstname << " "

<< lastname;

Reading multiple lines

✓ We have solved the problem of reading strings with embedded blanks, but what about
strings with multiple lines?

✓ It turns out that the cin.get() function can take a third argument to help out in this
situation. This argument specifies the character that tells the function to stop reading.

✓ The default value of this argument is the newline ('\n') character, but if you call the
function with some other character for this argument, the default will be overridden by
the specified character.

✓ In this example, we call the function with a dollar sign ('$') as the third argument

//reads multiple lines, terminates on '$' character

#include<iostream.h>

void main(){

const int max=80;

char str[max];

cout<<"\n Enter a string:\n";

cin.get(str, max, '$'); //terminates with $

cout << "\nYou entered:\n" << str; }


Copying string the easy way

✓ You can copy strings using the strcpy or strncpy functions.

✓ We assign strings by using the string copy function strcpy.

✓ The prototype for this function is in string.h.

strcpy(destination, source);

✓ strcpy copies characters from the location specified by source to the location specified
by destination.

✓ It stops copying characters after it copies the terminating null character.

✓ The return value is the value of the destination parameter.

✓ There is also another function strncpy, is like strcpy, except that it copies only a
specified number of characters.

strncpy(destination, source, int n);

✓ It may not copy the terminating null character.

Concatenating strings

✓ In C++ the + operator cannot normally be used to concatenate string, as it can in some
languages such as BASIC; that is you can't say: str3 = str1 + str2;

✓ You can use strcat() or strncat() instead.

✓ The function strcat concatenates (appends) one string to the end of another string.

strcat(destination, source);

 The function strncat is like strcat except that it copies only a specified number of
characters.

strncat(destination, source, int n);

 Unlike strncpy, strncat always appends a terminating null character to the result.

 Example (strncmp compares at most n characters of two strings, returning 0 when they are
equal over that range):

#include <iostream.h>

#include <string.h>

void main()

{

    cout << strncmp("abc", "def", 2) << endl;

    cout << strncmp("abc", "abcdef", 3) << endl;

    cout << strncmp("abc", "abcdef", 2) << endl;

    cout << strncmp("abc", "abcdef", 5) << endl;

    cout << strncmp("abc", "abcdef", 20) << endl;

}

UNIT IV

Explain the steps, tools and technical approaches involved in program design
STAGES / STEPS INVOLVED IN PROGRAMMING

1. Analyzing the Problem

In this stage the programmer studies the problem and identifies its inputs,
outputs and processing requirements. This analysis is very important because
it provides the basis for planning the program and for controlling the
potential difficulties that may arise.
2. Algorithm Design
In this stage all the instructions which are to be perform at different stages are
listed. These are in simple English language. We may call it as a strategy.
3. Flowchart
It is a graphical tool that shows the steps/stages which are to be executed in a
program. All the steps which are written in the second stage are now presented
in a diagrammatic manner so as to make it easily understandable.

4. Coding
In this step programmer writes the instructions in a computer language to
solve the problem.
5. Debugging
In this stage we remove all the errors in the program, because when we are
coding there are chances that some mistakes may occur. This is done several
times until all the errors are removed and the program becomes error-free.

6. Testing
In this stage we test the program by entering dummy data (includes usual,
unusual and invalid data) to check the behavior and result of the program
towards the given data.

7. Final Output
After going through all the above stages, the program is given the TRUE
DATA. Here the programmer expects the positive results of the program and
expects full efficiency of the program.
8. Documentation

Most programmers neglect this stage, giving many reasons, but it is very
important because it will help the programmer to correct problems that may
occur in the program later.

An algorithm is the step-wise logical instructions written in any human-understandable


language to solve a particular problem in a finite amount of time. It is written in simple English
language.

Steps used to develop algorithm are:


The problem has to be understood by the programmer.
The expected output has to be identified.
The logic that will produce the required output from the input has to be developed.
The algorithm should be tested for accuracy for a given set of input data.
The steps are repeated till the desired result is produced.

Characteristics of Algorithm
All the instructions of algorithm should be simple.
The logic of each steps must be clear.
There should be finite number of steps for solving problems.

UNIT V

Advanced Programming Concepts

There are four principles of object-oriented programming:


Encapsulation. Encapsulation is effectively the first step of object-oriented programming. It
groups related data variables (called properties) and functions (called methods) into single
units (called objects) to reduce source code complexity and increase its reusability.

Abstraction. Abstraction essentially contains and conceals the inner workings of object-
oriented programming code to create simpler interfaces.

Inheritance. Inheritance is object-oriented programming’s mechanism for eliminating


redundant code. It means that relevant properties and methods can be grouped into a single
object that can then be reused repeatedly – without repeating the code again and again.

Polymorphism. Polymorphism, meaning many forms, is the technique used in object-oriented


programming to render variables, methods, and objects in multiple forms.

Programming languages are typically split into two groups:


High-level languages. These are the languages that people are most familiar with, and are written
to be user-centric. High-level languages are typically written in English so that they are accessible
to many people for writing and debugging, and include languages such as Python, Java, C, C++,
SQL, and so on.
Low-level languages. These languages are machine-oriented – represented in 0 or 1 forms – and
include machine-level language and assembly language.

Self-test

1. Who is the father of Computers?

a) James Gosling

b) Charles Babbage

c) Dennis Ritchie

d) Bjarne Stroustrup

2. Which of the following is the correct abbreviation of COMPUTER?


a) Commonly Occupied Machines Used in Technical and Educational Research
b) Commonly Operated Machines Used in Technical and Environmental Research
c) Commonly Oriented Machines Used in Technical and Educational Research
d) Commonly Operated Machines Used in Technical and Educational Research

3. Which of the following computer language is written in binary codes only?

a) pascal

b) machine language

c) C

d) C#

4. Which of the following refers to the fastest, biggest and the most expensive computer?

A. Mainframe Computer

B. Supercomputer

C. Hybrid Computer

D. Micro Computer

5. Artificial Intelligence is an example of ?

A. Third Generation

B. Fourth Generation

C. Fifth Generation

D. All of the Above

6. Which of the following is the correct identifier?

a. $var_name
b. VAR_123
c. varname@
d. None of the above

7. Which of the following is the address operator?

a. @
b. #
c. &
d. %

9. Which of the following features must be supported by any programming language to


become a pure object-oriented programming language?

a. Encapsulation
b. Inheritance
c. Polymorphism
d. All of the above

10. The programming language that has the ability to create new data types is called___.

a. Overloaded
b. Encapsulated
c. Reprehensible
d. Extensible

11. Which of the following refers to characteristics of an array?

a. An array is a set of similar data items


b. An array is a set of distinct data items
c. An array can hold different types of datatypes
d. None of the above

12. Which type of approach is used by the C++ language?

a. Right to left
b. Left to right
c. Top to bottom
d. Bottom-up

13. Which one is not a valid identifier?

rdd2
x (5)
_DATE_
A3O

For more practice https://www.interviewbit.com/cpp-mcq/

Digital Logic Design
Introduction

A digital computer stores data in terms of digits (numbers) and proceeds in discrete steps from one state to the next.
The states of a digital computer typically involve binary digits which may take the form of the presence or absence of
magnetic markers in a storage medium, on-off switches or relays. In digital computers, even letters, words and whole
texts are represented digitally.

Digital Logic is the basis of electronic systems, such as computers and cell phones. Digital Logic is rooted in
binary code, a series of zeroes and ones each having an opposite value. This system facilitates the design of
electronic circuits that convey information, including logic gates. Digital Logic gate functions include and, or
and not. The value system translates input signals into specific output. Digital Logic facilitates computing,
robotics and other electronic applications.

Digital Logic Design is foundational to the fields of electrical engineering and computer engineering. Digital
Logic designers build complex electronic components that use both electrical and computational
characteristics. These characteristics may involve power, current, logical function, protocol and user input.
Digital Logic Design is used to develop hardware, such as circuit boards and microchip processors. This
hardware processes user input, system protocol and other data in computers, navigational systems, cell phones
or other high-tech systems.

By Israel W. Digital Logic Design. Page 1


Data Representation and Number system
Numeric systems

The numeric system we use daily is the decimal system, but this system is not convenient for machines since the
information is handled codified in the shape of on or off bits; this way of codifying takes us to the necessity of knowing
the positional calculation which will allow us to express a number in any base where we need it.

Radix number systems



A base of a number system or radix defines the range of values that a digit may have.

In the binary system or base 2, there can be only two values for each digit of a number, either a "0" or a "1".

In the octal system or base 8, there can be eight choices for each digit of a number:

"0", "1", "2", "3", "4", "5", "6", "7".

In the decimal system or base 10, there are ten different values for each digit of a number:

"0", "1", "2", "3", "4", "5", "6", "7", "8", "9".

In the hexadecimal system, we allow 16 values for each digit of a number:

"0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C", "D", "E", and "F".

Where “A” stands for 10, “B” for 11 and so on.

Conversion among radices


- Convert from Decimal to Any Base

Let’s think about what you do to obtain each digit. As an example, let's start with the decimal number 1234 and
re-derive its digits in base 10. To extract the last digit, you move the decimal point left by one digit, which means
that you divide the given number by its base, 10.

1234/10 = 123 + 4/10

The remainder of 4 is the last digit. To extract the next last digit, you again move the decimal point left by one digit and
see what drops out.

123/10 = 12 + 3/10

The remainder of 3 is the next last digit. You repeat this process until there is nothing left. Then you stop. In summary,
you do the following:



Quotient Remainder

1234/10 = 123 4 --------+


123/10 = 12 3 ------+ |
12/10 = 1 2 ----+ | |
1/10 = 0 1 --+ | | |(Stop when the quotient is 0)
| | | |
1 2 3 4 (Base 10)

Now, let's try a nontrivial example. Let's express a decimal number 1341 in binary notation. Note that the desired base is
2, so we repeatedly divide the given decimal number by 2.
Quotient Remainder

1341/2 = 670 1 ----------------------+


670/2 = 335 0 --------------------+ |
335/2 = 167 1 ------------------+ | |
167/2 = 83 1 ----------------+ | | |
83/2 = 41 1 --------------+ | | | |
41/2 = 20 1 ------------+ | | | | |
20/2 = 10 0 ----------+ | | | | | |
10/2 = 5 0 --------+ | | | | | | |
5/2 = 2 1 ------+ | | | | | | | |
2/2 = 1 0 ----+ | | | | | | | | |
1/2 = 0 1 --+ | | | | | | | | | |(Stop when the
| | | | | | | | | | | quotient is 0)
1 0 1 0 0 1 1 1 1 0 1 (BIN; Base 2)

Let's express the same decimal number 1341 in octal notation.


Quotient Remainder

1341/8 = 167 5 --------+


167/8 = 20 7 ------+ |
20/8 = 2 4 ----+ | |
2/8 = 0 2 --+ | | | (Stop when the quotient is 0)
| | | |
2 4 7 5 (OCT; Base 8)

Let's express the same decimal number 1341 in hexadecimal notation.


Quotient Remainder

1341/16 = 83 13 ------+
83/16 = 5 3 ----+ |
5/16 = 0 5 --+ | | (Stop when the quotient is 0)
| | |
5 3 D (HEX; Base 16)

In conclusion, the easiest way to convert fixed-point numbers to any base is to convert each part separately. We begin by
separating the number into its integer and fractional parts. The integer part is converted using the remainder method:
successively divide the number by the base until a quotient of zero is obtained. At each division the remainder is kept, and
the new number in base r is obtained by reading the remainders from the last remainder upwards.

The conversion of the fractional part is obtained by successively multiplying the fraction by the base. The integer part of
each product is the next significant digit; iterating on the remaining fraction yields successive digits. This method forms
the basis of the multiplication method of converting fractions between bases.
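The two procedures just summarized (the remainder method for the integer part and the multiplication method for the fractional part) can be sketched in Python; the function names and the digit alphabet below are illustrative, not part of any standard library:

```python
DIGITS = "0123456789ABCDEF"

def int_to_base(n, b):
    """Remainder method: divide by b until the quotient is 0,
    then read the remainders from last to first."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, b)        # quotient and remainder
        digits.append(DIGITS[r])
    return "".join(reversed(digits))

def frac_to_base(f, b, places=6):
    """Multiplication method: multiply the fraction by b;
    the integer part of each product is the next digit."""
    digits = []
    for _ in range(places):
        f *= b
        d = int(f)
        digits.append(DIGITS[d])
        f -= d
    return "".join(digits)

print(int_to_base(1341, 2))    # 10100111101
print(int_to_base(1341, 8))    # 2475
print(int_to_base(1341, 16))   # 53D
```

These reproduce the worked examples above for 1341 in binary, octal, and hexadecimal.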

Example. Convert the decimal number 3315 to hexadecimal notation. What about the hexadecimal equivalent of the
decimal number 3315.3?



Solution:
Quotient Remainder

3315/16 = 207 3 ------+


207/16 = 12 15 ----+ |
12/16 = 0 12 --+ | | (Stop when the quotient is 0)
| | |
C F 3 (HEX; Base 16)



             Product   Integer Part
0.3*16 =       4.8          4
0.8*16 =      12.8         12 (C)
0.8*16 =      12.8         12 (C)
0.8*16 =      12.8         12 (C)
  :             :           :

Reading the integer parts from the top down gives the fraction digits: 0.4CCC... (HEX)

Thus, 3315.3 (DEC) --> CF3.4CCC... (HEX)

- Convert From Any Base to Decimal

Let's think more carefully what a decimal number means. For example, 1234 means that there are four boxes (digits);
and there are 4 one's in the right-most box (least significant digit), 3 ten's in the next box, 2 hundred's in the next box,
and finally 1 thousand's in the left-most box (most significant digit). The total is 1234:

Original Number: 1 2 3 4
| | | |
How Many Tokens: 1 2 3 4
Digit/Token Value: 1000 100 10 1
Value: 1000 + 200 + 30 + 4 = 1234

or simply, 1*1000 + 2*100 + 3*10 + 4*1 = 1234

Thus, each digit has a value: 10^0=1 for the least significant digit, increasing to 10^1=10, 10^2=100, 10^3=1000, and so
forth.

Likewise, the least significant digit in a hexadecimal number has a value of 16^0=1, increasing to 16^1=16 for the next
digit, 16^2=256 for the next, 16^3=4096 for the next, and so forth. Thus, hexadecimal 1234 means that there are four
boxes (digits); and there are 4 one's in the right-most box (least significant digit), 3 sixteen's in the next box, 2 256's in
the next, and 1 4096's in the left-most box (most significant digit). The total is:

1*4096 + 2*256 + 3*16 + 4*1 = 4660

In summary, the conversion from any base to base 10 can be obtained from the formula

    X10 = Σ (i = −m to n−1) di · b^i

where b is the base, di the digit at position i, m the number of digits after the radix point, n the number of digits of the
integer part, and X10 the obtained number in decimal. This forms the basis of the polynomial method of converting
numbers from any base to decimal.

Example. Convert 234.14 expressed in an octal notation to decimal.

2*8^2 + 3*8^1 + 4*8^0 + 1*8^-1 + 4*8^-2 = 2*64 + 3*8 + 4*1 + 1/8 + 4/64 = 156.1875
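The polynomial method can be sketched in Python; `base_to_decimal` is an illustrative name, and digits are passed as plain integers (11 for B, and so on):

```python
def base_to_decimal(digits, base, frac_digits=()):
    """Polynomial method: sum d_i * base**i over all digit positions."""
    value = 0
    for d in digits:                      # integer part, most significant first
        value = value * base + d
    for i, d in enumerate(frac_digits, start=1):
        value += d / base**i              # fractional part
    return value

# 234.14 (octal) -> decimal
print(base_to_decimal([2, 3, 4], 8, [1, 4]))   # 156.1875

# 4B3 (hex) -> decimal
print(base_to_decimal([4, 11, 3], 16))         # 1203
```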



Example. Convert the hexadecimal number 4B3 to decimal notation. What about the decimal equivalent of the
hexadecimal number 4B3.3?

Solution:
Original Number: 4 B 3 . 3
| | | |
How Many Tokens: 4 11 3 3
Digit/Token Value: 256 16 1 0.0625
Value: 1024 +176 + 3 + 0.1875 = 1203.1875

Example. Convert 234.14 expressed in an octal notation to decimal, this time laying out the token values explicitly.

Solution:

Original Number: 2 3 4 . 1 4
| | | | |
How Many Tokens: 2 3 4 1 4
Digit/Token Value: 64 8 1 0.125 0.015625
Value: 128 + 24 + 4 + 0.125 + 0.0625 = 156.1875

- Relationship between Binary - Octal and Binary-hexadecimal

As demonstrated by the table below, there is a direct correspondence between the binary system and the octal system,
with three binary digits corresponding to one octal digit. Likewise, four binary digits translate directly into one
hexadecimal digit.

BIN OCT HEX DEC

0000 00 0 0
0001 01 1 1
0010 02 2 2
0011 03 3 3
0100 04 4 4
0101 05 5 5
0110 06 6 6
0111 07 7 7

1000 10 8 8
1001 11 9 9
1010 12 A 10
1011 13 B 11
1100 14 C 12
1101 15 D 13
1110 16 E 14
1111 17 F 15

With this relationship, in order to convert a binary number to octal, we partition the base-2 number into groups of three
starting from the radix point, and pad the outermost groups with 0’s as needed to form triples. Then, we convert each
triple to its octal equivalent.

For conversion from base 2 to base 16, we use groups of four.

Consider converting (10110)2 to base 8:

(10110)2 = (010)2 (110)2 = (2)8 (6)8 = (26)8

Notice that the leftmost group is padded with a 0 on the left in order to create a full triple.



Now consider converting (10110110)2 to base 16:

(10110110)2 = (1011)2 (0110)2 = (B)16 (6)16 = (B6)16

(Note that ‘B’ is a base-16 digit corresponding to decimal 11. B is not a variable.)

The conversion methods can be used to convert a number from any base to any other base, but it may not be very
intuitive to convert something like 513.03 to base 7. As an aid in performing an unnatural conversion, we can convert to
the more familiar base 10 form as an intermediate step, and then continue the conversion from base 10 to the target base.
As a general rule, we use the polynomial method when converting into base 10, and we use the remainder and
multiplication methods when converting out of base 10.
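The grouping procedure can be sketched in Python; `bin_to_base` is an illustrative name, and the bit string is assumed to contain only the integer part:

```python
def bin_to_base(bits, group):
    """Group a binary digit string from the right into chunks of
    `group` bits (3 for octal, 4 for hex) and convert each chunk."""
    pad = (-len(bits)) % group            # 0's needed on the left
    bits = "0" * pad + bits
    chunks = [bits[i:i + group] for i in range(0, len(bits), group)]
    return "".join("0123456789ABCDEF"[int(c, 2)] for c in chunks)

print(bin_to_base("10110", 3))      # 26  (octal)
print(bin_to_base("10110110", 4))   # B6  (hex)
```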

Numeric complements
The radix complement of an n-digit number y in radix b is, by definition, b^n − y. Adding this to x results in the value x +
b^n − y, or x − y + b^n. Assuming y ≤ x, the result will always be greater than b^n, and dropping the initial '1' is the same as
subtracting b^n, making the result x − y + b^n − b^n, or just x − y, the desired result.

The radix complement is most easily obtained by adding 1 to the diminished radix complement, which is (b^n − 1) − y.
Since b^n − 1 is the digit b − 1 repeated n times (because b^n − 1 = (b − 1)(b^(n−1) + b^(n−2) + ... + b + 1) = (b − 1)b^(n−1)
+ ... + (b − 1), see also binomial numbers), the diminished radix complement of a number is found by complementing
each digit with respect to b − 1 (that is, subtracting each digit in y from b − 1). Adding 1 to obtain the radix complement
can be done separately, but is most often combined with the addition of x and the complement of y.

In the decimal numbering system, the radix complement is called the ten's complement and the diminished radix
complement the nines' complement.

In binary, the radix complement is called the two's complement and the diminished radix complement the ones'
complement. The naming of complements in other bases is similar.

- Decimal example

To subtract a decimal number y from another number x using the method of complements, the ten's complement of y
(nines' complement plus 1) is added to x. Typically, the nines' complement of y is first obtained by determining the
complement of each digit. The complement of a decimal digit in the nines' complement system is the number that must
be added to it to produce 9. The complement of 3 is 6, the complement of 7 is 2, and so on. Given a subtraction
problem:

873 (x)
- 218 (y)
The nines' complement of y (218) is 781. In this case, because y is three digits long, this is the same as subtracting y
from 999. (The number of 9's is equal to the number of digits of y.)

Next, the sum of x, the nines' complement of y, and 1 is taken:

873 (x)
+ 781 (complement of y)
+ 1 (to get the ten's complement of y)
=====
1655

The first "1" digit is then dropped, giving 655, the correct answer.

If the subtrahend has fewer digits than the minuend, leading zeros must be added which will become leading nines when
the nines' complement is taken. For example:



48032 (x)
- 391 (y)
becomes the sum:
48032 (x)
+ 99608 (nines' complement of y)
+ 1 (to get the ten's complement)
=======
147641
Dropping the "1" gives the answer: 47641
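The decimal method of complements can be sketched in Python; the function name is illustrative, and y is assumed not to exceed x:

```python
def tens_complement_subtract(x, y, n_digits):
    """Subtract y from x (y <= x) by adding the ten's complement of y,
    then dropping the leading carry digit."""
    nines = int("9" * n_digits) - y       # nines' complement of y
    total = x + nines + 1                 # x + complement + 1
    return total - 10**n_digits           # drop the leading "1"

print(tens_complement_subtract(873, 218, 3))     # 655
print(tens_complement_subtract(48032, 391, 5))   # 47641
```

Both calls reproduce the worked subtractions above.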

- Binary example

The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained
by inverting each bit (changing '0' to '1' and vice versa). And adding 1 to get the two's complement can be done by
simulating a carry into the least significant bit. For example:

01100100 (x, equals decimal 100)


- 00010110 (y, equals decimal 22)

becomes the sum:


01100100 (x)
+ 11101001 (ones' complement of y)
+ 1 (to get the two's complement)
==========
101001110

Dropping the initial "1" gives the answer: 01001110 (equals decimal 78)
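The binary version can be sketched the same way; here the ones' complement is formed by inverting the bits under an n-bit mask:

```python
def twos_complement_subtract(x, y, n_bits):
    """Subtract y from x by adding the two's complement of y and
    discarding the carry out of the most significant bit."""
    mask = (1 << n_bits) - 1
    ones = ~y & mask                       # invert each bit of y
    return (x + ones + 1) & mask           # add 1, drop the carry

print(twos_complement_subtract(100, 22, 8))   # 78
```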

Signed fixed point numbers


Up to this point we have considered only the representation of unsigned fixed-point numbers. The situation is quite
different in representing signed fixed-point numbers. There are four different ways of representing signed numbers that
are commonly used: sign-magnitude, one’s complement, two’s complement, and excess notation. We will cover each in
turn, using integers for our examples. The Table below shows for a 3-bit number how the various representations
appear.

Decimal Unsigned Sign–Mag. 1’s Comp. 2’s Comp. Excess 4


7 111 – – – –
6 110 – – – –
5 101 – – – –
4 100 – – – –
3 011 011 011 011 111
2 010 010 010 010 110
1 001 001 001 001 101
+0 000 000 000 000 100
-0 – 100 111 000 100
-1 – 101 110 111 011
-2 – 110 101 110 010
-3 – 111 100 101 001
-4 – – – 100 000
Table1. 3-bit number representation



- Signed Magnitude Representation

The signed magnitude (also referred to as sign and magnitude) representation is most familiar to us as the base 10
number system. A plus or minus sign to the left of a number indicates whether the number is positive or negative as in
+1210 or −1210. In the binary signed magnitude representation, the leftmost bit is used for the sign, which takes on a
value of 0 or 1 for ‘+’ or ‘−’, respectively. The remaining bits contain the absolute magnitude.

Consider representing (+12)10 and (−12)10 in an eight-bit format:

(+12)10 = (00001100)2

(−12)10 = (10001100)2

The negative number is formed by simply changing the sign bit in the positive number from 0 to 1. Notice that there are
both positive and negative representations for zero: +0= 00000000 and -0= 10000000.

- One’s Complement Representation

The one’s complement operation is trivial to perform: convert all of the 1’s in the number to 0’s, and all of the 0’s to 1’s.
See the fourth column in Table1 for examples. We can observe from the table that in the one’s complement
representation the leftmost bit is 0 for positive numbers and 1 for negative numbers, as it is for the signed magnitude
representation. This negation, changing 1’s to 0’s and changing 0’s to 1’s, is known as complementing the bits.
Consider again representing (+12)10 and (−12)10 in an eight-bit format, now using the one’s complement
representation:

(+12)10 = (00001100)2

(−12)10 = (11110011)2

Note again that there are representations for both +0 and −0, which are 00000000 and 11111111, respectively. As a
result, there are only 2^8 − 1 = 255 different numbers that can be represented even though there are 2^8 different bit
patterns.

The one’s complement representation is not commonly used. This is at least partly due to the difficulty in making
comparisons when there are two representations for 0. There is also additional complexity involved in adding numbers.

- Two’s Complement Representation

The two’s complement is formed in a way similar to forming the one’s complement: complement all of the bits in the
number, but then add 1, and if that addition results in a carry-out from the most significant bit of the number, discard the
carry-out.

Examination of the fifth column of Table above shows that in the two’s complement representation, the leftmost bit is
again 0 for positive numbers and is 1 for negative numbers. However, this number format does not have the unfortunate
characteristic of signed-magnitude and one’s complement representations: it has only one representation for zero. To see
that this is true, consider forming the negative of (+0)10, which has the bit pattern: (+0)10 = (00000000)2

Forming the one’s complement of (00000000)2 produces (11111111)2 and adding 1 to it yields (00000000)2, thus (−0)10
= (00000000)2. The carry out of the leftmost position is discarded in two’s complement addition (except when detecting
an overflow condition). Since there is only one representation for 0, and since all bit patterns are valid, there are 2^8 =
256 different numbers that can be represented.

Consider again representing (+12)10 and (−12)10 in an eight-bit format, this time using the two’s complement
representation. Starting with (+12)10 =(00001100)2, complement, or negate the number, producing (11110011)2.



Now add 1, producing (11110100)2, and thus (−12)10 = (11110100)2:

(+12)10 = (00001100)2

(−12)10 = (11110100)2

There is an equal number of positive and negative numbers provided zero is considered to be a positive number, which
is reasonable because its sign bit is 0. The positive numbers start at 0, but the negative numbers start at −1, and so the
magnitude of the most negative number is one greater than the magnitude of the most positive number. The positive
number with the largest magnitude is +127, and the negative number with the largest magnitude is −128. There is thus
no positive number that can be represented that corresponds to the negative of −128. If we try to form the two’s
complement negative of −128, then we will arrive at a negative number, as shown below:

(−128)10   =   (10000000)2
complement     (01111111)2
add 1        + (00000001)2
               ———————————
               (10000000)2 = (−128)10

The two’s complement representation is the representation most commonly used in conventional computers.
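Encoding and decoding two's complement values can be sketched in Python; the function names are illustrative:

```python
def to_twos_complement(value, n_bits=8):
    """Encode a signed integer as an n-bit two's complement pattern."""
    return format(value & ((1 << n_bits) - 1), f"0{n_bits}b")

def from_twos_complement(bits):
    """Decode a two's complement pattern back to a signed integer."""
    n = int(bits, 2)
    if bits[0] == "1":                     # sign bit set: subtract 2^n
        n -= 1 << len(bits)
    return n

print(to_twos_complement(12))            # 00001100
print(to_twos_complement(-12))           # 11110100
print(from_twos_complement("10000000"))  # -128
```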

- Excess Representation

In the excess or biased representation, the number is treated as unsigned, but is “shifted” in value by subtracting the bias
from it. The concept is to assign the smallest numerical bit pattern, all zeros, to the negative of the bias, and assign the
remaining numbers in sequence as the bit patterns increase in magnitude. A convenient way to think of an excess
representation is that a number is represented as the sum of its two’s complement form and another number, which is
known as the “excess,” or “bias.” Once again, refer to Table 1, the rightmost column, for examples.

Consider again representing (+12)10 and (−12)10 in an eight-bit format but now using an excess 128 representation. An
excess 128 number is formed by adding 128 to the original number, and then creating the unsigned binary version. For
(+12)10, we compute (128 + 12 = 140)10 and produce the bit pattern (10001100)2. For (−12)10, we compute (128 + −12 =
116)10 and produce the bit pattern (01110100)2

(+12)10 = (10001100)2

(−12)10 = (01110100)2

Note that there is no numerical significance to the excess value: it simply has the effect of shifting the representation of
the two’s complement numbers.

There is only one excess representation for 0, since the excess representation is simply a shifted version of the two’s
complement representation. For the previous case, the excess value is chosen to have the same bit pattern as the largest
negative number, which has the effect of making the numbers appear in numerically sorted order if the numbers are
viewed in an unsigned binary representation.

Thus, the most negative number is (−128)10 = (00000000)2 and the most positive number is (+127)10 = (11111111)2.
This representation simplifies making comparisons between numbers, since the bit patterns for negative numbers have
numerically smaller values than the bit patterns for positive numbers. This is important for representing the exponents of
floating point numbers, in which exponents of two numbers are compared in order to make them equal for addition and
subtraction.

Choosing a bias:

The bias chosen is most often based on the number of bits (n) available for representing an integer. To get an
approximately equal distribution of true values above and below 0, the bias should be 2^(n−1) or 2^(n−1) − 1.
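An excess-128 encoder for an 8-bit field can be sketched as follows (illustrative names, assuming the bias of 128 used in the example above):

```python
BIAS = 128   # excess-128 for an 8-bit field

def to_excess(value):
    """Add the bias, then write the result as unsigned binary."""
    return format(value + BIAS, "08b")

def from_excess(bits):
    """Interpret the bits as unsigned, then subtract the bias."""
    return int(bits, 2) - BIAS

print(to_excess(12))    # 10001100
print(to_excess(-12))   # 01110100
```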



Floating point representation
Floating point is a numerical representation system in which a string of digits represent a real number. The name
floating point refers to the fact that the radix point (decimal point or more commonly in computers, binary point) can be
placed anywhere relative to the digits within the string. A floating point number is of the form a × b^n, where a is the fixed
point part, often referred to as the mantissa or significand of the number, b represents the base, and n the exponent. Thus a
floating point number can be characterized by a triple of numbers: sign, exponent, and significand.

- Normalization

A potential problem with representing floating point numbers is that the same number can be represented in different
ways, which makes comparisons and arithmetic operations difficult. For example, consider the numerically equivalent
forms shown below:

3584.1 × 10^0 = 3.5841 × 10^3 = .35841 × 10^4.

In order to avoid multiple representations for the same number, floating point numbers are maintained in normalized
form. That is, the radix point is shifted to the left or to the right and the exponent is adjusted accordingly until the radix
point is to the left of the leftmost nonzero digit. So the rightmost number above is the normalized one. Unfortunately,
the number zero cannot be represented in this scheme, so to represent zero an exception is made. The exception to this
rule is that zero is represented as all 0’s in the mantissa.

If the mantissa is represented as a binary, that is, base 2, number, and if the normalization condition is that there is a
leading “1” in the normalized mantissa, then there is no need to store that “1” and in fact, most floating point formats do
not store it. Rather, it is “chopped off” before packing up the number for storage, and it is restored when unpacking the
number into exponent and mantissa. This results in having an additional bit of precision on the right of the number, due
to removing the bit on the left. This missing bit is referred to as the hidden bit, also known as a hidden 1.

For example, if the mantissa in a given format is 1.1010 after normalization, then the bit pattern that is stored is 1010—
the left-most bit is truncated, or hidden.

Possible floating-point format.


In order to choose a possible floating-point format for a given computer, the programmer must take into consideration
the following:

The number of words used (i.e. the total number of bits used.)
The representation of the mantissa (2s complement etc.)
The representation of the exponent (biased etc.)
The total number of bits devoted for the mantissa and the exponent
The location of the mantissa (exponent first or mantissa first)

Because of the five points above, the number of ways in which a floating-point number may be represented is legion.
Many representations place the sign bit first, so that the leading bit represents the sign:

Sign Exponent Mantissa

- The IEEE standard for floating point

The IEEE (Institute of Electrical and Electronics Engineers) has produced a standard for floating point format arithmetic
in mini and microcomputers.(i.e. ANSI/IEEE 754-1985). This standard specifies how single precision (32 bit) , double
precision (64 bit) and Quadruple (128 bit) floating point numbers are to be represented, as well as how arithmetic should
be carried out on them.



General layout

The three fields in an IEEE 754 float

Sign Exponent Fraction

Binary floating-point numbers are stored in a sign-magnitude form where the most significant bit is the sign bit,
exponent is the biased exponent, and "fraction" is the significand without the most significant bit.

Exponent biasing

The exponent is biased by 2^(e−1) − 1, where e is the number of bits used for the exponent field (e.g. if e=8, then 2^(8−1) −
1 = 128 − 1 = 127). Biasing is done because exponents have to be signed values in order to be able to represent both
tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder.
To solve this, the exponent is biased before being stored by adjusting its value to put it within an unsigned range
suitable for comparison.

For example, to represent a number which has exponent of 17 in an exponent field 8 bits wide:

exponent field = 17 + 2^(8−1) − 1 = 17 + 127 = 144.

Single Precision

The IEEE single precision floating point standard representation requires a 32-bit word, which may be represented as
numbered from 0 to 31, left to right. The first bit is the sign bit, S, the next eight bits are the exponent bits, 'E', and the
final 23 bits are the fraction 'F':

S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF
0 1      8 9                     31
To convert decimal 17.15 to IEEE Single format:

Convert decimal 17 to binary 10001. Convert decimal 0.15 to the repeating binary fraction 0.00100110011... Combine
integer and fraction to obtain binary 10001.00100110011... Normalize the binary number to obtain 1.000100110011...×2^4.
Thus, M = 00010010011001100110011 (the hidden leading 1 is dropped) and E = 4 + 127 = 131 = 10000011.

The number is positive, so S=0. Align the values for M, E, and S in the correct fields.

0 10000011 00010010011001100110011

Note that if the exponent does not use the entire field allocated to it, there will be leading 0’s, while for the mantissa,
the 0’s are filled in at the end.
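The worked example can be checked with Python's struct module, which packs a value into its IEEE 754 single-precision bit pattern (assuming the platform uses IEEE 754 floats, as virtually all do); the helper name is illustrative:

```python
import struct

def float_to_ieee_bits(x):
    """Pack x as a big-endian IEEE 754 single and split the 32-bit
    pattern into sign, exponent, and fraction fields."""
    (word,) = struct.unpack(">I", struct.pack(">f", x))
    bits = format(word, "032b")
    return bits[0], bits[1:9], bits[9:]

s, e, f = float_to_ieee_bits(17.15)
print(s, e, f)   # 0 10000011 00010010011001100110011
```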

Double Precision

The IEEE double precision floating point standard representation requires a 64-bit word, which may be represented as
numbered from 0 to 63, left to right. The first bit is the sign bit, S, the next eleven bits are the exponent bits, 'E', and the
final 52 bits are the fraction 'F':

S EEEEEEEEEEE FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
0 1        11 12                                                  63



Quad Precision

The IEEE Quad precision floating point standard representation requires a 128-bit word, which may be represented as
numbered from 0 to 127, left to right. The first bit is the sign bit, S, the next fifteen bits are the exponent bits, 'E', and the
final 112 bits are the fraction 'F':

S EEEEEEEEEEEEEEE FFFFFFFFFFFFFFFFFFFFFF...FFFFFFFFFFFFFFFFFFFFFF (112 bits)
0 1            15 16                                           127

                       Single   Double   Quadruple

No. of sign bits          1        1          1
No. of exponent bits      8       11         15
No. of fraction bits     23       52        112
Total bits used          32       64        128
Bias                    127     1023      16383

Table2 Basic IEEE floating point format

Binary code

Internally, digital computers operate on binary numbers. When interfacing to humans, digital processors, e.g. pocket
calculators, communication is decimal-based. Input is done in decimal then converted to binary for internal processing.
For output, the result has to be converted from its internal binary representation to a decimal form. Digital systems
represent and manipulate not only binary numbers but also many other discrete elements of information.

- Binary Coded Decimal

In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each
digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for
printing or display and faster decimal calculations. Its drawbacks are the increased complexity of circuits needed to
implement mathematical operations and a relatively inefficient encoding. It occupies more space than a pure binary
representation. In BCD, a digit is usually represented by four bits which, in general, represent the
values/digits/characters 0-9.

To BCD-encode a decimal number using the common encoding, each decimal digit is stored in a four-bit nibble.

Decimal: 0 1 2 3 4 5 6 7 8 9

BCD: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001

Thus, the BCD encoding for the number 127 would be:

0001 0010 0111

The position weights of the BCD code are 8, 4, 2, 1. Other codes (shown in the table) use position weights of 8, 4, -2, -1
and 2, 4, 2, 1.

An example of a non-weighted code is the excess-3 code, where each digit code is obtained from its binary equivalent
after adding 3. Thus, the code of decimal 0 is 0011, that of 6 is 1001, etc.



Decimal   8421    8 4 -2 -1   2421     Excess-3
Digit     code      code      code       code
0 0000 0000 0000 0011
1 0001 0111 0001 0100
2 0010 0110 0010 0101
3 0011 0101 0011 0110
4 0100 0100 0100 0111
5 0101 1011 1011 1000
6 0110 1010 1100 1001
7 0111 1001 1101 1010
8 1000 1000 1110 1011
9 1001 1111 1111 1100

It is very important to understand the difference between the conversion of a decimal number to binary and the binary
coding of a decimal number. In each case, the final result is a series of bits. The bits obtained from conversion are
binary digits. Bits obtained from coding are combinations of 1’s and 0’s arranged according to the rules of the code used,
e.g. the binary conversion of 13 is 1101; the BCD coding of 13 is 00010011.
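BCD encoding can be sketched in a few lines of Python; `to_bcd` is an illustrative name and takes the number as a string of decimal digits:

```python
def to_bcd(decimal_string):
    """Encode each decimal digit as its own 4-bit group (8421 BCD)."""
    return " ".join(format(int(d), "04b") for d in decimal_string)

print(to_bcd("127"))   # 0001 0010 0111
print(to_bcd("13"))    # 0001 0011  (coding, not conversion: bin(13) is 1101)
```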

- Error-Detection Codes

Binary information may be transmitted through some communication medium, e.g., using wires or wireless media. A
corrupted bit will have its value changed from 0 to 1 or vice versa. To be able to detect errors at the receiver end, the
sender sends an extra bit (parity bit) with the original binary message.

A parity bit is an extra bit included with the n-bit binary message to make the total number of 1’s in this message
(including the parity bit) either odd or even. If the parity bit makes the total number of 1’s an odd (even) number, it is
called odd (even) parity. The table shows the required odd (even) parity for a 3-bit message

Three-Bit Message Odd Parity Bit Even Parity Bit


X Y Z P P
0 0 0 1 0
0 0 1 0 1
0 1 0 0 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 0 1

No error is detectable if the transmitted message has 2 bits in error since the total number of 1’s will remain even (or
odd) as in the original message.

In general, a transmitted message with even number of errors cannot be detected by the parity bit.
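Computing a parity bit can be sketched in Python; the message is taken as a bit string, and the function name is illustrative:

```python
def parity_bit(bits, even=True):
    """Return the parity bit that makes the total number of 1's
    even (even parity) or odd (odd parity)."""
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

print(parity_bit("011", even=True))   # 0  (already an even number of 1's)
print(parity_bit("011", even=False))  # 1
```

These match the 3-bit message table above.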

- Gray code

The Gray code consists of 16 4-bit code words to represent the decimal numbers 0 to 15. In the Gray code, successive
code words differ by only one bit from one to the next.



Gray Code Decimal Equivalent
0000 0
0001 1
0011 2
0010 3
0110 4
0111 5
0101 6
0100 7
1100 8
1101 9
1111 10
1110 11
1010 12
1011 13
1001 14
1000 15
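The table above is the reflected binary Gray code, which can be generated with the well-known XOR-shift formula g = i ^ (i >> 1); `gray` is an illustrative name:

```python
def gray(i):
    """Reflected binary Gray code: adjacent values differ in one bit."""
    return format(i ^ (i >> 1), "04b")

print([gray(i) for i in range(4)])   # ['0000', '0001', '0011', '0010']
```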

Character Representation

Even though many people used to think of computers as "number crunchers", people figured out long ago that it's just as
important to handle character data.

Character data isn't just alphabetic characters, but also numeric characters, punctuation, spaces, etc. Most keys on the
central part of the keyboard (except shift, caps lock) are characters. Characters need to be represented. In particular, they
need to be represented in binary. After all, computers store and manipulate 0's and 1's (and even those 0's and 1's are just
abstractions; the implementation is typically voltages).

Unsigned binary and two's complement are used to represent unsigned and signed integers respectively, because they
have nice mathematical properties; in particular, you can add and subtract as you'd expect.

However, there aren't such properties for character data, so assigning binary codes for characters is somewhat arbitrary.
The most common character representation is ASCII, which stands for American Standard Code for Information
Interchange.

There are two reasons to use ASCII. First, we need some way to represent characters as binary numbers (or,
equivalently, as bitstring patterns). There's not much choice about this since computers represent everything in binary.

If you've noticed a common theme, it's that we need representation schemes for everything. However, most importantly,
we need representations for numbers and characters. Once you have that (and perhaps pointers), you can build up
everything you need.

The other reason we use ASCII is because of the letter "S" in ASCII, which stands for "standard". Standards are good
because they allow for common formats that everyone can agree on.

Unfortunately, there's also the letter "A", which stands for American. ASCII is clearly biased for the English language
character set. Other languages may have their own character set, even though English dominates most of the computing
world (at least, programming and software).

Even though character sets don't have mathematical properties, there are some nice aspects about ASCII. In particular,
the lowercase letters are contiguous ('a' through 'z' maps to 97 through 122). The upper case letters are also
contiguous ('A' through 'Z' maps to 65 through 90). Finally, the digits are contiguous ('0' through '9' maps to 48
through 57).



Since they are contiguous, it's usually easy to determine whether a character is lowercase or uppercase (by checking if
the ASCII code lies in the range of lower or uppercase ASCII codes), or to determine if it's a digit, or to convert a digit
in ASCII to an integer value.

ASCII Code (Decimal)


0 nul 16 dle 32 sp 48 0 64 @ 80 P 96 ` 112 p
1 soh 17 dc1 33 ! 49 1 65 A 81 Q 97 a 113 q
2 stx 18 dc2 34 " 50 2 66 B 82 R 98 b 114 r
3 etx 19 dc3 35 # 51 3 67 C 83 S 99 c 115 s
4 eot 20 dc4 36 $ 52 4 68 D 84 T 100 d 116 t
5 enq 21 nak 37 % 53 5 69 E 85 U 101 e 117 u
6 ack 22 syn 38 & 54 6 70 F 86 V 102 f 118 v
7 bel 23 etb 39 ' 55 7 71 G 87 W 103 g 119 w
8 bs 24 can 40 ( 56 8 72 H 88 X 104 h 120 x
9 ht 25 em 41 ) 57 9 73 I 89 Y 105 i 121 y
10 nl 26 sub 42 * 58 : 74 J 90 Z 106 j 122 z
11 vt 27 esc 43 + 59 ; 75 K 91 [ 107 k 123 {
12 np 28 fs 44 , 60 < 76 L 92 \ 108 l 124 |
13 cr 29 gs 45 - 61 = 77 M 93 ] 109 m 125 }
14 so 30 rs 46 . 62 > 78 N 94 ^ 110 n 126 ~
15 si 31 us 47 / 63 ? 79 O 95 _ 111 o 127 del

The characters between 0 and 31 are generally not printable (control characters, etc). 32 is the space character. Also
note that there are only 128 ASCII characters. This means only 7 bits are required to represent an ASCII character.
However, since the smallest size representation on most computers is a byte, a byte is used to store an ASCII character.
The Most Significant Bit (MSB) of an ASCII character is 0.

ASCII Code (Hex)

00 nul 10 dle 20 sp 30 0 40 @ 50 P 60 ` 70 p
01 soh 11 dc1 21 ! 31 1 41 A 51 Q 61 a 71 q
02 stx 12 dc2 22 " 32 2 42 B 52 R 62 b 72 r
03 etx 13 dc3 23 # 33 3 43 C 53 S 63 c 73 s
04 eot 14 dc4 24 $ 34 4 44 D 54 T 64 d 74 t
05 enq 15 nak 25 % 35 5 45 E 55 U 65 e 75 u
06 ack 16 syn 26 & 36 6 46 F 56 V 66 f 76 v
07 bel 17 etb 27 ' 37 7 47 G 57 W 67 g 77 w
08 bs 18 can 28 ( 38 8 48 H 58 X 68 h 78 x
09 ht 19 em 29 ) 39 9 49 I 59 Y 69 i 79 y
0a nl 1a sub 2a * 3a : 4a J 5a Z 6a j 7a z
0b vt 1b esc 2b + 3b ; 4b K 5b [ 6b k 7b {
0c np 1c fs 2c , 3c < 4c L 5c \ 6c l 7c |
0d cr 1d gs 2d - 3d = 4d M 5d ] 6d m 7d }
0e so 1e rs 2e . 3e > 4e N 5e ^ 6e n 7e ~
0f si 1f us 2f / 3f ? 4f O 5f _ 6f o 7f del

The difference in the ASCII code between an uppercase letter and its corresponding lowercase letter is 20 in hexadecimal
(32 in decimal). This makes it easy to convert lower to uppercase (and back) in hex (or binary).
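The contiguity of the ASCII ranges, and the hex-20 case difference, can be exploited directly; the helper names below are illustrative:

```python
def to_upper(ch):
    """Clear the 0x20 bit to map a lowercase ASCII letter to uppercase."""
    return chr(ord(ch) & ~0x20) if "a" <= ch <= "z" else ch

def is_digit(ch):
    """A character is a digit if its code lies in the contiguous run 48-57."""
    return 48 <= ord(ch) <= 57

print(to_upper("g"))   # G
print(is_digit("7"))   # True
```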

Other Character Codes

While ASCII is still popularly used, another character representation that was used (especially at IBM) was EBCDIC,
which stands for Extended Binary Coded Decimal Interchange Code (yes, the word "code" appears twice). This
character set has mostly disappeared. EBCDIC does not store characters contiguously, so this can create problems
alphabetizing "words".



One problem with ASCII is that it's biased to the English language. This generally creates some problems. One common
solution is for people in other countries to write programs in ASCII.

Other countries have used different solutions, in particular, using 8 bits to represent their alphabets, giving up to 256
letters, which is plenty for most alphabet-based languages (recall you also need to represent digits, punctuation, etc).

However, Asian languages, which are word-based, rather than character-based, often have more words than 8 bits can
represent. In particular, 8 bits can only represent 256 words, which is far smaller than the number of words in natural
languages.

Thus, a new character set called Unicode is now becoming more prevalent. This is a 16 bit code, which allows for about
65,000 different representations. This is enough to encode the popular Asian languages (Chinese, Korean, Japanese,
etc.). It also turns out that ASCII codes are preserved. What does this mean? To convert ASCII to Unicode, take all one
byte ASCII codes, and zero-extend them to 16 bits. That should be the Unicode version of the ASCII characters.
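The zero-extension can be observed in Python by encoding the same character as ASCII and as big-endian UTF-16 (a sketch; UTF-16 here stands in for the 16-bit Unicode encoding described above):

```python
# 'A' is 0x41 in ASCII and 0x0041 in 16-bit Unicode:
# the one-byte ASCII code zero-extended to two bytes.
ascii_byte    = 'A'.encode('ascii')      # b'\x41'
unicode_bytes = 'A'.encode('utf-16-be')  # b'\x00\x41'
print(ascii_byte.hex(), unicode_bytes.hex())  # 41 0041
```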

The biggest consequence of moving from ASCII to Unicode is that text files double in size. The second consequence is that
endianness begins to matter again. Endianness is the ordering of individually addressable sub-units (words, bytes, or
even bits) within a longer data word stored in external memory. The most typical cases are the ordering of bytes within a
16-, 32-, or 64-bit word, where endianness is often simply referred to as byte order. The usual contrast is between most
versus least significant byte first, called big-endian and little-endian respectively.

Big-endian places the most significant bit, digit, or byte in the first, or leftmost, position. Little-endian places the most
significant bit, digit, or byte in the last, or rightmost, position. Motorola processors employ the big-endian approach,
whereas Intel processors take the little-endian approach. Table below illustrates how the decimal value 47,572 would be
expressed in hexadecimal and binary notation (two octets) and how it would be stored using these two methods.
Table: Endianness

            Number              Big-Endian          Little-Endian
Hexadecimal B9D4                B9 D4               D4 B9
Binary      10111001 11010100   10111001 11010100   11010100 10111001

With single bytes, there's no need to worry about endianness. However, you have to consider that with two byte
quantities.
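Both byte orders can be reproduced with Python's struct module, using the 16-bit value from the table above (0xB9D4 = 47,572) — a sketch:

```python
import struct

value = 0xB9D4  # decimal 47,572
big    = struct.pack('>H', value)  # most significant byte first
little = struct.pack('<H', value)  # least significant byte first
print(big.hex())     # b9d4  (big-endian)
print(little.hex())  # d4b9  (little-endian)
```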

While C and C++ still primarily use ASCII, Java has already used Unicode. This means that Java must create a byte
type, because char in Java is no longer a single byte. Instead, it's a 2 byte Unicode representation.

Exercise
1. The state of a 12-bit register is 010110010111. What is its content if it represents:

i. Decimal digits in BCD code ii. Decimal digits in excess-3 code

iii. Decimal digits in 8-4-2-1 code iv. A natural binary number

2. The results of an experiment fall in the range -4 to +6. A scientist wishes to read the results into a computer and then
process them. He decides to use a 4-bit binary code to represent each of the possible inputs. Devise a 4-bit binary code
for representing numbers in the range of -4 to 6.

3. The (r-1)'s complement of a base-6 number is called the 5's complement. Explain the procedure for obtaining the 5's
complement of base-6 numbers, and obtain the 5's complement of (3210)6.

4. Design a three-bit code to represent each of the six digits of the base 6 number system.

5. Represent the decimal number -234.125 using the IEEE 32-bit (single-precision) format.



Binary Logic

Introduction

Binary logic deals with variables that take on discrete values and with operators that have logical meaning.

While each logical element or condition must always have a logic value of either "0" or "1", we also need to have
ways to combine different logical signals or conditions to provide a logical result.

For example, consider the logical statement: "If I move the switch on the wall up, the light will turn on." At first
glance, this seems to be a correct statement. However, if we look at a few other factors, we realize that there's more
to it than this. In this example, a more complete statement would be: "If I move the switch on the wall up and the
light bulb is good and the power is on, the light will turn on."

If we look at these two statements as logical expressions and use logical terminology, we can reduce the first
statement to:

Light = Switch

This means nothing more than that the light will follow the action of the switch, so that when the switch is
up/on/true/1 the light will also be on/true/1. Conversely, if the switch is down/off/false/0 the light will also be
off/false/0.

Looking at the second version of the statement, we have a slightly more complex expression:

Light = Switch and Bulb and Power

When we deal with logical circuits (as in computers), we not only need to deal with logical functions; we also need
some special symbols to denote these functions in a logical diagram. There are three fundamental logical
operations, from which all other functions, no matter how complex, can be derived. These functions are named
and, or, and not. Each of these has a specific symbol and a clearly-defined behavior.

AND. The AND operation is represented by a dot (.) or by the absence of an operator; e.g., x.y=z and xy=z are both read as
x AND y=z. The logical operation AND is interpreted to mean that z=1 if and only if x=1 and y=1; otherwise z=0.

OR. The OR operation is represented by a + sign. For example, x+y=z is interpreted as x OR y=z, meaning that z=1 if x=1 or
y=1, or if both x=1 and y=1. If both x and y are 0, then z=0.

NOT. This operation is represented by a prime (or an overbar). For example, x′=z is interpreted as NOT x = z, meaning that z
is the complement of x: if x=1 then z=0, and if x=0 then z=1.

It should be noted that although the AND and OR operations have some similarity to multiplication and addition,
respectively, in binary arithmetic, an arithmetic variable may consist of many digits, whereas
a binary logic variable is always 0 or 1.

e.g. in binary arithmetic, 1+1=10, while in binary logic 1+1=1.

Basic Gates

The basic building blocks of a computer are called logical gates or just gates. Gates are basic circuits that have at least
one (and usually more) input and exactly one output. Input and output values are the logical values true and false. In
computer architecture it is common to use 0 for false and 1 for true. Gates have no memory. The value of the output
depends only on the current value of the inputs. A useful way of describing the relationship between the inputs of gates



and their output is the truth table. In a truth table, the value of each output is tabulated for every possible combination of
the input values.

We usually consider three basic kinds of gates, and-gates, or-gates, and not-gates (or inverters).

- The AND Gate

The AND gate implements the AND function. With the gate shown to the left, both inputs must have logic 1 signals
applied to them in order for the output to be a logic 1. With either input at logic 0, the output will be held to logic 0.

The truth table for an and-gate with two inputs looks like this:

x y | z

0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
There is no limit to the number of inputs that may be applied to an AND function, so there is no functional limit to
the number of inputs an AND gate may have. However, for practical reasons, commercial AND gates are most
commonly manufactured with 2, 3, or 4 inputs. A standard Integrated Circuit (IC) package contains 14 or 16 pins, for
practical size and handling. A standard 14-pin package can contain four 2-input gates, three 3-input gates, or two 4-
input gates, and still have room for two pins for power supply connections.

- The OR Gate

The OR gate is sort of the reverse of the AND gate. The OR function, like its verbal counterpart, allows the output to
be true (logic 1) if any one or more of its inputs are true. Verbally, we might say, "If it is raining OR if I turn on the
sprinkler, the lawn will be wet." Note that the lawn will still be wet if the sprinkler is on and it is also raining. This is
correctly reflected by the basic OR function.

In symbols, the OR function is designated with a plus sign (+). In logical diagrams, the symbol below designates the
OR gate.

The truth table for an or-gate with two inputs looks like this:

x y | z

0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
As with the AND function, the OR function can have any number of inputs. However, practical commercial OR gates
are mostly limited to 2, 3, and 4 inputs, as with AND gates.



- The NOT Gate, or Inverter

The inverter is a little different from AND and OR gates in that it always has exactly one input as well as one output.
Whatever logical state is applied to the input, the opposite state will appear at the output.

The truth table for an inverter looks like this:

x | y

0 | 1
1 | 0

The NOT function, as it is called, is necessary in many applications and highly useful in others. A practical verbal
application might be:

The door is NOT locked = You may enter

In the inverter symbol, the triangle actually denotes only an amplifier, which in digital terms means that it "cleans up"
the signal but does not change its logical sense. It is the circle at the output which denotes the logical inversion. The
circle could have been placed at the input instead, and the logical meaning would still be the same
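The three basic gates can be modeled as one-line Python functions over the values 0 and 1 (a sketch using bitwise operators, not tied to any hardware description language):

```python
# The three fundamental gates over 0/1 values.
def AND(x, y): return x & y
def OR(x, y):  return x | y
def NOT(x):    return 1 - x

# Reproduce the truth tables above.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, '|', AND(x, y), OR(x, y))
print('NOT:', NOT(0), NOT(1))  # NOT: 1 0
```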

Combined gates

Sometimes, it is practical to combine functions of the basic gates into more complex gates, for instance in order to save
space in circuit diagrams. In this section, we show some such combined gates together with their truth tables.

- The NAND-gate

The NAND-gate is an and-gate with an inverter on the output. So instead of drawing several gates like this:

We draw a single and-gate with a little ring on the output like this:

The nand-gate, like the and-gate can take an arbitrary number of inputs.

The truth table for the nand-gate is like the one for the and-gate, except that all output values have been inverted:



x y | z

0 0 | 1
0 1 | 1
1 0 | 1
1 1 | 0
The truth table clearly shows that the NAND operation is the complement of the AND operation.

- The nor-gate

The nor-gate is an or-gate with an inverter on the output. So instead of drawing several gates like this:

We draw a single or-gate with a little ring on the output like this:

The nor-gate, like the or-gate can take an arbitrary number of inputs.

The truth table for the nor-gate is like the one for the or-gate, except that all output values have been inverted:

x y | z

0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 0

- The exclusive-or-gate

The exclusive-or-gate is similar to an or-gate. It can have an arbitrary number of inputs, and its output value is 1 if and
only if exactly one input is 1 (and thus the others 0). Otherwise, the output is 0.

We draw an exclusive-or-gate like this:

The truth table for an exclusive-or-gate with two inputs looks like this:



x y | z

0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

- The exclusive-nor-gate

The exclusive-nor-gate is an exclusive-or-gate with an inverter on the output. Its output value is 1 if
and only if the two inputs have the same value (1 and 1, or 0 and 0). Otherwise, the output is 0.

We draw an exclusive-Nor-gate like this:

The truth table for an exclusive-nor-gate with two inputs looks like this:

x y | z

0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 1
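In the same style as the basic gates, the combined gates are just an inverted AND or OR output, or a difference/equality test on the inputs (a Python sketch):

```python
# Combined gates as functions over 0/1 values.
def NAND(x, y): return 1 - (x & y)  # AND with inverted output
def NOR(x, y):  return 1 - (x | y)  # OR with inverted output
def XOR(x, y):  return x ^ y        # 1 iff the inputs differ
def XNOR(x, y): return 1 - (x ^ y)  # 1 iff the inputs are equal

for x in (0, 1):
    for y in (0, 1):
        print(x, y, '|', NAND(x, y), NOR(x, y), XOR(x, y), XNOR(x, y))
```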

Let us limit ourselves to gates with n inputs. The truth tables for such gates have 2^n lines. Such a gate is completely
defined by the output column in the truth table, so the output column can be viewed as a string of 2^n binary digits. How
many different strings of binary digits of length 2^n are there? The answer is 2^(2^n), since there are 2^k different strings of k
binary digits, and if k=2^n, then there are 2^(2^n) such strings. In particular, if n=2, we can see that there are 16 different
types of gates with 2 inputs.
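The count 2^(2^n) grows very quickly, as a quick tabulation shows:

```python
# An n-input gate is defined by its 2**n-entry output column,
# so there are 2**(2**n) distinct n-input gates.
for n in range(1, 5):
    print(n, 2 ** (2 ** n))
# 1 4
# 2 16
# 3 256
# 4 65536
```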

Families of logic gates


There are several different families of logic gates. Each family has its capabilities and limitations, its advantages and
disadvantages. The following list describes the main logic families and their characteristics. You can follow the links to
see the circuit construction of gates of each family.

- Diode Logic (DL)

Diode logic gates use diodes to perform AND and OR logic functions. Diodes have the property of easily passing an
electrical current in one direction, but not the other. Thus, diodes can act as a logical switch.

Diode logic gates are very simple and inexpensive, and can be used effectively in specific situations. However, they
cannot be used extensively, as they tend to degrade digital signals rapidly. In addition, they cannot perform a NOT
function, so their usefulness is quite limited.

- Resistor-Transistor Logic (RTL)

Resistor-transistor logic gates use Transistors to combine multiple input signals, which also amplify and invert the
resulting combined signal. Often an additional transistor is included to re-invert the output signal. This combination
provides clean output signals and either inversion or non-inversion as needed.



RTL gates are almost as simple as DL gates, and remain inexpensive. They also are handy because both normal and
inverted signals are often available. However, they do draw a significant amount of current from the power supply for
each gate. Another limitation is that RTL gates cannot switch at the high speeds used by today's computers, although
they are still useful in slower applications.

Although they are not designed for linear operation, RTL integrated circuits are sometimes used as inexpensive small-
signal amplifiers, or as interface devices between linear and digital circuits.

- Diode-Transistor Logic (DTL)

By letting diodes perform the logical AND or OR function and then amplifying the result with a transistor, we can avoid
some of the limitations of RTL. DTL takes diode logic gates and adds a transistor to the output, in order to provide logic
inversion and to restore the signal to full logic levels.

- Transistor-Transistor Logic (TTL)

The physical construction of integrated circuits made it more effective to replace all the input diodes in a DTL gate with
a transistor, built with multiple emitters. The result is transistor-transistor logic, which became the standard logic circuit
in most applications for a number of years.

As the state of the art improved, TTL integrated circuits were adapted slightly to handle a wider range of requirements,
but their basic functions remained the same. These devices comprise the 7400 family of digital ICs.

- Emitter-Coupled Logic (ECL)

Also known as Current Mode Logic (CML), ECL gates are specifically designed to operate at extremely high speeds, by
avoiding the "lag" inherent when transistors are allowed to become saturated. Because of this, however, these gates
demand substantial amounts of electrical current to operate correctly.

- CMOS Logic

One factor is common to all of the logic families we have listed above: they use significant amounts of electrical power.
Many applications, especially portable, battery-powered ones, require that the use of power be absolutely minimized. To
accomplish this, the CMOS (Complementary Metal-Oxide-Semiconductor) logic family was developed. This family
uses enhancement-mode MOSFETs as its transistors, and is so designed that it requires almost no current to operate.

CMOS gates are, however, severely limited in their speed of operation. Nevertheless, they are highly useful and
effective in a wide range of battery-powered applications.

Most logic families share a common characteristic: their inputs require a certain amount of current in order to operate
correctly. CMOS gates work a bit differently, but still represent a capacitance that must be charged or discharged when
the input changes state. The current required to drive any input must come from the output supplying the logic signal.
Therefore, we need to know how much current an input requires, and how much current an output can reliably supply,
in order to determine how many inputs may be connected to a single output.

However, making such calculations can be tedious, and can bog down logic circuit design. Therefore, we use a different
technique. Rather than working constantly with actual currents, we determine the amount of current required to drive
one standard input, and designate that as a standard load on any output. Now we can define the number of standard
loads a given output can drive, and identify it that way. Unfortunately, some inputs for specialized circuits require more
than the usual input current, and some gates, known as buffers, are deliberately designed to be able to drive more inputs
than usual. For an easy way to define input current requirements and output drive capabilities, we define two new terms:
fan-in and fan-out

Fan-in

Fan-in is a term that defines the maximum number of digital inputs that a single logic gate can accept. Most transistor-
transistor logic (TTL) gates have one or two inputs, although some have more than two. A typical logic gate has a fan-
in of 1 or 2.



In some digital systems, it is necessary for a single TTL logic gate to drive several devices with fan-in numbers greater
than 1. If the total number of inputs a transistor-transistor logic (TTL) device must drive is greater than 10, a device
called a buffer can be used between the TTL gate output and the inputs of the devices it must drive. A logical inverter
(also called a NOT gate) can serve this function in most digital circuits.

Fan-out

Fan-out is a term that defines the maximum number of digital inputs that the output of a single logic gate can feed. Most
transistor-transistor logic (TTL) gates can feed up to 10 other digital gates or devices. Thus, a typical TTL gate has a
fan-out of 10.

In some digital systems, it is necessary for a single TTL logic gate to drive more than 10 other gates or devices. When
this is the case, a device called a buffer can be used between the TTL gate and the multiple devices it must drive. A
buffer of this type has a fan-out of 25 to 30. A logical inverter (also called a NOT gate) can serve this function in most
digital circuits.

Remember, fan-in and fan-out apply directly only within a given logic family. If for any reason you need to interface
between two different logic families, be careful to note and meet the drive requirements and limitations of both families,
within the interface circuitry

Boolean Algebra

One of the primary requirements when dealing with digital circuits is to find ways to make them as simple as possible.
This constantly requires that complex logical expressions be reduced to simpler expressions that nevertheless produce
the same results under all possible conditions. The simpler expression can then be implemented with a smaller, simpler
circuit, which in turn saves the price of the unnecessary gates, reduces the number of gates needed, and reduces the
power and the amount of space required by those gates.

One tool to reduce logical expressions is the mathematics of logical expressions, introduced by George Boole in 1854
and known today as Boolean Algebra. The rules of Boolean Algebra are simple and straight-forward, and can be applied
to any logical expression. The resulting reduced expression can then be readily tested with a Truth Table, to verify that
the reduction was valid.

Boolean algebra is an algebraic structure defined on a set of elements B, together with two binary operators (+, .)
provided the following postulates are satisfied.

1. Closure with respect to the operator +, and closure with respect to the operator ( . )


2. An identity element with respect to + designated by 0: X+0= 0+X=X

An identity element with respect to . designated by 1: X.1= 1.X=X

3. Commutative with respect to +: X+Y=Y+X

Commutative with respect to .: X.Y=Y.X

4. ( . ) distributive over +: X.(Y+Z)=X.Y+X.Z

+ distributive over ( . ): X+(Y.Z)=(X+Y).(X+Z)

5. For every element x belonging to B, there exists an element x′ (the complement of x) such that x.x′=0
and x+x′=1
6. There exist at least two elements x, y belonging to B such that x ≠ y

The two-valued Boolean algebra is defined on a set B={0,1} with two binary operators, + and ( . ).



x y | x.y      x y | x+y      x | x′
0 0 |  0       0 0 |  0       0 | 1
0 1 |  0       0 1 |  1       1 | 0
1 0 |  0       1 0 |  1
1 1 |  1       1 1 |  1

Closure. From the tables, the result of each operation is either 0 or 1, and 0 and 1 belong to B.

Identity. From the truth tables we see that 0 is the identity element for + and 1 is the identity element for ( . ).

The commutative law is obvious from the symmetry of the binary operator tables.

Distributive Law. x.(y+z)=x.y+x.z

x y z y+z x.(y+z) x.y x.z x.y+x.z


0 0 0 0 0 0 0 0
0 0 1 1 0 0 0 0
0 1 0 1 0 0 0 0
0 1 1 1 0 0 0 0
1 0 0 0 0 0 0 0
1 0 1 1 1 0 1 1
1 1 0 1 1 1 0 1
1 1 1 1 1 1 1 1

Distributivity of + over ( . ) can be shown with a truth table similar to the one above.

From the complement table we can see that x+x′=1 (e.g. 1+0=1) and x.x′=0 (e.g. 1.0=0).

Principle of duality of Boolean algebra

The principle of duality states that every algebraic expression which can be deduced from the postulates of Boolean
algebra remains valid if the operators and the identity elements are interchanged. This means that the dual of an
expression is obtained by changing every AND (.) to OR (+), every OR (+) to AND (.), and all 1's to 0's and vice versa.

Laws of Boolean Algebra


Postulate 2 :

(a) 0 + A = A (b) 1.A = A

Postulate 5 :

(a) A + A′ =1 (b) A. A′=0

Theorem1 : Identity Law

(a) A + A = A (b) A A = A

Theorem2

(a) 1 + A = 1 (b) 0. A = 0

Theorem3: involution

A′′=A



Postulate 3 : Commutative Law

(a) A + B = B + A (b) A B = B A

Theorem4: Associate Law

(a) (A + B) + C = A + (B + C) (b) (A B) C = A (B C)

Postulate4: Distributive Law

(a) A (B + C) = A B + A C (b) A + (B C) = (A + B) (A + C)

Theorem5 : De Morgan's Theorem

(a) (A+B)′= A′B′ (b) (AB)′= A′+ B′

Theorem6 : Absorption

(a) A + A B = A (b) A (A + B) = A
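Because the two-valued algebra has only four combinations of two variables, every law above can be checked exhaustively. A Python sketch, using &, | and 1-x for AND, OR and complement:

```python
# Exhaustively verify De Morgan's theorem and absorption over B = {0, 1}.
for A in (0, 1):
    for B in (0, 1):
        assert (1 - (A | B)) == ((1 - A) & (1 - B))  # (A+B)' = A'B'
        assert (1 - (A & B)) == ((1 - A) | (1 - B))  # (AB)'  = A'+B'
        assert (A | (A & B)) == A                    # A + AB = A
        assert (A & (A | B)) == A                    # A(A+B) = A
print('De Morgan and absorption hold for all inputs')
```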

Prove Theorem 1 : (a)


x+x=x
x+x=(x+x).1 by postulate 2b
=(x+x)(x+x′) 5a
=x+xx′ 4b
=x+0 5b
=x 2a
Prove Theorem 1 : (b)
x.x=x
x.x=(x.x)+0 by postulate 2a
=x.x+x.x′ 5b
=x(x+x′) 4a
=x.1 5a
=x 2b
Prove Theorem 2 : (a)
x+1=1
x+1=1.(x+1) by postulate 2b
=(x+x′)(x+1) 5a
=x+x′.1 4b
=x+x′ 2b
=1 5a
Prove Theorem 2 : (b)
x.0=0
x.0=0+(x.0) by postulate 2a
=x.x′+x.0 5b
=x(x′+0) 4a
=x.x′ 2a
=0 5b

Prove Theorem 6 : (a)


x+xy=x
x+xy=x.1+xy by postulate 2b
=x(1+y) 4a
=x(y+1) 3a
=x.1 theorem 2a
=x 2b



Prove Theorem 6 : (b)
x(x+y)=x
x(x+y)=(x+0).(x+y) by postulate 2a
=x+0.y 4b
=x+0 theorem 2b
=x 2a

Using the laws given above, complicated expressions can be simplified.



Combinational circuit
Introduction

A combinational circuit consists of logic gates whose outputs at any time are determined directly from the present
combination of inputs, without any regard to previous inputs. A combinational circuit performs a specific information
processing operation fully specified logically by a set of Boolean functions.

A combinatorial circuit is a generalized gate. In general such a circuit has m inputs and n outputs. Such a circuit can
always be constructed as n separate combinatorial circuits, each with exactly one output. For that reason, some texts
only discuss combinatorial circuits with exactly one output. In reality, however, some important sharing of intermediate
signals may take place if the entire n-output circuit is constructed at once. Such sharing can significantly reduce the
number of gates required to build the circuit.

When we build a combinatorial circuit from some kind of specification, we always try to make it as good as possible.
The only problem is that the definition of "as good as possible" may vary greatly. In some applications, we simply want
to minimize the number of gates (or the number of transistors, really). In other, we might be interested in as short a
delay (the time it takes a signal to traverse the circuit) as possible, or in as low power consumption as possible. In
general, a mixture of such criteria must be applied.

Describing existing circuits using Truth tables

To specify the exact way in which a combinatorial circuit works, we might use different methods, such as logical
expressions or truth tables.

A truth table is a complete enumeration of all possible combinations of input values, each one with its associated output
value.

When used to describe an existing circuit, output values are (of course) either 0 or 1. Suppose for instance that we wish
to make a truth table for the following circuit:

All we need to do to establish a truth table for this circuit is to compute the output value for the circuit for each possible
combination of input values. We obtain the following truth table:

w x y | a b
-
0 0 0 | 0 1
0 0 1 | 0 1
0 1 0 | 1 1
0 1 1 | 1 0
1 0 0 | 1 1
1 0 1 | 1 1
1 1 0 | 1 1
1 1 1 | 1 0



Specifying circuits to build

When used as a specification for a circuit, a table may have some output values that are not specified, perhaps because
the corresponding combination of input values can never occur in the particular application. We can indicate such
unspecified output values with a dash -.

For instance, let us suppose we want a circuit of four inputs, interpreted as two nonnegative binary integers of two
binary digits each, and two outputs, interpreted as the nonnegative binary integer giving the quotient between the two
input numbers. Since division is not defined when the denominator is zero, we do not care what the output value is in
this case. Of the sixteen entries in the truth table, four have a zero denominator. Here is the truth table:

x1 x0 y1 y0 | z1 z0
-
0  0  0  0  | -  -
0  0  0  1  | 0  0
0  0  1  0  | 0  0
0  0  1  1  | 0  0
0  1  0  0  | -  -
0  1  0  1  | 0  1
0  1  1  0  | 0  0
0  1  1  1  | 0  0
1  0  0  0  | -  -
1  0  0  1  | 1  0
1  0  1  0  | 0  1
1  0  1  1  | 0  0
1  1  0  0  | -  -
1  1  0  1  | 1  1
1  1  1  0  | 0  1
1  1  1  1  | 0  1

Unspecified output values like this can greatly decrease the number of gates necessary to build the circuit. The reason
is simple: when we are free to choose the output value in a particular situation, we choose the one that gives the fewest
total number of gates.

Circuit minimization is a difficult problem from complexity point of view. Computer programs that try to optimize
circuit design apply a number of heuristics to improve speed. In this course, we are not concerned with optimality. We
are therefore only going to discuss a simple method that works for all possible combinatorial circuits (but that can waste
large numbers of gates).

A separate single-output circuit is built for each output of the combinatorial circuit.

Our simple method starts with the truth table (or rather one of the acceptable truth tables, in case we have a choice). Our
circuit is going to be a two-layer circuit. The first layer of the circuit will have at most 2^n AND-gates, each with n inputs
(where n is the number of inputs of the combinatorial circuit). The second layer will have a single OR-gate with as many
inputs as there are gates in the first layer. For each line of the truth table with an output value of 1, we put down an
AND-gate with n inputs. For each input entry in the table with a 1 in it, we connect an input of the AND-gate to the
corresponding input. For each entry in the table with a 0 in it, we connect an input of the AND-gate to the corresponding
input inverted.

The output of each AND-gate of the first layer is then connected to an input of the OR-gate of the second layer.

As an example of our general method, consider the following truth table (where a - indicates that we don't care what
value is chosen):



x y z | a b
-
0 0 0 | - 0
0 0 1 | 1 1
0 1 0 | 1 -
0 1 1 | 0 0
1 0 0 | 0 1
1 0 1 | 0 -
1 1 0 | - -
1 1 1 | 1 0
The first step is to arbitrarily choose values for the undefined outputs. With our simple method, the best solution is to
choose a 0 for each such undefined output. We get this table:

x y z | a b
-
0 0 0 | 0 0
0 0 1 | 1 1
0 1 0 | 1 0
0 1 1 | 0 0
1 0 0 | 0 1
1 0 1 | 0 0
1 1 0 | 0 0
1 1 1 | 1 0
Now, we have to build two separate single-output circuits, one for the a column and one for the b column.

A=x′y′z+x′yz′+xyz

B=x′y′z+xy′z′
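As a check, both sum-of-products expressions can be evaluated against the completed truth table (a Python sketch; the helper names a and b are just illustrative):

```python
# Truth table with the undefined outputs set to 0, as chosen above.
table = {  # (x, y, z): (a, b)
    (0,0,0): (0,0), (0,0,1): (1,1), (0,1,0): (1,0), (0,1,1): (0,0),
    (1,0,0): (0,1), (1,0,1): (0,0), (1,1,0): (0,0), (1,1,1): (1,0),
}

def a(x, y, z):  # A = x'y'z + x'yz' + xyz
    return ((1-x) & (1-y) & z) | ((1-x) & y & (1-z)) | (x & y & z)

def b(x, y, z):  # B = x'y'z + xy'z'
    return ((1-x) & (1-y) & z) | (x & (1-y) & (1-z))

for (x, y, z), (ea, eb) in table.items():
    assert a(x, y, z) == ea and b(x, y, z) == eb
print('A and B match the truth table')
```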

For the first column, we get three 3-input AND-gates in the first layer, and a 3-input OR-gate in the second layer. We get
three AND -gates since there are three rows in the a column with a value of 1. Each one has 3-inputs since there are
three inputs, x, y, and z of the circuit. We get a 3-input OR-gate in the second layer since there are three AND -gates in
the first layer.

Here is the complete circuit for the first column:

For the second column, we get two 3-input AND-gates in the first layer, and a 2-input OR-gate in the second layer. We
get two AND-gates since there are two rows in the b column with a value of 1. Each one has 3 inputs since again there
are three inputs, x, y, and z, of the circuit. We get a 2-input OR-gate in the second layer since there are two AND-gates
in the first layer.

Here is the complete circuit for the second column:



Now, all we have to do is to combine the two circuits into a single one:

While this circuit works, it is not the one with the fewest number of gates. In fact, since both output columns have a 1 in
the row corresponding to the inputs 0 0 1, it is clear that the gate for that row can be shared between the two subcircuits:

In some cases, even smaller circuits can be obtained, if one is willing to accept more layers (and thus a higher circuit
delay).



Boolean functions

Operations on binary variables can be described by means of an appropriate mathematical function called a Boolean function.
A Boolean function defines a mapping from a set of binary input values into a set of output values. A Boolean function is
formed with binary variables, the binary operators AND and OR, and the unary operator NOT.

For example, a Boolean function f(x1,x2,x3,……,xn)=y defines a mapping from an arbitrary combination of binary input
values (x1,x2,x3,……,xn) into a binary value y. A binary function with n input variables can operate on 2^n distinct values.
Any such function can be described by using a truth table consisting of 2^n rows and n columns. The content of this table
is the values produced by that function when applied to all the possible combinations of the n input variables.

Example

x y x.y
0 0 0
0 1 0
1 0 0
1 1 1

The function f, representing x.y, is f(x,y)=xy, which means that f=1 if x=1 and y=1, and f=0 otherwise.

For each row of the table, there is a value of the function equal to 1 or 0. The function f is equal to the sum of all rows
that give a value of 1.

A Boolean function may be transformed from an algebraic expression into a logic diagram composed of AND, OR and
NOT gates. When a Boolean function is implemented with logic gates, each literal in the function designates an input to a
gate, and each term is implemented with a logic gate, e.g.

F=xyz
F=x+y′z

Complement of a function

The complement of a function F is F′ and is obtained from an interchange of 0's to 1's and 1's to 0's in the value of F.
The complement of a function may be derived algebraically through De Morgan's theorem:

(A+B+C+….)′= A′B′C′….
(ABC….)′= A′+ B′+C′……

The generalized form of De Morgan's theorem states that the complement of a function is obtained by interchanging AND
and OR operators and complementing each literal.

F=X′YZ′+X′Y′Z′
F′=(X′YZ′+X′Y′Z′)′
=(X′YZ′)′.(X′Y′Z′)′
=(X′′+Y′+Z′′)(X′′+Y′′+Z′′)
=(X+Y′+Z)(X+Y+Z)
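A brute-force check over all eight input combinations confirms that the derived expression is indeed the complement of F (a Python sketch):

```python
def F(x, y, z):   # F = X'YZ' + X'Y'Z'
    return ((1-x) & y & (1-z)) | ((1-x) & (1-y) & (1-z))

def Fc(x, y, z):  # F' = (X+Y'+Z)(X+Y+Z)
    return (x | (1-y) | z) & (x | y | z)

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert Fc(x, y, z) == 1 - F(x, y, z)
print("F' is the complement of F everywhere")
```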



Canonical form (Minterms and Maxterms)

A binary variable may appear either in its normal form or in its complement form. Consider two binary variables x and y
combined with an AND operation. Since each variable may appear in either form, there are four possible combinations:
x′y′, x′y, xy′, xy. Each of these terms represents one distinct area in the Venn diagram and is called a minterm, or a standard
product. With n variables, 2^n minterms can be formed.

In a similar fashion, n variables forming an OR term provide 2^n possible combinations called maxterms, or standard
sums. Each maxterm is obtained from an OR term of the n variables, with each variable being primed if the
corresponding bit is 1 and unprimed if the corresponding bit is 0. Note that each maxterm is the complement of its
corresponding minterm, and vice versa.

X Y Z   Minterm   Maxterm
0 0 0   x′y′z′    x+y+z
0 0 1   x′y′z     x+y+z′
0 1 0   x′yz′     x+y′+z
0 1 1   x′yz      x+y′+z′
1 0 0   xy′z′     x′+y+z
1 0 1   xy′z      x′+y+z′
1 1 0   xyz′      x′+y′+z
1 1 1   xyz       x′+y′+z′

A Boolean function may be expressed algebraically from a given truth table by forming a minterm for each combination of the variables that produces a 1, and then taking the OR of those terms.

Similarly, the same function can be obtained by forming a maxterm for each combination of the variables that produces a 0, and then taking the AND of those terms.

It is sometimes convenient to express a Boolean function, when it is a sum of minterms, in the following notation:

F(X,Y,Z) = ∑(1,4,5,6,7). The summation symbol ∑ stands for the ORing of terms; the numbers following it are the minterms of the function. The letters in the parentheses following F form the list of variables in the order taken when each minterm is converted to an AND term.

So, F(X,Y,Z)=∑(1,4,5,6,7) = X’Y’Z+XY’Z’+XY’Z+XYZ’+XYZ
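The ∑-notation is easy to check by brute force; a small Python sketch (the function and variable names are my own):

```python
from itertools import product

MINTERMS = {1, 4, 5, 6, 7}          # F(X, Y, Z) = sum(1, 4, 5, 6, 7)

def F(x, y, z):
    # A minterm number is just the input row read as binary, X being the MSB
    return 1 if (x << 2) | (y << 1) | z in MINTERMS else 0

def F_expanded(x, y, z):
    # X'Y'Z + XY'Z' + XY'Z + XYZ' + XYZ
    nx, ny, nz = 1 - x, 1 - y, 1 - z
    return (nx & ny & z) | (x & ny & nz) | (x & ny & z) | (x & y & nz) | (x & y & z)

agree = all(F(*v) == F_expanded(*v) for v in product([0, 1], repeat=3))
```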

Sometimes it is convenient to express a Boolean function as a sum of minterms. If it is not in that form, the expression is first expanded into a sum of AND terms, and any term with a missing variable is ANDed with an expression such as x + x′, where x is the missing variable.

To express a Boolean function as a product of maxterms, it must first be brought into a form of OR terms. This can be done by using the distributive law x + yz = (x + y)(x + z). Then, for any missing variable, say x, each OR term is ORed with xx′.

e.g. represent F = xy + x′z as a product of maxterms

F = (xy + x′)(xy + z)
= (x + x′)(y + x′)(x + z)(y + z)
= (y + x′)(x + z)(y + z)

Adding the missing variable in each term:
(y + x′) = x′ + y + zz′ = (x′ + y + z)(x′ + y + z′)
(x + z) = x + z + yy′ = (x + y + z)(x + y′ + z)



(y + z) = y + z + xx′ = (x + y + z)(x′ + y + z)
F = (x + y + z)(x + y′ + z)(x′ + y + z)(x′ + y + z′)

A convenient way to express this function is as follows:

F(x,y,z)= ∏ (0,2,4,5)
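As a sanity check, the ∏-form must agree with the original F = xy + x′z on every input row; a brute-force sketch in Python (names are mine):

```python
from itertools import product

def F(x, y, z):
    return (x & y) | ((1 - x) & z)        # F = xy + x'z

MAXTERMS = {0, 2, 4, 5}                   # F(x, y, z) = prod(0, 2, 4, 5)

def F_pos(x, y, z):
    # F is 0 exactly on the maxterm rows (row number read as xyz in binary)
    return 0 if (x << 2) | (y << 1) | z in MAXTERMS else 1

agree = all(F(*v) == F_pos(*v) for v in product([0, 1], repeat=3))
```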

Standard form

Another way to express a Boolean function is in standard form. Here the terms that form the function may contain one, two, or any number of literals. There are two types of standard form: the sum of products and the product of sums.

The sum of products (SOP) is a Boolean expression containing AND terms, called product terms, of one or more literals each. The sum denotes the ORing of these terms.

e.g. F=x+xy′+x′yz

The product of sums (POS) is a Boolean expression containing OR terms, called sum terms. Each term may have any number of literals. The product denotes the ANDing of these terms.

e.g. F= x(x+y′)(x′+y+z)

A Boolean function may also be expressed in a nonstandard form. In that case, the distributive law can be used to remove the parentheses:

F=(xy+zw)(x′y′+z′w′)

= xy(x′y′+z′w′)+zw(x′y′+z′w′)

= xyx′y′ + xyz′w′ + zwx′y′ + zwz′w′

=xyz′w′+zwx′y′

A Boolean equation can be reduced to a minimal number of literals by algebraic manipulation. Unfortunately, there are no specific rules to follow that will guarantee the final answer. The only method is to use the theorems and postulates of Boolean algebra and any other manipulation that becomes familiar with practice.

Describing existing circuits using Logic expressions

To define what a combinatorial circuit does, we can use a logic expression, or an expression for short. Such an expression uses the two constants 0 and 1; variables such as x, y, and z (sometimes with suffixes) as names of inputs and outputs; and the operators +, ., and a horizontal bar or a prime (which stands for NOT). As usual, multiplication is considered to have higher priority than addition. Parentheses are used to modify the priority.

Boolean functions in either Sum of Products or Product of Sums form can be implemented using 2-Level implementations.

For SOP forms AND gates will be in the first level and a single OR gate will be in the second level.

For POS forms OR gates will be in the first level and a single AND gate will be in the second level.

Note that using inverters to complement input variables is not counted as a level.



Examples:

(X′+Y)(Y+XZ′)′+X(YZ)′

The equation is neither in sum-of-products nor in product-of-sums form. The implementation is as follows:

X1X2′X3 + X1′X2′X2 + X1′X2X3′

The equation is in sum-of-products form. The implementation is in 2 levels: AND gates form the first level and a single OR gate the second level.

(X+1)(Y+0Z)

The equation is neither in sum-of-products nor in product-of-sums form. The implementation is as follows:

Power of logic expressions

A valid question is: can logic expressions describe all possible combinatorial circuits? The answer is yes, and here is why:

You can trivially convert the truth table for an arbitrary circuit into an expression. The expression will be in the form of a sum of products of variables and their inverses. Each row of the truth table with output value 1 corresponds to one term in the sum. In such a term, a variable having a 1 in the truth table is left uninverted, and a variable having a 0 in the truth table is inverted.



Take the following truth table for example:

x y z | f
------+--
0 0 0 | 0
0 0 1 | 0
0 1 0 | 1
0 1 1 | 0
1 0 0 | 1
1 0 1 | 0
1 1 0 | 0
1 1 1 | 1
The corresponding expression is:

X′YZ′ + XY′Z′ + XYZ

Since you can describe any combinatorial circuit with a truth table, and you can describe any truth table with an
expression, you can describe any combinatorial circuit with an expression.
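The truth-table-to-expression procedure described above can be sketched in Python (the function `table_to_sop` and its argument conventions are my own):

```python
from itertools import product

def table_to_sop(outputs, names):
    """Build a sum-of-products expression from a truth table.

    outputs: list of 2**n output bits, in row order (inputs counted up in binary).
    names:   the n input variable names, most significant first.
    """
    terms = []
    for row, out in zip(product([0, 1], repeat=len(names)), outputs):
        if out == 1:
            # 1 -> uninverted literal, 0 -> inverted literal
            terms.append("".join(v if bit else v + "'" for v, bit in zip(names, row)))
    return " + ".join(terms)

# The truth table from the text: f = 1 on rows 010, 100 and 111
expr = table_to_sop([0, 0, 1, 0, 1, 0, 0, 1], ["X", "Y", "Z"])
```

For the example table this produces one product term per 1-row of the table.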

Simplicity of logic expressions

There are many logic expressions (and therefore many circuits) that correspond to a certain truth table, and therefore to a
certain function computed. For instance, the following two expressions compute the same function:

X(Y+Z)    and    XY+XZ

The left one requires two gates: one AND gate and one OR gate. The second expression requires three gates: two AND gates and one OR gate. It seems obvious that the first one is preferable to the second. However, this is not always the case: the number of gates is not the only way, nor even always the best way, to measure simplicity.

We have, for instance, assumed that gates are ideal. In reality, the signal takes some time to propagate through a gate.
We call this time the gate delay. We might be interested in circuits that minimize the total gate delay, in other words,
circuits that make the signal traverse the fewest possible gates from input to output. Such circuits are not necessarily the
same ones that require the smallest number of gates.

Circuit minimization

The complexity of the digital logic gates that implement a Boolean function is directly related to the complexity of the
algebraic expression from which the function is implemented. Although the truth table representation of a function is
unique, it can appear in many different forms when expressed algebraically.

Simplification through algebraic manipulation


A Boolean equation can be reduced to a minimal number of literals by algebraic manipulation, as stated above. Unfortunately, there are no specific rules to follow that will guarantee the final answer. The only method is to use the theorems and postulates of Boolean algebra and any other manipulation that becomes familiar with practice.

e.g. simplify x + x′y
x + x′y = (x + x′)(x + y) = x + y

simplify x′y′z + x′yz + xy′
x′y′z + x′yz + xy′ = x′z(y + y′) + xy′
= x′z + xy′



Simplify xy + x′z + yz
xy + x′z + yz = xy + x′z + yz(x + x′)
= xy + x′z + xyz + x′yz
= xy(1 + z) + x′z(1 + y)
= xy + x′z
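This is the consensus theorem (the redundant term yz is absorbed); a quick exhaustive check in Python (names are mine, not from the text):

```python
from itertools import product

def lhs(x, y, z):
    return (x & y) | ((1 - x) & z) | (y & z)   # xy + x'z + yz

def rhs(x, y, z):
    return (x & y) | ((1 - x) & z)             # xy + x'z, consensus term yz dropped

equivalent = all(lhs(*v) == rhs(*v) for v in product([0, 1], repeat=3))
```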

Karnaugh map
The Karnaugh map, also known as the Veitch diagram or simply as the K-map, is a two-dimensional form of the truth table, drawn in such a way that the simplification of a Boolean expression can immediately be seen from the location of 1's in the map. The map is a diagram made up of squares, each square representing one minterm. Since any Boolean function can be expressed as a sum of minterms, a Boolean function is recognised graphically in the map as the area enclosed by those squares whose minterms are included in the function.

A two-variable Boolean function can be represented as follows:

        A=0    A=1
B=0     A′B′   AB′
B=1     A′B    AB

A three-variable function can be represented as follows:

        AB=00   AB=01   AB=11   AB=10
C=0     A′B′C′  A′BC′   ABC′    AB′C′
C=1     A′B′C   A′BC    ABC     AB′C



A four-variable Boolean function can be represented in the map below:

        AB=00     AB=01    AB=11    AB=10
CD=00   A′B′C′D′  A′BC′D′  ABC′D′   AB′C′D′
CD=01   A′B′C′D   A′BC′D   ABC′D    AB′C′D
CD=11   A′B′CD    A′BCD    ABCD     AB′CD
CD=10   A′B′CD′   A′BCD′   ABCD′    AB′CD′

To simplify a Boolean function using a Karnaugh map, the first step is to plot all the 1's of the function's truth table on the map. The next step is to combine adjacent 1's into groups of one, two, four, eight, or sixteen. The groups of minterms should be as large as possible: a single group of four minterms yields a simpler expression than two groups of two minterms.

In a four-variable Karnaugh map:

a 1-variable product term is obtained if 8 adjacent squares are covered
a 2-variable product term is obtained if 4 adjacent squares are covered
a 3-variable product term is obtained if 2 adjacent squares are covered
a 4-variable product term is obtained if 1 square is covered

A square containing a 1 may belong to more than one term in the sum-of-products expression.

The final stage is reached when each of the groups of minterms is ORed together to form the simplified sum-of-products expression.

The Karnaugh map is not a square or rectangle as it may appear in the diagram. The top edge is adjacent to the bottom edge, and the left-hand edge is adjacent to the right-hand edge. Consequently, two squares in a Karnaugh map are said to be adjacent if they differ by only one variable.

Implicant

In Boolean logic, an implicant is a "covering" (sum term or product term) of one or more minterms in a sum of products (or maxterms in a product of sums) of a Boolean function. Formally, a product term P in a sum of products is an implicant of the Boolean function F if P implies F. More precisely:

P implies F (and thus is an implicant of F) if F also takes the value 1 whenever P equals 1,
where
• F is a Boolean function of n variables;
• P is a product term.

This means that P < = F with respect to the natural ordering of the Boolean space. For instance, the function

f(x,y,z,w) = xy + yz + w



is implied by xy, by xyz, by xyzw, by w and many others; these are the implicants of f.
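The "P implies F" test can be phrased as a brute-force check; a small Python sketch for the xy example above (names are my own):

```python
from itertools import product

def P(x, y, z, w):
    return x & y                         # candidate product term: xy

def f(x, y, z, w):
    return (x & y) | (y & z) | w         # f = xy + yz + w

# P is an implicant of f: f is 1 on every row where P is 1
is_implicant = all(f(*v) == 1 for v in product([0, 1], repeat=4) if P(*v) == 1)
```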

Prime implicant
A prime implicant of a function is an implicant that cannot be covered by a more general implicant (one that is more reduced, i.e. has fewer literals). W. V. Quine defined a prime implicant of F to be an implicant that is minimal: the removal of any literal from P results in a non-implicant of F. Essential prime implicants are prime implicants that cover an output of the function that no combination of other prime implicants is able to cover.

[Karnaugh map figure: example groupings on a four-variable map, with groups labelled as prime implicants and one labelled as a non-prime implicant]

[Karnaugh map figure: a four-variable map with groups labelled as essential prime implicants and one labelled as a non-essential prime implicant]

In simplifying a Boolean function using a Karnaugh map, non-essential prime implicants are not needed.



Minimization of Boolean expressions using Karnaugh maps.
Given the following truth table for the majority function.

a b c M (output)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1

The Boolean algebraic expression is

m = a′bc + ab′c + abc′ + abc.

The minimization using algebraic manipulation can be done as follows:

m = a′bc + abc + ab′c + abc + abc′ + abc

= (a′ + a)bc + a(b′ + b)c + ab(c′ + c)

= bc + ac + ab

The abc term was replicated and combined with the other terms.
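The algebraic result can be confirmed by exhaustive comparison; a brief Python sketch (not part of the original text):

```python
from itertools import product

def majority(a, b, c):
    # original sum of minterms: a'bc + ab'c + abc' + abc
    na, nb, nc = 1 - a, 1 - b, 1 - c
    return (na & b & c) | (a & nb & c) | (a & b & nc) | (a & b & c)

def minimized(a, b, c):
    return (b & c) | (a & c) | (a & b)   # bc + ac + ab

same = all(majority(*v) == minimized(*v) for v in product([0, 1], repeat=3))
```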

To use a Karnaugh map we draw the following map which has a position (square) corresponding to each of the 8
possible combinations of the 3 Boolean variables. The upper left position corresponds to the 000 row of the truth table,
the lower right position corresponds to 101.

        ab=00  ab=01  ab=11  ab=10
c=0                    1
c=1             1      1      1

The 1s are in the same places as they were in the original truth table. The 1 in the first row is at position 110 (a = 1, b =
1, c = 0).

The minimization is done by drawing circles around sets of adjacent 1's. Adjacency is horizontal, vertical, or both. The circles must always contain 2^n 1's, where n is an integer.



        ab=00  ab=01  ab=11  ab=10
c=0                    1
c=1             1      1      1

[the two 1's in row c=1 at columns ab=01 and ab=11 are circled]

We have circled two 1's. The fact that the circle spans the two possible values of a (0 and 1) means that the a term is eliminated from the Boolean expression corresponding to this circle.

Now we have drawn circles around all the 1s. Thus the expression reduces to

bc + ac + ab

as we saw before.

What is happening? What does adjacency and grouping the 1s together have to do with minimization? Notice that the 1
at position 111 was used by all 3 circles. This 1 corresponds to the abc term that was replicated in the original algebraic
minimization. Adjacency of 2 1s means that the terms corresponding to those 1s differ in one variable only. In one case
that variable is negated and in the other it is not.

The map is easier than algebraic minimization because we just have to recognize patterns of 1s in the map instead of
using the algebraic manipulations. Adjacency also applies to the edges of the map.

Now for 4 Boolean variables. The Karnaugh map is drawn as shown below.

        AB=00  AB=01  AB=11  AB=10
CD=00                   1
CD=01            1      1
CD=11            1      1      1
CD=10                   1      1



The map above corresponds to the Boolean expression

Q = A′BC′D + A′BCD + ABC′D′ + ABC′D + ABCD + ABCD′ + AB′CD + AB′CD′

RULE: Minimization is achieved by drawing the smallest possible number of circles, each containing the largest
possible number of 1s.

Grouping the 1s together results in the following.

        AB=00  AB=01  AB=11  AB=10
CD=00                   1
CD=01            1      1
CD=11            1      1      1
CD=10                   1      1

[groups circled: BD (the 1's where B=1 and D=1), AB (the column AB=11), and AC (the 1's where A=1 and C=1)]

The expression for the groupings above is

Q = BD + AC + AB

This expression requires three 2-input AND gates and one 3-input OR gate.
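The grouping can be double-checked against the map's minterms by brute force; a Python sketch (names are mine):

```python
from itertools import product

MINTERMS = {5, 7, 10, 11, 12, 13, 14, 15}   # rows of the map holding a 1 (ABCD as binary)

def Q_min(a, b, c, d):
    return (b & d) | (a & c) | (a & b)       # Q = BD + AC + AB

for a, b, c, d in product([0, 1], repeat=4):
    row = (a << 3) | (b << 2) | (c << 1) | d
    assert Q_min(a, b, c, d) == (1 if row in MINTERMS else 0)
```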

Other examples

1. F=A′B+AB

        A=0   A=1
B=0
B=1      1     1

= B

2. F=A′B′C′+A′B′C+A′BC′+ABC′+ABC



        AB=00  AB=01  AB=11  AB=10
C=0      1      1      1
C=1      1             1

= A′B′ + BC′ + AB

3. F=AB+A′BC′D+A′BCD+AB′C′D′

        AB=00  AB=01  AB=11  AB=10
CD=00                   1      1
CD=01            1      1
CD=11            1      1
CD=10                   1

= BD + AB + AC′D′

4. F=AC′D′+A′B′C+A′C′D+AB′D

        AB=00  AB=01  AB=11  AB=10
CD=00                   1      1
CD=01     1      1             1
CD=11     1                    1
CD=10     1

= B′D + AC′D′ + A′C′D + A′B′C



5. F=A′B′C′D′+AB′C′D′+A′B′CD′+AB′CD′+A′BC′D+ABC′D+A′BCD+ABCD

        AB=00  AB=01  AB=11  AB=10
CD=00     1                    1
CD=01            1      1
CD=11            1      1
CD=10     1                    1

= BD + B′D′

Obtaining a Simplified product of sum using Karnaugh map


The simplification of a product of sums follows the same rules as the sum of products. However, the adjacent cells to be combined are the cells containing 0. In this approach, the simplified function obtained is F′, since F is represented by the squares marked with 1. The function F can then be obtained in product-of-sums form by applying De Morgan's rule to F′.

F=A′B′C′D′+A′BC′D′+AB′C′D′+A′BC′D+A′B′CD′+A′BCD′+AB′CD′

        AB=00  AB=01  AB=11  AB=10
CD=00     1      1      0      1
CD=01     0      1      0      0
CD=11     0      0      0      0
CD=10     1      1      0      1

Grouping the 0's gives the simplified F′ = AB + CD + B′D. Since F′′ = F, by applying De Morgan's rule to F′ we obtain

F′′ = (AB + CD + B′D)′

= (A′+B′)(C′+D′)(B+D′), which is the simplified F in product-of-sums form.
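A quick brute-force check: the zeros of the map group as AB, CD, and B′D, so after De Morgan F = (A′+B′)(C′+D′)(B+D′); this product-of-sums form must reproduce the map's 1-cells exactly (Python sketch, names mine):

```python
from itertools import product

ONES = {0, 2, 4, 5, 6, 8, 10}   # minterms of F from the map (ABCD as binary)

def F_pos(a, b, c, d):
    # F = (A' + B')(C' + D')(B + D'), from F' = AB + CD + B'D by De Morgan
    return ((1 - a) | (1 - b)) & ((1 - c) | (1 - d)) & (b | (1 - d))

for a, b, c, d in product([0, 1], repeat=4):
    row = (a << 3) | (b << 2) | (c << 1) | d
    assert F_pos(a, b, c, d) == (1 if row in ONES else 0)
```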



Don't Care condition
Sometimes we do not care whether a 1 or a 0 occurs for a certain set of inputs. It may be that those inputs will never occur, so it makes no difference what the output is. For example, we might have a BCD (binary-coded decimal) code, which consists of 4 bits encoding the digits 0 (0000) through 9 (1001). The remaining codes (1010 through 1111) are not used. A truth table for the prime numbers among 0 through 9 would be

A B C D F
0 0 0 0 0
0 0 0 1 0
0 0 1 0 1
0 0 1 1 1
0 1 0 0 0
0 1 0 1 1
0 1 1 0 0
0 1 1 1 1
1 0 0 0 0
1 0 0 1 0
1 0 1 0 X
1 0 1 1 X
1 1 0 0 X
1 1 0 1 X
1 1 1 0 X
1 1 1 1 X

F=A′B′CD′+A′B′CD+A′BC′D+A′BCD

The X's in the table above stand for "don't care": we don't care whether a 1 or a 0 is the value for that combination of inputs because (in this case) those inputs will never occur.

        AB=00  AB=01  AB=11  AB=10
CD=00     0      0      x      0
CD=01     0      1      x      0
CD=11     1      1      x      x
CD=10     1      0      x      x

F = BD + B′C
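The don't-care trick can be verified by checking only the rows that can actually occur; a Python sketch (names are mine):

```python
PRIMES = {2, 3, 5, 7}                 # prime numbers representable in BCD (0-9)

def F(a, b, c, d):
    return (b & d) | ((1 - b) & c)    # F = BD + B'C, exploiting don't-cares 10-15

# Only the valid BCD codes 0-9 need to match; 10-15 were don't-cares
for n in range(10):
    a, b, c, d = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert F(a, b, c, d) == (1 if n in PRIMES else 0)
```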



The tabulation method(Quine-McCluskey)
For function of five or more variables, it is difficult to be sure that the best selection is made. In such case, the tabulation
method can be used to overcome such difficulty. The tabulation method was first formulated by Quine and later
improved by McCluskey. It is also known as Quine-McCluskey method.

The Quine–McCluskey algorithm (or the method of prime implicants) is a method used for the minimization of Boolean functions. It is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function has been reached.

The method involves two steps:

1. Finding all prime implicants of the function.
2. Using those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as other prime implicants that are necessary to cover the function.

Step 1: finding prime implicants

Minimizing an arbitrary function:

ABCD f
m0 0 0 0 0 0
m1 0 0 0 1 0
m2 0 0 1 0 0
m3 0 0 1 1 0
m4 0 1 0 0 1
m5 0 1 0 1 0
m6 0 1 1 0 0
m7 0 1 1 1 0
m8 1 0 0 0 1
m9 1 0 0 1 x
m10 1 0 1 0 1
m11 1 0 1 1 1
m12 1 1 0 0 1
m13 1 1 0 1 0
m14 1 1 1 0 x
m15 1 1 1 1 1

One can easily form the canonical sum of products expression from this table, simply by summing the minterms
(leaving out don't-care terms) where the function evaluates to one:

F(A,B,C,D) = A′BC′D′ + AB′C′D′ + AB′CD′ + AB′CD + ABC′D′ + ABCD

Of course, that's certainly not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm
table. Don't-care terms are also added into this table, so they can be combined with minterms:



Number of 1s Minterm Binary Representation

1 m4 0100
m8 1000

2 m9 1001
m10 1010
m12 1100

3 m11 1011
m14 1110

4 m15 1111
At this point, one can start combining minterms with other minterms. If two terms differ by only a single digit, that digit can be replaced with a dash indicating that the digit doesn't matter. Terms that can't be combined any further are marked with a "*". When going from Size 2 to Size 4, treat '-' as a third bit value. For example, -110 and -100 can be combined, but not -110 and 011-. (Trick: match up the '-' first.)

Number of 1s Minterm 0-Cube | Size 2 Implicants | Size 4 Implicants


| - | -
1 m4 0100 | m(4,12) -100* | m(8,9,10,11) 10--*
m8 1000 | m(8,9) 100- | m(8,10,12,14) 1--0*
| m(8,10) 10-0 | -
2 m9 1001 | m(8,12) 1-00 | m(10,11,14,15) 1-1-*
m10 1010 |---------------------- |
m12 1100 | m(9,11) 10-1 |
------------------------------ | m(10,11) 101- |
3 m11 1011 | m(10,14) 1-10 |
m14 1110 | m(12,14) 11-0 |
| - |
4 m15 1111 | m(11,15) 1-11 |
| m(14,15) 111- |

At this point, the terms marked with * can be taken as a solution, that is:

F = AB′ + AD′ + AC + BC′D′

If the Karnaugh map were used, we would obtain an expression simpler than this. To obtain a minimal form, we need to use the prime implicant chart.

Step 2: prime implicant chart

None of the terms can be combined any further than this, so at this point we construct an essential prime implicant table. Along the side go the prime implicants that have just been generated, and along the top go the minterms specified earlier. The don't-care terms are not placed on top; they are omitted from this section because they are not necessary inputs.

                 4   8   10  11  12  15
m(4,12)          X               X          -100 (BC′D′)
m(8,9,10,11)         X   X   X              10-- (AB′)
m(8,10,12,14)        X   X       X          1--0 (AD′)
m(10,11,14,15)           X   X       X      1-1- (AC)

In the prime implicant table shown above, there are four rows, one for each prime implicant, and six columns, each representing one minterm of the function. An X is placed in a row to indicate each minterm contained in the prime implicant of that row. For example, the two X's in the first row indicate that minterms 4 and 12 are contained in the prime implicant represented by (-100), i.e. BC′D′.

The completed prime implicant table is inspected for columns containing only a single X. In this example, there are two minterms whose columns have a single X: 4 and 15. Minterm 4 is covered only by prime implicant BC′D′; that is, selecting BC′D′ guarantees that minterm 4 is included. Similarly, minterm 15 is covered only by prime implicant AC. Prime implicants that cover a minterm with a single X in its column are called essential prime implicants.

Those essential prime implicants must be selected.

Next, we find every column whose minterm is covered by the selected essential prime implicants. For this example, essential prime implicant BC′D′ covers minterms 4 and 12, and essential prime implicant AC covers 10, 11, and 15. Inspection of the implicant table shows that all the minterms are covered by the essential prime implicants except minterm 8. The minterms not yet covered must be included by selecting one or more additional prime implicants. In this example, there is only one such minterm, 8. It can be included by selecting either AB′ or AD′, since both contain minterm 8. We have thus found a minimum set of prime implicants whose sum gives the required minimized function:

F = BC′D′ + AD′ + AC   or   F = BC′D′ + AB′ + AC.

Both of those final equations are functionally equivalent to this original (very area-expensive) equation:

F(A,B,C,D) = A′BC′D′ + AB′C′D′ + AB′CD′ + AB′CD + ABC′D′ + ABCD
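Both minimized forms can be checked against the original minterm table (treating the don't-care rows as free); a Python sketch (names are mine):

```python
ONES       = {4, 8, 10, 11, 12, 15}   # minterms where f = 1
DONT_CARES = {9, 14}                  # rows marked x in the table

def f1(a, b, c, d):
    # F = BC'D' + AD' + AC
    return (b & (1 - c) & (1 - d)) | (a & (1 - d)) | (a & c)

def f2(a, b, c, d):
    # F = BC'D' + AB' + AC
    return (b & (1 - c) & (1 - d)) | (a & (1 - b)) | (a & c)

for n in range(16):
    if n in DONT_CARES:
        continue                      # either output is acceptable here
    a, b, c, d = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    expected = 1 if n in ONES else 0
    assert f1(a, b, c, d) == expected and f2(a, b, c, d) == expected
```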

Implementing logic circuits using NAND and NOR gates only

In addition to AND, OR, and NOT gates, other logic gates like NAND and NOR are also used in the design of digital
circuits.

The NAND gate represents the complement of the AND operation. Its name is an abbreviation of NOT AND. The graphic symbol for the NAND gate consists of an AND symbol with a bubble on the output, denoting that a complement operation is performed on the output of the AND gate, as shown earlier.



The NOR gate represents the complement of the OR operation. Its name is an abbreviation of NOT OR. The graphic
symbol for the NOR gate consists of an OR symbol with a bubble on the output, denoting that a complement operation
is performed on the output of the OR gate as shown earlier.

A universal gate is a gate that can implement any Boolean function without the need for any other gate type. The NAND and NOR gates are universal gates. In practice, this is advantageous since NAND and NOR gates are economical, easier to fabricate, and are the basic gates used in all IC digital logic families. In fact, an AND gate is typically implemented as a NAND gate followed by an inverter, not the other way around.

Likewise, an OR gate is typically implemented as a NOR gate followed by an inverter not the other way around.

NAND Gate is a Universal Gate

To prove that any Boolean function can be implemented using only NAND gates, we will show that the AND, OR, and NOT operations can be performed using only these gates.

Implementing an Inverter Using only NAND Gate

The figure shows two ways in which a NAND gate can be used as an inverter (NOT gate).

1. Connecting all NAND input pins to the input signal A gives an output of A′.

2. One NAND input pin is connected to the input signal A while all other input pins are connected to logic 1. The output
will be A′.

Implementing AND Using only NAND Gates

An AND gate can be replaced by NAND gates as shown in the figure (The AND is

replaced by a NAND gate with its output complemented by a NAND gate inverter).

Implementing OR Using only NAND Gates

An OR gate can be replaced by NAND gates as shown in the figure (The OR gate is replaced by a NAND gate with all
its inputs complemented by NAND gate inverters).



Thus, the NAND gate is a universal gate since it can implement the AND, OR and NOT functions.
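The three constructions can be modelled directly in Python, building NOT, AND, and OR out of a single NAND primitive (a sketch of my own, mirroring the figures described above):

```python
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                  # both inputs tied to A gives A'

def and_(a, b):
    return not_(nand(a, b))            # NAND followed by a NAND used as an inverter

def or_(a, b):
    return nand(not_(a), not_(b))      # De Morgan: A + B = (A'B')'

# Exhaustive check of all three derived operations
for a in (0, 1):
    assert not_(a) == 1 - a
    for b in (0, 1):
        assert and_(a, b) == (a & b) and or_(a, b) == (a | b)
```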

NOR Gate is a Universal Gate:

To prove that any Boolean function can be implemented using only NOR gates, we will show that the AND, OR, and
NOT operations can be performed using only these gates.

Implementing an Inverter Using only NOR Gate

The figure shows two ways in which a NOR gate can be used as an inverter (NOT gate).

1. Connecting all NOR input pins to the input signal A gives an output of A′.

2. One NOR input pin is connected to the input signal A while all other input pins are connected to logic 0. The output
will be A′.

Implementing OR Using only NOR Gates

An OR gate can be replaced by NOR gates as shown in the figure (The OR is replaced by a NOR gate with its output
complemented by a NOR gate inverter)

Implementing AND Using only NOR Gates

An AND gate can be replaced by NOR gates as shown in the figure (The AND gate is replaced by a NOR gate with all
its inputs complemented by NOR gate inverters)



Thus, the NOR gate is a universal gate since it can implement the AND, OR and NOT functions.

Equivalent Gates:

The shown figure summarizes important cases of gate equivalence. Note that bubbles indicate a complement operation
(inverter).

A NAND gate is equivalent to an inverted-input OR gate.

An AND gate is equivalent to an inverted-input NOR gate.

A NOR gate is equivalent to an inverted-input AND gate.

An OR gate is equivalent to an inverted-input NAND gate.

Two NOT gates in series are same as a buffer because they cancel each other as A′′=A.



Two-Level Implementations:

We have seen before that Boolean functions in either SOP or POS forms can be implemented using 2-Level
implementations.

For SOP forms AND gates will be in the first level and a single OR gate will be in the second level.

For POS forms OR gates will be in the first level and a single AND gate will be in the second level.

Note that using inverters to complement input variables is not counted as a level.

To implement a function using NAND gates only, it must first be simplified to a sum of products; to implement a function using NOR gates only, it must first be simplified to a product of sums.

We will show that SOP forms can be implemented using only NAND gates, while POS forms can be implemented using
only NOR gates through examples.

Example 1: Implement the following SOP function using NAND gate only

F = XZ + Y′Z + X′YZ

Being an SOP expression, it is implemented in 2-levels as shown in the figure.

Introducing two successive inverters at the inputs of the OR gate results in the shown equivalent implementation. Since
two successive inverters on the same line will not have an overall effect on the logic as it is shown before.

By associating one of the inverters with the output of the first level AND gate and the other with the input of the OR
gate, it is clear that this implementation is reducible to 2-level implementation where both levels are NAND gates as
shown in Figure.



Example 2: Implement the following POS function using NOR gates only

F = (X+Z) (Y′+Z) (X′+Y+Z)

Being a POS expression, it is implemented in 2-levels as shown in the figure.

Introducing two successive inverters at the inputs of the AND gate results in the shown equivalent implementation.
Since two successive inverters on the same line will not have an overall effect on the logic as it is shown before.

By associating one of the inverters with the output of the first level OR gates and the other with the input of the AND
gate, it is clear that this implementation is reducible to 2-level implementation where both levels are NOR gates as
shown in Figure.



There are some other types of 2-level combinational circuits which are

• NAND-AND

• AND-NOR,

• NOR-OR,

• OR-NAND

These are explained by examples.

AND-NOR functions:

Example 3: Implement the function F = (XZ + Y′Z + X′YZ)′, i.e. F′ = XZ + Y′Z + X′YZ

Since F′ is in SOP form, it can be implemented by using NAND-NAND circuit.

By complementing the output we can get F, or by using NAND-AND circuit as shown in the figure.

It can also be implemented using AND-NOR circuit as it is equivalent to NAND- AND circuit as shown in the figure.



OR-NAND functions:

Example 4: Implement the following function

F = ((X+Z)(Y′+Z)(X′+Y+Z))′, i.e. F′ = (X+Z)(Y′+Z)(X′+Y+Z)

Since F′ is in POS form, it can be implemented by using NOR-NOR circuit.

By complementing the output we can get F, or by using NOR-OR circuit as shown in the figure.

It can also be implemented using OR-NAND circuit as it is equivalent to NOR-OR circuit as shown in the figure



Designing Combinatorial Circuits

The design of a combinational circuit starts from the verbal outline of the problem and ends with a logic circuit diagram, or a set of Boolean functions from which the logic diagram can easily be obtained. The procedure involves the following steps:

- The problem is stated.
- The number of available input variables and required output variables is determined.
- The input and output variables are assigned their letter symbols.
- The truth table that defines the required relationship between the inputs and the outputs is derived.
- The simplified Boolean function for each output is obtained.
- The logic diagram is drawn.

Example of combinational circuit

Adders
In electronics, an adder or summer is a digital circuit that performs addition of numbers. In modern computers, adders reside in the arithmetic logic unit (ALU), where other operations are performed as well. Although adders can be constructed for many numerical representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or one's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder-subtractor. Other signed number representations require a more complex adder.

-Half Adder

A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum
and a carry value which are both binary digits.

A half adder has two inputs, generally labelled A and B, and two outputs, the sum S and carry C. S is the XOR of A and B, and C is the AND of A and B. Essentially, the output of a half adder is the two-bit sum of two one-bit numbers, with C being the more significant of the two output bits.

The drawback of this circuit is that in the case of a multibit addition, it cannot include a carry-in.

Following is the truth table for a half adder:

A B Carry Sum

0 0 0 0

0 1 0 1

1 0 0 1

1 1 1 0

Equation of the Sum and Carry.

Sum = A′B + AB′    Carry = AB

One can see that Sum can also be implemented using an XOR gate, as A ⊕ B.



-Full Adder.

A full adder has three inputs A, B, and a carry in C, such that multiple adders can be used to add larger numbers. To
remove ambiguity between the input and output carry lines, the carry in is labelled Ci or Cin while the carry out is
labelled Co or Cout.

A full adder is a logical circuit that performs an addition operation on three binary digits. The full adder produces a sum
and carry value, which are both binary digits. It can be combined with other full adders or work on its own.

Input Output
A B Ci Co S
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1

Co = A′BCi + AB′Ci + ABCi′ + ABCi

S = A′B′Ci + A′BCi′ + AB′Ci′ + ABCi

A full adder can be trivially built using our ordinary design methods for combinatorial circuits. Here is the resulting
circuit diagram using NAND gates only:

Co = A′BCi + AB′Ci + ABCi′ + ABCi; by manipulating Co, we can see that Co = Ci(A ⊕ B) + AB

S = A′B′Ci + A′BCi′ + AB′Ci′ + ABCi; by manipulating S, we can see that S = Ci ⊕ (A ⊕ B)

Note that the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting
logic. This is because the only discrepancy between OR and XOR gates occurs when both inputs are 1; for the adder
shown here, this is never possible. Using only two types of gates is convenient if one desires to implement the adder
directly using common IC chips.

A full adder can be constructed from two half adders by connecting A and B to the inputs of one half adder, connecting
the sum from that to an input of the second half adder, connecting Ci to the other input, and ORing the two carry outputs.
Equivalently, S could be made the three-input XOR of A, B, and Ci, and Co could be made the majority function of
A, B, and Ci. The output of the full adder is the two-bit arithmetic sum of three one-bit numbers.
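The two-half-adder construction can be sketched in Python as follows (names are mine, not from the text); the final assertion checks that the outputs really form the two-bit sum of three one-bit numbers:

```python
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

def full_adder(a, b, ci):
    """Full adder built from two half adders plus an OR on the carries."""
    s1, c1 = half_adder(a, b)    # first half adder: A + B
    s, c2 = half_adder(s1, ci)   # second half adder: (A XOR B) + Ci
    return s, c1 | c2            # carry out = OR of the two carries

# Check against the full-adder truth table.
for a in (0, 1):
    for b in (0, 1):
        for ci in (0, 1):
            s, co = full_adder(a, b, ci)
            assert 2 * co + s == a + b + ci
```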



Ripple carry adder

It is possible to create a logical circuit using multiple full adders to add N-bit numbers. Each full adder inputs a Cin,
which is the Cout of the previous adder. This kind of adder is a ripple carry adder, since each carry bit "ripples" to the
next full adder. Note that the first (and only the first) full adder may be replaced by a half adder.

The layout of ripple carry adder is simple, which allows for fast design time; however, the ripple carry adder is
relatively slow, since each full adder must wait for the carry bit to be calculated from the previous full adder. The gate
delay can easily be calculated by inspection of the full adder circuit. Following the path from Cin to Cout shows 2 gates
that must be passed through. Therefore, a 32-bit adder requires 31 carry computations and the final sum calculation for a
total of 31 * 2 + 1 = 63 gate delays.
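The ripple of the carry from stage to stage can be modelled directly; the following sketch (my own naming, bits least significant first) chains full adders exactly as described:

```python
def full_adder(a, b, ci):
    s = a ^ b ^ ci
    co = (a & b) | (ci & (a ^ b))
    return s, co

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)  # carry "ripples" to the next stage
        out.append(s)
    return out, carry

# 4-bit example: 5 + 3 = 8  ->  bits [0, 0, 0, 1], carry out 0
print(ripple_carry_add([1, 0, 1, 0], [1, 1, 0, 0]))
```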

Subtractor
In electronics, a subtractor can be designed using the same approach as that of an adder. The binary subtraction process
is summarized below. As with an adder, in the general case of calculations on multi-bit numbers, three bits are involved
in performing the subtraction for each bit: the minuend (Xi), subtrahend (Yi), and a borrow in from the previous (less
significant) bit order position (Bi). The outputs are the difference bit (Di) and the borrow bit (Bi+1).

Half subtractor

The half-subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, X
(minuend) and Y (subtrahend) and two outputs D (difference) and B (borrow). Such a circuit is called a half-subtractor
because it enables a borrow out of the current arithmetic operation but no borrow in from a previous arithmetic
operation.

The truth table for the half subtractor is given below.

X Y D B

0 0 0 0

0 1 1 1

1 0 1 0

1 1 0 0

D = X′Y + XY′, or D = X ⊕ Y

B=X′Y
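As with the half adder, these two equations can be evaluated with bitwise operators; a short sketch (the function name is mine, not from the text):

```python
def half_subtractor(x, y):
    """Subtract bit y from bit x; return (difference, borrow)."""
    d = x ^ y          # D = X'Y + XY' = X XOR Y
    b = (x ^ 1) & y    # B = X'Y: borrow only when X = 0 and Y = 1
    return d, b

# Reproduce the truth table above.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, half_subtractor(x, y))
```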



Full Subtractor

As in the case of addition using logic gates, a full subtractor is made by combining two half-subtractors and an
additional OR gate. A full subtractor has the borrow-in capability (denoted as BORIN in the diagram below) and so
allows cascading, which results in the possibility of multi-bit subtraction.

The final truth table for a full subtractor (computing D = A - B - BORIN, with A the minuend) looks like:

A B BORIN D BOROUT
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

Find out the equations of the borrow and the difference

The circuit diagram for a full subtractor is given below.

For a wide range of operand sizes, many circuit elements would be required. A neater solution is to perform subtraction
via addition using complementing, as was discussed in the binary arithmetic topic. In this case only adders are needed,
as shown below.

Binary subtraction using adders


Our binary adder can already handle negative numbers, as indicated in the section on binary arithmetic, but we have not
discussed how we can get it to handle subtraction. To see how this can be done, notice that in order to compute the
expression x - y, we can compute x + (-y) instead. We know from the section on binary arithmetic how to
negate a number by inverting all the bits and adding 1. Thus, we can compute the expression as x + inv(y) + 1. It
suffices to invert all the inputs of the second operand before they reach the adder, but how do we add the 1? That seems
to require another adder just for that. Luckily, we have an unused carry-in signal to position 0 that we can use: giving a
1 on this input in effect adds one to the result. The complete circuit with addition and subtraction looks like this:
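The x + inv(y) + 1 trick can be sketched directly on the ripple-carry structure (my own naming; bits are least significant first):

```python
def full_adder(a, b, ci):
    return a ^ b ^ ci, (a & b) | (ci & (a ^ b))

def subtract(x_bits, y_bits):
    """Compute x - y as x + inv(y) + 1 (two's complement), LSB first."""
    carry, out = 1, []                          # carry-in of 1 supplies the "+ 1"
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y ^ 1, carry)  # invert each bit of y
        out.append(s)
    return out                                  # final carry-out is discarded

# 4-bit example: 6 - 3 = 3  ->  bits [1, 1, 0, 0]
print(subtract([0, 1, 1, 0], [1, 1, 0, 0]))
```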



Exercise: Generate the truth table and draw a logic circuit for a 3-bit message parity generator and checker, as seen in
the data representation section.

Medium Scale Integration components

The purpose of circuit minimization is to obtain an algebraic expression that, when implemented, results in a low-cost
circuit. Digital circuits are constructed with integrated circuits (ICs). An IC is a small silicon semiconductor crystal, called
a chip, containing the electronic components for the digital gates. The various gates are interconnected inside the chip to form
the required circuit. Digital ICs are categorized according to their circuit complexity, as measured by the number of logic
gates in a single package.

- Small Scale Integration (SSI). SSI devices contain fewer than 10 gates. The inputs and outputs of the gates are
connected directly to the pins in the package.
- Medium Scale Integration (MSI). MSI devices have a complexity of approximately 10 to 100 gates in a single
package.
- Large Scale Integration (LSI). LSI devices contain between 100 and a few thousand gates in a single package.
- Very Large Scale Integration (VLSI). VLSI devices contain thousands of gates within a single package. VLSI
devices have revolutionized computer system design technology, giving the designer the capability to
create structures that previously were uneconomical.

Multiplexer

A multiplexer is a combinatorial circuit that is given a certain number of data inputs (usually a power of two), let us say 2^n,
and n address inputs used as a binary number to select one of the data inputs. The multiplexer has a single output, which
has the same value as the selected data input.

In other words, the multiplexer works like the input selector of a home music system. Only one input is selected at a
time, and the selected input is transmitted to the single output. While on the music system, the selection of the input is
made manually, the multiplexer chooses its input based on a binary number, the address input.

The truth table for a multiplexer is huge for all but the smallest values of n. We therefore use an abbreviated version of
the truth table in which some inputs are replaced by `-' to indicate that the input value does not matter.

Here is such an abbreviated truth table for n = 3. The full truth table would have 2^(3 + 2^3) = 2^11 = 2048 rows.



SELECT INPUT
a2 a1 a0 d7 d6 d5 d4 d3 d2 d1 d0 | x
- - - - - - - - - - - --- -
0 0 0 - - - - - - - 0 | 0
0 0 0 - - - - - - - 1 | 1
0 0 1 - - - - - - 0 - | 0
0 0 1 - - - - - - 1 - | 1
0 1 0 - - - - - 0 - - | 0
0 1 0 - - - - - 1 - - | 1
0 1 1 - - - - 0 - - - | 0
0 1 1 - - - - 1 - - - | 1
1 0 0 - - - 0 - - - - | 0
1 0 0 - - - 1 - - - - | 1
1 0 1 - - 0 - - - - - | 0
1 0 1 - - 1 - - - - - | 1
1 1 0 - 0 - - - - - - | 0
1 1 0 - 1 - - - - - - | 1
1 1 1 0 - - - - - - - | 0
1 1 1 1 - - - - - - - | 1

We can abbreviate this table even more by using a letter to indicate the value of the selected input, like this:

a2 a1 a0 d7 d6 d5 d4 d3 d2 d1 d0 | x
- - - - - - - - - - - --- -
0 0 0 - - - - - - - c | c
0 0 1 - - - - - - c - | c
0 1 0 - - - - - c - - | c
0 1 1 - - - - c - - - | c
1 0 0 - - - c - - - - | c
1 0 1 - - c - - - - - | c
1 1 0 - c - - - - - - | c
1 1 1 c - - - - - - - | c

The same way we can simplify the truth table for the multiplexer, we can also simplify the corresponding circuit.
Indeed, our simple design method would yield a very large circuit. The simplified circuit looks like this:
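In software terms the multiplexer's behaviour is just an indexed selection; a minimal sketch (my own naming, data given as d0 through d7):

```python
def mux8(data, a2, a1, a0):
    """8-to-1 multiplexer: data is (d0, ..., d7); output the selected input."""
    address = 4 * a2 + 2 * a1 + a0   # address bits form a binary number
    return data[address]

d = (0, 1, 0, 0, 1, 1, 0, 1)         # d0..d7
print(mux8(d, 0, 0, 1))              # address 001 selects d1
```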



Demultiplexer

The demultiplexer is the inverse of the multiplexer, in that it takes a single data input and n address inputs. It has 2^n
outputs. The address inputs determine which data output is going to have the same value as the data input. The other data
outputs will have the value 0.

Here is an abbreviated truth table for the demultiplexer. We could have given the full table since it has only 16 rows, but
we will use the same convention as for the multiplexer where we abbreviated the values of the data inputs.

a2 a1 a0 d | x7 x6 x5 x4 x3 x2 x1 x0

0  0  0  c | 0  0  0  0  0  0  0  c
0  0  1  c | 0  0  0  0  0  0  c  0
0  1  0  c | 0  0  0  0  0  c  0  0
0  1  1  c | 0  0  0  0  c  0  0  0
1  0  0  c | 0  0  0  c  0  0  0  0
1  0  1  c | 0  0  c  0  0  0  0  0
1  1  0  c | 0  c  0  0  0  0  0  0
1  1  1  c | c  0  0  0  0  0  0  0
Here is one possible circuit diagram for the demultiplexer:
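The behaviour in the table above amounts to routing the single data input to the addressed output; a sketch (my own naming, outputs returned as a list indexed x0 through x7):

```python
def demux8(d, a2, a1, a0):
    """1-to-8 demultiplexer; returns outputs as a list [x0, ..., x7]."""
    address = 4 * a2 + 2 * a1 + a0
    outputs = [0] * 8          # all unselected outputs are 0
    outputs[address] = d       # the selected output copies the data input
    return outputs

print(demux8(1, 0, 1, 0))      # address 010 routes d to x2
```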



Decoder

In both the multiplexer and the demultiplexer, part of the circuit decodes the address inputs, i.e. it translates a binary
number of n digits to 2^n outputs, one of which (the one that corresponds to the value of the binary number) is 1 and the
others of which are 0.

It is sometimes advantageous to separate this function from the rest of the circuit, since it is useful in many other
applications. Thus, we obtain a new combinatorial circuit that we call the decoder. It has the following truth table (for n
= 3):

a2 a1 a0 | x7 x6 x5 x4 x3 x2 x1 x0

0  0  0  | 0  0  0  0  0  0  0  1
0  0  1  | 0  0  0  0  0  0  1  0
0  1  0  | 0  0  0  0  0  1  0  0
0  1  1  | 0  0  0  0  1  0  0  0
1  0  0  | 0  0  0  1  0  0  0  0
1  0  1  | 0  0  1  0  0  0  0  0
1  1  0  | 0  1  0  0  0  0  0  0
1  1  1  | 1  0  0  0  0  0  0  0
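The one-hot pattern in this table is easy to express in code; a sketch (my own naming, outputs returned as a list indexed x0 through x7):

```python
def decoder3to8(a2, a1, a0):
    """3-to-8 decoder: exactly one output line is 1."""
    address = 4 * a2 + 2 * a1 + a0
    return [1 if i == address else 0 for i in range(8)]  # [x0, ..., x7]

print(decoder3to8(1, 0, 1))    # address 101 drives x5 high
```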
Here is the circuit diagram for the decoder:



Encoder

An encoder has 2^n input lines and n output lines. The output lines generate a binary code corresponding to the input
value. For example, a 4-to-2 encoder takes in 4 input lines and outputs 2 bits. It is assumed that only four
input patterns can occur: 0001, 0010, 0100, 1000.

I3 I2 I1 I0 F1 F0
0 0 0 1 0 0
0 0 1 0 0 1
0 1 0 0 1 0
1 0 0 0 1 1

4 to 2 encoder



The encoder has the limitation that only one input can be active at any given time. If two inputs are simultaneously
active, the output produces an undefined combination. To prevent this we make use of the priority encoder.

A priority encoder is such that if two or more inputs are active at the same time, the input having the highest priority
takes precedence. An example of a single-bit 4-to-2 priority encoder is shown.

I3 I2 I1 I0 F1 F0
0 0 0 1 0 0
0 0 1 X 0 1
0 1 X X 1 0
1 X X X 1 1

4 to 2 priority encoder

The X's designate don't-care conditions, indicating that the bit may be either 0 or 1. For
example, input I3 has the highest priority, so regardless of the values of the other inputs, if I3 is 1 the output is
F1F0 = 11 (binary 3).
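The priority rule maps naturally onto an if-chain checked from the highest-priority input down; a sketch (my own naming, returning the pair F1, F0):

```python
def priority_encoder(i3, i2, i1, i0):
    """4-to-2 priority encoder: I3 has the highest priority."""
    if i3:
        return (1, 1)          # F1 F0 = 11, regardless of the other inputs
    if i2:
        return (1, 0)
    if i1:
        return (0, 1)
    if i0:
        return (0, 0)
    return None                # no input active: output undefined in this design

print(priority_encoder(1, 0, 1, 1))   # I3 wins even with I1, I0 active
```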

Exercise

1. By using algebraic manipulation, show that:

- A′B′C + A′BC′ + AB′C′ + ABC = A ⊕ (B ⊕ C)
- AB + C′D = (A+B+C)(A+B′+C)(A′+B+C)(A′+B+C′)

2. A circuit has four inputs D,C,B,A encoded in natural binary form, where A is the least significant bit. The inputs in the
range 0000=0 to 1011=11 represent the months of the year from January (0) to December (11). Inputs in the range 1100-
1111 (i.e. 12 to 15) cannot occur. The output of the circuit is true if the month represented by the input has 31 days;
otherwise the output is false. The output for inputs in the range 1100 to 1111 is undefined.
- Draw the truth table to represent the problem and obtain the function F as a sum of minterms.
- Use the Karnaugh map to obtain a simplified expression for the function F.
- Construct the circuit that implements the function using NOR gates only.
3. A circuit has four inputs P,Q,R,S, representing the natural binary numbers 0000=0 to 1111=15. P is the most
significant bit. The circuit has one output, X, which is true if the input to the circuit represents a prime number and
false otherwise. (A prime number is a number which is only divisible by 1 and by itself. Note that zero (0000) and
one (0001) are not considered prime numbers.)
i. Design a truth table for this circuit, and hence obtain an expression for X in terms of P,Q,R,S.
ii. Design a circuit diagram to implement this function using NOR gates only.

4. A combinational circuit is defined by the following three Boolean functions: F1=x’y’z’+xz F2=xy’z’+x’y
F3=x’y’z+xy Design the circuit that implements the functions

5. A circuit implements the Boolean function F=A’B’C’D’+A’BCD’+AB’C’D’+ABC’D It is found that the circuit input
combinations A’B’CD’, A’BC’D’, AB’CD’ can never occur.
i. Find a simpler expression for F using the proper don’t care condition.
ii. Design the circuit implementing the simplified expression of F

6. A combinational circuit is defined by the following three Boolean functions: F1=x’y’z’+xz F2=xy’z’+x’y
F3=x’y’z+xy Design the circuit with a decoder and external gates.

7. A circuit has four inputs P,Q,R,S, representing the natural binary numbers 0000=0 to 1111=15. P is the most
significant bit. The circuit has one output, X, which is true if the number represented is divisible by three (regard zero
as being indivisible by three).
Design a truth table for this circuit, and hence obtain an expression for X in terms of P,Q,R,S as a product of maxterms
and also as a sum of minterms.
Design a circuit diagram to implement this function.

8. Plot the following function on K map and use the K map to simplify the expression.
F = ABC + ABC + ABC + ABC + ABC + ABC F = ABC + ABC + ABC + ABC



9. Simplify the following expressions by means of Boolean algebra
F = ABCD + ABCD + ABCD + ABCD + ABCD + ABCD + ABCD + ABCD
F = ABC + ABC + ABC + ABC + ABC



Sequential circuit
Introduction

In the previous session, we said that the output of a combinational circuit depends solely upon the input. The implication
is that combinational circuits have no memory. In order to build sophisticated digital logic circuits, including computers,
we need a more powerful model. We need circuits whose output depends upon both the input of the circuit and its
previous state. In other words, we need circuits that have memory.

For a device to serve as a memory, it must have three characteristics:

• the device must have two stable states


• there must be a way to read the state of the device
• there must be a way to set the state at least once.

It is possible to produce circuits with memory using the digital logic gates we've already seen. To do that, we need to
introduce the concept of feedback. So far, the logical flow in the circuits we've studied has been from input to output.
Such a circuit is called acyclic. Now we will introduce a circuit in which the output is fed back to the input, giving the
circuit memory. (There are other memory technologies that store electric charges or magnetic fields; these do not
depend on feedback.)

Latches and flip-flops

In the same way that gates are the building blocks of combinatorial circuits, latches and flip-flops are the building blocks
of sequential circuits.

While gates had to be built directly from transistors, latches can be built from gates, and flip-flops can be built from
latches. This fact will make it somewhat easier to understand latches and flip-flops.

Both latches and flip-flops are circuit elements whose output depends not only on the current inputs, but also on
previous inputs and outputs. The difference between a latch and a flip-flop is that a latch does not have a clock signal,
whereas a flip-flop always does.

Latches
How can we make a circuit out of gates that is not combinatorial? The answer is feedback, which means that we create
loops in the circuit diagrams so that output values depend, indirectly, on themselves. If such feedback is positive then
the circuit tends to have stable states, and if it is negative the circuit will tend to oscillate.

In order for a logical circuit to "remember" and retain its logical state even after the controlling input signal(s)
have been removed, it is necessary for the circuit to include some form of feedback. We might start with a pair of
inverters, each having its input connected to the other's output. The two outputs will always have opposite logic
levels.



The problem with this is that we don't have any additional inputs that we can use to change the logic states if we
want. We can solve this problem by replacing the inverters with NAND or NOR gates, and using the extra input
lines to control the circuit.

The circuit shown below is a basic NAND latch. The inputs are generally designated "S" and "R" for "Set" and
"Reset" respectively. Because the NAND inputs must normally be logic 1 to avoid affecting the latching action,
the inputs are considered to be inverted in this circuit.

The outputs of any single-bit latch or memory are traditionally designated Q and Q'. In a commercial latch circuit,
either or both of these may be available for use by other circuits. In any case, the circuit itself is:

For the NAND latch circuit, both inputs should normally be at a logic 1 level. Changing an input to a logic 0 level
will force that output to a logic 1. The same logic 1 will also be applied to the second input of the other NAND gate,
allowing that output to fall to a logic 0 level. This in turn feeds back to the second input of the original gate, forcing
its output to remain at logic 1.

Applying another logic 0 input to the same gate will have no further effect on this circuit. However, applying a logic
0 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the
other way.

Note that it is forbidden to have both inputs at a logic 0 level at the same time. That state will force both outputs to a
logic 1, overriding the feedback latching action. In this condition, whichever input goes to logic 1 first will lose
control, while the other input (still at logic 0) controls the resulting state of the latch. If both inputs go to logic 1
simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.

The same functions can also be performed using NOR gates. A few adjustments must be made to allow for the
difference in the logic function, but the logic involved is quite similar.

The circuit shown below is a basic NOR latch. The inputs are generally designated "S" and "R" for "Set" and
"Reset" respectively. Because the NOR inputs must normally be logic 0 to avoid overriding the latching action, the
inputs are not inverted in this circuit. The NOR-based latch circuit is:

For the NOR latch circuit, both inputs should normally be at a logic 0 level. Changing an input to a logic 1 level will
force that output to a logic 0. The same logic 0 will also be applied to the second input of the other NOR gate,
allowing that output to rise to a logic 1 level. This in turn feeds back to the second input of the original gate, forcing
its output to remain at logic 0 even after the external input is removed.

Applying another logic 1 input to the same gate will have no further effect on this circuit. However, applying a logic
1 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the
other way.

Note that it is forbidden to have both inputs at a logic 1 level at the same time. That state will force both outputs to
a logic 0, overriding the feedback latching action. In this condition, whichever input goes to logic 0 first will lose
control, while the other input (still at logic 1) controls the resulting state of the latch. If both inputs go to logic 0
simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.
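The settling behaviour of the cross-coupled NOR latch can be simulated by iterating the feedback loop until the outputs stop changing; a sketch under my own naming (not from the text):

```python
def nor(a, b):
    return 0 if (a or b) else 1

def nor_latch(s, r, q=0, qbar=1):
    """Iterate the cross-coupled NOR feedback loop until it settles."""
    for _ in range(4):                    # a few passes suffice to settle
        q_next = nor(r, qbar)             # Q    = NOR(R, Q')
        qbar_next = nor(s, q_next)        # Q'   = NOR(S, Q)
        if (q_next, qbar_next) == (q, qbar):
            break                         # stable state reached
        q, qbar = q_next, qbar_next
    return q, qbar

print(nor_latch(1, 0))          # set:   (1, 0)
print(nor_latch(0, 0, 1, 0))    # hold:  stays (1, 0)
print(nor_latch(0, 1, 1, 0))    # reset: (0, 1)
```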



One problem with the basic RS NOR latch is that the input signals actively drive their respective outputs to a logic
0, rather than to a logic 1. Thus, the S input signal is applied to the gate that produces the Q' output, while the R
input signal is applied to the gate that produces the Q output. The circuit works fine, but this reversal of inputs can
be confusing when you first try to deal with NOR-based circuits.

Flip-flops
Latches are asynchronous, which means that the output changes very soon after the input changes. Most computers
today, on the other hand, are synchronous, which means that the outputs of all the sequential circuits change
simultaneously to the rhythm of a global clock signal.

A flip-flop is a synchronous version of the latch.

A flip-flop circuit can be constructed from two NAND gates or two NOR gates. These flip-flops are shown in Figure 2
and Figure 3. Each flip-flop has two outputs, Q and Q′, and two inputs, set and reset. This type of flip-flop is referred to
as an SR flip-flop or SR latch. The flip-flop in Figure 2 has two useful states. When Q=1 and Q′=0, it is in the set state
(or 1-state). When Q=0 and Q′=1, it is in the clear state (or 0 -state). The outputs Q and Q′ are complements of each
other and are referred to as the normal and complement outputs, respectively. The binary state of the flip-flop is taken to
be the value of the normal output.

When a 1 is applied to both the set and reset inputs of the flip-flop in Figure 2, both Q and Q′ outputs go to 0. This
condition violates the fact that both outputs are complements of each other. In normal operation this condition must be
avoided by making sure that 1's are not applied to both inputs simultaneously.

(a) Logic diagram

(b) Truth table

Figure 2. Basic flip-flop circuit with NOR gates



(a) Logic diagram

(b) Truth table

Figure 3. Basic flip-flop circuit with NAND gates

The NAND basic flip-flop circuit in Figure 3(a) operates with inputs normally at 1 unless the state of the flip-flop has to
be changed. A 0 applied momentarily to the set input causes Q to go to 1 and Q′ to go to 0, putting the flip-flop in the set
state. When both inputs go to 0, both outputs go to 1. This condition should be avoided in normal operation.

Clocked SR Flip-Flop

The clocked SR flip-flop shown in Figure 4 consists of a basic NOR flip-flop and two AND gates. The outputs of the
two AND gates remain at 0 as long as the clock pulse (or CP) is 0, regardless of the S and R input values. When the
clock pulse goes to 1, information from the S and R inputs passes through to the basic flip-flop. With both S=1 and R=1,
the occurrence of a clock pulse causes both outputs to momentarily go to 0. When the pulse is removed, the state of the
flip-flop is indeterminate, i.e., either state may result, depending on whether the set or reset input of the flip-flop remains
a 1 longer than the transition to 0 at the end of the pulse.

(a) Logic diagram

(b) Truth table

Figure 4. Clocked SR flip-flop



D Flip-Flop

The D flip-flop shown in Figure 5 is a modification of the clocked SR flip-flop. The D input goes directly into the S
input and the complement of the D input goes to the R input. The D input is sampled during the occurrence of a clock
pulse. If it is 1, the flip-flop is switched to the set state (unless it was already set). If it is 0, the flip-flop switches to the
clear state.

(a) Logic diagram with NAND gates

(b) Graphical symbol (c) Transition table

Figure 5. Clocked D flip-flop

JK Flip-Flop

A JK flip-flop is a refinement of the SR flip-flop in that the indeterminate state of the SR type is defined in the JK type.
Inputs J and K behave like inputs S and R to set and clear the flip-flop (note that in a JK flip-flop, the letter J is for set
and the letter K is for clear). When logic 1 inputs are applied to both J and K simultaneously, the flip-flop switches to its
complement state, i.e., if Q=1, it switches to Q=0 and vice versa.

A clocked JK flip-flop is shown in Figure 6. Output Q is ANDed with the K and CP inputs so that the flip-flop is cleared
during a clock pulse only if Q was previously 1. Similarly, output Q′ is ANDed with the J and CP inputs so that the flip-flop
is set with a clock pulse only if Q′ was previously 1.

Note that because of the feedback connection in the JK flip-flop, a CP signal which remains a 1 (while J=K=1) after the
outputs have been complemented once will cause repeated and continuous transitions of the outputs. To avoid this, the
clock pulses must have a time duration less than the propagation delay through the flip-flop. The restriction on the pulse
width can be eliminated with a master-slave or edge-triggered construction. The same reasoning also applies to the T
flip-flop presented next.

(a) Logic diagram



(c) Transition table

Figure 6. Clocked JK flip-flop

T Flip-Flop

The T flip-flop is a single input version of the JK flip-flop. As shown in Figure 7, the T flip-flop is obtained from the JK
type if both inputs are tied together. The output of the T flip-flop "toggles" with each clock pulse.

(a) Logic diagram

(b) Graphical symbol

(c) Transition table

Triggering of Flip-flops
The state of a flip-flop is changed by a momentary change in the input signal. This change is called a trigger and the
transition it causes is said to trigger the flip-flop. The basic circuits of Figure 2 and Figure 3 require an input trigger
defined by a change in signal level. This level must be returned to its initial level before a second trigger is applied.
Clocked flip-flops are triggered by pulses.

The feedback path between the combinational circuit and memory elements in Figure 1 can produce instability if the
outputs of the memory elements (flip-flops) are changing while the outputs of the combinational circuit that go to the



flip-flop inputs are being sampled by the clock pulse. A way to solve the feedback timing problem is to make the flip-
flop sensitive to the pulse transition rather than the pulse duration.

The clock pulse goes through two signal transitions: from 0 to 1 and the return from 1 to 0. As shown in Figure 8 the
positive transition is defined as the positive edge and the negative transition as the negative edge.

Figure 8. Definition of clock pulse transition

The clocked flip-flops already introduced are triggered during the positive edge of the pulse, and the state transition
starts as soon as the pulse reaches the logic-1 level. If the other inputs change while the clock is still 1, a new output
state may occur. If the flip-flop is made to respond to the positive (or negative) edge transition only, instead of the entire
pulse duration, then the multiple-transition problem can be eliminated.

Master-Slave Flip-Flop

A master-slave flip-flop is constructed from two separate flip-flops. One circuit serves as a master and the other as a
slave. The logic diagram of a master-slave SR flip-flop is shown in Figure 9. The master flip-flop is enabled on the positive edge of
the clock pulse CP and the slave flip-flop is disabled by the inverter. The information at the external R and S inputs is
transmitted to the master flip-flop. When the pulse returns to 0, the master flip-flop is disabled and the slave flip-flop is
enabled. The slave flip-flop then goes to the same state as the master flip-flop.

Figure 9. Logic diagram of a master-slave flip-flop

Master slave RS flip flop

The timing relationship is shown in Figure 10, and it is assumed that the flip-flop is in the clear state prior to the
occurrence of the clock pulse. The output state of the master-slave flip-flop occurs on the negative transition of the clock



pulse. Some master-slave flip-flops change output state on the positive transition of the clock pulse by having an
additional inverter between the CP terminal and the input of the master.

Figure 10. Timing relationship in a master slave flip-flop

Edge Triggered Flip-Flop

Another type of flip-flop that synchronizes the state changes during a clock pulse transition is the edge-triggered flip-
flop. When the clock pulse input exceeds a specific threshold level, the inputs are locked out and the flip-flop is not
affected by further changes in the inputs until the clock pulse returns to 0 and another pulse occurs. Some edge-triggered
flip-flops cause a transition on the positive edge of the clock pulse (positive-edge-triggered), and others on the negative
edge of the pulse (negative-edge-triggered). The logic diagram of a D-type positive-edge-triggered flip-flop is shown in
Figure 11.

Figure 11. D-type positive-edge triggered flip-flop

When using different types of flip-flops in the same circuit, one must ensure that all flip-flop outputs make their
transitions at the same time, i.e., during either the negative edge or the positive edge of the clock pulse.

Direct Inputs

Flip-flops in IC packages sometimes provide special inputs for setting or clearing the flip-flop asynchronously. They are
usually called preset and clear. They affect the flip-flop without the need for a clock pulse. These inputs are useful for
bringing flip-flops to an initial state before their clocked operation. For example, after power is turned on in a digital
system, the states of the flip-flops are indeterminate. Activating the clear input clears all the flip-flops to an initial state
of 0. The graphic symbol of a JK flip-flop with an active-low clear is shown in Figure 12.

(a) Graphic Symbol



(b) Transition table

Figure 12. JK flip-flop with direct clear

Summary
Since memory elements in sequential circuits are usually flip-flops, it is worth summarising the behaviour of the various
flip-flop types before proceeding further. All flip-flops can be divided into four basic types: SR, JK, D and T. They
differ in the number of inputs and in the response invoked by different values of the input signals. The four types of flip-
flops are defined in Table 1.

Table 1. Flip-flop Types

TYPE   CHARACTERISTIC TABLE   CHARACTERISTIC EQUATION       EXCITATION TABLE

SR     S R | Q(next)          Q(next) = S + R′Q             Q Q(next) | S R
       0 0 | Q                (with the constraint SR = 0)  0 0       | 0 X
       0 1 | 0                                              0 1       | 1 0
       1 0 | 1                                              1 0       | 0 1
       1 1 | ?                                              1 1       | X 0

JK     J K | Q(next)          Q(next) = JQ′ + K′Q           Q Q(next) | J K
       0 0 | Q                                              0 0       | 0 X
       0 1 | 0                                              0 1       | 1 X
       1 0 | 1                                              1 0       | X 1
       1 1 | Q′                                             1 1       | X 0

D      D | Q(next)            Q(next) = D                   Q Q(next) | D
       0 | 0                                                0 0       | 0
       1 | 1                                                0 1       | 1
                                                            1 0       | 0
                                                            1 1       | 1

T      T | Q(next)            Q(next) = TQ′ + T′Q           Q Q(next) | T
       0 | Q                                                0 0       | 0
       1 | Q′                                               0 1       | 1
                                                            1 0       | 1
                                                            1 1       | 0



Each of these flip-flops can be uniquely described by its graphical symbol, its characteristic table, its characteristic
equation or excitation table. All flip-flops have output signals Q and Q′.

The characteristic table in Table 1 defines the state of each flip-flop as a function of its inputs and its
previous state. Q refers to the present state and Q(next) refers to the next state after the occurrence of the clock pulse.
The characteristic table for the SR flip-flop shows that the next state is equal to the present state when both inputs S and
R are equal to 0. When R=1, the next clock pulse clears the flip-flop. When S=1, the flip-flop output Q is set to 1. The
question mark (?) for the next state when S and R are both equal to 1 designates an indeterminate next state.

The characteristic table for the JK flip-flop is the same as that of the RS when J and K are replaced by S and R
respectively, except for the indeterminate case. When both J and K are equal to 1, the next state is equal to the
complement of the present state, that is, Q(next) = Q′.

The next state of the D flip-flop is completely dependent on the input D and independent of the present state.

The next state for the T flip-flop is the same as the present state Q if T=0 and complemented if T=1.

The characteristic table is useful during the analysis of sequential circuits when the value of flip-flop inputs are known
and we want to find the value of the flip-flop output Q after the rising edge of the clock signal. As with any other truth
table, we can use the map method to derive the characteristic equation for each flip-flop; these equations are shown in Table 1.

During the design process we usually know the transition from present state to the next state and wish to find the flip-
flop input conditions that will cause the required transition. For this reason we will need a table that lists the required
inputs for a given change of state. Such a list is called the excitation table, which is also shown in Table 1. There are four possible transitions from present state to the next state. The required input conditions are derived from
the information available in the characteristic table. The symbol X in the table represents a "don't care" condition, that
is, it does not matter whether the input is 1 or 0.
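The behaviour summarised in Table 1 is easy to check mechanically. The following Python sketch (not part of the original text; the helper names are illustrative) encodes the four characteristic equations as functions over 0/1 values:

```python
# Characteristic equations of the four flip-flop types (Table 1),
# encoded over 0/1 integers. Helper names are illustrative.

def sr_next(s, r, q):
    assert not (s and r), "S = R = 1 is not allowed for an SR flip-flop"
    return s | ((1 - r) & q)               # Q(next) = S + R'Q

def jk_next(j, k, q):
    return (j & (1 - q)) | ((1 - k) & q)   # Q(next) = JQ' + K'Q

def d_next(d, q):
    return d                               # Q(next) = D

def t_next(t, q):
    return (t & (1 - q)) | ((1 - t) & q)   # Q(next) = TQ' + T'Q, i.e. T XOR Q
```

Note how `jk_next(1, 1, q)` complements q — precisely the input combination that is forbidden for the SR type.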

Synchronous and asynchronous sequential circuit

An asynchronous system is a system whose outputs depend upon the order in which its input variables change and can be
affected at any instant of time.

Gate-type asynchronous systems are basically combinational circuits with feedback paths. Because of the feedback
among logic gates, the system may, at times, become unstable. Consequently they are not often used.

Synchronous type of system uses storage elements called flip-flops that are employed to change their binary value only
at discrete instants of time. Synchronous sequential circuits use logic gates and flip-flop storage devices. Sequential
circuits have a clock signal as one of their inputs. All state transitions in such circuits occur only when the clock value is
either 0 or 1 or happen at the rising or falling edges of the clock depending on the type of memory elements used in the
circuit. Synchronization is achieved by a timing device called a clock pulse generator. Clock pulses are distributed
throughout the system in such a way that the flip-flops are affected only with the arrival of the synchronization pulse.
Synchronous sequential circuits that use clock pulses in the inputs are called clocked-sequential circuits. They are stable
and their timing can easily be broken down into independent discrete steps, each of which is considered separately.

A clock signal is a periodic square wave that indefinitely switches from 0 to 1 and from 1 to 0 at fixed intervals. Clock
cycle time or clock period: the time interval between two consecutive rising or falling edges of the clock.

Moore and Mealy model of sequential circuit

Mealy and Moore models are the basic models of state machines. A state machine which uses only Entry Actions, so
that its output depends on the state, is called a Moore model. A state machine which uses only Input Actions, so that the
output depends on the state and also on inputs, is called a Mealy model. The models selected will influence a design but
there are no general indications as to which model is better. Choice of a model depends on the application, execution
means (for instance, hardware systems are usually best realised as Moore models) and personal preferences of a
designer or programmer. In practice, mixed models are often used, with several action types.
Design of Sequential Circuits

The design of a synchronous sequential circuit starts from a set of specifications and culminates in a logic diagram or a
list of Boolean functions from which a logic diagram can be obtained. In contrast to a combinational logic, which is
fully specified by a truth table, a sequential circuit requires a state table for its specification. The first step in the design
of sequential circuits is to obtain a state table or an equivalence representation, such as a state diagram.

A synchronous sequential circuit is made up of flip-flops and combinational gates. The design of the circuit consists of
choosing the flip-flops and then finding the combinational structure which, together with the flip-flops, produces a
circuit that fulfils the required specifications. The number of flip-flops is determined from the number of states needed
in the circuit.

The recommended steps for the design of sequential circuits are set out below:

Analysis of a sequential circuit

We have examined a general model for sequential circuits. In this model the effect of all previous inputs on the outputs
is represented by a state of the circuit. Thus, the output of the circuit at any time depends upon its current state and the
input. These also determine the next state of the circuit. The relationship that exists among the inputs, outputs, present
states and next states can be specified by either the state table or the state diagram.

State Table
The state table representation of a sequential circuit consists of three sections labelled present state, next state and
output. The present state designates the state of flip-flops before the occurrence of a clock pulse. The next state shows
the states of flip-flops after the clock pulse, and the output section lists the value of the output variables during the
present state.



State Diagram
In addition to graphical symbols, tables or equations, flip-flops can also be represented graphically by a state diagram.
In this diagram, a state is represented by a circle, and the transition between states is indicated by directed lines (or arcs)
connecting the circles. An example of a state diagram is shown in Figure 3 below.

Figure 3. State Diagram

The binary number inside each circle identifies the state the circle represents. The directed lines are labelled with two
binary numbers separated by a slash (/). The input value that causes the state transition is labelled first. The number after
the slash symbol / gives the value of the output. For example, the directed line from state 00 to 01 is labelled 1/0,
meaning that, if the sequential circuit is in present state 00 and the input is 1, then the next state is 01 and the output is 0.
If it is in a present state 00 and the input is 0, it will remain in that state. A directed line connecting a circle with itself
indicates that no change of state occurs. The state diagram provides exactly the same information as the state table and
is obtained directly from the state table.

Example: Consider a sequential circuit shown in Figure 4. It has one input x, one output Z and two state variables
Q1Q2 (thus having four possible present states 00, 01, 10, 11).



Figure 4. A Sequential Circuit

The behaviour of the circuit is determined by the following Boolean expressions:

Z = xQ1
D1 = x′ + Q1
D2 = xQ2′ + x′*Q1′

These equations can be used to form the state table. Suppose the present state (i.e. Q1Q2) = 00 and input x = 0. Under
these conditions, we get Z = 0, D1 = 1, and D2 = 1. Thus the next state of the circuit D1D2 = 11, and this will be the
present state after the clock pulse has been applied. The output of the circuit corresponding to the present state
Q1Q2 = 00 and x = 1 is Z = 0. This data is entered into the state table as shown in Table 2.

Present State    Next State            Output (Z)
Q1 Q2            x = 0      x = 1      x = 0   x = 1
                 Q1 Q2      Q1 Q2
0  0             1  1       0  1       0       0
0  1             1  1       0  0       0       0
1  0             1  0       1  1       0       1
1  1             1  0       1  0       0       1

Table 2. State table for the sequential circuit in Figure 4.
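The entries of Table 2 can be reproduced mechanically from the three Boolean expressions. A small Python sketch (purely illustrative, not from the text) enumerates every present state and input:

```python
# Tabulate the state table of Figure 4 from Z = xQ1, D1 = x' + Q1,
# D2 = xQ2' + x'Q1'. With D flip-flops, (D1, D2) is the next state.
def step(q1, q2, x):
    z  = x & q1
    d1 = (1 - x) | q1
    d2 = (x & (1 - q2)) | ((1 - x) & (1 - q1))
    return (d1, d2), z

for q1 in (0, 1):
    for q2 in (0, 1):
        for x in (0, 1):
            (n1, n2), z = step(q1, q2, x)
            print(f"present {q1}{q2}  x={x}  next {n1}{n2}  Z={z}")
```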

The state diagram for the sequential circuit in Figure 4 is shown in Figure 5.

Figure 5. State Diagram of circuit in Figure 4.



State Diagrams of Various Flip-flops
Table 3 shows the state diagrams of the four types of flip-flops.

NAME  STATE DIAGRAM

SR    (state diagram figure)

JK    (state diagram figure)

D     (state diagram figure)

T     (state diagram figure)
Table 3. State diagrams of the four types of flip-flops.

You can see from the table that all four flip-flops have the same number of states and transitions. Each flip-flop is in the
set state when Q=1 and in the reset state when Q=0. Also, each flip-flop can move from one state to another, or it can re-
enter the same state. The only difference between the four types lies in the values of input signals that cause these
transitions.

A state diagram is a very convenient way to visualise the operation of a flip-flop or even of large sequential
components.



Example 1.1

Derive the state table and state diagram for the sequential circuit shown in Figure 7.

Figure 7. Logic schematic of a sequential circuit.

SOLUTION:

STEP 1: First we derive the Boolean expressions for the inputs of each flip-flop in the schematic, in terms of the
external input Cnt and the flip-flop outputs Q1 and Q0. Since there are two D flip-flops in this example, we derive two
expressions for D1 and D0:

D0 = Cnt ⊕ Q0 = Cnt′Q0 + CntQ0′


D1 = Cnt′Q1 + CntQ1′Q0 + CntQ1Q0′

These Boolean expressions are called excitation equations since they represent the inputs to the flip-flops of the
sequential circuit in the next clock cycle.

STEP 2: Derive the next-state equations by converting these excitation equations into flip-flop characteristic
equations. In the case of D flip-flops, Q(next) = D. Therefore the next-state equations equal the excitation equations.

Q0(next) = D0 = Cnt′Q0 + CntQ0′


Q1(next) = D1 = Cnt′Q1 + CntQ1′Q0 + CntQ1Q0′

STEP 3: Now convert these next-state equations into tabular form called the next-state table.

Present State    Next State
Q1 Q0            Cnt = 0    Cnt = 1
                 Q1 Q0      Q1 Q0
0  0             0  0       0  1
0  1             0  1       1  0
1  0             1  0       1  1
1  1             1  1       0  0

Each row corresponds to a state of the sequential circuit and each column represents one set of input values. Since
we have two flip-flops, the number of possible states is four - that is, Q1Q0 can be equal to 00, 01, 10, or 11. These are
present states as shown in the table.



For the next state part of the table, each entry defines the value of the sequential circuit in the next clock cycle after the
rising edge of the Clk. Since this value depends on the present state and the value of the input signals, the next state
table will contain one column for each assignment of binary values to the input signals. In this example, since there is
only one input signal, Cnt, the next-state table shown has only two columns, corresponding to Cnt = 0 and Cnt = 1.

Note that each entry in the next-state table indicates the values of the flip-flops in the next state if their value in the
present state is in the row header and the input values in the column header.

Each of these next-state values has been computed from the next-state equations in STEP 2.
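That computation can be scripted as a check. The sketch below (illustrative only) evaluates the two next-state equations for all four present states; with Cnt = 1 the circuit counts 00, 01, 10, 11 and wraps, while with Cnt = 0 it holds its state:

```python
# Next-state equations of Example 1.1 (D flip-flops, so Q(next) = D).
def next_state(q1, q0, cnt):
    nq0 = ((1 - cnt) & q0) | (cnt & (1 - q0))   # Cnt'Q0 + CntQ0'
    nq1 = ((1 - cnt) & q1) | (cnt & (1 - q1) & q0) | (cnt & q1 & (1 - q0))
    return nq1, nq0

for q1, q0 in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(q1, q0, "->", next_state(q1, q0, 0), next_state(q1, q0, 1))
```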

STEP 4: The state diagram is generated directly from the next-state table, shown in Figure 8.

Figure 8. State diagram

Each arc is labelled with the values of the input signals that cause the transition from the present state (the source of the
arc) to the next state (the destination of the arc).

Example 1.2

Derive the next state, the output table and the state diagram for the sequential circuit shown in Figure 10.

Figure 10. Logic schematic of a sequential circuit.



SOLUTION:

The input combinational logic in Figure 10 is the same as in Example 1.1, so the excitation and next-state equations
will be the same as in Example 1.1.

Excitation equations:

D0 = Cnt ⊕ Q0 = Cnt′Q0 + CntQ0′
D1 = Cnt′Q1 + CntQ1′Q0 + CntQ1Q0′

Next-state equations:

Q0(next) = D0 = Cnt′Q0 + CntQ0′
Q1(next) = D1 = Cnt′Q1 + CntQ1′Q0 + CntQ1Q0′

In addition, however, we have computed the output equation.

Output equation: Y = Q1Q0

As this equation shows, the output Y will equal 1 when the counter is in state Q1Q0 = 11, and it will stay 1 as long as
the counter stays in that state.

Next-state and output table:

Present State    Next State            Output
Q1 Q0            Cnt = 0    Cnt = 1    Y
00               00         01         0
01               01         10         0
10               10         11         0
11               11         00         1

State diagram:

Figure 11. State diagram of sequential circuit in Figure 10.

State Reduction

Any design process must consider the problem of minimising the cost of the final circuit. The two most obvious cost
reductions are reductions in the number of flip-flops and the number of gates.



The number of states in a sequential circuit is closely related to the complexity of the resulting circuit. It is therefore
desirable to know when two or more states are equivalent in all aspects. The process of eliminating the equivalent or
redundant states from a state table/diagram is known as state reduction.

Example: Let us consider the state table of a sequential circuit shown in Table 6.

Present State    Next State       Output
                 x = 0   x = 1    x = 0   x = 1
A                B       C        1       0
B                F       D        0       0
C                D       E        1       1
D                F       E        0       1
E                A       D        0       0
F                B       C        1       0

Table 6. State table

It can be seen from the table that the present state A and F both have the same next states, B (when x=0) and C (when
x=1). They also produce the same output 1 (when x=0) and 0 (when x=1). Therefore states A and F are equivalent. Thus
one of the states, A or F can be removed from the state table. For example, if we remove row F from the table and
replace all F's by A's in the columns, the state table is modified as shown in Table 7.

Present State    Next State       Output
                 x = 0   x = 1    x = 0   x = 1
A                B       C        1       0
B                A       D        0       0
C                D       E        1       1
D                A       E        0       1
E                A       D        0       0

Table 7. State F removed

It is apparent that states B and E are equivalent. Removing E and replacing E's by B's results in the reduced table shown
in Table 8.

Present State    Next State       Output
                 x = 0   x = 1    x = 0   x = 1
A                B       C        1       0
B                A       D        0       0
C                D       B        1       1
D                A       B        0       1

Table 8. Reduced state table

The removal of equivalent states has reduced the number of states in the circuit from six to four. Two states are
considered to be equivalent if and only if for every input sequence the circuit produces the same output sequence
irrespective of which one of the two states is the starting state.
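The reduction performed above by inspection can be automated with partition refinement: first group states with identical output rows, then repeatedly split any group whose members lead to different groups. A Python sketch (the encoding of Table 6 and the function name are illustrative, not from the text):

```python
# Table 6: state -> ((next, out) for x=0, (next, out) for x=1)
table = {
    'A': (('B', 1), ('C', 0)),
    'B': (('F', 0), ('D', 0)),
    'C': (('D', 1), ('E', 1)),
    'D': (('F', 0), ('E', 1)),
    'E': (('A', 0), ('D', 0)),
    'F': (('B', 1), ('C', 0)),
}

def reduce_states(table):
    # Initial partition: states with identical outputs for every input.
    groups = {}
    for s in table:
        groups.setdefault(tuple(o for _, o in table[s]), []).append(s)
    blocks = list(groups.values())
    while True:
        block_of = {s: i for i, b in enumerate(blocks) for s in b}
        refined = []
        for b in blocks:
            sig = {}
            for s in b:
                # States stay together only if their next states fall
                # into the same blocks for every input value.
                sig.setdefault(tuple(block_of[n] for n, _ in table[s]), []).append(s)
            refined.extend(sig.values())
        if len(refined) == len(blocks):
            return refined
        blocks = refined

print(reduce_states(table))   # A merges with F, B merges with E
```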

Example 1.3



We wish to design a synchronous sequential circuit whose state diagram is shown in Figure 13. The type of flip-flop to
be used is JK.

Figure 13. State diagram

From the state diagram, we can generate the state table shown in Table 9. Note that there is no output section for this
circuit. Two flip-flops are needed to represent the four states and are designated Q0Q1. The input variable is labelled x.

Present State    Next State
Q0 Q1            x = 0      x = 1
                 Q0 Q1      Q0 Q1
0  0             0  0       0  1
0  1             1  0       0  1
1  0             1  0       1  1
1  1             1  1       0  0

Table 9. State table.

We shall now derive the excitation table and the combinational structure. The table is now arranged in a different form,
shown in Table 11, where the present state and input variables are arranged in the form of a truth table. Remember, the
excitation table for the JK flip-flop was derived in Table 1.

Table 10. Excitation table for JK flip-flop

Output Transitions    Flip-flop inputs

Q → Q(next)           J K

0 → 0                 0 X
0 → 1                 1 X
1 → 0                 X 1
1 → 1                 X 0



Table 11. Excitation table of the circuit

Present State Input Next State Flip-flop Inputs

Q0 Q1 x Q0 Q1 J0 K0 J1 K1

0 0 0 0 0 0 X 0 X

0 0 1 0 1 0 X 1 X

0 1 0 1 0 1 X X 1

0 1 1 0 1 0 X X 0

1 0 0 1 0 X 0 0 X

1 0 1 1 1 X 0 1 X

1 1 0 1 1 X 0 X 0

1 1 1 0 0 X 1 X 1

In the first row of Table 11, we have a transition for flip-flop Q0 from 0 in the present state to 0 in the next state. In
Table 10 we find that a transition of states from 0 to 0 requires that input J = 0 and input K = X. So 0 and X are copied
in the first row under J0 and K0 respectively. Since the first row also shows a transition for the flip-flop Q1 from 0 in
the present state to 0 in the next state, 0 and X are copied in the first row under J1 and K1. This process is continued for
each row of the table and for each flip-flop, with the input conditions as specified in Table 10.

The flip-flop input functions are derived:

J0 = Q1*x′ K0 = Q1*x
J1 = x K1 = Q0′*x′ + Q0*x = Q0 ⊙ x

Note: the symbol ⊙ denotes exclusive-NOR.
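As a check, the sketch below (illustrative, not part of the design procedure) applies the derived input functions through the JK characteristic equation and compares the result against every row of Table 9:

```python
# Check the JK input equations of Example 1.3 against Table 9.
def jk(j, k, q):
    return (j & (1 - q)) | ((1 - k) & q)     # JK characteristic equation

def next_state(q0, q1, x):
    j0, k0 = q1 & (1 - x), q1 & x            # J0 = Q1x', K0 = Q1x
    j1, k1 = x, 1 - (q0 ^ x)                 # J1 = x,    K1 = XNOR(Q0, x)
    return jk(j0, k0, q0), jk(j1, k1, q1)

expected = {  # (Q0, Q1, x) -> (Q0(next), Q1(next)) from Table 9
    (0, 0, 0): (0, 0), (0, 0, 1): (0, 1),
    (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
    (1, 0, 0): (1, 0), (1, 0, 1): (1, 1),
    (1, 1, 0): (1, 1), (1, 1, 1): (0, 0),
}
assert all(next_state(*s) == n for s, n in expected.items())
```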

The logic diagram is drawn in Figure 15.

Figure 15. Logic diagram of the sequential circuit.

Example 1.4 Design a sequential circuit whose state tables are specified in Table 12, using D flip-flops.

Table 12. State table of a sequential circuit.



Present State    Next State            Output
Q0 Q1            x = 0      x = 1      x = 0   x = 1
                 Q0 Q1      Q0 Q1
0  0             0  0       0  1       0       0
0  1             0  0       1  0       0       0
1  0             1  1       1  0       0       0
1  1             0  0       0  1       0       1

Table 13. Excitation table for a D flip-flop.

Output Transitions    Flip-flop inputs

Q → Q(next)           D

0 → 0                 0
0 → 1                 1
1 → 0                 0
1 → 1                 1

The next step is to derive the excitation table for the circuit being designed, which is shown in Table 14. The output of the circuit is
labelled Z.

Present State Next State Input Flip-flop Inputs Output


Q0 Q1 Q0 Q1 x D0 D1 Z
0 0 0 0 0 0 0 0
0 0 0 1 1 0 1 0
0 1 0 0 0 0 0 0
0 1 1 0 1 1 0 0
1 0 1 1 0 1 1 0
1 0 1 0 1 1 0 0
1 1 0 0 0 0 0 0
1 1 0 1 1 0 1 1

Table 14. Excitation table

Now plot the flip-flop inputs and output functions on the Karnaugh map to derive the Boolean expressions, which is
shown in Figure 16.



Figure 16. Karnaugh maps

The simplified Boolean expressions are:

D0 = Q0*Q1′ + Q0′*Q1*x
D1 = Q0′*Q1′*x + Q0*Q1*x + Q0*Q1′*x′
Z = Q0*Q1*x
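These simplified expressions can be checked against Table 12 with a short Python sketch (illustrative only; with D flip-flops, D0 and D1 are the next state):

```python
# Check the Karnaugh-map results of Example 1.4 against Table 12.
def step(q0, q1, x):
    d0 = (q0 & (1 - q1)) | ((1 - q0) & q1 & x)
    d1 = ((1 - q0) & (1 - q1) & x) | (q0 & q1 & x) | (q0 & (1 - q1) & (1 - x))
    z  = q0 & q1 & x
    return (d0, d1), z

expected = {  # (Q0, Q1, x) -> ((Q0(next), Q1(next)), Z) from Table 12
    (0, 0, 0): ((0, 0), 0), (0, 0, 1): ((0, 1), 0),
    (0, 1, 0): ((0, 0), 0), (0, 1, 1): ((1, 0), 0),
    (1, 0, 0): ((1, 1), 0), (1, 0, 1): ((1, 0), 0),
    (1, 1, 0): ((0, 0), 0), (1, 1, 1): ((0, 1), 1),
}
assert all(step(*s) == out for s, out in expected.items())
```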

Finally, draw the logic diagram.

Figure 17. Logic diagram of the sequential circuit.



Register
A register is a sequential circuit with n + 1 inputs (not counting the clock) and n outputs. To each of the outputs
corresponds an input. The first n inputs will be called x0 through xn-1 and the last input will be called ld (for load). The n
outputs will be called y0 through yn-1.

When the ld input is 0, the outputs are unaffected by any clock transition. When the ld input is 1, the x inputs are stored
in the register at the next clock transition, making the y outputs into copies of the x inputs before the clock transition.

We can explain this behavior more formally with a state table. As an example, let us take a register with n = 4. The left
side of the state table contains 9 columns, labeled x0, x1, x2, x3, ld, y0, y1, y2, and y3. This means that the state table
has 512 rows. We will therefore abbreviate it. Here it is:

ld x3 x2 x1 x0 y3 y2 y1 y0 | y3′ y2′ y1′ y0′

0 -- -- -- -- c3 c2 c1 c0 | c3 c2 c1 c0

1 c3 c2 c1 c0 -- -- -- -- | c3 c2 c1 c0

As you can see, when ld is 0 (the top half of the table), the right side of the table is a copy of the values of the old
outputs, independently of the inputs. When ld is 1, the right side of the table is instead a copy of the values of the inputs,
independently of the old values of the outputs.

Registers play an important role in computers. Some of them are visible to the programmer, and are used to hold
variable values for later use. Some of them are hidden to the programmer, and are used to hold values that are internal to
the central processing unit, but nevertheless important.
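The load behaviour described above can be sketched in a few lines of Python (the class name is illustrative, not from the text):

```python
# A loadable n-bit register: ld = 0 holds the outputs, ld = 1 copies
# the x inputs to the y outputs on the clock transition.
class Register:
    def __init__(self, n):
        self.y = [0] * n          # outputs, initially cleared

    def clock(self, ld, x):
        if ld:
            self.y = list(x)
        return self.y

r = Register(4)
r.clock(1, [1, 0, 0, 1])          # load 1001
r.clock(0, [0, 0, 0, 0])          # ld = 0: contents unchanged
print(r.y)                        # [1, 0, 0, 1]
```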

Shift registers
Shift registers are a type of sequential logic circuit, mainly for storage of digital data. They are a group of flip-flops
connected in a chain so that the output from one flip-flop becomes the input of the next flip-flop. Most of the registers
possess no characteristic internal sequence of states. All the flip-flops are driven by a common clock, and all are set or
reset simultaneously.

In this section, the basic types of shift registers are studied, such as Serial In - Serial Out, Serial In - Parallel Out,
Parallel In - Serial Out, Parallel In - Parallel Out, and bidirectional shift registers. A special form of counter - the shift
register counter, is also introduced.

Serial In - Serial Out Shift Registers

A basic four-bit shift register can be constructed using four D flip-flops, as shown below. The operation of the circuit is
as follows. The register is first cleared, forcing all four outputs to zero. The input data is then applied sequentially to
the D input of the first flip-flop on the left (FF0). During each clock pulse, one bit is transmitted from left to right.
Assume a data word to be 1001. The least significant bit of the data has to be shifted through the register from FF0 to
FF3.
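The shifting action can be sketched in Python (the class name SISO is illustrative). Shifting the word 1001 in LSB first, the least significant bit reaches FF3 after four clock pulses:

```python
# 4-bit serial in - serial out register: each clock pulse moves every
# bit one stage to the right (FF0 -> FF3).
class SISO:
    def __init__(self):
        self.q = [0, 0, 0, 0]          # [FF0, FF1, FF2, FF3], cleared

    def clock(self, d):
        self.q = [d] + self.q[:-1]     # FF0 takes D, the rest shift right
        return self.q[-1]              # serial output from FF3

reg = SISO()
for bit in [1, 0, 0, 1]:               # shift in 1001, LSB first
    reg.clock(bit)
print(reg.q)                           # [1, 0, 0, 1]
```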



In order to get the data out of the register, they must be shifted out serially. This can be done destructively or non-
destructively. For destructive readout, the original data is lost and at the end of the read cycle, all flip-flops are reset to
zero.

To avoid the loss of data, an arrangement for a non-destructive reading can be done by adding two AND gates, an OR
gate and an inverter to the system. The construction of this circuit is shown below.

The data is loaded into the register when the control line is HIGH (i.e. WRITE). The data can be shifted out of the register
when the control line is LOW (i.e. READ).

Serial In - Parallel Out Shift Registers

For this kind of register, data bits are entered serially in the same manner as discussed in the last section. The difference
is the way in which the data bits are taken out of the register. Once the data are stored, each bit appears on its respective
output line, and all bits are available simultaneously. A construction of a four-bit serial in - parallel out register is
shown below.

A four-bit parallel in - serial out shift register is shown below. The circuit uses D flip-flops and NAND gates for
entering data (ie writing) to the register.



D0, D1, D2 and D3 are the parallel inputs, where D0 is the most significant bit and D3 is the least significant bit. To
write data in, the mode control line is taken LOW and the data is clocked in. The data can be shifted when the mode
control line is HIGH, as SHIFT is active high.

Parallel In - Parallel Out Shift Registers

For parallel in - parallel out shift registers, all data bits appear on the parallel outputs immediately following the
simultaneous entry of the data bits. The following circuit is a four-bit parallel in - parallel out shift register constructed
by D flip-flops.

The D's are the parallel inputs and the Q's are the parallel outputs. Once the register is clocked, all the data at the D
inputs appear at the corresponding Q outputs simultaneously.

Bidirectional Shift Registers

The registers discussed so far involved only right shift operations. Each right shift operation has the effect of
successively dividing the binary number by two. If the operation is reversed (left shift), this has the effect of
multiplying the number by two. With a suitable gating arrangement, a serial shift register can perform both operations.

A bidirectional, or reversible, shift register is one in which the data can be shifted either left or right. A four-bit
bidirectional shift register using D flip-flops is shown below.

Here a set of NAND gates are configured as OR gates to select data inputs from the right or left adjacent bistables, as
selected by the LEFT/RIGHT control line.



Shift Register Counters
Two of the most common types of shift register counters are introduced here: the Ring counter and the Johnson counter.
They are basically shift registers with the serial outputs connected back to the serial inputs in order to produce
particular sequences. These registers are classified as counters because they exhibit a specified sequence of states.

Ring Counters

A ring counter is basically a circulating shift register in which the output of the most significant stage is fed back to the
input of the least significant stage. The following is a 4-bit ring counter constructed from D flip-flops. The output of
each stage is shifted into the next stage on the positive edge of a clock pulse. If the CLEAR signal is high, all the flip-
flops except the first one FF0 are reset to 0. FF0 is preset to 1 instead.

Since the count sequence has 4 distinct states, the counter can be considered as a mod-4 counter. Only 4 of the
maximum 16 states are used, making ring counters very inefficient in terms of state usage. But the major advantage of a
ring counter over a binary counter is that it is self-decoding. No extra decoding circuit is needed to determine what state
the counter is in.
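The circulating action is easy to sketch in Python (an illustrative simulation, not from the text): after the preset, a single 1 moves one stage to the right on each clock, and after four clocks it is back at FF0.

```python
# 4-bit ring counter: the output of the last stage feeds the first.
state = [1, 0, 0, 0]              # FF0 preset to 1, the rest cleared
seq = []
for _ in range(4):
    seq.append(list(state))
    state = [state[-1]] + state[:-1]   # last stage fed back to first

# the 1 circulates: 1000 -> 0100 -> 0010 -> 0001 -> back to 1000
print(seq)
```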

Johnson Counters

Johnson counters are a variation of standard ring counters, with the inverted output of the last stage fed back to the input
of the first stage. They are also known as twisted ring counters. An n-stage Johnson counter yields a count sequence of
length 2n, so it may be considered to be a mod-2n counter. The circuit above shows a 4-bit Johnson counter. The state
sequence for the counter is given in the table



Again, the apparent disadvantage of this counter is that the maximum available states are not fully utilized. Only eight
of the sixteen states are being used.
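A quick simulation (illustrative) confirms the 2n-state sequence for n = 4 stages:

```python
# 4-bit Johnson (twisted ring) counter: the INVERTED output of the
# last stage feeds the first stage.
state = [0, 0, 0, 0]
seen = []
for _ in range(8):
    seen.append(tuple(state))
    state = [1 - state[-1]] + state[:-1]

print(len(set(seen)))   # prints 8: only 8 of the 16 possible states are used
```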

Counters

A sequential circuit that goes through a prescribed sequence of states upon the application of input pulses is called a
counter. The input pulses, called count pulses, may be clock pulses. In a counter, the sequence of states may follow a
binary count or any other sequence of states. Counters are found in almost all equipment containing digital logic. They
are used for counting the number of occurrences of an event and are useful for generating timing sequences to control
operations in a digital system.

A counter is a sequential circuit with no inputs (not counting the clock) and n outputs. Thus, the value after the clock transition depends only on the
old values of the outputs. For a counter, the values of the outputs are interpreted as a sequence of binary digits (see the
section on binary arithmetic).

We shall call the outputs o0, o1, ..., on-1. The value of the outputs for the counter after a clock transition is a binary
number which is one plus the binary number of the outputs before the clock transition.

We can explain this behavior more formally with a state table. As an example, let us take a counter with n = 4. The left
side of the state table contains 4 columns, labeled o0, o1, o2, and o3. This means that the state table has 16 rows. Here it
is in full:

o3 o2 o1 o0 | o3′ o2′ o1′ o0′

0  0  0  0  |  0   0   0   1
0  0  0  1  |  0   0   1   0
0  0  1  0  |  0   0   1   1
0  0  1  1  |  0   1   0   0
0  1  0  0  |  0   1   0   1
0  1  0  1  |  0   1   1   0
0  1  1  0  |  0   1   1   1
0  1  1  1  |  1   0   0   0
1  0  0  0  |  1   0   0   1
1  0  0  1  |  1   0   1   0
1  0  1  0  |  1   0   1   1
1  0  1  1  |  1   1   0   0
1  1  0  0  |  1   1   0   1
1  1  0  1  |  1   1   1   0
1  1  1  0  |  1   1   1   1
1  1  1  1  |  0   0   0   0
As you can see, the right hand side of the table is always one plus the value of the left hand side of the table, except for
the last line, where the value is 0 for all the outputs. We say that the counter wraps around.
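The wrap-around is simply addition modulo 2^n. A Python sketch for n = 4 (the function name is illustrative):

```python
# Next value of a 4-bit counter: current value plus one, modulo 16.
def next_count(o3, o2, o1, o0):
    v = (o3 << 3) | (o2 << 2) | (o1 << 1) | o0
    v = (v + 1) % 16                       # wrap-around on 1111
    return (v >> 3) & 1, (v >> 2) & 1, (v >> 1) & 1, v & 1

print(next_count(0, 1, 1, 1))   # (1, 0, 0, 0)
print(next_count(1, 1, 1, 1))   # (0, 0, 0, 0)  - the counter wraps around
```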



Counters (with some variations) play an important role in computers. Some of them are visible to the programmer, such
as the program counter (PC). Some of them are hidden to the programmer, and are used to hold values that are internal
to the central processing unit, but nevertheless important.

Important variations include:

• The ability to count up or down according to the value of an additional input


• The ability to count or not according to the value of an additional input
• The ability to clear the contents of the counter if some additional input is 1
• The ability to act as a register as well, so that a predetermined value is loaded when some additional input is 1
• The ability to count using a different representation of numbers from the normal (such as Gray-codes, 7-
segment codes, etc)
• The ability to count with increments other than 1

Design of Counters

Example 1.5 A counter is first described by a state diagram, which shows the sequence of states through which the
counter advances when it is clocked. Figure 18 shows a state diagram of a 3-bit binary counter.

Figure 18. State diagram of


a 3-bit binary counter.

The circuit has no inputs other than the clock pulse and no outputs other than its internal state (outputs are taken off each
flip-flop in the counter). The next state of the counter depends entirely on its present state, and the state transition occurs
every time the clock pulse occurs. Figure 19 shows the sequences of count after each clock pulse.



Once the sequential circuit is defined by the state diagram, the next step is to obtain the next-state table, which is
derived from the state diagram in Figure 18 and is shown in Table 15.

Table 15. State table

Present State    Next State
Q2 Q1 Q0         Q2 Q1 Q0
0  0  0          0  0  1
0  0  1          0  1  0
0  1  0          0  1  1
0  1  1          1  0  0
1  0  0          1  0  1
1  0  1          1  1  0
1  1  0          1  1  1
1  1  1          0  0  0

Since there are eight states, the number of flip-flops required would be three. Now we want to implement the counter
design using JK flip-flops.

The next step is to develop an excitation table from the state table, which is shown in Table 16.

Table 16. Excitation table

Output State Transitions       Flip-flop inputs

Present State    Next State    J2 K2   J1 K1   J0 K0
Q2 Q1 Q0         Q2 Q1 Q0
0  0  0          0  0  1       0  X    0  X    1  X
0  0  1          0  1  0       0  X    1  X    X  1
0  1  0          0  1  1       0  X    X  0    1  X
0  1  1          1  0  0       1  X    X  1    X  1
1  0  0          1  0  1       X  0    0  X    1  X
1  0  1          1  1  0       X  0    1  X    X  1
1  1  0          1  1  1       X  0    X  0    1  X
1  1  1          0  0  0       X  1    X  1    X  1

Now transfer the JK states of the flip-flop inputs from the excitation table to Karnaugh maps to derive a simplified
Boolean expression for each flip-flop input. This is shown in Figure 20.



Figure 20.
Karnaugh
maps

The 1s in the Karnaugh maps of Figure 20 are grouped with "don't cares" and the following expressions for the J and K
inputs of each flip-flop are obtained:

J0 = K0 = 1
J1 = K1 = Q0
J2 = K2 = Q1*Q0
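These input equations can be verified by simulation. The sketch below (illustrative only) clocks three JK flip-flops with the derived inputs and shows that the circuit steps through the binary count:

```python
# Simulate the 3-bit counter with J0 = K0 = 1, J1 = K1 = Q0, J2 = K2 = Q1*Q0.
def jk(j, k, q):
    return (j & (1 - q)) | ((1 - k) & q)   # JK characteristic equation

def tick(q2, q1, q0):
    j0 = k0 = 1
    j1 = k1 = q0
    j2 = k2 = q1 & q0
    return jk(j2, k2, q2), jk(j1, k1, q1), jk(j0, k0, q0)

state = (0, 0, 0)
counts = []
for _ in range(8):
    counts.append(state)
    state = tick(*state)

print(counts)   # 000, 001, 010, ..., 111, then the counter wraps to 000
```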

The final step is to implement the combinational logic from the equations and connect the flip-flops to form the
sequential circuit. The complete logic of a 3-bit binary counter is shown in Figure 21.

Figure 21. Logic diagram of a 3-bit binary counter

Example 1.6 Design a counter specified by the state diagram in Example 1.5 using T flip-flops. The state diagram is
shown here again in Figure 22.



Figure 22. State diagram of
a 3-bit binary counter.

The state table will be the same as in Example 1.5.

Now derive the excitation table from the state table, which is shown in Table 17.

Table 17. Excitation table.

Output State Transitions       Flip-flop inputs

Present State    Next State    T2 T1 T0
Q2 Q1 Q0         Q2 Q1 Q0
0  0  0          0  0  1       0  0  1
0  0  1          0  1  0       0  1  1
0  1  0          0  1  1       0  0  1
0  1  1          1  0  0       1  1  1
1  0  0          1  0  1       0  0  1
1  0  1          1  1  0       0  1  1
1  1  0          1  1  1       0  0  1
1  1  1          0  0  0       1  1  1

The next step is to transfer the flip-flop input functions to Karnaugh maps to derive simplified Boolean expressions, as
shown in Figure 23.

Figure 23.
Karnaugh
maps

The following expressions are obtained:

T0 = 1; T1 = Q0; T2 = Q1*Q0
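The same simulation check works for the T-input equations, since a T flip-flop toggles exactly when T = 1 (Q(next) = Q ⊕ T). An illustrative sketch:

```python
# The T-input equations T0 = 1, T1 = Q0, T2 = Q1*Q0 give the binary count.
def tick(q2, q1, q0):
    t0, t1, t2 = 1, q0, q1 & q0
    return q2 ^ t2, q1 ^ t1, q0 ^ t0   # T flip-flop: Q(next) = Q XOR T

state = (0, 0, 0)
for n in range(8):
    # the state equals the binary representation of n at every step
    assert state == ((n >> 2) & 1, (n >> 1) & 1, n & 1)
    state = tick(*state)
```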



Finally, draw the logic diagram of the circuit from the expressions obtained. The complete logic diagram of the counter
is shown in Figure 24.

Figure 24. Logic


diagram of 3-bit
binary counter.

Exercises

Analysis of Sequential Circuits.

1. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.1. Draw the timing diagram of the circuit.

Figure
1.1

2. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.2.

Figure
1.2

3. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.3.



Figure
1.3

4. Derive the state table, output and state diagram for the sequential circuit shown in Figure 1.4.

Figure 1.4

5. A sequential circuit uses two D flip-flops as memory elements. The behaviour of the circuit is described by the
following equations:

D1 = Q1 + x′*Q2
D2 = x*Q1′ + x′*Q2
Z = x′*Q1*Q2 + x*Q1′*Q2′

Derive the state table and draw the state diagram of the circuit.

6. Design a sequential circuit specified by Table 6.1, using JK flip-flops.

Table 6.1

Present State      Next State          Output
Q0 Q1              x=0     x=1         x=0   x=1
0  0               0 0     0 1         0     0
0  1               0 0     1 0         0     0
1  0               1 1     1 0         0     0
1  1               0 0     0 1         0     1

7. Design the sequential circuit in question 6, using T flip-flops.

8. Design a mod-5 counter which has the following binary sequence: 0, 1, 2, 3, 4. Use JK flip-flops.

9. Design a counter that has the following repeated binary sequence: 0, 1, 2, 3, 4, 5, 6, 7. Use RS flip-flops.

10. Design a counter with the following binary sequence: 1, 2, 5, 7 and repeat. Use JK flip-flops.

11. Design a counter with the following repeated binary sequence: 0, 4, 2, 1, 6. Use T flip-flops.



12. Design a counter that counts in the sequence 0, 1, 3, 6, 10, 15, using four a) D, b) SR, c) JK and d) T flip-flops.

13. The content of a 5-bit serial-in, parallel-out shift register with rotation capability is initially 11001. The register is
shifted four times to the right. What are the content and the output of the register after each shift?



Tri-state logic
Both combinatorial circuits and sequential circuits are fairly easy to understand, since we pretty much only have to
understand logic gates and how they are interconnected.

With tri-state logic circuits, this is no longer true. As their name indicates, they manipulate signals that can be in one of
three states, as opposed to only 0 or 1. While this may sound confusing at first, the idea is relatively simple.

Consider a fairly common case in which there are a number of source circuits S1, S2, etc. in different parts of a chip (i.e.,
they are not physically close together). At different times, exactly one of these circuits will generate some binary value that is to
be distributed to some set of destination circuits D1, D2, etc., also in different parts of the chip. At any point in time,
exactly one source circuit can generate a value, and the value is always to be distributed to all the destination circuits.
Obviously, we have to have some signals that select which source circuit is to generate information. Assume for the
moment that we have signals s1, s2, etc. for exactly that purpose. One solution to this problem is indicated in this figure:

As you can see, this solution requires that all outputs are routed to a central place. Often such solutions are impractical
or costly. Since only one of the sources is "active" at one point, we ought to be able to use a solution like this:



However, connecting two or more outputs together is likely to destroy the circuits. The solution to our problem is to use
tri-state logic.

Return to the transistors

A tri-state circuit (combinatorial or sequential) is like an ordinary circuit, except that it has an additional input that we
shall call enable. When the enable input is 1, the circuit behaves exactly like the corresponding normal (not tri-state)
circuit. When the enable input is 0, the outputs are completely disconnected from the rest of the circuit. It is as if
we had taken an ordinary circuit and added a switch on every output, such that the switch is open when enable is 0 and
closed when enable is 1, like this:

which is pretty close to the truth. The switch is just another transistor that can be added at a very small cost.
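The switch analogy can be modelled in a few lines of Python (an illustrative sketch, not real hardware): a disabled driver contributes the high-impedance state, written here as None, and the bus value is well defined only when at most one driver is enabled.

```python
# Toy model of tri-state outputs: a disabled driver outputs None
# (high impedance, "Z") rather than 0 or 1.
def driver(value, enable):
    return value if enable else None  # None models the high-impedance state

def bus(outputs):
    # The bus carries the value of the single enabled driver, if any.
    active = [v for v in outputs if v is not None]
    assert len(active) <= 1, "bus contention: two drivers enabled at once"
    return active[0] if active else None

# Three sources; only the second is enabled, so its value reaches the bus.
print(bus([driver(1, False), driver(0, True), driver(1, False)]))  # 0
```

Enabling two drivers at once trips the assertion, mirroring the observation above that connecting two driven outputs together is likely to destroy the circuits.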

Any circuit can exist in a tri-state version. However, as a special case, we can convert any ordinary circuit to a tri-state
circuit, by using a special tri-state combinatorial circuit that simply copies its inputs to the outputs, but that also has an
enable input. We call such a circuit a bus driver for reasons that will become evident when we discuss buses. A bus
driver with one input is drawn like this:

Here is a version for bidirectional signals:



Memories
A memory is neither a sequential circuit (since we require sequential circuits to be clocked, and memories are not
clocked), nor a combinatorial circuit, since its output values depend on past values.

In general, a memory has m inputs, called the address inputs, that are used to select exactly one out of 2^m words,
each consisting of n bits.

Furthermore, it has n bidirectional connectors called the data lines. These data lines are used both as
inputs, in order to store information in a word selected by the address inputs, and as outputs, in order to recall a
previously stored value. Such a solution reduces the number of required connectors by a factor of two.

Finally, it has an input called enable (see the section on tri-state logic for an explanation) that controls whether the data
lines have defined states or not, and an input called r/w that determines the direction of the data lines.

A memory with arbitrary values of m and n can be built from memories with smaller values of
these parameters. To show how this can be done, we first show how a one-bit memory (one with m = 0 and n = 1) can
be built. Here is the circuit:

The central part of the circuit is an SR-latch that holds one bit of information. When enable is 0, the output d0 is isolated
both from the inputs to and the output from the SR-latch. Information is passed from d0 to the inputs of the latch when
enable is 1 and r/w is 1 (indicating write). Information is passed from the output x to d0 when enable is 1 and r/w is 0
(indicating read).

Now that we know how to make a one-bit memory, we must figure out how to make larger memories. First, suppose we
have n memories of 2^m words, each consisting of a single bit. We can easily convert these to a single memory with
2^m words, each consisting of n bits. Here is how we do it:



We have simply connected all the address inputs together, all the enables together, and all the read/writes together. Each
one-bit memory supplies one of the bits of the n-bit word in the final circuit.

Next, we have to figure out how to make a memory with more words. To show that, we assume that we have two
memories each with m address inputs and n data lines. We show how we can connect them so as to obtain a single
memory with m + 1 address inputs and n data lines. Here is the circuit:

As you can see, the additional address line is combined with the enable input to select one of the two smaller memories.
Only one of them will be connected to the data lines at a time (because of the way tri-state logic works).
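The same decomposition can be sketched in software (the class and method names are illustrative, not from the text): the extra, most significant address bit plays the role of the enable/select logic, choosing which of the two smaller memories responds.

```python
class Memory:
    """A 2**m-word by n-bit memory modelled as a plain word list."""
    def __init__(self, m, n):
        self.m, self.n = m, n
        self.words = [0] * (2 ** m)
    def read(self, addr):
        return self.words[addr]
    def write(self, addr, value):
        self.words[addr] = value

class DoubledMemory:
    """2**(m+1) words built from two 2**m-word memories."""
    def __init__(self, m, n):
        self.m = m
        self.low, self.high = Memory(m, n), Memory(m, n)
    def _select(self, addr):
        # The extra (top) address bit acts as the enable/select signal.
        bank = self.high if (addr >> self.m) & 1 else self.low
        return bank, addr & ((1 << self.m) - 1)
    def read(self, addr):
        bank, offset = self._select(addr)
        return bank.read(offset)
    def write(self, addr, value):
        bank, offset = self._select(addr)
        bank.write(offset, value)

mem = DoubledMemory(m=3, n=8)   # 16 words from two 8-word halves
mem.write(13, 0xAB)             # address 13 = 0b1101 -> high bank, offset 5
print(hex(mem.read(13)))        # 0xab
```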



Read-only memories
A read-only memory (or ROM for short) is like an ordinary memory, except that it does not have the capability of
writing. Its contents are fixed at the factory.

Since the contents cannot be altered, we don't have a r/w signal. Except for the enable signal, a ROM is thus like an
ordinary combinatorial circuit with m inputs and n outputs.

ROMs are usually programmable. They are often sold with contents of all 0s or all 1s. The user can then place the ROM in a
special machine and fill it with the desired contents, i.e. the ROM can be programmed. In that case, we sometimes call it
a PROM (programmable ROM).

Some varieties of PROMS can be erased and re-programmed. The way they are erased is typically with ultra-violet
light. When the PROM can be erased, we sometimes call it EPROM (erasable PROM).

A programmable logic device (PLD)


A programmable logic device or PLD is an electronic component used to build reconfigurable digital circuits. Unlike a
logic gate, which has a fixed function, a PLD has an undefined function at the time of manufacture. Before the PLD can
be used in a circuit it must be programmed

Using a ROM as a PLD


Before PLDs were invented, read-only memory (ROM) chips were used to create arbitrary combinational logic
functions of a number of inputs. Consider a ROM with m inputs (the address lines) and n outputs (the data lines). When
used as a memory, the ROM contains 2^m words of n bits each. Now imagine that the inputs are driven not by an m-bit
address, but by m independent logic signals. Theoretically, there are 2^(2^m) possible Boolean functions of these m signals,
but the structure of the ROM allows just n of these functions to be produced at the output pins. The ROM therefore
becomes equivalent to n separate logic circuits, each of which generates a chosen function of the m inputs.
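As a small illustration of the idea (the functions and sizes are chosen here, not taken from the text), a ROM with m = 2 and n = 2 can hold the truth tables of AND and XOR side by side — the word at address a is simply the pair of output bits for input combination a.

```python
# A ROM with m address inputs and n data outputs, viewed as n truth tables.
# Here m = 2, n = 2: output bit 1 is AND, output bit 0 is XOR of the inputs.
m, n = 2, 2
rom = [0] * (2 ** m)
for a in range(2 ** m):
    x1, x0 = (a >> 1) & 1, a & 1
    rom[a] = ((x1 & x0) << 1) | (x1 ^ x0)

# Driving the "address lines" with logic signals x1, x0 reads out both functions.
print([f"{rom[a]:02b}" for a in range(4)])  # ['00', '01', '01', '10']
```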

The advantage of using a ROM in this way is that any conceivable function of the m inputs can be made to appear at any
of the n outputs, making this the most general-purpose combinatorial logic device available. Also, PROMs
(programmable ROMs), EPROMs (ultraviolet-erasable PROMs) and EEPROMs (electrically erasable PROMs) are
available that can be programmed using a standard PROM programmer without requiring specialised hardware or
software. However, there are several disadvantages:

• they are usually much slower than dedicated logic circuits,

• they cannot necessarily provide safe "covers" for asynchronous logic transitions, so the PROM's outputs may
glitch as the inputs switch,

• they consume more power, and

• because only a small fraction of their capacity is used in any one application, they often make inefficient use
of space.

Since most ROMs do not have input or output registers, they cannot be used stand-alone for sequential logic. An
external TTL register was often used for sequential designs such as state machines.



A memory is just like a human brain. It is used to store data and instructions. Computer memory is the storage
space in a computer where data to be processed and the instructions required for processing are stored.
The memory is divided into a large number of small parts. Each part is called a cell. Each location or cell has a
unique address, which varies from zero to memory size minus one.
For example, if a computer has 64K words, then this memory unit has 64 * 1024 = 65536 memory locations. The
addresses of these locations vary from 0 to 65535.
Memory is primarily of two types
Internal Memory − cache memory and primary/main memory
External Memory − magnetic disk / optical disk etc.

The characteristics of the memory hierarchy, going from top to bottom, are the following:

• Capacity in terms of storage increases.
• Cost per bit of storage decreases.
• Frequency of access of the memory by the CPU decreases.
• Access time by the CPU increases.
RAM
A RAM constitutes the internal memory of the CPU for storing data, programs and program results. It is read/write
memory, called random access memory (RAM).
Access time in RAM is independent of the address of the word; that is, each storage location inside the
memory is as easy to reach as any other location and takes the same amount of time. We can reach into the memory at
random and extremely fast, but RAM can also be quite expensive.
RAM is volatile, i.e. the data stored in it is lost when we switch off the computer or if there is a power failure. Hence, a
backup uninterruptible power supply (UPS) is often used with computers. RAM is small, both in terms of its
physical size and the amount of data it can hold.
RAM is of two types:
• Static RAM (SRAM)
• Dynamic RAM (DRAM)

Static RAM (SRAM)


The word static indicates that the memory retains its contents as long as power remains applied; the data is
nonetheless lost when the power goes down, due to its volatile nature. SRAM chips use a matrix of 6 transistors and no capacitors.
The transistors do not require power to prevent leakage, so SRAM does not have to be refreshed on a regular basis.
Because of the extra space taken by the matrix, SRAM uses more chip area than DRAM for the same amount of storage
space, thus making the manufacturing costs higher.
Static RAM is used as cache memory, which needs to be very fast and small.

Dynamic RAM (DRAM)


DRAM, unlike SRAM, must be continually refreshed in order to maintain its data. This is done by placing
the memory on a refresh circuit that rewrites the data several hundred times per second. DRAM is used for most
system memory because it is cheap and small. All DRAMs are made up of memory cells, each
composed of one capacitor and one transistor.



ROM
ROM stands for Read Only Memory: memory from which we can only read but cannot write. This type
of memory is non-volatile. The information is stored permanently in such memories during manufacture.
A ROM stores the instructions that are required to start the computer when electricity is first turned on; this operation is
referred to as bootstrapping. ROM chips are used not only in computers but also in other electronic items like
washing machines and microwave ovens.
The various types of ROM are the following:

MROM (Masked ROM)
The very first ROMs were hard-wired devices that contained a pre-programmed set of data or instructions. These
kinds of ROMs are known as masked ROMs. Masked ROM is inexpensive.

PROM (Programmable Read-Only Memory)
PROM is read-only memory that can be modified only once by a user. The user buys a blank PROM and enters
the desired contents using a PROM programmer. Inside the PROM chip there are small fuses which are burnt
open during programming. It can be programmed only once and is not erasable.

EPROM (Erasable and Programmable Read-Only Memory)
The EPROM can be erased by exposing it to ultra-violet light for a duration of up to 40 minutes. Usually, an
EPROM eraser achieves this function. During programming, an electrical charge is trapped in an insulated gate
region. The charge is retained for more than ten years because it has no leakage path. To erase this
charge, ultra-violet light is passed through a quartz crystal window in the lid. This exposure to ultra-violet light
dissipates the charge. During normal use, the quartz lid is sealed with a sticker.

EEPROM (Electrically Erasable and Programmable Read-Only Memory)



Sample Question

1. For the binary number 1000, the weight of the column with the 1 is
a. 4 c. 8
b. 6 d. 10

2. The 2’s complement of 1000 is
a. 0111 c. 1001
b. 1000 d. 1010
3. The fractional binary number 0.11 has a decimal value of
a. ¼ c. ¾
b. ½ d. none of the above
4. The hexadecimal number 2C has a decimal equivalent
value of
a. 14 c. 64
b. 44 d. none of the above
5. Assume that a floating-point number is represented in
binary. If the sign bit is 1, the
a. number is negative c. exponent is negative
b. number is positive d. exponent is positive
6. When two positive signed numbers are added, the result
may be larger than the size of the original numbers, creating
overflow. This condition is indicated by
a. a change in the sign bit c. a zero result
b. a carry out of the sign position d. smoke
7. The number 1010 in BCD is
a. equal to decimal eight
b. equal to decimal ten
c. equal to decimal twelve
d. invalid
8. An example of an unweighted code is
a. binary c. BCD
b. decimal d. Gray code



9. An example of an alphanumeric code is
a. hexadecimal c. BCD
b. ASCII d. CRC
10. An example of an error detection method for
transmitted data is the
a. parity check c. both of the above
b. CRC d. none of the above

Answers:
1. c 5. a 9. b
2. b 6. a 10. c
3. c 7. d
4. b 8. d



6. A logic gate that produces a HIGH output only when
all of its inputs are HIGH is a(n)
a. OR gate c. NOR gate
b. AND gate d. NAND gate



9. A 2-input gate produces a HIGH output only when the
inputs agree. This type of gate is a(n)
a. OR gate c. NOR gate
b. AND gate d. XNOR gate
10. The required logic for a PLD can be specified in a
Hardware Description Language by
a. text entry
b. schematic entry
c. state diagrams
d. all of the above

Answers: 4. a 8. d
1. c 5. d 9. d
2. b 6. b 10. d
3. a 7. c
1. The associative law for addition is normally written as
a. A + B = B + A
b. (A + B) + C = A + (B + C)
c. AB = BA
d. A + AB = A
2. The Boolean equation AB + AC = A(B+ C) illustrates
a. the distribution law
b. the commutative law
c. the associative law
d. DE Morgan’s theorem
3. The Boolean expression A . 1 is equal to
a. A b. B



c. 0 d. 1
4. The Boolean expression A + 1 is equal to
a. A c. 0
b. B d. 1
5. The Boolean equation AB + AC = A(B+ C) illustrates
a. the distribution law c. the associative law
b. the commutative law d. DeMorgan’s theorem
6. A Boolean expression that is in standard SOP form is
a. the minimum logic expression
b. contains only one product term
c. has every variable in the domain in every term
d. none of the above
7. Adjacent cells on a Karnaugh map differ from
each other by
a. one variable
b. two variables
c. three variables
d. answer depends on the size of the map

10. In VHDL code, the two main parts are called the
a. I/O and the module b. entity and the architecture



c. port and the module d. port and the architecture
Answers: 4. d 8. a
1. b 5. a 9. d
2. c 6. c 10. b
3. a 7. a

3. If you expand two 4-bit comparators to accept two 8-bit
numbers, the output of the least significant comparator is
a. equal to the final output
b. connected to the cascading inputs of the most
significant comparator
c. connected to the output of the most significant
comparator
d. not used



6. The 74138 is a 3-to-8 decoder. Together, two of these ICs can
be used to form one 4-to-16 decoder. To do this, connect
a. one decoder to the LSBs of the input; the other
decoder to the MSBs of the input
b. all chip select lines to ground
c. all chip select lines to their active levels
d. one chip select line on each decoder to the input MSB



9. The 74138 decoder can also be used as
a. an encoder c. a MUX
b. a DEMUX d. none of the above
10. The 74LS280 can generate even or odd parity. It can
also be used as
a. an adder c. a MUX
b. a parity tester d. an encoder
Answers: 4. c 8. d
1. c 5. a 9. b
2. c 6. d 10. b
3. b 7. a




Fundamentals of Electrical Engineering
Chapter – I
Review of Electromagnetic Phenomena, Circuit Variables & Parameters

BASIC CONCEPTS AND DEFINITIONS

CHARGE
The most basic quantity in an electric circuit is the electric charge. We all experience the effect of
electric charge when we try to remove our wool sweater and have it stick to our body or walk across a
carpet and receive a shock.

Charge is an electrical property of the atomic particles of which matter consists, measured in
coulombs (C). Charge, positive or negative, is denoted by the letter q or Q.

We also know that the charge ‘e’ on an electron is negative and equal in magnitude to 1.602x10-19 C,
while a proton carries a positive charge of the same magnitude as the electron and the neutron has no
charge. The presence of equal numbers of protons and electrons leaves an atom neutrally charged.

Coulomb’s Law
Charles Coulomb, a French scientist, observed that when two charges are placed near each
other, they experience a force. He performed a number of experiments to study the nature and
magnitude of the force between the charged bodies. He summed up his conclusions into two laws,
known as Coulomb’s laws.

First law. This law relates to the nature of the force between two charged bodies and may be stated
as under:
Like charges repel each other while unlike charges attract each other.
In other words, if two charges are of the same nature (i.e. both positive or both negative), the
force between them is repulsion. On the other hand, if one charge is positive and the other negative,
the force between them is an attraction.

Second law. This law tells about the magnitude of the force between two charged bodies and may
be stated as under:
The force between two point charges is directly proportional to the product of their magnitudes
and inversely proportional to the square of the distance between their centres. Mathematically,

F = K * Q1 * Q2 / d^2

where K is a constant whose value depends upon the medium in which the charges are
placed.

One coulomb is that charge which, when placed in air at a distance of one metre from an equal and similar charge,
repels it with a force of 9 × 10^9 N.
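This definition can be checked against Coulomb's law directly (a numerical sketch; k ≈ 9 × 10^9 N·m²/C² for air or vacuum):

```python
# Coulomb's law: F = k * q1 * q2 / d**2, with k about 9e9 N*m^2/C^2 in air.
k = 9e9
q1 = q2 = 1.0            # two equal charges of one coulomb each
d = 1.0                  # separated by one metre
F = k * q1 * q2 / d ** 2
print(F)                 # 9000000000.0 N, matching the definition of the coulomb
```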
Electric Field

Electric field can be considered as an electric property associated with each point in the space where a
charge is present in any form. An electric field is also described as the electric force per unit charge.
The formula for the electric field is given as:
E = F / Q
where

Prepared by: A. Clement Raj


E is the electric field.
F is a force.
Q is the charge.
Electric fields are usually caused by varying magnetic fields or by electric charges. Electric field strength is
measured in the SI unit volt per metre (V/m). The direction of the field is taken as the direction of the force
exerted on a positive charge. The electric field points radially outward from a positive charge and
radially inward toward a negative point charge.

VOLTAGE (or) POTENTIAL DIFFERENCE


To move the electrons in a conductor in a particular direction requires some work or energy
transfer. This work is performed by an external electromotive force (emf), typically
represented by the battery in Fig. 1.3. This emf is also known as voltage or potential
difference. The voltage v_ab between two points a and b in an electric circuit is the energy (or
work) needed to move a unit charge from a to b.

Voltage (or potential difference) is the energy required to move charge from one point to the other,
measured in volts (V). Voltage is denoted by the letter v or V.

Voltage is always measured across a circuit element as shown in Fig. 1.4

Voltage across Resistor (R)


CURRENT
Current can be defined as the motion of charge through a conducting material, measured in Ampere
(A). Electric current, is denoted by the letter i or I.
The unit of current is the ampere abbreviated as (A) and corresponds to the quantity of total charge
that passes through an arbitrary cross section of a conducting material per unit second.
Mathematically,
I = Q/t  or  Q = I*t
where Q is the charge measured in coulombs (C), I is the current in amperes (A) and t is
the time in seconds (s).
Current is always measured through a circuit element as shown in Figure

Current through Resistor (R)

Two types of currents:


1) A direct current (DC) is a current that remains constant with time.

2) An alternating current (AC) is a current that varies with time.

Fig. 1.2 Two common types of current: (a) direct current (DC), (b) alternative current (AC)

ENERGY
Energy is the capacity to do work, and is measured in joules (J). The energy absorbed or
supplied by an element from time 0 to t is given by

w = ∫₀ᵗ p dt

POWER
Power is the time rate of expending or absorbing energy, measured in watts (W). Power is
denoted by the letter p or P.
Mathematically,

p = dw/dt

where p is power in watts (W), w is energy in joules (J), and t is time in
seconds (s). From the voltage and current equations, it follows that

p = v * i

Thus, if the magnitudes of the current I and the voltage V are given, then the power can be evaluated as the
product of the two quantities and is measured in watts (W).

Sign of power:
Plus sign: Power is absorbed by the element. (Resistor, Inductor)
Minus sign: Power is supplied by the element. (Battery, Generator)

Example 1
An electric heater consumes 1.8 MJ when connected to a 250 V supply for 30 minutes. Find the power
rating of the heater and the current taken from the supply.
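A worked solution (computed here; the text states only the problem): power is energy divided by time, and the current follows from P = V * I.

```python
# Example 1 worked out: 1.8 MJ delivered over 30 minutes at 250 V.
energy = 1.8e6          # joules
t = 30 * 60             # 30 minutes in seconds
v = 250.0               # volts

power = energy / t      # P = W / t
current = power / v     # from P = V * I
print(power, current)   # 1000.0 W and 4.0 A
```

So the heater is rated at 1 kW and draws 4 A from the supply.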

Faraday’s Laws of Electromagnetic Induction:


Electromagnetic induction: the phenomenon of production of e.m.f., and hence current, in a
conductor or coil when the magnetic flux linking the conductor or coil changes is called
electromagnetic induction. Consider a coil C of several turns connected to a centre-zero
galvanometer G as shown in the figure. If a permanent magnet is moved towards the coil, it will be
observed that the galvanometer shows a deflection in one direction.


If the magnet is moved away from the coil, the galvanometer again shows deflection but in the
opposite direction. In either case, the deflection will persist so long as the magnet is in motion.
The production of e.m.f. and hence current in the coil C is due to the fact that when the magnet is
in motion (towards or away from the coil), the amount of flux linking the coil changes—the
basic requirement for inducing e.m.f. in the coil. If the movement of the magnet is stopped,
though the flux is linking the coil, there is no change in flux and hence no e.m.f. is induced in
the coil. Consequently, the deflection of the galvanometer reduces to zero.
Faraday's 1st law:
Faraday's first law of electromagnetic induction states that whenever a conductor is placed
in a varying magnetic field, an emf is induced, called the induced emf; if the conductor circuit is
closed, a current is also induced, called the induced current.
Faraday's 2nd law:
Faraday's second law of electromagnetic induction states that the induced emf is equal to the rate of change
of flux linkages (the product of the number of turns N of the coil and the flux Φ associated with it).
Let
Initial flux linkages = NΦ1
Final flux linkages = NΦ2
Change in flux linkages = N(Φ2 − Φ1)

e = −N dΦ/dt volts

The negative sign is in accordance with Lenz's law.
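A small numerical illustration of the second law (the values are chosen for the example, not taken from the text), using the average-rate form e = N(Φ2 − Φ1)/t for the magnitude of the induced emf:

```python
# Magnitude of induced emf: e = N * (phi2 - phi1) / t
N = 200                     # turns
phi1, phi2 = 0.001, 0.003   # initial and final flux, in webers
t = 0.1                     # time over which the flux changes, in seconds

e = N * (phi2 - phi1) / t
print(round(e, 6))          # 4.0 volts
```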

(A)Self-inductance (L)
The property of a coil that opposes any change in the amount of current flowing through it is
called its self-inductance or inductance.


(B)Mutual inductance :


ELECTRICAL CIRCUIT PARAMETERS


CIRCUIT ELEMENTS
An element is the basic building block of a circuit. An electric circuit is simply an
interconnection of the elements. Circuit analysis is the process of determining voltages across
(or the currents through) the elements of the circuit.
There are 2 types of elements found in electrical circuits.
a) Active elements (energy sources): The elements which are capable of generating or
delivering energy are called active elements.
E.g., Generators, Batteries
b) Passive element (Loads): The elements which are capable of receiving the energy are
called passive elements.
E.g., Resistors, Capacitors and Inductors
ACTIVE ELEMENTS (ENERGY SOURCES)
Energy sources which have the capacity to generate energy are called active
elements. The most important active elements are voltage or current sources that generally
deliver power/energy to the circuit connected to them.
There are two kinds of sources
a) Independent sources
b) Dependent sources
INDEPENDENT SOURCES:
An ideal independent source is an active element that provides a specified voltage or current
that is completely independent of other circuit elements.
DEPENDENT (CONTROLLED) SOURCES
An ideal dependent (or controlled) source is an active element in which the source quantity is
controlled by another voltage or current. Dependent sources are usually designated by
diamond-shape.
PASSIVE ELEMENTS (LOADS)
Passive elements are those elements which are capable of receiving energy. Some
passive elements, like inductors and capacitors, are capable of storing a finite amount of
energy and returning it later to an external element. More specifically, a passive element is
defined as one that cannot supply average power greater than zero over an infinite time
interval. Resistors, capacitors and inductors fall in this category.

RESISTOR
Materials in general have a characteristic behavior of resisting the flow of electric charge.
This physical property, or ability to resist the flow of current, is known as resistance and is
represented by the symbol R. The Resistance is measured in ohms (Ω). The circuit element
used to model the current- resisting behavior of a material is called the resistor.


The resistance of a resistor depends on the material of which the conductor is made and
the geometrical shape of the conductor. The resistance of a conductor is proportional to its length
(l) and inversely proportional to its cross-sectional area (A). Therefore the resistance of a
conductor can be written as

R = ρl / A

The proportionality constant ρ is called the specific resistance or resistivity of the conductor,
and its value depends on the material of which the conductor is made.
The inverse of resistance is called conductance, and the inverse of resistivity is called
specific conductance or conductivity. The symbol used to represent conductance is G and
conductivity is σ. Thus conductivity σ = 1/ρ, and its units are siemens per metre.

Example 2: In the circuit shown in the figure below, calculate the current i, the conductance G, the
power p and the energy w lost in the resistor in 2 hours.

Solution:
The voltage across the resistor is the same as the source voltage (30 V) because the resistor and the
voltage source are connected to the same pair of terminals. Hence, the current is

INDUCTOR
A change in the magnitude of the current changes the electromagnetic field. An increase in current
expands the field, and a decrease in current reduces it.


Therefore, a change in current produces a change in the electromagnetic field, which induces a voltage
across the coil according to Faraday's law of electromagnetic induction; i.e., the voltage across the inductor
is directly proportional to the time rate of change of current:

v = L (di/dt)

where L is the constant of proportionality called the inductance of the inductor. The unit of inductance
is the henry (H). We can rewrite the above equation as

di = (1/L) v dt

Integrating both sides from time 0 to t, we get

i = (1/L) ∫₀ᵗ v dt + i(0)

The power absorbed by the inductor is

p = v * i = L i (di/dt)

The energy stored by the inductor is

w = (1/2) L i²
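These inductor relations can be checked numerically (the values are illustrative, not from the text):

```python
# Inductor relations: v = L di/dt, p = v*i, w = (1/2) L i^2.
L = 0.5                 # inductance in henries
i = 2.0                 # instantaneous current in amperes
di_dt = 100.0           # rate of change of current, A/s

v = L * di_dt           # induced voltage
p = v * i               # instantaneous power absorbed
w = 0.5 * L * i ** 2    # energy stored in the magnetic field
print(v, p, w)          # 50.0 V, 100.0 W, 1.0 J
```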

CAPACITOR

(a) Typical Capacitor, (b) Capacitor connected to a voltage source, (c) Circuit Symbol of capacitor

Any two conducting surfaces separated by an insulating medium exhibit the property of a capacitor.
The conducting surfaces are called electrodes, and the insulating medium is called dielectric. A
capacitor stores energy in the form of an electric field that is established by the opposite charges on
the two electrodes. The electric field is represented by lines of force between the positive and negative
charges, and is concentrated within the dielectric.
The amount of charge stored, represented by q, is directly proportional to the applied voltage v, so
that

q = Cv
where C, the constant of proportionality, is known as the capacitance of the capacitor. The unit of
capacitance is the farad (F).
Although the capacitance C of a capacitor is the ratio of the charge q per plate to the applied voltage v, it
does not depend on q or v. It depends on the physical dimensions of the capacitor.
The current flowing through the capacitor is given by

i = C (dv/dt)

Integrating both sides from time 0 to t, we get

v = (1/C) ∫₀ᵗ i dt + v(0)
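A quick numerical check of q = Cv and i = C dv/dt (illustrative values; the derivative is approximated by a small voltage ramp):

```python
# Capacitor relations: q = C*v and i = C dv/dt.
C = 10e-6               # 10 microfarads
v = 5.0                 # applied voltage
q = C * v               # stored charge in coulombs

# Approximate i = C dv/dt: the voltage rises 0.2 V in 1 ms.
dv, dt = 0.2, 1e-3
i = C * dv / dt
print(round(q, 9), round(i, 9))  # 5e-05 C, 0.002 A
```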


SERIES RESISTORS AND VOLTAGE DIVISION


Two or more resistors are said to be in series if the same current flows through all of them. The process of
combining the resistors is facilitated by combining two of them at a time. With this in mind, consider the
single-loop circuit of Fig. 1.18.

PARALLEL RESISTORS AND CURRENT DIVISION


Two or more resistors are said to be in parallel if the same voltage appears across each element. Consider
the circuit in Fig. 1.20, where two resistors are connected in parallel and therefore have the same voltage
across them.
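The two rules can be checked numerically; the resistor and source values below are assumed for illustration and are not those of Fig. 1.18 or Fig. 1.20:

```python
# Series and parallel combination with voltage and current division
# (assumed example values, not taken from the figures).

# Series: 4 Ω and 8 Ω across a 12 V source -> same current in both
R1, R2, V = 4.0, 8.0, 12.0
Req_series = R1 + R2                 # 12 Ω
I = V / Req_series                   # 1 A through both resistors
V2 = V * R2 / (R1 + R2)              # voltage division -> 8 V across R2

# Parallel: 6 Ω and 3 Ω fed by a 9 A source -> same voltage across both
Ra, Rb, It = 6.0, 3.0, 9.0
Req_par = Ra * Rb / (Ra + Rb)        # 2 Ω
Ia = It * Rb / (Ra + Rb)             # current division -> 3 A in the 6 Ω
print(I, V2, Req_par, Ia)
```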


Source Conversion
An electrical source transformation (or just "source transformation") is a method for simplifying
circuits by replacing a voltage source with its equivalent current source, or a current source with
its equivalent voltage source. Source transformations are implemented using Thévenin’s theorem
and Norton’s theorem.

Chapter – II
Circuit Analysis
KIRCHHOFF’S LAWS

The most common and useful set of laws for solving electric circuits are the Kirchhoff’s voltage and
current laws. Several other useful relationships can be derived based on these laws. These laws are
formally known as Kirchhoff’s current law (KCL) and Kirchhoff’s voltage law (KVL).

KIRCHHOFF’S CURRENT LAW (KCL)


This is also called Kirchhoff’s first law or Kirchhoff’s nodal law. Kirchhoff’s first law is based on
the law of conservation of charge, which requires that the algebraic sum of charges within a system
cannot change.
Statement: Algebraic sum of the currents meeting at any junction or node is zero. The term
'algebraic' means the value of the quantity along with its sign, positive or negative.

Mathematically, KCL implies that

∑ₙ₌₁ᴺ iₙ = 0
Where N is the number of branches connected to the node and i𝑛 is the nth current entering (or
leaving) the node. By this law, currents entering a node may be regarded as positive, while currents
leaving the node may be taken as negative or vice versa.

Alternate Statement: Sum of the currents flowing towards a junction is equal to the sum of the
currents flowing away from the junction.

Fig 1.16 Currents meeting in a junction


Applying Kirchhoff’s current law to the junction A
I1 + I2 - I3 + I4 - I5= 0 (algebraic sum is zero)
The above equation can be modified as I1 + I2 + I4 = I3 + I5 (sum of currents towards the junction =
sum of currents flowing away from the junction).

KIRCHHOFF’S VOLTAGE LAW (KVL)

This is also called Kirchhoff’s second law or Kirchhoff’s loop or mesh law. Kirchhoff’s second law
is based on the principle of conservation of energy.
Statement: Algebraic sum of all the voltages around a closed path or closed loop at any instant is
zero. Algebraic sum of the voltages means the magnitude and direction of the voltages; care should be
taken in assigning proper signs or polarities for voltages in different sections of the circuit.
The polarity of the voltages across active elements is fixed on its terminals. The polarity of the
voltage drop across the passive elements (Resistance in DC circuits) should be assigned with
reference to the direction of the current through the elements with the concept that the current flows
from a higher potential to lower potential. Hence, the entry point of the current through the passive
elements should be marked as the positive polarity of voltage drop across the element and the exit
point of the current as the negative polarity. The direction of currents in different branches of the
circuits is initially marked either with the known direction or assumed direction.

After assigning the polarities for the voltage drops across the different passive elements, algebraic
sum is accounted around a closed loop, either clockwise or anticlockwise, by assigning a particular
sign, say the positive sign for all rising potentials along the path of tracing and the negative sign for
all decreasing potentials. For example consider the circuit shown in Fig. 1.17

Fig. Circuit for KVL


The circuit has three active elements with voltages E1, E2 and E3. The polarity of each of them is
fixed. R1, R2, R3 are three passive elements present in the circuit. Currents I1 and I3 are marked
flowing into the junction A and current I2 marked away from the junction A with known information
or assumed directions. With reference to the direction of these currents, the polarity of voltage drops
V1, V2 and V3 are marked.

For loop1 it is considered around clockwise


+ E1 - V1 + V3 - E3 = 0
+ E1 - I1 R1 + I3 R3 - E3 = 0
E1 - E3 = I1 R1 - I3 R3
For loop2 it is considered anticlockwise
+ E2+ V2+ V3 – E3 = 0
+ E2 + I2 R2 + I3 R3 – E3 = 0
E2 – E3 = - I2 R2 - I3 R3
Two equations are obtained following Kirchhoff’s voltage law. The third equation can be written
based on Kirchhoff’s current law as
I1 – I2 + I3 = 0
With the three equations, one can solve for the three currents I1, I2, and I3.
If the results obtained for I1, I2, and I3 are all positive, then the assumed direction of the currents are
said to be along the actual directions. A negative result for one or more currents will indicate that the
assumed direction of the respective current is opposite to the actual direction.
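The three equations above can be solved numerically once element values are known. The sketch below assumes illustrative values (E1 = 10 V, E2 = 5 V, E3 = 2 V, R1 = 2 Ω, R2 = 3 Ω, R3 = 4 Ω); they are not taken from Fig. 1.17:

```python
import numpy as np

# Solving the two KVL equations and the KCL equation for I1, I2, I3
# with assumed element values (not from the figure):
E1, E2, E3 = 10.0, 5.0, 2.0
R1, R2, R3 = 2.0, 3.0, 4.0

# E1 - E3 =  I1*R1 - I3*R3        (loop 1)
# E2 - E3 = -I2*R2 - I3*R3        (loop 2)
# I1 - I2 + I3 = 0                (KCL at junction A)
A = np.array([[R1, 0.0, -R3],
              [0.0, -R2, -R3],
              [1.0, -1.0, 1.0]])
b = np.array([E1 - E3, E2 - E3, 0.0])
I1, I2, I3 = np.linalg.solve(A, b)
print(I1, I2, I3)  # a negative value means the assumed direction is reversed
```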
Star (Y) - delta (Δ) transformation
In solving networks (having considerable number of branches) by the application of Kirchhoff’s
Laws, one sometimes experiences great difficulty due to a large number of simultaneous
equations that have to be solved. However, such complicated networks can be simplified by successively replacing delta meshes by equivalent star systems and vice versa. Suppose we are
given three resistances R12, R23 and R31 connected in delta fashion between terminals 1, 2 and 3
as in Fig. (a). So far as the respective terminals are concerned, these three given resistances can be
replaced by the three resistances R1, R2 and R3 connected in star as shown in Fig. (b).

Delta to Star

Star to Delta
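The conversion formulas belonging to the two figures are the standard ones; the Python sketch below implements both directions, with an assumed delta of 3 Ω, 4 Ω, 5 Ω used only to check the round trip:

```python
# Standard Δ→Y and Y→Δ conversion formulas (the figures referred to
# above are not reproduced here; terminal numbering follows Fig. (a)/(b)).

def delta_to_star(R12, R23, R31):
    """Resistances R1, R2, R3 of the equivalent star."""
    s = R12 + R23 + R31
    return R12 * R31 / s, R23 * R12 / s, R31 * R23 / s

def star_to_delta(R1, R2, R3):
    """Resistances R12, R23, R31 of the equivalent delta."""
    p = R1 * R2 + R2 * R3 + R3 * R1
    return p / R3, p / R1, p / R2

# Round trip: converting Δ(3, 4, 5) to Y and back recovers (3, 4, 5)
Ra, Rb, Rc = delta_to_star(3.0, 4.0, 5.0)
print(star_to_delta(Ra, Rb, Rc))
```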


Mesh (loop)

Circuit Terminology
 Node - A point where two or more branches meet
 Essential node - A node where three or more branches combine
 Path - A trace of the adjacent circuit elements, where no element is included more than once.
 Branch - A path that connects two nodes, and contains a single element such as voltage source or resistor
 Essential Branch - Path that connects two nodes without passing through an essential node
 Loop - A closed path in a circuit
 Mesh - A loop that does not contain any other loops.
Mesh (loop) Analysis
Mesh analysis provides a general procedure for analyzing circuits, using mesh currents as the circuit variables. Using
mesh currents instead of element currents as circuit variables is convenient and reduces the number of equations that
must be solved simultaneously.
A loop is a closed path with no node passed more than once. A mesh is a loop that does not contain any other loop
within it. Mesh analysis applies KVL to find unknown currents.
Steps to determine mesh currents
1. Assign mesh currents i1, i2. . . in to the n meshes.
2. Apply KVL to each of the n meshes. Use Ohm’s law to express the voltages in terms of the mesh currents.
3. Solve the resulting n simultaneous equations to get the mesh currents.
Explanation by a simple Circuit

Mesh ABDA.
– I1R1 – (I1 – I2) R2 + E1 = 0
or
I1 (R1 + R2) – I2R2 = E1 ……………………… ……(i)

Mesh BCDB.
– I2R3 – E2 – (I2 – I1) R2 = 0
or
– I1R2 + (R2 + R3) I2 = – E2 ………………………………. (ii)
Solving eq. (i) and eq. (ii) simultaneously, mesh currents I1 and I2 can be found out. Once the mesh
currents are known, the branch currents can be readily obtained. The advantage of this method is that it
usually reduces the number of equations to solve a network problem.


Example 1: For the circuit shown below, find Vo?


[Figure: three-mesh ladder circuit — 40 V source on the left; series resistors 2 Ω, 6 Ω and 4 Ω along the top; shunt resistors 8 Ω and 6 Ω; 20 V source on the right; mesh currents i1, i2, i3, with Vo taken across the 8 Ω resistor]

Solution:
We have 3 meshes (loops)
KVL left loop : ( )

KVL middle loop: ( ) ( )

KVL right loop : ( )

Solve the above three equations in the variables i1, i2 and i3 simultaneously. In matrix
form the equations can be expressed as

[ ][ ] [ ]

Solving the matrix equation gives the mesh currents i1, i2 and i3, from which Vo is obtained as the voltage across the 8 Ω resistor.

Example 2: Calculate the current in each branch of the circuit shown below

Solution. Assign mesh currents I1, I2 and I3 to meshes ABHGA, HEFGH and BCDEHB
respectively as shown below

Mesh ABHGA. Applying KVL, we have,


– 60I1 – 30(I1 – I3) – 50(I1 – I2) – 20 + 100 = 0 or
140I1 – 50I2 – 30I3 = 80
or 14I1 – 5I2 – 3I3 = 8 ...(i)
Mesh GHEFG. Applying KVL, we have,
20 – 50(I2 – I1) – 40(I2 – I3) – 10I2 + 50 = 0
or –50I1 + 100I2 – 40I3 = 70


or –5I1 + 10I2 – 4I3 = 7 ...(ii)


Mesh BCDEHB. Applying KVL, we have,
–20I3 – 40(I3 – I2) – 30(I3 – I1) = 0
or 30I1 + 40I2 – 90I3 = 0
or 3I1 + 4I2 – 9I3 = 0 ...(iii)
Solving equations (i), (ii) and (iii), we get I1 = 1.65 A; I2 = 2.12 A; I3 = 1.5 A

By determinant method
14I1 – 5I2 – 3I3 = 8
–5I1 + 10I2 – 4I3 = 7
3I1 + 4I2 – 9I3 = 0
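The same system can be solved numerically, which is equivalent to the determinant (Cramer's rule) method; a NumPy sketch:

```python
import numpy as np

# The three mesh equations above in matrix form, solved numerically
# (equivalent to the determinant method):
A = np.array([[14.0, -5.0, -3.0],
              [-5.0, 10.0, -4.0],
              [ 3.0,  4.0, -9.0]])
b = np.array([8.0, 7.0, 0.0])
I1, I2, I3 = np.linalg.solve(A, b)
print(I1, I2, I3)  # ≈ 1.65 A, 2.12 A, 1.49 A
```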

NODAL ANALYSIS:
 First we find the number of KCL equations (these are used to find the nodal voltages): N = n − 1, where N = number of equations and n = number of nodes.
 Then we write the KCL equations for the nodes and solve them to find the respected nodal
voltages.
 Once we have these nodal voltages, we can use them to further analyze the circuit.


 Super Node: Two nodes with an independent voltage source between them form a supernode; KCL is written for the supernode as a whole, and the source supplies an additional KVL (constraint) equation.
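The steps above can be sketched on a small assumed circuit (a hypothetical one-node network, not the example figure that follows): a 10 V source feeding the node through 2 Ω, a 4 Ω resistor from the node to ground, and a 2 A source injecting current into the node.

```python
# Nodal analysis sketch on an assumed one-node circuit:
# a 10 V source feeds node V through 2 Ω, a 4 Ω resistor goes from
# V to ground, and a 2 A source injects current into the node.
# KCL at the node: (10 - V)/2 + 2 = V/4
Vs, R1, R2, Is = 10.0, 2.0, 4.0, 2.0

G = 1.0 / R1 + 1.0 / R2       # total conductance at the node
Iin = Vs / R1 + Is            # source currents driving the node
V = Iin / G                   # nodal voltage -> 28/3 ≈ 9.33 V
print(V)
```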
Example Find the Nodal Voltages in the below circuit?



INTRODUCTION:
When a complicated network contains several sources and multiple resistors and the response of a single element is desired, network theorems are used. Network theorems can also be termed network reduction techniques. Each theorem has its own importance in solving networks. Let us see some important theorems, with DC and AC excitation, with detailed procedures.
Thevenin’s Theorem and Norton’s theorem (Introduction) :

Thevenin’s Theorem and Norton’s theorem are two important theorems in solving
Network problems having many active and passive elements. Using these theorems the networks
can be reduced to simple equivalent circuits with one active source and one element. In circuit
analysis many a times the current through a branch is required to be found when it’s value is
changed with all other element values remaining same. In such cases finding out every time the
branch current using the conventional mesh and node analysis methods is quite awkward and
time consuming. But with the simple equivalent circuits (with one active source and one
element) obtained using these two theorems the calculations become very simple. Thevenin’s
and Norton’s theorems are dual theorems.

Thevenin’s Theorem Statement :

Any linear, bilateral two-terminal network consisting of sources and resistors (impedances) can be
replaced by an equivalent circuit consisting of a voltage source in series with a resistance
(impedance). The equivalent voltage source VTh is the open-circuit voltage looking into the
terminals (with the concerned branch element removed), and the equivalent resistance RTh is found
while all sources are replaced by their ideal internal resistances, i.e. voltage sources
short-circuited and current sources open-circuited.

(a) (b)

Figure (a) shows a simple block representation of a network with several active / passive
elements with the load resistance RL connected across the terminals ‘a & b’ and figure (b) shows
the Thevenin equivalent circuit with VTh connected across RTh & RL .
Main steps to find out VTh and RTh :
1. The terminals of the branch/element through which the current is to be found out are
marked as say a & b after removing the concerned branch/element.
2. Open circuit voltage VOC across these two terminals is found out using the conventional
network mesh/node analysis methods and this would be VTh .

3. Thevenin resistance RTh is found out by the method depending upon whether the
network contains dependent sources or not.
a. With dependent sources: RTh = Voc / Isc

b. Without dependent sources : RTh = Equivalent resistance looking into the


concerned terminals with all voltage & current sources replaced by their internal
impedances (i.e. ideal voltage sources short circuited and ideal current sources
open circuited)

4. Replace the network with VTh in series with RTh and the concerned branch resistance (or)
load resistance across the load terminals(A&B) as shown in below fig.

Example: Find VTH, RTH and the load current and load voltage flowing through RL resistor
as shown in fig. by using Thevenin’s Theorem?

Fig.(a)

Solution:

The resistance RL is removed and the terminals of the resistance RL are marked as A & B as
shown in the fig. (1)

Fig.(1)
Calculate / measure the open-circuit voltage. This is the Thevenin voltage (VTH). We have
already removed the load resistor from fig.(a), so the circuit became an open circuit as shown in
fig (1). Now we have to calculate the Thevenin voltage. A current of 3mA flows through both the
12kΩ and 4kΩ resistors, since they form a series circuit; current does not flow in the 8kΩ resistor
as it is open. So 12V (3mA x 4kΩ) appears across the 4kΩ resistor. Since no current flows through
the 8kΩ resistor, there is no voltage drop across it, and the 12V across the 4kΩ resistor appears
directly across the AB terminals. So, VTH = 12V

Fig(2)
All voltage & current sources replaced by their internal impedances (i.e. ideal voltage sources
short circuited and ideal current sources open circuited) as shown in fig.(3)

Fig(3)
Calculate / measure the open-circuit resistance. This is the Thevenin resistance (RTH). Reducing
the 48V DC source to zero is equivalent to replacing it with a short circuit, as shown in figure (3).
We can see that the 8kΩ resistor is in series with the parallel combination of the 4kΩ and 12kΩ
resistors, i.e.:
8kΩ + (4k Ω || 12kΩ) ….. (|| = in parallel with)
RTH = 8kΩ + [(4kΩ x 12kΩ) / (4kΩ + 12kΩ)]
RTH = 8kΩ + 3kΩ
RTH = 11kΩ

Fig(4)
Connect the RTH in series with Voltage Source VTH and re-connect the load resistor across the
load terminals(A&B) as shown in fig (5) i.e. Thevenin circuit with load resistor. This is the
Thevenin’s equivalent circuit
RTH

VTH

Fig(5)
Now apply Ohm’s law and calculate the total load current from fig 5.
IL = VTH/ (RTH + RL)= 12V / (11kΩ + 5kΩ) = 12/16kΩ
IL= 0.75mA
And VL = ILx RL= 0.75mA x 5kΩ
VL= 3.75V
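The arithmetic of this example can be checked with a short Python sketch (component values as stated above, including the 48 V source mentioned with figure (3)):

```python
# Reproducing the Thevenin example numerically (48 V source,
# 12 kΩ and 4 kΩ divider, 8 kΩ in series to the terminals, RL = 5 kΩ).
Vs, R12k, R4k, R8k, RL = 48.0, 12e3, 4e3, 8e3, 5e3

VTH = Vs * R4k / (R12k + R4k)            # open-circuit voltage -> 12 V
RTH = R8k + R12k * R4k / (R12k + R4k)    # looking-in resistance -> 11 kΩ
IL = VTH / (RTH + RL)                    # load current -> 0.75 mA
VL = IL * RL                             # load voltage -> 3.75 V
print(VTH, RTH, IL * 1e3, VL)
```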
Norton’s Theorem Statement :
Any linear, bilateral two terminal network consisting of sources and
resistors(Impedance),can be replaced by an equivalent circuit consisting of a current source in
parallel with a resistance (Impedance),the current source being the short circuited current across
the load terminals and the resistance being the internal resistance of the source network looking
through the open circuited load terminals.

(a) (b)
Figure (a) shows a simple block representation of a network with several active / passive
elements with the load resistance RL connected across the terminals ‘a & b’ and figure (b) shows
the Norton equivalent circuit with IN connected across RN & RL .

Main steps to find out IN and RN:


1. The terminals of the branch/element through which the current is to be found out are
marked as say a & b after removing the concerned branch/element.
2. Open circuit voltage VOC across these two terminals and ISC through these two terminals
are found out using the conventional network mesh/node analysis methods and they are
same as what we obtained in Thevenin’s equivalent circuit.

3. Next Norton resistance RN is found out depending upon whether the network contains
dependent sources or not.

a) With dependent sources: RN = Voc / Isc

b) Without dependent sources : RN = Equivalent resistance looking into the


concerned terminals with all voltage & current sources replaced by their internal
impedances (i.e. ideal voltage sources short circuited and ideal current sources
open circuited)

4. Replace the network with IN in parallel with RN and the concerned branch resistance
across the load terminals(A&B) as shown in below fig

Example: Find the current through the resistance RL (1.5 Ω) of the circuit shown in the
figure (a) below using Norton’s equivalent circuit.?

Fig(a)

Solution: To find the Norton equivalent circuit we have to find IN = Isc and RN = Voc / Isc.
Short the 1.5Ω load resistor as shown in (Fig 2), and Calculate / measure the Short Circuit
Current. This is the Norton Current (IN).

Fig(2)
We have shorted the AB terminals to determine the Norton current, IN. The 6Ω and 3Ω resistors are
then in parallel, and this parallel combination is in series with the 2Ω resistor. So the total
resistance of the circuit seen by the source is:
2Ω + (6Ω || 3Ω) ….. (|| = in parallel with)
RT = 2Ω + [(3Ω x 6Ω) / (3Ω + 6Ω)]
RT = 2Ω + 2Ω
RT = 4Ω
IT = V / R T
IT = 12V / 4Ω = 3A
Now we have to find ISC = IN… Apply CDR… (Current Divider Rule)…
ISC = IN = 3A x [(6Ω / (3Ω + 6Ω)] = 2A.
ISC= IN = 2A.

Fig(3)

All voltage & current sources replaced by their internal impedances (i.e. ideal voltage sources
short circuited and ideal current sources open circuited) and Open Load Resistor. as shown in
fig.(4)

Fig(4)

Calculate / measure the open-circuit resistance. This is the Norton resistance (RN). Reducing the
12V DC source to zero is equivalent to replacing it with a short circuit as shown in fig(4). We can
see that the 3Ω resistor is in series with the parallel combination of the 6Ω and 2Ω resistors,
i.e.:
3Ω + (6Ω || 2Ω) ….. (|| = in parallel with)
RN = 3Ω + [(6Ω x 2Ω) / (6Ω + 2Ω)]
RN = 3Ω + 1.5Ω
RN = 4.5Ω

Fig(5)
Connect the RN in Parallel with Current Source IN and re-connect the load resistor. This is
shown in fig (6) i.e. Norton Equivalent circuit with load resistor.

Fig(6)

Now apply the Ohm’s Law and calculate the load current through Load resistance across the
terminals A&B. Load Current through Load Resistor is
IL = IN x [RN / (RN+ RL)]
IL = 2A x [4.5Ω / (4.5Ω + 1.5Ω)]
IL = 1.5A
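A short Python check of the Norton example (component values as stated above):

```python
# Reproducing the Norton example numerically (12 V source, 2 Ω series,
# 6 Ω shunt, 3 Ω series to the terminals, RL = 1.5 Ω).
Vs, R2, R6, R3, RL = 12.0, 2.0, 6.0, 3.0, 1.5

RT = R2 + R6 * R3 / (R6 + R3)   # resistance seen by source, AB shorted -> 4 Ω
IT = Vs / RT                    # total source current -> 3 A
IN = IT * R6 / (R6 + R3)        # current divider into the short -> 2 A
RN = R3 + R6 * R2 / (R6 + R2)   # looking-in resistance -> 4.5 Ω
IL = IN * RN / (RN + RL)        # current division into RL -> 1.5 A
print(IN, RN, IL)
```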
Maximum Power Transfer Theorem:
In many practical situations, a circuit is designed to provide power to a load.
While for electric utilities minimizing power losses in transmission and distribution is critical
for efficiency and economic reasons, there are other applications, in areas such as communications,
where it is desirable to maximize the power delivered to a load. In electrical applications with
loads such as loudspeakers, antennas and motors, it is required to find the condition under which
maximum power is transferred from the circuit to the load.

Maximum Power Transfer Theorem Statement:


Any linear, bilateral two terminal network consisting of a resistance load, being
connected to a dc network, receives maximum power when the load resistance is equal to the
internal resistance (Thevenin’s equivalent resistance) of the source network as seen from the load
terminals.

According to Maximum Power Transfer Theorem, for maximum power transfer from the
network to the load resistance , RL must be equal to the source resistance i.e. Network’s
Thevenin equivalent resistance RTh . i.e. RL = RTh
The load current I in the circuit shown above is given by

I = VTH / (RTH + RL)

The power delivered by the circuit to the load is

P = I²RL = VTH² RL / (RTH + RL)²

The condition for maximum power transfer can be obtained by differentiating the above
expression for power delivered with respect to the load resistance (Since we want to find out the
value of RL for maximum power transfer) and equating it to zero as :

∂P/∂RL = VTH² / (RTH + RL)² − 2VTH² RL / (RTH + RL)³ = 0

Simplifying the above equation, we get:

(𝑅𝑇𝐻 + 𝑅𝐿 ) − 2𝑅𝐿 = 0 ⟹ 𝑅𝐿 = 𝑅𝑇𝐻


Under the condition of maximum power transfer, the power delivered to the load is given by :
PMAX = VTH² RL / (RL + RL)² = VTH² / (4RL)

Under the condition of maximum power transfer, the efficiency 𝜼 of the network is then given
by:
PLOSS = VTH² RTH / (RL + RL)² = VTH² / (4RL)

η = output / input = (VTH² / 4RL) / (VTH² / 4RL + VTH² / 4RL) = 0.50

For maximum power transfer the load resistance should be equal to the Thevenin equivalent
resistance ( or Norton equivalent resistance) of the network to which it is connected . Under the
condition of maximum power transfer the efficiency of the system is 50 %.

Example: Find the value of RL for maximum power transfer in the circuit of Fig. Find the
maximum power.?
Solution: We need to find the Thevenin resistance RTh and the Thevenin voltage VTh across the
terminals a-b. To get RTh, we use the circuit in Fig. (a):

RTh = 2 + 3 + (6 // 12) = 5 + (6 × 12)/(6 + 12) = 5 + 4 = 9 Ω

To get VTh, we consider the circuit in Fig. (b). Applying mesh analysis,

−12 + 18i1− 12i2 = 0,

i2 = −2 A,
Solving for i1, we get i1= −2/3.
Applying KVL around the outer loop to get VTh across terminals a-b, we obtain,

−12 + 6i1+ 3i2+ 2(0) + VTh= 0

VTh= 22 V
For maximum power transfer, RL= RTh= 9Ω and the maximum power is,
PMAX = VTH² / (4RL) = (22 × 22) / (4 × 9) = 13.44 W
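Sweeping RL numerically confirms the result (a sketch using the VTh and RTh found above):

```python
import numpy as np

# Sweeping RL for the example above (VTh = 22 V, RTh = 9 Ω) to confirm
# that the delivered power peaks at RL = RTh with Pmax = VTh²/(4·RTh).
VTh, RTh = 22.0, 9.0
RL = np.linspace(0.5, 30.0, 10000)
P = VTh**2 * RL / (RTh + RL)**2      # power delivered to the load

best = RL[np.argmax(P)]
print(best, P.max())                  # ≈ 9 Ω, ≈ 13.44 W
```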

Superposition Theorem:

The principle of superposition helps us to analyze a linear circuit with more than one current or
voltage source. Sometimes it is easier to find the voltage across, or current in, a branch of the
circuit by considering the effect of one source at a time, replacing the other sources with their
ideal internal resistances.

Superposition Theorem Statement:

In any linear, bilateral network consisting of more than one source, the total current or voltage
in any part of the network is equal to the algebraic sum of the currents or voltages in the
required branch with each source acting individually while the other sources are replaced by their
ideal internal resistances (i.e. voltage sources by a short circuit and current sources by an open
circuit).
Steps to Apply Super position Principle:
1. Replace all independent sources with their internal resistances except one source. Find the
output (voltage or current) due to that active source using nodal or mesh analysis.
2. Repeat step 1 for each of the other independent sources.
3. Find the total contribution by adding algebraically all the contributions due to the
independent sources.

Example: By Using the superposition theorem find I in the circuit shown in figure?

Fig.(a)
Solution: Applying the superposition theorem, the current I2 in the resistance of 3 Ω due to the
voltage source of 20V alone, with current source of 5A open circuited [ as shown in the figure.1
below ] is given by :

Fig1
I2 = 20/(5+3) = 2.5A

Similarly the current I5 in the resistance of 3 Ω due to the current source of 5A alone with
voltage source of 20V short circuited [ as shown in the figure.2 below ] is given by :

Fig.2
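The value of I5 is not shown in the reproduced figure. Assuming a topology consistent with I2 = 20/(5 + 3) above — the 20 V source in series with the 5 Ω resistor, both sources feeding the 3 Ω branch — the two contributions would combine as follows; this topology is an assumption, not the original figure:

```python
# Superposition sketch with an ASSUMED topology: the 20 V source in
# series with 5 Ω and the 5 A source both feed the 3 Ω branch.
V, R5, R3, Isrc = 20.0, 5.0, 3.0, 5.0

I2 = V / (R5 + R3)            # voltage source alone -> 2.5 A (matches text)
I5 = Isrc * R5 / (R5 + R3)    # current source alone, current divider
I = I2 + I5                   # superposition: total current in the 3 Ω
print(I2, I5, I)
```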
Practice Problem

1. Find the Current through the 3Ω resistance connected between C and D in the below figure.

Answer : 1 A from C to D.
STEADY STATE ANALYSIS OF SINGLE PHASE AC CIRCUITS

Alternating Voltage and Current


A voltage which changes its polarity at regular intervals of time is called an alternating voltage.
When an alternating voltage is applied in a circuit, the current flows first in one direction and
then in the opposite direction; the direction of current at any instant depends upon the
polarity of the voltage. The below figure shows an alternating voltage source connected to a
resistor R.

The upper terminal of the alternating voltage source is positive and the lower terminal negative, so that
current flows in the circuit as shown. After some time (a fraction of a second), the polarities of the
voltage source are reversed so that current now flows in the opposite direction. This is called alternating
current because the current flows in alternate directions in the circuit.

Sinusoidal Alternating Voltage and Current


Commercial alternators produce sinusoidal alternating voltage i.e. alternating voltage is a
sine wave. A sinusoidal alternating voltage can be produced by rotating a coil with a constant
angular velocity (say ω rad/sec) in a uniform magnetic field. The sinusoidal alternating voltage
can be expressed by the equation:
v = Vm sin ω t

where v = Instantaneous value of alternating voltage


Vm = Max. value of alternating voltage
ω = Angular velocity of the coil
Sinusoidal voltages always produce sinusoidal currents, unless the circuit is non-linear. Therefore, a
sinusoidal current can be expressed in the same way as voltage, i.e. i = Im sin ωt.
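The expression v = Vm sin ωt can be sampled numerically; Vm = 10 V and f = 50 Hz below are assumed illustrative values:

```python
import numpy as np

# Sampling v = Vm·sin(ωt) for one cycle with assumed Vm = 10 V, f = 50 Hz
# (ω = 2πf, period T = 1/f = 20 ms).
Vm, f = 10.0, 50.0
w = 2 * np.pi * f
T = 1.0 / f

t = np.linspace(0.0, T, 2001)         # one full cycle
v = Vm * np.sin(w * t)

print(v.max(), v[0], v[-1])           # peak ≈ +Vm; starts and ends at 0
```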
Why Sine Waveform?
Although it is possible to produce alternating voltages and currents with an endless variety of
waveforms (e.g., square waves, triangular waves, rectangular waves, etc.), engineers choose to
adopt the sine waveform. The following are the technical and economical advantages of producing
sinusoidal alternating voltages and currents:
(i) The sine waveform produces the least disturbance in the electrical circuit and is the smoothest
and most efficient waveform. For example, when the current in a capacitor, an inductor or a
transformer is sinusoidal, the voltage across the element is also sinusoidal. This is not true of
any other waveform.
(ii) The use of sinusoidal voltages applied to appropriately designed coils results in a revolving magnetic field which has
the capacity to do work.

Generation of Alternating Voltages and Currents


An alternating voltage may be generated :
(i) by rotating a coil at constant angular velocity in a uniform magnetic field.
or
(ii) by rotating a magnetic field at a constant angular velocity within a stationary coil.
Important A.C. Terminology
An alternating voltage or current changes continuously in magnitude and alternates in
direction at regular intervals of time. It rises from zero to maximum positive value, falls to zero,
increases to a maximum in the reverse direction and falls back to zero again; the voltage or
current then repeats this procedure indefinitely. The important a.c. terminology is
defined below:
(ii) Waveform. The shape of the curve obtained by plotting the instantaneous values of voltage or
current as ordinate against time as abscissa is called its waveform or waveshape. Fig. shows the waveform
of an alternating voltage varying sinusoidally.
(iii) Instantaneous value. The value of an alternating quantity at any instant is called instantaneous
value. The instantaneous values of alternating voltage and current are represented by v and i respectively.
As an example, the instantaneous values of voltage (See Fig. 11.6) at 0º, 90º and 270º are 0, + Vm, -Vm
respectively.
(iv) Cycle. One complete set of positive and negative values of an alternating quantity is known as a
cycle. Fig. 11.6 shows one cycle of an alternating voltage.

Fig. 11.6

A cycle can also be defined in terms of angular measure. One cycle corresponds to 360º
electrical or 2π radians. The voltage or current generated in a conductor will span 360º
electrical (or complete one cycle) when the conductor moves past successive north and south
poles.
(v) Alternation. One-half cycle of an alternating quantity is called an alternation. An alternation spans
180º electrical. Thus in Fig. 11.6, the positive or negative half of alternating voltage is the alternation.
(vi) Time period. The time taken in seconds to complete one cycle of an alternating quantity is called
its time period. It is generally represented by T.
(vii) Frequency. The number of cycles that occur in one second is called the frequency (f) of the
alternating quantity. It is measured in cycles/sec (C/s) or Hertz (Hz). One Hertz is equal to 1C/s.
(viii) Amplitude. The maximum value (positive or negative) attained by an alternating quantity is
called its amplitude or peak value. The amplitude of an alternating voltage or current is designated by Vm
(or Em) or Im.
Table of Contents
Chapter 1
1.1 Basic definitions and representation of networks
1.2 Analysis and synthesis
1.3 Network components
1.4 Types (active, passive; linear, non-linear; lumped, distributed)
1.5 Mathematical equations (time-domain and transformed)
1.6 Network Functions
1.7 Important definitions and mathematical representations
1.7.1 Poles and zeros of rational function (Laplace domain)
1.7.2 Partial fraction expansion and residues
1.7.3 Continued fraction expansion
Chapter 2
2 Realizability Theory and Positive Real Functions
2.1 Realizability criteria (Passive Networks)
2.2 Positive Real Conditions
2.3 Hurwitz and Strictly-Hurwitz Polynomials
2.4 Tests for Hurwitz nature of polynomials (P(s))
2.5 Sturm’s Theorem and Sturm’s Test
2.6 Testing driving-point functions
2.6.1 General realizability criteria
Chapter 3
3 Elements of Realizability Theory
3.1 Introduction
3.2 Hurwitz Polynomials
3.3 Positive Real Functions
3.4 Elementary Synthesis Procedure
Chapter 4
Two-Port Networks
Chapter 5
5 Active Networks
5.1 Active network components
5.2 Operational Amplifier Circuits
5.3 Realization of Active Networks

Chapter 1
1 Introduction
1.1 Basic definitions and representation of networks

1.2. Analysis and synthesis

1.3. Network components

Linear and non-linear elements:
A system is linear if the superposition theorem holds true for its input-output relationship.

Similarly, linear elements are those that have a linear response (current or voltage) to the input
(voltage or current); that is, elements that have a linear relationship between the current through
them and the voltage across them.

1.4. Types (active, passive; linear, non-linear; lumped, distributed)


Depending on the type of components that make up the network, a network can be classified as
active or passive, linear or non-linear, lumped or distributed, continuous or discrete; and
depending on the number of terminals, a network can also be classified as a 1-port, 2-port, …
network.

One-port, two-port, multi-port networks
A pair of terminals such that the current entering one terminal is the same as the current leaving
the other terminal is called a port. Depending on the number of ports, networks can be classified
as 1-port, 2-port, …, n-port (multi-port) networks.

1.5. Mathematical equations (time-domain and transformed)
Network analysis and synthesis are usually carried out in the Laplace domain, so it is important
to know the mathematical equations of network elements in the Laplace domain. Consider the
following general network element.
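In the Laplace domain the element relations reduce to impedances: Z_R = R for a resistor, Z_L = sL for an inductor, and Z_C = 1/(sC) for a capacitor, with series impedances adding and parallel admittances adding. A minimal Python sketch (the element values and the sample network are illustrative assumptions, not taken from these notes):

```python
# Laplace-domain element impedances: Z_R = R, Z_L = sL, Z_C = 1/(sC).
def Z_R(R):
    return lambda s: R

def Z_L(L):
    return lambda s: s * L

def Z_C(C):
    return lambda s: 1 / (s * C)

def series(*zs):
    # impedances in series add
    return lambda s: sum(z(s) for z in zs)

def parallel(*zs):
    # admittances (reciprocal impedances) in parallel add
    return lambda s: 1 / sum(1 / z(s) for z in zs)

# Illustrative driving-point impedance: a 1-ohm resistor in series with a
# 1-F capacitor, the combination in parallel with a 1-H inductor.
Z = parallel(series(Z_R(1.0), Z_C(1.0)), Z_L(1.0))
print(Z(1.0))  # evaluate Z(s) at the real point s = 1
```

Evaluating such a composed function at s = jω gives the sinusoidal steady-state impedance directly.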

1.6 Network Functions

Example 1
Find the driving-point impedance and admittance functions for the following network.

Example 2
Determine Z11(s) for the following network.

Conclusion
Driving-point functions of linear, lumped, passive elements (resistors, inductors and
capacitors) are rational functions of s (the Laplace variable) with positive, real coefficients.

1.7. Important definitions and mathematical representations


1.7.1. Poles and zeros of rational function (Laplace domain)

Example 3

Example 4

1.7.2. Partial fraction expansion and residues

The constants Kᵢ are called the residues of the poles.
The term K₀s exists only if H(s) has a pole at infinity.
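For simple poles, each residue can be computed numerically as Kᵢ = N(pᵢ)/D′(pᵢ). A small sketch using NumPy (the example H(s) = (s + 2)/((s + 1)(s + 3)) is an illustrative choice, not taken from the notes):

```python
import numpy as np

def residues(num, den):
    """Residues of H(s) = num/den at its poles, assuming all poles are
    simple: K_i = N(p_i) / D'(p_i).  Coefficients highest power first."""
    poles = np.roots(den)
    dden = np.polyder(den)               # derivative of the denominator
    return {p: np.polyval(num, p) / np.polyval(dden, p) for p in poles}

# H(s) = (s + 2)/((s + 1)(s + 3)) has residue 1/2 at each pole:
r = residues([1, 2], [1, 4, 3])
```

Here `np.polyder` differentiates the denominator and `np.polyval` evaluates both polynomials at each pole; multiple poles would need the more general expansion sketched in the notes.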

Example 5

Problem 1. Find the partial fraction expansions of the following functions.

1.7.3. Continued Fraction expansion

Example 6: Express the following impedance function using continued fraction expansion.

Problem 2

Chapter 2
2 Realizability Theory and Positive Real Functions
2.1 Realizability criteria (Passive Networks)
Driving-point functions of networks made up of linear, lumped, passive elements (resistors,
inductors, capacitors and transformers) are rational functions of s. But not all rational functions
of s describe a realizable network of RLCM elements.
According to Otto Brune, a driving-point function (Z(s) or Y(s)) is realizable using lumped
passive (R, L, C, M) elements if it is a positive real (PR) rational function of s, i.e., if it
satisfies the following conditions.

Or

1) H(S) (Z(s) or Y(s)) is a real rational function of s where all coefficients (of the numerator
and denominator polynomials) are real and positive.

2) If s (which is generally complex, s = σ + jω) has a non-negative real part, H(s) must not have
a negative real part; i.e., if Re[s] ≥ 0, then Re[H(s)] ≥ 0.

2.2 Positive Real Conditions


Equivalent positive-real conditions
A. H(s) should be quotient of polynomials with positive real coefficients
B. For all real ω, Re[H(jω)] ≥ 0
C. All poles and zeros of H(s) should lie in the closed left-half s-plane. All jω-axis poles and
zeros must be simple (multiplicity of one) with positive residues.

2.3 Hurwitz and Strictly-Hurwitz Polynomials


Definition:
A polynomial with its zeros restricted to the closed left-half s-plane is called a Hurwitz
polynomial.
A polynomial with its zeros restricted to the inside of the left-half s-plane (excluding the jω-axis)
is called a strictly Hurwitz polynomial.
A polynomial that is not Hurwitz is called a non-Hurwitz polynomial.
Therefore, the positive real condition restricts the polynomials of H(s) to be Hurwitz or strictly
Hurwitz; if they are Hurwitz only, their jω-axis zeros must be simple.
The following are zero locations of examples of strictly Hurwitz, Hurwitz and non-Hurwitz
polynomials.
a) P(s) = (s + 3)(s² + 2s + 2)

b) P(s) = s(s + 1)(s² + 4)

c) P(s) = (s + 2)(s² − 4s + 5)

Properties
A strictly Hurwitz polynomial has the form:

P(s) = aₙsⁿ + aₙ₋₁sⁿ⁻¹ + … + a₁s + a₀

where aᵢ > 0 (no missing terms).
A Hurwitz polynomial may have missing terms (every second term) due to zeros of the form
(s² + ω²) that lie on the jω-axis.

2.4 Tests for Hurwitz nature of polynomials (P(s))


Method 1 – FACTORIZATION
Factorize P(s), find all zeros (roots of the polynomial) and inspect their location on the s-plane.
Method 2– Continued Fraction Expansion (CFE)
➢ Separate P(s) in to even (Pe(s)) and odd (Po(s)) parts.
➢ Form φ(s) = Pe(s)/Po(s) or Po(s)/Pe(s), selecting the higher-degree part as the numerator.
➢ Expand φ(s) using CFE (Continued Fraction Expansion)

The expansion stops when the remainder of a subsequent division is zero. If there
is no premature termination, the denominator of the last division is a constant;
if the last denominator is not a constant, we say there is premature termination.
The polynomial P(s) is strictly Hurwitz if all the coefficients αᵢ are real and positive and
there is no premature termination.
If the coefficients are real and positive but there is premature termination, then P(s) is
Hurwitz.
Otherwise, the polynomial is non-Hurwitz.
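The CFE procedure above can be sketched in Python with exact rational arithmetic. This is a minimal sketch: it returns the quotient coefficients αᵢ and a verdict, reporting "Hurwitz (premature termination)" whenever all quotients are positive but the expansion ends early, as described in the text:

```python
from fractions import Fraction

def _strip(p):
    """Drop leading zero coefficients."""
    i = 0
    while i < len(p) and p[i] == 0:
        i += 1
    return p[i:]

def cfe_hurwitz_test(coeffs):
    """Continued-fraction (CFE) Hurwitz test.

    coeffs: coefficients of P(s), highest power first,
    e.g. [1, 1, 5, 3, 2] for s^4 + s^3 + 5s^2 + 3s + 2.
    """
    c = [Fraction(a) for a in coeffs]
    deg = len(c) - 1
    even = _strip([a if (deg - i) % 2 == 0 else Fraction(0)
                   for i, a in enumerate(c)])
    odd = _strip([a if (deg - i) % 2 == 1 else Fraction(0)
                  for i, a in enumerate(c)])
    # numerator of phi(s) is the higher-degree part
    num, den = (even, odd) if len(even) > len(odd) else (odd, even)
    quotients = []
    while den:
        alpha = num[0] / den[0]            # quotient term alpha * s
        quotients.append(alpha)
        rem = list(num)
        for i, a in enumerate(den):        # subtract (alpha * s) * den(s)
            rem[i] -= alpha * a
        num, den = den, _strip(rem)
    if any(q <= 0 for q in quotients):
        return quotients, "non-Hurwitz"
    if len(quotients) == deg:
        return quotients, "strictly Hurwitz"
    return quotients, "Hurwitz (premature termination)"
```

For P(s) = s⁴ + s³ + 5s² + 3s + 2 (Problem 1 below) the quotients come out as 1, 1/2, 1, 1: four positive terms for a degree-4 polynomial, so it is strictly Hurwitz.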
Example 1
Test for Hurwitz nature of the following polynomial using CFE.
P(s) = (s² + 2s + 1)(s² + s + 1)(s² + 4)
= s⁶ + 3s⁵ + 8s⁴ + 15s³ + 17s² + 12s + 4

Solution:

In the expansion above, the coefficient of the first quotient term in s is 1/3.


➔ All quotient coefficients are real and positive.
➔ There is premature termination; if there were none, there would be 6
(the degree of P(s)) terms in the expansion.
➔ Therefore P(s) is simply Hurwitz. This can also be verified from the zero plot of P(s)
shown above.
Problem 1

Test for Hurwitz nature of the following polynomial using CFE
P(s) = s⁴ + s³ + 5s² + 3s + 2
Method 3– Routh-Hurwitz Array

For P(s) = aₙsⁿ + aₙ₋₁sⁿ⁻¹ + … + a₁s + a₀


Construct a triangular array as follows:
Where the ai’s are the polynomial coefficients, and the coefficients bi, ci,… are computed as
follows:
Special cases:
Case 1: all coefficients in a row are zero (vanishing row)
This occurs when the polynomial has roots on the jω-axis. The polynomial formed from the
coefficients of the row just prior to the vanishing row is a factor of P(s), and it carries the
jω-axis roots.
If the vanishing row is at s^(k−1), create a polynomial Pₖ(s) using the coefficients of the row
at s^k (the row prior to the vanishing one):
Pₖ(s) = u₁s^k + u₂s^(k−2) + u₃s^(k−4) + u₄s^(k−6) + …
Form the auxiliary polynomial dPₖ(s)/ds and replace the vanishing row with the coefficients of
this polynomial (the derivative of Pₖ(s)).
Continue the process until you reach the last row at s⁰.
Case 2: the first element of a row is zero, but the row has at least one non-zero coefficient
If a row begins with a zero and contains some non-zero coefficients, replace the leading
zero with ε (a small positive number) and find the remaining coefficients (in terms of ε).

Cases
❖ If there is no sign change in coefficients of the first column, and if there is no vanishing
row, the polynomial is strictly Hurwitz. (If there are coefficients that contain ε, evaluate sign
of the coefficients by taking the limit ε→ 0+).
❖ If there is at least one sign change, the polynomial is non-Hurwitz.
❖ If there exists a vanishing row (roots on the jω-axis), the polynomial is not strictly Hurwitz;
but if there is still no sign change, the polynomial is Hurwitz. i.e., the polynomial is Hurwitz
if there is no sign change (vanishing rows are allowed).
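The array construction and the vanishing-row rule can be sketched as follows (a minimal floating-point sketch; the leading-zero/ε case is left out for brevity):

```python
def routh_first_column(coeffs):
    """Build the Routh-Hurwitz array for P(s) (coeffs highest power first)
    and return (first_column, had_vanishing_row).  A vanishing row is
    replaced by the derivative of the auxiliary polynomial formed from
    the row above, as described in the text."""
    n = len(coeffs) - 1
    rows = [list(map(float, coeffs[0::2])),   # row for s^n
            list(map(float, coeffs[1::2]))]   # row for s^(n-1)
    vanished = False
    for k in range(2, n + 1):
        prev, cur = rows[-2], rows[-1]
        if all(abs(x) < 1e-12 for x in cur):
            vanished = True
            m = n - (k - 2)                   # power of the row above
            # auxiliary polynomial u1*s^m + u2*s^(m-2) + ...; take d/ds
            cur = [(m - 2 * i) * u for i, u in enumerate(prev) if m - 2 * i > 0]
            rows[-1] = cur
        new = []
        for i in range(len(prev) - 1):
            a = prev[i + 1]
            b = cur[i + 1] if i + 1 < len(cur) else 0.0
            new.append((cur[0] * a - prev[0] * b) / cur[0])
        rows.append(new if new else [0.0])
    return [r[0] for r in rows], vanished
```

With a positive leading coefficient: no sign change and no vanishing row means strictly Hurwitz; no sign change but a vanishing row means Hurwitz only; any sign change means non-Hurwitz.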
Example 2
Test the Hurwitz nature of the following polynomials using the Routh-Hurwitz array.
a) P(s) = s⁴ + s³ + 5s² + 3s + 2

Solution:

b) P(s) = s⁴ + 4s³ + 5s² + 8s + 6


Solution

2.5 Sturm’s Theorem and Sturm’s Test


Recall that there are three conditions for a network function to be positive real.
A. H(s) should be quotient of polynomials with positive real coefficients
B. For all real ω, Re[H(jω)] ≥ 0
C. All poles and zeros of H(s) should lie in the closed left-half s-plane. All jω-axis poles and
zeros must be simple (multiplicity of one) with positive residues.
The first condition can be checked by inspection, and the last condition restricts the denominator
of H(s) to be strictly Hurwitz or simply Hurwitz; if it is only Hurwitz, its jω-axis roots must be
simple. Sturm's test is useful for testing condition B. Consider a network function H(s) =
N(s)/D(s), where N(s) is the numerator and D(s) the denominator of H(s).
First, separate both polynomials in to even and odd parts.

Property D(-s) = De(s) – Do(s)


Now, multiply H(s) by D(-s)/D(-s)

= He(s) + Ho(s)
where He(s) = [Ne(s)De(s) − No(s)Do(s)] / [De(s)² − Do(s)²] and De(s)² − Do(s)² = D(s)D(−s).
For s = jω:
He(jω) is purely real;
Ho(jω) is purely imaginary;
De(s)² − Do(s)² = D(s)D(−s)|s=jω = D(jω)D(−jω) = |D(jω)|², which is always positive;
Ne(jω)De(jω) − No(jω)Do(jω) = P(ω²), an even polynomial function of ω.
Therefore, Re[H(jω)] = He(jω).
The second positive real condition (condition B) states that:
For all real ω, Re[H(jω)] ≥ 0
► He(jω) ≥ 0, where He(jω) = P(ω²)/|D(jω)|²
Now condition B can be restated as:
P(ω²) ≥ 0 for all real ω, i.e., for ω² ≥ 0
Let x = ω²

P(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x + a₀ ≥ 0 for x ≥ 0


This implies that the graph of P(x) should not cross the x – axis in the right half of the xy – plane
(positive values of x)
Cases:
❖ If all coefficients ai are positive, then P(x) ≥ 0 for x ≥ 0 (condition met)
❖ P(x) = a₀ at x = 0; hence a₀ ≥ 0 (necessary)
❖ As x → ∞, the sign of P(x) is that of aₙ; hence aₙ ≥ 0 (necessary)
❖ If a0 ≥ 0, an ≥ 0 but ai < 0 for some i, then P(x) ≥ 0 for x ≥ 0 provided that P(x) does not have
odd-ordered zeros on the positive x-axis (except at x = 0 and x → ∞). If the multiplicity of zeros
of P(x) is odd, then the graph of P(x) crosses the x-axis at those points (condition not met);
otherwise, if multiplicity of zeros of P(x) is even, the graph of P(x) touches the x-axis at those
points. Therefore, even-ordered (even multiplicity) zeros (x-intercepts) are allowed, but odd-
ordered zeros are not.
Plot the following polynomials and observe the nature of zeros.
a) P(x) = (x − 1)²
b) P(x) = (x − 1)³

c) P(x) = x(x² − 4x + 4) = x(x − 2)²
Test for condition B is reduced to two steps
1. Check that both a0 and an are non-negative.
2. Check that P(x) has no odd-ordered zeros on the positive x-axis. This can be done in three
ways:
Method 1 – Factorization
Method 2 – Plot P(x) over a sufficient range of x
Method 3 – Sturm’s test
Sturm’s Test
Step 1: Develop a sequence of polynomials Po(x), P1(x), P2(x)… called Sturm’s
functions as follows:
Define P0(x) = P(x)
P1(x) = P’(x)
To find P₂(x), divide P₀(x) by P₁(x) to get a two-term quotient and a remainder. The remainder
is the negative of P₂(x):
P₀(x)/P₁(x) = b₁x + c₁ + [−P₂(x)]/P₁(x)
Repeat this step for P₃, P₄, …
Euclid Algorithm:
P0(x) = P(x)
P1(x) = P’(x)
P₀(x) = q₁(x)P₁(x) + [−P₂(x)]
P₁(x) = q₂(x)P₂(x) + [−P₃(x)]
⋮
Pₖ₋₂(x) = qₖ₋₁(x)Pₖ₋₁(x) + [−Pₖ(x)]
The process stops when the remainder Pₖ(x) becomes a constant (when k = n) or zero (when
k ≤ n, premature termination).
Step 2:
Case 1: Pk(x) = constant when k = n
Sturm’s Theorem

The number of odd-ordered zeros of P(x) in the interval a ≤ x ≤ b equals |S_b − S_a|,
where S_a and S_b are the numbers of sign changes in the sequence (P₀, P₁, P₂, …) evaluated at
x = a and x = b, respectively.
Here we are interested in the presence of odd-ordered zeros on the positive x-axis, hence we
take a = 0 and b → ∞.
Case 2: Pₖ(x) = 0 for some k ≤ n (premature termination)
This shows that Pₖ₋₁(x) is a factor of P(x), so all zeros of Pₖ₋₁(x) are zeros of P(x), and the
multiplicity of these zeros in P(x) is one higher than their multiplicity in Pₖ₋₁(x). The test
continues with the polynomials P₀ to Pₖ₋₁. The zero count |S_b − S_a| in this case is the
number of odd-ordered zeros plus the multiple zeros (due to Pₖ₋₁(x)), each counted once.
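The Sturm sequence and the sign-change count at a = 0 and b → ∞ can be sketched with exact rational arithmetic (a minimal sketch; on premature termination the returned count is, per Case 2, the number of odd-ordered zeros plus the multiple zeros counted once):

```python
from fractions import Fraction

def _strip(p):
    while p and p[0] == 0:
        p = p[1:]
    return p

def _polydiv_rem(num, den):
    """Remainder of polynomial division (coefficients highest power first)."""
    num = list(num)
    while len(num) >= len(den):
        c = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= c * den[i]
        num.pop(0)
    return _strip(num)

def sturm_sequence(coeffs):
    p0 = [Fraction(a) for a in coeffs]
    n = len(p0) - 1
    p1 = [(n - i) * a for i, a in enumerate(p0[:-1])]   # P'(x)
    seq = [p0, p1]
    while len(seq[-1]) > 1:
        rem = _polydiv_rem(seq[-2], seq[-1])
        if not rem:
            break                       # premature termination (Case 2)
        seq.append([-a for a in rem])   # next Sturm function = -remainder
    return seq

def positive_axis_zero_count(coeffs):
    """|S_b - S_a| with a = 0 and b -> infinity."""
    seq = sturm_sequence(coeffs)
    def changes(vals):
        signs = [v for v in vals if v != 0]
        return sum((x > 0) != (y > 0) for x, y in zip(signs, signs[1:]))
    s_at_0 = changes([p[-1] for p in seq])    # signs of P_i(0): constant terms
    s_at_inf = changes([p[0] for p in seq])   # signs as x -> inf: leading coeffs
    return abs(s_at_inf - s_at_0)
```

For Example 3a below, p(x) = x² − 4x + 3, the count is 2, matching the two odd-ordered zeros at x = 1 and x = 3.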
Example 3
Test whether the following polynomials satisfy the condition P(x) ≥0 for all x ≥ 0.
a) p(x) = x2 – 4x + 3
Solution:
Method – 1
P(x) = (x − 1)¹(x − 3)¹
Two odd-ordered zeros on the positive x-axis (x = 1 and x = 3).
Therefore, P(x) does not satisfy the condition.
Method – 2

Method – 3 Sturm’s test n = 2


P₀(x) = x² − 4x + 3
P₁(x) = P₀′(x) = 2x − 4

P₂(x) = −(−1) = 1 ← constant, at k = 2 = n

|Sb – Sa| = 2 ← the number of odd-ordered zeros on the positive x – axis
Condition not satisfied.
b) p(x) = x⁴ − 8x³ + 23x² − 28x + 12
Solution:
Sturm's test, n = 4
P₀(x) = x⁴ − 8x³ + 23x² − 28x + 12
P₁(x) = P₀′(x) = 4x³ − 24x² + 46x − 28
Using the Euclid algorithm:
P₂(x) = ½x² − 2x + 2
P₃(x) = 2x − 4
P₄(x) = 0, k = 4 ≤ n = 4 ← premature termination
P₃(x) = 2(x − 2)¹ is a factor of P(x).
The multiplicity of the zero (x − 2) in P(x) is one higher than in P₃(x):
(x − 2)¹⁺¹ = (x − 2)² is a factor of P(x),
so P(x) has one double (even-ordered) zero.

The sign-change count gives |S_b − S_a| = 3: P(x) has 3 zeros on the positive x-axis, among
them the double zero (x − 2)² counted once.
Therefore, P(x) has 3 − 1 = 2 odd-ordered zeros.
Hence, P(x) is not ≥ 0 for all x ≥ 0 (the condition is not satisfied).

Problem 2

Check whether p(x) = x⁴ − 15x² + 10x + 24 satisfies the condition P(x) ≥ 0 for all x ≥ 0.

2.6 Testing Driving-point functions


Recall that according to Otto Brune², a driving-point function (Z(s) or Y(s)) is realizable using
lumped passive (R, L, C, M) elements if it is a positive real (PR) rational function of s.
² Otto Walter Heidrich Oscar Brune (1901– ), who found the necessary and sufficient
conditions (the positive real conditions) for realizable passive network functions in his doctoral
thesis at the Massachusetts Institute of Technology (MIT).
If Z(s) is positive real, Y(s) (= 1/Z(s)) is also positive real.
Let us consider Z(s) = N(s)/D(s) (the same conclusions hold for Y(s)).

The following conditions must be satisfied.


A. Z(s) should be quotient of polynomials with positive real coefficients
B. For all real ω, Re[Z(jω)] ≥ 0
C. All poles and zeros of Z(s) should lie in the closed left-half s-plane. All jω-axis poles and
zeros must be simple (multiplicity of one) with positive residues.
2.6.1 General realizability criteria
The following is a procedure for testing the positive-real nature of Z(s), with all common factors
in the numerator and denominator having been removed.

1. Inspection test for necessary conditions


a) All polynomial coefficients are real and positive.
b) The highest degrees of N(s) and D(s) differ at most by 1.
c) The lowest degrees of N(s) and D(s) differ at most by 1.
d) There should be no missing terms in N(s) or D(s) unless all even or all odd terms are missing.
e) jω-axis poles and zeros must be simple.
2. Test for necessary and sufficient conditions
a) Re[Z(jω)] ≥ 0 for all real ω
Or P(x) ≥ 0 for all x ≥ 0
b) N(s) + D(s) must be strictly Hurwitz
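Forming P(x) from N(s) and D(s) can be mechanized: writing N(s) = ne(s²) + s·no(s²) and D(s) = de(s²) + s·do(s²), the substitution s = jω gives P(x) = ne(−x)de(−x) + x·no(−x)do(−x) with x = ω². A sketch using NumPy (coefficient lists highest power first; the helper names are my own):

```python
import numpy as np

def even_odd_in_u(p):
    """Split p(s) into pe(u), po(u) with p(s) = pe(s^2) + s*po(s^2).
    Coefficient lists are highest power first."""
    n = len(p) - 1
    pe = [c for i, c in enumerate(p) if (n - i) % 2 == 0]
    po = [c for i, c in enumerate(p) if (n - i) % 2 == 1]
    return pe or [0], po or [0]

def P_of_x(N, D):
    """Coefficients of P(x) = Ne(jw)De(jw) - No(jw)Do(jw), with x = w^2."""
    ne, no = even_odd_in_u(N)
    de, do = even_odd_in_u(D)
    # substitute u -> -x: the coefficient of u^k picks up (-1)^k
    neg = lambda p: [c * (-1) ** ((len(p) - 1 - i) % 2) for i, c in enumerate(p)]
    a = list(np.convolve(neg(ne), neg(de)))          # ne(-x) * de(-x)
    b = list(np.convolve(neg(no), neg(do))) + [0]    # x * no(-x) * do(-x)
    w = max(len(a), len(b))
    a = [0] * (w - len(a)) + a
    b = [0] * (w - len(b)) + b
    return [x + y for x, y in zip(a, b)]
```

For N = 2s + 3 and D = s + 1 this gives P(x) = 2x + 3, in agreement with the hand computation in Example 4.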
Example 4
Test the positive real nature of the following driving-point impedance function:

Z(s) = (2s + 3)/(s + 1)

Solution:
1. Inspection test:
✓ Real and positive coefficients
✓ Highest degrees differ by 1 − 1 = 0 ≤ 1
✓ Lowest degrees differ by 0 − 0 = 0 ≤ 1
✓ No missing terms
✓ No imaginary-axis poles or zeros
Ok!
2. Necessary and sufficient conditions:
P(ω²) = NeDe − NoDo |s=jω = (3)(1) − (j2ω)(jω) = 2ω² + 3
P(x) = 2x + 3 ← all coefficients are positive
⇒ P(x) ≥ 0 for all x ≥ 0, hence Re[Z(jω)] ≥ 0 for all real ω. Ok!
N(s) + D(s) = 3s + 4 = 3(s + 4/3) ← zero at s = −4/3
⇒ N(s) + D(s) is strictly Hurwitz
⇒ All conditions met:
Z(s) is positive real

Problem 3
Test the following admittance function.

Example 5
Test the positive realness of the following impedance function:

Z(s) = (2s³ + 3s² + 2s + 3)/(s³ + 3s² + 4s + 1)
Solution:
1. Inspection test.
✓ Real and positive coefficients
✓ Highest degrees differ by 3-3 = 0 ≤ 1
✓ Lowest degrees differ by 0-0 = 0 ≤ 1
✓ No missing terms
➢ Test for imaginary-axis poles and zeros:
❖ This can be done using the Routh-Hurwitz array; there will be a vanishing row if a polynomial
has imaginary-axis (jω-axis) roots.

Pₖ(s) = 3s² + 3 = 3(s² + 1)¹ is a factor of N(s);
its jω-axis roots (s = ±j) are simple.
⇒ Z(s) has simple zeros on the jω-axis.
Ok!
2. Necessary and sufficient conditions:
P(ω²) = NeDe − NoDo |s=jω
= (3s² + 3)(3s² + 1) − (2s³ + 2s)(s³ + 4s) |s=jω
= 2ω⁶ − ω⁴ − 4ω² + 3
P(x) = 2x³ − x² − 4x + 3
➢ Sturm's test:
P₀(x) = 2x³ − x² − 4x + 3
P₁(x) = 6x² − 2x − 4
P₂(x) = (25/9)x − 25/9
P₃(x) = 0 ← premature termination
P₂(x) = (25/9)(x − 1)¹
⇒ (x − 1)¹⁺¹ = (x − 1)² is a factor of P(x)

⇒ P(x) has one double zero.
⇒ P(x) has one zero on the positive x-axis: the double zero (x − 1)².
⇒ Therefore, P(x) has 1 − 1 = 0 odd-ordered zeros:
no odd-ordered zeros on the positive x-axis.
⇒ Hence, P(x) ≥ 0 for x ≥ 0.
N(s) + D(s) = 3s³ + 6s² + 6s + 4, which can be shown to be strictly Hurwitz.

Z(s) is positive real


Problem 4
Show that the following impedance function is positive real

Chapter 3
3 Elements of Realizability theory
3.1 Introduction
In the frequency domain, the stability criterion requires that the system function possess poles
in the left-half plane or on the jω axis only. Moreover, the poles on the jω axis must be
simple. As a result of the requirement of simple poles on the jω axis, if H(s) is given as

H(s) = (aₙsⁿ + aₙ₋₁sⁿ⁻¹ + … + a₁s + a₀) / (bₘsᵐ + bₘ₋₁sᵐ⁻¹ + … + b₁s + b₀)   (3.1)

then the degree of the numerator n cannot exceed the degree of the denominator m by more than
unity, that is, n − m ≤ 1. If n exceeded m by more than unity, then at s = jω = ∞ there would be
a multiple pole. To summarize, in order for a network to be stable, the
following three conditions on its system function H(s) must be satisfied:

1. H(s) cannot have poles in the right-half plane.
2. H(s) cannot have multiple poles on the jω axis.
3. The degree of the numerator of H(s) cannot exceed the degree of the denominator by
more than unity.

Finally, it should be pointed out that a rational function H(s) with poles in the left-half plane
only has an inverse transform h(t), which is zero for t < 0. In this respect, stability implies
causality. Since system functions of passive linear networks with lumped elements are rational

functions with poles in the left-half plane or on the jω axis only, causality ceases to be a problem
when we deal with system functions of this type. We are only concerned with the problem of
causality when we have to design a filter for a given amplitude characteristic such as the ideal
low-pass filter. We know we could never hope to realize exactly a filter of this type because
the impulse response would not be causal. To this extent the Paley-Wiener criterion is helpful
in defining the limits of our capability.

3.2 Hurwitz Polynomials


We know that in order for a system function to be stable, its poles must be restricted to the left-

half plane or on the jω axis. Moreover, the poles on the jω axis must be simple. The
denominator polynomial of the system function belongs to a class of polynomials known as
Hurwitz polynomials. A polynomial P(s) is said to be Hurwitz if the following conditions are
satisfied:

1. P(s) is real when s is real.


2. The roots of P(s) have real parts which are zero or negative.
As a result of these conditions, if P(s) is a Hurwitz polynomial given by
P(s) = aₙsⁿ + aₙ₋₁sⁿ⁻¹ + … + a₁s + a₀   (3.2)
then all the coefficients aᵢ must be real; and if sᵢ = σᵢ + jωᵢ is a root of P(s), then σᵢ must be
zero or negative.
The polynomial
P(s) = (s+1)(s+1+j2)(s+1-j2) (3.3)
is Hurwitz because all of its roots have negative real parts. On the other hand,
G(s) = (s-1)(s+2)(s+3) (3.4)
is not Hurwitz because of the root s = 1, which has a positive real part. Hurwitz polynomials
have the following properties:
1. All the coefficients ai are nonnegative. This is readily seen by examining the three types of
roots that a Hurwitz polynomial might have. These are

s = -i i real and positive

s =  ji  i real

s = -i + i  i real and positive

The polynomial P(s) which contains these roots can be written as


P(s) = (s+i)(s2+i2)[(s+i)2 + i2]… (3.5)
Since P(s) is the product of terms with only positive coefficients, it follows that the coefficients
of P(s) must be positive. A corollary is that between the highest-order term in s and the lowest-
order term, none of the coefficients may be zero unless the polynomial is even or odd. In other
words, aₙ₋₁, aₙ₋₂, …, a₂, a₁ must not be zero if the polynomial is neither even nor odd. This is
readily seen because the absence of a term aᵢ implies cancellation brought about by a root
(s − σᵢ) with a positive real part.
2. Both the odd and even parts of a Hurwitz polynomial P(s) have roots on the jω axis only. If
we denote the odd part of P(s) as n(s) and the even part as m(s), so that
P(s) = n(s) + m(s)   (3.6)
then m(s) and n(s) both have roots on the jω axis only.
3. As a result of property 2, if P(s) is either even or odd, all its roots are on the jω axis.
4. The continued fraction expansion of the ratio of the odd to even parts or the even to odd
parts of a Hurwitz polynomial yields all positive quotient terms. Suppose we denote the ratio
as ψ(s) = m(s)/n(s) or ψ(s) = n(s)/m(s); then the continued fraction expansion of ψ(s) can be
written as

ψ(s) = q₁s + 1/(q₂s + 1/(q₃s + … + 1/(qₙs)))   (3.7)

where the quotients q₁, q₂, …, qₙ must be positive if the polynomial P(s) = n(s) + m(s) is
Hurwitz. To obtain the continued fraction expansion, we must perform a series of long
divisions. Suppose ψ(s) is

ψ(s) = m(s)/n(s)   (3.8)

where m(s) is of one higher degree than n(s). Then if we divide n(s) into m(s), we obtain a
single quotient and a remainder:

ψ(s) = q₁s + R₁(s)/n(s)   (3.9)

The degree of R₁(s) is one lower than the degree of n(s). Therefore, if we invert the
remainder term and divide, we have

n(s)/R₁(s) = q₂s + R₂(s)/R₁(s)   (3.10)

Inverting and dividing again, we obtain

R₁(s)/R₂(s) = q₃s + R₃(s)/R₂(s)   (3.11)

We see that the process of obtaining the continued fraction expansion of ψ(s) simply
involves division and inversion. At each step we obtain a quotient term qᵢs and a remainder
term Rᵢ₊₁(s)/Rᵢ(s). We then invert the remainder term and divide Rᵢ₊₁(s) into Rᵢ(s) to obtain
a new quotient. There is a theorem in the theory of continued fractions which states that the
continued fraction expansion of the even to odd or odd to even parts of a polynomial must
be finite in length. Another theorem states that, if the continued fraction expansion of the
odd to even or even to odd parts of a polynomial yields positive quotient terms, then the
polynomial must be Hurwitz to within a multiplicative factor W(s). That is, if we write
F(s) = W(s)F1(s) (3.12)
then F(s) is Hurwitz, if W(s) and F1(s) are Hurwitz. For example, let us test whether the
polynomial
F(s) = s⁴ + s³ + 5s² + 3s + 4   (3.13)
is Hurwitz. The even and odd parts of F(s) are
m(s) = s⁴ + 5s² + 4
n(s) = s³ + 3s   (3.14)
We now perform a continued fraction expansion of ψ(s) = m(s)/n(s) by dividing m(s) by
n(s), and then inverting and dividing again, as given by the operation

s³ + 3s ) s⁴ + 5s² + 4 ( s
          s⁴ + 3s²
          -----------
          2s² + 4 ) s³ + 3s ( s/2
                    s³ + 2s
                    ---------                              (3.15)
                    s ) 2s² + 4 ( 2s
                        2s²
                        -----
                        4 ) s ( s/4
                            s
                            ---
                            0

so that the continued fraction expansion of ψ(s) is

ψ(s) = m(s)/n(s) = s + 1/(s/2 + 1/(2s + 1/(s/4)))

Since all the quotient terms of the continued fraction expansion are positive, F(s) is
Hurwitz.
Example 3.1. Let us test whether the polynomial
G(s) = s³ + 2s² + 3s + 6   (3.16)
is Hurwitz. The continued fraction expansion of n(s)/m(s) is obtained from the division

2s² + 6 ) s³ + 3s ( s/2
          s³ + 3s
          --------
          0

We see that the division has been terminated abruptly by a common factor s³ + 3s. The
polynomial can then be written as

G(s) = (s³ + 3s)(1 + 2/s)   (3.17)

We know that the term 1 + 2/s is Hurwitz, and the multiplicative factor s³ + 3s is also
Hurwitz; hence G(s) is Hurwitz. The term s³ + 3s is the multiplicative factor W(s), which we
referred to earlier.
Example 3.2. Next consider a case where W(s) is non-Hurwitz:
F(s) = s⁷ + 2s⁶ + 2s⁵ + s⁴ + 4s³ + 8s² + 8s + 4   (3.18)
The continued fraction expansion of F(s) is now obtained:

n(s)/m(s) = s/2 + 1/((4/3)s + 1/((3/2)s))   (3.19)

where the division terminates abruptly with the common factor s⁴ + 4. We thus see that
W(s) = s⁴ + 4, which can be factored into
W(s) = (s² + 2s + 2)(s² − 2s + 2)   (3.20)
It is clear that F(s) is not Hurwitz.
Example 3.3. Let us consider a more obvious non-Hurwitz polynomial:
F(s) = s⁴ + s³ + 2s² + 3s + 2   (3.21)
The continued fraction expansion is

s³ + 3s ) s⁴ + 2s² + 2 ( s
          s⁴ + 3s²
          -----------
          −s² + 2 ) s³ + 3s ( −s
                    s³ − 2s
                    --------
                    5s ) −s² + 2 ( −s/5
                         −s²
                         ----
                         2 ) 5s ( 5s/2
                             5s
                             ---
                             0

We see that F(s) is not Hurwitz because of the negative quotients.
Example 3.4. Consider the case where F(s) is an odd or even function. It is impossible to
perform a continued fraction expansion on the function as it stands. However, we can test the
ratio of F(s) to its derivative, F′(s). If the ratio F(s)/F′(s) gives a continued fraction expansion
with all positive coefficients, then F(s) is Hurwitz. For example, if F(s) is given as
F(s) = s⁷ + 3s⁵ + 2s³ + s   (3.22)
then
F′(s) = 7s⁶ + 15s⁴ + 6s² + 1   (3.23)
Without going into the details, it can be shown that the continued fraction expansion of
F(s)/F′(s) does not yield all positive quotients. Therefore F(s) is not Hurwitz.

3.3 Positive Real Functions


In this section we will study the properties of a class of functions known as positive real
functions. These functions are important because they represent physically realizable passive
driving-point immittances. A function F(s) is positive real (p.r.) if the following conditions are
satisfied:
1. F(s) is real for real s; that is, F(σ) is real.

2. The real part of F(s) is greater than or equal to zero when the real part of s is greater than
or equal to zero; that is,
Re[F(s)] ≥ 0 for Re[s] ≥ 0
Let us consider a complex-plane interpretation of a p.r. function.
Consider the s plane and the F(s) plane in Fig. 5. If F(s) is p.r., then a point σ₀ on the
positive real axis of the s plane would correspond to, or map onto, a point F(σ₀) which must
be on the positive real axis of the F(s) plane. In addition, a point sᵢ in the right half of the s
plane would map onto a point F(sᵢ) in the right half of the F(s) plane.

Fig. 5. Mapping of the s plane onto the F(s) plane.


In other words, for a positive real function, the right half of the s plane maps onto the right
half of the F(s) plane. The real axis of the s plane maps onto the real axis of the F(s) plane.
A further restriction we will impose is that F(s) be rational. Consider the following
examples of p.r. functions:
1. F(s) = Ls (where L is a real, positive number) is p.r. by definition. If F(s) is an
impedance function, then L is an inductance.
2. F(s) = R (where R is real and positive) is p.r. by definition. If F(s) is an impedance
function, R is a resistance.
3. F(s) = K/s (K real and positive) is p.r. because, when s is real, F(s) is real. In addition,
when the real part of s is greater than zero, Re(s) = σ > 0, and

Re[K/s] = Kσ/(σ² + ω²) ≥ 0   (3.24)

Therefore, F(s) is p.r. If F(s) is an impedance function, then the corresponding element is
a capacitor of 1/K farads.
We thus see that the basic passive impedances are p.r. functions.

Similarly, it is clear that the admittances

Y(s) = K
Y(s) = Ks   (3.25)
Y(s) = K/s

are positive real if K is real and positive. We now show that all driving-point immittances of
passive networks must be p.r. The proof depends upon the following assertion: for a sinusoidal
input, the average power dissipated by a passive network is nonnegative. For the passive
network in Fig. 6, the average power dissipated by the network is

Average power = ½ Re[Zin(jω)] |I|² ≥ 0   (3.26)

We then conclude that, for any passive network,
Re[Zin(jω)] ≥ 0   (3.27)

We can now prove that for Re[s] = σ ≥ 0, Re[Zin(σ + jω)] ≥ 0. Consider
the network in Fig. 6, whose driving-point impedance is Zin(s).

Fig. 6. A passive network with driving-point impedance Zin(s).

Let us load the network with incidental dissipation such that, if the
driving-point impedance of the uniformly loaded network is Z₁(s), then
Z₁(s) = Zin(s + α)   (3.28)
where α, the dissipation constant, is real and positive. Since Z₁(s) is the impedance of a passive
network,
Re[Z₁(jω)] ≥ 0   (3.29)
that is,
Re[Zin(α + jω)] ≥ 0   (3.30)
Since α is an arbitrary real positive quantity, it can be taken to be σ. Thus the theorem is
proved.
Next let us consider some useful properties of p.r. functions. The proofs of these
properties are not given here.
1. If F(s) is p.r., then 1/F(s) is also p.r. This property implies that if a driving-point
impedance is p.r., then its reciprocal, the driving-point admittance, is also p.r.

2. The sum of p.r. functions is p.r. From an impedance standpoint, we see that if two
impedances are connected in series, the sum of the impedances is p.r. An analogous
situation holds for two admittances in parallel. Note that the difference of two p.r.
functions is not necessarily p.r.; for example, F(s) = s − 1/s is not p.r.
3. The poles and zeros of a p.r. function cannot have positive real parts, i.e., they cannot
be in the right half of the s plane.
4. Only simple poles with real positive residues can exist on the jω axis.
5. The poles and zeros of a p.r. function are real or occur in conjugate pairs. We know
that the poles and zeros of a network function are functions of the elements in the
network. Since the elements themselves are real, there cannot be complex poles or
zeros without conjugates because this would imply imaginary elements.
6. The highest powers of the numerator and denominator polynomials may differ at most
by unity. This condition prohibits multiple poles and zeros at s = ∞.
7. The lowest powers of the denominator and numerator polynomials may differ by at
most unity. This condition prevents the possibility of multiple poles or zeros at s = 0.
8. The necessary and sufficient conditions for a rational function with real coefficients
F(s) to be p.r. are
(a) F(s) must have no poles in the right-half plane.
(b) F(s) may have only simple poles on the jω axis, with real and positive residues.
(c) Re[F(jω)] ≥ 0 for all ω.
Let us compare this new definition with the original one, which requires the two conditions:
1. F(s) is real when s is real.
2. Re[F(s)] ≥ 0 when Re[s] ≥ 0.
In order to test condition 2 of the original definition, we must test every single point in
the right-half plane. In the alternate definition, condition (c) merely requires that we test the
behaviour of F(s) along the jω axis. It is apparent that testing a function for the three conditions
given by the alternate definition represents a considerable saving of effort, except in simple
cases such as F(s) = 1/s.
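That saving can be seen in a rough numerical screen of conditions (a) and (c). This is only a sketch: it samples Re F(jω) on a finite grid and assumes F(s) has no jω-axis poles, so condition (b) is not checked; the tolerances and grid are illustrative assumptions:

```python
import numpy as np

def pr_screen(num, den, w_max=100.0, n_samples=10001):
    """Rough numerical screen for positive realness of F(s) = num/den
    (coefficients highest power first; assumes no jw-axis poles)."""
    poles = np.roots(den)
    if np.any(poles.real > 1e-9):               # (a): a pole in the RHP
        return False
    w = np.linspace(0.0, w_max, n_samples)
    F = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
    return bool(np.all(F.real >= -1e-9))        # (c): Re F(jw) >= 0 on the grid

# F(s) = (s + 2)/(s + 1): Re F(jw) = (2 + w^2)/(1 + w^2) > 0, so it passes;
# F(s) = (s - 1)/(s + 1): Re F(jw) = (w^2 - 1)/(1 + w^2) < 0 for w < 1, so it fails.
```

A passing screen is only necessary evidence; a conclusive test still requires the analytic conditions developed below.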
Let us examine the implications of each criterion of the second definition.
Condition (a) requires that we test the denominator of F(s) for roots in the right-half plane, i.e.,
we must determine whether the denominator of F(s) is Hurwitz. This is readily accomplished
through a continued fraction expansion of the odd to even or even to odd parts of the
denominator. The second requirement, condition (b), is tested by making a partial fraction
expansion of F(s) and checking whether the residues of the poles on the jω axis are positive
and real. Thus, if F(s) has a pair of poles at s = ±jω₁, a partial fraction expansion gives terms
of the form

K₁/(s − jω₁) + K₁*/(s + jω₁)

The residues of complex conjugate poles are themselves conjugates. If the residues are real,
as they must be in order for F(s) to be p.r., then K₁ = K₁*, so that

K₁/(s − jω₁) + K₁*/(s + jω₁) = 2K₁s/(s² + ω₁²)   (3.31)

If K₁ is found to be positive, then F(s) satisfies the second of the three conditions.
In order to test for the third condition for positive realness, we must first find the real
part of F(jω) from the original function F(s). To do this, let us consider a function F(s) given
as a quotient of two polynomials:

F(s) = P(s)/Q(s)   (3.32)

We can separate the even parts from the odd parts of P(s) and Q(s) so that F(s) is

F(s) = [M₁(s) + N₁(s)] / [M₂(s) + N₂(s)]   (3.33)

where Mᵢ(s) is an even function and Nᵢ(s) is an odd function. F(s) is now decomposed into even
and odd parts by multiplying both numerator and denominator by (M₂ − N₂), so that

F(s) = [(M₁ + N₁)/(M₂ + N₂)] · [(M₂ − N₂)/(M₂ − N₂)]
= (M₁M₂ − N₁N₂)/(M₂² − N₂²) + (M₂N₁ − M₁N₂)/(M₂² − N₂²)   (3.34)

We see that the products M₁M₂ and N₁N₂ are even functions, while M₁N₂ and M₂N₁ are odd
functions. Therefore, the even part of F(s) is

Ev F(s) = (M₁M₂ − N₁N₂)/(M₂² − N₂²)   (3.35)

and the odd part of F(s) is

Odd F(s) = (M₂N₁ − M₁N₂)/(M₂² − N₂²)   (3.36)

32
If we let s = jω, the even part of any polynomial is real, while the odd part is imaginary, so that if F(jω) is written as

F(jω) = Re F(jω) + j Im F(jω)    (3.37)

it is clear that

Re F(jω) = Ev F(s)|s = jω    (3.38)

and

j Im F(jω) = Odd F(s)|s = jω    (3.39)

Therefore, to test for the third condition for positive realness, we determine the real part of F(jω) by finding the even part of F(s) and then letting s = jω. We then check whether Re F(jω) ≥ 0 for all ω.

[Figs. 7 and 8: plots of A(ω) versus ω, showing a single real root (Fig. 7) and a double root (Fig. 8).]

The denominator of Re F(jω) is always a positive quantity because

M2²(jω) − N2²(jω) = M2²(ω) + N2²(ω) ≥ 0    (3.40)

That is, there is an extra j or imaginary term in N2(jω), which, when squared, gives −1, so that the denominator of Re F(jω) is the sum of two squared numbers and is always positive.
Therefore, our task reduces to the problem of determining whether

A(ω) = M1(jω)M2(jω) − N1(jω)N2(jω) ≥ 0    (3.41)

If we call the preceding function A(ω), we see that A(ω) must not have positive, real roots of the type shown in Fig. 7; i.e., A(ω) must never have single, real roots of ω. However, A(ω) may have double roots (Fig. 8), because then A(ω) need not become negative.
As an example, consider the requirements for

F(s) = (s + a)/(s² + bs + c)    (3.42)

to be p.r. First, we know that, in order for the poles and zeros to be in the left-half plane or on the jω axis, the coefficients a, b, c must be greater than or equal to zero. Second, if b = 0, then F(s) will possess poles on the jω axis. We can then write F(s) as

F(s) = s/(s² + c) + a/(s² + c)    (3.43)

We shall show later that the coefficient a must also be zero when b = 0. Let us proceed with the third requirement, namely, Re F(jω) ≥ 0. From the equation

M1(jω)M2(jω) − N1(jω)N2(jω) ≥ 0    (3.44)

we have

a(−ω² + c) + bω² ≥ 0    (3.45a)

which simplifies to

A(ω) = (b − a)ω² + ac ≥ 0    (3.45b)

It is evident that in order to prevent A(ω) from having positive real roots of ω, b must be greater than or equal to a, that is, b ≥ a. As a result, when b = 0, then a = 0. To summarize, the conditions that must be fulfilled in order for F(s) to be positive real are
1. a, b, c ≥ 0.
2. b ≥ a.
We see that

F1(s) = (s + 2)/(s² + 3s + 2)    (3.46)

is p.r., while the functions

F2(s) = (s + 1)/(s² + 2)    (3.47)

F3(s) = (s + 4)/(s² + 2s + 1)    (3.48)

are not p.r.


The examples given here are, of course, special cases, but they illustrate the procedure by which functions are tested for the p.r. property. We shall now consider a number of other helpful points by which a function might be tested quickly. First, if F(s) has poles on the jω axis, a partial fraction expansion will show whether the residues of these poles are positive and real. For example,

F(s) = (3s² + 5)/[s(s² + 1)]    (3.49)

has a pair of poles at s = ±j1. The partial fraction expansion of F(s),

F(s) = 5/s + (−2s)/(s² + 1)    (3.50)

shows that the residues of the poles at s = ±j are negative. Therefore F(s) is not p.r.
Since the impedances and admittances of passive time-invariant networks are p.r. functions, we can make use of our knowledge of impedances connected in series or parallel in our testing for the p.r. property. For example, if Z1(s) and Z2(s) are passive impedances, then Z1 connected in parallel with Z2 gives the overall impedance

Z(s) = Z1(s)Z2(s)/[Z1(s) + Z2(s)]    (3.51)

Since connecting the two impedances in parallel has not affected the passivity of the network, we know that Z(s) must also be p.r. We see that if F1(s) and F2(s) are p.r. functions, then

F(s) = F1(s)F2(s)/[F1(s) + F2(s)]    (3.52)
must also be p.r. Consequently, the functions

F(s) = Ks/(s + α)    (3.53)

F(s) = K/(s + α)    (3.54)

where α and K are real and positive quantities, must be p.r. We then observe that functions of the type

F(s) = (s + α)/(s + β) = s/(s + β) + α/(s + β),  α, β ≥ 0    (3.57)

must be p.r. also.
Finally, let us determine whether

F(s) = Ks/(s² + α),  K, α > 0    (3.58)

is p.r. If we write F(s) as

F(s) = 1/(s/K + α/Ks)    (3.59)

we see that the terms s/K and α/Ks are p.r. Therefore, the sum of the two terms must be p.r. Since the reciprocal of a p.r. function is also p.r., we conclude that F(s) is p.r.
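The three-step test above lends itself to a quick numerical spot check. The sketch below (an illustration, not part of the original text) evaluates Re F(jω) on a frequency grid for the functions of Eqs. (3.46) and (3.48) and confirms that F1 stays nonnegative while F3 goes negative, in agreement with the condition b ≥ a.

```python
# Numerical spot check of condition (c): Re F(jw) >= 0 for all w.
# F1(s) = (s + 2)/(s^2 + 3s + 2) satisfies b >= a; F3(s) = (s + 4)/(s^2 + 2s + 1) does not.

def re_on_jw(num, den, w):
    """Real part of num(s)/den(s) at s = jw (coefficient lists, highest power first)."""
    s = 1j * w
    n = sum(c * s**(len(num) - 1 - k) for k, c in enumerate(num))
    d = sum(c * s**(len(den) - 1 - k) for k, c in enumerate(den))
    return (n / d).real

ws = [k * 0.05 for k in range(1, 400)]          # sample 0 < w < 20
min_f1 = min(re_on_jw([1, 2], [1, 3, 2], w) for w in ws)
min_f3 = min(re_on_jw([1, 4], [1, 2, 1], w) for w in ws)
print(min_f1 >= 0, min_f3 < 0)
```

Sampling a grid is of course not a proof, but it catches violations quickly; the algebraic test A(ω) ≥ 0 of Eq. (3.45b) remains the decisive check.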

3.4 Elementary Synthesis Procedure

The basic philosophy behind the synthesis of driving-point functions is to break up a p.r. function Z(s) into a sum of simpler p.r. functions Z1(s), Z2(s), …, Zn(s), and to synthesize these individual Zi(s) as elements of the overall network whose driving-point impedance is Z(s):

Z(s) = Z1(s) + Z2(s) + … + Zn(s)    (3.60)

First, consider the "breaking-up" process of the function Z(s) into the sum of functions Zi(s). One important restriction is that all Zi(s) must be p.r. Certainly, if all the Zi(s) were given to us, we could synthesize a network whose driving-point impedance is Z(s) by simply connecting all the Zi(s) in series. However, if we were to start with Z(s) alone, how could we decompose Z(s) to give us the individual Zi(s)? Suppose Z(s) is given in general as

Z(s) = (an sⁿ + an−1 sⁿ⁻¹ + … + a1 s + a0)/(bm sᵐ + bm−1 sᵐ⁻¹ + … + b1 s + b0) = P(s)/Q(s)    (3.61)

Consider the case where Z(s) has a pole at s = 0 (that is, b0 = 0). Let us divide P(s) by Q(s) to give a quotient D/s and a remainder R(s), which we can denote as Z1(s) and Z2(s):

Z(s) = D/s + R(s),  D ≥ 0
     = Z1(s) + Z2(s)    (3.62)
Are Z1 and Z2 p.r.? We know that Z1 = D/s is p.r. Is Z2(s) p.r.? Consider the p.r. criteria given previously:
1. Z2(s) must have no poles in the right-half plane.
2. Poles of Z2(s) on the imaginary axis must be simple, and their residues must be real and positive.
3. Re Z2(jω) ≥ 0 for all ω.
Let us examine these criteria one by one. Criterion 1 is satisfied because the poles of Z2(s) are also poles of Z(s). Criterion 2 is satisfied by the same argument: a simple partial fraction expansion does not affect the residues of the other poles. When s = jω, Re Z1(jω) = Re(D/jω) = 0, so we have

Re Z2(jω) = Re Z(jω) ≥ 0    (3.63)

From the foregoing discussion, it is seen that if Z(s) has a pole at s = 0, a partial fraction expansion can be made such that one of the terms is of the form K/s and the other terms combined still remain p.r.
A similar argument shows that if Z(s) has a pole at s = ∞ (that is, n − m = 1), we can divide the numerator by the denominator to give a quotient Ls and a remainder term R(s), again denoted as Z1(s) and Z2(s):

Z(s) = Ls + R(s) = Z1(s) + Z2(s)    (3.64)

Here Z2(s) is also p.r. If Z(s) has a pair of conjugate imaginary poles on the imaginary axis, for example, poles at s = ±jω1, then Z(s) can be expanded into partial fractions so that

Z(s) = 2Ks/(s² + ω1²) + Z2(s)    (3.65)

Here

Re[2Ks/(s² + ω1²)]|s = jω = Re[j2Kω/(ω1² − ω²)] = 0    (3.66)

so that Z2(s) is p.r.

[Fig. 10: Z(s) realized as the series connection of Z1(s) and Z2(s).]
Consider the following p.r. function:

Z(s) = (s² + 2s + 6)/[s(s + 3)]    (3.68)

We see that Z(s) has a pole at s = 0. A partial fraction expansion of Z(s) yields

Z(s) = 2/s + s/(s + 3)
     = Z1(s) + Z2(s)    (3.69)

If we remove Z1(s) from Z(s), we obtain Z2(s), which can be realized as a resistor in parallel with an inductor, as illustrated in Fig. 11.

[Fig. 11: the element realizing Z1(s) = 2/s in series with the parallel combination of a resistor and an inductor realizing Z2(s) = s/(s + 3).]
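The decomposition of Eq. (3.69) can be verified numerically; the snippet below (illustrative only, not from the original text) checks that Z(s) and Z1(s) + Z2(s) agree at a few test points on the jω axis.

```python
# Verify Z(s) = (s^2 + 2s + 6)/(s(s + 3)) equals 2/s + s/(s + 3) at sample frequencies.
def Z(s):  return (s**2 + 2*s + 6) / (s * (s + 3))
def Z1(s): return 2 / s
def Z2(s): return s / (s + 3)

for w in (0.5, 1.0, 2.0, 10.0):
    s = 1j * w
    assert abs(Z(s) - (Z1(s) + Z2(s))) < 1e-12
print("decomposition verified")
```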

Example 6:

Y(s) = (7s + 2)/(2s + 4)    (3.70)

Let us synthesize the network by first removing min Re Y(jω). The real part of Y(jω) is easily obtained as

Re Y(jω) = (8 + 14ω²)/(16 + 4ω²)    (3.71)

We see that the minimum of Re Y(jω) occurs at ω = 0 and is equal to min Re Y(jω) = 0.5. Let us then remove Y1 = 0.5 mho from Y(s) and denote the remainder as Y2(s), as shown in Fig. 12. The remainder function Y2(s) is p.r. because we have removed only the minimum real part of Y(jω). Y2(s) is obtained as

Y2(s) = Y(s) − 0.5 = 3s/(s + 2)    (3.72)

It is readily seen that Y2(s) is made up of a 1/3 Ω resistor in series with a 3/2 farad capacitor. Thus the final network is shown in Fig. 13.
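The minimum-real-part removal of Example 6 can be checked numerically. The sketch below (an illustration, not part of the text) samples Re Y(jω) over a grid, confirms the minimum of 0.5 at ω = 0, and verifies the remainder Y2(s) of Eq. (3.72).

```python
# Y(s) = (7s + 2)/(2s + 4); Re Y(jw) = (8 + 14 w^2)/(16 + 4 w^2), minimum 0.5 at w = 0.
def Y(s): return (7*s + 2) / (2*s + 4)

re_vals = [Y(1j * (0.01 * k)).real for k in range(0, 2001)]   # 0 <= w <= 20
min_re = min(re_vals)

# Remainder after removing the 0.5-mho conductance: Y2(s) = 3s/(s + 2).
def Y2(s): return 3*s / (s + 2)

s = 1j * 3.7                         # arbitrary test point on the jw axis
assert abs(min_re - 0.5) < 1e-6
assert abs((Y(s) - 0.5) - Y2(s)) < 1e-12
print(min_re)
```

Since Re Y(jω) rises monotonically from 0.5 toward 7/2 as ω grows, the grid minimum is attained at ω = 0, exactly as the algebra predicts.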

Consider the p.r. impedance

Z(s) = (6s³ + 3s² + 3s + 1)/(6s³ + 3s)    (3.73)

The real part of the function is a constant, equal to unity. Removing a constant of 1 Ω, we obtain

Z1(s) = Z(s) − 1 = (3s² + 1)/(6s³ + 3s)    (3.75)

[Fig. 14: Z(s) as a 1 Ω resistor in series with Z1(s).]

The reciprocal of Z1(s) is an admittance

Y1(s) = (6s³ + 3s)/(3s² + 1)    (3.76)

which has a pole at s = ∞. The pole is removed by finding the partial fraction expansion of Y1(s),

Y1(s) = 2s + s/(3s² + 1)    (3.77)

and then removing the term with the pole at s = ∞ to give a capacitor of 2 farads in parallel with Y2(s) (Fig. 15). Y2(s) is now obtained as

Y2(s) = Y1(s) − 2s = s/(3s² + 1)    (3.78)

The reciprocal of Y2(s) is

Z2(s) = 3s + 1/s    (3.79)

[Fig. 15: the 1 Ω resistor in series with the parallel combination of a 2 F capacitor and Y2.]

[Fig. 16: the final ladder: a 1 Ω resistor in series with the parallel combination of a 2 F capacitor and a series 3 H inductor plus 1 F capacitor.]
These examples are special cases of the driving-point synthesis problem, but they illustrate the basic techniques involved.
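The successive removals above terminate in the ladder of Fig. 16. As a numerical check (illustrative, not from the text), the snippet below evaluates the ladder impedance built from the extracted elements and compares it with the original Z(s) of Eq. (3.73).

```python
# Compare Z(s) = (6s^3 + 3s^2 + 3s + 1)/(6s^3 + 3s) with the Fig. 16 ladder.
def Z(s):
    return (6*s**3 + 3*s**2 + 3*s + 1) / (6*s**3 + 3*s)

def ladder(s):
    z_arm = 3*s + 1/s            # 3 H inductor in series with a 1 F capacitor
    y = 2*s + 1/z_arm            # 2 F capacitor in parallel with that arm
    return 1 + 1/y               # 1 ohm resistor in series with the parallel group

for w in (0.3, 1.0, 4.0):        # avoid the jw-axis poles at w = 0 and w = 1/sqrt(2)
    s = 1j * w
    assert abs(Z(s) - ladder(s)) < 1e-12
print("ladder matches Z(s)")
```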

CHAPTER 4
Two Port Network

CHAPTER 5
5 Active Networks
5.1. Active network components
Linear, lumped, time-invariant active network components can be classified into two categories: basic building blocks and secondary building blocks. This classification is based on the observation that every element in the secondary category can be realized by interconnecting elements of the basic category. Secondary building blocks are not fundamental, but they are useful for modelling some network components.
• Basic building blocks
This category includes passive elements such as resistors, capacitors, inductors, and active
elements such as operational amplifiers (Op-Amps).
• Secondary building blocks
o Negative impedance converter (NIC)
A two-port device whose input impedance equals the negative of the load impedance scaled by a constant 1/k.

o Generalized impedance converter (GIC)
A two-port device capable of making the input impedance of one of its two ports equal to the product of the load impedance and some internal impedance.
Using a GIC, inductors can be simulated from resistors, capacitors, and active elements. This is very important in filters, since replacing physical inductors with active circuits avoids the physically large inductors that extreme-frequency applications would otherwise require and that are difficult to fabricate in integrated circuits.
o Frequency dependent negative resistance (FDNR)
A circuit realization of an FDNR can be obtained by terminating port 1 of the GIC with a capacitor, as shown below.

Operational amplifiers

• An op amp is a high-gain DC amplifier with high input impedance, low output impedance, and differential inputs.
• A positive signal at the non-inverting input produces a positive output; a positive signal at the inverting input produces a negative output.
• Practically, op amps are not used open-loop; feedback is included to reduce the gain and obtain a more precise and predictable characteristic.
• In a feedback configuration, if one of the input terminals is grounded, the other is virtually grounded.

5.2. Operational Amplifier Circuits


The following are some of the most common op-amp feedback configurations.

Inverting feedback

The impedance elements Zs and ZF are impedances of a one-port network containing one or
more elements.
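For the ideal op amp, the inverting configuration has the well-known closed-loop gain Vo/Vi = −ZF/Zs (a standard result; it is implied by the virtual-ground argument above but not written out in the text). The sketch below, with illustrative component values of my own choosing, evaluates this gain when ZF is a resistor in parallel with a capacitor, giving a first-order inverting low-pass stage.

```python
# Ideal inverting op-amp stage: Vo/Vi = -ZF/Zs (standard ideal-op-amp result).
def inverting_gain(zs, zf):
    return -zf / zs

R1, RF, CF = 1e3, 10e3, 1e-6          # illustrative values: 1 kOhm, 10 kOhm, 1 uF

def gain_at(w):
    s = 1j * w
    zf = RF / (1 + s * RF * CF)       # RF in parallel with capacitor CF
    return inverting_gain(R1, zf)

dc_gain = gain_at(0)                  # at DC the capacitor is open: gain = -RF/R1 = -10
assert abs(dc_gain + 10) < 1e-12
print(abs(dc_gain))
```

The DC gain of −10 and the roll-off set by RF·CF illustrate how choosing Zs and ZF as one-port impedances shapes the transfer function of a single feedback stage.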

5.3. Realization of Active Networks
Transfer functions can be realized using active elements (especially operational amplifiers) and RC elements. Inductors are not common in active networks because of their large size and because they can be simulated using active circuits. Transfer functions of simple feedback circuits have orders of at most 2. However, in most applications, such as filters and control systems, higher-order transfer functions are required. The most common and easiest way to realize such a function is to break it into a product of first- and second-order transfer functions; a cascade of the subnetworks obtained from these factors then realizes the given transfer function, provided there is no loading effect. If the transfer function is a voltage-gain function, cascaded op-amp feedback circuits can be used. Operational amplifiers have high input impedance and low output impedance, so there is no loading effect in cascaded interconnections.
Example 1 Realize the following transfer function using op-amps and RC elements.

1.2 Signals and systems

One-dimensional signal
A signal that depends on only one independent variable is called a one-dimensional signal.
Ex: speech signal
Multidimensional signal
A signal that depends on two or more independent variables is called a multidimensional signal.
Ex: an image (a two-dimensional signal), whose two independent variables are the spatial coordinates.
One-channel signal
Signals generated by a single source are called one-channel signals.
Ex: audio output of a mono speaker
Multichannel signal
Signals generated by multiple sources are called multichannel signals.
Ex: ECG signal (taken at six different places on the human body)
Continuous signal
A signal defined continuously for every value of the independent variable is called a continuous (or analog) signal.
Discrete signal
A signal defined only at discrete intervals of the independent variable is called a discrete signal.
Based on their nature and characteristics in the time domain, signals may be classified as
(i) Continuous time signals
(ii) Discrete time signals
Continuous time signals
The signals that are defined for every instant of time are known as continuous time signals. They are denoted as x(t).
[Figure: a continuous-time signal x(t) ranging between +A and −A.]
Classification of signals and systems 1.3

Discrete time signals
The signals that are defined only at discrete instants of time are known as discrete time signals. Discrete time signals are continuous in amplitude and discrete in time. They are denoted as x(n), or equivalently

x(nT) = x(t)|t = nT

[Figure: a discrete-time signal x(n) plotted for n = 0, 1, 2, 3, 4, 5.]

Mathematically a discrete time signal is denoted as

x(n) = {0, 2, 4, 1, 3, −1}

where the arrow indicates the value of x(n) at n = 0.


Digital signals
The signals that are discrete in time and quantized in amplitude are digital signals.
1.2 System
A system may be defined as a set of elements (or functional blocks, or physical devices) which are connected together and produce an output in response to an input signal.

x(t) → [System] → y(t)

Mathematically the functional relationship between input and output may be written as
y(t) = H[x(t)]
Symbolically, x(t) → y(t)
where x(t) is the input signal, y(t) is the output signal, and H is the system operator.
Ex: audio and video amplifiers

When a system satisfies the properties of linearity and time invariance, it is called an LTI (Linear Time Invariant) system.
Applications of signals and systems
Signals and systems are used throughout science and engineering. Some of the applications are
- Image processing
- Speech processing
- Communication
- Audio and video equipment
- Biomedical engineering
Problems

1. x(n) = {−3, −1, 2, 3, 4}, with the arrow at 2 (the n = 0 sample). Draw the DT signal.

Solution:
x(−2) = −3, x(−1) = −1, x(0) = 2, x(1) = 3, x(2) = 4
[Figure: stem plot of x(n) for n = −2 … 2.]
2. x(n) = {1, 2, 3, 4}. Draw the DT signal.

Solution:
x(0) = 1, x(1) = 2, x(2) = 3, x(3) = 4
[Figure: stem plot of x(n) for n = 0 … 3.]

3. x(n) = { n + 2 for n > 0;  n + 1 for n = 0;  n + 3 for n < 0 }. Draw the DT signal.

Solution:
For n > 0, x(n) = n + 2: x(1) = 3, x(2) = 4, x(3) = 5.
For n = 0, x(n) = n + 1: x(0) = 1.
For n < 0, x(n) = n + 3: x(−1) = 2, x(−2) = 1, x(−3) = 0.

x(n) = {0, 1, 2, 1, 3, 4, 5}, with the arrow at the n = 0 sample (value 1).

[Figure: stem plot of x(n) for n = −3 … 3.]

4. Sketch the continuous time signal x(t) = e^(−t) for the interval 0 < t < 2. Sample the signal with a sampling period T = 0.4 second and sketch the discrete time signal.
Solution:
x(t) = e^(−t), 0 < t < 2
Continuous time signal
Consider t = {0, 0.5, 1, 1.5, 2}:
x(0) = e^0 = 1
x(0.5) = e^(−0.5) = 0.606
x(1) = e^(−1) = 0.368
x(1.5) = e^(−1.5) = 0.223
x(2) = e^(−2) = 0.135
[Figure: x(t) decaying from 1 at t = 0 toward 0.135 at t = 2.]
Discrete time signal

x(n) = x(nT) = x(t)|t = nT

With T = 0.4 second,

x(n) = x(0.4n) = e^(−0.4n)

For choosing the range of n: 0.4n = 2 gives n = 2/0.4 = 5, i.e., n = 0 to 4.
n = 0, x(0) = e^0 = 1
n = 1, x(1) = e^(−0.4) = 0.67
n = 2, x(2) = e^(−0.8) = 0.449
n = 3, x(3) = e^(−1.2) = 0.301
n = 4, x(4) = e^(−1.6) = 0.202
[Figure: stem plot of x(n) for n = 0 … 4.]
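The sampling step of problem 4 is easy to reproduce; the snippet below (illustrative, not part of the original text) generates the five samples x(n) = e^(−0.4n).

```python
# Sample x(t) = exp(-t) on 0 <= t < 2 with T = 0.4 s, giving x(n) = exp(-0.4 n), n = 0..4.
import math

T = 0.4
samples = [math.exp(-T * n) for n in range(5)]
print([round(v, 3) for v in samples])   # -> [1.0, 0.67, 0.449, 0.301, 0.202]
```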
The sampled values x(n) = 2 sin π(0.4n) are:
n = 0, x(0) = 2 sin π(0) = 0
n = 1, x(1) = 2 sin π(0.4) = 1.9
n = 2, x(2) = 2 sin π(0.8) = 1.17
n = 3, x(3) = 2 sin π(1.2) = −1.17
n = 4, x(4) = 2 sin π(1.6) = −1.9
n = 5, x(5) = 2 sin π(2) = 0
[Figure: stem plot of x(n) oscillating between +2 and −2 for n = 0 … 5.]
1.3 Elementary continuous time signals / Standard continuous time signals

These signals include the step, ramp, pulse, impulse, sinusoidal, parabolic, and exponential functions.
1. Step signal
The step signal is defined as
x(t) = A for t ≥ 0; 0 for t < 0
The unit step signal is defined as
x(t) = u(t) = 1 for t ≥ 0; 0 for t < 0
[Figures: the step signal of height A and the unit step u(t).]
2. Ramp signal
The ramp signal is defined as
x(t) = At for t ≥ 0; 0 for t < 0
The unit ramp signal is defined as
x(t) = r(t) = t for t ≥ 0; 0 for t < 0
[Figures: the ramp of slope A and the unit ramp r(t).]
3. Rectangular pulse [unit pulse signal]
The rectangular pulse signal is defined as
x(t) = Π(t) = 1 for −τ/2 ≤ t ≤ τ/2; 0 otherwise
or, equivalently,
Π(t) = 1 for |t| ≤ τ/2; 0 otherwise
[Figure: unit pulse of height 1 on −0.5 ≤ t ≤ 0.5.]
4. Triangular pulse signal
The triangular pulse signal is defined as
x(t) = Δa(t) = 1 − |t|/a for |t| ≤ a; 0 for |t| > a
[Figure: triangle of unit height on −a ≤ t ≤ a.]
5. Impulse signal
The impulse signal is defined as
x(t) = A for t = 0; 0 for t ≠ 0
It has zero duration and infinite magnitude. The unit impulse signal (delta signal) is defined as
δ(t) = 1 for t = 0; 0 for t ≠ 0
Properties of the unit impulse signal
(i) ∫_{−∞}^{∞} δ(t) dt = 1
(ii) ∫_{−∞}^{∞} x(t) δ(t) dt = x(0)
(iii) ∫_{−∞}^{∞} x(t) δ(t − t0) dt = x(t0)
(iv) ∫_{−∞}^{∞} x(τ) δ(t − τ) dτ = x(t)
(v) δ(at) = (1/|a|) δ(t)
6. Sinusoidal signal
A continuous time sinusoidal signal is given by
x(t) = A sin(ωt + φ)
where
A = amplitude
ω = angular frequency in radians per second
φ = phase angle in radians
A sinusoidal signal is an example of a periodic signal. The time period of this signal is given by
T = 2π/ω
[Figure: one sinusoid of amplitude A.]
7. Exponential signal
The exponential signal plays an important role in signal analysis. It is classified into
(i) Real exponential signals
(ii) Complex exponential signals
Real exponential signal
The real exponential signal is represented as
x(t) = A e^(at)
where A and a are real constants. We consider three cases based on the value of a.
Case (i): a = 0. x(t) = A; the signal is constant at A.
Case (ii): a > 0. For a = 1, 2, 3, …, x(t) takes the forms A e^t, A e^(2t), A e^(3t), …; x(t) is said to be exponentially growing.
Case (iii): a < 0. For a = −1, −2, −3, …, x(t) takes the forms A e^(−t), A e^(−2t), A e^(−3t), …; x(t) is said to be exponentially decaying.
[Figures: case (i) a = 0 (constant), case (ii) a > 0 (growing), case (iii) a < 0 (decaying).]
Complex exponential signals
The complex exponential signal is represented as
x(t) = e^(st)
where s is a complex quantity, s = σ + jω. Thus
x(t) = e^((σ + jω)t) = e^(σt) e^(jωt)
We consider three cases.
Case (i): σ = 0, ω = 0. Then x(t) = 1; the signal is a constant (DC) signal.
[Figure: the constant signal x(t) = 1.]
Case (ii): ω = 0, σ ≠ 0. Then x(t) = e^(σt).
(a) σ > 0 (σ = 1, 2, 3, …): x(t) = e^t, e^(2t), e^(3t), …; x(t) is an exponentially growing (rising) signal.
(b) σ < 0 (σ = −1, −2, −3, …): x(t) = e^(−t), e^(−2t), e^(−3t), …; x(t) is an exponentially decaying signal.
[Figures: exponentially growing signal; exponentially decaying signal.]
Case (iii): σ ≠ 0, ω ≠ 0. Then x(t) = e^(σt) e^(jωt), a sinusoid with an exponential envelope.
(a) σ > 0, ω ≠ 0: x(t) is a sinusoidally exponentially growing signal.
(b) σ < 0, ω ≠ 0: x(t) is a sinusoidally exponentially decaying (damped) signal.
[Figures: sinusoidal signal; sinusoidally exponentially growing signal; sinusoidally exponentially decaying signal.]

8. Parabolic signal
The parabolic signal is defined as
x(t) = At²/2 for t ≥ 0; 0 for t < 0
[Figure: x(t) passing through 0.5A, 2A, 4.5A at t = 1, 2, 3.]
The unit parabolic signal is defined as
P(t) = t²/2 for t ≥ 0; 0 for t < 0
[Figure: P(t) passing through 0.5, 2, 4.5 at t = 1, 2, 3.]
9. Signum function [sgn(t)]
The signum function (or signum signal) is defined as
x(t) = sgn(t) = 1 for t > 0; 0 for t = 0; −1 for t < 0
[Figure: sgn(t), jumping from −1 to +1 at t = 0.]
It can be expressed in terms of the unit step function as
sgn(t) = −1 + 2u(t)
10. Sinc function
The sinc function (or sinc signal) is defined as
x(t) = sinc(t) = sin(πt)/(πt), −∞ < t < ∞
[Figure: sinc(t), with peak value 1 at t = 0.]
2. Show that d[r(t)]/dt = u(t).
L.H.S.:
r(t) = t for t ≥ 0; 0 for t < 0
Differentiating on both sides,
d[r(t)]/dt = dt/dt = 1 for t ≥ 0; 0 for t < 0
= u(t)
L.H.S. = R.H.S., i.e.,
d[r(t)]/dt = u(t)
Hence proved.
1.5 Elementary discrete time signals / Standard discrete time signals
1. Impulse function
It is defined as
x(n) = A for n = 0; 0 for n ≠ 0
The unit impulse sequence (or unit sample sequence) is defined as
δ(n) = 1 for n = 0; 0 for n ≠ 0
[Figures: impulse of height A; unit impulse δ(n).]
2. Step function (or step sequence)
It is defined as
x(n) = A for n ≥ 0; 0 for n < 0
The unit step sequence is defined as
u(n) = 1 for n ≥ 0; 0 for n < 0
[Figures: step sequence of height A; unit step u(n).]
3. Ramp sequence
It is defined as
x(n) = An for n ≥ 0; 0 for n < 0
The unit ramp sequence is defined as
r(n) = n for n ≥ 0; 0 for n < 0
[Figures: ramp sequence of slope A; unit ramp r(n).]
4. Parabolic sequence
The parabolic sequence is defined as
x(n) = An²/2 for n ≥ 0; 0 for n < 0
The unit parabolic sequence is defined as
P(n) = n²/2 for n ≥ 0; 0 for n < 0
[Figures: parabolic sequence through 0.5A, 2A, 4.5A; unit parabolic sequence through 0.5, 2, 4.5.]
5. Sinusoidal sequence
The discrete time sinusoidal sequence is defined as
x(n) = A sin(ωn + φ)
where A is the amplitude, ω the angular frequency, φ the phase angle, and n an integer.
The period of the discrete time sinusoidal sequence is
N = (2π/ω) m
where m is the smallest integer that makes N an integer.
6. Exponential sequence
It is classified into
(i) Real exponential sequences
(ii) Complex exponential sequences
Real exponential sequence
The real exponential sequence is represented as
x(n) = aⁿ for all n
We consider four cases depending on a.
Case (i): a > 1. The sequence is exponentially growing.
Case (ii): 0 < a < 1. The sequence is exponentially decaying.
[Figures: growing sequence (a > 1); decaying sequence (0 < a < 1).]
Case (iii): −1 < a < 0. The sequence is exponentially decaying, with samples alternating in sign: positive, negative, positive, and so on.
Case (iv): a < −1. The sequence is exponentially growing, with samples alternating in sign: negative, positive, negative, and so on.
[Figures: alternating decaying sequence (−1 < a < 0); alternating growing sequence (a < −1).]
Complex exponential sequence
The complex exponential sequence is represented as
x(n) = aⁿ e^(j(ωn + φ))
We consider three cases based on a.
Case (i): a = 1. x(n) = e^(j(ωn + φ)); the sequence is purely sinusoidal.
Case (ii): a > 1. The sequence is sinusoidally exponentially growing.
Case (iii): a < 1. The sequence is sinusoidally exponentially decaying.
[Figures: purely sinusoidal sequence; growing envelope (a > 1); decaying envelope (a < 1).]
1.6 Relation between δ(n) and u(n)

The discrete time unit impulse is the first difference of the discrete time unit step:

δ(n) = u(n) − u(n − 1)

The discrete time unit step is the running sum of the unit sample:

u(n) = Σ_{m = −∞}^{n} δ(m)

Substituting k = n − m,

u(n) = Σ_{k = 0}^{∞} δ(n − k)

1.7 Basic operations on signals


The basic operations on signals are
(i) Signal Addition
(ii) Signal Multiplication
(iii) Amplitude Scaling
(iv) Time Scaling
(v) Time Reversal
(vi) Time shifting
(i) Signal Addition
The addition of two signals is obtained by adding their values at every instant of time; likewise, the subtraction of two signals is obtained by subtracting their values at every instant of time. Consider two signals x1(t) and x2(t); their sum is y(t) = x1(t) + x2(t), and their difference is y(t) = x1(t) − x2(t).
Example:
[Figures: x1(t), x2(t), and their sum y(t).]
(ii) Signal Multiplication
The multiplication of two signals is obtained by multiplying their values at every instant of time. Consider two signals x1(t) and x2(t); their product is y(t) = x1(t) x2(t).
Example:
[Figures: x1(t), x2(t), and their product y(t).]
(iii) Amplitude Scaling
The amplitude scaling of a signal x(t) is represented by
y(t) = A x(t)
where A is the scaling factor.
If A < 1, the signal is attenuated; if A > 1, the signal is amplified.
Example: for x(t) = cos πt, y(t) = 0.5 cos πt (A = 0.5) and y(t) = 2 cos πt (A = 2).
[Figures: cos πt, its attenuated version 0.5 cos πt, and its amplified version 2 cos πt.]
(iv) Time Scaling
The time scaling of a signal x(t) is accomplished by replacing t with αt, expressed as
y(t) = x(αt)
where α is the scaling factor.
If α < 1, the signal expands; if α > 1, the signal compresses.
Example:
[Figures: x(t), the expanded signal x(t/2), and the compressed signal x(2t).]
(v) Time Reversal
The time reversal of a signal x(t) is obtained by folding the signal about t = 0. It is denoted x(−t) and is obtained by replacing the independent variable t with −t; the result is a mirror image of the original signal x(t) about the time origin t = 0.
Example:
[Figures: x(t) and its time-reversed version x(−t).]
(vi) Time Shifting
The time shifting of a signal x(t) is represented by
y(t) = x(t − t0)
If t0 > 0, the signal is shifted to the right (a positive shift); the shift delays the signal.
If t0 < 0, the signal is shifted to the left (a negative shift); the shift advances the signal.
Example:
[Figures: x(t), its delayed version, and its advanced version.]
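The six operations above are simple substitutions on the time variable. The sketch below (illustrative, with a pulse of my own choosing) expresses shifting, scaling, and reversal as operations on a signal x(t).

```python
# Time shifting, scaling, and reversal expressed as operations on a signal x(t).
def x(t):                      # example signal: unit pulse on [0, 1)
    return 1.0 if 0 <= t < 1 else 0.0

shift    = lambda t: x(t - 2)  # x(t - 2): delayed by 2 (right shift)
compress = lambda t: x(2 * t)  # x(2t): compressed by a factor of 2
reverse  = lambda t: x(-t)     # x(-t): mirror image about t = 0

assert shift(2.5) == 1.0 and shift(0.5) == 0.0
assert compress(0.25) == 1.0 and compress(0.75) == 0.0
assert reverse(-0.5) == 1.0 and reverse(0.5) == 0.0
print("transformations behave as described")
```

Note the direction of each effect: subtracting t0 moves the pulse to the right (a delay), and multiplying t by 2 halves the support of the pulse (compression), exactly as stated in the text.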
Problems
1. Sketch the signal u(t) − u(t − 4).
Solution:
[Figures: u(t), u(t − 4), and their difference u(t) − u(t − 4), a pulse of height 1 on 0 ≤ t < 4.]
2. Sketch the signal x(2t + 3) for the given signal x(t).
[Figure: x(t), a pulse on −1 ≤ t ≤ 1.]
Solution:
(i) Using time shifting: x(t + 3) is a pulse on −4 ≤ t ≤ −2.
(ii) Using time scaling: x(2t + 3) is a pulse on −2 ≤ t ≤ −1.
[Figures: x(t + 3) and x(2t + 3).]
[Figure: u(−n + 5), equal to 1 for n ≤ 5, shown for n = −1 … 5.]
1.8 Classification of CT and DT signals
Both CT and DT signals are classified into several types
(i) Deterministic and Random signals
(ii) Periodic and Aperiodic signals
(iii) Even and odd signals
(iv) Causal and Non-causal signals
(v) Energy and power signals
1.8.1 Deterministic and Random signals
A deterministic signal can be completely represented by a mathematical equation at any time; the nature and amplitude of such a signal can be predicted at any time.
Example: sinusoidal signal x(t) = A sin ωt, exponential signal.
[Figure: a deterministic sinusoid.]
A random signal cannot be predicted at any time. A signal whose characteristics are random in nature is called a random signal; it cannot be represented by a mathematical expression.
Example: noise signals.
[Figure: a random (noise) signal.]
1.8.2 Periodic and Aperiodic signals
A continuous time signal x(t) is said to be periodic if and only if
x(t + T) = x(t) for all t
The smallest value of T which satisfies this condition is called the fundamental period. The reciprocal of the fundamental period T is the fundamental frequency f:
f = 1/T
The fundamental angular frequency is given by
ω0 = 2πf = 2π/T
so the fundamental period is T = 2π/ω0.
A signal is aperiodic if the condition x(t + T) = x(t) fails for even one value of t.
Similarly, a discrete time signal x(n) is called periodic if it satisfies the condition x(n + N) = x(n) for all integers n. The smallest value of N which satisfies this condition is called the fundamental period. A signal is aperiodic if the condition x(n + N) = x(n) fails for even one value of n.
The fundamental angular frequency is given by
ω0 = (2π/N) m,  m an integer
Examples of continuous time periodic signals are the complex exponential and sinusoidal signals. The singularity functions, i.e., the unit step, unit ramp, and unit impulse signals, are aperiodic.
Problems
1. Prove whether the complex exponential signal is periodic or not.
Solution:
x(t) = A e^(jω0t)
x(t + T) = A e^(jω0(t + T)) = A e^(jω0t) e^(jω0T)
With T = 2π/ω0, ω0T = 2π, so
x(t + T) = A e^(jω0t) e^(j2π)
         = A e^(jω0t) (cos 2π + j sin 2π)
         = A e^(jω0t) (1 + j0)
         = A e^(jω0t) = x(t)
Thus x(t + T) = x(t); hence it is periodic.
2. Prove whether the sinusoidal signal is periodic or not.
Solution:
x(t) = A sin ω0t
x(t + T) = A sin ω0(t + T) = A sin(ω0t + ω0T)
With T = 2π/ω0, ω0T = 2π, so
x(t + T) = A sin(ω0t + 2π) = A sin ω0t = x(t)
Thus x(t + T) = x(t); hence it is periodic.
3. Prove whether the cosine signal is periodic or not.
Solution:
x(t) = A cos ω0t
x(t + T) = A cos ω0(t + T) = A cos(ω0t + ω0T)
With T = 2π/ω0, ω0T = 2π, so
x(t + T) = A cos(ω0t + 2π) = A cos ω0t = x(t)
Thus x(t + T) = x(t); hence it is periodic.
4. Find the fundamental period of the following signals.
(i) x(t) = e^(j7t)
(ii) x(t) = 10 sin(20πt + π/3)
(iii) x(t) = 2 cos(t/3)
(iv) x(n) = 2 cos(πn/4) + sin(πn/8) − 2 cos(πn/2 + π/6)
Solution:
(i) x(t) = e^(j7t)
Fundamental period T = 2π/ω0, with ω0 = 7:
T = 2π/7 = 0.285π second.
(ii) x(t) = 10 sin(20πt + π/3)
ω0 = 20π
T = 2π/20π = 0.1 second.
(iii) x(t) = 2 cos(t/3)
ω0 = 1/3
T = 2π/(1/3) = 6π second.
(iv) x(n) = 2 cos(πn/4) + sin(πn/8) − 2 cos(πn/2 + π/6)
N1 = (2π/(π/4)) m = 8m; choose m = 1, so N1 = 8.
N2 = (2π/(π/8)) m = 16m; choose m = 1, so N2 = 16.
N3 = (2π/(π/2)) m = 4m; choose m = 1, so N3 = 4.
The L.C.M. of N1, N2, N3 is 16.
Fundamental period = 16.
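The period calculation of part (iv) can be mechanized with exact rational arithmetic; the sketch below (illustrative, not from the text) finds each Ni as the denominator of ω0/2π in lowest terms and then takes the least common multiple.

```python
# Fundamental period of a DT sinusoid with frequency w0: smallest N with (w0/2pi)*N integer.
# For w0 = (p/q)*pi we have w0/2pi = p/(2q), an exact Fraction.
from fractions import Fraction
from math import gcd

def period(ratio):             # ratio = w0 / (2*pi) as an exact Fraction
    return ratio.denominator   # Fraction is in lowest terms, so this is the smallest such N

N1 = period(Fraction(1, 8))    # w0 = pi/4 -> w0/2pi = 1/8 -> N1 = 8
N2 = period(Fraction(1, 16))   # w0 = pi/8 -> N2 = 16
N3 = period(Fraction(1, 4))    # w0 = pi/2 -> N3 = 4

lcm = N1 * N2 // gcd(N1, N2)
lcm = lcm * N3 // gcd(lcm, N3)
print(lcm)                     # fundamental period of the sum
```

Running this prints 16, matching the L.C.M. computed above.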
5. Find whether the following signals are periodic or not. If periodic, determine the fundamental period.
(i) 2 sin 100πt + cos 250πt
(ii) x(t) = 2 sin 3t + 3 cos(4t + 1)
(iii) x(t) = sin²t
(iv) x(t) = 3u(t) + sin 4t
(v) x(t) = 2 cos(4πt/3) + 5 sin(2πt/3)
(vi) x(t) = 3 cos 5t + 2 sin πt
(vii) cos(πt/3) + sin(πt/5)
(viii) x(n) = j e^(j5n)
(ix) e^(j8πt)
(x) cos t + sin √2 t
Solution:
(i) Given x(t) = 2 sin 100πt + cos 250πt.
Time periods: T1 = 2π/100π = 0.02 second, T2 = 2π/250π = 0.008 second.
The ratio of the two periods, T1/T2 = 0.02/0.008 = 5/2 = 2.5, is a rational number, so x(t) is a periodic signal.
Fundamental period T = 2T1 = 5T2 = 2π/50π = 0.04 second.
(ii) x(t) = 2sin3t + 3cos(4t + 1)
Period, T1 = 2π/3 second
T2 = 2π/4 second
The ratio of the two periods,
T1/T2 = (2π/3)/(2π/4) = 4/3 is a rational number
Hence x(t) is a periodic signal.
Fundamental period, T = 3T1 = 4T2
= 3(2π/3) = 4(2π/4)
T = 2π second
(iii) x(t) = sin²t
x(t) = (1 – cos2t)/2
Fundamental period, T = 2π/2 = π second
The given signal x(t) is periodic.
(iv) x(t) = 3u(t) + sin4t
Note:
u(t) does not repeat, hence it is aperiodic.
periodic × aperiodic = aperiodic; periodic + aperiodic = aperiodic
The period of sin4t is T = 2π/4 = π/2, so sin4t is periodic.
The sum of an aperiodic signal and a periodic signal is aperiodic.
So the given x(t) is an aperiodic signal.
4 2
(v) x(t) = 2cos t + 5sin t
3 3

2
Period, T1 =  3 sec ond
4 2
3

2
T2 =  3 sec ond
2
3
The ratio of two periods
1.40 Signals and systems

3
T1 1
 2  is a rational number
T2 3 2

Hence x(t) is periodic signal


Fundamental period T = 2T1 = T2
T = 3 second.
(vi) x(t) = 3cos5t + 2sin πt
Period, T1 = 2π/5 second
T2 = 2π/π = 2 second
The ratio of the two periods,
T1/T2 = (2π/5)/2 = π/5 is an irrational number
Hence it is an aperiodic signal.
(vii) x(t) = cos(πt/3) + sin(πt/5)
T1 = 2π/(π/3) = 6 second
T2 = 2π/(π/5) = 10 second
The ratio of the two periods,
T1/T2 = 6/10 is a rational number
Hence the given signal is a periodic signal.
Fundamental period, T = 10T1 = 6T2
T = 60 second.
(viii) x(t) = j e^(j5t)
Period, T = 2π/5 is an irrational number. Hence the given signal is an aperiodic signal.
(ix) x(t) = e^(j8πt)
Period, T = 2π/8π = 1/4 = 0.25 second
It is a rational number. Hence the given signal is periodic with fundamental period
T = 0.25 second.

(x) cost + sin 2 t

Period, T1 = 2  2 second
1

2
T2 =  2 sec ond
2
The ratio of two periods,

T1 2
  2 is a irrational number
T2 2

Hence the given signal is non-periodic.


6. Test the periodicity of x(t) = t e^(sint).
Solution:
Given x(t) = t e^(sint)
Try T = 2π, the period of sint:
x(t + 2π) = (t + 2π) e^(sin(t + 2π))
= (t + 2π) e^(sint)
≠ x(t)
Because of the factor t, no T can satisfy x(t + T) = x(t).
Hence the given x(t) is a non-periodic signal.
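The rational-ratio test used throughout problem 5 can be automated for a sum of two sinusoids with frequencies ω1 and ω2. A heuristic sketch: comparing a float ratio against `limit_denominator` is an approximation, not a proof of irrationality, and `sum_period` is an illustrative helper, not a standard API:

```python
import math
from fractions import Fraction

def sum_period(w1, w2, tol=1e-9):
    """Fundamental period of a sum of sinusoids at w1, w2 rad/s,
    or None if the frequency ratio looks irrational (heuristic float test)."""
    ratio = Fraction(w1 / w2).limit_denominator(1000)
    if abs(w1 / w2 - ratio) > tol:
        return None                        # no small rational ratio -> aperiodic
    # w1/w2 = q/p reduced  =>  T1/T2 = p/q  =>  T = q*T1 = p*T2
    q, p = ratio.numerator, ratio.denominator
    return q * (2 * math.pi / w1)

print(sum_period(3, 4))          # part (ii): 2*pi
print(sum_period(5, math.pi))    # part (vi): None (ratio pi/5 is irrational)
```

For part (ii) this reproduces T = 3T1 = 4T2 = 2π found above.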
7. Check whether the following signals are periodic or not. If periodic, determine the
fundamental period.
(i) cos(2πn/5) + cos(2πn/7)
(ii) e^(j7πn)
(iii) cos(n/8) cos(πn/8)
(iv) cos(n/4)
(v) sin(2πn/3)
(vi) sin(n/8)
(vii) cos(0.1πn)
(viii) (3/5) e^(j3π(n + 1/2))
(ix) cos²(πn/8)
(x) 1 + e^(j4πn/7) – e^(j2πn/5)

Solution:
(i) cos(2πn/5) + cos(2πn/7)
Period, N = (2π/ω0)m
N1 = (2π/(2π/5))m = 5m; choose m = 1, N1 = 5
N2 = (2π/(2π/7))m = 7m; choose m = 1, N2 = 7
The ratio of the two periods,
N1/N2 = 5/7 is a rational number
Hence the given signal is a periodic signal.
Fundamental period, N = 7N1 = 5N2
N = 35 samples.
(ii) e^(j7πn)
Period, N = (2π/7π)m = (2/7)m; 2/7 is a rational number
Hence the given signal is a periodic signal.
Fundamental period, N = (2/7)m
Choose m = 7, ∴ N = 2.

(iii) cos(n/8) cos(πn/8)
Period, N1 = 2πm/(1/8) = 16πm
Choose m = 1, ∴ N1 = 16π
N2 = 2πm/(π/8) = 16m
Choose m = 1, ∴ N2 = 16
The ratio of the two periods,
N1/N2 = 16π/16 = π is not a rational number
Hence the given signal is non-periodic.

(iv) cos(n/4)
Period, N = 2πm/(1/4) = 8πm
Choose m = 1; N = 8π is not a rational number
Hence the given signal is an aperiodic signal.
(v) sin(2πn/3)
Period, N = 2πm/(2π/3) = 3m
Choose m = 1, N = 3 is a rational number
Hence the given signal is periodic with period N = 3.

(vi) sin(n/8)
Period, N = 2πm/(1/8) = 16πm
Choose m = 1, N = 16π is not a rational number
Hence the given signal is an aperiodic signal.

(vii) cos(0.1πn)
Period, N = 2πm/0.1π = 20m
Choose m = 1, N = 20 is a rational number
Hence the given signal is periodic with period N = 20.

(viii) (3/5) e^(j3π(n + 1/2))
Period, N = (2π/3π)m = (2/3)m
Choose m = 3, N = 2 is a rational number
Hence the given signal is periodic with period N = 2.
(ix) x(n) = cos²(πn/8)
x(n) = cos²(πn/8) = (1 + cos(2πn/8))/2
= (1 + cos(πn/4))/2
N = 2πm/(π/4) = 8m
Choose m = 1, N = 8 is a rational number
Hence the given signal is periodic with fundamental period N = 8.
(x) 1 + e^(j4πn/7) – e^(j2πn/5)
N1 = 2πm/(4π/7) = (7/2)m
Choose m = 2, N1 = 7
N2 = 2πm/(2π/5) = 5m
Choose m = 1, N2 = 5
The ratio of the two periods,
N1/N2 = 7/5 is a rational number
Hence the given signal is a periodic signal.
Fundamental period, N = 5N1 = 7N2
N = 35.
1.8.3 Even (symmetric) and odd (antisymmetric) signals
A continuous time signal x(t) is said to be an even signal if it satisfies the condition
x(–t) = x(t) for all t
Example: x(t) = A cos ωt (the waveform is symmetric about t = 0)
A continuous time signal x(t) is said to be an odd signal if it satisfies the condition
x(–t) = –x(t) for all t
Example: x(t) = A sin ωt (the waveform is antisymmetric about t = 0)
Any signal x(t) can be expressed as the sum of even and odd components,
i.e., x(t) = xe(t) + xo(t) ----------- (1)
where xe(t) is the even component and xo(t) is the odd component of the signal.
Replacing t by –t in equation (1),
x(–t) = xe(–t) + xo(–t)
x(–t) = xe(t) – xo(t) ----------- (2)
Adding equations (1) & (2),
x(t) + x(–t) = 2xe(t)
∴ xe(t) = [x(t) + x(–t)]/2
Subtracting equation (2) from (1),
x(t) – x(–t) = 2xo(t)
∴ xo(t) = [x(t) – x(–t)]/2
Similarly, a discrete time signal x(n) is even if it satisfies the condition
x(–n) = x(n) for all n
A DT signal x(n) is odd if it satisfies the condition
x(–n) = –x(n) for all n
For a DT signal the even and odd parts can be obtained by
xe(n) = [x(n) + x(–n)]/2
xo(n) = [x(n) – x(–n)]/2
Note:
Even × Even = Even
Odd × Odd = Even
Even × Odd = Odd
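The DT formulas can be applied directly to a finite-length sequence stored as a dict, with samples outside the listed support taken as zero. A sketch, using the sequence x(n) = {5, 4, 3, 2, 1}, n = 0…4 from a worked problem as an example:

```python
def even_odd(x):
    """Split a DT signal (dict n -> value, zero elsewhere) into even and odd parts."""
    support = set(x) | {-n for n in x}
    xe = {n: (x.get(n, 0) + x.get(-n, 0)) / 2 for n in support}
    xo = {n: (x.get(n, 0) - x.get(-n, 0)) / 2 for n in support}
    return xe, xo

# x(n) = {5, 4, 3, 2, 1} for n = 0..4
x = {0: 5, 1: 4, 2: 3, 3: 2, 4: 1}
xe, xo = even_odd(x)
print([xe[n] for n in range(5)])   # [5.0, 2.0, 1.5, 1.0, 0.5]
print([xo[n] for n in range(5)])   # [0.0, 2.0, 1.5, 1.0, 0.5]
# sanity: x(n) = xe(n) + xo(n) on the support
assert all(abs(xe[n] + xo[n] - x[n]) < 1e-12 for n in x)
```

Note that xe and xo are defined on the symmetric support −4…4 even though x was only listed for n ≥ 0.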
Problems
1. Find the even and odd components of x(t) = cost + sint.
Solution:
Given x(t) = cost + sint
x(–t) = cos(–t) + sin(–t) = cost – sint
The even component, xe(t) = [x(t) + x(–t)]/2
= [cost + sint + cost – sint]/2
= 2cost/2
xe(t) = cost
The odd component, xo(t) = [x(t) – x(–t)]/2
= [cost + sint – cost + sint]/2
= 2sint/2
xo(t) = sint.
2. Find the even and odd components of x(t) = 1 + 2t + 3t².
Solution:
Given x(t) = 1 + 2t + 3t²
x(–t) = 1 – 2t + 3t²
The even component, xe(t) = [x(t) + x(–t)]/2
= [1 + 2t + 3t² + 1 – 2t + 3t²]/2
= (2 + 6t²)/2
xe(t) = 1 + 3t²
The odd component, xo(t) = [x(t) – x(–t)]/2
= [1 + 2t + 3t² – 1 + 2t – 3t²]/2
= 4t/2
xo(t) = 2t.
3. Find the even and odd components of the following signals.
(i) x(n) = {5, 4, 3, 2, 1}, n = 0, 1, 2, 3, 4
(ii) x(n) = {3, 2, 1, 4, 2}, n = –2, –1, 0, 1, 2
Solution:
(i) Given x(n) = {5, 4, 3, 2, 1}, n = 0, 1, 2, 3, 4
The even component, xe(n) = [x(n) + x(–n)]/2
For n = 0, xe(0) = [x(0) + x(0)]/2 = (5 + 5)/2 = 5
For n = 1, xe(1) = [x(1) + x(–1)]/2 = (4 + 0)/2 = 2
For n = 2, xe(2) = [x(2) + x(–2)]/2 = (3 + 0)/2 = 1.5
For n = 3, xe(3) = [x(3) + x(–3)]/2 = (2 + 0)/2 = 1
For n = 4, xe(4) = [x(4) + x(–4)]/2 = (1 + 0)/2 = 0.5
∴ xe(n) = {5, 2, 1.5, 1, 0.5} for n = 0, 1, 2, 3, 4
The odd component, xo(n) = [x(n) – x(–n)]/2
For n = 0, xo(0) = [x(0) – x(0)]/2 = (5 – 5)/2 = 0
For n = 1, xo(1) = [x(1) – x(–1)]/2 = (4 – 0)/2 = 2
For n = 2, xo(2) = [x(2) – x(–2)]/2 = (3 – 0)/2 = 1.5
For n = 3, xo(3) = [x(3) – x(–3)]/2 = (2 – 0)/2 = 1
For n = 4, xo(4) = [x(4) – x(–4)]/2 = (1 – 0)/2 = 0.5
∴ xo(n) = {0, 2, 1.5, 1, 0.5} for n = 0, 1, 2, 3, 4
(ii) Given x(n) = {3, 2, 1, 4, 2}, n = –2, –1, 0, 1, 2
The even component, xe(n) = [x(n) + x(–n)]/2
For n = –2, xe(–2) = [x(–2) + x(2)]/2 = (3 + 2)/2 = 2.5
For n = –1, xe(–1) = [x(–1) + x(1)]/2 = (2 + 4)/2 = 3
For n = 0, xe(0) = [x(0) + x(0)]/2 = (1 + 1)/2 = 1
For n = 1, xe(1) = [x(1) + x(–1)]/2 = (4 + 2)/2 = 3
For n = 2, xe(2) = [x(2) + x(–2)]/2 = (2 + 3)/2 = 2.5
∴ xe(n) = {2.5, 3, 1, 3, 2.5}
The odd component, xo(n) = [x(n) – x(–n)]/2
For n = –2, xo(–2) = [x(–2) – x(2)]/2 = (3 – 2)/2 = 0.5
For n = –1, xo(–1) = [x(–1) – x(1)]/2 = (2 – 4)/2 = –1
For n = 0, xo(0) = [x(0) – x(0)]/2 = (1 – 1)/2 = 0
For n = 1, xo(1) = [x(1) – x(–1)]/2 = (4 – 2)/2 = 1
For n = 2, xo(2) = [x(2) – x(–2)]/2 = (2 – 3)/2 = –0.5
∴ xo(n) = {0.5, –1, 0, 1, –0.5}

6. Find the even and odd components of the unit step signal.
Solution:
x(t) = u(t), which is 1 for t ≥ 0 and 0 for t < 0.
For the even component, xe(t) = [x(t) + x(–t)]/2
Since u(t) + u(–t) = 1 for all t,
xe(t) = 0.5 for all t
For the odd component, xo(t) = [x(t) – x(–t)]/2
xo(t) = 0.5 for t > 0 and –0.5 for t < 0
7. Find the even and odd components of the signal x(t) = e^(jt).
Solution:
Given x(t) = e^(jt)
The even component, xe(t) = [x(t) + x(–t)]/2
xe(t) = (e^(jt) + e^(–jt))/2 = cost
The odd component, xo(t) = [x(t) – x(–t)]/2
xo(t) = (e^(jt) – e^(–jt))/2 = j sint.
8. Find the even and odd components of the signal x(t) = cost sint + 2sin²t cost.
Solution:
Given x(t) = cost sint + 2sin²t cost
∴ x(–t) = cos(–t) sin(–t) + 2sin²(–t) cos(–t)
= –cost sint + 2sin²t cost
The even component, xe(t) = [x(t) + x(–t)]/2
xe(t) = [cost sint + 2sin²t cost – cost sint + 2sin²t cost]/2 = 4sin²t cost/2
xe(t) = 2sin²t cost
The odd component, xo(t) = [x(t) – x(–t)]/2
xo(t) = [cost sint + 2sin²t cost + cost sint – 2sin²t cost]/2 = 2cost sint/2
xo(t) = cost sint.
1.8.4 Causal and Non-causal signals
A continuous time signal x(t) is said to be causal if x(t) = 0 for t < 0; otherwise the signal is
non-causal. A continuous time signal x(t) is said to be anticausal if x(t) = 0 for t > 0.
Similarly, a discrete time signal x(n) is said to be causal if x(n) = 0 for n < 0; otherwise the
signal is non-causal. A discrete time signal x(n) is said to be anticausal if x(n) = 0 for n > 0.
Problems
1. Find which of the following signals are causal or non-causal.
(i) x(t) = e^(3t) u(t – 3)
(ii) x(t) = cos3t
(iii) x(t) = 4 sinc t
(iv) x(n) = u(n + 5) – u(n – 3)
(v) x(n) = (1/4)^n u(n + 4)
Solution:
(i) Given x(t) = e^(3t) u(t – 3)
The signal x(t) is causal because x(t) = 0 for t < 0.
(ii) x(t) = cos3t
The given signal exists from –∞ to ∞. Since x(t) ≠ 0 for t < 0, the signal is non-causal.
(iii) x(t) = 4 sinc t
A sinc signal exists for t < 0 also. So the given x(t) is non-causal.
(iv) x(n) = u(n + 5) – u(n – 3)
The given signal exists from n = –5 to 2. Since x(n) ≠ 0 for n < 0, x(n) is non-causal.
(v) x(n) = (1/4)^n u(n + 4)
The given signal exists for n < 0 (from n = –4 onward). So it is non-causal.
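For finite-support DT signals like (iv) and (v), causality reduces to "no nonzero sample at negative n", which can be spot-checked over an evaluation window. A sketch; the window width is an arbitrary choice:

```python
def is_causal(x, n_range):
    """True if x(n) == 0 for every sampled n < 0 (x given as a function of n)."""
    return all(x(n) == 0 for n in n_range if n < 0)

u = lambda n: 1 if n >= 0 else 0   # discrete unit step

window = range(-20, 21)
x_iv = lambda n: u(n + 5) - u(n - 3)       # nonzero for -5 <= n <= 2
x_v = lambda n: (1 / 4) ** n * u(n + 4)    # nonzero from n = -4 onward
x_c = lambda n: (1 / 2) ** n * u(n)        # zero for all n < 0

print(is_causal(x_iv, window))  # False
print(is_causal(x_v, window))   # False
print(is_causal(x_c, window))   # True
```

A finite window can only refute causality, not prove it for all n; for the closed-form signals above the window is wide enough to cover the full negative-time support.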
1.8.5 Energy and Power signals
A signal is said to be an energy signal if and only if its total energy E is finite. Non-periodic
signals like decaying exponential signals have finite energy, so non-periodic signals are typical
examples of energy signals.
A signal is said to be a power signal if its average power P is finite. Periodic signals like
sinusoidal and complex exponential signals have constant power, so periodic signals are
examples of power signals.
Generally, power is the product of voltage and current in circuit analysis. The instantaneous
power is defined as
p(t) = v(t) i(t) = v²(t)/R = i²(t) R
Taking R = 1 Ω, power is defined as the square of the voltage (or) current.
Total energy, E = lim(T→∞) ∫[–T, T] i²(t) dt Joules
Average power, P = lim(T→∞) (1/2T) ∫[–T, T] i²(t) dt watts
The energy E of a continuous time signal x(t) is defined as
E = lim(T→∞) ∫[–T, T] |x(t)|² dt in Joules
= ∫[–∞, ∞] |x(t)|² dt
The average power of a continuous time signal x(t) is defined as
P = lim(T→∞) (1/2T) ∫[–T, T] |x(t)|² dt in watts
RMS value = √P
For energy signals, the energy is finite, i.e., 0 < E < ∞, and the average power is zero.
For power signals, the average power is finite, i.e., 0 < P < ∞, and the energy is infinite.
For discrete time signals
If the total energy is finite and the average power is zero, the signal is an energy signal:
E = lim(N→∞) Σ from n = –N to N of |x(n)|²
If the total energy is infinite and the average power is finite, the signal is a power signal:
P = lim(N→∞) (1/(2N+1)) Σ from n = –N to N of |x(n)|²
Comparison of Energy and Power signals
1. Energy signal: E = lim(T→∞) ∫[–T, T] |x(t)|² dt.
   Power signal: P = lim(T→∞) (1/2T) ∫[–T, T] |x(t)|² dt.
2. Energy signal: the normalized energy is finite and the average power is zero.
   Power signal: the average power is finite and the energy is infinite.
3. Energy signal: non-periodic signals are energy signals.
   Power signal: periodic signals are power signals.
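Both definitions can be approximated by Riemann sums over a growing window. A sketch contrasting an energy signal with a power signal; the step sizes and window lengths are arbitrary numerical choices:

```python
import math

def energy(x, T, dt=1e-3):
    """Left Riemann sum approximating the integral of |x(t)|^2 over [-T, T]."""
    n = int(2 * T / dt)
    return sum(abs(x(-T + k * dt)) ** 2 for k in range(n)) * dt

def power(x, T, dt=1e-2):
    """(1/2T) times the approximate energy integral over [-T, T]."""
    return energy(x, T, dt) / (2 * T)

exp_sig = lambda t: math.exp(-t) if t >= 0 else 0.0   # e^(-t) u(t): energy signal
sin_sig = lambda t: math.sin(t)                       # sin t: power signal

# exact values: E = 1/2 for e^(-t)u(t); P = 1/2 for sin t
print(energy(exp_sig, 50))   # close to 0.5; its power over a long window -> 0
print(power(sin_sig, 500))   # close to 0.5; its energy grows without bound
```

Widening the window leaves the energy of e^(–t)u(t) fixed while its average power shrinks toward zero, which is exactly the energy-signal behaviour described above.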
(iii) x(t) = rect(t/10) cos ω0t
x(t) = rect(t/10) cos ω0t = { cos ω0t, –5 ≤ t ≤ 5; 0, otherwise }
Energy, E = ∫[–∞, ∞] |x(t)|² dt
= ∫[–5, 5] cos² ω0t dt
= ∫[–5, 5] (1 + cos 2ω0t)/2 dt
= (1/2) [ ∫[–5, 5] dt + ∫[–5, 5] cos 2ω0t dt ]   (the second integral is 0)
= (1/2) [t] from –5 to 5
= 10/2
E = 5 J
Power, P = lim(T→∞) (1/2T) ∫[–T, T] |x(t)|² dt
= lim(T→∞) (1/2T) ∫[–5, 5] cos² ω0t dt
= lim(T→∞) (1/2T) · 5
P = 0
Therefore the energy is finite and the power is zero. Hence the signal is an energy signal.
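The E = 5 J result can be cross-checked numerically. In this sketch ω0 = π is an assumed concrete value; any ω0 for which the cos 2ω0t term integrates to (approximately) zero over [–5, 5] gives the same answer:

```python
import math

def rect(t):
    """rect(t) = 1 for |t| <= 1/2, 0 otherwise."""
    return 1.0 if abs(t) <= 0.5 else 0.0

w0 = math.pi   # assumed value for illustration
x = lambda t: rect(t / 10) * math.cos(w0 * t)

# left Riemann sum of |x(t)|^2 over [-6, 6], which covers the support [-5, 5]
dt = 1e-4
E = sum(x(-6 + k * dt) ** 2 for k in range(int(12 / dt))) * dt
print(E)   # close to 5.0
```

The same loop with the window stretched to a large [–T, T] and divided by 2T gives a power estimate near zero, matching P = 0 above.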
1.9 CT systems and DT systems
Continuous time (CT) systems
A system which processes a continuous time input signal and produces a continuous time
output signal is called a continuous time system.
x(t) → [Continuous time system] → y(t)
Mathematically, the functional relationship between input and output may be written as
y(t) = H[x(t)]
Symbolically, x(t) → y(t)
where,
x(t) → continuous time input signal
y(t) → continuous time output signal
H → system operator
When a continuous time system satisfies the properties of linearity and time invariance, it
is called a Linear Time Invariant (LTI) continuous time system.
Discrete time (DT) systems
A system which processes a discrete time input signal and produces a discrete time output
signal is called a discrete time system.
x(n) → [Discrete time system] → y(n)
Mathematically, the functional relationship between input and output may be written as
y(n) = H[x(n)]
Symbolically, x(n) → y(n)
where,
x(n) → discrete time input signal
y(n) → discrete time output signal
H → system operator
When a discrete time system satisfies the properties of linearity and time invariance, it is
called a linear time invariant (LTI) discrete time system.
1.10 Classification of systems / properties of systems
1. Lumped parameter and distributed parameter systems.
2. Static (memoryless) and dynamic (memory) systems.
3. Linear and non-linear systems.
4. Time variant and time invariant systems.
5. Causal and non-causal systems.
6. Stable and unstable systems.
7. Invertible and non-invertible systems.
8. Recursive and non-recursive systems.
1.10.1 Lumped parameter and distributed parameter systems
Lumped parameter systems are systems in which each component is lumped at one point in
space. These systems are described by ordinary differential equations.
Distributed parameter systems are systems in which signals are functions of space as well
as time. These systems are described by partial differential equations.
1.10.2 Static and dynamic systems
A system is said to be static if the output of the system depends only on the present input. It is
also called a memoryless system (or system without memory).
Example: a purely resistive electrical circuit.
y(t) = x(t)
y(n) = n x²(n)
A system is said to be dynamic if the output of the system depends on past and/or future values
of the input. It is also called a memory system (or system with memory).
Example: an electrical circuit containing inductors and/or capacitors.
y(t) = x(t – 2)
y(t) = dx(t)/dt + x(t)
y(n) = x(n) + x(n + 1)
Any continuous time system described by a differential equation, or any discrete time system
described by a difference equation, is a dynamic system.
Problems
1. Check whether the following systems are static or dynamic.
(i) y(t) = x(t – 4)
(ii) y(n) = x²(n)
(iii) y(t) = d²x(t)/dt² + x(t)
(iv) y(t) = ∫ x(t) dt
(v) y(n) = x(n) + x(n – 3)
(vi) y(n) = x(n + 5)
(vii) y(t) = x(3t)
Solution:
(i) Given y(t) = x(t – 4)
Substitute t = 0: y(0) = x(–4)
The output depends on a past value of the input. Therefore the system is a dynamic system.
(ii) y(n) = x²(n)
Put n = 0: y(0) = x²(0)
The output depends on the present value of the input. Therefore the system is a static system.
(iii) y(t) = d²x(t)/dt² + x(t)
The system is described by a differential equation. Therefore the system is a dynamic system.
(iv) y(t) = ∫ x(t) dt
The output y(t) is the integral of the input x(t), which accumulates past values. Therefore the
system is a dynamic system.
(v) y(n) = x(n) + x(n – 3)
Put n = 0: y(0) = x(0) + x(–3)
The output depends on present and past values of the input. Therefore the system is a dynamic
system.
(vi) y(n) = x(n + 5)
Put n = 0: y(0) = x(5)
The output depends on a future value of the input. Therefore the system is a dynamic system.
(vii) y(t) = x(3t)
Put t = 2: y(2) = x(6)
The output depends on a future value of the input. Therefore the system is a dynamic system.
1.10.3 Linear and non-linear systems
A system that satisfies the superposition principle is said to be a linear system. A system
which does not satisfy the superposition principle is said to be a non-linear system.
The superposition principle consists of two properties:
(i) Additivity property
(ii) Scaling/homogeneity property
Consider two systems defined as follows:
y1(t) = H[x1(t)]
y2(t) = H[x2(t)]
The additivity property is
H[x1(t) + x2(t)] = H[x1(t)] + H[x2(t)] = y1(t) + y2(t)
The scaling property is
H[a x(t)] = a H[x(t)] = a y(t)
The superposition principle states that the response to a weighted sum of input signals is
equal to the weighted sum of the outputs corresponding to each individual input signal, i.e.,
H[a x1(t) + b x2(t)] = a H[x1(t)] + b H[x2(t)] = a y1(t) + b y2(t)
∴ H[a x1(t) + b x2(t)] = a y1(t) + b y2(t)
where a, b are constants.
For DT systems:
H[a x1(n) + b x2(n)] = a H[x1(n)] + b H[x2(n)] = a y1(n) + b y2(n)
∴ H[a x1(n) + b x2(n)] = a y1(n) + b y2(n)
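The superposition test can be packaged as a numeric harness that probes a system operator H at random weights and times; passing is evidence of linearity, not a proof. A sketch, using y(t) = t x(t) (linear) and y(t) = 5x(t) + 4 (non-linear) as examples:

```python
import random

def looks_linear(H, trials=100, tol=1e-9):
    """Numerically test H[a*x1 + b*x2](t) == a*H[x1](t) + b*H[x2](t)."""
    rng = random.Random(0)
    for _ in range(trials):
        a, b, c1, c2 = (rng.uniform(-5, 5) for _ in range(4))
        x1 = lambda t: c1 * t          # sample inputs: scaled ramps
        x2 = lambda t: c2 * t + c1
        t = rng.uniform(-5, 5)
        lhs = H(lambda u: a * x1(u) + b * x2(u))(t)
        rhs = a * H(x1)(t) + b * H(x2)(t)
        if abs(lhs - rhs) > tol:
            return False
    return True

H_scale = lambda x: (lambda t: t * x(t))         # y(t) = t x(t): linear
H_offset = lambda x: (lambda t: 5 * x(t) + 4)    # y(t) = 5x(t) + 4: fails superposition

print(looks_linear(H_scale))    # True
print(looks_linear(H_offset))   # False
```

The additive constant in y(t) = 5x(t) + 4 is what breaks superposition: the response to a x1 + b x2 picks up 4, not 4(a + b).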
Problems
1. Check whether the following systems are linear or not.
(1) dy(t)/dt + 3t y(t) = t² x(t)
(2) y(n) = 2x(n) + 1/x(n – 1)
(3) y(t) = e^(x(t))
(4) y(t) = 3x²(t)
(5) T[x(n)] = x(n – n0)
(6) y(t) = 5x(t) + 4
(7) y(n) = n² x(n)
(8) y(t) = x(t²)
(9) y(n) = 2x(n) + 5
(10) d²y(t)/dt² + 2 dy(t)/dt = x(t) + 4
(11) y(t) = A x(t) + B
(12) y(n) = ln[x(n)]
(13) y(n) = x²(n) + x²(n – 2)
(14) y(t) = x[sint]
(15) y(t) = sin[x(t)]
Solution:
(1) Given dy(t)/dt + 3t y(t) = t² x(t)
Consider two inputs x1(t) and x2(t), with responses y1(t) and y2(t):
dy1(t)/dt + 3t y1(t) = t² x1(t) -------------- (1)
dy2(t)/dt + 3t y2(t) = t² x2(t) -------------- (2)
Adding equations (1) & (2),
d/dt [y1(t) + y2(t)] + 3t [y1(t) + y2(t)] = t² [x1(t) + x2(t)] -------------- (3)
Substituting y(t) = y1(t) + y2(t) and x(t) = x1(t) + x2(t) in the given system,
d[y1(t) + y2(t)]/dt + 3t [y1(t) + y2(t)] = t² [x1(t) + x2(t)] -------------- (4)
Comparing equations (3) & (4),
(3) = (4)
Therefore the system is a linear system.
(2) Given y(n) = 2x(n) + 1/x(n – 1)
For an input x1(n) the output is y1(n):
y1(n) = 2x1(n) + 1/x1(n – 1) -------------- (1)
For an input x2(n) the output is y2(n):
y2(n) = 2x2(n) + 1/x2(n – 1) -------------- (2)
Adding equations (1) & (2),
y1(n) + y2(n) = 2x1(n) + 2x2(n) + 1/x1(n – 1) + 1/x2(n – 1) -------------- (3)
Substituting y(n) = y1(n) + y2(n) and x(n) = x1(n) + x2(n) in the given system,
y1(n) + y2(n) = 2[x1(n) + x2(n)] + 1/[x1(n – 1) + x2(n – 1)] -------------- (4)
From equations (3) & (4),
(3) ≠ (4)
Therefore the system is a non-linear system.
(3) y(t) = e^(x(t))
For an input x1(t) the corresponding output is y1(t):
y1(t) = e^(x1(t)) -------------- (1)
For an input x2(t) the corresponding output is y2(t):
y2(t) = e^(x2(t)) -------------- (2)
Adding equations (1) & (2),
y1(t) + y2(t) = e^(x1(t)) + e^(x2(t)) -------------- (3)
Substituting y(t) = y1(t) + y2(t) and x(t) = x1(t) + x2(t) in the given system,
y1(t) + y2(t) = e^(x1(t) + x2(t)) = e^(x1(t)) e^(x2(t)) -------------- (4)
From equations (3) & (4),
(3) ≠ (4)
Therefore the system is a non-linear system.
(4) y(t) = 3x²(t)
For an input x1(t) the corresponding output is y1(t):
y1(t) = 3x1²(t) -------------- (1)
For an input x2(t) the corresponding output is y2(t):
y2(t) = 3x2²(t) -------------- (2)
Adding equations (1) & (2),
y1(t) + y2(t) = 3x1²(t) + 3x2²(t) -------------- (3)
Substituting y(t) = y1(t) + y2(t) and x(t) = x1(t) + x2(t) in the given system,
y1(t) + y2(t) = 3[x1(t) + x2(t)]² -------------- (4)
From equations (3) & (4),
(3) ≠ (4)
Therefore the system is a non-linear system.
(5) T[x(n)] = x(n – n0)
T[x(n)] means y(n), so y(n) = x(n – n0)
For an input x1(n) the corresponding output is y1(n):
y1(n) = x1(n – n0) -------------- (1)
For an input x2(n) the corresponding output is y2(n):
y2(n) = x2(n – n0) -------------- (2)
Adding equations (1) & (2),
y1(n) + y2(n) = x1(n – n0) + x2(n – n0) -------------- (3)
Substituting y(n) = y1(n) + y2(n) and x(n) = x1(n) + x2(n) in the given system,
y1(n) + y2(n) = x1(n – n0) + x2(n – n0) -------------- (4)
From equations (3) & (4),
(3) = (4)
Therefore the system is a linear system.
(6) y(t) = 5x(t) + 4
For an input x1(t), the corresponding output is y1(t), then
(10) dy(t)/dt + 10y(t) + 5 = x(t)
For t = –1: dy(–1)/dt + 10y(–1) + 5 = x(–1)
For t = 0: dy(0)/dt + 10y(0) + 5 = x(0)
For t = 1: dy(1)/dt + 10y(1) + 5 = x(1)
For all values of t, the output depends only on present values of the input. Therefore the
system is causal.
1.10.6 Stable and unstable systems
A system is said to be BIBO (Bounded Input Bounded Output) stable if and only if every
bounded input produces a bounded output.
Let the input signal x(t) be finite (bounded),
i.e., |x(t)| < Mx < ∞ for all t
Then the output signal y(t) must also be finite (bounded),
i.e., |y(t)| < My < ∞ for all t
where Mx, My are positive real numbers.
A system that gives an unbounded output for a bounded input is called an unstable system.
Condition for stability of an LTI-CT system:
∫[–∞, ∞] |h(t)| dt < ∞
Condition for stability of an LTI-DT system:
Σ from n = –∞ to ∞ of |h(n)| < ∞

Check whether the following systems are stable or not.
(1) y(t) = t x(t)
(2) y(t) = 5x(t) + 3
(3) h(t) = e^(–4|t|)
(4) h(n) = u(n)
(5) h(t) = e^(3t) u(t)
(6) h(n) = a^n u(n)
(7) h(n) = 3^n u(n – 2)
(8) h(t) = t e^(–t) u(t – 1)
(9) h(t) = (R/L) e^(–tR/L) u(t)
Solution:
(1) Given y(t) = t x(t)
Even for a bounded x(t), as t → ∞ the output y(t) = t x(t) grows without bound.
Therefore the output is unbounded and the system is an unstable system.
(2) y(t) = 5x(t) + 3
A bounded x(t) produces a bounded output, so the system is a stable system.
(3) h(t) = e^(–4|t|)
∫[–∞, ∞] |h(t)| dt = ∫[–∞, ∞] e^(–4|t|) dt
= ∫[–∞, 0] e^(4t) dt + ∫[0, ∞] e^(–4t) dt
= [e^(4t)/4] from –∞ to 0 + [–e^(–4t)/4] from 0 to ∞
= (1/4 – 0) + (0 + 1/4)
= 1/2 < ∞
Hence the system is stable.
(4) h(n) = u(n)
Σ from n = –∞ to ∞ of |h(n)| = Σ from n = –∞ to ∞ of |u(n)|
= Σ from n = 0 to ∞ of 1
= 1 + 1 + ··· = ∞
So the sum is unbounded and the system is unstable.
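Divergence checks like the one in (4) can be automated by watching partial sums of |h(n)| for causal impulse responses. A heuristic sketch; the cutoff and tolerance are arbitrary, and a sum that is still growing at the cutoff only suggests instability rather than proving it:

```python
def looks_bibo_stable(h, n_max=10_000, tol=1e-12):
    """Heuristic: partial sums of |h(n)| for causal h stop growing -> stable."""
    total, prev = 0.0, -1.0
    for n in range(n_max):
        total += abs(h(n))
        if total - prev < tol:   # increments have become negligible
            return True
        prev = total
    return False                 # still growing at the cutoff

h_stable = lambda n: 0.5 ** n    # geometric: sum = 2 < inf
h_unstable = lambda n: 1.0       # u(n): sum diverges

print(looks_bibo_stable(h_stable))     # True
print(looks_bibo_stable(h_unstable))   # False
```

For h(n) = aⁿu(n) this agrees with the closed-form result below: the partial sums settle exactly when |a| < 1.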
(5) h(t) = e^(3t) u(t)
∫[–∞, ∞] |h(t)| dt = ∫[–∞, ∞] e^(3t) u(t) dt
= ∫[0, ∞] e^(3t) dt
= [e^(3t)/3] from 0 to ∞
= (∞ – 1)/3 = ∞
The integral is unbounded. Hence the system is unstable.
(6) h(n) = a^n u(n)
Σ from n = –∞ to ∞ of |h(n)| = Σ from n = –∞ to ∞ of |a^n u(n)|
= Σ from n = 0 to ∞ of |a|^n
= 1/(1 – |a|) for |a| < 1
So the given system is stable if and only if |a| < 1.
(7) h(n) = 3^n u(n – 2)
Σ from n = –∞ to ∞ of |h(n)| = Σ from n = –∞ to ∞ of 3^n u(n – 2)
= Σ from n = 2 to ∞ of 3^n
= 3² + 3³ + ··· = ∞
The sum is unbounded. Hence the system is unstable.
(8) h(t) = t e^(–t) u(t – 1)
∫[–∞, ∞] |h(t)| dt = ∫[–∞, ∞] t e^(–t) u(t – 1) dt
= ∫[1, ∞] t e^(–t) dt
Integrating by parts with u = t, dv = e^(–t) dt, so du = dt, v = –e^(–t)   [∫u dv = uv – ∫v du]
= [–t e^(–t)] from 1 to ∞ + ∫[1, ∞] e^(–t) dt
= [–t e^(–t) – e^(–t)] from 1 to ∞
= 0 + e^(–1) + e^(–1)
= 2e^(–1) < ∞
Hence the integral is bounded. The system is stable.
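The 2e⁻¹ value can be confirmed by quick numerical integration; a sketch using the midpoint rule on a truncated range, with the upper limit 50 standing in for ∞ (the tail beyond it is negligible):

```python
import math

h = lambda t: t * math.exp(-t) if t >= 1 else 0.0   # h(t) = t e^(-t) u(t - 1)

dt = 1e-4
# midpoint rule on [1, 50]
total = sum(h(1 + (k + 0.5) * dt) for k in range(int(49 / dt))) * dt

print(total, 2 / math.e)   # both approximately 0.735759
```

Agreement to several decimal places confirms that Σ of the by-parts evaluation above was carried out correctly.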
