digital_logic_lecture_note
Introduction
A digital computer stores data in terms of digits (numbers) and proceeds in discrete steps from one state to the next.
The states of a digital computer typically involve binary digits, which may take the form of the presence or absence of
magnetic markers in a storage medium, on-off switches, or relays. In digital computers, even letters, words and whole
texts are represented digitally.
Digital Logic is the basis of electronic systems, such as computers and cell phones. Digital Logic is rooted in
binary code, a series of zeroes and ones each having an opposite value. This system facilitates the design of
electronic circuits that convey information, including logic gates. Digital Logic gate functions include AND, OR
and NOT. The value system translates input signals into specific outputs. Digital Logic facilitates computing,
robotics and other electronic applications.
Digital Logic Design is foundational to the fields of electrical engineering and computer engineering. Digital
Logic designers build complex electronic components that use both electrical and computational
characteristics. These characteristics may involve power, current, logical function, protocol and user input.
Digital Logic Design is used to develop hardware, such as circuit boards and microchip processors. This
hardware processes user input, system protocol and other data in computers, navigational systems, cell phones
or other high-tech systems.
The numeric system we use daily is the decimal system, but this system is not convenient for machines, since the
information is handled codified in the shape of on or off bits; this way of codifying leads us to the necessity of knowing
positional calculation, which allows us to express a number in any base where we need it.
A base of a number system or radix defines the range of values that a digit may have.
In the binary system or base 2, there can be only two values for each digit of a number, either a "0" or a "1".
In the octal system or base 8, there can be eight choices for each digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7".
In the decimal system or base 10, there are ten different values for each digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7", "8", "9".
In the hexadecimal system or base 16, there are sixteen different values for each digit of a number:
"0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "A", "B", "C", "D", "E", and "F".
Let’s think about what you do to obtain each digit. As an example, let's start with the decimal number 1234 and
extract its digits one at a time. To extract the last digit, you move the decimal point left by one digit, which means that
you divide the given number by its base, 10:
1234/10 = 123 + 4/10
The remainder of 4 is the last digit. To extract the next last digit, you again move the decimal point left by one digit and
see what drops out.
123/10 = 12 + 3/10
The remainder of 3 is the next last digit. You repeat this process until there is nothing left. Then you stop. In summary,
you do the following:
Quotient Remainder
-----------------------------
1234/10 = 123 4 --------+
123/10 = 12 3 ------+ |
12/10 = 1 2 ----+ | |
1/10 = 0 1 --+ | | |(Stop when the quotient is 0)
| | | |
1 2 3 4 (Base 10)
Now, let's try a nontrivial example. Let's express a decimal number 1341 in binary notation. Note that the desired base is
2, so we repeatedly divide the given decimal number by 2.
Quotient Remainder
-----------------------------
1341/2 = 670 1 ----------------------+
670/2 = 335 0 --------------------+ |
335/2 = 167 1 ------------------+ | |
167/2 = 83 1 ----------------+ | | |
83/2 = 41 1 --------------+ | | | |
41/2 = 20 1 ------------+ | | | | |
20/2 = 10 0 ----------+ | | | | | |
10/2 = 5 0 --------+ | | | | | | |
5/2 = 2 1 ------+ | | | | | | | |
2/2 = 1 0 ----+ | | | | | | | | |
1/2 = 0 1 --+ | | | | | | | | | |(Stop when the
| | | | | | | | | | | quotient is 0)
1 0 1 0 0 1 1 1 1 0 1 (BIN; Base 2)
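The remainder method above can be sketched in Python. This is only an illustrative sketch; the function name to_base and the digit alphabet are our own choices, not part of the lecture.

```python
def to_base(n, base):
    """Convert a non-negative integer to the given base using the
    remainder method: divide repeatedly, collect the remainders,
    then read them from the last remainder upwards."""
    if n == 0:
        return "0"
    digits = "0123456789ABCDEF"
    result = []
    while n > 0:
        n, r = divmod(n, base)   # quotient and remainder in one step
        result.append(digits[r])
    return "".join(reversed(result))

print(to_base(1341, 2))   # 10100111101
print(to_base(3315, 16))  # CF3
```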
In conclusion, the easiest way to convert fixed point numbers to any base is to convert each part separately. We begin by
separating the number into its integer and fractional parts. The integer part is converted using the remainder method:
successively divide the number by the base until a zero quotient is obtained, keeping the remainder at each division; the
new number in base r is then obtained by reading the remainders from the last remainder upwards.
The conversion of the fractional part can be obtained by successively multiplying the fraction by the base; the integer
part that falls out at each step is the next significant digit. Iterating this process on the remaining fraction yields the
successive digits. This method forms the basis of the multiplication method of converting fractions between bases.
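The multiplication method for fractions can be sketched similarly (frac_to_base is our own illustrative name). Note that binary floating point makes long decimal expansions drift, so only the first few digits are reliable in this sketch.

```python
def frac_to_base(frac, base, ndigits):
    """Convert a fraction in [0, 1) by repeated multiplication:
    the integer part that falls out at each step is the next digit."""
    digits = "0123456789ABCDEF"
    out = []
    for _ in range(ndigits):
        frac *= base
        d = int(frac)        # the digit that "drops out"
        out.append(digits[d])
        frac -= d            # continue with the remaining fraction
    return "".join(out)

print(frac_to_base(0.1875, 8, 2))  # 14  (0.1875 = 0.14 in base 8)
print(frac_to_base(0.3, 16, 2))    # 4C
```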
Example. Convert the decimal number 3315 to hexadecimal notation. What about the hexadecimal equivalent of the
decimal number 3315.3?
Let's think more carefully about what a decimal number means. For example, 1234 means that there are four boxes (digits);
and there are 4 one's in the right-most box (least significant digit), 3 ten's in the next box, 2 hundred's in the next box,
and finally 1 thousand's in the left-most box (most significant digit). The total is 1234:
Original Number: 1 2 3 4
| | | |
How Many Tokens: 1 2 3 4
Digit/Token Value: 1000 100 10 1
Value: 1000 + 200 + 30 + 4 = 1234
Thus, each digit has a value: 10^0=1 for the least significant digit, increasing to 10^1=10, 10^2=100, 10^3=1000, and so
forth.
Likewise, the least significant digit in a hexadecimal number has a value of 16^0=1,
increasing to 16^1=16 for the next digit, 16^2=256 for the next, 16^3=4096 for the next, and so forth. Thus, the
hexadecimal number 1234 means that there are four boxes (digits); and there are 4 one's in the right-most box (least
significant digit), 3 sixteen's in the next box, 2 256's in the next, and 1 4096's in the left-most box (most significant
digit). The total is 4096 + 512 + 48 + 4 = 4660.
In summary, the conversion from any base to base 10 can be obtained from the formulae
         n−1
  X10 =   Σ   di × b^i
         i=−m
where b is the base, di the digit at position i, m the number of digits after the decimal point, n the number
of digits of the integer part, and X10 is the obtained number in decimal. This forms the basis of the polynomial method of
converting numbers from any base to decimal.
For example, (234.14)8 = 2*8^2 + 3*8^1 + 4*8^0 + 1*8^-1 + 4*8^-2 = 2*64 + 3*8 + 4*1 + 1/8 + 4/64 = 156.1875
Example. Convert (4B3.3)16 to decimal.
Solution:
Original Number: 4 B 3 . 3
| | | |
How Many Tokens: 4 11 3 3
Digit/Token Value: 256 16 1 0.0625
Value: 1024 +176 + 3 + 0.1875 = 1203.1875
Example. Convert (234.14)8 to decimal.
Solution:
Original Number: 2 3 4 . 1 4
| | | | |
How Many Tokens: 2 3 4 1 4
Digit/Token Value: 64 8 1 0.125 0.015625
Value: 128 + 24 + 4 + 0.125 + 0.0625 = 156.1875
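Both worked solutions follow the polynomial method, which can be sketched in Python (to_decimal is our own illustrative name, not from the lecture).

```python
def to_decimal(digits_str, base):
    """Polynomial method: sum di * b^i, where i runs from -m
    (rightmost fractional digit) to n-1 (leftmost integer digit)."""
    if "." in digits_str:
        int_part, frac_part = digits_str.split(".")
    else:
        int_part, frac_part = digits_str, ""
    value = 0.0
    for i, d in enumerate(reversed(int_part)):
        value += int(d, base) * base ** i       # positive powers of b
    for i, d in enumerate(frac_part, start=1):
        value += int(d, base) * base ** -i      # negative powers of b
    return value

print(to_decimal("234.14", 8))   # 156.1875
print(to_decimal("4B3.3", 16))   # 1203.1875
```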
As demonstrated by the table below, there is a direct correspondence between the binary system and the octal system,
with three binary digits corresponding to one octal digit. Likewise, four binary digits translate directly into one
hexadecimal digit.
With such a relationship, in order to convert a binary number to octal, we partition the base 2 number into groups of three
starting from the radix point, and pad the outermost groups with 0’s as needed to form triples. Then, we convert each
triple to its octal equivalent.
Notice that the leftmost two bits are padded with a 0 on the left in order to create a full triplet.
A.F. Kana Digital Logic Design. Page 5
Now consider converting (10110110)2 to base 16: grouping into 1011 0110 gives (B6)16.
The conversion methods can be used to convert a number from any base to any other base, but it may not be very
intuitive to convert something like 513.03 to base 7. As an aid in performing an unnatural conversion, we can convert to
the more familiar base 10 form as an intermediate step, and then continue the conversion from base 10 to the target base.
As a general rule, we use the polynomial method when converting into base 10, and we use the remainder and
multiplication methods when converting out of base 10.
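The grouping rule for binary-to-octal and binary-to-hexadecimal can be sketched in Python (bin_to_base is an illustrative name of our own; group=3 gives octal triples, group=4 gives hexadecimal quadruples).

```python
def bin_to_base(bits, group):
    """Convert a binary string by grouping bits from the right:
    group=3 gives octal, group=4 gives hexadecimal."""
    digits = "0123456789ABCDEF"
    # pad on the left so the length is a multiple of the group size
    bits = bits.zfill((len(bits) + group - 1) // group * group)
    return "".join(digits[int(bits[i:i + group], 2)]
                   for i in range(0, len(bits), group))

print(bin_to_base("10110110", 4))  # B6
print(bin_to_base("10110110", 3))  # 266
```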
Numeric complements
The radix complement of an n digit number y in radix b is, by definition, b^n − y. Adding this to x results in the value x +
b^n − y or x − y + b^n. Assuming y ≤ x, the result will always be greater than b^n, and dropping the initial '1' is the same as
subtracting b^n, making the result x − y + b^n − b^n or just x − y, the desired result.
The radix complement is most easily obtained by adding 1 to the diminished radix complement, which is (b^n − 1) − y.
Since b^n − 1 is the digit b − 1 repeated n times (because b^n − 1^n = (b − 1)(b^(n−1) + b^(n−2) + ... + b + 1),
see also binomial numbers), the diminished radix complement of a number is found by complementing
each digit with respect to b − 1 (that is, subtracting each digit in y from b − 1). Adding 1 to obtain the radix complement
can be done separately, but is most often combined with the addition of x and the complement of y.
In the decimal numbering system, the radix complement is called the ten's complement and the diminished radix
complement the nines' complement.
In binary, the radix complement is called the two's complement and the diminished radix complement the ones'
complement. The naming of complements in other bases is similar.
- Decimal example
To subtract a decimal number y from another number x using the method of complements, the ten's complement of y
(nines' complement plus 1) is added to x. Typically, the nines' complement of y is first obtained by determining the
complement of each digit. The complement of a decimal digit in the nines' complement system is the number that must
be added to it to produce 9. The complement of 3 is 6, the complement of 7 is 2, and so on. Given a subtraction
problem:
873 (x)
- 218 (y)
The nines' complement of y (218) is 781. In this case, because y is three digits long, this is the same as subtracting y
from 999. (The number of 9's is equal to the number of digits of y.)
873 (x)
+ 781 (complement of y)
+ 1 (to get the ten's complement of y)
=====
1655
The first "1" digit is then dropped, giving 655, the correct answer.
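The same steps can be traced in a short Python sketch (subtract_by_complement is our own name; the sketch assumes y ≤ x, as the method requires).

```python
def subtract_by_complement(x, y, ndigits):
    """Subtract y from x (y <= x) using the ten's complement:
    nines' complement of y, plus 1, added to x; drop the carry."""
    nines = 10**ndigits - 1            # e.g. 999 for 3 digits
    tens_complement = (nines - y) + 1  # e.g. 781 + 1 for y = 218
    total = x + tens_complement        # e.g. 873 + 782 = 1655
    return total - 10**ndigits         # dropping the leading "1"

print(subtract_by_complement(873, 218, 3))  # 655
```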
If the subtrahend has fewer digits than the minuend, leading zeros must be added, which will become leading nines when
the nines' complement is taken.
- Binary example
The method of complements is especially useful in binary (radix 2) since the ones' complement is very easily obtained
by inverting each bit (changing '0' to '1' and vice versa), and adding 1 to get the two's complement can be done by
simulating a carry into the least significant bit. For example, to compute 01100100 (decimal 100) minus 00010110
(decimal 22), invert the subtrahend to get 11101001 and add:
01100100 + 11101001 + 1 = 101001110
Dropping the initial "1" gives the answer: 01001110 (equals decimal 78)
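The binary procedure can be sketched in Python (twos_complement_subtract is an illustrative name of our own).

```python
def twos_complement_subtract(x, y, nbits):
    """x - y via inverting each bit of y (ones' complement) and
    adding 1, then discarding the carry out of the top bit."""
    mask = (1 << nbits) - 1
    ones = y ^ mask            # invert every bit of y
    total = x + ones + 1       # add x, the complement, and the carry-in
    return total & mask        # drop the carry-out

print(twos_complement_subtract(100, 22, 8))                  # 78
print(format(twos_complement_subtract(100, 22, 8), "08b"))   # 01001110
```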
The signed magnitude (also referred to as sign and magnitude) representation is most familiar to us as the base 10
number system. A plus or minus sign to the left of a number indicates whether the number is positive or negative, as in
(+12)10 or (−12)10. In the binary signed magnitude representation, the leftmost bit is used for the sign, which takes on a
value of 0 or 1 for ‘+’ or ‘−’, respectively. The remaining bits contain the absolute magnitude.
(+12)10 = (00001100)2
(−12)10 = (10001100)2
The negative number is formed by simply changing the sign bit in the positive number from 0 to 1. Notice that there are
both positive and negative representations for zero: +0 = 00000000 and −0 = 10000000.
The one’s complement operation is trivial to perform: convert all of the 1’s in the number to 0’s, and all of the 0’s to 1’s.
See the fourth column in Table 1 for examples. We can observe from the table that in the one’s complement
representation the leftmost bit is 0 for positive numbers and 1 for negative numbers, as it is for the signed magnitude
representation. This negation, changing 1’s to 0’s and changing 0’s to 1’s, is known as complementing the bits.
Consider again representing (+12)10 and (−12)10 in an eight-bit format, now using the one’s complement
representation:
(+12)10 = (00001100)2
(−12)10 = (11110011)2
Note again that there are representations for both +0 and −0, which are 00000000 and 11111111, respectively. As a
result, there are only 2^8 − 1 = 255 different numbers that can be represented, even though there are 2^8 different bit
patterns.
The one’s complement representation is not commonly used. This is at least partly due to the difficulty in making
comparisons when there are two representations for 0. There is also additional complexity involved in adding numbers.
The two’s complement is formed in a way similar to forming the one’s complement: complement all of the bits in the
number, but then add 1, and if that addition results in a carry-out from the most significant bit of the number, discard the
carry-out.
Examination of the fifth column of the table above shows that in the two’s complement representation, the leftmost bit is
again 0 for positive numbers and is 1 for negative numbers. However, this number format does not have the unfortunate
characteristic of the signed magnitude and one’s complement representations: it has only one representation for zero. To see
that this is true, consider forming the negative of (+0)10, which has the bit pattern: (+0)10 = (00000000)2.
Complementing all of the bits produces (11111111)2, and adding 1 to it yields (00000000)2; thus (−0)10 = (00000000)2.
The carry out of the leftmost position is discarded in two’s complement addition (except when detecting an overflow
condition). Since there is only one representation for 0, and since all bit patterns are valid, there are 2^8 = 256 different
numbers that can be represented.
Consider again representing (+12)10 and (−12)10 in an eight-bit format, this time using the two’s complement
representation. Starting with (+12)10 = (00001100)2, complement each bit to produce (11110011)2, then add 1,
producing (11110100)2.
(+12)10 = (00001100)2
(−12)10 = (11110100)2
There is an equal number of positive and negative numbers provided zero is considered to be a positive number, which
is reasonable because its sign bit is 0. The positive numbers start at 0, but the negative numbers start at −1, and so the
magnitude of the most negative number is one greater than the magnitude of the most positive number. The positive
number with the largest magnitude is +127, and the negative number with the largest magnitude is −128. There is thus
no positive number that can be represented that corresponds to the negative of −128. If we try to form the two’s
complement negative of −128, then we will arrive back at a negative number, as shown below:
(−128)10 = (10000000)2
complement:  (01111111)2
add 1:      +(00000001)2
             ———————————
             (10000000)2 = (−128)10
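The negation rule, including the −128 edge case, can be checked with a short Python sketch (twos_complement_negate is our own illustrative name).

```python
def twos_complement_negate(x, nbits=8):
    """Negate by complementing all bits and adding 1, keeping nbits."""
    mask = (1 << nbits) - 1
    return ((x ^ mask) + 1) & mask

plus12 = 0b00001100
print(format(twos_complement_negate(plus12), "08b"))    # 11110100

# Negating the bit pattern of -128 gives back the same pattern:
minus128 = 0b10000000
print(format(twos_complement_negate(minus128), "08b"))  # 10000000
```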
The two’s complement representation is the representation most commonly used in conventional computers.
- Excess Representation
In the excess or biased representation, the number is treated as unsigned, but is “shifted” in value by subtracting the bias
from it. The concept is to assign the smallest numerical bit pattern, all zeros, to the negative of the bias, and assign the
remaining numbers in sequence as the bit patterns increase in magnitude. A convenient way to think of an excess
representation is that a number is represented as the sum of its two’s complement form and another number, which is
known as the “excess,” or “bias.” Once again, refer to Table 2.1, the rightmost column, for examples.
Consider again representing (+12)10 and (−12)10 in an eight-bit format but now using an excess 128 representation. An
excess 128 number is formed by adding 128 to the original number, and then creating the unsigned binary version. For
(+12)10, we compute (128 + 12 = 140)10 and produce the bit pattern (10001100)2. For (−12)10, we compute (128 − 12 =
116)10 and produce the bit pattern (01110100)2.
(+12)10 = (10001100)2
(−12)10 = (01110100)2
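The excess-128 encoding can be sketched in Python (to_excess is an illustrative name of our own).

```python
def to_excess(n, bias=128):
    """Encode a signed integer by adding the bias, then storing the
    result as an unsigned 8-bit binary pattern."""
    return format(n + bias, "08b")

print(to_excess(12))    # 10001100
print(to_excess(-12))   # 01110100
print(to_excess(-128))  # 00000000  (the most negative number)
print(to_excess(127))   # 11111111  (the most positive number)
```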
Note that there is no numerical significance to the excess value: it simply has the effect of shifting the representation of
the two’s complement numbers.
There is only one excess representation for 0, since the excess representation is simply a shifted version of the two’s
complement representation. For the previous case, the excess value is chosen to have the same bit pattern as the largest
negative number, which has the effect of making the numbers appear in numerically sorted order if the numbers are
viewed in an unsigned binary representation.
Thus, the most negative number is (−128)10 = (00000000)2 and the most positive number is (+127)10 = (11111111)2.
This representation simplifies making comparisons between numbers, since the bit patterns for negative numbers have
numerically smaller values than the bit patterns for positive numbers. This is important for representing the exponents of
floating point numbers, in which exponents of two numbers are compared in order to make them equal for addition and
subtraction.
Choosing a bias:
The bias chosen is most often based on the number of bits (n) available for representing an integer. To get an
approximately equal distribution of true values above and below 0, the bias should be 2^(n−1) or 2^(n−1) − 1.
- Normalization
A potential problem with representing floating point numbers is that the same number can be represented in different
ways, which makes comparisons and arithmetic operations difficult. For example, consider the numerically equivalent
forms shown below:
3584.1 × 10^0 = 3.5841 × 10^3 = .35841 × 10^4
In order to avoid multiple representations of the same number, floating point numbers are maintained in normalized
form. That is, the radix point is shifted to the left or to the right and the exponent is adjusted accordingly until the radix
point is to the left of the leftmost nonzero digit. So the rightmost number above is the normalized one. Unfortunately,
the number zero cannot be represented in this scheme, so an exception is made: zero is represented as all 0’s in the
mantissa.
If the mantissa is represented as a binary, that is, base 2, number, and if the normalization condition is that there is a
leading “1” in the normalized mantissa, then there is no need to store that “1” and in fact, most floating point formats do
not store it. Rather, it is “chopped off” before packing up the number for storage, and it is restored when unpacking the
number into exponent and mantissa. This results in having an additional bit of precision on the right of the number, due
to removing the bit on the left. This missing bit is referred to as the hidden bit, also known as a hidden 1.
For example, if the mantissa in a given format is 1.1010 after normalization, then the bit pattern that is stored is 1010—
the left-most bit is truncated, or hidden.
The way a floating point number is represented depends on several choices:
The number of words used (i.e. the total number of bits used)
The representation of the mantissa (2’s complement etc.)
The representation of the exponent (biased etc.)
The total number of bits devoted to the mantissa and the exponent
The location of the mantissa (exponent first or mantissa first)
Because of the five points above, the number of ways in which a floating point number may be represented is legion.
Many representations use a sign bit format, where the leading bit represents the sign:
Sign Exponent Mantissa
The IEEE (Institute of Electrical and Electronics Engineers) has produced a standard for floating point
arithmetic in mini and micro computers (ANSI/IEEE 754-1985). This standard specifies how single precision (32 bit), double
precision (64 bit) and quadruple precision (128 bit) floating point numbers are to be represented, as well as how arithmetic should
be carried out on them.
Binary floating-point numbers are stored in a sign-magnitude form where the most significant bit is the sign bit,
"exponent" is the biased exponent, and "fraction" is the significand without the most significant bit.
Exponent biasing
The exponent is biased by 2^(e−1) − 1, where e is the number of bits used for the exponent field (e.g. if e = 8, then 2^(8−1) −
1 = 128 − 1 = 127). Biasing is done because exponents have to be signed values in order to be able to represent both
tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder.
To solve this, the exponent is biased before being stored, by adjusting its value to put it within an unsigned range
suitable for comparison.
For example, to represent a number which has an exponent of 17 in an exponent field 8 bits wide, the stored value is
17 + 127 = 144 = (10010000)2.
Single Precision
The IEEE single precision floating point standard representation requires a 32 b it word, which may be represented as
numbered from 0 to 31, left to right. The first bit is the sign bit, S, the next eight bits are the exponent bits, 'E', and the
final 23 bits are the fraction 'F':
S EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF
0 1      8 9                     31
To convert decimal 17.15 to IEEE Single format:
Convert decimal 17 to binary 10001. Convert decimal 0.15 to the repeating binary fraction 0.001001100110011... Combine
integer and fraction to obtain binary 10001.001001100110011... Normalize the binary number to obtain 1.0001001001100110011...x2^4.
Thus, M = m − 1 = 0001001001... (the mantissa with the hidden leading 1 removed) and E = e + 127 = 131 = 10000011.
The number is positive, so S=0. Align the values for M, E, and S in the correct fields.
0 10000011 00010010011001100110011
Note that if the exponent does not use all the field allocated to it, there will be leading 0’s, while for the mantissa,
the zeros will be filled in at the end.
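The packing above can be checked against Python's struct module, which produces IEEE 754 single precision bit patterns (the helper name ieee_single_bits is our own).

```python
import struct

def ieee_single_bits(x):
    """Pack a float as IEEE 754 single precision (big-endian) and
    return the 32-bit pattern split into sign/exponent/fraction."""
    word = struct.unpack(">I", struct.pack(">f", x))[0]
    bits = format(word, "032b")
    return bits[0], bits[1:9], bits[9:]

s, e, f = ieee_single_bits(17.15)
print(s, e, f)  # 0 10000011 00010010011001100110011
```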
Double Precision
The IEEE double precision floating point standard representation requires a 64 b it word, which may be represented as
numbered from 0 to 63, left to right. The first bit is the sign bit, S, the next eleven bits are the exponent bits, 'E', and the
final 52 bits are the fraction 'F':
S EEEEEEEEEEE FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
0 1         11 12                                                  63
Quad Precision
The IEEE quad precision floating point standard representation requires a 128 bit word, which may be represented as
numbered from 0 to 127, left to right. The first bit is the sign bit, S, the next fifteen bits are the exponent bits, 'E', and the
final 112 bits are the fraction 'F':
S EEEEEEEEEEEEEEE FFFF...FFFF (112 F's)
0 1            15 16                                               127
Binary code
Internally, digital computers operate on binary numbers. When interfacing to humans, digital processors, e.g. pocket
calculators, communicate in decimal. Input is done in decimal, then converted to binary for internal processing.
For output, the result has to be converted from its internal binary representation to a decimal form. Digital systems
represent and manipulate not only binary numbers but also many other discrete elements of information.
In computing and electronic systems, binary-coded decimal (BCD) is an encoding for decimal numbers in which each
digit is represented by its own binary sequence. Its main virtue is that it allows easy conversion to decimal digits for
printing or display and faster decimal calculations. Its drawbacks are the increased complexity of circuits needed to
implement mathematical operations and a relatively inefficient encoding: it occupies more space than a pure binary
representation. In BCD, a digit is usually represented by four bits which, in general, represent the values/digits/characters
0-9.
To BCD-encode a decimal number using the common encoding, each decimal digit is stored in a four-bit nibble.
Decimal: 0 1 2 3 4 5 6 7 8 9
BCD: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001
Thus, the BCD encoding for the number 127 would be: 0001 0010 0111.
The position weights of the BCD code are 8, 4, 2, 1. Other codes (shown in the table) use position weights of 8, 4, -2, -1
and 2, 4, 2, 1.
An example of a non-weighted code is the excess-3 code, where each digit's code is obtained from
its binary equivalent after adding 3. Thus the code of a decimal 0 is 0011, that of 6 is 1001, etc.
It is very important to understand the difference between the conversion of a decimal number to binary and the binary
coding of a decimal number. In each case, the final result is a series of bits. The bits obtained from conversion are
binary digits. Bits obtained from coding are combinations of 1’s and 0’s arranged according to the rules of the code used,
e.g. the binary conversion of 13 is 1101; the BCD coding of 13 is 00010011.
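The digit-by-digit encoding can be sketched in Python (to_bcd is our own illustrative name).

```python
def to_bcd(n):
    """Encode each decimal digit of n as its own 4-bit group."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(127))  # 0001 0010 0111
print(to_bcd(13))   # 0001 0011  (contrast with binary conversion 1101)
```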
- Error-Detection Codes
Binary information may be transmitted through some communication medium, e.g. using wires or wireless media. A
corrupted bit will have its value changed from 0 to 1 or vice versa. To be able to detect errors at the receiver end, the
sender sends an extra bit (parity bit) with the original binary message.
A parity bit is an extra bit included with the n-bit binary message to make the total number of 1’s in this message
(including the parity bit) either odd or even. If the parity bit makes the total number of 1’s an odd (even) number, it is
called odd (even) parity. The table shows the required odd (even) parity for a 3-bit message.
No error is detectable if the transmitted message has 2 bits in error since the total number of 1’s will remain even (or
odd) as in the original message.
In general, a transmitted message with even number of errors cannot be detected by the parity bit.
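Computing the parity bit for a message can be sketched in Python (the function names are our own).

```python
def even_parity_bit(bits):
    """Parity bit that makes the total number of 1's even."""
    return bits.count("1") % 2

def odd_parity_bit(bits):
    """Parity bit that makes the total number of 1's odd."""
    return 1 - bits.count("1") % 2

print(even_parity_bit("101"))  # 0  ("101" already has an even number of 1's)
print(odd_parity_bit("101"))   # 1
# Note: flipping any TWO bits of a message leaves its parity unchanged,
# which is why an even number of errors goes undetected.
```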
- Gray code
The Gray code consists of 16 4-bit code words to represent the decimal numbers 0 to 15. In the Gray code, successive
code words differ by only one bit from one to the next.
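As an aside not taken from the note's table: one common way to generate the reflected binary (Gray) code of n is n XOR (n >> 1), which can be checked in Python.

```python
def gray(n):
    """Reflected binary (Gray) code of n: n XOR (n >> 1)."""
    return n ^ (n >> 1)

codes = [format(gray(i), "04b") for i in range(16)]
print(codes[:4])  # ['0000', '0001', '0011', '0010']

# successive code words differ in exactly one bit
assert all(bin(gray(i) ^ gray(i + 1)).count("1") == 1 for i in range(15))
```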
Character Representation
Even though many people used to think of computers as "number crunchers", people figured out long ago that it's just as
important to handle character data.
Character data isn't just alphabetic characters, but also numeric characters, punctuation, spaces, etc. Most keys on the
central part of the keyboard (except shift, caps lock) are characters. Characters need to be represented. In particular, they
need to be represented in binary. After all, computers store and manipulate 0's and 1's (and even those 0's and 1's are just
abstractions; the implementation is typically voltages).
Unsigned binary and two's complement are used to represent unsigned and signed integers respectively, because they
have nice mathematical properties; in particular, you can add and subtract as you'd expect.
However, there aren't such properties for character data, so assigning binary codes for characters is somewhat arbitrary.
The most common character representation is ASCII, which stands for American Standard Code for Information
Interchange.
There are two reasons to use ASCII. First, we need some way to represent characters as binary numbers (or,
equivalently, as bitstring patterns). There's not much choice about this since computers represent everything in binary.
If you've noticed a common theme, it's that we need representation schemes for everything. However, most importantly,
we need representations for numbers and characters. Once you have that (and perhaps pointers), you can build up
everything you need.
The other reason we use ASCII is because of the letter "S" in ASCII, which stands for "standard". Standards are good
because they allow for common formats that everyone can agree on.
Unfortunately, there's also the letter "A", which stands for American. ASCII is clearly biased for the English language
character set. Other languages may have their own character set, even though English dominates most of the computing
world (at least, programming and software).
Even though character sets don't have mathematical properties, there are some nice aspects about ASCII. In particular,
the lowercase letters are contiguous ('a' through 'z' maps to 97 through 122). The uppercase letters are also
contiguous ('A' through 'Z' maps to 65 through 90). Finally, the digits are contiguous ('0' through '9' maps to 48
through 57).
The characters between 0 and 31 are generally not printable (control characters, etc.). 32 is the space character. Also
note that there are only 128 ASCII characters. This means only 7 bits are required to represent an ASCII character.
However, since the smallest size representation on most computers is a byte, a byte is used to store an ASCII character.
The most significant bit (MSB) of an ASCII character is 0.
00 nul 10 dle 20 sp 30 0 40 @ 50 P 60 ` 70 p
01 soh 11 dc1 21 ! 31 1 41 A 51 Q 61 a 71 q
02 stx 12 dc2 22 " 32 2 42 B 52 R 62 b 72 r
03 etx 13 dc3 23 # 33 3 43 C 53 S 63 c 73 s
04 eot 14 dc4 24 $ 34 4 44 D 54 T 64 d 74 t
05 enq 15 nak 25 % 35 5 45 E 55 U 65 e 75 u
06 ack 16 syn 26 & 36 6 46 F 56 V 66 f 76 v
07 bel 17 etb 27 ' 37 7 47 G 57 W 67 g 77 w
08 bs 18 can 28 ( 38 8 48 H 58 X 68 h 78 x
09 ht 19 em 29 ) 39 9 49 I 59 Y 69 i 79 y
0a nl 1a sub 2a * 3a : 4a J 5a Z 6a j 7a z
0b vt 1b esc 2b + 3b ; 4b K 5b [ 6b k 7b {
0c np 1c fs 2c , 3c < 4c L 5c \ 6c l 7c |
0d cr 1d gs 2d - 3d = 4d M 5d ] 6d m 7d }
0e so 1e rs 2e . 3e > 4e N 5e ^ 6e n 7e ~
0f si 1f us 2f / 3f ? 4f O 5f _ 6f o 7f del
The difference in the ASCII code between an uppercase letter and its corresponding lowercase letter is (20)16. This makes
it easy to convert lower to uppercase (and back) in hex (or binary).
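The (20)16 difference means case conversion is a single-bit operation, which can be checked in Python.

```python
# 'A' is 0x41 and 'a' is 0x61: they differ only in bit 5 (0x20),
# so case conversion flips a single bit.
print(hex(ord("a") - ord("A")))   # 0x20
print(chr(ord("g") & ~0x20))      # G  (clear bit 5 -> uppercase)
print(chr(ord("G") | 0x20))       # g  (set bit 5   -> lowercase)
```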
While ASCII is still popularly used, another character representation that was used (especially at IBM) was EBCDIC,
which stands for Extended Binary Coded Decimal Interchange Code (yes, the word "code" appears twice). This
character set has mostly disappeared. EBCDIC does not store characters contiguously, so this can create problems
alphabetizing "words".
Other countries have used different solutions, in particular, using 8 bits to represent their alphabets, giving up to 256
letters, which is plenty for most alphabet-based languages (recall you also need to represent digits, punctuation, etc.).
However, Asian languages, which are word-based rather than character-based, often have more words than 8 bits can
represent. In particular, 8 bits can only represent 256 words, which is far smaller than the number of words in natural
languages.
Thus, a new character set called Unicode is now becoming more prevalent. This is a 16 bit code, which allows for about
65,000 different representations. This is enough to encode the popular Asian languages (Chinese, Korean, Japanese,
etc.). It also turns out that ASCII codes are preserved. What does this mean? To convert ASCII to Unicode, take all one
byte ASCII codes, and zero-extend them to 16 bits. That should be the Unicode version of the ASCII characters.
The biggest consequence of moving from ASCII to Unicode is that text files double in size. The second consequence is that
endianness begins to matter again. Endianness is the ordering of individually addressable sub-units (words, bytes, or
even bits) within a longer data word stored in external memory. The most typical cases are the ordering of bytes within a
16-, 32-, or 64-bit word, where endianness is often simply referred to as byte order. The usual contrast is between most
versus least significant byte first, called big-endian and little-endian respectively.
Big-endian places the most significant bit, digit, or byte in the first, or leftmost, position. Little-endian places the most
significant bit, digit, or byte in the last, or rightmost, position. Motorola processors employ the big-endian approach,
whereas Intel processors take the little-endian approach. The table below illustrates how the decimal value 47,572 would be
expressed in hexadecimal and binary notation (two octets) and how it would be stored using these two methods.
Table: Endianness
Number              Big-Endian          Little-Endian
Hexadecimal
B9D4                B9D4                D4B9
Binary
10111001 11010100   10111001 11010100   11010100 10111001
With single bytes, there's no need to worry about endianness. However, you have to consider it with two byte
quantities.
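The two byte orders can be observed directly with Python's struct module (">" packs big-endian, "<" packs little-endian).

```python
import struct

value = 0xB9D4  # decimal 47572
big = struct.pack(">H", value)     # most significant byte first
little = struct.pack("<H", value)  # least significant byte first
print(big.hex())     # b9d4
print(little.hex())  # d4b9
```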
While C and C++ still primarily use ASCII, Java has adopted Unicode. This means that Java had to create a byte
type, because char in Java is no longer a single byte; instead, it's a 2 byte Unicode representation.
Exercise
1. The state of a 12-bit register is 010110010111. What is its content if it represents:
i. Decimal digits in BCD code ii. Decimal digits in excess-3 code
2. The results of an experiment fall in the range -4 to +6. A scientist wishes to read the results into a computer and then
process them. He decides to use a 4-bit binary code to represent each of the possible inputs. Devise a 4-bit binary code
for representing numbers in the range -4 to +6.
3. The (r-1)’s complement of a base-6 number is called the 5’s complement. Explain the procedure for obtaining the 5’s
complement of base-6 numbers. Obtain the 5’s complement of (3210)6.
4. Design a three-bit code to represent each of the six digits of the base-6 number system.
5. Represent the decimal number –234.125 using the IEEE 32-bit (single) format.
Introduction
Binary logic deals with variables that assume discrete values and with operators that assume logical meaning.
While each logical element or condition must always have a logic value of either "0" or "1", we also need to have
ways to combine different logical signals or conditions to provide a logical result.
For example, consider the logical statement: "If I move the switch on the wall up, the light will turn on." At first
glance, this seems to be a correct statement. However, if we look at a few other factors, we realize that there's more
to it than this. In this example, a more complete statement would be: "If I move the switch on the wall up and the
light bulb is good and the power is on, the light will turn on."
If we look at these two statements as logical expressions and use logical terminology, we can reduce the first
statement to:
Light = Switch
This means nothing more than that the light will follow the action of the switch, so that when the switch is
up/on/true/1 the light will also be on/true/1. Conversely, if the switch is down/off/false/0 the light will also be
off/false/0.
Looking at the second version of the statement, we have a slightly more complex expression:
Light = Switch AND Bulb AND Power
When we deal with logical circuits (as in computers), we not only need to deal with logical functions; we also need
some special symbols to denote these functions in a logical diagram. There are three fundamental logical
operations, from which all other functions, no matter how complex, can be derived. These functions are named
and, or, and not. Each of these has a specific symbol and a clearly-defined behaviour.
AND. The AND operation is represented by a dot (.) or by the absence of an operator. E.g. x.y=z and xy=z are both read as "x
AND y = z". The logical operation AND is interpreted to mean that z=1 if and only if x=1 and y=1; otherwise z=0.
OR. The operation is represented by a + sign. For example, x+y=z is interpreted as "x OR y = z", meaning that z=1 if x=1 or
y=1, or if both x=1 and y=1. If both x and y are 0, then z=0.
NOT. This operation is represented by a bar or a prime. For example, x′=z is interpreted as "NOT x = z", meaning that z
is what x is not.
It should be noted that although the AND and OR operations have some similarity with multiplication and
addition respectively in binary arithmetic, an arithmetic variable may consist of many
digits, whereas a binary logic variable is always 0 or 1.
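A minimal Python sketch of the three operations on single-bit values (0 or 1), printing the combined truth tables:

```python
# The three fundamental logic operations on single-bit values.
def AND(x, y):
    return x & y      # 1 only when both inputs are 1

def OR(x, y):
    return x | y      # 1 when at least one input is 1

def NOT(x):
    return 1 - x      # the opposite value

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "| AND:", AND(x, y), " OR:", OR(x, y), " NOT x:", NOT(x))
```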
Basic Gate
The basic building blocks of a computer are called logical gates or just gates. Gates are basic circuits that have at least
one (and usually more) input and exactly one output. Input and output values are the logical values true and false. In
computer architecture it is common to use 0 for false and 1 for true. Gates have no memory. The value of the output
depends only on the current value of the inputs. A useful way of describing the relationship between the inputs and the output of a gate is the truth table.
We usually consider three basic kinds of gates, and-gates, or-gates, and not-gates (or inverters).
The AND gate implements the AND function. With the gate shown to the left, both inputs must have logic 1 signals
applied to them in order for the output to be a logic 1. With either input at logic 0, the output will be held to logic 0.
The truth table for an and-gate with two inputs looks like this:
x y | z
-------
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
There is no limit to the number of inputs that may be applied to an AND function, so there is no functional limit to
the number of inputs an AND gate may have. However, for practical reasons, commercial AND gates are most
commonly manufactured with 2, 3, or 4 inputs. A standard Integrated Circuit (IC) package contains 14 or 16 pins, for
practical size and handling. A standard 14-pin package can contain four 2-input gates, three 3-input gates, or two 4-
input gates, and still have room for two pins for power supply connections.
- The OR Gate
The OR gate is sort of the reverse of the AND gate. The OR function, like its verbal counterpart, allows the output to
be true (logic 1) if any one or more of its inputs are true. Verbally, we might say, "If it is raining OR if I turn on the
sprinkler, the lawn will be wet." Note that the lawn will still be wet if the sprinkler is on and it is also raining. This is
correctly reflected by the basic OR function.
In symbols, the OR function is designated with a plus sign (+). In logical diagrams, the symbol below designates the
OR gate.
The truth table for an or-gate with two inputs looks like this:
x y | z
-------
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
As with the AND function, the OR function can have any number of inputs. However, practical commercial OR gates
are mostly limited to 2, 3, and 4 inputs, as with AND gates.
The inverter is a little different from AND and OR gates in that it always has exactly one input as well as one output.
Whatever logical state is applied to the input, the opposite state will appear at the output.
x | y
-----
0 | 1
1 | 0
The NOT function, as it is called, is necessary in many applications and highly useful in others. A practical verbal
application might be:
In the inverter symbol, the triangle actually denotes only an amplifier, which in digital terms means that it "cleans up"
the signal but does not change its logical sense. It is the circle at the output which denotes the logical inversion. The
circle could have been placed at the input instead, and the logical meaning would still be the same
Combined gates
Sometimes, it is practical to combine functions of the basic gates into more complex gates, for instance in order to save
space in circuit diagrams. In this section, we show some such combined gates together with their truth tables.
- The nand-gate
The nand-gate is an and-gate with an inverter on the output. So instead of drawing several gates like this:
We draw a single and-gate with a little ring on the output like this:
The nand-gate, like the and-gate, can take an arbitrary number of inputs.
The truth table for the nand-gate is like the one for the and-gate, except that all output values have been inverted:
x y|z
-------
0 0|1
0 1|1
1 0|1
1 1|0
- The nor-gate
The nor-gate is an or-gate with an inverter on the output. So instead of drawing several gates like this:
We draw a single or-gate with a little ring on the output like this:
The nor-gate, like the or-gate, can take an arbitrary number of inputs.
The truth table for the nor-gate is like the one for the or-gate, except that all output values have been inverted:
x y|z
-------
0 0|1
0 1|0
1 0|0
1 1|0
- The exclusive-or-gate
The exclusive-or-gate is similar to an or-gate. It can have an arbitrary number of inputs, and its output value is 1 if and
only if exactly one input is 1 (and thus the others 0). Otherwise, the output is 0.
The truth table for an exclusive-or-gate with two inputs looks like this:
x y|z
-------
0 0|0
0 1|1
1 0|1
1 1|0
- The exclusive-nor-gate
The exclusive-nor-gate is similar to a nor-gate. It can have an arbitrary number of inputs, and its output value is 1 if
and only if the two inputs have the same value (1 and 1, or 0 and 0). Otherwise, the output is 0.
The truth table for an exclusive-nor-gate with two inputs looks like this:
x y|z
-------
0 0|1
0 1|0
1 0|0
1 1|1
Let us limit ourselves to gates with n inputs. The truth tables for such gates have 2^n lines. Such a gate is completely
defined by the output column in the truth table. The output column can be viewed as a string of 2^n binary digits. How
many different strings of binary digits of length 2^n are there? The answer is 2^(2^n), since there are 2^k different strings of k
binary digits, and if k=2^n, then there are 2^(2^n) such strings. In particular, if n=2, we can see that there are 16 different
types of gates with 2 inputs.
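The counting argument above can be checked directly: a 2-input gate is fully defined by its 4-bit output column, and enumerating all such columns yields 16 possible gates. A short Python sketch:

```python
from itertools import product

n = 2
rows = list(product((0, 1), repeat=n))   # the 2**n input rows: 00, 01, 10, 11
num_gates = 2 ** (2 ** n)                # one gate per possible output column
print(num_gates)  # 16

# Enumerate every possible output column; familiar gates appear among them.
for column in product((0, 1), repeat=len(rows)):
    if column == (0, 0, 0, 1):
        print("AND:", column)
    elif column == (0, 1, 1, 1):
        print("OR:", column)
```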
Diode logic gates use diodes to perform AND and OR logic functions. Diodes have the property of easily passing an
electrical current in one direction, but not the other. Thus, diodes can act as a logical switch.
Diode logic gates are very simple and inexpensive, and can be used effectively in specific situations. However, they
cannot be used extensively, as they tend to degrade digital signals rapidly. In addition, they cannot perform a NOT
function, so their usefulness is quite limited.
Resistor-transistor logic gates use transistors to combine multiple input signals, which also amplify and invert the
resulting combined signal. Often an additional transistor is included to re-invert the output signal. This combination
provides clean output signals and either inversion or non-inversion as needed.
Although they are not designed for linear operation, RTL integrated circuits are sometimes used as inexpensive small-
signal amplifiers, or as interface devices between linear and digital circuits.
By letting diodes perform the logical AND or OR function and then amplifying the result with a transistor, we can avoid
some of the limitations of RTL. DTL takes diode logic gates and adds a transistor to the output, in order to provide logic
inversion and to restore the signal to full logic levels.
The physical construction of integrated circuits made it more effective to replace all the input diodes in a DTL gate with
a transistor, built with multiple emitters. The result is transistor-transistor logic, which became the standard logic circuit
in most applications for a number of years.
As the state of the art improved, TTL integrated circuits were adapted slightly to handle a wider range of requirements,
but their basic functions remained the same. These devices comprise the 7400 family of digital ICs.
Also known as Current Mode Logic (CML), ECL gates are specifically designed to operate at extremely high speeds, by
avoiding the "lag" inherent when transistors are allowed to become saturated. Because of this, however, these gates
demand substantial amounts of electrical current to operate correctly.
- CMOS Logic
One factor is common to all of the logic families we have listed above: they use significant amounts of electrical power.
Many applications, especially portable, battery-powered ones, require that the use of power be absolutely minimized. To
accomplish this, the CMOS (Complementary Metal-Oxide-Semiconductor) logic family was developed. This family
uses enhancement-mode MOSFETs as its transistors, and is so designed that it requires almost no current to operate.
CMOS gates are, however, severely limited in their speed of operation. Nevertheless, they are highly useful and
effective in a wide range of battery-powered applications.
Most logic families share a common characteristic: their inputs require a certain amount of current in order to operate
correctly. CMOS gates work a bit differently, but still represent a capacitance that must be charged or discharged when
the input changes state. The current required to drive any input must come from the output supplying the logic signal.
Therefore, we need to know how much current an input requires, and how much current an output can reliably supply,
in order to determine how many inputs may be connected to a single output.
However, making such calculations can be tedious, and can bog down logic circuit design. Therefore, we use a different
technique. Rather than working constantly with actual currents, we determine the amount of current required to drive
one standard input, and designate that as a standard load on any output. Now we can define the number of standard
loads a given output can drive, and identify it that way. Unfortunately, some inputs for specialized circuits require more
than the usual input current, and some gates, known as buffers, are deliberately designed to be able to drive more inputs
than usual. For an easy way to define input current requirements and output drive capabilities, we define two new terms:
fan-in and fan-out.
Fan-in
Fan-in is a term that defines the maximum number of digital inputs that a single logic gate can accept. Most transistor-
transistor logic ( TTL ) gates have one or two inputs, although some have more than two. A typical logic gate has a fan-
in of 1 or 2.
Fan-out
Fan-out is a term that defines the maximum number of digital inputs that the output of a single logic gate can feed. Most
transistor-transistor logic ( TTL ) gates can feed up to 10 other digital gates or devices. Thus, a typical TTL gate has a
fan-out of 10.
In some digital systems, it is necessary for a single TTL logic gate to drive more than 10 other gates or devices. When
this is the case, a device called a buffer can be used between the TTL gate and the multiple devices it must drive. A
buffer of this type has a fan-out of 25 to 30. A logical inverter (also called a NOT gate) can serve this function in most
digital circuits.
Remember, fan-in and fan-out apply directly only within a given logic family. If for any reason you need to interface
between two different logic families, be careful to note and meet the drive requirements and limitations of both families,
within the interface circuitry
Boolean Algebra
One of the primary requirements when dealing with digital circuits is to find ways to make them as simple as possible.
This constantly requires that complex logical expressions be reduced to simpler expressions that nevertheless produce
the same results under all possible conditions. The simpler expression can then be implemented with a smaller, simpler
circuit, which in turn saves the price of the unnecessary gates, reduces the number of gates needed, and reduces the
power and the amount of space required by those gates.
One tool to reduce logical expressions is the mathematics of logical expressions, introduced by George Boole in 1854
and known today as Boolean Algebra. The rules of Boolean Algebra are simple and straight-forward, and can be applied
to any logical expression. The resulting reduced expression can then be readily tested with a Truth Table, to verify that
the reduction was valid.
Boolean algebra is an algebraic structure defined on a set of elements B, together with two binary operators (+, .),
provided the following postulates are satisfied.
5. For every element x belonging to B, there exists an element x′ (or x̄) called the complement of x such that
x.x′=0 and x+x′=1
6. There exist at least two elements x, y belonging to B such that x ≠ y
The two-valued Boolean algebra is defined on a set B={0,1} with the two binary operators + and . .
Closure. From the tables, the result of each operation is either 0 or 1, and 0 and 1 belong to B.
Identity. From the truth table we see that 0 is the identity element for + and 1 is the identity element for . .
Complement. From the complement table we can see that x+x′=1, i.e. 1+0=1, and x.x′=0, i.e. 1.0=0.
The principle of duality states that every algebraic expression which can be deduced from the postulates of Boolean
algebra remains valid if the operators and the identity elements are interchanged. This means that the dual of an
expression is obtained by changing every AND(.) to OR(+), every OR(+) to AND(.), and all 1's to 0's and vice-versa.
Theorem 1: Idempotent
(a) A + A = A (b) A . A = A
Theorem 2
(a) 1 + A = 1 (b) 0 . A = 0
Theorem 3: Involution
A′′ = A
Commutative
(a) A + B = B + A (b) A B = B A
Associative
(a) (A + B) + C = A + (B + C) (b) (A B) C = A (B C)
Distributive
(a) A (B + C) = A B + A C (b) A + (B C) = (A + B) (A + C)
Theorem 6: Absorption
(a) A + A B = A (b) A (A + B) = A
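Because the two-valued algebra has only the elements 0 and 1, every identity above can be verified exhaustively. A small Python brute-force check (using & for AND, | for OR, and 1-x for complement):

```python
# Exhaustive verification of the Boolean identities over B = {0, 1}.
B = (0, 1)
for A in B:
    assert A | A == A and A & A == A            # idempotent
    assert 1 | A == 1 and 0 & A == 0            # Theorem 2
    assert 1 - (1 - A) == A                     # involution: A'' = A
    for b in B:
        assert A | b == b | A and A & b == b & A          # commutative
        assert A | (A & b) == A and A & (A | b) == A      # absorption
        for c in B:
            assert A & (b | c) == (A & b) | (A & c)       # distributive
            assert A | (b & c) == (A | b) & (A | c)       # dual distributive
print("all identities hold over {0, 1}")
```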
A combinational circuit consists of logic gates whose outputs at any time are determined directly from the present
combination of inputs, without any regard to previous inputs. A combinational circuit performs a specific information-
processing operation fully specified logically by a set of Boolean functions.
A combinatorial circuit is a generalized gate. In general such a circuit has m inputs and n outputs. Such a circuit can
always be constructed as n separate combinatorial circuits, each with exactly one output. For that reason, some texts
only discuss combinatorial circuits with exactly one output. In reality, however, some important sharing of intermediate
signals may take place if the entire n-output circuit is constructed at once. Such sharing can significantly reduce the
number of gates required to build the circuit.
When we build a combinatorial circuit from some kind of specification, we always try to make it as good as possible.
The only problem is that the definition of "as good as possible" may vary greatly. In some applications, we simply want
to minimize the number of gates (or the number of transistors, really). In others, we might be interested in as short a
delay (the time it takes a signal to traverse the circuit) as possible, or in as low power consumption as possible. In
general, a mixture of such criteria must be applied.
To specify the exact way in which a combinatorial circuit works, we might use different methods, such as logical
expressions or truth tables.
A truth table is a complete enumeration of all possible combinations of input values, each one with its associated output
value.
When used to describe an existing circuit, output values are (of course) either 0 or 1. Suppose for instance that we wish
to make a truth table for the following circuit:
All we need to do to establish a truth table for this circuit is to compute the output value for the circuit for each possible
combination of input values. We obtain the following truth table:
wxy|ab
-----------
000|01
001|01
010|11
011|10
100|11
101|11
110|11
111|10
When used as a specification for a circuit, a table may have some output values that are not specified, perhaps because
the corresponding combination of input values can never occur in the particular application. We can indicate such
unspecified output values with a dash (-).
For instance, let us suppose we want a circuit of four inputs, interpreted as two nonnegative binary integers of two
binary digits each, and two outputs, interpreted as the nonnegative binary integer giving the quotient of the two
input numbers. Since division is not defined when the denominator is zero, we do not care what the output value is in
this case. Of the sixteen entries in the truth table, four have a zero denominator. Here is the truth table:
x1 x0 y1 y0 | z1 z0
-------------------
0 0 0 0 |- -
0 0 0 1 |0 0
0 0 1 0 |0 0
0 0 1 1 |0 0
0 1 0 0 |- -
0 1 0 1 |0 1
0 1 1 0 |0 0
0 1 1 1 |0 0
1 0 0 0 |- -
1 0 0 1 |1 0
1 0 1 0 |0 1
1 0 1 1 |0 0
1 1 0 0 |- -
1 1 0 1 |1 1
1 1 1 0 |0 1
1 1 1 1 |0 1
Unspecified output values like this can greatly decrease the number of gates necessary to build the circuit. The reason
is simple: when we are free to choose the output value in a particular situation, we choose the one that gives the fewest
total number of gates.
Circuit minimization is a difficult problem from a complexity point of view. Computer programs that try to optimize
circuit design apply a number of heuristics to improve speed. In this course, we are not concerned with optimality. We
are therefore only going to discuss a simple method that works for all possible combinatorial circuits (but that can waste
large numbers of gates).
A separate single-output circuit is built for each output of the combinatorial circuit.
Our simple method starts with the truth table (or rather one of the acceptable truth tables, in case we have a choice). Our
circuit is going to be a two-layer circuit. The first layer of the circuit will have at most 2^n AND-gates, each with n inputs
(where n is the number of inputs of the combinatorial circuit). The second layer will have a single OR-gate with as many
inputs as there are gates in the first layer. For each line of the truth table with an output value of 1, we put down an AND-
gate with n inputs. For each input entry in the table with a 1 in it, we connect an input of the AND-gate to the
corresponding input. For each entry in the table with a 0 in it, we connect an input of the AND-gate to the corresponding
input inverted.
The output of each AND-gate of the first layer is then connected to an input of the OR-gate of the second layer.
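The two-layer construction above can be sketched in Python: one AND term per 1-row of the truth table, ORed together. As a check, the sketch rebuilds the 'a' output column of the w,x,y truth table shown earlier:

```python
from itertools import product

def sop_from_truth_table(outputs, n):
    """Two-layer method: one AND term per truth-table row whose output
    is 1 (inputs inverted where the row bit is 0), ORed together."""
    rows = list(product((0, 1), repeat=n))
    ones = [row for row, out in zip(rows, outputs) if out == 1]
    def f(*inputs):
        # OR (second layer) over the AND terms (first layer)
        return int(any(all(x == bit for x, bit in zip(inputs, row))
                       for row in ones))
    return f

# The 'a' column of the earlier w,x,y table, rows in order 000..111.
a_column = [0, 0, 1, 1, 1, 1, 1, 1]
a = sop_from_truth_table(a_column, 3)
print([a(*row) for row in product((0, 1), repeat=3)])  # [0, 0, 1, 1, 1, 1, 1, 1]
```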
As an example of our general method, consider the following truth table (where a - indicates that we don't care what
value is chosen):
A=x′y′z+x′yz′+xyz
B=x′y′z+xy′z′
For the first column, we get three 3-input AND-gates in the first layer, and a 3-input OR-gate in the second layer. We get
three AND-gates since there are three rows in the a column with a value of 1. Each one has 3 inputs since there are
three inputs, x, y, and z, of the circuit. We get a 3-input OR-gate in the second layer since there are three AND-gates in
the first layer.
For the second column, we get two 3-input AND-gates in the first layer, and a 2-input OR-gate in the second layer. We
get two AND-gates since there are two rows in the b column with a value of 1. Each one has 3 inputs since again there
are three inputs, x, y, and z, of the circuit. We get a 2-input OR-gate in the second layer since there are two AND-gates
in the first layer.
While this circuit works, it is not the one with the fewest number of gates. In fact, since both output columns have a 1 in
the row corresponding to the inputs 0 0 1, it is clear that the gate for that row can be shared between the two subcircuits.
In some cases, even smaller circuits can be obtained, if one is willing to accept more layers (and thus a higher circuit
delay).
Operations on binary variables can be described by means of an appropriate mathematical function called a Boolean function.
A Boolean function defines a mapping from a set of binary input values into a set of output values. A Boolean function is
formed with binary variables, the binary operators AND and OR, and the unary operator NOT.
For example, a Boolean function f(x1,x2,x3,……,xn) = y defines a mapping from an arbitrary combination of binary input
values (x1,x2,x3,……,xn) into a binary value y. A Boolean function with n input variables can operate on 2^n distinct input combinations.
Any such function can be described by using a truth table consisting of 2^n rows and n+1 columns. The contents of this table
are the values produced by that function when applied to all the possible combinations of the n input variables.
Example
x y x.y
0 0 0
0 1 0
1 0 0
1 1 1
The function f represents x.y, that is, f(x,y)=xy, which means that f=1 if x=1 and y=1, and f=0 otherwise.
For each row of the table, there is a value of the function equal to 1 or 0. The function f is equal to the sum of all rows
that give a value of 1.
A Boolean function may be transformed from an algebraic expression into a logic diagram composed of AND, OR and
NOT gates. When a Boolean function is implemented with logic gates, each literal in the function designates an input to a
gate and each term is implemented with a logic gate. e.g.
F=xyz
F=x+y′z
Complement of a function
The complement of a function F is F′ and is obtained by interchanging 0's and 1's in the values of F.
The complement of a function may be derived algebraically through De Morgan's theorem:
(A+B+C+….)′= A′B′C′….
(ABC….)′= A′+ B′+C′……
The generalized form of de Morgan’s theorem state that the complement of function is obtained by interchanging AND
and OR and complementing each literal.
F = X′YZ′ + X′Y′Z′
F′ = (X′YZ′ + X′Y′Z′)′
= (X′YZ′)′.(X′Y′Z′)′
= (X′′+Y′+Z′′)(X′′+Y′′+Z′′)
= (X+Y′+Z)(X+Y+Z)
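This De Morgan derivation can be checked by brute force over all eight input combinations; a short Python sketch:

```python
from itertools import product

# F = X'YZ' + X'Y'Z'  and the claimed complement F' = (X+Y'+Z)(X+Y+Z).
F  = lambda x, y, z: ((1-x) & y & (1-z)) | ((1-x) & (1-y) & (1-z))
Fc = lambda x, y, z: (x | (1-y) | z) & (x | y | z)

for x, y, z in product((0, 1), repeat=3):
    assert Fc(x, y, z) == 1 - F(x, y, z)   # F' is the complement of F everywhere
print("F' matches De Morgan's result")
```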
A binary variable may appear either in its normal form or in its complemented form. Consider two binary variables x and y
combined with the AND operation. Since each variable may appear in either form, there are four possible combinations:
x′y′, x′y, xy′, xy. Each of these terms represents one distinct area in the Venn diagram and is called a minterm or a standard
product. With n variables, 2^n minterms can be formed.
In a similar fashion, n variables forming an OR term provide 2^n possible combinations called maxterms or standard
sums. Each maxterm is obtained from an OR term of the n variables, with each variable being primed if the
corresponding bit is 1 and unprimed if the corresponding bit is 0. Note that each maxterm is the complement of its
corresponding minterm and vice versa.
X Y Z | Minterm | Maxterm
0 0 0 | x′y′z′  | x+y+z
0 0 1 | x′y′z   | x+y+z′
0 1 0 | x′yz′   | x+y′+z
0 1 1 | x′yz    | x+y′+z′
1 0 0 | xy′z′   | x′+y+z
1 0 1 | xy′z    | x′+y+z′
1 1 0 | xyz′    | x′+y′+z
1 1 1 | xyz     | x′+y′+z′
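The table above follows a mechanical rule (prime a minterm variable where the bit is 0; prime a maxterm variable where the bit is 1), so it can be generated in a few lines of Python:

```python
from itertools import product

variables = ("x", "y", "z")
for bits in product((0, 1), repeat=3):
    # minterm: variable unprimed where bit is 1, primed where bit is 0
    minterm = "".join(v if b else v + "'" for v, b in zip(variables, bits))
    # maxterm: variable primed where bit is 1, unprimed where bit is 0
    maxterm = "+".join(v + "'" if b else v for v, b in zip(variables, bits))
    print(bits, minterm, "(" + maxterm + ")")
```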
A Boolean function may be expressed algebraically from a given truth table by forming a minterm for each combination
of the variables that produces a 1 and then taking the OR of those terms.
Similarly, the same function can be obtained by forming the maxterm for each combination of the variables that produces a 0
and then taking the AND of those terms.
It is sometimes convenient to express a Boolean function, when it is a sum of minterms, in the following notation:
F(X,Y,Z)=∑(1,4,5,6,7). The summation symbol ∑ stands for the ORing of the terms; the numbers following it are the
minterms of the function. The letters in parentheses following F form the list of the variables in the order taken when
a minterm is converted to an AND term.
Sometimes it is convenient to express a Boolean function as a sum of minterms. If it is not in that form, the expression is
first expanded into a sum of AND terms, and if any variable is missing from a term, that term is ANDed with an expression such as x+x′,
where x is a missing variable.
To express a Boolean function as a product of maxterms, it must first be brought into a form of OR terms. This can be
done by using the distributive law x+yz=(x+y)(x+z). Then, if any variable, say x, is missing from an OR term, that term is ORed
with xx′. For example, for F = xy + x′z:
F = xy + x′z
= (xy + x′)(xy + z)
= (x+x′)(y+x′)(x+z)(y+z)
= (y+x′)(x+z)(y+z)
Adding the missing variable in each term:
(y+x′) = x′+y+zz′ = (x′+y+z)(x′+y+z′)
(x+z) = x+z+yy′ = (x+y+z)(x+y′+z)
(y+z) = y+z+xx′ = (x+y+z)(x′+y+z)
Combining the maxterms and removing the duplicate (x+y+z):
F(x,y,z) = (x+y+z)(x+y′+z)(x′+y+z)(x′+y+z′) = ∏ (0,2,4,5)
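Assuming the function being expanded is F = xy + x′z (the form consistent with the factorization steps shown), a brute-force check confirms the maxterm list: the rows where F evaluates to 0 are exactly 0, 2, 4 and 5.

```python
from itertools import product

# F = xy + x'z, evaluated over all rows of the truth table (xyz = 000..111).
F = lambda x, y, z: (x & y) | ((1 - x) & z)

zero_rows = [i for i, (x, y, z) in enumerate(product((0, 1), repeat=3))
             if F(x, y, z) == 0]
print(zero_rows)  # [0, 2, 4, 5] — the maxterm indices of ∏(0,2,4,5)
```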
Standard form
Another way to express a Boolean function is in standard form. Here the terms that form the function may contain one,
two, or any number of literals. There are two types of standard forms: the sum of products and the product of sums.
The sum of products (SOP) is a Boolean expression containing AND terms, called product terms, of one or more literals
each. The sum denotes the ORing of these terms.
e.g. F=x+xy′+x′yz
The product of sums (POS) is a Boolean expression containing OR terms, called sum terms. Each term may have any
number of literals. The product denotes the ANDing of these terms.
e.g. F= x(x+y′)(x′+y+z)
A Boolean function may also be expressed in a non-standard form. In that case, the distributive law can be used to remove
the parentheses.
F=(xy+zw)(x′y′+z′w′)
= xy(x′y′+z′w′)+zw(x′y′+z′w′)
=xyz′w′+zwx′y′
A Boolean equation can be reduced to a minimal number of literals by algebraic manipulation. Unfortunately, there are
no specific rules to follow that will guarantee the final answer. The only method is to use the theorems and postulates of
Boolean algebra and any other manipulation that becomes familiar.
To define what a combinatorial circuit does, we can use a logic expression or an expression for short. Such an
expression uses the two constants 0 and 1, variables such as x, y, and z (sometimes with suffixes) as names of inputs and
outputs, and the operators +, . and a horizontal bar or a prime (which stands for not). As usual, multiplication is
considered to have higher priority than addition. Parentheses are used to modify the priority.
For SOP forms AND gates will be in the first level and a single OR gate will be in the second level.
For POS forms OR gates will be in the first level and a single AND gate will be in the second level.
Note that using inverters to complement input variables is not counted as a level.
(X′+Y)(Y+XZ′)′+X(YZ)′
The equation is neither in sum of product nor in product of sum. The implementation is as follow
X1X2′X3 + X1′X2′X2 + X1′X2X3′
The equation is in sum of product. The implementation is in 2-Levels. AND gates form the first level and a single OR
gate the second level.
(X+1)(Y+0Z)
The equation is neither in sum of product nor in product of sum. The implementation is as follow
A valid question is: can logic expressions describe all possible combinatorial circuits? The answer is yes, and here is
why:
You can trivially convert the truth table for an arbitrary circuit into an expression. The expression will be in the form of
a sum of products of variables and their inverses. Each row of the truth table with an output value of 1 corresponds to one
term in the sum. In such a term, a variable having a 1 in the truth table will be uninverted, and a variable having a 0 in
the truth table will be inverted.
xyz|f
---------
000|0
001|0
010|1
011|0
100|1
101|0
110|0
111|1
The corresponding expression is:
X′YZ′ + XY′Z′ + XYZ
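This conversion can be verified by evaluating the expression X′YZ′ + XY′Z′ + XYZ for every row and comparing against the f column of the table:

```python
from itertools import product

# f column of the truth table above, rows in xyz order 000..111.
table = [0, 0, 1, 0, 1, 0, 0, 1]

# X'YZ' + XY'Z' + XYZ, one product term per 1-row (010, 100, 111).
f = lambda x, y, z: ((1-x) & y & (1-z)) | (x & (1-y) & (1-z)) | (x & y & z)

assert [f(x, y, z) for x, y, z in product((0, 1), repeat=3)] == table
print("expression matches the truth table")
```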
Since you can describe any combinatorial circuit with a truth table, and you can describe any truth table with an
expression, you can describe any combinatorial circuit with an expression.
There are many logic expressions (and therefore many circuits) that correspond to a certain truth table, and therefore to a
certain function computed. For instance, the following two expressions compute the same function:
X(Y+Z) XY+XZ
The left one requires two gates, one and-gate and one or-gate. The second expression requires two and-gates and one
or-gate. It seems obvious that the first one is preferable to the second one. However, this is not always the case. It is not
always true that the number of gates is the only way, nor even the best way, to determine simplicity.
We have, for instance, assumed that gates are ideal. In reality, the signal takes some time to propagate through a gate.
We call this time the gate delay. We might be interested in circuits that minimize the total gate delay, in other words,
circuits that make the signal traverse the fewest possible gates from input to output. Such circuits are not necessarily the
same ones that require the smallest number of gates.
Circuit minimization
The complexity of the digital logic gates that implement a Boolean function is directly related to the complexity of the
algebraic expression from which the function is implemented. Although the truth table representation of a function is
unique, it can appear in many different forms when expressed algebraically.
x + x′y = (x+x′)(x+y) = x+y
Simplify x′y′z + x′yz + xy′:
x′y′z + x′yz + xy′ = x′z(y+y′) + xy′
= x′z + xy′
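An algebraic simplification like this is easy to validate exhaustively: the original and simplified expressions must agree on every input row. A Python sketch:

```python
from itertools import product

# Original: x'y'z + x'yz + xy'    Simplified: x'z + xy'
lhs = lambda x, y, z: ((1-x) & (1-y) & z) | ((1-x) & y & z) | (x & (1-y))
rhs = lambda x, y, z: ((1-x) & z) | (x & (1-y))

for x, y, z in product((0, 1), repeat=3):
    assert lhs(x, y, z) == rhs(x, y, z)   # identical on all 8 rows
print("x'y'z + x'yz + xy' == x'z + xy'")
```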
Karnaugh map
The Karnaugh map, also known as a Veitch diagram or simply as a K-map, is a two-dimensional form of the truth table,
drawn in such a way that the simplification of a Boolean expression can immediately be seen from the location of 1's
in the map. The map is a diagram made up of squares, each square representing one minterm. Since any Boolean function
can be expressed as a sum of minterms, it follows that a Boolean function is recognised graphically in the map from the
area enclosed by those squares whose minterms are included in the function.
Two-variable map:
         A=0     A=1
B=0      A′B′    AB′
B=1      A′B     AB

Three-variable map:
         AB=00   AB=01   AB=11   AB=10
C=0      A′B′C′  A′BC′   ABC′    AB′C′
C=1      A′B′C   A′BC    ABC     AB′C

Four-variable map:
         AB=00    AB=01    AB=11    AB=10
CD=00    A′B′C′D′ A′BC′D′  ABC′D′   AB′C′D′
CD=01    A′B′C′D  A′BC′D   ABC′D    AB′C′D
CD=11    A′B′CD   A′BCD    ABCD     AB′CD
CD=10    A′B′CD′  A′BCD′   ABCD′    AB′CD′
To simplify a Boolean function using a Karnaugh map, the first step is to plot all the 1’s of the function’s truth table on the
map. The next step is to combine adjacent 1’s into groups of one, two, four, eight or sixteen. The groups of minterms
should be as large as possible; a single group of four minterms yields a simpler expression than two groups of two
minterms.
The final stage is reached when each of the groups of minterms is ORed together to form the simplified sum-of-products
expression.
The Karnaugh map is not a square or rectangle as it may appear in the diagram. The top edge is adjacent to the bottom
edge and the left-hand edge is adjacent to the right-hand edge. Consequently, two squares in a Karnaugh map are said to be
adjacent if they differ by only one variable.
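The column and row labels 00, 01, 11, 10 are deliberately written in Gray-code order, so that neighbouring squares, including the wrap-around pair at the edges, differ in exactly one variable. A quick check of that property:

```python
def hamming(a, b):
    # Number of bit positions in which two labels differ
    return sum(x != y for x, y in zip(a, b))

cols = ["00", "01", "11", "10"]  # Gray-code order used on K-map axes

# Every neighbouring pair, including the wrap-around pair (10, 00),
# differs in exactly one bit, which is what makes K-map grouping work.
adjacent = all(hamming(cols[i], cols[(i + 1) % 4]) == 1 for i in range(4))
print(adjacent)  # True
```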
Implicant
In Boolean logic, an implicant is a "covering" (sum term or product term) of one or more minterms in a sum of products
(or maxterms in a product of sums) of a Boolean function. Formally, a product term P in a sum of products is an
implicant of the Boolean function F if P implies F. More precisely:
P implies F (and thus is an implicant of F) if F also takes the value 1 whenever P equals 1,
where
• F is a Boolean function of n variables.
• P is a product term.
This means that P ≤ F with respect to the natural ordering of the Boolean space. For instance, consider the function
f(x,y,z,w) = xy + yz + w
Each of its product terms is an implicant: whenever xy = 1, for example, f also equals 1.
Prime implicant
A prime implicant of a function is an implicant that cannot be covered by a more general (more reduced, meaning with
fewer literals) implicant. W. V. Quine defined a prime implicant of F to be an implicant that is minimal: the
removal of any literal from P results in a non-implicant for F. Essential prime implicants are prime implicants that cover
an output of the function that no combination of other prime implicants is able to cover.
[K-map figures: in the first map, each circled group of 1’s is a prime implicant. In the second map, the prime implicants whose group contains a 1 covered by no other prime implicant are marked as essential prime implicants; a group whose 1’s are all covered elsewhere is a non-essential prime implicant.]
In simplifying a Boolean function using a Karnaugh map, non-essential prime implicants are not needed.
a b c M (output)
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 0
1 0 1 1
1 1 0 1
1 1 1 1
M = bc + ac + ab
The abc term was replicated and combined with the other terms.
Plotting M on a three-variable map (columns ab in the order 00, 01, 11, 10):

        ab=00   ab=01   ab=11   ab=10
c=0                     1
c=1             1       1       1
The 1s are in the same places as they were in the original truth table. The 1 in the first row is at position 110 (a = 1, b =
1, c = 0).
The minimization is done by drawing circles around sets of adjacent 1s. Adjacency is horizontal, vertical, or both. The
circles must always contain 2^n 1s where n is an integer.
[The same map, now with a circle drawn around the two adjacent 1s at ab = 01 and ab = 11 in the c = 1 row.]
We have circled two 1s. The fact that the circle spans the two possible values of a
(0 and 1) means that the a term is eliminated from the Boolean expression corresponding to this circle.
Now we have drawn circles around all the 1s. Thus the expression reduces to
bc + ac + ab
as we saw before.
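The expression read off the map can be checked against the truth table given above; an illustrative sketch:

```python
from itertools import product

# Truth table from above: (a, b, c) -> M
table = {(0, 0, 0): 0, (0, 0, 1): 0, (0, 1, 0): 0, (0, 1, 1): 1,
         (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 1, (1, 1, 1): 1}

def reduced(a, b, c):
    # bc + ac + ab, the expression obtained from the map
    return (b & c) | (a & c) | (a & b)

# The reduced expression must reproduce every row of the table.
ok = all(reduced(a, b, c) == table[(a, b, c)]
         for a, b, c in product((0, 1), repeat=3))
print(ok)  # True
```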
What is happening? What does adjacency and grouping the 1s together have to do with minimization? Notice that the 1
at position 111 was used by all 3 circles. This 1 corresponds to the abc term that was replicated in the original algebraic
minimization. Adjacency of two 1s means that the terms corresponding to those 1s differ in one variable only. In one case
that variable is negated and in the other it is not.
The map is easier than algebraic minimization because we just have to recognize patterns of 1s in the map instead of
using the algebraic manipulations. Adjacency also applies to the edges of the map.
Now for 4 Boolean variables. The Karnaugh map is drawn as shown below.

        AB=00   AB=01   AB=11   AB=10
CD=00                   1
CD=01           1       1
CD=11           1       1       1
CD=10                   1       1
RULE: Minimization is achieved by drawing the smallest possible number of circles, each containing the largest
possible number of 1s.
[The same map with three circles drawn, giving the groups BD, AC and AB.]
Q = BD + AC + AB
This expression requires three 2-input AND gates and one 3-input OR gate.
Other examples
1. F=A′B+AB

        A=0   A=1
B=0
B=1      1     1

The two 1s form a single group in which A is eliminated, so F = B.
2. F=A′B′C′+A′B′C+A′BC′+ABC′+ABC
3. F=AB+A′BC′D+A′BCD+AB′C′D′
        AB=00   AB=01   AB=11   AB=10
CD=00                   1       1
CD=01           1       1
CD=11           1       1
CD=10                   1

F = BD + AB + AC′D′
4. F=AC′D′+A′B′C+A′C′D+AB′D
        AB=00   AB=01   AB=11   AB=10
CD=00                   1       1
CD=01   1       1               1
CD=11   1                       1
CD=10   1

F = B′D + AC′D′ + A′C′D + A′B′C
        AB=00   AB=01   AB=11   AB=10
CD=00   1                       1
CD=01           1       1
CD=11           1       1
CD=10   1                       1

F = BD + B′D′
F=A′B′C′D′+A′BC′D′+AB′C′D′+A′BC′D+A′B′CD′+A′BCD′+AB′CD′
        AB=00   AB=01   AB=11   AB=10
CD=00   1       1       0       1
CD=01   0       1       0       0
CD=11   0       0       0       0
CD=10   1       1       0       1
Grouping the 0’s instead of the 1’s gives the simplified complement F′=AB+CD+B′D. Since F′′=F, by applying De Morgan’s rule to F′, we obtain
F=(AB+CD+B′D)′=(A′+B′)(C′+D′)(B+D′)
A B C D F
0 0 0 0 0
0 0 0 1 0
0 0 1 0 1
0 0 1 1 1
0 1 0 0 0
0 1 0 1 1
0 1 1 0 0
0 1 1 1 1
1 0 0 0 0
1 0 0 1 0
1 0 1 0 X
1 0 1 1 X
1 1 0 0 X
1 1 0 1 X
1 1 1 0 X
1 1 1 1 X
F=A′B′CD′+A′B′CD+A′BC′D+A′BCD
The X in the above stands for "don’t care": we don't care whether a 1 or 0 is the value for that combination of inputs
because (in this case) those inputs will never occur.
        AB=00   AB=01   AB=11   AB=10
CD=00   0       0       x       0
CD=01   0       1       x       0
CD=11   1       1       x       x
CD=10   1       0       x       x

F = BD + B′C
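The simplified expression can be checked against the specified rows of the truth table; the don't-care minterms (m10 to m15) are simply not tested. A sketch:

```python
ones = {0b0010, 0b0011, 0b0101, 0b0111}                    # rows where F = 1
zeros = {0b0000, 0b0001, 0b0100, 0b0110, 0b1000, 0b1001}   # rows where F = 0

def f(a, b, c, d):
    # F = BD + B'C
    return (b & d) | ((1 - b) & c)

def bits(m):
    # minterm number -> (A, B, C, D)
    return ((m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1)

ok = (all(f(*bits(m)) == 1 for m in ones) and
      all(f(*bits(m)) == 0 for m in zeros))
print(ok)  # True
```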
The Quine–McCluskey algorithm (or the method of prime implicants) is a method used for minimization of Boolean
functions. It is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in
computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function has
been reached.
The method involves two steps: first, find all prime implicants of the function; then use those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as
other prime implicants that are necessary to cover the function.
ABCD f
m0 0 0 0 0 0
m1 0 0 0 1 0
m2 0 0 1 0 0
m3 0 0 1 1 0
m4 0 1 0 0 1
m5 0 1 0 1 0
m6 0 1 1 0 0
m7 0 1 1 1 0
m8 1 0 0 0 1
m9 1 0 0 1 x
m10 1 0 1 0 1
m11 1 0 1 1 1
m12 1 1 0 0 1
m13 1 1 0 1 0
m14 1 1 1 0 x
m15 1 1 1 1 1
One can easily form the canonical sum-of-products expression from this table, simply by summing the minterms
(leaving out don't-care terms) where the function evaluates to one:
F=A′BC′D′+AB′C′D′+AB′CD′+AB′CD+ABC′D′+ABCD
Of course, that is certainly not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm
table. Don't-care terms are also added into this table, so they can be combined with minterms.
At this point, the terms marked with * can be seen as a solution; that is, the solution is
F=AB′+AD′+AC+BC′D′
If the Karnaugh map were used, we would have obtained an expression simpler than this. To obtain a minimal form, we
need to use the prime implicant chart.
None of the terms can be combined any further than this, so at this point we construct a prime implicant table.
Along the side go the prime implicants that have just been generated, and along the top go the minterms specified
in the function:

                 4   8   10  11  12  15
m(4,12)          X               X          -100 (BC′D′)
m(8,9,10,11)         X   X   X              10-- (AB′)
m(8,10,12,14)        X   X       X          1--0 (AD′)
m(10,11,14,15)           X   X       X      1-1- (AC)

In the prime implicant table shown above, there are 4 rows, one row for each prime implicant, and 6 columns,
each representing one minterm of the function. An X is placed in a row to indicate each minterm contained in the prime
implicant of that row. For example, the two Xs in the first row indicate that minterms 4 and 12 are contained in the prime
implicant represented by (-100), i.e. BC′D′.
The completed prime implicant table is inspected for columns containing only a single X. In this example, there are two
minterms whose columns have a single X: 4 and 15. Minterm 4 is covered only by prime implicant BC′D′; that is, the selection
of prime implicant BC′D′ guarantees that minterm 4 is included in the function. Similarly, minterm 15 is covered only by
prime implicant AC. Prime implicants that cover minterms with a single X in their column are called essential prime
implicants.
Now we find every column whose minterm is covered by the selected essential prime implicants.
For this example, essential prime implicant BC′D′ covers minterms 4 and 12, and essential prime implicant AC covers 10, 11
and 15. Inspection of the implicant table shows that all the minterms are covered by the essential prime implicants
except minterm 8. The minterms not yet covered must be included by the selection of one or more additional prime implicants.
In this example there is only one such minterm, 8. It can be included in the selection by choosing either the
prime implicant AB′ or AD′, since both of them contain minterm 8. We have thus found the minimum sets
of prime implicants whose sums give the required minimized function:
F=BC′D′+AD′+AC OR F=BC′D′+AB′+AC.
Both of those final equations are functionally equivalent to the original canonical (very area-expensive) sum-of-products expression.
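Both minimum covers can be checked mechanically against the specification; they may legitimately differ on the don't-care minterms 9 and 14, so those are excluded. An illustrative sketch:

```python
ones = {4, 8, 10, 11, 12, 15}   # care minterms where F = 1
dont_cares = {9, 14}

def bits(m):
    # minterm number -> (A, B, C, D)
    return tuple((m >> s) & 1 for s in (3, 2, 1, 0))

def cover1(a, b, c, d):
    # BC'D' + AD' + AC
    return (b and not c and not d) or (a and not d) or (a and c)

def cover2(a, b, c, d):
    # BC'D' + AB' + AC
    return (b and not c and not d) or (a and not b) or (a and c)

# On every care minterm, both covers must agree with the specification.
ok = all(bool(cover1(*bits(m))) == bool(cover2(*bits(m))) == (m in ones)
         for m in range(16) if m not in dont_cares)
print(ok)  # True
```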
In addition to AND, OR, and NOT gates, other logic gates like NAND and NOR are also used in the design of digital
circuits.
The NAND gate represents the complement of the AND operation. Its name is an
abbreviation of NOT AND. The graphic symbol for the NAND gate consists of an AND symbol with a bubble on the
output, denoting that a complement operation is performed on the output of the AND gate as shown earlier
A universal gate is a gate which can implement any Boolean function without the need to use any other gate type. The
NAND and NOR gates are universal gates. In practice, this is advantageous since NAND and NOR gates are
economical and easier to fabricate, and are the basic gates used in all IC digital logic families. In fact, an AND gate is
typically implemented as a NAND gate followed by an inverter, not the other way around.
Likewise, an OR gate is typically implemented as a NOR gate followed by an inverter, not the other way around.
To prove that any Boolean function can be implemented using only NAND gates, we will show that the AND, OR, and
NOT operations can be performed using only these gates. A universal gate is a gate which can implement any Boolean
function without need to use any other gate type.
The figure shows two ways in which a NAND gate can be used as an inverter (NOT gate).
1. Connecting all NAND input pins to the input signal A gives an output of A′.
2. One NAND input pin is connected to the input signal A while all other input pins are connected to logic 1. The output
will be A′.
An AND gate can be replaced by NAND gates as shown in the figure (The AND is
replaced by a NAND gate with its output complemented by a NAND gate inverter).
An OR gate can be replaced by NAND gates as shown in the figure (The OR gate is replaced by a NAND gate with all
its inputs complemented by NAND gate inverters).
To prove that any Boolean function can be implemented using only NOR gates, we will show that the AND, OR, and
NOT operations can be performed using only these gates.
The figure shows two ways in which a NOR gate can be used as an inverter (NOT gate).
1. Connecting all NOR input pins to the input signal A gives an output of A′.
2. One NOR input pin is connected to the input signal A while all other input pins are connected to logic 0. The output
will be A′.
An OR gate can be replaced by NOR gates as shown in the figure (The OR is replaced by a N OR gate with its output
complemented by a NOR gate inverter)
An AND gate can be replaced by NOR gates as shown in the figure (The AND gate is replaced by a NOR gate with all
its inputs complemented by NOR gate inverters)
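The dual constructions with NOR gates can be sketched and checked the same way:

```python
def nor(a, b):
    return not (a or b)

def not_gate(a):
    return nor(a, a)                      # both inputs tied to A

def or_gate(a, b):
    return nor(nor(a, b), nor(a, b))      # NOR followed by a NOR inverter

def and_gate(a, b):
    return nor(not_gate(a), not_gate(b))  # NOR with both inputs inverted

pairs = [(False, False), (False, True), (True, False), (True, True)]
ok = (all(or_gate(a, b) == (a or b) for a, b in pairs) and
      all(and_gate(a, b) == (a and b) for a, b in pairs))
print(ok)  # True
```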
Equivalent Gates:
The shown figure summarizes important cases of gate equivalence. Note that bubbles indicate a complement operation
(inverter).
Two NOT gates in series are the same as a buffer because they cancel each other, as A′′=A.
We have seen before that Boolean functions in either SOP or POS forms can be implemented using 2-level
implementations.
For SOP forms, AND gates will be in the first level and a single OR gate will be in the second level.
For POS forms, OR gates will be in the first level and a single AND gate will be in the second level.
Note that using inverters to complement input variables is not counted as a level.
To implement a function using NAND gates only, it must first be simplified to a sum of products, and to implement a
function using NOR gates only, it must first be simplified to a product of sums.
We will show that SOP forms can be implemented using only NAND gates, while POS forms can be implemented using
only NOR gates through examples.
Example 1: Implement the following SOP function using NAND gate only
F = XZ + Y′Z + X′YZ
Introducing two successive inverters at the inputs of the OR gate results in the shown equivalent implementation, since
two successive inverters on the same line will not have an overall effect on the logic, as was shown before.
By associating one of the inverters with the output of the first-level AND gates and the other with the input of the OR
gate, it is clear that this implementation is reducible to a 2-level implementation where both levels are NAND gates, as
shown in the figure.
Introducing two successive inverters at the inputs of the AND gate results in the shown equivalent implementation, since
two successive inverters on the same line will not have an overall effect on the logic, as was shown before.
By associating one of the inverters with the output of the first-level OR gates and the other with the input of the AND
gate, it is clear that this implementation is reducible to a 2-level implementation where both levels are NOR gates, as
shown in the figure.
• NAND-AND
• AND-NOR,
• NOR-OR,
• OR-NAND
AND-NOR functions:
By complementing the output we can get F, or by using NAND-AND circuit as shown in the figure.
It can also be implemented using AND-NOR circuit as it is equivalent to NAND- AND circuit as shown in the figure.
By complementing the output we can get F, or by using NOR-OR circuit as shown in the figure.
It can also be implemented using OR-NAND circuit as it is equivalent to NOR-OR circuit as shown in the figure
The design of a combinational circuit starts from the verbal outline of the problem and ends with a logic circuit diagram
or a set of Boolean functions from which the logic diagram can be easily obtained. The procedure involves the
following steps:
- State the problem and determine the required number of inputs and outputs.
- Assign a letter symbol to each input and output variable.
- Derive the truth table that defines the required relationship between inputs and outputs.
- Obtain the simplified Boolean function for each output.
- Draw the logic diagram.
Adders
In electronics, an adder or summer is a digital circuit that performs addition of numbers. In modern computers adders
reside in the arithmetic logic unit (ALU) where other operations are performed. Although adders can be constructed for
many numerical representations, such as Binary-coded decimal or excess-3, the most common adders operate on binary
numbers. In cases where two's complement or one's complement is being used to represent negative numbers, it is trivial
to modify an adder into an adder-subtractor. Other signed number representations require a more complex adder.
Half Adder
A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum
and a carry value which are both binary digits.
A half adder has two inputs, generally labelled A and B, and two outputs, the sum S and carry C. S is the XOR
of A and B, and C is the AND of A and B. Essentially the output of a half adder is the sum of two one-bit numbers, with
C being the more significant of these two outputs.
The drawback of this circuit is that in case of a multibit addition, it cannot include a carry.
A B Carry Sum
0 0 0 0
0 1 0 1
1 0 0 1
1 1 1 0
Sum=A′B+AB′ Carry=AB
One can see that Sum can also be implemented using an XOR gate, as Sum = A ⊕ B.
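The half adder equations translate directly into a small behavioural model (illustrative):

```python
def half_adder(a, b):
    s = ((1 - a) & b) | (a & (1 - b))  # Sum = A'B + AB', i.e. A xor B
    c = a & b                          # Carry = AB
    return s, c

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```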
A full adder has three inputs A, B, and a carry in C, such that multiple adders can be used to add larger numbers. To
remove ambiguity between the input and output carry lines, the carry in is labelled Ci or Cin while the carry out is
labelled Co or Cout.
A full adder is a logical circuit that performs an addition operation on three binary digits. The full adder produces a sum
and carry value, which are both binary digits. It can be combined with other full adders or work on its own.
Input Output
A B Ci Co S
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1
Co=A′BCi+AB′Ci+ABCi′+ABCi
S=A′B′Ci+A′BCi′+AB′Ci′+ABCi
A full adder can be trivially built using our ordinary design methods for combinatorial circuits; the resulting circuit is shown in the figure.
Note that the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting
logic. This is because the only discrepancy between OR and XOR gates occurs when both inputs are 1; for the adder
shown here, this is never possible. Using only two types of gates is convenient if one desires to implement the adder
directly using common IC chips.
A full adder can be constructed from two half adders by connecting A and B to the inputs of one half adder, connecting
the sum from that to an input of the second half adder, connecting Ci to the other input, and ORing the two carry outputs.
Equivalently, S could be made the three-bit XOR of A, B, and Ci, and Co could be made the three-bit majority function of
A, B, and Ci. The output of the full adder is the two-bit arithmetic sum of three one-bit numbers.
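The two-half-adder construction described above can be sketched behaviourally (illustrative):

```python
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

def full_adder(a, b, ci):
    s1, c1 = half_adder(a, b)    # first half adder on A and B
    s, c2 = half_adder(s1, ci)   # second half adder brings in the carry
    return s, c1 | c2            # OR of the two carry outputs

# The outputs are exactly the 2-bit binary sum of the three input bits.
ok = all(full_adder(a, b, c) == ((a + b + c) % 2, (a + b + c) // 2)
         for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(ok)  # True
```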
It is possible to create a logical circuit using multiple full adders to add N-bit numbers. Each full adder inputs a Cin,
which is the Cout of the previous adder. This kind of adder is a ripple carry adder, since each carry bit "ripples" to the
next full adder. Note that the first (and only the first) full adder may be replaced by a half adder.
The layout of a ripple carry adder is simple, which allows for fast design time; however, the ripple carry adder is
relatively slow, since each full adder must wait for the carry bit to be calculated from the previous full adder. The gate
delay can easily be calculated by inspection of the full adder circuit. Following the path from Cin to Cout shows 2 gates
that must be passed through. Therefore, a 32-bit adder requires 31 carry computations and the final sum calculation for a
total of 31 * 2 + 1 = 63 gate delays.
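The ripple carry structure can be sketched as a loop that chains the carry out of one full adder into the carry in of the next (the width and names are illustrative):

```python
def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)  # carry: majority of the inputs
    return s, cout

def ripple_add(x, y, n=8):
    """Add two n-bit numbers by chaining n full adders, least significant bit first."""
    carry, total = 0, 0
    for i in range(n):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total, carry  # carry is the final carry out

print(ripple_add(100, 55))   # (155, 0)
print(ripple_add(200, 100))  # (44, 1): the 8-bit result wraps, carry out is 1
```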
Subtractor
In electronics, a subtractor can be designed using the same approach as that of an adder. The binary subtraction process
is summarized below. As with an adder, in the general case of calculations on multi-bit numbers, three bits are involved
in performing the subtraction for each bit: the minuend (Xi), subtrahend (Yi), and a borrow in from the previous (less
significant) bit order position (Bi). The outputs are the difference bit (Di) and the borrow bit (Bi+1).
Half subtractor
The half-subtractor is a combinational circuit which is used to perform subtraction of two bits. It has two inputs, X
(minuend) and Y (subtrahend), and two outputs, D (difference) and B (borrow). Such a circuit is called a half-subtractor
because it enables a borrow out of the current arithmetic operation but no borrow in from a previous arithmetic
operation.
X Y D B
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0
D=X′Y+XY′ or D = X ⊕ Y
B=X′Y
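These two equations can be modelled directly (illustrative):

```python
def half_subtractor(x, y):
    """Compute x - y on single bits; returns (difference, borrow)."""
    d = x ^ y        # D = X'Y + XY' = X xor Y
    b = (1 - x) & y  # B = X'Y: a borrow is needed when X = 0 and Y = 1
    return d, b

print(half_subtractor(0, 1))  # (1, 1): 0 - 1 needs a borrow
```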
As in the case of addition using logic gates, a full subtractor is made by combining two half-subtractors and an
additional OR gate. A full subtractor has a borrow-in capability (denoted as BORIN in the table below) and so
allows cascading, which results in the possibility of multi-bit subtraction. (Note that in the truth table below the
minuend is B, so D = B − A − BORIN.)
A B BORIN D BOROUT
0 0 0 0 0
0 0 1 1 1
0 1 0 1 0
0 1 1 0 0
1 0 0 1 1
1 0 1 0 1
1 1 0 0 0
1 1 1 1 1
For a wide range of operations many circuit elements would be required. A neater solution is to perform subtraction via
addition using complementing, as was discussed in the binary arithmetic topic. In this case only adders are needed, as
shown below.
The purpose of circuit minimization is to obtain an algebraic expression that, when implemented, results in a low-cost
circuit. Digital circuits are constructed with integrated circuits (ICs). An IC is a small silicon semiconductor crystal, called a
chip, containing the electronic components for the digital gates. The various gates are interconnected inside the chip to form
the required circuit. Digital ICs are categorized according to their circuit complexity, as measured by the number of logic
gates in a single package.
- Small Scale Integration (SSI). SSI devices contain fewer than 10 gates. The inputs and outputs of the gates are
connected directly to the pins in the package.
- Medium Scale Integration (MSI). MSI devices have the complexity of approximately 10 to 100 gates in a single
package.
- Large Scale Integration (LSI). LSI devices contain between 100 and a few thousand gates in a single package.
- Very Large Scale Integration (VLSI). VLSI devices contain thousands of gates within a single package. VLSI
devices have revolutionized computer system design technology, giving the designer the capability to
create structures that previously were uneconomical.
Multiplexer
A multiplexer is a combinatorial circuit that is given a certain number (usually a power of two) of data inputs, let us say 2^n,
and n address inputs used as a binary number to select one of the data inputs. The multiplexer has a single output, which
has the same value as the selected data input.
In other words, the multiplexer works like the input selector of a home music system. Only one input is selected at a
time, and the selected input is transmitted to the single output. While on the music system the selection of the input is
made manually, the multiplexer chooses its input based on a binary number, the address input.
The truth table for a multiplexer is huge for all but the smallest values of n. We therefore use an abbreviated version of
the truth table in which some inputs are replaced by '-' to indicate that the input value does not matter.
Here is such an abbreviated truth table for n = 3. The full truth table would have 2^(3 + 2^3) = 2^11 = 2048 rows.
We can abbreviate this table even more by using a letter to indicate the value of the selected input, like this:
a2 a1 a0 d7 d6 d5 d4 d3 d2 d1 d0 | x
- - - - - - - - - - - --- -
0 0 0 - - - - - - - c | c
0 0 1 - - - - - - c - | c
0 1 0 - - - - - c - - | c
0 1 1 - - - - c - - - | c
1 0 0 - - - c - - - - | c
1 0 1 - - c - - - - - | c
1 1 0 - c - - - - - - | c
1 1 1 c - - - - - - - | c
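The abbreviated table corresponds to a simple selection function; a behavioural sketch (not the gate-level circuit):

```python
def mux8(data, a2, a1, a0):
    """8-to-1 multiplexer; data is (d7, d6, ..., d0) as in the table above."""
    addr = a2 * 4 + a1 * 2 + a0  # address bits read as a binary number
    return data[7 - addr]        # d0 is the last element of the tuple

d = (0, 0, 0, 0, 0, 1, 0, 0)     # only d2 is 1
print(mux8(d, 0, 1, 0))  # 1: address 010 selects d2
```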
In the same way that we simplified the truth table for the multiplexer, we can also simplify the corresponding circuit.
Indeed, our simple design method would yield a very large circuit. The simplified circuit looks like this:
The demultiplexer is the inverse of the multiplexer, in that it takes a single data input and n address inputs. It has 2^n
outputs. The address inputs determine which data output is going to have the same value as the data input. The other data
outputs will have the value 0.
Here is an abbreviated truth table for the demultiplexer. We could have given the full table since it has only 16 rows, but
we will use the same convention as for the multiplexer where we abbreviated the values of the data inputs.
a2 a1 a0 d | x7 x6 x5 x4 x3 x2 x1 x0
-------------------------------------
0 0 0 c| 0 0 0 0 0 0 0 c
0 0 1 c| 0 0 0 0 0 0 c 0
0 1 0 c| 0 0 0 0 0 c 0 0
0 1 1 c| 0 0 0 0 c 0 0 0
1 0 0 c| 0 0 0 c 0 0 0 0
1 0 1 c| 0 0 c 0 0 0 0 0
1 1 0 c| 0 c 0 0 0 0 0 0
1 1 1 c| c 0 0 0 0 0 0 0
Here is one possible circuit diagram for the demultiplexer:
In both the multiplexer and the demultiplexer, part of the circuit decodes the address inputs, i.e. it translates a binary
number of n digits into 2^n outputs, one of which (the one that corresponds to the value of the binary number) is 1 and the
others of which are 0.
It is sometimes advantageous to separate this function from the rest of the circuit, since it is useful in many other
applications. Thus, we obtain a new combinatorial circuit that we call the decoder. It has the following truth table (for n
= 3):
a2 a1 a0 | x7 x6 x5 x4 x3 x2 x1 x0
----------------------------------
0 0 0| 0 0 0 0 0 0 0 1
0 0 1| 0 0 0 0 0 0 1 0
0 1 0| 0 0 0 0 0 1 0 0
0 1 1| 0 0 0 0 1 0 0 0
1 0 0| 0 0 0 1 0 0 0 0
1 0 1| 0 0 1 0 0 0 0 0
1 1 0| 0 1 0 0 0 0 0 0
1 1 1| 1 0 0 0 0 0 0 0
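Behaviourally, the decoder raises exactly the one output line whose number matches the address; a sketch:

```python
def decoder3(a2, a1, a0):
    """3-to-8 decoder; returns the outputs as (x7, x6, ..., x0)."""
    addr = a2 * 4 + a1 * 2 + a0
    return tuple(int(i == addr) for i in range(7, -1, -1))

print(decoder3(0, 0, 0))  # (0, 0, 0, 0, 0, 0, 0, 1): only x0 is high
```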
Here is the circuit diagram for the decoder:
An encoder has 2^n input lines and n output lines. The output lines generate a binary code corresponding to the input
value. For example, a single-bit 4-to-2 encoder takes in 4 bits and outputs 2 bits. It is assumed that there are only 4 possible
input patterns: 0001, 0010, 0100, 1000.
I3 I2 I1 I0 F1 F0
0 0 0 1 0 0
0 0 1 0 0 1
0 1 0 0 1 0
1 0 0 0 1 1
4 to 2 encoder
A priority encoder is such that if two or more inputs are asserted at the same time, the input having the highest priority will
take precedence. An example of a single-bit 4-to-2 priority encoder is shown.
I3 I2 I1 I0 F1 F0
0 0 0 1 0 0
0 0 1 X 0 1
0 1 X X 1 0
1 X X X 1 1
4 to 2 priority encoder
The X’s designate don’t-care conditions, indicating that the binary value may be equal to either 0 or 1. For
example, the input I3 has the highest priority, so regardless of the values of the other inputs, if I3 is 1 the output is
F1F0 = 11 (binary 3).
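The priority behaviour can be sketched as a chain of tests, highest priority first (illustrative):

```python
def priority_encoder4(i3, i2, i1, i0):
    """4-to-2 priority encoder; the highest-numbered active input wins."""
    if i3:
        return (1, 1)  # F1F0 = 11
    if i2:
        return (1, 0)
    if i1:
        return (0, 1)
    return (0, 0)      # only I0 (or no input) active

print(priority_encoder4(1, 0, 1, 1))  # (1, 1): I3 overrides the lower inputs
```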
Exercise
2. A circuit has four inputs D, C, B, A encoded in natural binary form, where A is the least significant bit. The inputs in the
range 0000 = 0 to 1011 = 11 represent the months of the year from January (0) to December (11). Inputs in the range
1100 to 1111 (i.e. 12 to 15) cannot occur. The output of the circuit is true if the month represented by the input has 31 days;
otherwise the output is false. The output for inputs in the range 1100 to 1111 is undefined.
- Draw the truth table to represent the problem and obtain the function F as a sum of minterms.
- Use the Karnaugh map to obtain a simplified expression for the function F.
- Construct the circuit that implements the function using NOR gates only.
3. A circuit has four inputs P, Q, R, S, representing the natural binary numbers 0000 = 0 to 1111 = 15. P is the most
significant bit. The circuit has one output, X, which is true if the input to the circuit represents a prime number and
false otherwise. (A prime number is a number which is only divisible by 1 and by itself. Note that zero (0000) and
one (0001) are not considered prime numbers.)
i. Design a truth table for this circuit, and hence obtain an expression for X in terms of P, Q, R, S.
ii. Design a circuit diagram to implement this function using NOR gates only.
5. A circuit implements the Boolean function F=A’B’C’D’+A’BCD’+AB’C’D’+ABC’D. It is found that the circuit input
combinations A’B’CD’, A’BC’D’, AB’CD’ can never occur.
i. Find a simpler expression for F using the proper don’t-care conditions.
ii. Design the circuit implementing the simplified expression of F.
7. A circuit has four inputs P, Q, R, S, representing the natural binary numbers 0000 = 0 to 1111 = 15. P is the most
significant bit. The circuit has one output, X, which is true if the number represented is divisible by three (regard zero
as being indivisible by three).
Design a truth table for this circuit, and hence obtain an expression for X in terms of P, Q, R, S as a product of maxterms
and also as a sum of minterms.
Design a circuit diagram to implement this function.
8. Plot the following functions on a K map and use the map to simplify each expression.
F = ABC + A BC + AB C + AB C + A BC + A B C F = AB C + A B C + A BC + A B C
In the previous session, we said that the output of a combinational circuit depends solely upon the input. The implication
is that combinational circuits have no memory. In order to build sophisticated digital logic circuits, including computers,
we need a more powerful model. We need circuits whose output depends upon both the input of the circuit and its
previous state. In other words, we need circuits that have memory.
It is possible to produce circuits with memory using the digital logic gates we've already seen. To do that, we need to
introduce the concept of feedback. So far, the logical flow in the circuits we've studied has been from input to output.
Such a circuit is called acyclic. Now we will introduce a circuit in which the output is fed back to the input, giving the
circuit memory. (There are other memory technologies that store electric charges or magnetic fields; these do not
depend on feedback.)
In the same way that gates are the building blocks of combinatorial circuits, latches and flip-flops are the building blocks
of sequential circuits.
While gates had to be built directly from transistors, latches can be built from gates, and flip-flops can be built from
latches. This fact will make it somewhat easier to understand latches and flip-flops.
Both latches and flip-flops are circuit elements whose output depends not only on the current inputs, but also on
previous inputs and outputs. The difference between a latch and a flip-flop is that a latch does not have a clock signal,
whereas a flip-flop always does.
Latches
How can we make a circuit out of gates that is not combinatorial? The answer is feedback, which means that we create
loops in the circuit diagrams so that output values depend, indirectly, on themselves. If such feedback is positive then
the circuit tends to have stable states, and if it is negative the circuit will tend to oscillate.
In order for a logical circuit to "remember" and retain its logical state even after the controlling input signal(s)
have been removed, it is necessary for the circuit to include some form of feedback. We might start with a pair of
inverters, each having its input connected to the other's output. The two outputs will always have opposite logic
levels.
The circuit shown below is a basic NAND latch. The inputs are generally designated "S" and "R" for "Set" and
"Reset" respectively. Because the NAND inputs must normally be logic 1 to avoid affecting the latching action,
the inputs are considered to be inverted in this circuit.
The outputs of any single-bit latch or memory are traditionally designated Q and Q'. In a commercial latch circuit,
either or both of these may be available for use by other circuits. In any case, the circuit itself is:
For the NAND latch circuit, both inputs should normally be at a logic 1 level. Changing an input to a logic 0 level
will force that output to a logic 1. The same logic 1 will also be applied to the second input of the other NAND gate,
allowing that output to fall to a logic 0 level. This in turn feeds back to the second input of the original gate, forcing
its output to remain at logic 1.
Applying another logic 0 input to the same gate will have no further effect on this circuit. However, applying a logic
0 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the
other way.
Note that it is forbidden to have both inputs at a logic 0 level at the same time. That state will force both outputs to a
logic 1, overriding the feedback latching action. In this condition, whichever input goes to logic 1 first will lose
control, while the other input (still at logic 0) controls the resulting state of the latch. If both inputs go to logic 1
simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.
The same functions can also be performed using NOR gates. A few adjustments must be made to allow for the
difference in the logic function, but the logic involved is quite similar.
The circuit shown below is a basic NOR latch. The inputs are generally designated "S" and "R" for "Set" and
"Reset" respectively. Because the NOR inputs must normally be logic 0 to avoid overriding the latching action, the
inputs are not inverted in this circuit. The NOR-based latch circuit is:
For the NOR latch circuit, both inputs should normally be at a logic 0 level. Changing an input to a logic 1 level will
force t hat output to a logic 0. The s ame l ogic 0 w ill a lso be applied t o the second i nput of the o ther N OR g ate,
allowing that output to rise to a logic 1 level. This in turn feeds back to the second input of the original gate, forcing
its output to remain at logic 0 even after the external input is removed.
Applying another logic 1 input to the same gate will have no further effect on this circuit. However, applying a logic
1 to the other gate will cause the same reaction in the other direction, thus changing the state of the latch circuit the
other way.
Note that it is forbidden to have both inputs at a logic 1 level at the same time. That state will force both outputs to
a logic 0, overriding the feedback latching action. In this condition, whichever input goes to logic 0 first will lose
control, while the other input (still at logic 1) controls the resulting state of the latch. If both inputs go to logic 0
simultaneously, the result is a "race" condition, and the final state of the latch cannot be determined ahead of time.
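The latching behaviour described above can be sketched in a few lines of Python. This is an illustrative model, not part of the original notes: the function names and the small settle-loop are assumptions of the sketch.

```python
def nor(a, b):
    """2-input NOR gate on 0/1 values."""
    return 0 if (a or b) else 1

def nor_latch(s, r, q, q_bar):
    """Iterate the cross-coupled NOR pair until the outputs settle.
    q and q_bar carry the previous state, providing the feedback."""
    for _ in range(4):          # a few passes are enough to reach a stable state
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

# Start in the reset state (Q=0), pulse S=1 to set, then hold, then reset.
q, qb = nor_latch(s=1, r=0, q=0, q_bar=1)
print(q, qb)                    # 1 0 : set state
q, qb = nor_latch(s=0, r=0, q=q, q_bar=qb)
print(q, qb)                    # 1 0 : inputs back to 0, the state is held
q, qb = nor_latch(s=0, r=1, q=q, q_bar=qb)
print(q, qb)                    # 0 1 : reset state
```

Note that the forbidden S = R = 1 combination drives both outputs to 0, and releasing both inputs at once leaves the settled state dependent on gate delays — exactly the race condition described above.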
Flip-flops
Latches are asynchronous, which means that the output changes very soon after the input changes. Most computers
today, on the other hand, are synchronous, which means that the outputs of all the sequential circuits change
simultaneously to the rhythm of a global clock signal.
A flip-flop circuit can be constructed from two NAND gates or two NOR gates. These flip-flops are shown in Figure 2
and Figure 3. Each flip-flop has two outputs, Q and Q′, and two inputs, set and reset. This type of flip-flop is referred to
as an SR flip-flop or SR latch. The flip-flop in Figure 2 has two useful states. When Q=1 and Q′=0, it is in the set state
(or 1-state). When Q=0 and Q′=1, it is in the clear state (or 0-state). The outputs Q and Q′ are complements of each
other and are referred to as the normal and complement outputs, respectively. The binary state of the flip-flop is taken to
be the value of the normal output.
When a 1 is applied to both the set and reset inputs of the flip-flop in Figure 2, both the Q and Q′ outputs go to 0. This
condition violates the fact that both outputs are complements of each other. In normal operation this condition must be
avoided by making sure that 1's are not applied to both inputs simultaneously.
The NAND basic flip-flop circuit in Figure 3(a) operates with inputs normally at 1 unless the state of the flip-flop has to
be changed. A 0 applied momentarily to the set input causes Q to go to 1 and Q′ to go to 0, putting the flip-flop in the set
state. When both inputs go to 0, both outputs go to 1. This condition should be avoided in normal operation.
Clocked SR Flip-Flop
The clocked SR flip-flop shown in Figure 4 consists of a basic NOR flip-flop and two AND gates. The outputs of the
two AND gates remain at 0 as long as the clock pulse (or CP) is 0, regardless of the S and R input values. When the
clock pulse goes to 1, information from the S and R inputs passes through to the basic flip-flop. With both S=1 and R=1,
the occurrence of a clock pulse causes both outputs to momentarily go to 0. When the pulse is removed, the state of the
flip-flop is indeterminate, i.e., either state may result, depending on whether the set or reset input of the flip-flop remains
a 1 longer than the transition to 0 at the end of the pulse.
The D flip-flop shown in Figure 5 is a modification of the clocked SR flip-flop. The D input goes directly into the S
input and the complement of the D input goes to the R input. The D input is sampled during the occurrence of a clock
pulse. If it is 1, the flip-flop is switched to the set state (unless it was already set). If it is 0, the flip-flop switches to the
clear state.
JK Flip-Flop
A JK flip-flop is a refinement of the SR flip-flop in that the indeterminate state of the SR type is defined in the JK type.
Inputs J and K behave like inputs S and R to set and clear the flip-flop (note that in a JK flip-flop, the letter J is for set
and the letter K is for clear). When logic 1 inputs are applied to both J and K simultaneously, the flip-flop switches to its
complement state, i.e., if Q=1, it switches to Q=0 and vice versa.
A clocked JK flip-flop is shown in Figure 6. Output Q is ANDed with K and CP inputs so that the flip-flop is cleared
during a clock pulse only if Q was previously 1. Similarly, output Q′ is ANDed with J and CP inputs so that the flip-flop
is set with a clock pulse only if Q′ was previously 1.
Note that because of the feedback connection in the JK flip-flop, a CP signal which remains a 1 (while J=K=1) after the
outputs have been complemented once will cause repeated and continuous transitions of the outputs. To avoid this, the
clock pulses must have a time duration less than the propagation delay through the flip-flop. The restriction on the pulse
width can be eliminated with a master-slave or edge-triggered construction. The same reasoning also applies to the T
flip-flop presented next.
T Flip-Flop
The T flip-flop is a single input version of the JK flip-flop. As shown in Figure 7, the T flip-flop is obtained from the JK
type if both inputs are tied together. The output of the T flip-flop "toggles" with each clock pulse.
Triggering of Flip-flops
The state of a flip-flop is changed by a momentary change in the input signal. This change is called a trigger and the
transition it causes is said to trigger the flip-flop. The basic circuits of Figure 2 and Figure 3 require an input trigger
defined by a change in signal level. This level must be returned to its initial level before a second trigger is applied.
Clocked flip-flops are triggered by pulses.
The feedback path between the combinational circuit and memory elements in Figure 1 can produce instability if the
outputs of the memory elements (flip-flops) are changing while the outputs of the combinational circuit that go to the
flip-flop inputs are being sampled by the clock pulse.
The clock pulse goes through two signal transitions: from 0 to 1 and the return from 1 to 0. As shown in Figure 8 the
positive transition is defined as the positive edge and the negative transition as the negative edge.
The clocked flip-flops already introduced are triggered during the positive edge of the pulse, and the state transition
starts as soon as the pulse reaches the logic-1 level. If the other inputs change while the clock is still 1, a new output
state may occur. If the flip-flop is made to respond to the positive (or negative) edge transition only, instead of the entire
pulse duration, then the multiple-transition problem can be eliminated.
Master-Slave Flip-Flop
A master-slave flip-flop is constructed from two separate flip-flops. One circuit serves as a master and the other as a
slave. The logic diagram of an SR flip-flop is shown in Figure 9. The master flip-flop is enabled on the positive edge of
the clock pulse CP and the slave flip-flop is disabled by the inverter. The information at the external R and S inputs is
transmitted to the master flip-flop. When the pulse returns to 0, the master flip-flop is disabled and the slave flip-flop is
enabled. The slave flip-flop then goes to the same state as the master flip-flop.
The timing relationship is shown in Figure 10, and it is assumed that the flip-flop is in the clear state prior to the
occurrence of the clock pulse. The output state of the master-slave flip-flop changes on the negative transition of the
clock pulse.
Edge-Triggered Flip-Flop
Another type of flip-flop that synchronizes the state changes during a clock pulse transition is the edge-triggered flip-
flop. When the clock pulse input exceeds a specific threshold level, the inputs are locked out and the flip-flop is not
affected by further changes in the inputs until the clock pulse returns to 0 and another pulse occurs. Some edge-triggered
flip-flops cause a transition on the positive edge of the clock pulse (positive-edge-triggered), and others on the negative
edge of the pulse (negative-edge-triggered). The logic diagram of a D-type positive-edge-triggered flip-flop is shown in
Figure 11.
When using different types of flip-flops in the same circuit, one must ensure that all flip-flop outputs make their
transitions at the same time, i.e., during either the negative edge or the positive edge of the clock pulse.
Direct Inputs
Flip-flops in IC packages sometimes provide special inputs for setting or clearing the flip-flop asynchronously. They are
usually called preset and clear. They affect the flip-flop without the need for a clock pulse. These inputs are useful for
bringing flip-flops to an initial state before their clocked operation. For example, after power is turned on in a digital
system, the states of the flip-flops are indeterminate. Activating the clear input clears all the flip-flops to an initial state
of 0. The graphic symbol of a JK flip-flop with an active-low clear is shown in Figure 12.
Summary
Since memory elements in sequential circuits are usually flip-flops, it is worth summarising the behaviour of various
flip-flop types before proceeding further. All flip-flops can be divided into four basic types: SR, JK, D and T. They
differ in the number of inputs and in the response invoked by different values of input signals. The four types of flip-
flops are defined in Table 1.
Type  Characteristic table    Characteristic equation     Excitation table

SR    S R   Q(next)           Q(next) = S + R′Q           Q  Q(next)   S R
      0 0   Q                 (with SR = 0)               0  0         0 X
      0 1   0                                             0  1         1 0
      1 0   1                                             1  0         0 1
      1 1   ?                                             1  1         X 0

JK    J K   Q(next)           Q(next) = JQ′ + K′Q         Q  Q(next)   J K
      0 0   Q                                             0  0         0 X
      0 1   0                                             0  1         1 X
      1 0   1                                             1  0         X 1
      1 1   Q′                                            1  1         X 0

D     D   Q(next)             Q(next) = D                 Q  Q(next)   D
      0   0                                               0  0         0
      1   1                                               0  1         1
                                                          1  0         0
                                                          1  1         1

T     T   Q(next)             Q(next) = TQ′ + T′Q         Q  Q(next)   T
      0   Q                                               0  0         0
      1   Q′                                              0  1         1
                                                          1  0         1
                                                          1  1         0
The characteristic table in the second column of Table 1 defines the state of each flip-flop as a function of its inputs and
previous state. Q refers to the present state and Q(next) refers to the next state after the occurrence of the clock pulse.
The characteristic table for the SR flip-flop shows that the next state is equal to the present state when both inputs S and
R are equal to 0. When R=1, the next clock pulse clears the flip-flop. When S=1, the flip-flop output Q is set to 1. The
question mark (?) for the next state when S and R are both equal to 1 designates an indeterminate next state.
The next state of the D flip-flop is completely dependent on the input D and independent of the present state.
The next state for the T flip-flop is the same as the present state Q if T=0 and complemented if T=1.
The characteristic table is useful during the analysis of sequential circuits when the value of flip-flop inputs are known
and we want to find the value of the flip-flop output Q after the rising edge of the clock signal. As with any other truth
table, we can use the map method to derive the characteristic equation for each flip-flop, which are shown in the third
column of Table 1.
During the design process we usually know the transition from the present state to the next state and wish to find the
flip-flop input conditions that will cause the required transition. For this reason we will need a table that lists the
required inputs for a given change of state. Such a list is called the excitation table, which is shown in the fourth
column of Table 1. There are four possible transitions from the present state to the next state. The required input
conditions are derived from the information available in the characteristic table. The symbol X in the table represents a
"don't care" condition; that is, it does not matter whether the input is 1 or 0.
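The characteristic equations in Table 1 can be written directly as next-state functions and checked row by row against the characteristic tables. This is a small illustrative sketch, not part of the original notes; the function names are assumptions.

```python
def sr_next(q, s, r):
    """Q(next) = S + R'Q, with the restriction SR = 0."""
    return s | ((1 - r) & q)

def jk_next(q, j, k):
    """Q(next) = JQ' + K'Q."""
    return (j & (1 - q)) | ((1 - k) & q)

def d_next(q, d):
    """Q(next) = D."""
    return d

def t_next(q, t):
    """Q(next) = TQ' + T'Q, i.e. the exclusive OR of T and Q."""
    return t ^ q

# Check every row of the characteristic tables in Table 1.
for q in (0, 1):
    assert sr_next(q, 0, 0) == q and jk_next(q, 0, 0) == q   # hold
    assert sr_next(q, 0, 1) == 0 and jk_next(q, 0, 1) == 0   # reset
    assert sr_next(q, 1, 0) == 1 and jk_next(q, 1, 0) == 1   # set
    assert jk_next(q, 1, 1) == 1 - q                         # toggle
    assert d_next(q, 0) == 0 and d_next(q, 1) == 1
    assert t_next(q, 0) == q and t_next(q, 1) == 1 - q
print("all characteristic-table rows verified")
```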
An asynchronous system is a system whose outputs depend upon the order in which its input variables change and can
be affected at any instant of time.
Gate-type asynchronous systems are basically combinational circuits with feedback paths. Because of the feedback
among logic gates, the system may, at times, become unstable. Consequently they are not often used.
A synchronous system uses storage elements called flip-flops that are employed to change their binary value only
at discrete instants of time. Synchronous sequential circuits use logic gates and flip-flop storage devices. Sequential
circuits have a clock signal as one of their inputs. All state transitions in such circuits occur only when the clock value is
either 0 or 1, or happen at the rising or falling edges of the clock, depending on the type of memory elements used in the
circuit. Synchronization is achieved by a timing device called a clock pulse generator. Clock pulses are distributed
throughout the system in such a way that the flip-flops are affected only with the arrival of the synchronization pulse.
Synchronous sequential circuits that use clock pulses in the inputs are called clocked sequential circuits. They are stable
and their timing can easily be broken down into independent discrete steps, each of which is considered separately.
A clock signal is a periodic square wave that indefinitely switches from 0 to 1 and from 1 to 0 at fixed intervals. The
clock cycle time, or clock period, is the time interval between two consecutive rising or falling edges of the clock.
Mealy and Moore models are the basic models of state machines. A state machine which uses only Entry Actions, so
that its output depends on the state, is called a Moore model. A state machine which uses only Input Actions, so that the
output depends on the state and also on inputs, is called a Mealy model. The model selected will influence a design but
there are no general indications as to which model is better. The choice of a model depends on the application, execution
means (for instance, hardware systems are usually best realised as Moore models) and personal preferences of a
designer or programmer. In practice, mixed models are often used, with several action types.
A.F. Kana Digital Logic Design. Page 76
Design of Sequential Circuits
The design of a synchronous sequential circuit starts from a set of specifications and culminates in a logic diagram or a
list of Boolean functions from which a logic diagram can be obtained. In contrast to combinational logic, which is
fully specified by a truth table, a sequential circuit requires a state table for its specification. The first step in the design
of sequential circuits is to obtain a state table or an equivalent representation, such as a state diagram.
A synchronous sequential circuit is made up of flip-flops and combinational gates. The design of the circuit consists of
choosing the flip-flops and then finding the combinational structure which, together with the flip-flops, produces a
circuit that fulfils the required specifications. The number of flip-flops is determined from the number of states needed
in the circuit.
The recommended steps for the design of sequential circuits are illustrated in the worked examples that follow.
We have examined a general model for sequential circuits. In this model the effect of all previous inputs on the outputs
is represented by a state of the circuit. Thus, the output of the circuit at any time depends upon its current state and the
input. These also determine the next state of the circuit. The relationship that exists among the inputs, outputs, present
states and next states can be specified by either the state table or the state diagram.
State Table
The state table representation of a sequential circuit consists of three sections labelled present state, next state and
output. The present state designates the state of the flip-flops before the occurrence of a clock pulse. The next state
shows the states of the flip-flops after the clock pulse, and the output section lists the value of the output variables
during the present state.
The binary number inside each circle identifies the state the circle represents. The directed lines are labelled with two
binary numbers separated by a slash (/). The input value that causes the state transition is labelled first. The number after
the slash symbol / gives the value of the output. For example, the directed line from state 00 to 01 is labelled 1/0,
meaning that, if the sequential circuit is in the present state 00 and the input is 1, then the next state is 01 and the output
is 0. If it is in the present state 00 and the input is 0, it will remain in that state. A directed line connecting a circle with
itself indicates that no change of state occurs. The state diagram provides exactly the same information as the state table
and is obtained directly from the state table.
Example: Consider the sequential circuit shown in Figure 4. It has one input x, one output Z and two state variables
Q1Q2 (thus having four possible present states 00, 01, 10, 11).
Z = x*Q1
D1 = x′ + Q1
D2 = x*Q2′ + x′*Q1′
These equations can be used to form the state table. Suppose the present state (i.e. Q1Q2) = 00 and input x = 0. Under
these conditions, we get Z = 0, D1 = 1, and D2 = 1. Thus the next state of the circuit D1D2 = 11, and this will be the
present state after the clock pulse has been applied. The output of the circuit corresponding to the present state
Q1Q2 = 00 and x = 1 is Z = 0. This data is entered into the state table as shown in Table 2.
Present State    Next State            Output Z
Q1 Q2            x=0       x=1        x=0   x=1
                 Q1 Q2     Q1 Q2
0  0             1  1      0  1       0     0
0  1             1  1      0  0       0     0
1  0             1  0      1  1       0     1
1  1             1  0      1  0       0     1
The state diagram for the sequential circuit in Figure 4 is shown in Figure 5.
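The hand calculation above can be mechanised. The short Python sketch below (the function name is illustrative) evaluates the three equations for every present state and input value, reproducing Table 2:

```python
# Z = x*Q1, D1 = x' + Q1, D2 = x*Q2' + x'*Q1', on 0/1 values.
def step(q1, q2, x):
    z  = x & q1
    d1 = (1 - x) | q1
    d2 = (x & (1 - q2)) | ((1 - x) & (1 - q1))
    return (d1, d2), z          # next state Q1Q2 and present output

for q1 in (0, 1):
    for q2 in (0, 1):
        cells = []
        for x in (0, 1):
            (n1, n2), z = step(q1, q2, x)
            cells.append(f"x={x}: next={n1}{n2}, Z={z}")
        print(f"present {q1}{q2}   " + "   ".join(cells))
```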
You can see from the table that all four flip-flops have the same number of states and transitions. Each flip-flop is in the
set state when Q=1 and in the reset state when Q=0. Also, each flip-flop can move from one state to another, or it can
re-enter the same state. The only difference between the four types lies in the values of input signals that cause these
transitions.
A state diagram is a very convenient way to visualise the operation of a flip-flop or even of large sequential
components.
Example 1.1 Derive the state table and state diagram for the sequential circuit shown in Figure 7.
SOLUTION:
STEP 1: First we derive the Boolean expressions for the inputs of each flip-flop in the schematic, in terms of the
external input Cnt and the flip-flop outputs Q1 and Q0. Since there are two D flip-flops in this example, we derive two
expressions, for D1 and D0.
These Boolean expressions are called excitation equations, since they represent the inputs to the flip-flops of the
sequential circuit in the next clock cycle.
STEP 2: Derive the next-state equations by converting these excitation equations into flip-flop characteristic
equations. In the case of D flip-flops, Q(next) = D. Therefore the next-state equations equal the excitation equations.
STEP 3: Now convert these next-state equations into tabular form called the next-state table.
Present State    Next State
Q1 Q0            Cnt=0     Cnt=1
                 Q1 Q0     Q1 Q0
0 0 0 0 0 1
0 1 0 1 1 0
1 0 1 0 1 1
1 1 1 1 0 0
Each row corresponds to a state of the sequential circuit and each column represents one set of input values. Since
we have two flip-flops, the number of possible states is four - that is, Q1Q0 can be equal to 00, 01, 10, or 11. These are
the present states as shown in the table.
For the next state part of the table, each entry defines the value of the sequential circuit in the next clock cycle after the
rising edge of the Clk. Since this value depends on the present state and the value of the input signals, the next state
table will contain one column for each assignment of binary values to the input signals. In this example, since there is
only one input signal, Cnt, the next-state table shown has only two columns, corresponding to Cnt = 0 and Cnt = 1.
Note that each entry in the next-state table indicates the values of the flip-flops in the next state if their value in the
present state is in the row header and the input values are in the column header.
Each of these next-state values has been computed from the next-state equations in STEP 2.
STEP 4: The state diagram is generated directly from the next-state table, shown in Figure 8.
Each arc is labelled with the values of the input signals that cause the transition from the present state (the source of the
arc) to the next state (the destination of the arc).
Example 1.2
Derive the next state, the output table and the state diagram for the sequential circuit shown in Figure 10.
The input combinational logic in Figure 10 is the same as in Example 1.1, so the excitation and the next-state equations
will be the same as in Example 1.1.
Excitation equations: as in Example 1.1.
Output equation: Y = Q1*Q0.
As this equation shows, the output Y will equal 1 when the counter is in state Q1Q0 = 11, and it will stay 1 as long as
the counter stays in that state.
Present State    Next State            Output
Q1 Q0            Cnt=0     Cnt=1      Y
                 Q1 Q0     Q1 Q0
00 00 01 0
01 01 10 0
10 10 11 0
11 11 00 1
State diagram:
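The behaviour captured by the table — hold when Cnt = 0, count modulo 4 when Cnt = 1, output high only in state 11 — can be sketched as follows (the function names are illustrative assumptions):

```python
def next_state(q1, q0, cnt):
    """Hold the state when cnt=0; advance 00->01->10->11->00 when cnt=1."""
    if not cnt:
        return q1, q0
    v = (((q1 << 1) | q0) + 1) & 0b11   # increment modulo 4
    return (v >> 1) & 1, v & 1

def output(q1, q0):
    """Moore output: 1 only in state Q1Q0 = 11."""
    return q1 & q0

state = (0, 0)
for _ in range(4):                       # one full count cycle with Cnt=1
    print(state, "output:", output(*state))
    state = next_state(*state, 1)
```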
State Reduction
Any design process must consider the problem of minimising the cost of the final circuit. The two most obvious cost
reductions are reductions in the number of flip-flops and the number of gates.
Example: Let us consider the state table of a sequential circuit shown in Table 6.
It can be seen from the table that present states A and F both have the same next states, B (when x=0) and C (when
x=1). They also produce the same output, 1 (when x=0) and 0 (when x=1). Therefore states A and F are equivalent, and
one of them, A or F, can be removed from the state table. For example, if we remove row F from the table and
replace all F's by A's in the columns, the state table is modified as shown in Table 7.
It is apparent that states B and E are equivalent. Removing E and replacing E's by B's results in the reduced table shown
in Table 8.
The removal of equivalent states has reduced the number of states in the circuit from six to four. Two states are
considered to be equivalent if and only if, for every input sequence, the circuit produces the same output sequence
irrespective of which one of the two states is the starting state.
Example 1.3
From the state diagram, we can generate the state table shown in Table 9. Note that there is no output section for this
circuit. Two flip-flops are needed to represent the four states and are designated Q0Q1. The input variable is labelled x.
We shall now derive the excitation table and the combinational structure. The table is now arranged in a different form,
shown in Table 11, where the present state and input variables are arranged in the form of a truth table. Remember, the
excitation table for the JK flip-flop was derived in Table 1.
Q → Q(next)   J K
0 → 0         0 X
0 → 1         1 X
1 → 0         X 1
1 → 1         X 0
Present   Input   Next      Flip-flop inputs
Q0 Q1     x       Q0 Q1     J0 K0 J1 K1
0 0 0 0 0 0 X 0 X
0 0 1 0 1 0 X 1 X
0 1 0 1 0 1 X X 1
0 1 1 0 1 0 X X 0
1 0 0 1 0 X 0 0 X
1 0 1 1 1 X 0 1 X
1 1 0 1 1 X 0 X 0
1 1 1 0 0 X 1 X 1
In the first row of Table 11, we have a transition for flip-flop Q0 from 0 in the present state to 0 in the next state. In
Table 10 we find that a transition of states from 0 to 0 requires that input J = 0 and input K = X. So 0 and X are copied
in the first row under J0 and K0 respectively. Since the first row also shows a transition for the flip-flop Q1 from 0 in
the present state to 0 in the next state, 0 and X are copied in the first row under J1 and K1. This process is continued for
each row of the table and for each flip-flop, with the input conditions as specified in Table 10.
J0 = Q1*x′ K0 = Q1*x
J1 = x K1 = Q0′*x′ + Q0*x (the XNOR of Q0 and x)
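The derived input equations can be checked against Table 11 by driving two simulated JK flip-flops with them. This sketch is illustrative only; the function names are assumptions.

```python
def jk_next(q, j, k):
    """JK characteristic equation: Q(next) = J*Q' + K'*Q."""
    return (j & (1 - q)) | ((1 - k) & q)

def step(q0, q1, x):
    """Apply J0 = Q1*x', K0 = Q1*x, J1 = x, K1 = Q0'*x' + Q0*x."""
    j0, k0 = q1 & (1 - x), q1 & x
    j1, k1 = x, ((1 - q0) & (1 - x)) | (q0 & x)
    return jk_next(q0, j0, k0), jk_next(q1, j1, k1)

# The eight rows of Table 11: (Q0, Q1, x) -> next (Q0, Q1).
table = {(0, 0, 0): (0, 0), (0, 0, 1): (0, 1), (0, 1, 0): (1, 0),
         (0, 1, 1): (0, 1), (1, 0, 0): (1, 0), (1, 0, 1): (1, 1),
         (1, 1, 0): (1, 1), (1, 1, 1): (0, 0)}
for (q0, q1, x), nxt in table.items():
    assert step(q0, q1, x) == nxt
print("all eight transitions of Table 11 reproduced")
```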
Example 1.4 Design a sequential circuit whose state table is specified in Table 12, using D flip-flops.
Present State    Next State            Output Z
Q0 Q1            x=0       x=1        x=0   x=1
                 Q0 Q1     Q0 Q1
00 00 01 0 0
01 00 10 0 0
10 11 10 0 0
11 00 01 0 1
Q → Q(next)   D
0 → 0         0
0 → 1         1
1 → 0         0
1 → 1         1
The next step is to derive the excitation table for the circuit being designed, which is shown in Table 14. The output of
the circuit is labelled Z.
Now plot the flip-flop inputs and output functions on the Karnaugh map to derive the Boolean expressions, which are
shown in Figure 16.
D0 = Q0*Q1′ + Q0′*Q1*x
D1 = Q0′*Q1′*x + Q0*Q1*x + Q0*Q1′*x′
Z = Q0*Q1*x
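As a check, the three derived expressions can be evaluated for every row of the state table. The sketch below is illustrative; the function name is an assumption.

```python
def step(q0, q1, x):
    """D0 = Q0*Q1' + Q0'*Q1*x, D1 = Q0'*Q1'*x + Q0*Q1*x + Q0*Q1'*x',
    Z = Q0*Q1*x, evaluated on 0/1 values."""
    d0 = (q0 & (1 - q1)) | ((1 - q0) & q1 & x)
    d1 = (((1 - q0) & (1 - q1) & x) | (q0 & q1 & x)
          | (q0 & (1 - q1) & (1 - x)))
    z = q0 & q1 & x
    return (d0, d1), z

# Table 12: present Q0Q1 and input x -> next Q0Q1 and output Z.
table = {(0, 0, 0): ((0, 0), 0), (0, 0, 1): ((0, 1), 0),
         (0, 1, 0): ((0, 0), 0), (0, 1, 1): ((1, 0), 0),
         (1, 0, 0): ((1, 1), 0), (1, 0, 1): ((1, 0), 0),
         (1, 1, 0): ((0, 0), 0), (1, 1, 1): ((0, 1), 1)}
for (q0, q1, x), expected in table.items():
    assert step(q0, q1, x) == expected
print("equations reproduce Table 12")
```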
Registers
When the ld input is 0, the outputs are unaffected by any clock transition. When the ld input is 1, the x inputs are stored
in the register at the next clock transition, making the y outputs into copies of the x inputs before the clock transition.
We can explain this behavior more formally with a state table. As an example, let us take a register with n = 4. The left
side of the state table contains 9 columns, labeled x0, x1, x2, x3, ld, y0, y1, y2, and y3. This means that the state table
has 512 rows. We will therefore abbreviate it. Here it is:
ld  x3 x2 x1 x0  y3 y2 y1 y0 | y3 y2 y1 y0 (next)
0   -- -- -- --  r3 r2 r1 r0 | r3 r2 r1 r0
1   c3 c2 c1 c0  -- -- -- -- | c3 c2 c1 c0
As you can see, when ld is 0 (the top half of the table), the right side of the table is a copy of the values of the old
outputs, independently of the inputs. When ld is 1, the right side of the table is instead a copy of the values of the inputs,
independently of the old values of the outputs.
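The whole table collapses to one rule, which can be sketched in Python (the function name is an illustrative assumption):

```python
def register_step(ld, x, y):
    """One clock transition of an n-bit register with a load enable:
    when ld = 1 the inputs x are copied to the outputs, otherwise the
    old outputs y are kept."""
    return list(x) if ld else list(y)

y = [0, 0, 0, 0]
y = register_step(1, [1, 0, 0, 1], y)   # load the word 1001
print(y)                                # [1, 0, 0, 1]
y = register_step(0, [1, 1, 1, 1], y)   # ld = 0: inputs are ignored
print(y)                                # [1, 0, 0, 1]
```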
Registers play an important role in computers. Some of them are visible to the programmer, and are used to hold
variable values for later use. Some of them are hidden from the programmer, and are used to hold values that are
internal to the central processing unit, but nevertheless important.
Shift registers
Shift registers are a type of sequential logic circuit, mainly used for storage of digital data. They are a group of flip-flops
connected in a chain so that the output from one flip-flop becomes the input of the next flip-flop. Most of the registers
possess no characteristic internal sequence of states. All the flip-flops are driven by a common clock, and all are set or
reset simultaneously.
In this section, the basic types of shift registers are studied, such as Serial In - Serial Out, Serial In - Parallel Out,
Parallel In - Serial Out, Parallel In - Parallel Out, and bidirectional shift registers. A special form of counter - the shift
register counter - is also introduced.
A basic four-bit shift register can be constructed using four D flip-flops, as shown below. The operation of the circuit is
as follows. The register is first cleared, forcing all four outputs to zero. The input data is then applied sequentially to
the D input of the first flip-flop on the left (FF0). During each clock pulse, one bit is transmitted from left to right.
Assume a data word to be 1001. The least significant bit of the data has to be shifted through the register from FF0 to
FF3.
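One clock pulse of this serial-in register can be sketched as a list operation (an illustrative model; index 0 stands for FF0):

```python
def shift_right(reg, data_in):
    """One clock pulse: data_in enters FF0 and every stored bit moves
    one stage to the right; the bit in the last stage is lost."""
    return [data_in] + reg[:-1]

reg = [0, 0, 0, 0]            # FF0..FF3 after clearing the register
for bit in [1, 0, 0, 1]:      # shift in the word 1001, LSB first
    reg = shift_right(reg, bit)
print(reg)                    # [1, 0, 0, 1]: the LSB has reached FF3
```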
To avoid the loss of data, an arrangement for non-destructive reading can be made by adding two AND gates, an OR
gate and an inverter to the system. The construction of this circuit is shown below.
The data is loaded into the register when the control line is HIGH (i.e. WRITE). The data can be shifted out of the
register when the control line is LOW (i.e. READ).
For this kind of register, data bits are entered serially in the same manner as discussed in the last section. The difference
is the way in which the data bits are taken out of the register. Once the data are stored, each bit appears on its respective
output line, and all bits are available simultaneously. A construction of a four-bit serial in - parallel out register is
shown below.
A four-bit parallel in - serial out shift register is shown below. The circuit uses D flip-flops and NAND gates for
entering data (i.e. writing) to the register.
For parallel in - parallel out shift registers, all data bits appear on the parallel outputs immediately following the
simultaneous entry of the data bits. The following circuit is a four-bit parallel in - parallel out shift register constructed
from D flip-flops.
The D's are the parallel inputs and the Q's are the parallel outputs. Once the register is clocked, all the data at the D
inputs appear at the corresponding Q outputs simultaneously.
The registers discussed so far involved only right shift operations. Each right shift operation has the effect of
successively dividing the binary number by two. If the operation is reversed (left shift), this has the effect of
multiplying the number by two. With a suitable gating arrangement a serial shift register can perform both operations.
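The arithmetic effect of shifting is easy to confirm with Python's shift operators, masking to a 4-bit register width (an illustrative sketch; the names are assumptions):

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1       # 0b1111 for a 4-bit register

def shl(v):
    """Left shift: multiplies by two, modulo the register width."""
    return (v << 1) & MASK

def shr(v):
    """Right shift: divides by two, discarding the shifted-out bit."""
    return v >> 1

print(shl(0b0011))            # 6 : 3 doubled
print(shr(0b0110))            # 3 : 6 halved
print(shl(0b1001))            # 2 : the MSB fell off, so doubling 9 overflowed
```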
A bidirectional, or reversible, shift register is one in which the data can be shifted either left or right. A four-bit
bidirectional shift register using D flip-flops is shown below.
Here a set of NAND gates is configured as OR gates to select data inputs from the right or left adjacent bistables, as
selected by the LEFT/RIGHT control line.
Ring Counters
A ring counter is basically a circulating shift register in which the output of the most significant stage is fed back to the
input of the least significant stage. The following is a 4-bit ring counter constructed from D flip-flops. The output of
each stage is shifted into the next stage on the positive edge of a clock pulse. If the CLEAR signal is high, all the flip-
flops except the first one FF0 are reset to 0. FF0 is preset to 1 instead.
Since the count sequence has 4 distinct states, the counter can be considered as a mod-4 counter. Only 4 of the
maximum 16 states are used, making ring counters very inefficient in terms of state usage. But the major advantage of a
ring counter over a binary counter is that it is self-decoding. No extra decoding circuit is needed to determine what state
the counter is in.
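A one-line model of the feedback connection makes the circulating behaviour visible (an illustrative sketch; the function name is an assumption):

```python
def ring_step(state):
    """Shift each stage into the next; the last stage feeds the first."""
    return [state[-1]] + state[:-1]

state = [1, 0, 0, 0]          # FF0 preset to 1, the others cleared
for _ in range(4):
    print(state)              # the single 1 visits each stage in turn
    state = ring_step(state)
# After four pulses the counter is back in its starting state: mod-4.
```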
Johnson Counters
Johnson counters are a variation of standard ring counters, with the inverted output of the last stage fed back to the input
of the first stage. They are also known as twisted ring counters. An n-stage Johnson counter yields a count sequence of
length 2n, so it may be considered to be a mod-2n counter. The circuit above shows a 4-bit Johnson counter. The state
sequence for the counter is given in the table.
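Inverting the feedback bit is the only change from the ring counter. The sketch below (illustrative names) prints the 2n-state sequence of the 4-stage counter:

```python
def johnson_step(state):
    """Feed the inverted output of the last stage back to the first."""
    return [1 - state[-1]] + state[:-1]

state = [0, 0, 0, 0]
for _ in range(8):            # 2n = 8 states for a 4-stage counter
    print("".join(map(str, state)))
    state = johnson_step(state)
# Printed sequence: 0000 1000 1100 1110 1111 0111 0011 0001, then repeat.
```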
Counters
A sequential circuit that goes through a prescribed sequence of states upon the application of input pulses is called a
counter. The input pulses, called count pulses, may be clock pulses. In a counter, the sequence of states may follow a
binary count or any other sequence of states. Counters are found in almost all equipment containing digital logic. They
are used for counting the number of occurrences of an event and are useful for generating timing sequences to control
operations in a digital system.
A counter is a sequential circuit with 0 inputs and n outputs. Thus, the value after the clock transition depends only on
old values of the outputs. For a counter, the values of the outputs are interpreted as a sequence of binary digits (see the
section on binary arithmetic).
We shall call the outputs o0, o1, ..., on-1. The value of the outputs for the counter after a clock transition is a binary
number which is one plus the binary number of the outputs before the clock transition.
We can explain this behavior more formally with a state table. As an example, let us take a counter with n = 4. The left
side of the state table contains 4 columns, labeled o0, o1, o2, and o3. This means that the state table has 16 rows, one
for each present state; the next state is always the present state plus one, modulo 16.
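The 16-row table is mechanical to generate, since every next value is the present value plus one modulo 16 (an illustrative sketch; the function name is an assumption):

```python
def counter_next(o):
    """Next output value of a 4-bit counter."""
    return (o + 1) % 16

for o in range(16):           # one row per present state
    print(f"{o:04b} -> {counter_next(o):04b}")
```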
Design of Counters
Example 1.5 A counter is first described by a state diagram, which shows the sequence of states through which the
counter advances when it is clocked. Figure 18 shows a state diagram of a 3-bit binary counter.
The circuit has no inputs other than the clock pulse and no outputs other than its internal state (outputs are taken off each
flip-flop in the counter). The next state of the counter depends entirely on its present state, and the state transition occurs
every time the clock pulse occurs. Figure 19 shows the sequence of counts after each clock pulse.
Since there are eight states, the number of flip-flops required would be three. Now we want to implement the counter
design using JK flip-flops.
The next step is to develop an excitation table from the state table, which is shown in Table 16.
Now transfer the JK states of the flip-flop inputs from the excitation table to Karnaugh maps to derive a simplified
Boolean expression for each flip-flop input. This is shown in Figure 20.
The 1s in the Karnaugh maps of Figure 20 are grouped with "don't cares" and the following expressions for the J and K
inputs of each flip-flop are obtained:
J0 = K0 = 1
J1 = K1 = Q0
J2 = K2 = Q1*Q0
The final step is to implement the combinational logic from the equations and connect the flip-flops to form the
sequential circuit. The complete logic of a 3-bit binary counter is shown in Figure 21.
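As a check on the derived equations, the counter can be simulated. The following sketch (our own illustration, with hypothetical helper names) applies the JK characteristic equation Q_next = J·Q′ + K′·Q together with the inputs J0 = K0 = 1, J1 = K1 = Q0 and J2 = K2 = Q1·Q0:

```python
# Simulate the 3-bit binary counter built from three JK flip-flops.

def jk_next(q, j, k):
    """JK characteristic equation: Q_next = J*Q' + K'*Q."""
    return (j & (q ^ 1)) | ((k ^ 1) & q)

def step(q2, q1, q0):
    """One clock transition, using the input equations derived above."""
    j0 = k0 = 1
    j1 = k1 = q0
    j2 = k2 = q1 & q0
    return jk_next(q2, j2, k2), jk_next(q1, j1, k1), jk_next(q0, j0, k0)

q2, q1, q0 = 0, 0, 0
seq = []
for _ in range(8):
    seq.append(q2 * 4 + q1 * 2 + q0)
    q2, q1, q0 = step(q2, q1, q0)
print(seq)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

After eight clock pulses the circuit is back in state 000, as the state diagram requires.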
Example 1.6 Design a counter specified by the state diagram in Example 1.5 using T flip-flops. The state diagram is
shown here again in Figure 22.
Now derive the excitation table from the state table, which is shown in Table 17.
The next step is to transfer the flip-flop input functions to Karnaugh maps to derive simplified Boolean expressions,
which are shown in Figure 23.
Figure 23. Karnaugh maps
T0 = 1; T1 = Q0; T2 = Q1*Q0
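The T flip-flop characteristic equation is Q_next = Q ⊕ T, so the same counting sequence can be verified with a small sketch (our own illustration; the names are hypothetical):

```python
# Simulate the 3-bit counter using T flip-flops: Q_next = Q XOR T.

def step(q2, q1, q0):
    t0, t1, t2 = 1, q0, q1 & q0   # input equations from the Karnaugh maps
    return q2 ^ t2, q1 ^ t1, q0 ^ t0

state = (0, 0, 0)
seq = []
for _ in range(8):
    q2, q1, q0 = state
    seq.append(q2 * 4 + q1 * 2 + q0)
    state = step(q2, q1, q0)
print(seq)  # [0, 1, 2, 3, 4, 5, 6, 7]
```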
Exercises
1. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.1. Draw the timing diagram of the circuit.
Figure 1.1
2. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.2.
Figure 1.2
3. Derive a) excitation equations, b) next state equations, c) a state/output table, and d) a state diagram for the circuit
shown in Figure 1.3.
4. Derive a state/output table and a state diagram for the sequential circuit shown in Figure 1.4.
Figure 1.4
5. A sequential circuit uses two D flip-flops as memory elements. The behaviour of the circuit is described by the
following equations:
D1 = Q1 + x′*Q2
D2 = x*Q1′ + x′*Q2
Z = x′*Q1*Q2 + x*Q1′*Q2′
Derive the state table and draw the state diagram of the circuit.
Table 6.1
8. Design a mod-5 counter which has the following binary sequence: 0, 1, 2, 3, 4. Use JK flip-flops.
9. Design a counter that has the following repeated binary sequence: 0, 1, 2, 3, 4, 5, 6, 7. Use RS flip-flops.
10. Design a counter with the following binary sequence: 1, 2, 5, 7 and repeat. Use JK flip-flops.
11. Design a counter with the following repeated binary sequence: 0, 4, 2, 1, 6. Use T flip-flops.
13. The contents of a 5-bit serial-in, parallel-out shift register with rotation capability are initially 11001. The register is
shifted four times to the right. What are the contents and the output of the register after each shift?
Tri-State Logic
In an ordinary logic circuit, every output is always driven to either 0 or 1. With tri-state logic circuits, this is no longer true. As their name indicates, they manipulate signals that can be in one of
three states, as opposed to only 0 or 1. While this may sound confusing at first, the idea is relatively simple.
Consider a fairly common case in which there are a number of source circuits S1, S2, etc. in different parts of a chip (i.e.,
they are not close together). At different times, exactly one of these circuits will generate some binary value that is to
be distributed to some set of destination circuits D1, D2, etc., also in different parts of the chip. At any point in time,
exactly one source circuit can generate a value, and the value is always to be distributed to all the destination circuits.
Obviously, we have to have some signals that select which source circuit is to generate information. Assume for the
moment that we have signals s1, s2, etc. for exactly that purpose. One solution to this problem is indicated in this figure:
As you can see, this solution requires that all outputs be routed to a central place. Often such solutions are impractical
or costly. Since only one of the sources is "active" at any one time, we ought to be able to use a solution like this:
A tri-state circuit (combinatorial or sequential) is like an ordinary circuit, except that it has an additional input that we
shall call enable. When the enable input is 1, the circuit behaves exactly like the corresponding normal (non-tri-state)
circuit. When the enable input is 0, the outputs are completely disconnected from the rest of the circuit. It is as if
we had taken an ordinary circuit and added a switch on every output, such that the switch is open when enable is 0 and
closed when enable is 1, like this:
which is pretty close to the truth. The switch is just another transistor that can be added at a very small cost.
Any circuit can exist in a tri-state version. However, as a special case, we can convert any ordinary circuit to a tri-state
circuit by using a special tri-state combinatorial circuit that simply copies its inputs to the outputs, but that also has an
enable input. We call such a circuit a bus driver, for reasons that will become evident when we discuss buses. A bus
driver with one input is drawn like this:
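The tri-state behaviour described above can be modelled in a few lines. In this sketch (our own model, not from the notes) the value "Z" stands for the high-impedance, disconnected state, and a disabled driver simply stops driving the shared line:

```python
Z = "Z"  # high-impedance: the output is disconnected from the line

def bus_driver(value, enable):
    """Copy the input to the output when enabled; otherwise float."""
    return value if enable else Z

def bus(*driver_outputs):
    """Resolve a shared line; at most one driver may be enabled."""
    active = [v for v in driver_outputs if v != Z]
    assert len(active) <= 1, "bus contention: two drivers enabled at once"
    return active[0] if active else Z

# Source S1 enabled, S2 disabled: the line carries S1's value.
print(bus(bus_driver(1, enable=1), bus_driver(0, enable=0)))  # 1
```

If two drivers were enabled at once the model raises an error; in real hardware that situation (bus contention) can damage the chip.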
Memory
In general, a memory has m inputs, called the address inputs, that are used to select exactly one out of 2^m words,
each consisting of n bits.
Furthermore, it has n bidirectional connectors called the data lines. These data lines are used both as
inputs, in order to store information in the word selected by the address inputs, and as outputs, in order to recall a
previously stored value. Such a solution reduces the number of required connectors by a factor of two.
Finally, it has an input called enable (see the section on tri-state logic for an explanation) that controls whether the data
lines have defined states or not, and an input called r/w that determines the direction of the data lines.
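Behaviourally, this interface can be sketched as follows (a hypothetical Python model, not a circuit description; masking the stored data to n bits is our own convention). Following the one-bit cell described below, r/w = 1 indicates a write and r/w = 0 a read:

```python
class Memory:
    """2**m words of n bits behind enable and r/w controls."""

    def __init__(self, m, n):
        self.n = n
        self.words = [0] * (2 ** m)

    def access(self, address, enable, rw, data=None):
        if not enable:
            return "Z"                      # data lines disconnected
        if rw:                              # r/w = 1: data lines are inputs
            self.words[address] = data & ((1 << self.n) - 1)
            return None
        return self.words[address]          # r/w = 0: data lines are outputs

mem = Memory(m=4, n=8)                      # 16 words of 8 bits
mem.access(3, enable=1, rw=1, data=0xAB)
print(mem.access(3, enable=1, rw=0))        # 171
```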
A memory with an arbitrary value of m and an arbitrary value of n can be built from memories with smaller values of
these parameters. To show how this can be done, we first show how a one-bit memory (one with m = 0 and n = 1) can
be built. Here is the circuit:
The central part of the circuit is an SR-latch that holds one bit of information. When enable is 0, the output d0 is isolated
both from the inputs to and the output from the SR-latch. Information is passed from d0 to the inputs of the latch when
enable is 1 and r/w is 1 (indicating write). Information is passed from the output x to d0 when enable is 1 and r/w is 0
(indicating read).
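A behavioural sketch of this one-bit cell (Python, with hypothetical names; "Z" marks a disconnected data line):

```python
def sr_latch(s, r, q):
    """Next state of an SR latch (s = set, r = reset, q = current state)."""
    assert not (s and r), "S and R must not both be 1"
    return 1 if s else (0 if r else q)

def cell(q, d0, enable, rw):
    """One access; returns (new latch state, value driven onto d0)."""
    if enable and rw:                 # write: d0 drives S = d0, R = not d0
        return sr_latch(d0, 1 - d0, q), "Z"
    if enable and not rw:             # read: the latch output drives d0
        return q, q
    return q, "Z"                     # disabled: d0 is disconnected

q, _ = cell(0, 1, enable=1, rw=1)     # write a 1 into the cell
_, out = cell(q, 0, enable=1, rw=0)   # read it back
print(out)  # 1
```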
Now that we know how to make a one-bit memory, we must figure out how to make larger memories. First, suppose we
have n memories of 2^m words, each one consisting of a single bit. We can easily convert these to a single memory with
2^m words, each one consisting of n bits. Here is how we do it:
Next, we have to figure out how to make a memory with more words. To show that, we assume that we have two
memories, each with m address inputs and n data lines. We show how we can connect them so as to obtain a single
memory with m + 1 address inputs and n data lines. Here is the circuit:
As you can see, the additional address line is combined with the enable input to select one of the two smaller memories.
Only one of them will be connected to the data lines at a time (because of the way tri-state logic works).
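One way to sketch this doubling scheme (a hypothetical Python model of our own; "Z" models a disconnected tri-state output). The new, most significant address bit is ANDed with enable so that exactly one half drives the shared data lines:

```python
def make_mem(m):
    """A small memory with m address inputs, modelled as a closure."""
    words = [0] * (2 ** m)
    def access(addr, enable, rw, data=None):
        if not enable:
            return "Z"                    # outputs disconnected
        if rw:
            words[addr] = data
            return None
        return words[addr]
    return access

M = 2
low, high = make_mem(M), make_mem(M)

def big_access(addr, enable, rw, data=None):
    """A memory with M + 1 address inputs built from the two halves."""
    select = (addr >> M) & 1              # the new, most significant bit
    inner = addr & ((1 << M) - 1)
    a = low(inner, enable & (1 - select), rw, data)
    b = high(inner, enable & select, rw, data)
    return a if a != "Z" else b           # only one half drives the lines

big_access(5, enable=1, rw=1, data=0x42)  # write to the upper half
print(big_access(5, enable=1, rw=0))      # 66
```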
Read-Only Memory (ROM)
A ROM (read-only memory) is a memory whose contents are fixed. Since the contents cannot be altered, we don't have an r/w signal. Except for the enable signal, a ROM is thus like an
ordinary combinatorial circuit with m inputs and n outputs.
ROMs are usually programmable. They are often sold with contents of all 0s or all 1s. The user can then place the ROM in a
special machine and fill it with the desired contents, i.e. the ROM can be programmed. In that case, we sometimes call it
a PROM (programmable ROM).
Some varieties of PROMs can be erased and re-programmed. They are typically erased with ultraviolet
light. When the PROM can be erased, we sometimes call it an EPROM (erasable PROM).
The advantage of using a ROM in this way is that any conceivable function of the m inputs can be made to appear at any
of the n outputs, making this the most general-purpose combinatorial logic device available. Also, PROMs
(programmable ROMs), EPROMs (ultraviolet-erasable PROMs) and EEPROMs (electrically erasable PROMs) are
available that can be programmed using a standard PROM programmer without requiring specialised hardware or
software. However, there are several disadvantages:
• they cannot necessarily provide safe "covers" for asynchronous logic transitions, so the PROM's outputs may
glitch as the inputs switch;
• because only a small fraction of their capacity is used in any one application, they often make inefficient use
of space.
Since most ROMs do not have input or output registers, they cannot be used stand-alone for sequential logic. An
external TTL register was often used for sequential designs such as state machines.