DFA notes
A number system of base (also called radix) r is a system that has r distinct symbols, one for each of its r digits. A number is represented by a string of these digit symbols. To determine the quantity that the number represents, we multiply each digit by an integer power of r that depends on the position of the digit, and then take the sum of the weighted digits.
Decimal:
• The decimal number system makes use of ten digits, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
• Since ten basic symbols or digits are used, the decimal number system is said to have a base or radix of ten.
• All the digits in the decimal number system are weighted by powers of 10: 10^0, 10^1, 10^2, 10^3, ... for the integer part and 10^-1, 10^-2, 10^-3, ... for the fractional part.
• The representations of units, tens, hundreds, thousands, etc. are called weights.
• The integer part and the fractional part in a decimal number are separated by a decimal point.
Example:
438 = 400 + 30 + 8
    = 4*100 + 3*10 + 8*1
    = 4*10^2 + 3*10^1 + 8*10^0
The digit 4 has a weight of 100, the digit 3 has a weight of 10, and the digit 8 has a weight of 1.
46.25 = 40 + 6 + 0.2 + 0.05
      = 4*10 + 6*1 + 2*0.1 + 5*0.01
      = 4*10^1 + 6*10^0 + 2*10^-1 + 5*10^-2
The digit 4 has a weight of 10 and the digit 6 has a weight of 1; to the right of the decimal point, the digit 2 has a weight of 0.1 (i.e. 1/10) and the digit 5 has a weight of 0.01 (i.e. 1/100).
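The same weighted-sum idea can be checked with a short Python sketch (an illustrative helper written for these notes, not part of the original material); it expands a decimal string digit by digit using powers of 10:

# positional_value: expand a decimal string into a weighted sum of powers of 10
def positional_value(s):
    int_part, _, frac_part = s.partition('.')
    total = 0.0
    for i, d in enumerate(reversed(int_part)):
        total += int(d) * 10**i          # weights 10^0, 10^1, ...
    for i, d in enumerate(frac_part, start=1):
        total += int(d) * 10**(-i)       # weights 10^-1, 10^-2, ...
    return total

print(positional_value("438"))    # 438.0
print(positional_value("46.25"))  # 46.25 (up to floating point rounding)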
Binary:
• The binary number system uses only two symbols or digits, namely 0 and 1, i.e. it has a base or radix of 2.
• A binary digit (0 or 1) is often called a bit.
• The bits are weighted by powers of 2: 2^0, 2^1, 2^2, 2^3, ... for the integer part and 2^-1, 2^-2, 2^-3, ... for the fractional part.
• The integer part and the fractional part in a binary number are separated by a binary point.
• A 4-bit binary word is called a nibble.
• An 8-bit binary word is called a byte.
• A 16-bit binary word is called a word.
• A 32-bit binary word is called a double word.
Decimal Binary
1 1
2 10
3 11
4 100
5 101
6 110
7 111
8 1000
9 1001
10 1010
Example:
(101)2 , (10011)2
Octal:
The octal number system has a base of 8 and uses the digits 0 to 7. Each octal digit corresponds to a group of three binary digits, since 2^3 = 8.
Decimal Binary Octal
0 000 0
1 001 1
2 010 2
3 011 3
4 100 4
5 101 5
6 110 6
7 111 7
Binary weights (2^2 + 2^1 + 2^0) – (2^3 = 8)
Example:
(75)8 , (143)8
Hexadecimal:
Binary is the code actually used inside computers, but it is not a compact code: it requires a large number of bits to represent bigger numbers. The hexadecimal number system has a base of 16 and uses the sixteen symbols 0–9 and A–F, so each hexadecimal digit stands for a group of four bits.
Example:
(D5)16 = (13*16^1 + 5*16^0)10
       = (13*16 + 5*1)10
       = (208 + 5)10
       = (213)10
(D5)16 = (213)10
Decimal Hexadecimal
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 A
11 B
12 C
13 D
14 E
15 F
Conversions:
i) Decimal to Binary, Octal, Hexadecimal
ii) Binary to Decimal, Octal, Hexadecimal
iii) Octal to Decimal, Binary, Hexadecimal
iv) Hexadecimal to Decimal, Binary, Octal
Decimal conversions:
i)Decimal to Binary:
The procedure for converting a decimal integer to its binary equivalent is to write down the decimal number and repeatedly divide by 2, noting the remainder (either a "1" or a "0") at each step, until the quotient equals zero. The remainders, read from last to first, give the binary digits.
For example take the decimal number 11.1875. First, look at the integer part: 11.
1. Divide 11 by 2. This gives a quotient of 5 and a remainder of 1. Since the remainder is 1, the least significant bit is 1.
2. Divide the quotient 5 by 2. This gives a quotient of 2 and a remainder of 1. Since the remainder is 1, the next bit is 1.
3. Divide the quotient 2 by 2. This gives a quotient of 1 and a remainder of 0. Since the remainder is 0, the next bit is 0.
4. Divide the quotient 1 by 2. This gives a quotient of 0 and a remainder of 1. Since the remainder is 1, the next bit is 1.
Since the quotient now is 0, the process is stopped. The above steps are summarized in Table 1.
Division Quotient Remainder
11/2     5        1
5/2      2        1
2/2      1        0
1/2      0        1
Hence (11)10 = (1011)2. (The fractional part 0.1875 is converted separately by repeated multiplication by 2, giving 0.0011, so 11.1875 = 1011.0011 in binary.)
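A short Python sketch of the same repeated-division procedure (the function name is illustrative, not from the notes):

# dec_to_bin: convert a non-negative decimal integer to a binary string
# by repeated division by 2, collecting remainders from LSB to MSB.
def dec_to_bin(n):
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder
        bits.append(str(r))
    return "".join(reversed(bits))

print(dec_to_bin(11))   # 1011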
ii)Decimal to Octal:
To convert integer decimals to octal, divide the original number by the largest possible power of 8 and divide the remainders by successively smaller powers of 8 until the power is 1. The octal representation is formed by the quotients, written in the order generated by the algorithm. For example, to convert 125 (decimal) to octal:
125 = 8^2 × 1 + 61
61 = 8^1 × 7 + 5
5 = 8^0 × 5 + 0
so (125)10 = (175)8.
Another example, 900 (decimal):
900 = 8^3 × 1 + 388
388 = 8^2 × 6 + 4
4 = 8^1 × 0 + 4
4 = 8^0 × 4 + 0
so (900)10 = (1604)8.
To convert a decimal fraction to octal, multiply by 8; the integer part of the result
is the first digit of the octal fraction. Repeat the process with the fractional part of the
result, until it is null or within acceptable error bounds.
These two methods can be combined to handle decimal numbers with both integer
and fractional parts, using the first on the integer part and the second on the fractional part.
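A Python sketch combining both methods, integer part by repeated division and fractional part by repeated multiplication (function name and digit limit are illustrative assumptions):

# dec_to_oct: convert a non-negative decimal value to octal.
# Integer part: repeated division by 8. Fractional part: repeated
# multiplication by 8, taking the integer part of each product.
def dec_to_oct(value, frac_digits=6):
    n = int(value)
    frac = value - n
    digits = []
    while n > 0:
        n, r = divmod(n, 8)
        digits.append(str(r))
    int_str = "".join(reversed(digits)) or "0"
    frac_str = ""
    for _ in range(frac_digits):
        if frac == 0:
            break
        frac *= 8
        d = int(frac)
        frac_str += str(d)
        frac -= d
    return int_str + ("." + frac_str if frac_str else "")

print(dec_to_oct(125))        # 175
print(dec_to_oct(0.1640625))  # 0.124  (0.1640625 = 1/8 + 2/64 + 4/512)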
iii)Decimal to Hexadecimal:
To convert a decimal integer to hexadecimal, repeatedly divide the number by 16 and record the remainders, writing remainders of 10 to 15 as the letters A to F. The remainders, read from last to first, form the hexadecimal number. For example, 213 ÷ 16 = 13 remainder 5, and 13 ÷ 16 = 0 remainder 13 (D), so (213)10 = (D5)16.
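The same steps in a small Python sketch (helper name is mine, not from the notes):

# dec_to_hex: repeated division by 16; remainders 10..15 map to A..F.
HEX_DIGITS = "0123456789ABCDEF"

def dec_to_hex(n):
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, 16)
        out.append(HEX_DIGITS[r])
    return "".join(reversed(out))

print(dec_to_hex(213))  # D5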
Binary conversions:
i) Binary to Decimal:
The equivalent decimal representation of a binary number is the sum of the powers of 2 which each digit represents. For example, the binary number 100101 is converted to decimal form as follows:
(100101)2 = [(1) × 2^5] + [(0) × 2^4] + [(0) × 2^3] + [(1) × 2^2] + [(0) × 2^1] + [(1) × 2^0]
          = 32 + 4 + 1 = (37)10
ii) Binary to Octal:
To convert a binary number to octal, group the bits in threes, starting from the binary point and working outward in both directions (padding with zeros if necessary), and replace each group of three bits by its octal digit. For example, 11011 = 011 011 = (33)8.
iii) Binary to Hexadecimal:
First, split the binary number into groups of four digits, starting with the least
significant digit. Next, convert each group of four binary digits to a single hex digit. Put
the single hex digits together in the order in which they were found, and you're done!
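A Python sketch of the grouping idea, padding to a multiple of four bits and mapping each nibble (names are illustrative):

# bin_to_hex: group bits in fours from the least significant end,
# then map each 4-bit group to one hexadecimal digit.
def bin_to_hex(bits):
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad on the left
    nibbles = [bits[i:i+4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(n, 2)] for n in nibbles)

print(bin_to_hex("1000010101111"))  # 10AF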
Octal conversions:
i)Octal to Decimal:
Technique
– Multiply each digit by 8^n, where n is the "weight" of the digit
– The weight is the position of the digit, starting from 0 on the right
– Add the results
(724)8 => 4 x 8^0 =   4
          2 x 8^1 =  16
          7 x 8^2 = 448
                    ---
                    (468)10
To convert octal to binary, replace each octal digit by its three-bit binary representation. For example:
(1057)8 = 001 000 101 111
To convert the result to hexadecimal, regroup the same bits in fours from the right:
0010 0010 1111 = (22F)16
Hexadecimal conversions:
i) Hexadecimal to Decimal:
Technique
– Multiply each digit by 16^n, where n is the "weight" of the digit
– The weight is the position of the digit, starting from 0 on the right
– Add the results
(ABC)16 => C x 16^0 = 12 x 1   =   12
           B x 16^1 = 11 x 16  =  176
           A x 16^2 = 10 x 256 = 2560
                                 ----
                                 (2748)10
ii)Hexadecimal to Binary:
Technique
Convert each hexadecimal digit to a 4-bit equivalent binary
representation
(10AF)16 =
 1    0    A    F
0001 0000 1010 1111
(10AF)16 = (0001000010101111)2
For example, to convert (3FA5)16 to binary and then to octal:
 3    F    A    5
0011 1111 1010 0101
Regrouping the same bits in threes from the right gives 000 011 111 110 100 101, so (3FA5)16 = (037645)8.
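In Python these conversions can also be checked directly with the built-in int(), bin(), oct() and hex() functions, for example:

n = int("3FA5", 16)        # parse hexadecimal -> 16293
print(bin(n))              # 0b11111110100101
print(oct(n))              # 0o37645
print(hex(int("1057", 8))) # 0x22f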
Binary addition:
The simplest arithmetic operation in binary is addition. Adding two single-digit binary
numbers is relatively simple, using a form of carrying:
0+0→0
0+1→1
1+0→1
1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + (1 × 2^1))
Adding two "1" digits produces a digit "0", while 1 will have to be added to the next
column. This is similar to what happens in decimal when certain single-digit numbers are added
together; if the result equals or exceeds the value of the radix (10), the digit to the left is
incremented:
11111 (carried digits)
01101
+ 10111
-------------
= 1 0 0 1 0 0 = 36
In this example, two numerals are being added together: 01101 (13 decimal) and 10111 (23 decimal). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10 in binary. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10 again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100 (36 decimal).
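A Python sketch of column-by-column binary addition with carries, mirroring the worked example above (illustrative only):

# add_binary: add two binary strings column by column, propagating the carry.
def add_binary(a, b):
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        out.append(str(s % 2))   # sum bit for this column
        carry = s // 2           # carry into the next column
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(add_binary("01101", "10111"))  # 100100  (13 + 23 = 36)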
Binary subtraction:
Subtraction in binary is usually performed by adding the two's complement of the subtrahend:
A − B = A + not B + 1
where "not B" is the bitwise (one's) complement of B, and any carry out of the most significant bit is discarded.
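A minimal Python sketch of this rule for a fixed word size (illustrative; a 6-bit word is assumed here):

# sub_binary: A - B computed as A + (one's complement of B) + 1 in a fixed
# width, discarding any carry out of the word.
def sub_binary(a, b, width=6):
    mask = (1 << width) - 1
    result = (int(a, 2) + ((~int(b, 2)) & mask) + 1) & mask
    return format(result, "0{}b".format(width))

print(sub_binary("011011", "000101"))  # 010110  (27 - 5 = 22)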
Binary multiplication:
Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit and A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result.
Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication: if the digit in B is 0 the partial product is 0, and if the digit in B is 1 the partial product is a copy of A, shifted into place.
For example, the binary numbers 1011 and 1010 are multiplied as follows:
        1 0 1 1   (A, 11 decimal)
      × 1 0 1 0   (B, 10 decimal)
      ---------
        0 0 0 0   ← Corresponds to the rightmost 'zero' in B
+     1 0 1 1     ← Corresponds to the next 'one' in B
+   0 0 0 0
+ 1 0 1 1
---------------
= 1 1 0 1 1 1 0   (110 decimal)
Binary numbers can also be multiplied with bits after a binary point:
101.101 A (5.625 in decimal)
×110.01 B (6.25 in decimal)
-------------------
1 . 0 1 1 0 1 ← Corresponds to a 'one' in B
+ 00.0000 ← Corresponds to a 'zero' in B
+ 000.000
+ 1011.01
+ 10110.1
---------------------------
= 1 0 0 0 1 1 . 0 0 1 0 1 (35.15625 in decimal)
Multiplication table
×   0   1
0   0   0
1   0   1
The binary multiplication table is the same as the truth table of the logical conjunction (AND) operation.
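A Python sketch of the shift-and-add method on bit strings (illustrative only, not from the notes):

# mul_binary: shift-and-add multiplication. For each '1' bit of B, add a
# copy of A shifted left by that bit's position.
def mul_binary(a, b):
    a_val, result = int(a, 2), 0
    for i, bit in enumerate(reversed(b)):
        if bit == "1":
            result += a_val << i   # partial product, shifted into place
    return bin(result)[2:]

print(mul_binary("1011", "1010"))  # 1101110  (11 x 10 = 110)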
Binary division:
In the example below, the divisor is 101 (5 decimal), while the dividend is 11011 (27 decimal). The procedure is the same as that of decimal long division; here, the divisor 101 goes into the first three digits 110 of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence:
1
___________
101 )11011
−101
-----
001
The procedure is then repeated with the new sequence, continuing until the digits in the
dividend have been exhausted:
101
___________
101 )11011
−101
-----
111
− 101
-----
10
Thus, the quotient of 11011 divided by 101 is 101, as shown on the top line, while the remainder, shown on the bottom line, is 10. In decimal, 27 divided by 5 is 5, with a remainder of 2.
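A Python sketch of binary long division on bit strings (helper name is mine, not from the notes):

# div_binary: long division of two binary strings, returning (quotient, remainder).
def div_binary(dividend, divisor):
    d = int(divisor, 2)
    quotient, remainder = [], 0
    for bit in dividend:
        remainder = (remainder << 1) | int(bit)  # bring down the next bit
        if remainder >= d:
            quotient.append("1")
            remainder -= d                       # subtract the divisor
        else:
            quotient.append("0")
    return "".join(quotient).lstrip("0") or "0", bin(remainder)[2:]

print(div_binary("11011", "101"))  # ('101', '10')  i.e. 27 / 5 = 5 remainder 2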
Floating point representation:
A floating point number has two parts. The first is a signed (meaning negative or non-negative) digit string of a given length in a given base (or radix). This digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit, or to the right of the rightmost (least significant) digit. These notes follow the convention that the radix point is set just after the most significant (leftmost) digit. The second part is a signed integer exponent (also referred to as the characteristic, or scale), which modifies the magnitude of the number.
Complements:
One’s complement and two’s complement are two important binary concepts. Two’s
complement is especially important because it allows us to represent signed numbers in binary,
and one’s complement is the interim step to finding the two’s complement.
We will save the two’s complement subtraction for another lesson, but here, we will look
at what these terms mean and how to calculate them.
One’s Complement
If all bits in a byte are inverted by changing each 1 to 0 and each 0 to 1, we have formed
the one’s complement of the number.
Original One's Complement
-------------------------------
10011001 --> 01100110
10000001 --> 01111110
11110000 --> 00001111
11111111 --> 00000000
00000000 --> 11111111
The two’s complement is a method for representing positive and negative integer values
in binary. The useful part of two’s complement is that it automatically includes the sign bit.
Complements are quite often used to represent negative numbers in digital computers, because they simplify the subtraction operation and logical manipulation. For instance, if the number N2 has to be subtracted from N1, i.e. N1 − N2, then instead of subtracting, the complement form of the negative number is formed and added: the subtraction of N2 from N1 is the same as the addition of N1 and the complement of N2, i.e. N1 + (−N2).
When the value of the base is substituted, the two types of complement are called the 2's and 1's complements for binary numbers, or the 10's and 9's complements for decimal numbers. The r's complement is sometimes called the "true complement" and the (r−1)'s complement the "radix-minus-one's complement".
The (r-1)'s complement
Given a positive number N in base r with an integer part of n digits and a fraction part of m digits, the (r−1)'s complement of N is defined as (r^n − r^−m) − N. The (r−1)'s complement in the decimal system is the 9's complement, and in the binary system it is the 1's complement. Some numerical examples of the 9's complement are as follows:
Example 1:
The 9's complement of 25.638 (decimal) is
= (10^2 − 10^−3) − 25.638
= 99.999 − 25.638
= 74.361
Example 2:
The 1's complement of 101100 is
= (2^6 − 1) − 101100
= (1000000 − 1) − 101100
= 111111 − 101100
= 010011
The 1's complement of (0.0110)2 is
= (1 − 2^−4) − 0.0110
= (1 − 0.0001) − 0.0110
= 0.1111 − 0.0110
= 0.1001
The 1's complement of 10.101 is
= (2^2 − 2^−3) − 10.101
= (100 − 0.001) − 10.101
= 11.111 − 10.101
= 1.010
The 10's complement of 52510 (decimal) is
= 10^5 − 52510
= 47490
The 10's complement of 0.3266 (decimal) is
= 1 − 0.3266
= 0.6734
The 10's complement of 25.638 (decimal) is
= 10^2 − 25.638
= 74.362
The 2's complement of 101100 is
= 2^6 − 101100
= 1000000 − 101100
= 010100
The 2's complement of (0.0110)2 is
= 1 − 0.0110
= 0.1010
The 2's complement of 10.101 is
= 2^2 − 10.101
= 100 − 10.101
= 01.011
The r's complement can also be obtained from the (r−1)'s complement by adding r^−m (i.e. 1 in the least significant digit position) to it.
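A Python sketch of the 1's and 2's complement of an n-bit binary word (illustrative helpers; integer words only, fraction parts left aside):

# ones_complement / twos_complement of an n-bit binary string.
def ones_complement(bits):
    return "".join("1" if b == "0" else "0" for b in bits)  # invert every bit

def twos_complement(bits):
    n = len(bits)
    value = ((1 << n) - int(bits, 2)) % (1 << n)             # 2^n - N
    return format(value, "0{}b".format(n))

print(ones_complement("101100"))  # 010011
print(twos_complement("101100"))  # 010100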
Binary codes:
A binary code represents text or computer processor instructions using the binary
number system's two binary digits, 0 and 1. The binary code assigns a bit string to each symbol
or instruction. For example, a binary string of eight binary digits (bits) can represent any of 256
possible values and can therefore correspond to a variety of different symbols, letters or
instructions.
In computing and telecommunications, binary codes are used for various methods of
encoding data, such as character strings, into bit strings. Those methods may use fixed-width or
variable-width strings. In a fixed-width binary code, each letter, digit, or other character is
represented by a bit string of the same length; that bit string, interpreted as a binary number, is
usually displayed in code tables in octal, decimal or hexadecimal notation. There are many
character sets and many character encodings for them.
Excess-3 code:
Excess-3 or 3-excess binary code (often abbreviated as XS-3 or X3), also called Stibitz code (after George Stibitz), is a self-complementary binary-coded decimal code and numeral system. It is a biased representation. Excess-3 was used on some older computers as well as in cash registers and hand-held portable electronic calculators of the 1970s, among other uses.
Biased codes are a way to represent values with a balanced number of positive and negative numbers using a pre-specified number N as a biasing value. Biased codes (and Gray codes) are non-weighted codes. In XS-3, numbers are represented as decimal digits, and each digit is represented by four bits as the digit value plus 3 (the "excess" amount):
To encode a number such as 127, one simply encodes each of the decimal digits as above,
giving (0100, 0101, 1010).
Excess-3 arithmetic uses different algorithms than normal non-biased BCD or binary
positional system numbers. After adding two excess-3 digits, the raw sum is excess-6.
For instance, after adding 1 (0100 in excess-3) and 2 (0101 in excess-3), the sum looks
like 6 (1001 in excess-3) instead of 3 (0110 in excess-3). In order to correct this problem,
after adding two digits, it is necessary to remove the extra bias by subtracting binary 0011
(decimal 3 in unbiased binary) if the resulting digit is less than decimal 10, or subtracting
binary 1101 (decimal 13 in unbiased binary) if an overflow (carry) has occurred. (In 4-bit
binary, subtracting binary 1101 is equivalent to adding 0011 and vice versa.)
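A Python sketch of excess-3 encoding and the digit-addition correction described above (illustrative; single decimal digits are assumed):

# xs3: encode one decimal digit as excess-3 (digit + 3, in four bits).
def xs3(d):
    return format(d + 3, "04b")

# xs3_add_digit: add two excess-3 digits and apply the correction:
# subtract 3 if there was no carry, add 3 (mod 16) if there was.
def xs3_add_digit(a, b):
    raw = int(xs3(a), 2) + int(xs3(b), 2)     # raw sum is excess-6
    carry, low = divmod(raw, 16)
    low = (low + 3) % 16 if carry else low - 3
    return carry, format(low, "04b")

print([xs3(d) for d in (1, 2, 7)])  # ['0100', '0101', '1010']
print(xs3_add_digit(1, 2))          # (0, '0110')  -> digit 3 in excess-3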
The reflected binary code (RBC), also known as Gray code after Frank Gray, is a
binary numeral system where two successive values differ in only one bit (binary digit). The
reflected binary code was originally designed to prevent spurious output from electromechanical
switches.
Digital systems can process data in discrete form only. Many physical systems supply continuous output data. The data must be converted into digital form before they can be used by a digital computer. Continuous, or analog, information is converted into digital form by means of an analog-to-digital converter. The reflected binary or Gray code shown in the table below is sometimes used for the converted digital data. The advantage of the Gray code over straight binary numbers is that the Gray code changes by only one bit as it sequences from one number to the next. In other words, the change from any number to the next in sequence is recognized by a change of only one bit. A typical application is a shaft position encoder, where the data are represented by the continuous change of a shaft position. The shaft is partitioned into segments, with each segment assigned a number. If adjacent segments are made to correspond to adjacent Gray code numbers, ambiguity is reduced when the shaft position is on the line that separates any two segments.
Gray code counters are sometimes used to provide the timing sequences that control the operations in a digital system. A Gray code counter is a counter whose flip-flops go through a sequence of states. Gray code counters remove the ambiguity during the change from one state of the counter to the next because only one bit can change during the state transition.
Decimal Binary Gray Gray as decimal
0 000 000 0
1 001 001 1
2 010 011 3
3 011 010 2
4 100 110 6
5 101 111 7
6 110 101 5
7 111 100 4
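A Python sketch of the usual binary-to-Gray conversion (each Gray bit is the XOR of adjacent binary bits); this reproduces the table above:

# binary_to_gray: g = b XOR (b >> 1)
def binary_to_gray(n):
    return n ^ (n >> 1)

for n in range(8):
    print(n, format(n, "03b"), format(binary_to_gray(n), "03b"))
# e.g. 3 -> 011 binary -> 010 Gray, matching the table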
Arithmetic Circuits:
Arithmetic circuit such as binary adders, parallel binary adder and BCD adder are
explained with circuit diagram.
Half adder
The half adder adds two single binary digits A and B. It has two outputs, sum (S) and
carry (C). The carry signal represents an overflow into the next digit of a multi-digit addition.
The value of the sum is 2C + S. The simplest half-adder design incorporates an XOR gate for S and an AND gate for C. With the addition of an OR gate to combine their carry outputs, two half adders can be combined to make a full adder. The half adder adds two input bits and generates a carry and sum, which are the two outputs of a half adder. The input variables of a half adder are called the augend and addend bits. The output variables are the sum and carry. The truth table for the half adder is:
Inputs Outputs
A B C S
0 0 0 0
1 0 0 1
0 1 0 1
1 1 1 0
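A Python sketch of the half-adder equations S = A XOR B, C = A AND B (illustrative only):

# half_adder: sum = A XOR B, carry = A AND B
def half_adder(a, b):
    return a ^ b, a & b   # (S, C)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))   # reproduces the truth table above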
Full adder
Schematic symbol for a 1-bit full adder with Cin and Cout drawn on sides of block to
emphasize their use in a multi-bit adder
A full adder adds binary numbers and accounts for values carried in as well as out. A one-bit full adder adds three one-bit numbers, often written as A, B, and Cin; A and B are the operands, and Cin is a bit carried in from the previous, less-significant stage. The full adder is usually a component in a cascade of adders, which add 8-, 16-, 32-, etc. bit binary numbers.
In this implementation, the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting logic. Using only two types of gates is convenient if the circuit is being implemented using simple IC chips which contain only one gate type per chip.
A full adder can be constructed from two half adders by connecting A and B to the inputs of one half adder, connecting the sum from that half adder to an input of the second half adder, connecting Cin to the other input, and ORing the two carry outputs. The critical path of a full adder runs through both XOR gates and ends at the sum bit.
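A Python sketch of a full adder built from two half adders and an OR gate, following the construction just described (illustrative only):

# A full adder built from two half adders plus an OR gate.
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry), as in the previous sketch

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)    # first half adder: A, B
    s, c2 = half_adder(s1, cin)  # second half adder: partial sum, carry in
    return s, c1 | c2            # carries combined by the OR gate

print(full_adder(1, 1, 0))  # (0, 1)
print(full_adder(1, 1, 1))  # (1, 1)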
Parallel binary adder is used to add two binary numbers. As for example, if we want to
add two four-bit binary numbers, we need to construct a four bit parallel binary adder as shown
below. Such an adder requires one Half-Adder denoted by HA and three Full-Adders denoted by
FA. The binary numbers being added are A4 A3 A2 A1 and B4 B3 B2 B1 and the answer is:
A4 A3 A2 A1
+ B4 B 3 B 2 B 1
The first column requires only a Half-Adder. For any column above the first, there may be a
carry from the preceding column. Therefore, we must use a Full-Adder for each column above
the first. To illustrate how parallel binary adder of the above picture works, let us take an
example. If we want to add two numbers say 9 and 11. The binary equivalent of decimal 9 is
1001 and that of decimal 11 is 1011. The given block diagram shown below shows how the
binary adder works with these inputs.
As shown in the above picture, the half adder adds the binary bits 1 + 1 to give a sum of 0 and a carry of 1. This carry goes into the first full adder, which adds 0 + 1 + 1 to get a sum of 0 and a carry of 1. Now, this carry goes into the next full adder, which adds 0 + 0 + 1 to get a sum of 1 and a carry of 0. The last full adder adds 1 + 1 + 0 to get a sum of 0 and a carry of 1. The final output of the system is 10100. The decimal equivalent of binary 10100 is 20, which is the correct decimal sum of 9 and 11. The parallel binary adder of the above figure has limited capacity: the largest binary numbers that can be added using it are 1111 and 1111.
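A Python sketch of the 4-bit parallel (ripple-carry) adder: the carry starts at 0, so the first stage behaves as a half adder, and each following stage is a full adder whose carry ripples to the left (illustrative only):

# ripple_add4: add two 4-bit binary strings; the carry ripples upward.
def ripple_add4(a, b):
    a_bits = [int(x) for x in reversed(a)]   # index 0 = LSB
    b_bits = [int(x) for x in reversed(b)]
    carry, sums = 0, []
    for i in range(4):
        s = a_bits[i] ^ b_bits[i] ^ carry
        carry = (a_bits[i] & b_bits[i]) | ((a_bits[i] ^ b_bits[i]) & carry)
        sums.append(str(s))
    return str(carry) + "".join(reversed(sums))

print(ripple_add4("1001", "1011"))  # 10100  (9 + 11 = 20)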
BCD Adder
BCD or Binary Coded Decimal is that number system or code which has the binary
numbers or digits to represent a decimal number. A decimal number contains 10 digits (0-9).
Now the equivalent binary numbers can be found out of these 10 decimal numbers. In case of
BCD the binary number formed by four binary digits, will be the equivalent code for the given
decimal digits. In BCD we can use the binary number from 0000-1001 only, which are the
decimal equivalents of 0-9 respectively. Suppose a number has a single decimal digit; then its equivalent Binary Coded Decimal is the corresponding four binary digits of that decimal digit. If the number contains two decimal digits, then its equivalent BCD is the corresponding eight binary digits: four for the first decimal digit and the next four for the second decimal digit. This may be made clear by an example. Let (12)10 be the decimal number; its equivalent binary coded decimal is 00010010. The four bits from the LSB are the binary equivalent of 2 and the next four are the binary equivalent of 1. In pure binary, (12)10 is 1100, which is four bits, but in BCD it is an eight-bit number. This is the main difference between a binary number and binary coded decimal. For decimal numbers 0 to 9 both binary and BCD are equal, but when the decimal number has more than one digit, BCD differs from binary.
Decimal number Binary number Binary Coded Decimal(BCD)
0 0000 0000
1 0001 0001
2 0010 0010
3 0011 0011
4 0100 0100
5 0101 0101
6 0110 0110
7 0111 0111
8 1000 1000
9 1001 1001
10 1010 0001 0000
11 1011 0001 0001
12 1100 0001 0010
13 1101 0001 0011
14 1110 0001 0100
15 1111 0001 0101
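A Python sketch of decimal-to-BCD encoding, one 4-bit group per decimal digit (illustrative helper):

# to_bcd: encode each decimal digit of n as its own 4-bit group.
def to_bcd(n):
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(12))  # 0001 0010
print(to_bcd(9))   # 1001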
Half Subtractor :
A half subtractor is used for subtracting one single-bit binary digit from another single-bit binary digit. The truth table of the half subtractor is shown below.
As with the adders, we need to derive the equations for the Difference and Borrow outputs:
Difference = A'B + AB' = A ⊕ B
Borrow = A'B
For a full subtractor, which also accepts a borrow-in C, the Difference and Borrow can be written from the truth table as
Difference = A'B'C + A'BC' + AB'C' + ABC
Reducing this as for the full adder gives
Difference = A ⊕ B ⊕ C
Borrow = A'B'C + A'BC' + A'BC + ABC
       = A'B'C + A'BC' + A'BC + A'BC + A'BC + ABC   (since A'BC = A'BC + A'BC + A'BC)
       = A'C(B' + B) + A'B(C' + C) + BC(A' + A)
Borrow = A'C + A'B + BC
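A Python sketch of the half and full subtractor equations above (illustrative only):

# half_subtractor: difference = A XOR B, borrow = (NOT A) AND B
def half_subtractor(a, b):
    return a ^ b, (1 - a) & b

# full_subtractor: also takes a borrow-in c
def full_subtractor(a, b, c):
    diff = a ^ b ^ c
    borrow = ((1 - a) & c) | ((1 - a) & b) | (b & c)   # A'C + A'B + BC
    return diff, borrow

print(half_subtractor(0, 1))     # (1, 1)
print(full_subtractor(0, 1, 1))  # (0, 1)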
Basic Gates
A NOT gate has one input and one output, where the output is always the inversion (i.e. negation) of the input. This means that if the input is zero, then the output will be one, and vice versa. An AND gate can have multiple inputs, while its output is a single line. Here the output will be high only if all the inputs are high; for any other case the output will be low. An OR gate is a multi-input, single-output gate whose output is logic 'zero' only when all the inputs to the gate are low. For all other combinations of inputs, where at least one of the inputs is high, the output will be logically high.
An OR gate performs the logical OR operation, which means the output is logical 1 if at least one of the inputs is 1. Just like an AND gate, an OR gate may have two or more inputs but only one output. Only if all of the inputs are in the low state (logical 0) is the output low (0); for all other input conditions the output will be high (logical 1). The logical symbol of the OR gate is shown below.
Truth Table of OR Gate
From the above explanation the truth table of the logical OR gate can be represented as follows.
In digital electronics there are several logic gates which operate on different logical operations, such as logical addition and logical multiplication. The AND gate is a widely used logic gate having two or more inputs and a single output. This gate operates on the rules of logical multiplication: if any of the inputs is low (0), then the output is also low, but if all the inputs are high (1) the output will also be high (1). There are many integrated circuits which work on this logic; we will come to them later. First of all, let us gather some idea of how the output depends on the inputs in the case of an AND gate. We just said that an AND gate performs the multiplication operation on binary digits. We also know there are two binary digits, 1 and 0. Multiplying 0 by 0 gives 0, and multiplying 1 by 0 or 0 by 1 also gives 0; we get 1 only when 1 is multiplied by 1. In other words, an AND gate is a digital device which produces a high output only when all inputs are high, and produces a low output for all other input conditions. A high digital signal means logical 1 and a low digital signal means logical 0. An AND gate may have any number of input probes but only one output probe.
A NOT gate is a logic gate which simply inverts the input digital signal. This is why a NOT gate is sometimes referred to as an inverter. A NOT gate always has a high (logical 1) output when its input is low (logical 0). On the other hand, a NOT gate always has a low (logical 0) output when the input is high (logical 1). The logical symbol of a NOT gate is shown below.
If the input binary variable of a NOT gate is denoted A, then the output binary variable of the gate will be A'. The symbol of the NOT operation is a bar ( - ) over the variable, or a prime ('). If the value of A is 1, then A' = 0; conversely, if the value of A is 0, then A' = 1. The truth table of a NOT gate can hence be represented as follows.
X OR Gate and X NOR Gate
The modulo-2 sum of two variables in the binary system is the sum without carry. The gate that performs this modulo-2 sum operation, without including the carry, is known as the X OR (exclusive OR) gate. An X OR gate is normally a two-input logic gate whose output is logical 1 only when exactly one input is logical 1. When both inputs are equal, that is either both are 1 or both are 0, the output is logical 0. This is the reason an X OR gate is also called an anti-coincidence gate or inequality detector. The gate is called X OR or exclusive OR because its output is 1 only when one of its inputs is exclusively 1. The truth table of the X OR gate is given below.
The binary operation of the above truth table is known as the exclusive OR operation and it is represented as A ⊕ B. The symbol of the exclusive OR operation is a plus sign surrounded by a circle, ⊕.
NOR Gate
NOR gate means NOT-OR gate. In a NOR gate the output of an OR gate is inverted through a NOT gate: an inverted OR operation is the NOR operation, and the logic gate that performs this operation is called a NOR gate. An OR gate followed by a NOT gate makes a NOR gate. The basic logic construction of the NOR gate is shown below.
The symbol of the NOR gate is similar to that of the OR gate, but a bubble is drawn at the output point of the OR gate symbol. NOR means "not OR", so the output of this gate is just the reverse of that of a similar OR gate. We know that the output of an OR gate is 0 only when all inputs are 0; in the case of a NOR gate, the output is 1 only when all inputs are 0. In all other cases, that is for all other combinations of inputs, the output is 0. Hence, the truth table of a NOR gate is shown below.
Like the OR gate, the NOR gate may also have more than two inputs. A NOR gate is also referred to as a universal gate, because all binary operations can be realized using only NOR gates. We know that there are only three basic operations: AND, OR and NOT. We also know that all complex binary operations can be realized using these three basic operations. If we can show that the AND, OR and NOT operations can each be realized using only NOR gates, then we can say that all binary operations can be realized using only NOR gates.
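A Python sketch showing the basic gates and how NOT, OR and AND can all be built from NOR alone, which is what makes NOR a universal gate (illustrative only):

def NOR(a, b):
    return 1 - (a | b)

def NOT(a):            # NOT from NOR: tie both inputs together
    return NOR(a, a)

def OR(a, b):          # OR from NOR: invert the NOR output
    return NOT(NOR(a, b))

def AND(a, b):         # AND from NOR: NOR of the inverted inputs (De Morgan)
    return NOR(NOT(a), NOT(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, NOT(a), OR(a, b), AND(a, b), NOR(a, b))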
Points to remember:
A number system of base (also called radix) r is a system that has r distinct symbols for its r digits.
The representations of units, tens, hundreds, thousands, etc. are called weights.
The integer part and the fractional part in a decimal number system are separated by a decimal point.
BOOLEAN ALGEBRA:
Boolean algebra is a different kind of algebra, invented by the famous mathematician George Boole in 1854 and published in his book "An Investigation of the Laws of Thought". Later, using this technique, Claude Shannon introduced a new type of algebra which is termed switching algebra. In digital electronics there are several methods of simplifying the design of logic circuits, and this algebra is one of them. According to George Boole, symbols can be used to represent the structure of logical thoughts. This type of algebra deals with the rules or laws, known as the laws of Boolean algebra, by which the logical operations are carried out. There are
also a few theorems of Boolean algebra that need to be noted carefully because they make calculations faster and easier. Boolean logic deals with only two values, 1 and 0, by which all the mathematical operations are performed. Variable, complement, and literal are terms used in Boolean algebra. A variable is a symbol used to represent a logical quantity. Any single variable can have a 1 or a 0 value. The complement is the inverse of a variable and is indicated by a bar over the variable (overbar). For example, the complement of the variable A is written A' here. If A = 1, then A' = 0; if A = 0, then A' = 1. The complement of the variable A is read as "not A" or "A bar." Sometimes a prime symbol rather than an overbar is used to denote the complement of a variable; for example, B' indicates the complement of B, and that is the convention used in the rest of these notes. A literal is a variable or the complement of a variable.
Boolean Addition
Recall from part 3 that Boolean addition is equivalent to the OR operation. In Boolean algebra, a sum term is a sum of literals. In logic circuits, a sum term is produced by an OR operation with no AND operations involved. Some examples of sum terms are A + B, A + B', A + B' + C, and A' + B + C' + D.
A sum term is equal to 1 when one or more of the literals in the term are 1. A sum term is equal to 0 only if each of the literals is 0.
Example
Determine the values of A, B, C, and D that make the sum term A + B' + C + D' equal to 0.
Boolean Multiplication
Also recall from part 3 that Boolean multiplication is equivalent to the AND operation. In Boolean algebra, a product term is a product of literals. In logic circuits, a product term is produced by an AND operation with no OR operations involved. Some examples of product terms are AB, A'B, ABC', and A'BC'D. A product term is equal to 1 only if each of the literals in the term is 1. A product term is equal to 0 when one or more of the literals are 0.
Example
Determine the values of A, B, C, and D that make the product term A'BC'D equal to 1.
Commutative Laws:
►The commutative law of addition for two variables is A + B = B + A. This law states that the order in which the variables are ORed makes no difference.
►The commutative law of multiplication for two variables is A.B = B.A. This law states that the order in which the variables are ANDed makes no difference.
Associative Laws:
►The associative law of addition is written as follows for three variables:
A + (B + C) = (A + B) + C
This law states that when ORing more than two variables, the result is the same regardless of the grouping of the variables.
►The associative law of multiplication is written as follows for three variables:
A.(B.C) = (A.B).C
This law states that it makes no difference in what order the variables are grouped when ANDing more than two variables.
Distributive Law:
►The distributive law is written for three variables as follows:
A(B + C) = AB + AC
This law states that ORing two or more variables and then ANDing the result with a single variable is equivalent to ANDing the single variable with each of the two or more variables and then ORing the products. The distributive law also expresses the process of factoring, in which the common variable A is factored out of the product terms, for example,
AB + AC = A(B + C).
There are 12 basic rules that are useful in manipulating and simplifying Boolean expressions. Rules 1 through 9 will be viewed in terms of their application to logic gates. Rules 10 through 12 will be derived in terms of the simpler rules and the laws previously discussed.
Rule 1. A + 0 = A
A variable ORed with 0 is always equal to the variable. If the input variable
A is 1, the output variable X is 1, which is equal to A. If A is 0, the output is
0, which is also equal to A.
Rule 2. A + 1 = 1
A variable ORed with 1 is always equal to 1. A 1 on an input to an OR gate
produces a 1 on the output, regardless of the value of the variable on the
other input.
Rule 3. A . 0 = 0
A variable ANDed with 0 is always equal to 0. Any time one input to an
AND gate is 0, the output is 0, regardless of the value of the variable on the
other input.
Rule 4. A . 1 = A
A variable ANDed with 1 is always equal to the variable. If A is 0 the output
of the AND gate is 0. If A is 1, the output of the AND gate is 1 because both
inputs are now 1s
Rule 5. A + A = A
A variable ORed with itself is always equal to the variable. If A is 0, then 0 + 0 = 0; and if A is 1, then 1 + 1 = 1.
Rule 6. A + A' = 1
A variable ORed with its complement is always equal to 1. If A is 0, then 0 + 0' = 0 + 1 = 1. If A is 1, then 1 + 1' = 1 + 0 = 1.
Rule 7. A . A = A
A variable ANDed with itself is always equal to the variable. If A = 0, then 0.0 = 0; and if A = 1, then 1.1 = 1.
Rule 8. A . A' = 0
A variable ANDed with its complement is always equal to 0. Either A or A' will always be 0, and when a 0 is applied to the input of an AND gate, the output will be 0 also.
Rule 9. (A')' = A
The double complement of a variable is always equal to the variable. If you start with the variable A and complement (invert) it once, you get A'. If you then take A' and complement (invert) it, you get A, which is the original variable.
Rule 10. A + AB = A
This rule can be proved by applying the distributive law, rule 2, and rule 4 as follows:
A + AB = A(1 + B)   Factoring (distributive law)
       = A . 1      Rule 2: (1 + B) = 1
       = A          Rule 4: A . 1 = A
Rule 11. A + A'B = A + B
This rule can be proved as follows:
A + A'B = (A + AB) + A'B       Rule 10: A = A + AB
        = (AA + AB) + A'B      Rule 7: A = AA
        = AA + AB + AA' + A'B  Rule 8: adding AA' = 0
        = (A + A')(A + B)      Factoring
        = 1 . (A + B)          Rule 6: A + A' = 1
        = A + B                Rule 4: drop the 1
Rule 12. (A + B)(A + C) = A + BC
This rule can be proved as follows:
(A + B)(A + C) = AA + AC + AB + BC   Distributive law
               = A + AC + AB + BC    Rule 7: AA = A
               = A(1 + C + B) + BC   Factoring
               = A + BC              Rule 2 and Rule 4
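These identities can be checked exhaustively in Python by trying every combination of variable values (a small illustrative check, not part of the original notes):

from itertools import product

# Check rules 10, 11 and 12 over all possible input combinations.
for A, B, C in product((0, 1), repeat=3):
    nA = 1 - A
    assert (A | (A & B)) == A                    # Rule 10: A + AB = A
    assert (A | (nA & B)) == (A | B)             # Rule 11: A + A'B = A + B
    assert ((A | B) & (A | C)) == (A | (B & C))  # Rule 12: (A+B)(A+C) = A + BC
print("Rules 10-12 hold for all input combinations")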
Karnaugh Maps
Minterm Notation:
From truth table to minterm notation:
ABC|M
-----------+--
000|1
001|1
010|1
011|0
100|1
101|0
110|0
111|0
M(A,B,C) = Σm(0,1,2,4).
Another example, this time with two variables:
F(A,B) = A'B + AB' + A'B'
The three terms are minterms 01, 10 and 00, i.e. minterm numbers 1, 2 and 0, so
F(A,B) = Σm(0,1,2)
Of course this also simplifies further:
F(A,B) = A'(B + B') + AB'
       = A' + AB'
but we can't easily write that in minterm notation.
Expand to SOP using DeMorgan's & Distributive
Add missing terms to get to Normalized SOP form using Inverse OR
Write down the Minterms.
F(A,B) = A'B' + A'B + AB'
The A'B' term comes from the upper left cell.
The A'B term comes from the lower left cell.
The AB' term comes from the upper right cell.
Which is what we had before!
P = A'B + A'B'
  = A'(B + B')
  = A'
What terms are contained in the circle represented by P?
The circle again contains two terms, but they also simplify!
Q = A'B' + AB'
  = B'(A' + A)
  = B'
How can we write a complete expression for F(A,B)?
Combining all the circled terms using OR's:
F(A,B) = P + Q
       = A' + B'
Which is simpler than we got as a first pass with the Boolean Algebra.
You can get the same result with Boolean algebra, it just isn't as easy to see:
F(A,B) = A'B' + A'B + AB'
       = A'(B' + B) + AB'
       = A' + AB'
       = (A' + A)(A' + B')    Distributive law (OR over AND)
       = 1 . (A' + B')
       = A' + B'
When you group terms it means that one or more of the variables involved doesn't matter. As in the A' term: the group is true when B is true and also when B is false. Thus the B variable does not matter and we can express the whole group as A'.
As we showed above, this is actually an application of the Inverse OR law.
Term = A'B + A'B'
     = A'(B + B')
     = A'
We'll see that a similar reasoning will apply to larger groupings of terms.
In general on K-Maps you will be able to combine 1, 2, 4 or 8 adjacent 1's into a single term.
In a K-Map physically adjacent implies Logically Adjacent
Moving from one cell to the next horizontally or vertically only changes 1 variable.
You can also wrap around from top to bottom (TB), bottom to top (BT), left to right (LR)
and right to left (RL).
K-Maps of 2 Variables:
Aside:
It's nice to work with 2 variables but there are not a lot of interesting functions of 2
variables.
How many different functions of 2 variables are there?
Only 2^4 = 16.
Since there are 4 rows in the table the 4 output bits can only take on 16 different possible
combinations.
K-Maps of 3 Variables:
Notice that the possible values of AB are not written in the usual order.
This is done to ensure that adjoining cells are logically adjacent.
This includes LR, RL, TB and BT wraparounds.
K-Maps of 4 Variables:
Example #1:
Use a K-Map to simplify the logic function represented by the following truth table:
ABC|M
------+--
000|1
001|1
010|0
011|0
100|1
101|0
110|1
111|0
What is the minterm representation of this function?
M(A,B,C) = Σm(0,1,4,6)
What does the K-Map look like?
As follows:
M(A,B,C) = A'B' + AC'
To write this expression:
Identify the terms within each group that do not change.
M(A,B,C) = A'B'C + ABC' + B'C'
The upper left and upper right terms are logically adjacent so they can be grouped.
Note that this is not a minimal expression.
Nor is it straight forward to simplify it with Boolean algebra.
Try it -- for homework!
M(A,B,C) = A'B'C + ABC' + B'C'
         = A'B'C + ABC' + B'C'(A + A')
         = A'B'C + ABC' + AB'C' + A'B'C'
         = A'B'(C + C') + AC'(B + B')
         = A'B' + AC'
Which is conveniently what we got using the K-Map earlier.
M(A,B,C) = A'B' + AC' + B'C'
Again, this is not a minimal expression and it is not easy to simplify by Boolean Algebra.
M(A,B,C) = A'B' + AC' + B'C'
         = A'B' + AC' + B'C'(A + A')
         = A'B' + AC' + AB'C' + A'B'C'
         = A'B'(1 + C') + AC'(1 + B')
         = A'B' + AC'
Which again gives us what we got via the K-Map earlier.
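The simplification can again be checked by brute force in Python, comparing the minimized expression against the original minterm list (illustrative check only):

from itertools import product

minterms = {0, 1, 4, 6}
for A, B, C in product((0, 1), repeat=3):
    index = A * 4 + B * 2 + C
    original = 1 if index in minterms else 0
    simplified = ((1 - A) & (1 - B)) | (A & (1 - C))   # A'B' + AC'
    assert original == simplified
print("A'B' + AC' is equivalent to the sum of minterms (0,1,4,6)")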
Canonical Forms:
Canonical forms express all binary variables in every product (AND) or sum (OR) term of the
Boolean function. There are two types of canonical forms of a Boolean function. The first one is
called sum of product and the second one is called product of
sum. A'BC + A'BC' + ABC' + AB'C' + A'B'C' + ABC is a canonical sum-of-products form of a Boolean function; the function has three inputs, that is, three binary variables, and each term of the function contains all three variables. (X'+Y+Z).(X+Y'+Z).(X'+Y+Z).(X'+Y+Z') is a canonical product-of-sums form of a Boolean function.
Non canonical forms do not express all binary variables in every product or sum term of the
Boolean function.
A binary variable may appear either in its normal form (X) or in its complemented form (X'). Now consider two binary variables X and Y combined with an AND operation. Since each variable may appear in either form, there are four possible combinations (X'.Y', X'.Y, X.Y', X.Y). Each of these four AND terms is called a minterm or a standard product. In a similar manner, n variables can be combined to form 2^n minterms. The 2^n different minterms may be determined by a method similar to the one shown below for three variables. The binary numbers from 0 to 2^n − 1 are listed under the n variables. Each minterm is obtained from an AND term of the n variables, with each variable being primed if the corresponding bit of the binary number is 0 and unprimed if it is 1. In a similar fashion, n variables forming an OR term, with each variable being primed or unprimed, provide 2^n possible combinations called maxterms or standard sums. The eight maxterms for three variables, together with their symbolic designations, are listed below. Any 2^n maxterms for n variables may be determined similarly. Each maxterm is obtained from an OR term of the n variables, with each variable being unprimed if the corresponding bit is 0 and primed if it is a 1.
SUM OF PRODUCT
A sum of product expression is a product term (minterm) or several product terms (minterms)
logically added (ORed) together. The following steps are used to express a Boolean function in
its sum-of-products form:
1. Construct the truth table of the function.
2. Form a minterm for each combination of the variables which produces a 1 in the function.
3. The desired expression is the sum (OR) of all the minterms obtained in step 2.
For example, for the input combinations 011, 100 and 111 the corresponding minterms are X'.Y.Z, X.Y'.Z' and X.Y.Z.
Hence, taking the sum of all these minterms, the given function can be expressed in its sum of
product form as:
F1 = X’.Y.Z + X.Y’.Z’ + X.Y.Z
F1 = m3+m4+m7
Similarly, F2 = X'.Y.Z + X.Y'.Z + X.Y.Z' + X.Y.Z can be expressed in its sum-of-products form as F2 = m3 + m5 + m6 + m7.
X Y Z F1 F2
0 0 0 0 0
0 0 1 0 0
0 1 0 0 0
0 1 1 1 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
This is the truth table for F1 and F2. It is sometimes convenient to express a Boolean function in its sum-of-products form. If it is not in this form, it can be made so by first expanding the expression into a sum of AND terms. Each term is then inspected to see if it contains all the variables. If it misses one or more variables, it is ANDed with an expression of the form (X + X'), where X is one of the missing variables. Let us take an example and express the given Boolean function in its sum-of-products form.
F = A’.(B+C’)
F = A’.B+A’.C’
The given function has three variables A, B, and C. The first term of the function A’B is missing
one variable, therefore
A’B = A’.B.(C+C’)
A’B = A’.B.C+A’B.C’
Similarly, the second term of the function A’C’ is missing one variable, therefore
A’.C’= A’.C’.(B+B’)
A’.C’ = A’.C’.B+A’.C’.B’
F = A’.B.C+A’.B.C’+A’.C’.B+A’.B’.C’
F = A’.B.C+A’.B.C’+A’.B.C’+A’.B’.C’
But in the above expression, the term A'.B.C' appears twice, and according to the Boolean postulate X + X = X it is possible to remove one of them.
F = A’.B.C+A’.B.C’+A’.B’.C’
F = A’.B’.C’+A’.B.C’+A’.B.C
F = m0+m2+m3
F (A, B, C) = ∑ (0, 2, 3)
The summation symbol ‘∑’ stands for the ORing of terms. The numbers following it are the
minterm of the function.
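A Python sketch that finds the minterm list of F = A'(B + C') by evaluating the expression for every input combination, illustrating the procedure above:

from itertools import product

minterms = []
for A, B, C in product((0, 1), repeat=3):
    F = (1 - A) & (B | (1 - C))        # F = A'(B + C')
    if F:
        minterms.append(A * 4 + B * 2 + C)
print(minterms)  # [0, 2, 3]  i.e. F(A,B,C) = sum of minterms (0, 2, 3)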
PRODUCT OF SUM
A product of sum expression is a sum term (maxterm) or several maxterms logically multiplied
(ANDed) together. For example, the expression (X’+Y)(X+Y’) is a product of sum expression.
The following steps are used to express a Boolean function in its product-of-sums form:
1. Construct the truth table of the function.
2. Form a maxterm for each combination of the variables which produces a 0 in the function.
3. The desired expression is the product (AND) of all the maxterms obtained in step 2.
For example, suppose the following five combinations of the variables produce a 0: 000, 010, 011, 101, 110. Then
F = (X+Y+Z).(X+Y’+Z).(X+Y’+Z’).(X’+Y+Z’).(X’+Y’+Z)
F = M0.M2.M3.M5.M6
Let us take an example: express the Boolean function F = X.Y + X'.Z in product-of-sums form.
First convert the function into OR terms using the distributive law:
F = X.Y + X'.Z
  = (X.Y + X').(X.Y + Z)
  = (X + X').(Y + X').(X + Z).(Y + Z)
  = (X' + Y).(X + Z).(Y + Z)
The function has three variables X, Y, and Z. Each OR term is missing one variable, so each is ORed with a term of the form V.V' for its missing variable V and then expanded. Combining all the terms and removing those that appear more than once, we finally obtain:
F = (X+Y+Z).(X+Y’+Z).(X’+Y+Z).(X’+Y+Z’)
F = M0.M2.M4.M5
F (X,Y,Z) = π (0,2,4,5)
The product symbol π denotes the ANDing of the maxterms. The numbers following it are the
maxterms of the function.
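Again a quick Python check (illustrative only) that the zeros of F = XY + X'Z fall exactly on maxterms 0, 2, 4 and 5:

from itertools import product

zeros = []
for X, Y, Z in product((0, 1), repeat=3):
    F = (X & Y) | ((1 - X) & Z)        # F = XY + X'Z
    if not F:
        zeros.append(X * 4 + Y * 2 + Z)
print(zeros)  # [0, 2, 4, 5] -> maxterms M0.M2.M4.M5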
Since there are a finite number of Boolean functions of n input variables, yet an infinite number of possible logic expressions you can construct with those n input values, clearly there are an infinite number of logic expressions that are equivalent (i.e., they produce the same result given the same inputs). It is therefore useful to represent a given Boolean function using a canonical, or standardized, form. For any given Boolean function there exists a unique canonical form. This eliminates some confusion when dealing with Boolean functions.
Actually, there are several different canonical forms. We will discuss only two here and employ only the first of the two. The first is the so-called sum of minterms and the second is the product of maxterms. Using the duality principle, it is very easy to convert between these two.
Implicant
An implicant is a product term that covers only 1-cells of the map (a term whose being 1 implies that the function is 1). A prime implicant is an implicant that (from the point of view of the map) is not fully contained in any one other implicant. An essential prime implicant is a prime implicant that includes at least one 1 that is not included in any other prime implicant.
Flip Flops:
A flip-flop is a circuit that has two stable states and can be used to store state information. A flip-
flop is a bistable multivibrator. The circuit can be made to change state by signals applied to one
or more control inputs and will have one or two outputs. It is the basic storage element in
sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics
systems used in computers, communications, and many other types of systems.
Flip-flops and latches are used as data storage elements. A flip-flop stores a single bit (binary
digit) of data; one of its two states represents a "one" and the other represents a "zero". Such data
storage can be used for storage of state, and such a circuit is described as sequential logic. When
used in a finite-state machine, the output and next state depend not only on its current input, but
also on its current state (and hence, previous inputs). It can also be used for counting of pulses,
and for synchronizing variably-timed input signals to some reference timing signal.
Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is,
when a latch is enabled it becomes transparent, while a flip flop's output only changes on a single
type (positive going or negative going) of clock edge.
Flip flop is a sequential circuit which generally samples its inputs and changes its outputs only at
particular instants of time and not continuously. Flip flop is said to be edge sensitive or edge
triggered rather than being level triggered like latches.
S-R Flip Flop
It is basically an S-R latch using NAND gates with an additional enable input. It is also called a level-triggered SR flip-flop. For this circuit, a change in the output will take place only if the enable input (E) is made active. In short, this circuit operates as an S-R latch if E = 1, but there is no change in the output if E = 0.
Block Diagram
Circuit Diagram
Truth Table
Operation
1. S = R = 0 : No change. R' and S' will both be equal to 1. Since S' and R' are the inputs of the basic S-R latch using NAND gates, there will be no change in the state of the outputs.
2. S = 0, R = 1 and S = 1, R = 0 : with E = 1 the latch is reset or set respectively, exactly as in the basic S-R latch.
3. S = R = 1 : with E = 1 both gated outputs S' and R' go low at the same time; hence the race condition will occur in the basic NAND latch.
Master slave JK FF is a cascade of two S-R FF with feedback from the output of second to input
of first. Master is a positive level triggered. But due to the presence of the inverter in the clock
line, the slave will respond to the negative level. Hence when the clock = 1 (positive level) the
master is active and the slave is inactive. Whereas when clock = 0 (low level) the slave is active
and master is inactive.
Circuit Diagram
Truth Table
Operation
2. J = 0 and K = 1 (Reset): Again clock = 1 − master active, slave inactive. Therefore, even with the changed outputs Q = 0 and Q bar = 1 fed back to the master, its output will be Q1 = 0 and Q1 bar = 1. That means S = 0 and R = 1.
These changed outputs are returned to the master inputs. But since clock = 0, the master is still inactive, so it does not respond to these changed outputs. This avoids the multiple toggling which leads to the race-around condition; the master-slave flip-flop therefore avoids the race-around condition.
Delay Flip Flop or D Flip Flop is the simple gated S-R latch with a NAND inverter connected between the S and R inputs. It has only one input. The input data appears at the output after some time; due to this delay between input and output, it is called a delay flip-flop. S and R are always complements of each other because of the NAND inverter. Hence the conditions S = R = 0 and S = R = 1 can never appear, so the forbidden input combinations of the S-R latch are avoided.
Block Diagram
Circuit Diagram
Truth Table
Operation
Toggle flip flop is basically a JK flip flop with the J and K terminals permanently connected together. It has only one input, denoted by T, as shown in the symbol diagram. The symbol for a positive edge triggered T flip flop is shown in the block diagram.
Symbol Diagram
Block Diagram
Truth Table
Operation
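A behavioural Python sketch of D, T and JK flip-flops, with the state updated once per simulated clock edge (purely illustrative; these are abstractions, not gate-level models):

# Simple behavioural models: each function returns the next state Q(t+1)
# given the present state Q(t) and the inputs sampled at the clock edge.
def d_ff(q, d):
    return d                      # D flip-flop: output follows the input

def t_ff(q, t):
    return q ^ t                  # T flip-flop: toggles when T = 1

def jk_ff(q, j, k):
    return (j & (1 - q)) | ((1 - k) & q)   # JK characteristic equation

q = 0
for clk in range(4):              # four clock edges with T = 1
    q = t_ff(q, 1)
    print("after edge", clk + 1, "Q =", q)   # Q toggles 1, 0, 1, 0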
Multiplexers
A multiplexer is a special type of combinational circuit. There are n data inputs, one output and m select inputs, with 2^m = n. It is a digital circuit which selects one of the n data inputs and routes it to the output. The selection of one of the n inputs is done by the select inputs. Depending on the digital code applied at the select inputs, one out of the n data sources is selected and transmitted to the single output Y. E is called the strobe or enable input, which is useful for cascading. It is generally an active-low terminal, which means it will perform the required operation when it is low.
Block diagram
2 : 1 multiplexer
4 : 1 multiplexer
16 : 1 multiplexer
32 : 1 multiplexer
Block Diagram
Truth Table
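A Python sketch of a 4 : 1 multiplexer, where two select lines choose one of four data inputs (names are illustrative):

# mux4: route one of four data inputs to the output, chosen by s1 s0.
def mux4(d0, d1, d2, d3, s1, s0):
    return (d0, d1, d2, d3)[s1 * 2 + s0]

print(mux4(1, 0, 1, 0, s1=1, s0=0))  # selects d2 -> 1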
Demultiplexers
A demultiplexer performs the reverse operation of a multiplexer i.e. it receives one input and
distributes it over several outputs. It has only one input, n outputs, m select input. At a time only
one output line is selected by the select lines and the input is transmitted to the selected output
line. A de-multiplexer is equivalent to a single pole multiple way switch as shown in fig.
1 : 2 demultiplexer
1 : 4 demultiplexer
1 : 16 demultiplexer
1 : 32 demultiplexer
Block diagram
Truth Table
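And the reverse operation, a 1 : 4 demultiplexer sketch that places the single input on the selected output line (illustrative only):

# demux4: place the input on the output line chosen by s1 s0; others are 0.
def demux4(d, s1, s0):
    outputs = [0, 0, 0, 0]
    outputs[s1 * 2 + s0] = d
    return outputs

print(demux4(1, s1=0, s0=1))  # [0, 1, 0, 0]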
Decoder
A decoder is a combinational circuit with n input lines and up to 2^n output lines; for each combination of the inputs, exactly one output line is activated. Typical applications include address decoding and driving devices such as relay actuators.
Block diagram
2 to 4 Line Decoder
The block diagram of the 2 to 4 line decoder is shown in the fig. A and B are the two inputs and D0 through D3 are the four outputs. The truth table explains the operation of the decoder: it shows that each output is 1 for only a specific combination of inputs.
Block diagram
Truth Table
Logic Circuit
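A Python sketch of the 2 to 4 line decoder, where exactly one of D0..D3 goes high for each A, B combination (illustrative only):

# decoder_2to4: returns (D0, D1, D2, D3) for inputs A, B.
def decoder_2to4(a, b):
    outputs = [0, 0, 0, 0]
    outputs[a * 2 + b] = 1
    return tuple(outputs)

print(decoder_2to4(1, 0))  # (0, 0, 1, 0) -> only D2 is high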
Encoder
Encoder is a combinational circuit which is designed to perform the inverse operation of the
decoder. An encoder has n number of input lines and m number of output lines. An encoder
produces an m bit binary code corresponding to the digital input number. The encoder accepts an
n input digital word and converts it into an m bit another digital word.
Block diagram
Priority encoders
Decimal to BCD encoder
Octal to binary encoder
Priority Encoder
This is a special type of encoder in which priority is given to the input lines. If two or more input lines are 1 at the same time, then the input line with the highest priority is considered. There are four inputs D0, D1, D2, D3 and two outputs Y0, Y1. Of the four inputs, D3 has the highest priority and D0 has the lowest priority. That means if D3 = 1 then Y1 Y0 = 11, irrespective of the other inputs. Similarly, if D3 = 0 and D2 = 1 then Y1 Y0 = 10, irrespective of the other inputs.
Block diagram
Truth Table
Logic Circuit
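A Python sketch of the 4-input priority encoder described above (D3 has the highest priority; names are illustrative):

# priority_encoder: returns (Y1, Y0) for inputs (D0, D1, D2, D3),
# with D3 having the highest priority.
def priority_encoder(d0, d1, d2, d3):
    if d3:
        return 1, 1
    if d2:
        return 1, 0
    if d1:
        return 0, 1
    return 0, 0          # only D0 (or nothing) active

print(priority_encoder(1, 1, 1, 0))  # (1, 0) -> D2 wins over D1 and D0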
For a ripple up counter, the Q output of the preceding FF is connected to the clock input of the next one.
For a ripple down counter, the Q bar output of the preceding FF is connected to the clock input of the next one.
For an up/down counter, the selection of the Q or Q bar output of the preceding FF is controlled by the mode control input M such that if M = 0 the counter counts up (Q is connected to CLK), and if M = 1 the counter counts down (Q bar is connected to CLK).
Block Diagram
Truth Table
Operation
S.N. Condition Operation
1. M = 0 (up counting): If M = 0 and M bar = 1, then AND gates 1 and 3 in the figure will be enabled, whereas AND gates 2 and 4 will be disabled, so the Q outputs drive the following clock inputs and the counter counts up.
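A small Python sketch of the resulting behaviour for a 3-bit up/down counter controlled by a mode input M, purely to illustrate the count sequences (a behavioural abstraction, not a flip-flop-level model):

# updown_counter: generate the count sequence for a 3-bit counter.
def updown_counter(mode, pulses, start=0):
    count, sequence = start, []
    for _ in range(pulses):
        count = (count + (-1 if mode else 1)) % 8   # 3-bit wrap-around
        sequence.append(format(count, "03b"))
    return sequence

print(updown_counter(mode=0, pulses=4))  # ['001', '010', '011', '100']
print(updown_counter(mode=1, pulses=4))  # ['111', '110', '101', '100']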
Input output interface provides a method for transferring information between internal
storage and external I/O devices. Peripherals connected to a computer need special
communication links for interfacing them with the CPU. The differences between the central computer and each peripheral are:
1. Peripherals are electromechanical and electromagnetic devices and their manner of
operation is different from the operation of the CPU and memory, which are
electronic devices. Therefore, a conversion of signal values may be required.
2. The data transfer rate of peripherals is usually slower than the transfer rate of the CPU
and consequently a synchronization mechanism may be needed.
3. Data codes and formats in peripherals differ from the word format in the CPU
and memory.
4. The operating modes of peripherals are different from each other and each must
be controlled so as not to disturb the operation of other peripherals connected to the CPU.
Each device may have its own controller that supervises the operations of the particular
mechanism in the peripheral.
A typical communication link between the processor and several peripherals is shown below. The I/O bus
consists of data lines, address lines and control lines. The magnetic disk, printer, and terminal are
employed in practically any general-purpose computer. The magnetic tape is used in some computers for backup storage. Each peripheral device has associated with it an interface unit.
Each interface decodes the address and control received from the I/O Bus, interprets them
for the peripheral, and provides signals for the peripheral controller. It also synchronizes
the data flow and supervises the transfer between peripheral and processor. Each
peripheral has its own controller that operates the particular electromechanical devices.
For example, the printer controller controls the paper motion, the print timing and the
selection of printing characters. A controller may be housed separately or may be physically
integrated with the peripheral.
The I/O bus from the processor is attached to all peripheral interfaces. To
communicate with a particular device, the processor places a device address on the
address lines. Each interface attached to the I/O bus contains an address decoder that monitors
the address lines. When the interface detects its own address, it activates the path between the
bus lines and the device that it controls. All peripherals whose address does not correspond to the
address in the bus are disabled by their interface.
At the same time that the address is made available in the address lines, the processor
provides a function code in the control lines. The interface selected responds to the function code
and proceeds to execute it. The function code is referred to as an I/O command and is in essence
an instruction that is executed in the interface and its attached peripheral unit. The interpretation
of the command depends on the peripheral that the processor is addressing. There are four types
of commands that an interface may receive. They are classified as control, status, data output and
data input.
A control command is issued to activate the peripheral and to inform it what to do. For
example, a magnetic tape unit may be instructed to backspace the tape by one record, to rewind
the tape, or to start the tape moving in the forward direction. The particular control
command issued depends on the peripheral, and each peripheral receives its own distinguished
sequence of control commands, depending on its mode of operations.
A status command is used to test various status conditions in the interface and
the peripheral. For example, the computer may wish to check the status of the peripheral
before a transfer is initiated. During the transfer, one or more errors may occur and be detected by the interface. These errors are designated by setting bits in a status register that the processor can read at certain intervals.
A data output command causes the interface to respond by transferring data from the bus
into one of its registers. Consider an example with a tape unit. The computer starts the tape
moving by issuing a control command. The processor then monitors the status of the tape by
means of a status command. When the tape is in the correct position, the processor issues a data
output command. The interface responds to the address and command and transfers the
information from the data lines in the bus to its buffer register. The interface then communicates
with the tape controller and sends the data to be stored on tape.
The data input command is the opposite of the data output. In this case the interface
receives an item of data from the peripheral and places it in its buffer register. The processor
checks if data are available by means of a status command and then issues a data input
command. The interface places the data on the data lines, where they are accepted by
the processor.
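The control / status / data-output sequence described above for the tape unit can be illustrated with a small C simulation. The register layout, command code and status bit below are invented for the sketch; a real interface defines its own.

#include <stdint.h>
#include <stdio.h>

/* A tiny simulated tape interface with control, status and data (buffer)
   registers. The layout and bit meanings are assumptions, not real hardware. */
typedef struct {
    uint8_t control;   /* receives control commands                 */
    uint8_t status;    /* bit 0 set when the tape is in position    */
    uint8_t data;      /* buffer register filled by data output     */
} TapeInterface;

enum { CMD_START = 0x01, STATUS_READY = 0x01 };

void write_word_to_tape(TapeInterface *t, uint8_t word)
{
    t->control = CMD_START;                   /* 1. control command: start the tape    */
    t->status |= STATUS_READY;                /*    (simulation only: tape gets ready)  */

    while ((t->status & STATUS_READY) == 0)   /* 2. status command: poll until ready    */
        ;

    t->data = word;                           /* 3. data output: fill buffer register   */
}

int main(void)
{
    TapeInterface tape = {0};
    write_word_to_tape(&tape, 0x5A);
    printf("buffer register holds 0x%02X\n", tape.data);
    return 0;
}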
In addition to communicating with I/O, the processor must communicate with the memory
unit. The memory bus contains data, address and read/write control lines. There are three ways in
which computer buses can be used to communicate with memory and I/O:
1. Use two separate buses, one for memory and the other for I/O.
2. Use one common bus for both memory and I/O but have separate control lines.
3. Use one common bus for memory and I/O with common control lines.
In the first method, the computer has independent sets of data, address and control buses,
one for accessing memory and the other for I/O. This is done in computers that provide a separate
I/O processor (IOP) in addition to the CPU.
The memory communicates with both the CPU and the IOP through a memory bus and
also with input and output devices through a separate I/O bus with its own address, data and
control lines. The purpose of the IOP is to provide an independent pathway for the
transfer of information between the external devices and the internal memory. The I/O
processor is sometimes called a data channel.
Many computers use one common bus to transfer information between memory or I/O
and the CPU. The distinction between a memory transfer and I/O transfer is made
through separate read and write lines.
The CPU specifies whether the address on the address lines is for a memory word or for
an interface register by enabling one of two possible read or write lines. The I/O read and I/O
write control lines are enabled during an I/O transfer. The memory read and memory write
control lines are enabled during a memory transfer.
This configuration isolates all I/O interface addresses from the address assigned to
memory and is referred to as the isolated I/O method for assigning addresses in a common bus.
ISOLATED I/O
In the isolated I/O configuration, the CPU has distinct input and output instructions and
each of these instructions is associated with the address of an interface register. When the CPU
fetches and decodes the operation code of an input or output instruction, it places the address
associated with the instruction into the common address lines. At the same time, it
enables the I/O read (for input) or I/O write (for output) control line. This informs the external
components that are attached to the common bus that the address in the address lines is
for an interface register and not for a memory word.
The isolated I/O method isolates memory and I/O addresses so that memory
address values are not affected by interface address assignment since each has its own address
space. The other alternative is to use the same address space for both memory and I/O. This is
the case in computers that employ only one set of read and write signals and do not distinguish
between memory and I/O addresses. This configuration is referred to as memory mapped I/O.
The computer treats an interface register as being part of the memory system.
The assigned addresses for interface registers cannot be used for memory words, which
reduces the available memory address range. In a memory-mapped I/O organization, there are no
specific input or output instructions. The CPU can manipulate I/O data residing in interface
registers with the same instructions that are used to manipulate memory words.
Each interface is organized as a set of registers that respond to read and write requests in
the normal address space. Typically, a segment of the total address space is reserved for interface
registers, but in general, they can be located at any address as long as there is not also a memory
word that responds to the same address. Computers with memory mapped I/O can use memory
type instructions to access I/O data. It allows the computer to use the same instructions for either
input-output transfers or for memory transfers.
ADVANTAGE
Load and store instructions used for reading and writing from memory can be used to
input and output data from I/O registers.
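In C, memory-mapped I/O is commonly expressed by reading and writing through a pointer to a fixed address, so ordinary load and store instructions perform the transfer. The address and register name below are purely hypothetical and would only be meaningful on hardware whose memory map actually places a device register there.

#include <stdint.h>

/* Hypothetical data register of some output device, assumed to be mapped
   at this address; the value is an illustration, not a real memory map.  */
#define DEVICE_DATA_REG ((volatile uint8_t *)0x4000A000u)

void send_byte(uint8_t b)
{
    *DEVICE_DATA_REG = b;      /* an ordinary store performs the output  */
}

uint8_t receive_byte(void)
{
    return *DEVICE_DATA_REG;   /* an ordinary load reads the register    */
}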
In a typical interface unit, a status bit may indicate that port A has received a new data item from the I/O device.
Another bit in the status register may indicate that a parity error has occurred during the transfer.
The interface registers communicate with the CPU through the bidirectional data bus. The
address bus selects the interface unit through the chip select and the two register select
inputs.
The circuit enables the chip select (CS) input when the interface is selected by
the address bus. The two register select inputs RS1 and RS0 are usually connected to the two
least significant lines of the address bus. These two inputs select one of the four
registers in the interface as specified in the table accompanying the diagram.
The content of the selected register is transferred to the CPU via the data bus when the
I/O read signal is enabled. The CPU transfers binary information into the selected register via the
data bus when the I/O write input is enabled.
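A possible reading of this selection logic is sketched below in C. The four register names follow the usual port A / port B / control / status arrangement, which is an assumption about the table the text refers to.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t port_a, port_b, control, status;
} InterfaceRegs;

/* CS must be asserted (the interface is selected by the address bus);
   RS1 and RS0, the two least significant address lines, pick one of the
   four registers.                                                        */
uint8_t *select_register(InterfaceRegs *r, int cs, int rs1, int rs0)
{
    if (!cs)
        return NULL;                      /* interface not selected      */
    switch ((rs1 << 1) | rs0) {
    case 0:  return &r->port_a;
    case 1:  return &r->port_b;
    case 2:  return &r->control;
    default: return &r->status;
    }
}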
The internal operations in a digital system are synchronized by means of clock pulses
supplied by a common pulse generator. If the interface registers and the CPU share a
common clock, the transfer is said to be synchronous.
Clock pulses are applied to all registers within a unit and all data transfers among internal
registers occur simultaneously during the occurrence of a clock pulse. If the internal timing in
each unit is independent of the other, each uses its own private clock for its internal registers. In
this case, the two units are said to be asynchronous to each other.
ASYNCHRONOUS DATA TRANSFER
Asynchronous data transfer between two independent units requires that control
signals be transmitted between the communicating units to indicate the time at which data
is being transmitted. There are two ways of achieving this:
1. Strobe control
2. Handshaking
STROBE CONTROL
The strobe control method of asynchronous data transfer employs a single control line to
time each transfer. The strobe may be activated by either the source or the destination unit.
A strobe pulse is supplied by one of the units to indicate to the other unit when the transfer has to
occur. A data transfer initiated by the destination unit activates the strobe pulse, informing the
source to provide the data. The source unit responds by placing the requested binary information
on the data bus. The data must be valid and remain on the bus long enough for
the destination unit to accept it. The falling edge of the strobe pulse can be used again to trigger a
destination register. The destination unit then disables the strobe, and the source removes the
data from the bus after a predetermined time interval.
HANDSHAKING
In this method, each item of data being transferred is accompanied by a control signal
that indicates the presence of data on the bus. The unit receiving the data item responds
with another control signal to acknowledge receipt of the data. This type of agreement between
two independent units is referred to as handshaking.
The basic principle of the two-wire handshaking method of data transfer is as follows:
One control line is in the same direction as the dataflow in the bus from the source to
the destination. It is used by the source unit to inform the destination unit whether there are
valid data in the bus. The other control line is in the other direction from the destination to the
source. It is used by the destination unit to inform the source whether it can accept data.
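The two-wire handshake can be mimicked with two flags and a shared data value, as in the C sketch below. Real units run on independent clocks; here the two sides are simply called in turn, so this illustrates the signalling order only.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

struct Bus {
    uint8_t data;
    bool data_valid;     /* source -> destination: valid data on the bus */
    bool data_accepted;  /* destination -> source: data has been taken   */
};

void source_put(struct Bus *bus, uint8_t item)
{
    bus->data = item;              /* place data on the bus        */
    bus->data_valid = true;        /* raise data-valid             */
}

void destination_take(struct Bus *bus, uint8_t *dest)
{
    if (bus->data_valid) {
        *dest = bus->data;         /* accept the data              */
        bus->data_accepted = true; /* raise data-accepted          */
    }
}

void source_finish(struct Bus *bus)
{
    if (bus->data_accepted) {
        bus->data_valid = false;    /* remove data from the bus    */
        bus->data_accepted = false; /* both lines return to idle   */
    }
}

int main(void)
{
    struct Bus bus = {0};
    uint8_t received;
    source_put(&bus, 0x3C);
    destination_take(&bus, &received);
    source_finish(&bus);
    printf("destination received 0x%02X\n", received);
    return 0;
}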
MEMORY ORGANIZATION:
MEMORY HIERARCHY
The part of the operating system that supervises the flow of information between
all storage devices is called the "Memory Management System". The memory management system
distributes programs and data to the various levels in the memory hierarchy. A computer system
may operate in one of two modes:
• Batch mode
• Time-sharing mode
In a Batch mode, each user prepares his program off-line and submits it to the computer
center. An operator loads all programs into the computer where they are executed. The operator
retrieves the printed output and returns it to the user.
In a time-sharing mode, many users communicate with the computer via remote
terminals. Because of slow human response compared to computer speeds, the computer can
respond to multiple users, seemingly at the same time.
MAIN MEMORY:
The main memory is the central storage unit in a computer system. It is a relatively large
and fast memory used to store programs and data during the computer operation. The principal
technology used for the main memory is based on semiconductor integrated circuits. Integrated
circuit RAM chips are available in two possible operating modes, static and dynamic. The static
RAM consists essentially of internal flip-flops that store the binary information.
The stored information remains valid as long as power is applied to the unit. The
dynamic RAM stores the binary information in the form of electric charges that are applied to
capacitors. The capacitors are provided inside the chip by MOS transistors. The stored
charges on the capacitors tend to discharge with time and the capacitors must be periodically
recharged by refreshing the dynamic memory.
Refreshing is done by cycling through the words every few milliseconds to restore the
decaying charge. The dynamic RAM offers reduced power consumption and larger storage
capacity in a single memory chip. The static RAM is easier to use and has shorter read and write
cycles.
A memory is just like a human brain: it is used to store data and instructions. Computer memory
is the storage space in the computer where the data to be processed and the instructions required for
processing are stored. The memory is divided into a large number of small parts. Each part is called
a cell. Each location or cell has a unique address, which varies from zero to the memory size minus
one.
For example, if a computer has 64K words, then this memory unit has 64 * 1024 = 65536 memory
locations. The addresses of these locations vary from 0 to 65535.
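The relation between memory size and address width can be checked with a few lines of C; this is only a worked confirmation of the 64K example above.

#include <stdio.h>

/* 64K words = 64 * 1024 = 65536 locations, which require 16 address bits;
   the addresses then run from 0 to 65535.                                 */
int main(void)
{
    unsigned long locations = 64UL * 1024UL;
    int bits = 0;
    while ((1UL << bits) < locations)
        bits++;
    printf("%lu locations need %d address bits (0 to %lu)\n",
           locations, bits, locations - 1);
    return 0;
}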
Memory is primarily of two types
Internal Memory − cache memory and primary/main memory
External Memory − magnetic disk / optical disk etc.
RAM
RAM constitutes the internal memory of the CPU for storing data, programs and
program results. It is read/write memory. It is called random access memory (RAM)
because the access time is independent of the address of the word; that is, each storage location
inside the memory is as easy to reach as any other location and takes the same amount of time.
Data can be reached at random and extremely fast, but RAM can also be quite expensive.
RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is
a power failure. Hence, a backup uninterruptible power system (UPS) is often used with
computers. RAM is small, both in terms of its physical size and in the amount of data it can hold.
RAM is of two types
Static RAM (SRAM)
Dynamic RAM (DRAM)
Static RAM (SRAM)
The word static indicates that the memory retains its contents as long as power remains
applied; however, data is lost when power is removed, because SRAM is volatile. SRAM chips use
a matrix of six transistors per cell and no capacitors. Since the transistors do not leak charge,
SRAM does not have to be refreshed on a regular basis. Because of the extra space taken by the
transistor matrix, SRAM uses more chips than DRAM for the same amount of storage, thus making
the manufacturing costs higher. Static RAM is used as cache memory, which needs to be very fast and
small.
Dynamic RAM (DRAM)
DRAM, unlike SRAM, must be continually refreshed in order for it to maintain the data.
This is done by placing the memory on a refresh circuit that rewrites the data several hundred
times per second. DRAM is used for most system memory because it is cheap and small. All
DRAMs are made up of memory cells. These cells are composed of one capacitor and one
transistor.
ROM
ROM stands for Read Only Memory. It is memory from which we can only read but
cannot write. This type of memory is non-volatile. The information is stored permanently in
such memories during manufacture. A ROM stores the instructions that are required to start the
computer when electricity is first turned on; this operation is referred to as bootstrapping. ROM chips
are used not only in computers but also in other electronic items such as washing machines and
microwave ovens.
ASSOCIATIVE MEMORY:
An associative memory locates all words that match the specified content and marks them for
reading. Because of its organization, the associative memory is uniquely suited to do parallel
searches by data association. Moreover, searches can be done on an entire word or on a specific
field within a word. Associative memories are used in applications where the search time is very
critical and must be very short.
HARDWARE ORGANIZATION
The associative memory consists of a memory array and logic for m words with n bits per
word. The argument register A and the key register K each have n bits, one for each bit of a word.
The match register M has m bits, one for each memory word.
Each word in memory is compared in parallel with the content of the argument register.
The words that match the bits of the argument register set corresponding bits in the match
register. Reading is accomplished by a sequential access to memory for those words
whose corresponding bits in the match register have been set. The key register provides a mask
for choosing a particular field or key in the argument word.
MATCH LOGIC
The match logic for each word can be derived from the comparison algorithm of two
binary numbers. First, we neglect the K bits and compare the argument in A with the bits stored
in the cells of the words.
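The roles of the argument register A, the key (mask) register K and the match register M can be illustrated with a small C sketch. The word width, memory size and contents below are arbitrary examples, not values from the text.

#include <stdint.h>
#include <stdio.h>

#define WORDS 4   /* m = 4 words of n = 16 bits, purely for illustration */

/* Bit j of K is 1 when bit j of the argument takes part in the comparison;
   masked-off bit positions always count as matching.                       */
void associative_match(const uint16_t memory[WORDS],
                       uint16_t A, uint16_t K, uint8_t *M)
{
    for (int i = 0; i < WORDS; i++)
        if (((memory[i] ^ A) & K) == 0)   /* word agrees with A on every unmasked bit */
            *M |= (uint8_t)(1u << i);     /* set bit i of the match register          */
}

int main(void)
{
    uint16_t mem[WORDS] = {0x1234, 0x12FF, 0xABCD, 0x1200};
    uint8_t M = 0;
    associative_match(mem, 0x1200, 0xFF00, &M);  /* search on the high byte only */
    printf("match register = 0x%X\n", M);        /* words 0, 1 and 3 match       */
    return 0;
}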
READ OPERATION
If more than one word in memory matches the unmasked argument field, all the matched
words will have 1's in the corresponding bit position of the match register.
It is then necessary to scan the bits of the match register one at a time. The matched
words are read in sequence by applying a read signal to each word line whose corresponding Mi
bit is 1.
If it is known that only one word may match the unmasked argument field, then output
Mi can be connected directly to the read line in the same word position; the content of the matched word will be
presented automatically at the output lines and no special read command signal is needed.
Furthermore, if we exclude words having zero content, then all zero output will indicate that no
match occurred and that the searched item is not available in memory.
WRITE OPERATION
If the entire memory is loaded with new information at once, then the writing can be done
by addressing each location in sequence. The information is loaded prior to a search operation. If
unwanted words have to be deleted and new words inserted one at a time, there is a need for a
special register to distinguish between active and inactive words. This register is called
"Tag Register".
A word is deleted from memory by clearing its tag bit to 0.
CACHE MEMORY
Cache memory is a very high speed semiconductor memory which can speed up the CPU. It
acts as a buffer between the CPU and main memory. It is used to hold those parts of the data and
programs which are most frequently used by the CPU. These parts of the data and programs are
transferred from disk to cache memory by the operating system, from where the CPU can access them.
Advantages
Cache memory is faster than main memory.
It consumes less access time as compared to main memory.
It stores the program that can be executed within a short period of time.
It stores data for temporary use.
Disadvantages
Cache memory has limited capacity.
It is very expensive.
VIRTUAL MEMORY
Virtual memory is a technique that allows the execution of processes which are not
completely available in memory. The main visible advantage of this scheme is that programs can
be larger than physical memory. Virtual memory is the separation of user logical memory from
physical memory.
This separation allows an extremely large virtual memory to be provided for programmers
when only a smaller physical memory is available. The following are situations in which the entire
program does not need to be loaded fully in main memory:
User-written error handling routines are used only when an error occurs in the data or
computation.
Certain options and features of a program may be used rarely.
Many tables are assigned a fixed amount of address space even though only a small
amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many
benefits:
Fewer I/O operations would be needed to load or swap each user program into memory.
A program would no longer be constrained by the amount of physical memory that is
available.
Each user program could take less physical memory, so more programs could be run at the
same time, with a corresponding increase in CPU utilization and throughput.
Auxiliary Memory
Auxiliary memory is much larger in size than main memory but is slower. It normally stores
system programs, instructions and data files. It is also known as secondary memory. It can also be
used as an overflow/virtual memory in case the main memory capacity has been exceeded.
Secondary memories cannot be accessed directly by a processor. First the data/information of
auxiliary memory is transferred to the main memory and then that information can be accessed
by the CPU. The characteristics of auxiliary memory are as follows:
Non-volatile memory − Data is not lost when power is cut off.
Reusable − The data stays in secondary storage on a permanent basis until it is
overwritten or deleted by the user.
Reliable − Data in secondary storage is safe because of high physical stability of
secondary storage device.
Convenience − With the help of computer software, authorised users can locate and
access the data quickly.
Capacity − Secondary storage can store large volumes of data in sets of multiple disks.
Cost − It is much less expensive to store data on a tape or disk than in primary memory.
ASSOCIATIVE MAPPING
Associative mapping stores both the address and the content (data) of the memory
word. The 15-bit address is shown as a five-digit octal number and its corresponding 12-bit word as a
four-digit octal number. A CPU address of 15 bits is placed in the argument register and the associative
memory is searched for a matching address. If the address is found, the corresponding 12-bit
data word is read and sent to the CPU.
DIRECT MAPPING
The simplest way of associating main memory blocks with cache block is the direct
mapping technique. In this technique, block k of main memory maps into block k modulo m of
the cache, where m is the total number of blocks in the cache. The address is broken into three parts:
(s-r) MSB bits represent the tag to be stored in a line of the cache corresponding to the block
stored in the line; r bits in the middle identifying which line the block is always stored in; and the
w LSB bits identifying each word within the block.
This means that:
The number of addressable units = 2^(s+w) words or bytes
The block size (cache line width, not including the tag) = 2^w words or bytes
The number of blocks in main memory = 2^s (i.e., determined by all the address bits that are not in w)
The number of lines in cache = m = 2^r
The size of the tag stored in each line of the cache = (s - r) bits
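The way an address splits into tag, line and word fields can be shown directly in C. The particular widths chosen below (w = 2, r = 9, s = 14, so a 16-bit address with a 5-bit tag) are assumptions for the sketch; any consistent choice works.

#include <stdint.h>
#include <stdio.h>

enum { W = 2, R = 9, S = 14 };   /* tag is S - R = 5 bits */

void split_address(uint32_t addr, uint32_t *tag, uint32_t *line, uint32_t *word)
{
    *word = addr & ((1u << W) - 1);                       /* w least significant bits */
    *line = (addr >> W) & ((1u << R) - 1);                /* next r bits: cache line  */
    *tag  = (addr >> (W + R)) & ((1u << (S - R)) - 1);    /* remaining s - r tag bits */
}

int main(void)
{
    uint32_t tag, line, word;
    split_address(0xBEEF, &tag, &line, &word);
    printf("tag=%u line=%u word=%u\n", tag, line, word);
    return 0;
}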
Direct mapping is simple and inexpensive to implement, but if a program repeatedly accesses two blocks that
map to the same line, the cache thrashes back and forth, reloading the line over
and over again, so the miss rate becomes very high.
SET-ASSOCIATIVE MAPPING
Set-associative mapping addresses the problem of possible thrashing in the direct mapping
method. It does this by saying that instead of having exactly one line that a block can map to in
the cache, a few lines are grouped together to create a set. A block in memory can then map to
any one of the lines of a specific set.
The memory address is broken down in a similar way to direct mapping except that there is a
slightly different number of bits for the tag (s-r) and the set identification (r). It should look
something like the following:
If we took the exact same system, but converted it to 2-way set associative mapping (2-way
meaning we have 2 lines per set), we'd get the following:
Tag (13 bits), set identifier (9 bits), word id (2 bits)
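A minimal C lookup using these field widths is sketched below; only the tag directory is modelled, and the sizes (13-bit tag, 9-bit set id, 2-bit word id, 2 ways) are the ones in the example above.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

enum { WORD_BITS = 2, SET_BITS = 9, WAYS = 2, SETS = 1 << SET_BITS };

typedef struct {
    bool     valid;
    uint32_t tag;
} Line;

static Line cache[SETS][WAYS];   /* tag directory only, no data array */

/* A block may sit in either line (way) of its set, so both are checked. */
bool lookup(uint32_t addr)
{
    uint32_t set = (addr >> WORD_BITS) & (SETS - 1);
    uint32_t tag = addr >> (WORD_BITS + SET_BITS);
    for (int way = 0; way < WAYS; way++)
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return true;    /* hit  */
    return false;           /* miss */
}

int main(void)
{
    printf("address 0x%X: %s\n", 0x123456u, lookup(0x123456u) ? "hit" : "miss");
    return 0;
}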
The simplest and most commonly used procedure is to update main memory with every
memory write operation, with cache memory being updated in parallel if it contains the word at
the specified address. This is called the "write-through method".
This method has the advantage that main memory always contains the same data as the
cache. The second procedure is called the "write-back method". In this method only the cache
location is updated during a write operation. The location is then marked by a flag so that later
when the word is removed from the cache it is copied into main memory.
The reason for the write-back method is that during the time a word resides in the cache, it
may be updated several times. Analytical results indicate that the number of memory writes in a
typical program ranges between 10 and 30 % of the total references to memory.
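The difference between the two policies can be seen in a toy C simulation with a one-line cache and a dirty flag; the sizes and the one-word blocks are simplifications made only for the sketch.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define MEM_SIZE 16                 /* tiny memory, one-word "blocks" */

static uint8_t main_memory[MEM_SIZE];

typedef struct {
    bool    valid, dirty;
    uint8_t addr, data;
} CacheLine;

/* Write-through: main memory is updated with every write, so it always
   holds the same data as the cache.                                     */
void write_through(CacheLine *c, uint8_t addr, uint8_t data)
{
    c->valid = true; c->addr = addr; c->data = data;
    main_memory[addr] = data;        /* memory updated in parallel        */
}

/* Write-back: only the cache is updated and the line is flagged dirty;
   memory is written only when the word is removed from the cache.       */
void write_back(CacheLine *c, uint8_t addr, uint8_t data)
{
    if (c->valid && c->dirty && c->addr != addr)
        main_memory[c->addr] = c->data;   /* copy the old word out on eviction */
    c->valid = true; c->dirty = true; c->addr = addr; c->data = data;
}

int main(void)
{
    CacheLine line = {0};
    write_through(&line, 3, 0x11);   /* memory[3] updated immediately         */
    write_back(&line, 5, 0x22);      /* memory[5] still 0: only cache written */
    write_back(&line, 7, 0x33);      /* eviction now writes memory[5]         */
    printf("mem[3]=0x%02X mem[5]=0x%02X mem[7]=0x%02X\n",
           main_memory[3], main_memory[5], main_memory[7]);
    return 0;
}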