CA_Notes

The document provides an overview of computer architecture and organization, highlighting their differences and classifications based on size, functionality, and data handling. It details the functional components of a digital computer, including input units, CPU, memory, and output units, along with various number systems and conversion methods. Additionally, it introduces Hamming code as an error-correcting method for ensuring data accuracy during transmission or storage.

UNIT-1

1. Computer architecture describes how a system is built from its components.


 This can be a high-level overview or a detailed explanation, including the
instruction set architecture, microarchitecture, logic design, and
implementation.
 It is about designing a computer system to balance performance, efficiency,
cost, and reliability.
2. Computer Organization is about how the components of a computer system,
like the CPU, memory, and input/output devices, are connected and work together
to execute programs. It focuses on the operational aspects and how hardware
components are implemented to support the architecture.

Computer Architecture VS Computer Organization

 Computer Architecture is concerned with the structure and behaviour of a computer system as seen by the user, whereas Computer Organization is concerned with the way hardware components are connected together to form a computer system.
 Architecture acts as the interface between hardware and software, while Organization deals with the components and their connections within a system.
 Architecture helps us understand the functionalities of a system; Organization tells us how exactly all the units in the system are arranged and interconnected.
 A programmer can view architecture in terms of instructions, addressing modes and registers, whereas Organization expresses the realization of the architecture.
 While designing a computer system, architecture is considered first; organization is done on the basis of the architecture.
 Architecture deals with high-level design issues; Organization deals with low-level design issues.
 Architecture involves logic (instruction sets, addressing modes, data types, cache optimization); Organization involves physical components (circuit design, adders, signals, peripherals).

Classification of Computers
The computer systems can be classified on the following basis:
1. On the basis of size.
2. On the basis of functionality.
3. On the basis of data handling.

Classification on the basis of size

1. Super computers : Supercomputers are the highest-performing systems. A
supercomputer is a computer with a high level of performance compared to a
general-purpose computer. The actual performance of a supercomputer is
measured in FLOPS instead of MIPS. All of the world’s 500 fastest
supercomputers run Linux-based operating systems.
Supercomputers play an important role in the field of computation and are
used for intensive computation tasks in various fields, including quantum
mechanics, weather forecasting, climate research, oil and gas exploration,
molecular modeling, and physical simulations.
Eg: PARAM, Jaguar, Roadrunner.
2. Mainframe computers : Commonly called big iron, these are usually used by
big organizations for bulk data processing such as statistics, census data
processing and transaction processing, and are widely used as servers, since
these systems have a higher processing capability than the other classes of
computers. Most mainframe architectures were established in the 1960s;
research and development continued over the years, and the mainframes of
today are far better than the earlier ones in size, capacity and efficiency.
Eg: IBM z Series, System z9 and System z10 servers.
3. Mini computers : These computers came onto the market in the mid-1960s
and were sold at a much cheaper price than the mainframes. They were
designed for control, instrumentation, human interaction, and
communication switching, as distinct from calculation and record keeping;
later, with evolution, they became very popular for personal use.
Eg: Personal Laptop, PC etc.

4. Micro computers : A microcomputer is a small, relatively inexpensive
computer with a microprocessor as its CPU. It includes a microprocessor,
memory, and minimal I/O circuitry mounted on a single printed circuit
board. Their predecessors, mainframes and minicomputers, were
comparatively much larger, harder to maintain and more expensive.
Microcomputers formed the foundation for the present-day computers and
smart gadgets that we use in day-to-day life.
Eg: Tablets, Smartwatches.

Classification on the basis of functionality

1. Servers : Servers are dedicated computers which are set up to offer services
to clients. They are named for the type of service they offer.
Eg: security server, database server.

2. Workstation : These are computers designed primarily to be used by a
single user at a time. They run multi-user operating systems. They are the
ones we use for our day-to-day personal and commercial work.
3. Information Appliances : These are portable devices designed to perform a
limited set of tasks like basic calculations, playing multimedia, browsing
the internet etc. They are generally referred to as mobile devices. They
have very limited memory and flexibility and generally run on an “as-is”
basis.
4. Embedded computers : These are computing devices used inside other
machines to serve a limited set of requirements. They follow instructions
from non-volatile memory and do not require a reboot or reset. The
processing units used in such devices are built for those basic requirements
only and differ from the ones used in personal computers, better known as
workstations.

Classification on the basis of data handling

1. Analog : An analog computer is a form of computer that uses
continuously-changeable aspects of physical phenomena, such as electrical,
mechanical, or hydraulic quantities, to model the problem being solved.
Anything that varies continuously with respect to time can be regarded as
analog, just as an analog clock measures time by the distance traveled by
the hands of the clock around the circular dial.
2. Digital : A computer that performs calculations and logical operations with
quantities represented as digits, usually in the binary number system of “0”
and “1”; that is, a computer capable of solving problems by processing
information expressed in discrete form. By manipulating combinations of
binary digits, it can perform mathematical calculations, organize and
analyze data, control industrial and other processes, and simulate dynamic
systems such as global weather patterns.
3. Hybrid : A computer that processes both analog and digital data. A hybrid
computer is a digital computer that accepts analog signals, converts them
to digital form and processes them digitally.
Functional Components of a Computer
A computer is a combination of hardware and software resources which integrate
together and provides various functionalities to the user.
Hardware comprises the physical components of a computer, like the processor,
memory devices, monitor, keyboard etc., while software is the set of programs or
instructions that the hardware resources require to function properly.

A computer needs certain input, processes that input and produces the desired
output. The input unit takes the input, the central processing unit does the
processing of data and the output unit produces the output. The memory unit holds
the data and instructions during the processing.
Digital Computer: A digital computer can be defined as a programmable machine
that processes data represented in discrete (binary) form.

Details of Functional Components of a Digital Computer

 Input Unit : The input unit consists of input devices that are attached to the
computer. These devices take input and convert it into binary language that
the computer understands. Some of the common input devices are keyboard,
mouse, joystick, scanner etc.
 Central Processing Unit (CPU) : Once the information is entered into the
computer by the input device, the processor processes it. The CPU is called
the brain of the computer because it is the control center of the computer. It
first fetches instructions from memory and then interprets them so as to
know what is to be done. If required, data is fetched from memory or input
device. Thereafter, the CPU executes or performs the required computation
and then either stores the output or displays it on the output device. The
CPU has three main components responsible for different functions: the
Arithmetic Logic Unit (ALU), the Control Unit (CU) and memory registers.
 Arithmetic and Logic Unit (ALU) : The ALU, as its name suggests
performs mathematical calculations and takes logical decisions. Arithmetic
calculations include addition, subtraction, multiplication and division.
Logical decisions involve comparison of two data items to see which one is
larger or smaller or equal.
 Control Unit : The Control unit coordinates and controls the data flow in
and out of CPU and also controls all the operations of ALU, memory
registers and also input/output units. It is also responsible for carrying out all
the instructions stored in the program. It decodes the fetched instruction,
interprets it and sends control signals to input/output devices until the
required operation is done properly by ALU and memory.
 Memory Registers : A register is a temporary unit of memory in the CPU.
These are used to store the data which is directly used by the processor.
Registers can be of different sizes(16 bit, 32 bit, 64 bit and so on) and each
register inside the CPU has a specific function like storing data, storing an
instruction, storing address of a location in memory etc. The user registers
can be used by an assembly language programmer for storing operands,
intermediate results etc. Accumulator (ACC) is the main register in the ALU
and contains one of the operands of an operation to be performed in the
ALU.
 Memory : Memory attached to the CPU is used for storage of data and
instructions and is called internal memory. The internal memory is divided
into many storage locations, each of which can store data or instructions.
Each memory location is of the same size and has an address. With the help
of the address, the computer can read any memory location directly without
having to search the entire memory. When a program is executed, its data is
copied to the internal memory and stored there till the end of the execution.
The internal memory is also called the primary memory or main memory.
Since the time to access data is independent of its location in memory, this
memory is also called Random Access Memory (RAM).
 Output Unit: The output unit consists of output devices that are attached
with the computer. It converts the binary data coming from CPU to human
understandable form. The common output devices are monitor, printer,
plotter etc.

Number System
A number system is defined as a system of writing to express numbers. It is the
mathematical notation for representing numbers of a given set by using digits or
other symbols in a consistent manner.
It also allows us to perform arithmetic operations like addition, subtraction,
multiplication and division.
Types of Number Systems
There are various types of number systems in mathematics. The four most common
number system types are:
1. Decimal number system (Base- 10)
2. Binary number system (Base- 2)
3. Octal number system (Base-8)
4. Hexadecimal number system (Base- 16)
1. Decimal Number System
Number system with base value 10 is termed as Decimal number
system. It uses 10 digits i.e. 0 to 9 for the creation of numbers.
Examples : 123, 1675489,189,6785 etc.

2. Binary Number System


Number System with base value 2 is termed as Binary number system. It uses 2
digits i.e. 0 and 1 for the creation of numbers. The numbers formed using these two
digits are termed as Binary Numbers.
Binary number system is very useful in electronic devices and computer systems
because it can be easily performed using just two states ON and OFF i.e. 0 and 1.
Decimal Numbers 0 to 9 are represented in binary as: 0, 1, 10, 11, 100, 101, 110,
111, 1000, and 1001
Examples:
14 can be written as 1110
19 can be written as 10011
50 can be written as 110010
3. Octal Number System
Octal Number System is one in which the base value is 8. It uses 8 digits i.e. 0 to7
for creation of Octal Numbers. Octal Numbers can be converted to Decimal value
by multiplying each digit with the place value and then adding the result. Here the
place values are 80, 81, and 82. Octal Numbers are useful for the representation of
UTF8 Numbers.
Example:
(135)10 can be written as (207)8
(215)10 can be written as (327)8

4. Hexadecimal Number System


Number System with base value 16 is termed as Hexadecimal Number System. It
uses 16 digits for the creation of its numbers. Digits from 0-9 are taken like the
digits in the decimal number system but the digits from 10-15 are represented as A-
F i.e. 10 is represented as A, 11 as B, 12 as C, 13 as D, 14 as E, and 15 as F.
Hexadecimal Numbers are useful for handling memory address locations.
Examples:
(255)10 can be written as (FF)16
(1096)10 can be written as (448)16
(4090)10 can be written as (FFA)16

Number System Conversion Methods


1. Decimal to Binary Number System
To convert from decimal to binary, keep dividing the decimal number by 2 and
write down the remainders; reading the remainders from bottom to top gives the
binary representation of the decimal number. If the number contains a fractional
part, multiply the fractional part by 2 repeatedly and collect the integer carries.
Example
(10.25)₁₀
Integer part: (10)₁₀ = (1010)₂
Fractional part:
0.25 x 2 = 0.50
0.50 x 2 = 1.00
Note: Keep multiplying the fractional part by 2 until a fractional part of 0.00 is
obtained.
(0.25)₁₀ = (0.01)₂
Answer: (10.25)₁₀ = (1010.01)₂
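The divide-by-2 / multiply-by-2 procedure above can be sketched in Python. This is a minimal sketch; the function name `dec_to_bin` and the `frac_bits` cut-off are illustrative, not part of the notes:

```python
def dec_to_bin(value, frac_bits=8):
    """Convert a non-negative decimal number to a binary string.

    Integer part: repeated division by 2, remainders read bottom to top.
    Fractional part: repeated multiplication by 2, carries read top to bottom.
    """
    int_part = int(value)
    frac_part = value - int_part

    # Integer part: divide by 2 and collect the remainders.
    digits = []
    while int_part > 0:
        digits.append(str(int_part % 2))
        int_part //= 2
    int_str = "".join(reversed(digits)) or "0"

    # Fractional part: multiply by 2 and collect the integer carries,
    # stopping after frac_bits digits in case the fraction never reaches 0.
    frac_digits = []
    while frac_part > 0 and len(frac_digits) < frac_bits:
        frac_part *= 2
        frac_digits.append(str(int(frac_part)))
        frac_part -= int(frac_part)
    return int_str + ("." + "".join(frac_digits) if frac_digits else "")
```

For the worked example, `dec_to_bin(10.25)` reproduces `1010.01`.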
2. Binary to Decimal Number System
To convert from binary to decimal, multiply each digit of the number by the
corresponding power of 2, in decreasing order of exponent. Digits of the
fractional part are multiplied by negative powers of 2.
Example
(1010.01)₂
1x2³ + 0x2² + 1x2¹ + 0x2⁰ + 0x2⁻¹ + 1x2⁻² = 8+0+2+0+0+0.25 = 10.25
(1010.01)₂ = (10.25)₁₀
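The same positional weighting can be written as a short Python sketch (the function name is illustrative):

```python
def bin_to_dec(s):
    """Convert a binary string (optionally with a fractional part) to decimal."""
    if "." in s:
        int_str, frac_str = s.split(".")
    else:
        int_str, frac_str = s, ""
    value = 0
    # Integer digits are weighted by non-negative powers of 2 (2^0 at the right).
    for i, d in enumerate(reversed(int_str)):
        value += int(d) * 2 ** i
    # Fractional digits are weighted by negative powers of 2 (2^-1, 2^-2, ...).
    for i, d in enumerate(frac_str, start=1):
        value += int(d) * 2 ** -i
    return value
```

`bin_to_dec("1010.01")` evaluates the sum 8 + 2 + 0.25 from the example.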
3. Decimal to Octal Number System
To convert from decimal to octal, keep dividing the decimal number by 8 and
write down the remainders; reading the remainders from bottom to top gives the
octal representation of the decimal number. If the number contains a fractional
part, multiply the fractional part by 8 repeatedly.
Example
(10.25)₁₀
(10)₁₀ = (12)₈
Fractional part:
0.25 x 8 = 2.00
Note: Keep multiplying the fractional part by 8 until a fractional part of .00 is
obtained.
(.25)₁₀ = (.2)₈
Answer: (10.25)₁₀ = (12.2)₈
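The decimal-to-binary and decimal-to-octal procedures are the same algorithm with a different divisor, so one hedged sketch covers both; `dec_to_base` is an illustrative name, not a standard function:

```python
def dec_to_base(value, base, frac_digits=8):
    """Convert a non-negative decimal number to a string in the given base (2-10)."""
    int_part, frac_part = int(value), value - int(value)
    # Integer part: repeated division, remainders read bottom to top.
    digits = []
    while int_part > 0:
        digits.append(str(int_part % base))
        int_part //= base
    out = "".join(reversed(digits)) or "0"
    # Fractional part: repeated multiplication, carries read top to bottom.
    frac = []
    while frac_part > 0 and len(frac) < frac_digits:
        frac_part *= base
        frac.append(str(int(frac_part)))
        frac_part -= int(frac_part)
    return out + ("." + "".join(frac) if frac else "")
```

With `base=8`, `dec_to_base(10.25, 8)` gives `12.2`, matching the example above.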

4. Octal to Decimal Number System
To convert from octal to decimal, multiply each digit of the number by the
corresponding power of 8, in decreasing order of exponent. Digits of the
fractional part are multiplied by negative powers of 8.
Example
(12.2)₈
1x8¹ + 2x8⁰ + 2x8⁻¹ = 8+2+0.25 = 10.25
(12.2)₈ = (10.25)₁₀
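A short sketch of this direction, leaning on Python's built-in `int(s, 8)` for the integer part (the function name is illustrative):

```python
def oct_to_dec(s):
    """Convert an octal string (optionally with a fractional part) to decimal."""
    int_str, _, frac_str = s.partition(".")
    value = int(int_str, 8)  # built-in handles the integer part
    # Fractional digits are weighted by negative powers of 8.
    for i, d in enumerate(frac_str, start=1):
        value += int(d) * 8 ** -i
    return value
```

`oct_to_dec("12.2")` computes 8 + 2 + 0.25 = 10.25 as in the worked example.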
5. Hexadecimal to Binary Number System
To convert from hexadecimal to binary, write the 4-bit binary equivalent of each
hexadecimal digit.
Example
(3A)₁₆ = (00111010)₂
6. Binary to Hexadecimal Number System
To convert from binary to hexadecimal, group the bits in groups of 4 from the
right end and write the equivalent hexadecimal digit for each 4-bit group. Add
extra 0’s on the left to complete the leftmost group.
Example
1111011011
0011 1101 1011
(001111011011)₂ = (3DB)₁₆
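Both hex conversions are mechanical digit-group substitutions, sketched below in Python; the helper names are illustrative:

```python
HEX_DIGITS = "0123456789ABCDEF"

def bin_to_hex(bits):
    """Group bits in fours from the right and map each group to a hex digit."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # pad on the left to a multiple of 4
    return "".join(HEX_DIGITS[int(bits[i:i + 4], 2)]
                   for i in range(0, len(bits), 4))

def hex_to_bin(hexstr):
    """Expand each hex digit to its 4-bit binary equivalent."""
    return "".join(format(int(d, 16), "04b") for d in hexstr)
```

`bin_to_hex("1111011011")` pads to `001111011011` and yields `3DB`, and `hex_to_bin("3A")` yields `00111010`, matching both examples.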
7. Binary to Octal Number System
To convert from binary to octal, group the bits in groups of 3 from the right end
and write the equivalent octal digit for each 3-bit group. Add 0’s on the left to
complete the leftmost group.
Example
111101101
111 101 101
(111101101)₂ = (755)₈
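The 3-bit grouping works the same way as the 4-bit grouping for hex; a minimal sketch (illustrative function name):

```python
def bin_to_oct(bits):
    """Group bits in threes from the right and map each group to an octal digit."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)  # pad on the left to a multiple of 3
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))
```

`bin_to_oct("111101101")` reproduces `755` from the example.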

Hamming code
Hamming code is an error-correcting code used to ensure data accuracy during
transmission or storage. Hamming code detects and corrects the errors that can
occur when the data is moved or stored from the sender to the receiver. This simple
and effective method helps improve the reliability of communication systems and
digital storage. It adds extra bits to the original data, allowing the system to detect
and correct single-bit errors.
What are Redundant Bits?
Redundant bits are extra binary bits that are generated and added to the
information-carrying bits of a data transfer to ensure that no bits are lost during
the transfer. The number of redundant bits can be calculated using the following
formula:
2ʳ ≥ m + r + 1
where m is the number of bits in the input data, and r is the number of redundant bits.
Suppose the number of data bits is 7; then the number of redundant bits satisfies
2⁴ ≥ 7 + 4 + 1, i.e. 16 ≥ 12. Thus, the number of redundant bits is 4.
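The smallest r satisfying the inequality can be found by trying r = 0, 1, 2, … in turn; a minimal sketch (the function name is illustrative):

```python
def redundant_bits(m):
    """Smallest r such that 2**r >= m + r + 1 (Hamming parity-bit count)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r
```

For m = 7 data bits this returns 4, as computed above.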

Types of Parity Bits


A parity bit is a bit appended to a group of binary bits to ensure that the total
number of 1’s in the data is even or odd. Parity bits are used for error detection.
There are two types of parity bits:
 Even Parity Bit: In the case of even parity, for a given set of bits, the number
of 1’s are counted. If that count is odd, the parity bit value is set to 1, making
the total count of occurrences of 1’s an even number. If the total number of
1’s in a given set of bits is already even, the parity bit’s value is 0.
 Odd Parity Bit: In the case of odd parity, for a given set of bits, the number
of 1’s are counted. If that count is even, the parity bit value is set to 1,
making the total count of occurrences of 1’s an odd number. If the total
number of 1’s in a given set of bits is already odd, the parity bit’s value is 0.
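The two parity rules above can be captured in a few lines of Python; the function name and the `even` flag are illustrative:

```python
def parity_bit(bits, even=True):
    """Return the parity bit for a string of bits.

    Even parity: the bit makes the total count of 1's even.
    Odd parity:  the bit makes the total count of 1's odd.
    """
    ones = bits.count("1")
    if even:
        return 0 if ones % 2 == 0 else 1
    return 1 if ones % 2 == 0 else 0
```

For example, `"1011"` has three 1's, so its even-parity bit is 1 and its odd-parity bit is 0.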

Integer Numbers : Signed and Unsigned Binary Numbers


Integer variables are represented in a signed or unsigned manner. Positive and
negative values are differentiated by using a sign flag in signed numbers.
Unsigned numbers do not use any flag for the sign, i.e., only positive numbers
can be stored as unsigned numbers.
It is very easy to represent positive and negative numbers in our day-to-day life:
we write positive numbers without any sign and negative numbers with a -
(minus) sign before them. But in a digital system it is not possible to use a sign
symbol, because data is stored in binary form in digital computers. For
representing the sign in binary numbers, we require a special notation.

Signed Numbers
Signed numbers have a sign bit so that they can differentiate positive and
negative integer numbers. A signed binary number carries both the sign bit and
the magnitude of the number. For representing a negative decimal number, the
corresponding sign symbol is placed in front of the binary number.
Signed numbers are represented in three ways. In the first two, the sign bit
allows two possible representations of zero (positive (0) and negative (1)),
which is an ambiguous representation. The third representation is 2's
complement, in which no double representation of zero is possible, making it
unambiguous. The types of representation of signed binary numbers are:
1. Sign-Magnitude form
In this form, a binary number has a bit for a sign symbol. If this bit is set to
1, the number will be negative else the number will be positive if it is set to
0. Apart from this sign-bit, the n-1 bits represent the magnitude of the
number.
2. 1's Complement
By inverting each bit of a number, we obtain its 1's complement. Negative
numbers can be represented in 1's complement form. In this form, the binary
number also has an extra bit for sign representation, as in sign-magnitude
form.
For example, the binary number of 78 is (1001110)₂
1’s complement : 0110001
3. 2's Complement
By inverting each bit of a number and adding 1 to its least significant bit, we
obtain its 2's complement. Negative numbers can also be represented in 2's
complement form. In this form, the binary number also has an extra bit for
sign representation, as in sign-magnitude form.
For example, the binary number of 78 is (1001110)₂
1’s complement : 0110001
2’s complement : 0110001 + 1 = 0110010
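Both complements can be computed mechanically on a fixed-width bit string; a minimal Python sketch (function names illustrative):

```python
def ones_complement(bits):
    """1's complement: invert every bit."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """2's complement: 1's complement plus one, kept to the same width."""
    width = len(bits)
    value = (int(ones_complement(bits), 2) + 1) % (2 ** width)
    return format(value, f"0{width}b")
```

For `1001110` (78), these return `0110001` and `0110010`, matching the worked example.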

Real Numbers
Real numbers include rational numbers like positive and negative integers,
fractions, and irrational numbers. In other words, any number that we can think of,
except complex numbers, is a real number. For example, 3, 0, 1.5, 3/2, √5, and so
on are real numbers.
In Fixed-point Representation, the term ‘fixed point’ refers to the corresponding
manner in which numbers are represented, with a fixed number of digits after, and
sometimes before, the decimal point.
With floating-point representation, the placement of the decimal point can ‘float’
relative to the significant digits of the number.
For example, a fixed-point representation with a uniform decimal point
placement convention can represent the numbers 123.45, 1234.56, 12345.67,
etc., whereas a floating-point representation could in addition represent
1.234567, 123456.7, 0.00001234567, 1234567000000000, etc. As such, floating
point can support a much wider range of values than fixed point, with the
ability to represent very small numbers and very large numbers.
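The fixed-versus-floating distinction can be made concrete with a small Python sketch: a common way to implement fixed point is to store an integer scaled by a constant factor. The names `SCALE`, `to_fixed` and `fixed_add` are illustrative assumptions, not a standard API:

```python
# Fixed point: store an integer scaled by a constant (here, 2 decimal places).
SCALE = 100

def to_fixed(x):
    """Scale a decimal value to a fixed-point integer, e.g. 123.45 -> 12345."""
    return round(x * SCALE)

def fixed_add(a, b):
    """Addition of two fixed-point values needs no rescaling."""
    return a + b

price = to_fixed(123.45)
tax = to_fixed(6.17)
total = fixed_add(price, tax) / SCALE  # back to a decimal value: 129.62

# Floating point: the point "floats", so one type spans wildly different
# magnitudes that a 2-decimal fixed-point format could not represent.
tiny, huge = 1.234567e-5, 1.234567e15
```

The fixed-point form has uniform precision (always two decimal places), while the floating-point values `tiny` and `huge` trade a fixed point position for a much wider range.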
