CAB Notes

The document summarizes the five generations of computers from 1940 to the present. Each generation is defined by a major technological development that made computers smaller, cheaper, more powerful, and more efficient. The first generation used vacuum tubes and magnetic drums. The second generation introduced transistors, replacing vacuum tubes. The third generation saw the development of integrated circuits, packing many transistors onto a single chip. The fourth generation brought microprocessors, putting all computer components onto a single silicon chip. The fifth generation involves artificial intelligence applications.

GENERATIONS OF COMPUTERS

The history of computer development is often discussed in terms of the different generations of computing devices. A generation refers to a stage of improvement in the development of a product, and the term is also applied to the advancements of computer technology. With each new generation, the circuitry has become smaller and more advanced than in the generation before it. As a result of this miniaturization, the speed, power, and memory of computers have increased proportionally. New discoveries are constantly being made that affect the way we live, work, and play.

Each generation of computers is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices. Read about each generation and the developments that led to the devices we use today.
First Generation - 1940-1956: Vacuum Tubes

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. A vacuum tube is a fragile glass device that uses filaments as a source of electrons; it can amplify and control electronic signals. Without any moving parts, vacuum tubes could take very weak signals and make them stronger (amplification), and they could stop and start the flow of electricity instantly (switching). A magnetic drum, also referred to simply as a drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Magnetic drums were once used as a primary storage device but have since been relegated to auxiliary storage. Input was based on punched cards and paper tape, and output was displayed on printouts.
These computers were very expensive to operate, and in addition to using a great deal of electricity, they generated a lot of heat, which was often the cause of malfunctions. First generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Machine languages are the only languages understood by computers.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer, delivered to its first client, the U.S. Census Bureau, in 1951.

Second Generation - 1956-1963: Transistors

Transistors replaced vacuum tubes and ushered in the second generation of computers. In 1947 three scientists, John Bardeen, William Shockley, and Walter Brattain, working at AT&T's Bell Labs, invented what would replace the vacuum tube forever. This invention was the transistor, which functions like a vacuum tube in that it can be used to relay and switch electronic signals. There were obvious differences between the transistor and the vacuum tube: the transistor was faster, more reliable, smaller, and much cheaper to build. One transistor replaced the equivalent of 40 vacuum tubes. Transistors were made of solid material, chiefly silicon, an abundant element (second only to oxygen) found in beach sand and glass, so they were very cheap to produce. Transistors were found to conduct electricity faster and better than vacuum tubes. They were also much smaller and gave off virtually no heat compared to vacuum tubes.

[Figure 1.1(b), pg 7]
Second-generation computers still relied on punched cards for input and printouts for output. They moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.
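To make the jump from machine language to assembly language concrete, here is a minimal sketch in Python of what an assembler does: it translates symbolic mnemonics into binary machine words. The instruction set (LOAD/ADD/STORE, 4-bit opcodes, 12-bit addresses) is invented purely for illustration and does not correspond to any real machine.

    OPCODES = {"LOAD": "0001", "ADD": "0010", "STORE": "0011"}

    def assemble(line):
        # Translate one symbolic instruction, e.g. "ADD 5", into a
        # 16-bit machine word: a 4-bit opcode plus a 12-bit address.
        mnemonic, operand = line.split()
        return OPCODES[mnemonic] + format(int(operand), "012b")

    for line in ["LOAD 4", "ADD 5", "STORE 6"]:
        print(f"{line:10s} -> {assemble(line)}")

The point of the sketch is the mapping itself: a programmer writes "ADD 5" instead of memorizing the bit pattern 0010000000000101.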

Third Generation - 1964-1971: Integrated Circuits


Transistors were a tremendous breakthrough in advancing the computer. However, no one could have predicted that thousands, even millions, of transistors (circuits) could be compacted into such a small space. The integrated circuit, sometimes referred to as a semiconductor chip, packs a huge number of transistors onto a single wafer of silicon. Robert Noyce of Fairchild Corporation and Jack Kilby of Texas Instruments independently discovered the amazing attributes of integrated circuits. Placing such large numbers of transistors on a single chip vastly increased the power of a single computer and lowered its cost considerably.
Since the invention of integrated circuits, the number of transistors that can be placed on a single chip has doubled every two years, shrinking both the size and cost of computers while further enhancing their power. Most electronic devices today use some form of integrated circuits placed on printed circuit boards, thin pieces of bakelite or fiberglass with electrical connections etched onto them, sometimes called a motherboard.
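The doubling just described is exponential growth. A rough illustrative calculation in Python (the starting count of 2,300 transistors matches the early Intel 4004 and is used here only as an example):

    def transistors(start_count, years, doubling_period=2):
        # Doubling every `doubling_period` years is exponential growth.
        return start_count * 2 ** (years / doubling_period)

    print(transistors(2_300, 10))  # ten years on: about 73,600 transistors

After ten years, five doublings multiply the count by 32.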

These third generation computers could carry out instructions in billionths of a second, and the machines shrank to the size of small file cabinets. Yet the single biggest advancement in the computer era was still to come.

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.

[Figure 1.1(c), pg 7]
A chip is a small piece of semiconducting material (usually silicon) on which an integrated circuit is embedded. A typical chip is less than a quarter of a square inch and can contain millions of electronic components (transistors). Computers consist of many chips placed on electronic boards called printed circuit boards. There are different types of chips: for example, CPU chips (also called microprocessors) contain an entire processing unit, whereas memory chips contain blank memory.

Computer chips, both for CPU and memory, are composed of semiconductor materials. Semiconductors make it possible to miniaturize electronic components such as transistors. Not only does miniaturization mean that the components take up less space, it also means that they are faster and require less energy.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation - 1971-Present: Microprocessors


The microprocessor brought in the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. A microprocessor is a silicon chip that contains a CPU; in the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers and most workstations sits a microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles.

Three basic characteristics differentiate microprocessors:

 Instruction Set: The set of instructions that the microprocessor can execute.

 Bandwidth: The number of bits processed in a single instruction.

 Clock Speed: Given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.

For bandwidth and clock speed, the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50 MHz is more powerful than a 16-bit microprocessor that runs at 25 MHz.
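As a rough sketch of this rule of thumb, "power" can be treated as bits processed per second, the product of bandwidth and clock speed. Real performance also depends on the instruction set and many other factors, so this is only an illustration of the comparison above:

    def raw_throughput(bits_per_instruction, clock_mhz):
        # Bits processed per second = bandwidth x clock speed.
        return bits_per_instruction * clock_mhz * 1_000_000

    cpu_a = raw_throughput(32, 50)  # 32-bit microprocessor at 50 MHz
    cpu_b = raw_throughput(16, 25)  # 16-bit microprocessor at 25 MHz
    print(cpu_a / cpu_b)            # -> 4.0: cpu_a is roughly four times as powerful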
What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse, and handheld devices.

Fifth Generation - Present and Beyond: Artificial Intelligence


Fifth generation computing devices, based on artificial intelligence, are still in
development, though there are some applications, such as voice recognition, that are
being used today.

Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at the Massachusetts Institute of Technology. Artificial intelligence includes:
 Games Playing: programming computers to play games such as chess and checkers.

 Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms).

 Natural Language: programming computers to understand natural human languages.

 Neural Networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.

 Robotics: programming computers to see and hear and react to other sensory stimuli.
Currently, no computers exhibit full artificial intelligence (that is, none are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.
Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators.
Characteristics Of Computers:-
The computer is a powerful tool due to the following characteristics:

1. Automatic: A machine is said to be automatic if it works by itself without human intervention. Computers are automatic machines because once started on a job, they carry on until the job is finished, normally without any human assistance.
2. Speed: A computer is a very fast device. It can perform in a few seconds the amount of work that a human being could do in an entire year, working day and night and doing nothing else. That is, a computer does in one minute what would take a man his entire lifetime.
3. Accuracy: In addition to being very fast, computers are very accurate. The accuracy of computers is consistently high, and the degree of accuracy of a particular computer depends upon its design. Errors can occur in a computer, but these are mainly due to human rather than technological weaknesses, for example, errors caused by incorrect input data.
4. Diligence: Unlike human beings, a computer is free from monotony, tiredness, and lack of concentration. It can work continuously for hours without making errors and without grumbling.
5. Versatility: Versatility is an important characteristic of computers. One moment a computer is preparing the results of an examination, the next moment it is busy preparing electricity bills, and in between it may help an office secretary trace an important letter in seconds. In short, a computer is capable of performing almost any task, provided the task can be broken down into a series of logical steps.
6. Power of remembering: A computer can store and recall any amount of information because of its secondary storage capability. Even after several years, the information recalled will be as accurate as on the day it was fed to the computer. It is entirely up to the user to make a computer retain or forget particular information.
7. No I.Q.: A computer is not a magical device. It possesses no intelligence of its own; its I.Q. is zero. It has to be told what to do and in what sequence. Only the user can determine what tasks a computer will perform. A computer cannot take its own decisions.
8. No feelings: Computers are devoid of emotions. They have no feelings because they are machines.

HARDWARE AND SOFTWARE


Hardware

As we learned in the Overview portion of the study guide, a computer system has two basic parts: hardware and software. The equipment associated with a computer system is the hardware. Computer hardware is responsible for performing four basic functions: input, processing, output, and storage. Let's go back to the basic definition of a computer. A computer is an electronic device that is programmed to accept data (input), process it into useful information (output), and store it for future use (storage). The processing function is under the control of a set of instructions (software).

Software

As important as hardware devices may be, they are useless without the instructions that control them. The instructions used to control hardware and accomplish tasks are called software. Software falls into two broad categories: applications software and systems software.

Relationship between Hardware and Software:


For a computer to produce useful output, its hardware and software must work together. Nothing useful can be done with the hardware on its own, and the software cannot be utilized without supporting hardware.

To take an analogy, a cassette player and the cassettes purchased from the market are hardware, while the songs recorded on the cassettes are its software. The important points regarding the relationship between hardware and software are:

1. Both hardware and software are necessary for a computer to do a useful job; they are complementary to each other.
2. The same hardware can be loaded with different software to make the computer perform different types of jobs, just as different songs can be played on the same cassette player.
3. Except for upgrades, hardware is normally a one-time expense, whereas software is a continuing expense.

Types of Software

Applications software allows you to perform a particular task or solve a specific problem. A word processor is the most widely used example of applications software; it can be used to create a letter or memo or anything else you need to type. Other examples include games, spreadsheets, tax preparation programs, and typing tutors. Applications software can be purchased in stores and is called packaged or commercial software; in other words, it is prewritten. However, there may be situations that require a specific type of software that is not available. It would then be necessary to design and write a program; this software is called custom software. Most often, personal computers utilize packaged software.
When packaged software is purchased, it will come with written instructions for installation and use. These instructions are the documentation. Packaged software can be purchased, or in some cases, it is available at no cost. Freeware is software considered to be in the public domain, and it may be used or altered without fee or restriction. Another form of somewhat free software is shareware; the author of shareware hopes you will make a voluntary contribution for using the product.

Task-oriented software is sometimes called productivity software, because it allows you to perform tasks that make you more productive. The major categories of productivity software are word processing, spreadsheet, database management, graphics, and communications. Most often these categories of software are bundled together and sold as a single package called an office suite. The applications in a suite are designed to work together, which is very important because it allows you to share files. Another advantage of using a suite is that the applications look similar, which reduces your learning curve. Microsoft Office is the most popular office suite for the personal computer today. Two other important office suite products are Corel's WordPerfect Office Suite and Sun's StarOffice Suite.
The most important applications software categories included in office suites are described below:

Word Processor: Provides the tools for entering and revising text, adding graphical elements, and formatting and printing documents.

Spreadsheet: Provides the tools for working with numbers and allows you to create and edit electronic spreadsheets for managing and analyzing information.

Database Management: Provides the tools for managing a collection of interrelated facts. Data can be stored, updated, manipulated, retrieved, and reported in a variety of ways.

Presentation Graphics: Provides the tools for creating graphics that represent data in a visual, easily understood format.

Communication Software: Provides the tools for connecting one computer with another to enable sending and receiving information and sharing files and resources.

Internet Browser: Provides access to the Internet through a service provider by using a graphical interface.

As important as applications software may be, it is not able to communicate directly with hardware devices. Another type of software is required: operating systems software. Operating systems software is the set of programs that lies between applications software and the hardware devices.

Think of the cross section of an onion. The inner core of the onion represents the hardware devices, the outside layer represents the applications software, and the middle layer is the operating systems software. Instructions must be passed from the outer layer through the middle layer before reaching the inner layer.

INPUT DEVICES
Q. Define input devices. Explain any two input devices. M’07

Q. Give various types of input devices. M’06

Q. Explain the following:

1. OCR
2. MICR
3. OMR M’06
An input device is an electromechanical device which accepts data from the outside world in a form that the computer can utilize, and sends the data or instructions to the processing unit to be processed into useful information. There are many examples of input devices. They can be classified into the following categories:-

o Keyboard devices
In computing, a keyboard is an input device, partially modeled after the typewriter keyboard, which uses an arrangement of buttons or keys that act as mechanical levers or electronic switches. A keyboard typically has characters engraved or printed on the keys, and each press of a key typically corresponds to a single written symbol. However, producing some symbols requires pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers, or signs (characters), other keys or simultaneous key presses can produce actions or computer commands.

In normal usage, the keyboard is used to type text and numbers into a word processor, text editor, or other program. In a modern computer, the interpretation of keypresses is generally left to the software. A computer keyboard distinguishes each physical key from every other and reports all keypresses to the controlling software. Keyboards are also used for computer gaming, either with regular keyboards or with keyboards that have special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination, which brings up a task window or shuts down the machine.
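A minimal sketch of the division of labour described above: the keyboard reports raw key codes, and the controlling software maps them to written symbols, taking modifiers such as Shift into account. The scan codes and layout below are hypothetical, not any real keyboard standard.

    KEYMAP = {
        (30, False): "a", (30, True): "A",  # (scan code, shift held?)
        (31, False): "s", (31, True): "S",
    }

    def interpret(scan_code, shift_held):
        # The hardware only reports which key was pressed; the software
        # decides which character (if any) that press produces.
        return KEYMAP.get((scan_code, shift_held), "?")

    print(interpret(30, False))  # -> a
    print(interpret(30, True))   # -> A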

o Pointing devices
A pointing device is an input interface (specifically a human interface device) that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a computer. CAD systems and graphical user interfaces (GUIs) allow the user to control and provide data to the computer using physical gestures (point, click, and drag), for example, by moving a hand-held mouse across the surface of the physical desktop and activating switches on the mouse. Movements of the pointing device are echoed on the screen by movements of the pointer (or cursor) and other visual changes.

While the most common pointing device is the mouse, many more devices have been developed.
Based on motion of an object
Mouse

A mouse is a small handheld device pushed over a horizontal surface. A mouse moves the graphical pointer by being slid across a smooth surface. The conventional roller-ball mouse uses a ball to create this action: the ball is in contact with two small shafts that are set at right angles to each other. As the ball moves, these shafts rotate, and the rotation is measured by sensors within the mouse. The distance and direction information from the sensors is then transmitted to the computer, and the computer moves the graphical pointer on the screen by following the movements of the mouse. Another common mouse is the optical mouse. This device is very similar to the conventional mouse but uses visible or infrared light instead of a roller-ball to detect the changes in position.
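A sketch of the tracking loop described above: the mouse's sensors report distance and direction as small deltas, and the computer accumulates them into a pointer position, clamped to the screen. The screen size and values are illustrative only.

    SCREEN_W, SCREEN_H = 1920, 1080
    pointer_x, pointer_y = 960, 540

    def move_pointer(dx, dy):
        # Apply one movement report from the mouse's sensors.
        global pointer_x, pointer_y
        pointer_x = max(0, min(SCREEN_W - 1, pointer_x + dx))
        pointer_y = max(0, min(SCREEN_H - 1, pointer_y + dy))

    move_pointer(15, -4)         # mouse slid right and slightly up
    print(pointer_x, pointer_y)  # -> 975 536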

Trackball

A trackball is a pointing device similar to a mouse which consists of a ball housed in a socket containing sensors to detect rotation of the ball about two axes, similar to an upside-down mouse: as the user rolls the ball with a thumb, fingers, or palm, the cursor on the screen moves. Trackballs are commonly used on CAD workstations for ease of use, where there may be no desk space on which to use a mouse. Some are able to clip onto the side of the keyboard and have buttons with the same functionality as mouse buttons. A trackball comes in various shapes; the three common shapes are a ball, a button, and a square.

Based on touching a surface


Joystick

A joystick is a lever that moves in all directions and controls the movement of a pointer or some other display symbol. A joystick is similar to a mouse, except that with a mouse the cursor stops moving as soon as you stop moving the mouse, whereas with a joystick the pointer continues moving in the direction the joystick is pointing. To stop the pointer, you must return the joystick to its upright position. Most joysticks include two buttons called triggers.

Joysticks are used mostly for computer games, but they are also used occasionally for CAD/CAM systems and remote control of industrial robots.
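The difference between the two behaviours can be sketched in a few lines: a mouse pointer moves only when a movement delta arrives, while a joystick pointer keeps moving on every tick for as long as the stick is deflected. The speed and tick count are illustrative.

    def mouse_update(position, delta):
        # Moves once per report, then stops when the mouse stops.
        return position + delta

    def joystick_update(position, deflection, speed=5):
        # Moves every tick for as long as the stick is held off-centre.
        return position + deflection * speed

    pos = 100
    for _ in range(3):  # stick held to the right for three ticks
        pos = joystick_update(pos, deflection=1)
    print(pos)          # -> 115, and still moving while held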

Light pen

A light pen is an input device that utilizes a light-sensitive detector to select objects on a display screen. A light pen is similar to a mouse, except that with a light pen you can move the pointer and select objects on the display screen by directly pointing to the objects with the pen. Movement of the pen causes the graphical cursor on the screen to move. Applying pressure on the tip causes the same action as a left button click, and keeping the tip pressed for a short duration causes the same action as a right button click. A user can also draw graphics directly on the screen with it.

Touchscreen
A touch screen is a computer display screen that is also an input device. The term generally refers to touch or contact with the display of the device by a finger or hand. The screens are sensitive to pressure; a user interacts with the computer by touching pictures or words on the screen. A touchscreen is a device embedded into the screen of a TV monitor or the LCD screen of a laptop computer. It may consist of an invisible grid of touch-sensitive wires embedded in a crystal glass positioned in front of the real monitor screen, or of an infrared controller inserted into the frame surrounding the monitor screen itself.

Examples of touch screens include a smart board, a microwave, a dishwasher, or an ATM at a bank.
o Data scanning devices
Data scanning devices are input devices used for direct data entry into a computer system from source documents. Some of them are also capable of recognizing marks or characters. Commonly used data scanning devices are discussed below.

Image scanner
An image scanner is an input device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. It is very useful for preserving paper documents in electronic form. Common examples found in offices are variations of the desktop (or flatbed) scanner and hand-held scanners.
Flatbed scanner

A flatbed scanner is like a copier machine: it consists of a glass plate on its top and a lid that covers the glass plate. The documents to be scanned are placed upside down on this glass plate. A light source situated below the glass plate moves horizontally from one end to the other when activated. Flatbed scanners are particularly effective for bound documents.


Hand-held scanners

A handheld or portable optical scanner is an image scanner designed to be moved by hand across the object or document being scanned. Today, a hand scanner is extensively used with a personal computer or a word processor as an image-inputting device for optically reading image data out of a document by being operated by hand. A handheld scanner comes in document or 3D forms. The scanner produces light from green LEDs which highlight and scan the image onto a computer to be viewed. An image scanner can also be 3D, and these scanners are now the most popular form of hand scanners on the market today. These image scanners are able to compensate for the uneven movements of the hand by relying on the placement of reference markers to mark correct positions.

Optical character recognition (OCR) device

Optical character recognition (OCR) is a process of capturing an image of a document and then extracting the text from that image. With the help of OCR software, data placed on a form can be digitized by the OCR device and the digitized data can be interpreted as text by the OCR software. The OCR software converts the bitmap images of characters to equivalent ASCII codes: the scanner first creates the bitmap image of the document, and then the OCR software translates the array of grid points into ASCII text which the computer can interpret as letters, numbers, and special characters.
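In practice this whole pipeline is available off the shelf. Below is a minimal sketch using the open-source pytesseract library (a Python wrapper around the Tesseract OCR engine); it assumes Tesseract and the Pillow imaging library are installed, and the file name is illustrative.

    from PIL import Image
    import pytesseract

    image = Image.open("scanned_page.png")     # the scanner's bitmap image
    text = pytesseract.image_to_string(image)  # bitmap -> plain text
    print(text)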
Examples of using optical character input:

 Converting paper records into electronic files.
 Scanning invoices into spreadsheets.

Optical Mark Reader (OMR)


Optical Mark Recognition (also called Optical Mark Reading or OMR) is the process of capturing human-marked data from document forms such as surveys and tests. An OMR device is designed to read markings that have been placed in specific places on a form or card. The person filling out the form or card will either colour in a series of small squares or perhaps make a cross within the square. The device then scans the card and senses where marks have been placed. Optical mark readers are much faster, more accurate, and easier to operate than manual data entry. Many traditional OMR devices work with a dedicated scanner that shines a beam of light onto the form paper. The contrasting reflectivity at predetermined positions on a page is then used to detect the marked areas, because they reflect less light than the blank areas of the paper.

Lottery tickets are one form of input that an optical mark reader is used to read.

The lottery card has a number of small squares printed on it that you colour in or mark with a dark pen. The input reader senses those dark squares and converts them into the lottery number you have selected.
Examples of optical mark input:

 Lottery tickets
 Official forms

Magnetic Ink Character Recognition (MICR)

The problem with an OCR device is as follows: a standard OCR device relies on high contrast between the paper and the ink itself. This is often not the case! Paper can be grubby or crinkled, or the ink may be smudged. Also, the huge number of fonts that people like to use makes it a very tricky task to read text with 100% accuracy.
The MICR device solves the problem as follows:

 Text is printed with a special magnetic ink so the reader no longer relies on
simple contrast.
 Text is also printed in a special font that makes it much easier for the
machine to tell characters apart.

Magnetic Ink Character Recognition, or MICR, is a character recognition technology adopted mainly by the banking industry to facilitate the processing of cheques. Banks make extensive use of this technology. The combination of magnetic ink and a special font allows thousands of bank cheques to be scanned per hour.
Barcode Reader

A barcode reader, also called a price scanner or point-of-sale (POS) scanner, is a hand-held or stationary input device used to capture and read information contained in a bar code. A barcode reader consists of a scanner, a decoder (either built-in or external), and a cable used to connect the reader with a computer. Because a barcode reader merely captures and translates the barcode into numbers and/or letters, the data must be sent to a computer so that a software application can make sense of it. A barcode reader works by directing a beam of light across the bar code and measuring the amount of light that is reflected back. (The dark bars on a barcode reflect less light than the white spaces between them.) The scanner converts the light energy into electrical energy, which is then converted into data by the decoder and forwarded to a computer.

There are five basic kinds of barcode readers: pen wands, slot scanners, Charge-Coupled Device (CCD) scanners, image scanners, and laser scanners.
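One concrete piece of that translation can be shown in a few lines: after the decoder has turned the bars into digits, the number carries its own check digit. Below is a sketch validating an EAN-13 code (the sample number is a standard known-valid example):

    def ean13_is_valid(code):
        # Digits in odd positions weigh 1 and even positions weigh 3;
        # a valid EAN-13 number's weighted sum is a multiple of 10.
        digits = [int(c) for c in code]
        total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
        return total % 10 == 0

    print(ean13_is_valid("4006381333931"))  # -> True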
[See pg 15-37]
ORGANIZATION OF COMPUTERS
1. Input unit:- Data and instructions must enter the computer system before any computation can be performed. This task is performed by the input unit, which links the external environment with the computer system. All input devices transform the input data into the binary codes which the primary memory of a computer is designed to accept. The following functions are performed by the input unit:-
 It accepts instructions and data from the outside world.
 It converts these instructions and data into a computer-acceptable form.
 It supplies the converted instructions and data to the computer system for further processing.
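A small illustration of the binary conversion just described: each character typed at an input device is represented by a numeric code that the machine stores as bits. Here, the ASCII encoding of the letters C, A, B:

    for ch in "CAB":
        print(ch, "->", ord(ch), "->", format(ord(ch), "08b"))
    # C -> 67 -> 01000011
    # A -> 65 -> 01000001
    # B -> 66 -> 01000010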

2. Output unit:- The job of an output unit is just the reverse of that of an input unit. It supplies the information obtained from the processing unit to the outside world, linking the computer with the external environment. The results produced are in binary form; before being supplied to the outside world, they must be converted into a human-acceptable form. This task is performed by the output unit. The following functions are performed by the output unit:-
 It accepts the results produced by the computer, which are in coded form.
 It converts the coded results into a human-acceptable form.
 It supplies the converted results to the outside world.

3. Storage unit:- The data and instructions entered into the computer system have to be stored inside the computer before the actual processing starts. The storage unit provides space for storing data and instructions, intermediate results, and the final result. The storage unit comprises the following two types:-
 Primary storage: Also known as main memory, it is used to hold program instructions and data, intermediate results, and final results. While the information remains in main memory, the CPU can access it directly at very high speed. Primary storage can hold information only while the computer system is on; as soon as the system is switched off or reset, the information in primary storage disappears. Primary storage has limited capacity because it is very expensive.
 Secondary storage: Also known as auxiliary storage, it is used to take care of the limitations of primary storage, supplementing its limited capacity and volatile nature. Secondary storage is normally used to hold programs, data, and information that the computer system is not currently working on but needs to hold for processing later.
4. Arithmetic Logic Unit (ALU):- The ALU is the place where the actual execution of instructions takes place. All calculations and comparisons are done in the ALU. Data and instructions stored in primary storage are transferred to the ALU as and when needed. ALUs are designed to perform the four basic arithmetic operations (add, subtract, multiply, and divide) and logical operations such as less than, equal to, and greater than.
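A toy model of the operations just listed; a real ALU implements them as hardware circuits operating on binary words, so this Python sketch only mirrors the interface, not the mechanism:

    def alu(op, a, b):
        # The four basic arithmetic operations plus three comparisons.
        operations = {
            "add": lambda: a + b, "subtract": lambda: a - b,
            "multiply": lambda: a * b, "divide": lambda: a / b,
            "less_than": lambda: a < b, "equal_to": lambda: a == b,
            "greater_than": lambda: a > b,
        }
        return operations[op]()

    print(alu("add", 7, 5))           # -> 12
    print(alu("greater_than", 7, 5))  # -> True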

5. Control Unit:- How does the input device know that it is time to feed data into the storage unit? How does the ALU know what should be done with the data once it is received? How is it that only the final results, and not the intermediate results, are sent to the output device? All this is possible because of the control unit of the computer system. It does not perform any actual processing on the data; rather, it acts as a central nervous system for the other components of the system. It manages and coordinates the entire computer system. It obtains instructions from the program stored in main memory, interprets the instructions, and issues signals that cause the other units of the system to execute them.

6. Central Processing Unit (CPU):- The control unit and the arithmetic logic unit are jointly known as the CPU. The CPU is the brain of the computer system. In a human body, all major decisions are taken by the brain and the other parts of the body function as directed by the brain. Similarly, in a computer system all major calculations and comparisons are made inside the CPU, and the CPU is responsible for activating and controlling the operations of the other units of the computer system.
