
OSUN STATE UNIVERSITY

Faculty of Communication Information Technology

Introduction to Computer

CSC 111
Unit 1 Introduction to Computer

Expected Duration: 2hrs

Introduction

Two questions frequently asked about computers are: what are they, and why is there so much interest in them? The answers to these questions partially explain the motivation for this course. There is something about computers that is both fascinating and alarming. They are fascinating when they are used in rocketry and space research and when they enable man to get to the moon and back. Many people think of them as almost human machines with "brains" that allow them to think. On the other hand, we are inclined to be alarmed by their complex mechanisms and the involved scientific principles upon which they are built.

In this lecture you will learn about Information and Communication Technology (ICT). As you know, we are now in the information age and every aspect of our life depends on Information Technology (IT). Therefore, in this course you will learn the definition of a computer system and its historical development. You will also learn about the great computer scientists who contributed to the development of the computer, and about its applications in different areas of human endeavour such as banking, education, science, health, agriculture etc. The course will also take you down memory lane through the computer's developmental stages and characteristics.

Learning Outcome for Unit 1

At the end of this unit, you should be able to:

1.1 Define a computer system

1.2 Explain the historical development of the computer

1.3 Explain the contributions of great computer scientists to the development of computer machines.

1.1 The Meaning and Characteristics of Computer

1.1.1 Definition of Computer

There is no doubt that man is highly gifted and has great capabilities and potential. In fact, man is truly an amazing being and a master of inventions. He constantly uses the power of his imagination and inventiveness to solve problems in his environment. A lot of technologies have been developed, such as the television, vehicle, camera, radio etc., and they are all human inventions. The computer is not an exception; it is one of the inventions of this amazing being. So, what is a computer? A computer is an electronic machine used in solving problems. This definition is just a simple one and does not say enough about the computer, because not all electronic machines are computers. You should also note that a computer is not just physical equipment that you can see or touch; it is also made up of parts you cannot easily see, like programs. Now, in a more encompassing manner, let us define the computer.

A computer, therefore, is an electronic device (calculating machine) that is capable of accepting data (Input), processing the data logically or arithmetically using some sets of instructions (Processing) and releasing results (Output).

In another way, a computer can be defined as an electronic machine that solves problems by applying prescribed instructions, with little or no intervention, to the data presented to it. To the present-day generation, the computer has different meanings to different groups of people. The use to which it is put determines the understanding attached to it, and it is common for different groups to see it differently because of differences in usage. As you study along, bear in mind that a computer is not just a single machine, but a collection of interrelated parts which are able to transmit information to one another. See the diagram below on the functional parts of a computer system, showing the system unit and other peripherals.

Fig 1.1: Functional Parts of Computer System.

1.2 Historical Background of Computer

1.2.1 ABACUS

Historically, computing may be considered to have begun with the ABACUS, which originated about 5,000 years ago. During the Middle Ages, the abacus was used throughout the European and Arab worlds as well as in Asia. The design is simply a wooden rack holding parallel wires (rods) on which beads are strung. Calculations are performed manually by sliding the counters (beads or blocks) along the wires. The counters are divided into two sections by means of a bar perpendicular to the rods. One section has two counters, representing 0 and 5 depending upon their position along the rod. The second section has four or five counters, representing units. Each rod represents a significant digit, with the least significant digit on the right. Another computing instrument, the ASTROLABE, was also in use about 2,000 years ago for navigation.

Fig 1.2: Abacus

1.2.2 BLAISE PASCAL

Going by the popular saying that "necessity is the mother of invention", a young man named Blaise Pascal invented the first calculating machine at the age of 19, during the 17th century, in 1642 to be precise. His invention was in response to his desire to assist his father in his cumbersome business accounting work, which involved a lot of calculations. Pascal's machine was able to carry out only addition and subtraction of numbers. It utilized a mechanical gear system to add and subtract, with as many as eight columns of digits.


Between 1663 and 1666 Sir Samuel Morland in England, unaware of Pascal's machine, invented three machines, of which only one was used for British currency addition (pounds and shillings); the other two machines were devices which provided access to pre-calculated tables.

1.2.3 GOTTFRIED WILHELM LEIBNITZ

In 1694 the German mathematician Leibnitz developed a more advanced mechanical calculator. His calculator, called the Stepped Reckoner, could also multiply, divide and extract square roots. This calculator's first working model was completed 100 years later, in 1794, and exhibited at the Royal Society in London.

1.2.4 CHARLES BABBAGE

After Leibniz’s machine proved unreliable, by the 1830’s Charles Babbage an English inventor

developed the first automatic digital computer called Analytical Engine. The new device was

able to combine arithmetic process of addition, subtraction, multiplication and division with

decisions based on its own computations. Most of the basic elements of the modern digital

computer was found in Babbage’s engine which includes punched–card, input/output medium,

arithmetic unit, memory for storage of numbers and sequential control. Charles Babbage

invention marked the beginning of modern computer architectural design. Considering this great

achievement, he was referred to as the father of modern computers. Although he was not able to

implement his design because of level of technology as at that time.

1.2.5 GEORGE BOOLE

The essays written by Boole during the mid-19th century were of great significance. He called attention to the analogy between the symbols of algebra and those of logic as used to represent logical forms. Boole's system, with its binary logical operators (e.g. AND, OR and NOT), became the basis of what is now known as Boolean algebra, on which electronic computer switching theory and procedures are based.
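To make Boole's three operators concrete, here is a minimal illustrative sketch (written in Python purely for illustration; it is not part of the original text). It prints the truth table for AND, OR and NOT, the same operators on which computer switching circuits are built.

    # Truth table for Boole's basic logical operators.
    for a in (False, True):
        for b in (False, True):
            print(a, b, "| a AND b:", a and b, "| a OR b:", a or b, "| NOT a:", not a)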

1.2.6 HERMAN HOLLERITH

The introduction of punched cards in the 1880s by Herman Hollerith, a U.S. statistician who worked on census returns, was another major step in computer development. He recognised that a pattern of holes perforated in a card could be sensed, sorted and manipulated electrically by a machine specially designed to handle the numerical data represented by the holes. For the U.S. Census of 1890, Hollerith had invented a tabulating system that automated the census count. Hollerith's system accomplished in one year and seven months what it would have taken a hundred clerks seven years and eleven months to do. Hollerith left the census bureau in 1896 to form the Tabulating Machine Company, which eventually became the International Business Machine Corporation (IBM), today one of the largest computer manufacturers in the world.

Fig 1.3: punch card


1.2.7 AUTOMATIC CALCULATOR

In 1939 John V. Atanasoff, a U.S. mathematician and physicist, built what some consider to be a prototype of an electronic digital computer. That same year Howard Aiken of Harvard University, in association with engineers of the International Business Machine Corporation, began work on a fully automatic large-scale calculator using standard business machine components. By 1944 the first such calculator, called the Automatic Sequence Controlled Calculator and commonly known as the Mark-1, was completed. Later the Mark-2 and Mark-3 were built along similar lines.

Fig 1.4: Automatic calculator

Another machine, called ENIAC (Electronic Numerical Integrator and Calculator), which consisted of switches and interconnecting wires, was completed in the mid-1940s; it was built mainly for calculating trajectories but could also be used for other computations. The use of paper tape for data entry into these machines was slow and did not allow them to operate at full speed. Similarly, there was a need to make programs available internally along with the data, to take advantage of the high speed inherent in electronic systems. A large memory was designed at Cambridge by M. V. Wilkes; his machine, called EDSAC (Electronic Delay Storage Automatic Calculator), was used for the training of a whole generation of computer-oriented mathematicians at Cambridge. Between 1945 and 1950, EDVAC (Electronic Discrete Variable Automatic Computer) was designed; this machine emphasized the idea of the stored program. By 1948 a prototype machine at Manchester was completed. Later, companies like IBM, Remington Rand, ICL and many others joined in producing computers in commercial quantities.

Fig 1.6: ENIAC

Exercise 1

i. Can you highlight the contributions of the great computer scientists you have read about?

ii. Go over unit 1.2.

iii. Do you know why Charles Babbage's contribution was unique? (He proposed the architecture of the modern computer.)

Significance of Computer
Scientific and military applications were the first areas in which the computer was put to use to aid problem-solving calculations, especially during war. Presently it is also widely used for planning and as an aid to business. Because of its wide area of application, it can be said to be a general-purpose machine. It performs its data-processing operations accurately at high speed with little or no human intervention when loaded with different packages or programmes. It is also called an automatic device, and it has the ability to perform calculations, sort files and edit information.

It must be noted that its capability to solve any given problem is limited by the instructions supplied. A problem that has no solution from the human point of view also has no solution in the computer realm. Hence it can be said to be an extension of the human mind, though in speed and accuracy it performs better. The view that it has a mind is not true, because it has no mind of its own, it cannot start itself, and its ability to solve problems is limited by the logic or steps supplied by the programmer. There is also a high degree of reliability in its processing operations and in its performance of repetitive operations. It stores vast quantities of information and retrieves any given volume within a very short time. One major advantage is the ability to take some decisions by altering the flow of instructions.

Due to the speed and accuracy of processing, computers are fast becoming more popular and there is an increase in their demand the world over. The computer currently has a profound influence on science, business, government, industry and education. Science and mathematical research have been vastly accelerated by the use of the computer. In business and government, management practices have been revolutionized by the computer, partly because of its ability to process data and present it in a more meaningful form. Development in the computer industry is so fast that the latest developments today may be outdated within two years. This continuous revolution and development in the computer industry makes it a challenging area to be explored.

Today’s computers come in a variety of shapes, sizes and costs. Larger general-purpose

computers are used by many large corporations, universities, hospitals and government agencies

to carry out sophisticated scientific and business calculations. These computers are generally

referred to as mainframes. They are very expensive (some cost millions of dollars), and they

require a carefully controlled environment (temperature, humidity, etc.). As a rule, they are not physically accessible to the scientists, engineers and corporate accountants who use them. Mainframes have been available since the early 1950s, but very few people had any opportunity to use them, particularly in the earlier years. Thus it is not surprising that computers were viewed as mysterious and with some suspicion by the general public.

The late 1960s and early 1970s saw the development of smaller, less expensive minicomputers. Many of these machines offered the performance of earlier mainframes at a fraction of the cost. Many business and educational institutions that could not afford mainframes acquired minicomputers as they became increasingly available. By the mid-1970s, advances in integrated circuit technology (silicon "chips") resulted in the development of still smaller and less expensive computers called microcomputers. These machines are built entirely of integrated circuits and are therefore not much larger (or more expensive) than a conventional office typewriter. Yet they can be used for a wide variety of personal, educational, commercial and technical applications. Their use tends to complement rather than replace the use of mainframes. In fact, many large organizations utilize microcomputers as terminals or workstations that are connected to a mainframe computer (or series of mainframes) through a communication network. Of particular interest is the development of the personal computer, a small, inexpensive microcomputer that is intended to be used by only one person at a time. Many of these machines approach small minicomputers in power. Moreover, their performance continues to improve dramatically as their cost continues to drop. Personal computers are now used in many schools and small businesses, and it appears likely that they will soon become a common household item.

Summary of Unit 1

In Unit 1, you have learned that:

1. A computer is an electronic device (calculating machine) that is capable of accepting data (Input), processing the data logically or arithmetically using some sets of instructions (Processing) and releasing results (Output).

2. The computer was developed through the efforts and contributions of great computer scientists.

Self-Assessment Questions (SAQs) for Unit 1

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 1.1 (tests learning outcome 1.1)

How can we correctly define a computer?

SAQ 1.2 (tests learning outcome 1.2)

Briefly discuss the contributions of Charles Babbage and Blaise Pascal.

SAQ 1.3 (tests learning outcome 1.3)


Name the people who designed the following machines

(i) Difference engine

(ii) Pascaline

Notes on the Self-Assessment Questions (SAQs) for Unit 1

SAQ 1.1: A computer is an electronic device (calculating machine) that is capable of accepting data (Input), processing the data logically or arithmetically using some sets of instructions (Processing) and releasing results (Output).

SAQ 1.2: Charles Babbage proposed the architecture of the modern computer, while Blaise Pascal developed an adding machine also known as the Pascaline.

Difference engine - Charles Babbage

Pascaline - Blaise Pascal


UNIT 2: GENERATIONS OF COMPUTER

Expected Duration: 2hrs

Introduction

In Unit 1 you learnt the definition and historical development of computers. This unit will take you a step further into the developmental stages that the computer has gone through. The computer has gone through stages of design: from big computers to small computers, from computers that generate a lot of heat to those that generate less heat, and from computers with low processing power to those with high processing power. These and some other design features are the characteristics of the computer generations that you will learn about in this unit.

Learning Outcome for Unit 2

At the end of this unit, you should be able to:

2.1 List five (5) generations of computer

2.2 Explain the generations of computer

2.3 State the distinctive characteristics of each generation of computer

2.1 Generations of Computer

2.1.1 Overview

The history of computer development is often described in terms of the different generations of computing devices. Each generation of computers is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, more efficient and more reliable devices. Since the development of the Mark-1, the digital computer has evolved at an extremely rapid pace. The succession of advances in computer hardware, most notably in logic circuitry and storage systems, is generally discussed in terms of the concept of generations. Each stage of development is associated with one sort of technological innovation or another, and each generation is usually better than the previous one, making possible certain operations which were not possible with the earlier generation. Generally, the five generations of computers are characterized by the technology through which electrical current flows in the processing mechanism: vacuum tubes in the first generation, transistors in the second, integrated circuits in the third, microprocessor chips in the fourth, while the fifth generation unveiled smart devices capable of artificial intelligence. A summary of the development of the five generations of computers, from the early days to the present, is given in Table 2.1.

Table 2.1: Analysis of the five generations of computers

Generation | Architecture            | Speed    | Memory Capacity | Heat Generated | Power Consumption | Physical Size | Cost
First      | Vacuum tubes            | Very low | Very small      | Large          | High              | Very big      | High
Second     | Transistors             | Low      | Small           | Large          | Low               | Small         | High
Third      | Integrated circuits     | Higher   | Large           | Low            | Low               | Very small    | Low
Fourth     | Microprocessor chips    | Higher   | Very large      | Low            | Low               | Very small    | Relatively low
Fifth      | Artificial intelligence | Higher   | Very large      | Low            | Low               | Very small    | Relatively low


2.1.2 FIRST GENERATION: VACUUM TUBES (1940-1956)

The first generation of modern-day computers was ushered in with ENIAC (Electronic Numerical Integrator and Calculator), built by J. P. Eckert and John W. Mauchly in 1946. It was the first all-purpose, all-electronic digital computer. Figures 2.1 and 2.2 show, respectively, the vacuum tubes used for circuitry and the magnetic drums and mercury delay lines used for memory in the first computers, which used electron tubes instead of relays as their active logic elements. There was a substantial increase in computational speed due to the use of electron tubes: this computer was more than 1000 times faster than its electromechanical predecessors and could execute an average of 5,000 basic arithmetic operations per second.

Figure 2.1: Vacuum Tubes5 Figure 2.2: Magnetic Tapes5

In 1945 the Hungarian-American mathematician John von Neumann devised a method of converting the ENIAC concept of an externally programmed machine into that of a stored-program machine. This stored-program concept led to the development of the self-modifying computer. Another notable first-generation computer, the UNIVAC-1 (Universal Automatic Computer), built in 1951, was the first commercial computer, delivered to a business client, the U.S. Census Bureau. The first electronic digital computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. UNIVAC-I was the first computer to handle both numerical and alphabetical information with equal ease, and it assailed the principle of the separation of input/output from computation.

The main problems encountered during the era of first-generation computers were that they occupied a large amount of space and used large circuits, as shown in figure 2.3. Generally, they were slow in operation and generated a lot of heat, with a problem of unreliability that was often the cause of malfunctions compared with later generations. They were very expensive to operate, and the period of this generation spanned the mid-forties to the mid-fifties.

Figure 2.3: First-generation Computer4

Examples of the first generation of computers

i. ENIAC (1946)

ii. EDSAC (1949)

iii. EDVAC (1950)

iv. UNIVAC I (1951)


Characteristics of the first generation of computers

i. Used vacuum tubes for circuitry

ii. Electron emitting metal in vacuum tubes burned out easily

iii. Used magnetic drums for memory

iv. Were huge, slow, expensive, and many times undependable

v. Generated a lot of heat

vi. Solved one problem at a time

vii. Used input based on punched cards

viii. Output was displayed on printouts

ix. Used magnetic tapes

x. Used machine language

xi. Had limited primary memory

Advantages of the first generation of computers

i. Computers could calculate in milliseconds.

Disadvantages of the first generation of computers

i. Very big; weight was about 30 tons

ii. Stored only a small amount of information

iii. Required a large cooling system

iv. Very low work efficiency

v. Limited programming capabilities

vi. Large amount of energy consumption

vii. Not reliable; constant maintenance was required

2.1.3 SECOND GENERATION: TRANSISTORS (1956-1963)

Figure 2.4 shows the second generation of computers, which replaced vacuum tubes with a semiconductor device known as the TRANSISTOR, invented in 1947, but still relied on punched cards for input and printouts for output.

Figure 2.4: Second generation of computer4

The transistor did not see widespread use until the late 1950s, when, after a series of developments, it became a viable alternative to the vacuum tube. Transistors helped in building a series of processors operating in the microsecond speed range with a lower level of generated heat, as shown in figure 2.5. By using transistors in control, arithmetic and logic circuits, along with improved magnetic core memory, manufacturers were able to produce computers that were more efficient, smaller, faster and cheaper than their predecessors. Figure 2.5 showcases a typical example of a more efficient and smaller second-generation computer.
Figure 2.5: Improved Second generation of computer4

This generation of computers moved from cryptic binary machine language to symbolic languages, which allowed programmers to specify instructions in words; high-level programming languages like COBOL and FORTRAN were developed at this time in early versions. The memory was also upgraded from magnetic drum to magnetic core technology. The first products of the advanced second generation of computers were developed for the atomic energy industry. The transistor was a vast improvement over the vacuum tube, though transistors still subjected the computer to damage because of the great deal of heat they generated. The small size of the transistor, as presented in figure 2.6, its greater reliability and its comparatively low power consumption made the computers of the second generation far superior to the vacuum tube computers of the first generation. This generation was between the late fifties and early sixties.


Figure 2.6: Transistors5

Examples of the second generation of computers

i. IBM-7000

ii. CDC 3000 series

iii. UNIVAC 1107

iv. IBM-7094

v. MARK III

vi. Honeywell 400

Characteristics of the second generation of computers

i. Used transistors

ii. Faster and more reliable

iii. Slightly smaller and cheaper

iv. Generated heat

v. Used punch cards and printouts for input/output

vi. Allowed assembly and high-level languages

vii. Stored data in magnetic media

viii. Costly and needed air conditioning

ix. Introduced assembly language and operating system software

Advantages of the second generation of computers

i. It reduced the size of a computer


ii. Used less energy and did not produce much heat

iii. Assembly language and punch cards were used for input.

iv. Low cost

v. Calculated data in microseconds

vi. Portability

Disadvantages of the second generation of computers

i. A cooling system was required

ii. Constant maintenance was required.

iii. Only used for specific purposes

2.1.4 THIRD GENERATION: INTEGRATED CIRCUITS (1964-1971)

During the late 1960s and 1970s, transistors were miniaturized and placed on silicon chips, called semiconductors, to produce the integrated circuit, a solid-state device. A solid-state device (integrated circuit) consists of hundreds of transistors, diodes and resistors on a tiny silicon chip, and its arrival marked the beginning of the third generation of computers. Thereafter, improved integrated circuits led to Large Scale Integration (LSI), which made it possible to pack thousands of transistors and related devices on a single integrated circuit, as shown in figure 2.7. This also led to the invention of the microprocessor, which contains all the arithmetic, logic and control circuitry, called the Central Processing Unit (CPU). The CPU is the part of the digital computer that interprets and executes instructions. The development of the CPU into a single integrated circuit led to the production of microcomputers.

Figure 2.7: Integrated Circuits. 5

The typical structure of the third generation is shown in figure 2.8. This generation introduced an important technological innovation to increase the speed and efficiency of computers: instead of punched cards and printouts, users interacted directly with computers through keyboards and monitors that interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory.

Figure 2.8: Third generation computer7

The construction of mainframe (large-scale) computers of higher operating speed, capacity and reliability at substantially lower cost was achieved through integrated circuitry innovation, which further helped engineers to design minicomputers. The third generation of computers greatly helped society by making computers more accessible, being smaller and cheaper than their predecessors.

Examples of third-generation computers

i. IBM-360

ii. Programmed Data Processor (PDP)

iii. IBM-370

Characteristics of the third generation of computers

i. Used integrated circuits (ICs)

ii. Used parallel processing

iii. Slightly smaller, cheaper and faster

iv. Used motherboards

v. Data was input using keyboards

vi. Output was visualized on the monitors

vii. Used operating systems

viii. Allowed multitasking

ix. Encouraged simplified programming languages

Advantages of the third generation of computers

i. ICs improve the performance of the computer

ii. It has a big storage capacity


iii. Mouse and keyboard are used for input

iv. They used an operating system

v. It encourages the concept of time-sharing and multiple programming

vi. It reduces the computational time from microseconds to nanoseconds

Disadvantages of the third generation of computers

i. IC chips are difficult to maintain

ii. Highly sophisticated technology was required for the manufacture of IC chips

iii. Air conditioning is required

2.1.5 FOURTH GENERATION: MICROPROCESSOR CHIPS (1971-PRESENT)

The development of the microprocessor led to the fourth generation of computers, in which thousands of integrated circuits were built and packed into a single silicon chip. The first microprocessor, the Intel 4004 chip, was developed in 1971; it contained all the components of the computer, ranging from the central processing unit and memory to input/output controls, as indicated in figure 2.9. A microprocessor is a central processing unit fabricated on a chip.

Figure 2.9: Microprocessor chip5


The set of computers produced in the 1980s is referred to as the fourth generation of computers. In 1981, IBM introduced its first computer for the home user, while Apple introduced the Macintosh in 1984. Microprocessors also moved beyond desktop computers into many areas of life, as more and more everyday products began to use them, an example of this migration being described in figure 2.10. The main feature attributed to this generation is the availability of VERY LARGE-SCALE INTEGRATION (VLSI), which vastly increased the circuit density of the microprocessor, memory and support chips. Note that while a large-scale integrated circuit contains thousands of components on a silicon chip less than 0.2 inch (five mm) square, a very large-scale integrated circuit holds hundreds of thousands of electronic components within the same amount of space.

Figure 2.10: fourth-generation computer4

There is not much difference between the fourth generation and the third generation of computers; however, this generation witnessed the flooding of the market with a wide variety of software tools and application packages, like database management systems, word processing packages, spreadsheet packages, game packages, etc., and the enhancement of networking capabilities in the area of the LAN (Local Area Network). As this generation of computers became more powerful, the use of the Internet and of GUIs also became friendly and useful. The fourth generation of mainframes and supercomputers evolved into powerful systems. This generation also popularized open-source and free software; examples are the Ubuntu OS, the Mozilla Firefox browser, Open Office, MySQL and the VLC media player.

Examples of the fourth generation of computers

i. Desktops

ii. All-in-one

iii. Laptops

iv. Workstations

v. Nettops

vi. Tablets

vii. Smartphones

Characteristics of the fourth generation of computers

i. Used central processing units (CPUs)

ii. Much smaller in size

iii. Used a mouse

iv. Used in networks

v. Were cheap and very fast

vi. Had GUI

Advantages of the fourth generation of computers

i. Fastest in computation and reduced in size

ii. Heat generated is negligible


iii. Less maintenance is required

iv. All types of high-level languages can be used in this type of computer

Disadvantages of the fourth generation of computers

i. The design of microprocessors is very complex

ii. Air conditioning is required in many cases due to the presence of ICs.

iii. Advanced technology is required to make the ICs.

2.1.6 FIFTH GENERATION: ARTIFICIAL INTELLIGENCE (PRESENT AND

BEYOND)

The fifth generation of computing devices is based on artificial intelligence and is still in development, though some applications, like voice recognition, are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation, molecular computing and nanotechnology will radically change the face of computers in the years to come. The goal of the fifth generation of computers is to develop devices that respond to natural language input and are capable of learning, self-organization, reasoning, recognizing relationships and improving their performance based on experience. Manufacturers are also expected to produce voice-input devices that are capable of handling connected speech with larger vocabularies, as described in figure 2.11.


Figure 2.11: Fifth generation of computer4

Note that the fifth generation of computers is yet to be fully in the market because of the features expected. There is hope for the simultaneous execution of several separate operations (e.g. memory, logic and control) employing numerous integrated circuits, in which millions of CPU, memory and input/output circuits are combined on a single chip.

Examples of the fifth generation of computers

i. Virtual personal assistants

ii. Smart cars

iii. Computer-Aided Diagnosis

iv. News generation tools

v. Laptop

vi. NoteBook

vii. UltraBook

viii. Chromebook

Advantages of the fifth generation of computers

i. It is more reliable and works faster


ii. It is available in different sizes and unique features

iii. It provides computers with more user-friendly interfaces with multimedia features

Disadvantages of the fifth generation of computers

i. They need very low-level languages

ii. They may make the human brain dull

iii. It is busy doing yoyo things

Summary of Unit 2

In Unit 2, you have learned that:

1. Computer has generations and that each generation is a reflection of its developmental

stages

2. Each generation of computers has distinctive features: size, heat generation, memory size,

processing speed and the technology used in building the computer of that generation.

Self-Assessment Questions (SAQs) for Unit 2

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 2.1 (tests learning outcome 2.1)

List the five (5) generations of computer you know

SAQ 2.2 (tests learning outcome 2.2)

Explain the features of the fifth generation of computers

SAQ 2.3 (tests learning outcome 2.3)

State the unique characteristics of each computer generation.


Notes on the Self-Assessment Questions (SAQs) for Unit 2

SAQ 2.1: First generation, second generation, third generation, fourth generation and fifth

generation.

SAQ 2.2: See unit 2.1.6

SAQ 2.3: First Generation - Vacuum tube.

Second Generation – Transistor

Third Generation - IC (Integrated circuit)

Fourth Generation – Microprocessor chip

Fifth Generation – AI (Artificial intelligence)

References

1. Denning, Peter J. (2000). "Computer Science: The Discipline" (PDF). Encyclopedia of

Computer Science. Archived from the original (PDF) on May 25, 2006.

2. Keates, Fiona (2012). "A Brief History of Computing" The Repository. The Royal

Society.

3. Miss N. Nembhard (2004). ‘Fundamentals of Information in Technology”.

4. Shapiro, Ehud Y. "The fifth-generation project—a trip report." Communications of the

ACM 26.9 (1983): 637-641.

5. Shapiro, E. (1983). A subset of Concurrent Prolog and its interpreter, ICOT Technical Report TR-003, Institute for New Generation Computer Technology, Tokyo, 1983. Also in Concurrent Prolog: Collected Papers, E. Shapiro (ed.), MIT Press, 1987, Chapter 2; "Generation of computer", www.computerhope.com. Retrieved April 11, 2020.

6. Roberts, Eric S. (1995). "The Art and Science of C". Addison-Wesley Publishing Company, Reading, 1995.

7. Shapiro, Ehud Y. (1983). "The fifth-generation project—a trip report." Communications

of the ACM 26.9 (1983): 637-641. http://www.rogerclarke.com/SOS/SwareGenns.html

8. Van Emden, Maarten H., and Robert A. Kowalski. (1976). "The semantics of predicate

logic as a programming language." Journal of the ACM 23.4 (1976): 733-742.

9. Carl Hewitt (2009). Inconsistency Robustness in Logic Programming ArXiv 2009.

10. Hendler, James (2008). "Avoiding Another AI Winter" (PDF). IEEE Intelligent Systems.

23 (2): 2–4. doi:10.1109/MIS.2008.20. Archived from the original (PDF) on 12 February

2012.
UNIT 3 COMPUTER SYSTEM

Expected Duration: 2hrs

Introduction

Now that you have learned the history and generations of the computer, it is time to study in detail what a computer really looks like: its characteristics, uses, application areas and benefits. In this unit you will also learn the types of computer and the classification of computers based on size, purpose and speed.

Learning Outcome for Unit 3

At the end of this unit, you should be able to:

3.1 Explain computer system

3.2 State five (5) characteristics of computer

3.3 State areas of application of the computer

3.4 State three (3) types of computer

3.5 Classify computers by size, by purpose and by speed

3.1 COMPUTER AS A SYSTEM.

Before we discuss the computer in detail, there is a need for us to learn what a system is. We often speak of a water system, the digestive system in biology, a computer system and other types of system. What then is a system? A system is a collection of interrelated components interacting together to achieve a goal. Most systems have input, process and output stages, as illustrated in the diagram below.
INPUT → PROCESS → OUTPUT

Fig 3.1: Input-Process-Output system

Input: these are the elements that enter the system for processing

Processing: this organises or arranges the input into an output

Output: this is the result obtained from the processing activities

To facilitate understanding of input-process-output processing in a system, we shall use the digestive system and the human body as illustrations.

3.1.1 Digestion of food as a System

Digestion of food is the taking in of food through the mouth, the breaking down of the food into soluble forms and wastes by body enzymes, and the releasing of the wastes in the form of urine and excreta. The I-P-O phases in this system are the following:

Input: food

Process: breaking down the food

Output: waste in the form of urine and excreta

3.1.2 Human body as a system

You can take a look at figure 3.2 below. It shows different parts of the human body. The parts can be likened to an I-P-O system; that is, it has input, processing and output components.

Fig 3.2: Human body system

Input:

- Eyes: for sensing object

- Ears: for hearing sound

- Mouth : for drinking and eating

- Nose : for breathing in oxygen

Processing

- Brain : for thinking, memorising and controlling the activities of the body

Output

- Hands: for writing information

- Mouth for speech

- Nose: for breathing out carbon dioxide

3.1.3 Computer as a system

The definition of the computer you read in Unit 1 of this module shows that the computer is an I-P-O system. From the definition, the computer accepts data (input), processes the data and gives out results (output). The I-P-O is represented in figure 3.3 below.


INPUT (Data) → PROCESS (Execution) → OUTPUT (Results)

Figure 3.3: Input-Process-Output phase in the computer

Consider this scenario: suppose the numbers 10 and 15 are supplied to a computer with an instruction to add the two numbers. Can you show the I-P-O phases of how the computer will carry out this scenario? The computer will add the two numbers according to the given instruction and generate the required result, which is 25. The I-P-O phases of the addition of the two numbers are represented in figure 3.4 below.

INPUT: 10 and 15 → PROCESS: add 10 to 15 → OUTPUT: 25

Figure 3.4: I-P-O phase
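As a further illustration of the I-P-O phases above, the short sketch below (written in Python purely for illustration; it is not part of the original text) accepts the two numbers as input, processes them by adding, and releases the result as output.

    # Input phase: the data supplied to the computer.
    a = 10
    b = 15
    # Process phase: carry out the instruction "add the two numbers".
    result = a + b
    # Output phase: release the result.
    print("Result:", result)   # prints 25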

3.2 COMPUTER SYSTEM

In our earlier study in Unit 1 we gave a simple definition of the computer. You have also studied further that the computer is a system. What then is a computer system? A computer system is not a single machine; it consists of a group of electronic components, like the monitor, system unit, keyboard, mouse, printer and other components, working together to achieve a particular goal. You can see figure 1.1 for the different components of a computer system.

3.2.1 Characteristics of a Computer System

Computers have some characteristics or features which distinguish them from other machines. These characteristics constitute the advantages of the computer. Below are some of these characteristics.

Speed: Computers process information at a very fast rate; the speed of processing is measured in nanoseconds (billionths of a second), and some operate even faster, in picoseconds (trillionths of a second). Some processing speeds are of the order of gigaflops.

Access: Computers are used mainly for information processing, but even more important is access to the stored, processed information. They offer the advantage of fast and easy access to stored information. The speed of retrieval, however, depends on the capacity of the system (computer) and the peripheral devices used.

Extraordinary tasks: Computers have the ability to perform tasks that would otherwise not be feasible or cost-effective using conventional means. A good example of this is the ability to solve the tedious and long equations of a space programme.

Security: Computers are provided with in-built security codes that make it very difficult for outsiders to manipulate the data or records in the computer's files. This security helps to check unauthorized access by anyone other than the user.


Storage Space: Every piece of information that is stored within the computer's memory is encoded as some unique combination of zeros and ones. These zeros and ones are called bits (binary digits). Each bit is represented by an electronic device that is, in some sense, either "off" (zero) or "on" (one). Most small computers have memories that are organized into 8-bit multiples called bytes. Normally 1 byte represents a character (i.e., a letter, a single digit or a punctuation symbol). An instruction may occupy 1, 2 or 3 bytes, and a single numerical quantity may occupy anywhere from 2 to 8 bytes, depending on the precision and type of number. The size of a computer's memory is usually expressed as some multiple of 2^10 = 1024 bytes; this is referred to as 1K. Small computers have memories whose sizes typically range from 64K to 1024K (1 mega) bytes.

Take a look at this explanation: if the memory of a small computer is, say, 64K bytes, then as many as 64 * 1024 = 65,536 characters and/or instructions can be stored in the computer's memory. If the entire memory is used to represent character data, then about 800 names and addresses can be stored within the computer at any one time (assuming 80 characters for each name and address). If the memory is used to represent numerical data rather than names and addresses, then about 16,000 individual quantities can be stored at any one time (assuming 4 bytes per number). Large computers have memories that are organized into words rather than bytes. Each word consists of a relatively large number of bits, typically 32 or 36. This allows one numerical quantity, or a small group of characters (typically four or five), to be represented within a single word of memory. Large computer memories are usually expressed as some multiple of 1K (i.e. 2^10 = 1024) words.
A large computer may have several million words of memory. Some memories can store 16,000 or 64,000 bits (pieces of information), and there are others that can store information in the region of gigabytes (1 gigabyte = 1024 * 1024 * 1024 = 1,073,741,824 bytes).

Now can you calculate the capacity of this computer in terms of data storage?

o If the memory of a large general-purpose computer is 2048K words, can you determine the storage capacity of the computer?

• This is equivalent to 2048 * 1024 = 2,097,152 words. If the entire memory is used to represent numerical data, then roughly 2 million numbers can be stored within the computer at any one time. If the memory is used to represent character rather than numerical data, then about 8 million characters can be stored at any one time. This is more than enough memory to store the contents of an entire book.
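The arithmetic above can be checked with a few lines of Python (a sketch added for illustration, using the same assumptions as the text: 80 characters per name and address, 4 bytes per number and about 4 characters per word).

    K = 1024

    # Small computer: 64K bytes of memory.
    small_bytes = 64 * K              # 65,536 bytes
    print(small_bytes // 80)          # about 800 names and addresses (80 characters each)
    print(small_bytes // 4)           # about 16,000 numbers (4 bytes each)

    # Large computer: 2048K words of memory.
    large_words = 2048 * K            # 2,097,152 words
    print(large_words)                # roughly 2 million numbers (one per word)
    print(large_words * 4)            # about 8 million characters (4 characters per word)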

Most computers also employ auxiliary memory devices (e.g. magnetic tapes, disks, solid-state memory devices) in addition to their primary memories. These devices typically range in capacity from a few hundred thousand bytes (for a small computer) to several million words (for a larger computer). Moreover, they allow for the permanent recording of information, since they can be physically mounted on or dismounted from the computer and stored when not in use. However, the access time (i.e. the time required to store or retrieve information) is considerably greater for these auxiliary devices than for primary memory.

SPEED AND RELIABILITY: Because of its extremely high speed, a computer can carry out in just a few minutes calculations that would require months, perhaps even years, if carried out by hand. Simple tasks, such as adding two numbers, can be carried out in fractions of a microsecond (1 us = 10^-6 s). On a more practical level, the end-of-semester grades for all students in a large university can typically be processed in just a few minutes of computer time. For example, it was estimated that Hollerith's system accomplished in one year and seven months what it would have taken a hundred clerks seven years and eleven months to do.

This very high speed is accompanied by an equally high level of reliability. Thus a computer practically never makes a mistake of its own accord. Highly publicized "computer errors", such as a person receiving an outrageous monthly bill, are usually the result of a programming error or an error in data transmission rather than an error caused by the computer itself. In computer systems, the output can be described as 100% reliable if the input is correct. Hence the saying garbage-in, garbage-out (what you send in is what you must expect).

Activity 3.1

Take a moment to reflect on what you have read so far. Based on your learning experience, and knowing that computers have many characteristics which make them very useful for daily activities, can you mention some of the advantages that individuals, organisations and even government can derive from using computers?

Activity 3.1 Feedback:

The advantages of the computer range from speed, accuracy and storage capacity to integrity and security. Read more in unit 3.2.1.

3.3. TYPES AND CLASSIFICATIONS OF COMPUTER

3.3.1 Types Of Computers

There are three basic types of computers, namely:

(1) Analogue Computers

(2) Digital Computers

(3) Hybrid Computers

Analogue computers operate on data represented by variable physical quantities, such as voltages, which are measured continuously. A digital computer, on the other hand, works with numbers, words and symbols expressed as digits, which it manipulates and counts discretely.

ANALOGUE COMPUTERS

An analogue device is defined as one that operates on the principle of similarity, in proportional relation to the process being modelled, with values kept within a specified range. A computer of this type solves problems by operating on continuous variables rather than on the discontinuous or discrete units that digital computers use. Analogue computers are similar to a voltmeter in the way they measure values. They translate various physical conditions, such as flow, temperature, pressure, mechanical motion and angular position, into mechanical or electrical analogue values. These types of computer use various kinds of amplifiers to perform arithmetic operations such as summation and multiplication.

Fig 3.5: Analogue computer


DIGITAL COMPUTERS

A digital computer processes all kinds of data in discrete form, i.e. as numbers expressed directly in the two digits 0 and 1 of the binary code. Using various techniques, these two binary digits, called BITs, can be made to represent numbers, letters and symbols. Binary 0110, for example, represents the decimal number 6. By operating in binary codes, a computer is able to indicate two possible states or conditions: the state is said to be either ON or OFF, where ON stands for 1 and OFF stands for 0. Groups of binary digits are called BYTES or WORDS.
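A small Python sketch (added here only for illustration) shows how a bit pattern such as 0110 corresponds to a decimal value, and how a character is normally stored as one byte.

    # The bit pattern 0110 represents the decimal number 6.
    print(int("0110", 2))            # 6
    # Conversely, decimal 6 written as four binary digits:
    print(format(6, "04b"))          # 0110
    # One byte (8 bits) normally encodes a character, e.g. the letter 'A':
    print(format(ord("A"), "08b"))   # 01000001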

In computer programming, these sets of bytes are used to develop both complex and simple sets of instructions called SUBROUTINES, ROUTINES and PROGRAMS. These sets of instructions, called programs, help a computer to generate solutions to scientific, business and machine-control problems.

A digital computer also has the ability to compare, and it has the capacity to make decisions by employing prescribed criteria.

Two examples of typical decision-making instructions to a computer read thus: if A multiplied by B is less than X, perform program P; or, if the result of A multiplied by B is less than X, perform process Y and add 1 to a counter. All the computer will do is assess the value of the manipulation and draw a conclusion without human interference, while still strictly following the program or instructions given earlier.

Fig 3.6: Digital computer
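The decision-making instructions just described can be sketched in Python as follows (the values of A, B and X and the printed messages are hypothetical, chosen only to mirror the wording of the example).

    A, B, X = 3, 4, 20
    counter = 0

    # First example instruction: if A multiplied by B is less than X, perform program P.
    if A * B < X:
        print("perform program P")

    # Second example instruction: if A multiplied by B is less than X,
    # perform process Y and add 1 to the counter.
    if A * B < X:
        print("perform process Y")
        counter += 1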


HYBRID COMPUTERS

In some cases, the user may wish to obtain the output from an analogue computer as processed by a digital computer, or vice versa. To achieve this, a hybrid machine is set up in which the two are connected and the analogue computer may be regarded as a peripheral of the digital computer. In such a situation, a hybrid system attempts to gain the advantages of both the digital and the analogue elements in the same machine. This kind of machine is usually a special-purpose device built for a specific task. It needs a conversion element which accepts analogue inputs and outputs digital values; such converters are called digitizers. There is also a need for a converter from digital back to analogue. A hybrid system has the advantage of giving real-time response on a continuous basis. Complex calculations can be dealt with by the digital elements, which require a large memory and give accurate results after programming. Hybrid computers are mainly used in aerospace and process-control applications.

3.3.2 CLASSIFICATION

The classification of digital computers depends on factors such as size, complexity, cost, computational and retrieval speed, and transmission capability. It must be noted that they all follow certain fundamental concepts and operational principles. The principal factor for classifying computers is processing power.

Using these factors, computers can be classified into three broad categories, namely:

(1) Mainframe Computers

(2) Mini Computers

(3) Micro Computers

It must be noted that recent developments have made this classification difficult; some minicomputers and microcomputers produced recently are more powerful than traditional mainframes.

MAINFRAME COMPUTERS

The first and second generation computers, commonly referred to as the earliest computers, were all mainframes.

Mainframe computers have the following characteristics:

(i) They are mostly large, occupying large floor space.

(ii) Their functional units are physically separated.

(iii) They are general-purpose processors capable of handling multiple simultaneous functions, such as batch, interactive and transaction processing, under the control of an operating system.

(iv) They support a wide range of peripheral equipment, such as printers (including high-speed devices) and communication lines.

(v) They are normally housed in air-conditioned rooms, surrounded by security measures and run by a team of professional operators.

(vi) They have large memories of, say, 4 megabytes, with several disk units, each holding 3-6 hundred megabytes of information.

Mainframe computers are normally used by large organizations such as universities and research establishments, where they supply general-purpose computing facilities. Banks, where large amounts of information have to be collected, sorted and distributed, also make use of mainframe computers.

Fig 3.7: Mainframe computer

MINI COMPUTERS

The third generation of computers ushered in Mini computers.

A minicomputer is structurally a small version of a mainframe computer. It is used for low-volume applications which require relatively sophisticated computational capability. The earliest minicomputers were used for aerospace applications and appeared on the market between 1961 and 1962. Generally, when an organization decided to decentralize its operations or distribute its computing power to various stations or locations within user departments, minicomputers were the first choice before the arrival of microcomputers.

General characteristics of mini computers:


(i) Easier to install

(ii) Have smaller memory sizes and word lengths

(iii) Are best suited for dedicated purposes

(iv) Need no complex management structures

(v) Typical word length of 12-18 bits

(vi) The main memory ranges from 256K to 512K, with the ability to expand to several megabytes (MB).

Fig. 3.8: Mini computer

MICRO COMPUTERS

The technological advancement that led to the production of LSI made it possible to develop microcomputers. A microcomputer is a small computer consisting of a processor on a single silicon chip mounted on a circuit board together with memory chips, ROM and RAM chips, etc.

Major characteristics are as follows:

(i) A keyboard for the entry of data and instructions

(ii) A screen for display purpose


(iii) Interfaces for connecting peripherals such as plotters, disc drives, light pens, a mouse, etc.

(iv) It has five basic components, which include Random Access Memory (RAM), Read Only Memory (ROM), input and output devices, and interface components.

(v) They have word lengths of 4, 8 or 16 bits (some are 32 bits, and these are referred to as super microcomputers)

(vi) They can operate under normal office conditions

(vii) Their main memory ranges from 4K to 256K

(viii) They have facility for add-on memory of up to 1MB

(ix) They are commonly found in homes, schools, business offices, etc.

Fig 3.9: Micro computer

HYBRID COMPUTER

A hybrid computer system consists of a combination of analogue and digital computers. The earliest hybrid computers were introduced in the late 1950s. The principle then was the employment of digital machines as support devices for the analogue unit. Most recent hybrid computers, by contrast, are digitally based.


The basic components are:

(i) A digital processor

(ii) A memory for the internal storage of a master digital program and data.

(iii) Primary Input/Output hardware which are video display terminals.

(iv) An electronic keyboard

(v) Interactive graphic devices

(vi) Several analogue units used to provide continuous parallel computational capability

(vii) Provision of converters, called Analogue to Digital Converters (ADC), for proper interfacing (these translate data from the analogue processors into digits of the binary code), and

(viii) Provision of devices that convert digitally processed information into analogue representation, called Digital to Analogue Converters (DAC)

Fig 3.10: Hybrid computer

One major advantage of hybrid systems is that they offer greater precision than analogue computers and more control capability than digital machines provide.
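As a rough illustration of what these converters do, the Python sketch below (purely illustrative; the voltage range and bit width are assumed values) quantizes a continuous voltage into an 8-bit code the way a digitizer (ADC) does, and converts the code back into a voltage the way a DAC does.

    def adc(voltage, v_ref=5.0, bits=8):
        # Analogue-to-digital: map a voltage in [0, v_ref] to an integer code.
        levels = 2 ** bits - 1
        code = round(voltage / v_ref * levels)
        return max(0, min(levels, code))

    def dac(code, v_ref=5.0, bits=8):
        # Digital-to-analogue: map an integer code back to a voltage.
        return code / (2 ** bits - 1) * v_ref

    print(adc(3.3))            # 168 (the discrete code for 3.3 volts)
    print(round(dac(168), 3))  # about 3.294 volts recovered from the code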

3.4 USES OF COMPUTER SYSTEM


A computer system can be used to do the following:

1. Type and print documents

2. Send information

3. Play music

4. Do calculations

5. Store and process data

6. Watch television and listen to radio programmes over the internet

3.5 AREAS OF APPLICATION OF COMPUTER

Computers are applied in almost every aspect of human life and operation. In other words, computers have made their presence felt in almost every sphere of life today. Some of the areas where computers have become very common are:

3.5.1 Commercial application

• Banking sector

• Super markets

• Electricity bill generation

• Transportation

• Alarm system

• Paperless money through credit cards.

3.5.2 Educational Institutions:

• Online examination
• Open distance learning format

• Processing examination result

• Computer aided learning

3.5.3 Broadcasting Services

• Use of computer in receiving urgent news

• Runtime reporting as done during parliament elections or sports events

3.5.4 Other Applications include

• space technology

• field of medicine

• applied science and technology

• in industrial research

Summary of Unit 3

In Unit 3, you have learned that:

1. A system is a collection of elements working together to achieve a particular goal

2. A computer is a system made up of input, processing unit and output

3. There are different types of computer; analogue, digital and hybrid computer

4. Computer can be classified by size, purpose and speed.

5. Computer can be used to process, store and transmit data

6. Computer has its applications in different areas : banking, education , medicine, industries

etc
Self-Assessment Questions (SAQs) for Unit 3

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 3.1 (tests learning outcome 3.1)

Define a system. Mention two inputs, two outputs and one processing unit in the human body.

SAQ 3.2 (tests learning outcome 3.2)

State five characteristics of computer

SAQ 3.3(tests learning outcome 3.3)

Mention five areas where computer can be used

SAQ 3.4 (tests learning outcome 3.4)

Describe the type of computer

SAQ 3.5 (tests learning outcome 3.5)

Name the classes of computer system

Notes on the Self-Assessment Questions (SAQs) for Unit 3

SAQ 3.1: A system is a collection of components working together to achieve a goal

Input:

- Eyes: for sensing object

- Mouth : for drinking and eating


- Processing

- Brain : for thinking, memorising and controlling the activities of the body

Output

- Hands: for writing information

- Mouth for speech

SAQ 3.2 (tests learning outcome 3.2)

The characteristics of computer are:

- Speed

- Storage

- Reliability

- Extraordinary tasks

- security

SAQ 3.3(tests learning outcome 3.3)

Areas where computer can be used are:

- banking

- transportation

- education

- broadcasting

- electricity billing

- cashless system

SAQ 3.4 (tests learning outcome 3.4)


The types of computer are :

Analogue, digital and hybrid computers

SAQ 3.5 (tests learning outcome 3.5)

If you classify by their sizes:

- Mainframe computer

- Mini computer

- Micro computer

References

Akinyokun, O.C, (1999). Principles and Practice of Computing Technology. International

Publishers Limited, Ibadan.

Balogun, V.F., Daramola, O.A., Obe, O.O., Ojokoh, B.A., and Oluwadare S.A., (2006).

Introduction to Computing: A Practical Approach. Tom-Ray Publications, Akure.

Larry Long (1984). Introduction to Computers and Information Processing. Prentice-Hall Inc.,

New Jersey.

Gray S. Popkin and Arthur H. Pike (1981). Introduction to Data Processing with BASIC,

2nd edition, Houghton Mifflin Company, Boston.


UNIT 4 HARDWARE COMPONENTS OF COMPUTER

Expected Duration: 2hrs

Introduction

In this unit you will learn about hardware components of computer. The hardware components

are physical components of computer system.

Learning Outcome for Unit 4

At the end of this unit, you should be able to:

4.1 explain computer hardware

4.2 List five (5) examples of input unit

4.3 List four examples of output unit

4.4 Explain the two main types of computer memory

4.1. COMPUTER HARDWARE

Computer hardware is divided into two main categories: the system unit and peripherals. The

system unit contains the electronic components used to process and temporarily store data and

instructions. These components include the central processing unit, primary memory, and the

system board. Peripheral devices are hardware used for input, auxiliary storage, display, and

communication. These are attached to the system unit through a hardware interface that carries

digital data to and from main memory and processors.


Fig. 4.1: Hardware Components

Central Processing Unit (CPU)

The CPU is commonly referred to as the heart of the system; without it, no system can function. It is the minimum hardware any computer system requires. The CPU has three essential sets of transistors that work together in processing digital data: a control unit, an arithmetic logic unit, and registers.

Fig 4.2: Central Processing Unit

Control Unit

The control unit directs the flow of data and instructions within the processor and electronic

memory.

This unit co-ordinates the activities of the units of the system and ensures that the instructions contained in its programs are executed in the proper sequence; it also controls the activities of the various input/output devices.
Fig 4.3: Control unit

The operations carried out by the control unit while executing a single instruction may be

summarized as follows:

i. Obtain the address in memory of the current instruction to be obeyed from the Program Counter.

ii. Copy the instruction from its location in memory into the Instruction Register.

iii. Increment the Program Counter so that it now contains the address of the next instruction to be obeyed.

iv. Decode the instruction from its pattern of binary digits to determine what operation is to be carried out, using the Instruction Decoder.

v. Execute the instruction using the ALU.

vi. Go back to step (i).
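The fetch-decode-execute cycle above can be illustrated with a very small simulation. The sketch below is written in Python purely for illustration; the tiny instruction set (LOAD, ADD, STORE, HALT) and the variable names are invented for the example and are not how a real control unit is built.

    # A minimal sketch of the fetch-decode-execute cycle (illustrative only)
    memory = ["LOAD 5", "ADD 3", "STORE", "HALT"]   # a tiny "program" held in memory
    accumulator = 0
    stored_result = None
    program_counter = 0                             # holds the address of the next instruction

    while True:
        instruction = memory[program_counter]       # fetch: copy the instruction into the instruction register
        program_counter += 1                        # increment the program counter
        opcode, *operand = instruction.split()      # decode: separate the operation from its operand
        if opcode == "LOAD":                        # execute: carry out the operation (the ALU's job)
            accumulator = int(operand[0])
        elif opcode == "ADD":
            accumulator += int(operand[0])
        elif opcode == "STORE":
            stored_result = accumulator
        elif opcode == "HALT":
            break

    print(stored_result)                            # prints 8

Notice that the loop mirrors steps (i) to (vi): fetch, increment the counter, decode, execute, and repeat until a HALT instruction is reached.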

Arithmetic Logic Unit

The arithmetic logic unit (ALU) contains programmed transistors that perform mathematical and

logical calculations on the data. The Arithmetic and Logic Unit (ALU) consists of two units:

i. The Arithmetic Unit: This unit performs arithmetic operations such as addition,

subtraction, multiplication, division etc.


ii. The Logic Unit: This unit performs logical operations such as the comparison

between numbers, shifting of values from one area to another. It is sometimes called

the “mill” for the computer data processing


Fig. 4.4: Data flow in a computer system

Registers

The registers are special transistors that store data and instructions as they are being manipulated

by the control unit and ALU. New microprocessors also have additional high-speed memory

called cache, on the chip to store frequently used data and instructions.

Fig 4.5: Main memory


The Peripheral Devices

Peripheral devices are hardware used for input, auxiliary storage, display, and communication.

These are attached to the system unit through a hardware interface that carries digital data to and

from main memory and processors.

Input/Output (I/O) Devices

These are used as means of communication between computer and the outside and include

keyboard, mouse, modems, scanners, digital camera, network interface cards, and ports. They

allow you to send information to the computer or get information from the computer.

Mouse Keyboard Scanner Monitor Printer Digital

Camera
Input Devices

An input device can be any piece of equipment that transfers information into a computer. This

includes mouse, keyboard, scanner, camera etc.

i. Mouse

A computer mouse is an input device that is used with a computer. Moving a mouse along a

flat surface can move the cursor to different items on the screen. Items can be moved or selected

by pressing the mouse buttons (called clicking). Today's mice have two buttons, the left button and the right button, with a scroll wheel in between the two. Many modern mice use wireless technology and have no wire.

There are many types of mouse; Optical mouse, wireless mouse, mechanical mouse, trackball

mouse. A computer mouse is a handheld hardware input device that controls a cursor in a

Graphical user Interface (GUI) and can move and select text, icons, files, and folders. For

desktop computers, the mouse is placed on a flat surface such as a mouse pad or a desk and is

placed in front of your computer. The picture to the right is an example of a desktop computer

mouse with two buttons and a wheel. The mouse was originally known as the X-Y Position Indicator for a Display System and was invented by Douglas Engelbart in 1963 while working at the Stanford Research Institute (SRI). It was later refined at Xerox PARC, but due to the Alto's lack of commercial success, the first widely used application of the mouse was with the Apple Lisa computer ("mouse from FOLDOC". foldoc.org.).

ii. Keyboard

A computer keyboard is a typewriter-style device which uses an arrangement of buttons

or keys to act as mechanical levers or electronic switches. Following the decline of punch

cards and paper tape, interaction via teleprinter-style keyboards became the main input

method for computers. Keyboard keys (buttons) typically have characters engraved or printed on
them, and each press of a key typically corresponds to a single written symbol. However,

producing some symbols may require pressing and holding several keys simultaneously or in

sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys

or simultaneous key presses can produce actions or execute computer commands.

In normal usage, the keyboard is used as a text entry interface for typing text and numbers into

a word processor, text editor or any other program. In a modern computer, the interpretation of

key presses is generally left to the software. A computer keyboard distinguishes each physical

key from every other key and reports all key presses to the controlling software. Keyboards are

also used for computer gaming either regular keyboards or keyboards with special gaming

features, which can expedite frequently used keystroke combinations.

iii. Scanner

Scanners capture text or images using a light-sensing device. Popular types of scanners include

flatbed, sheet fed, and handheld, all of which operate in a similar fashion: a light passes over the

text or image, and the light reflects back to a CCD (charge-coupled device). A CCD is an

electronic device that captures images as a set of analog voltages. The analog readings are then

converted to a digital code by another device called an ADC (analog-to-digital converter) and

transferred through the interface connection (usually USB) to RAM. The quality of a scan

depends on two main performance factors. The first is spatial resolution. This measures the

number of dots per inch (dpi) captured by the CCD. The second performance factor is colour

resolution, or the amount of colour information about each captured pixel. Colour resolution is

determined by bit depth, the number of bits used to record the colour of a pixel.

iv. Camera
Digital cameras are a popular input source for multimedia developers. These cameras eliminate

the need to develop or scan a photo or slide. Camera images are immediately available to review

and reshoot if necessary, and the quality of the digital image is as good as a scanned image.

Digital capture is similar to the scanning process. When the camera shutter is opened to capture

an image, light passes through the camera lens. The image is focused onto a CCD, which

generates an analog signal. This analog signal is converted to digital form by an ADC and then

sent to a digital signal processor (DSP) chip that adjusts the quality of the image and stores it in

the camera’s built-in memory or on a memory card . The memory card or stick has limited

storage capacity. Images can be previewed on the card, and if not suitable, deleted to make space

for additional images. Digital camera image quality, like scanning, is based on spatial resolution

and color resolution.

Output Devices

An output device transfers information to the outside of the computer. It includes; Printer,

Speaker, Display monitor, Projector etc.

i. Printer

Printers remain an important multimedia peripheral device, despite the fact that multimedia

applications are primarily designed for display. Storyboards, system plans, schematics, budgets,

contracts, and proposals are just a few common documents that are frequently printed during

multimedia production. End users print images and web pages, as well as the standard text

documents associated with most computer applications.

There are two basic printing technologies: impact and nonimpact. Impact printers form images

and text by striking paper. Dot-matrix printers use a series of pins that strike the paper through an
inked ribbon. These printers are used for applications that require multiform output or high-speed

printing. They are easy to maintain and relatively inexpensive to operate. However, limited

colour and graphics capability, combined with high noise levels, make impact printers

undesirable for most printing needs. Nonimpact printers form printed output without physically

contacting the page. These devices include inkjet, photo, and laser printers.

ii. Speaker

Speaker systems are essential components of modern computers. Most early microcomputers

restricted sound output to warning sounds such as a loud beep when there was an error message.

Macintosh computers raised the bar on sound output when the first Mac introduced itself to the

world in 1984. A computer that could speak changed the prevailing belief that all computer

information needed to be in visual form. Sound capability soon became a requirement for a

multimedia computer. Sound output devices are speakers or headsets. They are plugged into the

soundboard where digital data is converted to analog sound waves. Soundboards can be a part of

the system board or added to a computer’s expansion slots. Soundboard circuitry performs four

basic processes: it converts digital sound data into analog form using a digital-to-analog

converter, or DAC; records sound in digital form using an ADC; amplifies the signal for delivery

through speakers; and creates digital sounds using a synthesizer.

• Display monitor

• Projector

Computer Memory

Mainly, a computer has two types of memory, namely:

1. Primary or Main Memory (Volatile Memory)

2. Secondary Memory (Non-Volatile Memory)

1. Primary Memory / Volatile Memory

Primary memory is the internal memory of the computer. It is also known as main memory and temporary memory. Primary memory holds the data and instructions on which the computer is currently working. Primary memory is volatile in nature, which means that when power is switched off it loses all its data.

Types of Primary Memory

Primary memory is generally of two types.

1. RAM (Random Access Memory)

2. ROM (Read Only Memory)

1. RAM

It stands for Random Access Memory. RAM is known as read/write memory. It is generally referred to as the main memory of the computer system. It is a temporary memory. The information stored in this memory is lost when the power supply to the computer is switched off. That is why RAM is also called "Volatile Memory".

Types of RAM

RAM is also of two types:

a) Static RAM
Static RAM, also known as SRAM, retains stored information as long as the power supply is ON. SRAM chips are of higher cost and consume more power. They have a higher speed than Dynamic RAM.

b) Dynamic RAM

Dynamic RAM, also known as DRAM, loses its stored information in a very short time (a few milliseconds) even though the power supply is ON, so it must be refreshed periodically. Dynamic RAM is cheaper, of moderate speed, and consumes less power.

2. ROM

It stands for Read Only Memory. ROM is a permanent type of memory. Its contents are not lost when the power supply is switched off. The content of ROM is decided by the computer manufacturer and permanently stored at the time of manufacturing. ROM cannot be overwritten by the computer. It is also called "Non-Volatile Memory".

Type of ROM

ROM is of three types:

1. PROM (Programmable Read Only Memory): A PROM chip can be programmed (written) only once and read many times. Once the chip has been programmed, the recorded information cannot be changed. PROM is also non-volatile memory.

2. EPROM (Erasable Programmable Read Only Memory): An EPROM chip can be programmed time and again by erasing the information stored earlier in it. Information stored in an EPROM is erased by exposing the chip to ultraviolet light for some time.

3. EEPROM (Electrically Erasable Programmable Read Only Memory): An EEPROM is programmed and erased by special electrical signals in milliseconds. A single byte of data or the entire contents of the device can be erased.

2. Secondary or auxiliary Memory (Non-Volatile Memory)

Secondary memory is the external memory of the computer. It is also known as auxiliary memory and permanent memory. It is used to store programs and information permanently. Secondary memory is non-volatile in nature, which means data is stored permanently even if power is switched off.

The secondary storage devices are:

1. Floppy Disks

2. Magnetic (Hard) Disk

3. Magnetic Tapes

4. Pen Drive

5. Winchester Disk

6. Optical Disk(CD,DVD)
Differences between Primary and Secondary Memory

1. Primary memory is temporary; secondary memory is permanent.

2. Primary memory is directly accessible by the processor/CPU; secondary memory is not directly accessible by the CPU.

3. The nature of primary memory varies: RAM is volatile while ROM is non-volatile; secondary memory is always non-volatile in nature.

4. Primary memory devices are more expensive than secondary storage devices; secondary memory devices are less expensive when compared to primary memory devices.

5. The devices used for primary memory are semiconductor memories; the secondary memory devices are magnetic and optical memories.

6. Primary memory is also known as main memory or internal memory; secondary memory is also known as external memory or auxiliary memory.

7. Examples of primary memory: RAM, ROM, cache memory, PROM, EPROM, registers, etc. Examples of secondary memory: hard disk, floppy disk, magnetic tapes, etc.

Summary of Unit 4

In Unit 4, you have learned that:

1. Computer is made up of hardware components and that these components are the physical

part of computer system


2. The hardware components are the input units, processing unit, control unit, memory units and output units.

3. Examples include monitor, keyboard, system unit, memory etc

Self-Assessment Questions (SAQs) for Unit 4

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 4.1 (tests learning outcome 4.1)

Explain hardware components of computer.

SAQ 4.2 (tests learning outcome 4.2)

List five (5) examples of input unit

SAQ 4.3 (tests learning outcome 4.3)

List four examples of output unit

SAQ 4.4 (tests learning outcome 4.4)

Explain the two main types of computer memory

Notes on the Self-Assessment Questions (SAQs) for Unit 4

SAQ 4.1: The hardware components are the physical components of the computer. Read more in section 4.1.

SAQ 4.2: Examples of input units are: keyboard, mouse, scanner, digitising tablet, digital

camera, joystick, microphone and light pen.

SAQ 4.3: Examples of Output units are: monitor, printer, graph plotter, projector and speaker.

SAQ 4.4: The two main memories of computer are:


ROM - Read only memory

RAM -Random access memory

References

Khalid Saeed (2016). New Directions in Behavioral Biometrics. ISBN 978-1315349312.

"computer keyboard". TheFreeDictionary.com. Retrieved 26 June 2018.


UNIT 5 SOFTWARE COMPONENTS OF COMPUTER

Expected Duration: 2hrs

Introduction

In this unit you will learn about computer software and major operations. The software

components are the intangible/non-physical parts of the computer system which is very crucial to

the functioning of any computer system. Presently, software is used everyday to perform basic

and advanced tasks or complex computations. This chapter discusses the fundamentals of the

computer software presenting definition, classifications and applications of software.

Learning Outcome for Unit 5

At the end of this unit, you should be able to:

5.1 explain computer software

5.2 differentiate between system software and application software

5.1 COMPUTER SOFTWARE

Computer software is a generic term for all sorts of programs that run on the hardware system. The hardware system on its own is just a collection of electronic components that cannot perform any task; it is the software that drives and directs the hardware. Computer software can generally be defined as a collection of instructions (programs/codes) and documents that allows a computer to perform specific tasks, from simple to complex. Since the first program was written by Augusta Ada for Charles Babbage's proposed Analytical Engine, software has grown ever more sophisticated over the years to cater for changing individual and organizational needs while providing improved interfaces and user interactions.

5.2 SOFTWARE CLASSIFICATION

Software is classified into two major types, system software and application software, which are further classified into specific tools and functions. System software is machine and software centred while application software is user centred. Figure 5.1 shows the classification, where system software comprises the operating system, utility software and language translators, and application software comprises general purpose, special purpose and bespoke applications.
Figure 5.1: Classification of Computer Software

5.2.1 SYSTEM SOFTWARE

These refer to the suite of programs that facilitate the optimal use of the hardware systems and/or

provide a suitable environment for the writing, editing, debugging, testing and running of user

programs. Usually every computer system comes with a collection of these suites of programs

which are provided by the hardware manufacturers. They constitute an essential part of any

computer system. Examples of system software are: The operating system, the language

processor/translators, loaders, compilers etc.

5.2.1.1 OPERATING SYSTEM (OS)


An Operating System (OS) is a suite of programs acting as an interface between the users of a

computer on one hand and the hardware on the other. It provides users with features that make it easier for them to code, test, execute, debug and maintain their programs while efficiently managing the hardware resources. It is responsible for managing a computer's software and hardware. An OS is typically installed on the computer by the manufacturer before it is sold. The OS oversees computer boot operations and tasks ranging from something as simple as pressing a key on the keyboard to installing and running other software. If a computer system has no OS installed, most of these tasks would have to be carried out by the operator; in essence, the processor would be idle most of the time, which would reduce the throughput of the system. The OS precludes this by reducing the operator's intervention. The functions of an OS include:

Resource (hardware, software) sharing

Provision of virtual machine

Input/output handling

Memory management

File management

Protection and error handling

Program interaction and control

Accounting of computing resources

Operating systems can be found in devices ranging from mobile phones to automobiles to personal and mainframe computers. Many operating systems have been developed and are in use presently; they can be categorized into proprietary and non-proprietary. The most popular ones include Windows, macOS, Android, iOS and Linux (see Table 5.1 for classification by devices).

Figure 5.2: Examples of Operating Systems.

Table 5.1: Top Operating Systems by Devices

Mobile Desktop

Android Windows

iOS Unix

Windows Mac OS

Blackberry Solaris

Symbian MS DOS

Bada Ubuntu

TYPES OF OPERATING SYSTEM

1. BATCH OPERATING SYSTEM: This type of operating system does not allow interactions

with the computer directly. There is an operator which takes similar jobs having the same
requirement and groups them into batches. The operator will then be responsible for sorting

the jobs with similar needs. Getting the right priorities for jobs is one major problem of this

type of system.

2. SINGLE-USER OPERATING SYSTEM: Provides the machine to only one user at a time. Examples are operating systems for microcomputers or PCs, such as CP/M, PC-DOS and MS-DOS.

3. MULTI-USER OPERATING SYSTEM: This system allows multiple users on different

computers to access a single OS resource simultaneously.

4. TIME SHARING OPERATING SYSTEM: Time-sharing is a method that enables many

users at various points, to use a particular computer system at the same time. Also referred to

as multitasking, in time-sharing, each task is allotted some time to execute, so that all the tasks

run smoothly.

5. DISTRIBUTED OPERATING SYSTEM: Various independent interconnected computers

communicate with each other using a shared communication network. Distributed systems use

multiple central processors to serve multiple real-time applications and multiple users. One

advantage of this system is a failure of one of the computers connected will not result in

failure of every other computer.

6. NETWORK OPERATING SYSTEM: A Network Operating System runs on a server where

the server is responsible for managing data, users, groups, security and applications. The

primary purpose of the network operating system is to allow shared file and printer access

among multiple computers in a network. Examples of network operating systems include,

Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X,

Novell NetWare, and BSD.


7. PROCESS CONTROL: The main function of such an OS is to provide maximum reliability

with minimum operator intervention and to ‘Fail Safe’ in case of a hardware malfunction.

8. FILE INTERROGATION SYSTEM: Here there is a large set of data which is interrogated

for information and answers provided without involving the users in the details of

implementation. Area of application includes Management Information System(MIS).

9. TRANSACTION PROCESSING: Large amount of data that is frequently being modified

e.g. airline seat reservation and banking. There can be several users accessing a data item

simultaneously; the operating system gives each user the impression that he is the sole user of

the data item.

10. GENERAL PURPOSE SYSTEM: Used by computers having a large number of users performing a wide range of tasks. They operate in batch or multi-access mode. In batch mode you do not interact with your program while it is running, while in multi-access mode you can interact with your program as it runs. Examples include XENIX, VAX, MVS and VM operating systems.

5.2.1.2 LANGUAGE TRANSLATORS

A language is a set of notations used for communication. A programming language is a set of notations in which we express our computer programs. In the early stages of computer development, programs were hard to write, read, debug and maintain. In an attempt to solve these problems, other computer languages were developed which are English-like and user-friendly. However, computers can only run programs written in machine language. There is therefore the need to translate programs written in these other languages into machine language.

The suite of the programs that translates programs written in these other languages to machine
language is called LANGUAGE TRANSLATORS. The initial program written in a language

different from machine language is called the SOURCE PROGRAM and its equivalent in

machine language is called the OBJECT PROGRAM. The three examples of classes of language

translators are Assemblers, Interpreters, and Compilers.

ASSEMBLERS

An assembly language is a set of notations using symbols or mnemonics that are easily readable,

and is used to write computer programs. An assembler is a computer program that accepts a

source program in assembly language and produces an equivalent machine language program

called the object program or object code. Each machine has its own assembly language. Machine

language of one machine cannot run on another machine.

INTERPRETERS

An interpreter is a program that accepts a program in a source language and reads, translates and executes it one line at a time. An interpreter directly executes instructions written in a programming language, without requiring them first to be compiled into a machine language program.

COMPILERS

A compiler is a computer program that accepts a source program in one high-level language,

reads and translates the entire user’s program into an equivalent program in machine language

called the object program or object code. Some examples of high level languages are

FORTRAN, COBOL, C, Java, Python, etc.

The stages required to compile a high-level source program include:

Lexical analysis: where the syntax is broken into tokens, removing comments and whitespaces.
Syntax analysis: where the given input is checked if it is in the correct syntax of the

programming language in which the input has been written.

Semantic analysis: where declarations and statements of a program are checked so that their

meaning is clear and consistent with defined semantic and control structures.

Code generation: where equivalent target codes are generated from intermediate code

representation.
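To make the first of these stages concrete, the short Python fragment below is a rough sketch of lexical analysis only: it strips a comment and breaks a simple assignment statement into tokens. It is an illustration, not how any particular compiler is actually written; the statement being tokenized is invented for the example.

    import re

    source = "total = price * 3   # compute cost"

    # Lexical analysis (sketch): drop the comment, then split the rest into tokens -
    # names, numbers and single-character operators - discarding whitespace.
    code = source.split("#")[0]
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|[=*+\-/()]", code)

    print(tokens)   # ['total', '=', 'price', '*', '3']

A real compiler would pass these tokens on to the syntax and semantic analysis stages before generating code.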

For each high-level language there is a different compiler. We can therefore talk of a COBOL compiler, a FORTRAN compiler, a C compiler, etc. A compiler also detects syntax errors, that is, errors that arise from incorrect use of the language. Programs written in a high-level language are also relatively portable; for example, a COBOL program written for one machine can be compiled to run on a different machine with minimal changes.

5.2.1.3 UTILITY SOFTWARE

This is a set of commonly used programs that provides general computer optimization services,

e.g. anti-virus, registry cleaners, disk formatters, data generators, etc. Utility software perform

the following operations:

FILE CONVERSION

This covers data transfer from any medium to another making an exact copy or simultaneously

editing and validating. For example, a copy from a magnetic tape to a disk.

FILE COPY

It makes an exact copy of a file from one medium to another or to the same medium, e.g. a copy from one area of a disk pack to another area.


FILE REORGANIZATION

It involves tidying up file storage by reorganizing cylinder and bucket indexes, transferring records and placing them back into their home buckets.

FILE MAINTENANCE

It enables users to insert records into and delete records from sequential files. It also allows users to rename files and amend standing data.

SORTING

It provides certain parameters and requests the machine to arrange a set of records into a certain

order (ascending or descending) using some keys.

5.2.2 APPLICATION PROGRAMS

Application programs often called packages or applications consist of programs designed to

assist users with specific tasks like word editing and publishing, browsing, designing, etc.

Application software may be general purpose (web browsers, word processing, etc.), special

purpose (CAD, Business Management Software, etc.) and bespoke(e-commerce softwares,

content management systems, etc.)

Application programs classified by tasks include:

1. Accounting Packages: These applications cover sales ledger, invoicing, inventory control

payroll, fixed assets, purchased ledger, other financial and accounting processing. Examples

include; SAGE, Freshbooks, NETSuite ERP, etc.


2. Word Processing Packages: These packages help to edit and format text documents.

Examples include; Word Perfect, WordStar, Display writer, Professional writer, LOTUS

manuscript, MS-word, etc.

3. Spreadsheet Packages: A spreadsheet is a sheet of paper ruled into a grid of rows and columns on which you can store and analyse data. Examples include LOTUS 1-2-3, MS Excel, Google Sheets and VP Planner.

4. Utilities: They do the same job as the utility software, which was discussed earlier. Some of

their functions include, undeleting and compressing a file, reading and writing a file sector by

sector such that it would not work successfully if copied. Examples include; NORTON, PC

Tools, Copywriter, LOTUS Magellan, etc..

5. Integrated Packages: They are programs or packages that perform a variety of different

processing operations that are compatible with whatever operation is being carried out. They

perform a number of operations like word processing, database management and spread sheet

processes. Examples are: Office writer, Logistics symphony, Framework, Enable, Ability,

Smart ware II, Microsoft works V2.

6. Graphic Packages: These are packages that enable you to create and manipulate images. Examples include CorelDraw, Adobe Photoshop, Adobe Illustrator, Blender, 3D Paint, etc.

7. Database Packages: These are packages used for designing, setting up and subsequently

managing a database. (A database is an organized collection of data that allows for

modification taking care of different user views). Examples include; MySQL, MS Access,

Dbase IV, FoxBASE, Revelation Advanced, etc.


Summary of Unit 5

In Unit 5, you have learned that:

1. Computers are also made up of software components which are non-physical and whose

function is to control the operations of a computer .

2. Software is divided into system software and application software and is further

categorized according to purpose of operation.

Self-Assessment Questions (SAQs) for Unit 5

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 5.1 (tests learning outcome 5.1)

Explain computer software.

SAQ 5.2 (tests learning outcome 5.2)

Differentiate between system software and application software, giving types and

examples.

Notes on the Self-Assessment Questions (SAQs) for Unit 5

SAQ 5.1: Software, also called a program, consists of a series of related instructions, organized

for a common purpose, that tells the computer what tasks to perform and how to

perform them.
SAQ 5.2: System software controls the activities of computer hardware and the execution of

computer programs while application software are the users programs that allow users

perform tasks as requested.

Types of system software includes, operating systems, e.g. Android, Windows, etc.,

utilities software, e.g. anti-virus, disk-cleaning, etc. and language translators e.g.

compilers, assemblers, etc.

Types of application software includes; general-purpose, e.g. MS Word, MS, Excel,

Mozilla Firefox, etc., special purpose, e.g. Business Management Software, etc. and

Bespoke, e.g. air traffic control systems, Content Management Systems, etc.

References

Deitel, H. M., Deitel, P. J. and Choffnes, D. R. (2003) Operating Systems, 3rd edn., U.K.:

Pearsons.

Nancy C. M. (2017) Computers for Seniors, 5th edn., U.S.A.: For Dummies.

Rama, M. A. (2013) 'Introduction to Programming Concepts', in (ed.) C Programming. India:

pp. 1-35.

Timothy, J.O. and Linda, I.O. (2010) Computing Essentials, U.S.A: McGrawHill.

Vermaat, Misty E (2014) Microsoft Office 2013 Introductory : Cengage Learning.

Vineet, K. S (2013) Introduction to Computer Fundamental, DOI: 10.13140/2.1.3485.3761.


UNIT 6 INTRODUCTION TO DATA PROCESSING

Expected Duration: 2hrs

Introduction

In this unit you will learn about data processing. You will learn the differences between data and information. You will also be exposed to the methods and modes of data processing.

Learning Outcome for Unit 6

At the end of this unit, you should be able to:

6.1 Explain data processing

6.2 Differentiate between data and information.

6.3 List and explain stages involved in data processing cycle

6.4 Explain operational modes in data processing

6.1 WHAT IS DATA PROCESSING?

Data processing is the manipulation of data into usable information. In computer terms, this is done on databases and includes data entry and data mining. Data are raw facts and figures collected from events or other sources. Data need to be processed or organized so that they become meaningful and useful. The processed data is referred to as information.

6.2 DATA PROCESSING CYCLE

The data processing cycle is the order in which data is processed. There are four stages;

Data collection

Data input

Data processing and storage

Data output
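As a toy illustration of these four stages, the Python sketch below collects a few values, processes them into an average, and outputs the result. The scores used are invented for the example.

    # Data collection / data input: raw facts gathered from an event or source
    scores = [45, 67, 89, 72]          # hypothetical examination scores

    # Data processing and storage: organize the raw data into something meaningful
    total = sum(scores)
    average = total / len(scores)

    # Data output: the processed data is now information
    print("Number of scores:", len(scores))
    print("Average score:", average)   # 68.25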

6.3 METHOD OF DATA PROCESSING


The stages or activities of converting or organizing data into information may be carried out in the following ways:

Manual method: This involves human effort. The operations are performed using the brain to think, a calculator to calculate, writing materials to write and drawing materials to draw. It is suitable for small volumes of data.

Mechanical method: This involves the use of mechanical machines to process data. The use of an adding machine to update a ledger is a good example.

Electronic method: This method involves the use of electronic devices such as the computer system. It is suitable for processing large volumes of data. The advantages of this method lie in its speed, accuracy and large storage.

6.4 COMPUTER MODE OF PROCESSING DATA

Real-time processing: In real-time processing, data entered into the computer are processed and the result is generated immediately. This type of processing is common in banking operations.

Time-sharing: In time-sharing the system is interactive; it allows several users to process data independently on a single computer at the same time. This method allows a user to share the resources of the computer with other users.

Batch processing: This type of processing allows users to submit data or jobs which are accumulated over a given period of time before the processing takes place. This technique is suitable for large volumes of data.

Distributed processing: In this method some of the processing devices and procedures are situated in different locations. The processing devices are connected together by transmission facilities.
Summary of Unit 6

In Unit 6, you have learned that:

1. Data processing is the manipulation of data into usable information.

2. Data are raw facts and figures collected from events or other sources.

3. The processed data is referred to as information

4. There are four stages involved in data processing cycle: data collection, data input, data

processing, data output.

5. Data processing method can be manual, mechanical and electronic

Self-Assessment Questions (SAQs) for Unit 6

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 6.1 (tests learning outcome 6.1)

Explain data processing

SAQ 6.2 (tests learning outcome 6.2)

Differentiate between data and information

SAQ 6.3 (tests learning outcome 6.3)

List and explain stages involved in data processing cycle

SAQ 6.4 (tests learning outcome 6.4)


Explain operational modes in data processing

Notes on the Self-Assessment Questions (SAQs) for Unit 6

SAQ 6.1: Data processing is the manipulation of data into usable information.

SAQ 6.2: Data are raw facts and figures collected from events or other sources while processed

data is information

SAQ 6.3: Go over 6.2

SAQ 6.4: Go over 6.4

References

1. C. French. (1996). Data Processing and Information Technology (10th ed.), Thomson, pp. 2. .

2. V. Illingworth. (1997). Dictionary of Computing. Oxford Paperback Reference (4th ed.).

Oxford University Press.

3. B. Bourque, V. Virginia. (1992). Processing Data: The Survey Example, Quantitative

Applications in the Social Sciences, no. 07, pp.085, Sage Publications.

4. J. Levy. (1967). Punched Card Data Processing. McGraw-Hill Book Company.


UNIT 7 NUMBER SYSTEMS

Expected Duration: 2hrs

NUMBER SYSTEMS

There are four common number systems, namely: binary, octal, decimal and hexadecimal systems. The number of digits used in a number system is referred to as its base. Thus, the decimal number system has a base of ten because it uses ten digits. Similarly, the binary number system has a base of two. The table below shows the common number systems with their digits and bases.

Table 7.1 Common Number Systems with their digits and bases.

Number System    Digits                              Number of digits (Base)
Binary           0 1                                 Two
Octal            0 1 2 3 4 5 6 7                     Eight
Decimal          0 1 2 3 4 5 6 7 8 9                 Ten
Hexadecimal      0 1 2 3 4 5 6 7 8 9 A B C D E F     Sixteen

7.1.1 Binary number System

Binary number system has only two digits: 0, 1. It has a base of 2. Examples are: 101, 1001 and 1011. These binary numbers are written as 101₂, 1001₂ and 1011₂ (or 101two, 1001two and 1011two) respectively. The subscript 2 indicates the base number.

7.1.2 Octal number System


Octal number system has eight digits: 0, 1, 2, 3, 4, 5, 6, 7. It has a base of 8. Examples of octal numbers are 176, 405, 260 and 737. They are written as 176₈, 405₈, 260₈ and 737₈ (or 176eight, 405eight, 260eight and 737eight) respectively. The subscript 8 indicates the base number.

7.1.3 Decimal number System

Decimal number system uses ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and has a base of 10. An example of a decimal number is 4756. This number may be written as 4756₁₀. 4756 means 4 thousands, 7 hundreds, 5 tens and 6 units:

= 4 x 1000 + 7 x 100 + 5 x 10 + 6 x 1

= 4 x 10³ + 7 x 10² + 5 x 10¹ + 6 x 10⁰

= 4756.

7.1.4 Hexadecimal Number System

Hexadecimal number system uses sixteen digits, namely:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.

It has a base of 16. The first ten digits are equivalent to the decimal digits. The letters A, B, C, D, E and F represent 10, 11, 12, 13, 14 and 15 respectively.

Thus, A=10, B=11, C=12, D=13, E=14 and F=15. Examples of hexadecimal numbers are 8F₁₆, 35B₁₆, 10E₁₆ and C9₁₆.

7.2 Conversion From Decimal System To other Number System

The following procedure is used to convert from decimal system to another number system.
Divide the decimal number by the new base,

Continue dividing until you reach zero (0);

Write down the remainder each time you divide; and

List the remainder figures starting from the last to the first in successive order to arrive at the

required answer.

Example

Convert 25₁₀ to binary.

Solution

25 to binary (base 2)

25 ÷ 2 = 12 Rem. 1

12 ÷ 2 = 6 Rem. 0

6 ÷ 2 = 3 Rem. 0

3 ÷ 2 = 1 Rem. 1

1 ÷ 2 = 0 Rem. 1

25₁₀ to binary = 11001₂

Now that you have been able to work through the example above, can you try this? Convert 737₁₀ to an octal number.

The solution to the question is 1341₈.

Now cross-check your answer with the solution below.

Solution

737 to octal (base 8)

737 ÷ 8 = 92 Rem. 1

92 ÷ 8 = 11 Rem. 4

11 ÷ 8 = 1 Rem. 3

1 ÷ 8 = 0 Rem. 1

737₁₀ to octal = 1341₈

More examples

Convert the following decimal numbers to hexadecimal numbers:

(i) 1046 (ii) 268

Solution

(i) 1046 to Hex (base 16)

1046 ÷ 16 = 65 Rem. 6

65 ÷ 16 = 4 Rem. 1

4 ÷ 16 = 0 Rem. 4

1046₁₀ to Hex = 416₁₆

(ii) 268 to Hex (base 16)

268 ÷ 16 = 16 Rem. 12 = C

16 ÷ 16 = 1 Rem. 0

1 ÷ 16 = 0 Rem. 1

268₁₀ to Hex = 10C₁₆
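The repeated-division procedure used in all of the examples above can also be expressed as a short Python function, which you may use to check your own conversions. The function name and layout below are my own; it is a sketch rather than part of the course text.

    def decimal_to_base(n, base):
        """Convert a decimal integer n to the given base by repeated division."""
        digits = "0123456789ABCDEF"
        result = ""
        while n > 0:
            n, remainder = divmod(n, base)       # divide and keep the remainder
            result = digits[remainder] + result  # remainders are read from last to first
        return result or "0"

    print(decimal_to_base(25, 2))    # 11001
    print(decimal_to_base(737, 8))   # 1341
    print(decimal_to_base(268, 16))  # 10C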

7.3 CONVERSION FROM ANY NUMBER SYSTEM TO DECIMAL NUMBER

SYSTEM

It is possible to convert from any number system to the decimal system. Each digit in the number is multiplied by the base raised to various powers, as you will see in the examples. The results of the multiplication are added up to arrive at the required answer.

Examples

Convert the following binary numbers to decimal numbers.

(i) 1101₂ (ii) 0110₂

Solution

(i) 1101₂ to a decimal number

1 x 2³ = 8

1 x 2² = 4

0 x 2¹ = 0

1 x 2⁰ = 1

1101₂ = 13

(ii) 0110₂ to a decimal number

0 x 2³ = 0

1 x 2² = 4

1 x 2¹ = 2

0 x 2⁰ = 0

0110₂ = 6

Convert 105₈ to a decimal number.

Solution

105₈ to a decimal number

1 x 8² = 64

0 x 8¹ = 0

5 x 8⁰ = 5

105₈ = 69

Now, let us test how far you have understood the subject again. Convert 260₈ to a number in base ten. If you have worked it correctly, your answer should be 176. Compare your working steps with the ones below:

2 x 8² = 128

6 x 8¹ = 48

0 x 8⁰ = 0

260₈ = 176

Example

Convert the following hexadecimal numbers to decimal numbers.

(i) 416₁₆ (ii) 35F₁₆

(i) 416₁₆ to a decimal number

4 x 16² = 1024

1 x 16¹ = 16

6 x 16⁰ = 6

416₁₆ = 1046

(ii) 35F₁₆ to a decimal number

3 x 16² = 768

5 x 16¹ = 80

F x 16⁰ = 15 (F = 15)

35F₁₆ = 863
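These positional expansions can be checked quickly in Python. The built-in int() function accepts a string and a base, and the same answer can be computed by writing out the powers of the base by hand; the snippet below is only a sketch for verifying your working.

    # Using the built-in conversion
    print(int("1101", 2))    # 13
    print(int("260", 8))     # 176
    print(int("35F", 16))    # 863

    # The same expansion written out for 35F in base 16
    value = 3 * 16**2 + 5 * 16**1 + 15 * 16**0
    print(value)             # 863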
Summary of Unit 7

In Unit 7, you have learned that:

The units of counting are known as number system.

There are basically four systems of counting: binary, octal, decimal and hexadecimal.

We can convert number systems from one to the other

Self-Assessment Questions (SAQs) for Unit 7

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 7.1 (tests learning outcome 7.1)

List the commonly used number systems

SAQ 7.2 (tests learning outcome 7.2)

Convert the following hexadecimal numbers to decimal numbers:

(i) 9B₁₆ (ii) 10C₁₆

SAQ 7.3 (tests learning outcome 7.3)

Convert the following binary numbers to decimal numbers.

(i) 0110₂ (ii) 100₂


Notes on the Self-Assessment Questions (SAQs) for Unit 7

SAQ 7.1: The commonly used number systems are: binary, octal, decimal and hexadecimal

SAQ 7.2:

(i) 9B₁₆ to a decimal number

9 x 16¹ = 144

B x 16⁰ = 11 (B = 11)

9B₁₆ = 155

(ii) 10C₁₆ to a decimal number

1 x 16² = 256

0 x 16¹ = 0

C x 16⁰ = 12 (C = 12)

10C₁₆ = 268

SAQ 7.3:

(i) 0110₂ to a decimal number

0 x 2³ = 0

1 x 2² = 4

1 x 2¹ = 2

0 x 2⁰ = 0

0110₂ = 6

(ii) 100₂ to a decimal number

1 x 2² = 4

0 x 2¹ = 0

0 x 2⁰ = 0

100₂ = 4


UNIT 8 DATA REPRESENTATION AND ITS MEASUREMENT IN COMPUTER

Expected Duration: 2hrs

Introduction

In this unit you will learn how the capacity of memory and storage media is measured in a computer system. You will learn the concepts of bit, byte and character in information storage, how data is represented inside the computer, and how to convert between the different units of measurement.

Learning Outcome for Unit 8

At the end of this unit, you should be able to understand:

i. Data representation i.e. data size and speed.

ii. How to measure physical quantities in digital form.

iii. Units of measurement in computer.

iv. Measurement of data speed.

v. Different between various measurement of storage capacity.


vi. Changing from one unit to another.

8.1 DATA REPRESENTATION IN COMPUTER

Data in a computer is represented in a series of bits (binary digits) or ones and zeroes. Since the

birth of computers, bits have been the language that control the processes that take place inside

that mysterious black box called your computer.

Data and instructions are entered into the computer in alphabetic and number forms. These

entries are converted to binary digits before the computer uses them. For convenience, computer

uses coding schemes to represent numbers, alphabets, special characters and symbols in bits. The

common coding schemes are Binary Coded Decimal, Extended Binary Coded Decimal Interchange Code and the American Standard Code for Information Interchange. (i) Binary Coded Decimal (BCD) uses 4 bits to represent the numbers 0-9. (ii) Extended Binary Coded Decimal Interchange Code (EBCDIC) is an 8-bit coding scheme. It uses 8 bits to represent the numbers 0-9, letters and special characters. For instance, 1111 0101 represents 5 and 1100 0010 represents the uppercase letter B. (iii) American Standard Code for Information Interchange (ASCII) uses 8 bits (giving 2⁸ = 256 possible codes) to represent the numbers 0-9, letters, special characters, mathematical symbols and keyboard characters.

ASCII (American Standard Code for Information Interchange)

ASCII uses 8-bit binary numbers to represent text characters. An 8-bit code allows 256 different characters to be stored:

A-Z - 26 characters

a-z - 26 characters

Control Characters (return, tab etc) - 32 characters

0-9 - 10 characters

Punctuation - approximately 20 characters

Mathematical Symbols - approximately 50 characters

Figure 1: Data Representation

The remaining spaces in the 256-character code are used to store foreign alphabet letters. Figure 1 shows the forms in which data can be represented.
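The ASCII codes of individual characters can be inspected directly in Python; the short sketch below simply prints a few characters with their codes and 8-bit patterns (the characters chosen are arbitrary examples).

    for ch in "A", "a", "7", "?":
        code = ord(ch)                        # the character's ASCII code as a decimal number
        print(ch, code, format(code, "08b"))  # the same code as an 8-bit binary pattern

    # A 65 01000001
    # a 97 01100001
    # 7 55 00110111
    # ? 63 00111111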

Binary to Denary

To convert a binary number to a denary number, simply add up the columns where a 1 appears.

Example 1: Convert the binary number 01100100 into a denary value.

The columns containing a 1 are 64, 32 and 4, so 64 + 32 + 4 = 100.

Denary to Binary

To convert a number from denary to binary we reverse the process and place 1s into the columns, ensuring that they add up to the number we are looking for.

Example 1: Convert 94 into a binary number.

94 = 64 + 16 + 8 + 4 + 2, so place a 1 in the 64, 16, 8, 4 and 2 columns. Finally place zeros in all the columns you have not filled. Answer = 01011110
Using Binary to Store Real Numbers

Real numbers, or numbers with decimal places, are stored using scientific notation. For example, the number 345.765 would be stored as:

3.45765 x 10²

The computer then stores two separate integers with a set number of bits:

the mantissa (3.45765), for example as the bit pattern 1010001100101010, and

the exponent (2), for example as the bit pattern 00000010.

The complete number is then stored as one long integer: 101000110010101000000010

Note that the number of bits that a computer uses to store the mantissa and exponent has an

effect on the number stored.
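In practice, modern computers hold such numbers in a binary (base 2) version of this mantissa-and-exponent idea, most commonly the IEEE 754 floating-point format, rather than the base 10 illustration used above. If you are curious, the Python sketch below peeks at the 32 bits actually stored for a single-precision value; it is an aside, not part of the scheme described in this section.

    import struct

    number = 345.765
    raw = struct.pack(">f", number)                  # the 4 bytes of a single-precision float
    bits = "".join(format(b, "08b") for b in raw)
    print(bits)   # 32 bits: 1 sign bit, 8 exponent bits, 23 mantissa bits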

8.2 BASIC UNIT OF MEASUREMENT


Bit and byte are the basic units of measurement. These two units are discussed as follows:

BITS

All information in the computer is handled using electrical components like the integrated

circuits, semiconductors, all of which can recognize only two states – presence or absence of an

electrical signal. Two symbols used to represent these two states are 0 and 1, and are known as

BITS (an abbreviation for BInary DigiTS). 0 represents the absence of a signal, 1 represents the

presence of a signal. A BIT is, therefore, the smallest unit of data in a computer and can either

store a 0 or 1. Since a single bit can store only one of two values, two bits taken together can form only four unique combinations:

00 01 10 11

Bits are, therefore, combined together into larger units in order to hold greater range of values.
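In general, n bits can form 2 to the power n unique patterns, which is why bits are grouped into bytes and words. A quick check in Python (illustrative only):

    from itertools import product

    for n in (1, 2, 3, 8):
        patterns = list(product("01", repeat=n))            # every possible pattern of n bits
        print(n, "bits ->", len(patterns), "combinations")  # always equals 2**n

    # 1 bits -> 2 combinations
    # 2 bits -> 4 combinations
    # 3 bits -> 8 combinations
    # 8 bits -> 256 combinations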

BYTES

BYTES are typically a sequence of eight bits put together to create a single computer

alphabetical or numerical character. More often referred to in larger multiples, bytes may appear

as Kilobytes (1,024 bytes), Megabytes (1,048,576 bytes), GigaBytes (1,073,741,824), TeraBytes

(approx. 1,099,511,000,000 bytes), or PetaBytes (approx. 1,125,899,900,000,000 bytes). Bytes

are used to quantify the amount of data digitally stored (on disks, tapes) or transmitted (over the

internet), and are also used to measure the memory and document size.

Sn  Unit            Description

1.  Bit             Short for BInary digIT. It is the smallest unit which can be defined in a computer. Bits (1s or 0s) correspond to simple switches being on or off.

2.  Byte            A byte is a group of 8 bits. Early computers worked with groups of 8 bits, or one byte. Today's computers typically process groups of 64 bits (8 bytes) at a time.

3.  KiloByte (KB)   2¹⁰ bytes: exactly 1024 bytes. Kilo usually means 1000 of something, but in binary 1024 is a round number. Text files and small graphic files are usually quoted in KB.

4.  MegaByte (MB)   2²⁰ bytes: exactly 1024 Kilobytes. Mega usually means 1 million of something, and in this case it is approximately 1 million bytes. Photos and music files are usually measured in MB.

5.  GigaByte (GB)   2³⁰ bytes: exactly 1024 Megabytes. The capacity of some storage devices (DVDs, USB Flash Drives) is measured in GB.

6.  TeraByte (TB)   2⁴⁰ bytes: exactly 1024 Gigabytes. This measurement is now commonly used with newer hard disk drives, mainframe memory and server hard drives.

7.  PetaByte (PB)   A petabyte is a unit of information or computer storage equal to one quadrillion bytes (1024⁵ bytes).

Character
Character is an alphabet, a number or a symbol.

Alphabets: a through z or A through Z.

Numbers: 01234 56789

Symbols: @ (%, - +? #)

Blank spaces are regarded as characters. The table below shows number of bytes required to

store characters and numeric values

Type of Value Number of bytes used for storage

1 character 1 byte

1 integer 2 bytes

1 single precision 4 bytes

1 double precision 8 bytes

Word

One unit of information is usually made up of 8, 16 or 32 bits. This unit of information is

referred to as a word. Thus, a 16-bit word is equal to 2 bytes while a 32-bit word is equal to 4

bytes. Different computers use different word-lengths. A word-length of 32 bits is commonly

used.

Exercise

How many bytes are required to store the following information?

(i) A single precision value 5462. (ii) An integer value 68.

Solution
(i) 5462 is one single precision value.

One single precision value requires 4 bytes for storage,

(ii) 68 is an integer value.

One integer value requires 2 bytes for storage.

Complete this Table

1024 bytes        ? kilobytes
? kilobytes       1 megabyte
32-bit word       ? bytes

Check this table to see if you are right:

1024 bytes        1 kilobyte
1024 kilobytes    1 megabyte
32-bit word       4 bytes

8.3 MEASUREMENT OF STORAGE CAPACITY

The storage capacity of RAM, hard drives or any other storage device is usually quoted in

Megabytes (MB) or Gigabytes (GB). Table 8.1 shows Volume capacity of common storage

media.

Storage Medium              Volume Capacity

Diskette                    A diskette stores 1.44 million characters (now outdated). Its volume capacity is 1.44MB.

Compact disc (CD)           A CD stores 650 or 700 million characters. It has a volume capacity of 650MB or 700MB. A CD can record music for a period of 74 minutes.

Hard disk                   The volume capacity of a hard disk varies. It may have a capacity of 4.3GB, 10GB, 20GB, 40GB or more.

Flash disk                  The volume capacity of a flash disk varies. Its capacity ranges from 64MB to 1GB or more.

Digital video disk (DVD)    A DVD has a volume capacity of 4.7GB.

Figure 2: Example of Storage Capacity of Physical Quantities

8.4 CHANGING FROM ONE UNIT TO ANOTHER

As well as knowing the order of the units (bits, Bytes, KB, MB, GB, TB, PB) it is important,

when doing calculations in computing, to be able to change from one unit to another. For

example: A high definition movie might require 1,717,986,918 bytes of storage space. If you

were telling your friend that you had downloaded the above movie last night. You would be far

more likely to say that the movie you downloaded was 1.6Gb in size. The following

manipulations assist in changing from one form of unit to another:

i. To convert a small unit to a larger one we divide (for example changing bytes to MB).
ii. To convert a large unit to a smaller one we multiply (for example Tb to Mb)

What you multiply and divide by depends on the number of places you are moving up or down. Use the chart below:

bits, Bytes, KB, MB, GB, TB, PB (from smallest to largest; multiply or divide by 8 between bits and Bytes, and by 1024 between each of the other adjacent units)

Note:

KB to GB would be two places to the right, so you would divide by 1024 twice.

Example 1: Convert 4MB into bytes. We are moving two steps to the left: 4 x 1024 x 1024 = 4,194,304 bytes.

Example 2: Convert 4096GB into TB. We are moving one step to the right: 4096 / 1024 = 4TB.

Example 3: Convert 3.5MB into bits. We are moving three steps to the left: 3.5 x 1024 x 1024 x 8 = 29,360,128 bits.

Example 4: Convert 68,719,476,736 bits into GB. We are moving four steps to the right: 68,719,476,736 / 8 / 1024 / 1024 / 1024 = 8GB.
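The multiply-or-divide rule used in these examples can be written as a small Python helper for checking answers. The function and unit names below are my own choices; it is a sketch under the 1KB = 1024 bytes convention used in this unit.

    UNITS = ["bit", "B", "KB", "MB", "GB", "TB", "PB"]
    FACTORS = [8, 1024, 1024, 1024, 1024, 1024]   # factor between each adjacent pair of units

    def convert(value, from_unit, to_unit):
        """Convert between units by multiplying or dividing one step at a time."""
        i, j = UNITS.index(from_unit), UNITS.index(to_unit)
        while i > j:                  # moving left (to a smaller unit): multiply
            i -= 1
            value *= FACTORS[i]
        while i < j:                  # moving right (to a larger unit): divide
            value /= FACTORS[i]
            i += 1
        return value

    print(convert(4, "MB", "B"))               # 4194304.0
    print(convert(4096, "GB", "TB"))           # 4.0
    print(convert(68719476736, "bit", "GB"))   # 8.0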


Summary of Unit 8

In this unit, you have learned that:

i. The capacity of storage media, disk files and computer -memories are measured in

kilobytes, megabytes, gigabytes, Terabytes and Petabytes.

ii. A bit is a binary digit (a 0 or a 1), while a byte is a combination of 8 bits; one byte is equivalent to one character.

iii. The three (3) data representation schemes commonly used in computers are:

iv. Binary Coded Decimal (BCD), which uses 4 bits to represent numbers;

v. Extended Binary Coded Decimal Interchange Code (EBCDIC), which is an 8-bit coding scheme; and

vi. American Standard Code for Information Interchange (ASCII), which uses 8 bits and can represent 256 characters.

vii. Measurement of storage capacity are Kilobyte, Megabyte, Gigabyte, Terabyte and

Petabyte

viii. How to change from one unit to the other.

Self-Assessment Questions (SAQs) for Unit 8

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 8.1 (Tests Learning Outcome 8.1)

How many bits make (i) one byte, (ii) one nibble?

Convert the binary number 01101101 into a denary value.


List three coding schemes commonly used in computers.

SAQ 8.2 (Tests Learning Outcome 8.2)

How many bytes are required to hold the following in the computer

memory?

(i) Computer. (ii) software. (iii) A single precision number 567.

(iv) Two integer numbers. (v) 16-bit word

SAQ 8.3 (Tests Learning Outcome 8.3)

Write the storage capacity of these drives: (i) DVD (ii) CD (iii) hard drive

SAQ 8.4 (Tests Learning Outcome 8.4)

Convert 8Mb into bytes

Notes on the Self-Assessment Questions (SAQs) for Unit 8

SAQ 8.1: 8 bits make 1 byte; 4 bits make 1 nibble. The binary number 01101101 is 109 in
denary. Three commonly used coding schemes are BCD, EBCDIC and ASCII.

SAQ 8.2: (i) Computer - 8 bytes (ii) software - 8 bytes

(iii) A single precision number 567 - 4 bytes

(iv) Two integer numbers - 4 bytes

(v) 16-bit word - 2 bytes

SAQ 8.3: (i) DVD - 4.7 GB (ii) CD - 650 MB or 700 MB (iii) hard drive - varies (4.3 GB,
10 GB, 20 GB, 40 GB or more)

SAQ 8.4:

Convert 8 MB into bytes. We are moving two steps to the left: 8 x 1024 x 1024 =
8,388,608 bytes.

References

Gibeson, G. A. (1991). Computer System Concept and Design. Englewood Cliffs, NJ: Prentice Hall.

Gray, N. A. B. (1987). Introduction to Computer System. Englewood Cliffs, NJ: Prentice Hall.
UNIT 9 COMPUTER NETWORK

Expected Duration: 2hrs

Introduction

In this lecture you will learn about connecting computer systems together to share and

exchange information. This interconnection of computers is called a computer network. Different

types of computer networks will be taught. You will also learn about computer network topologies.

Learning Outcome for Unit 9

At the end of this unit, you should be able to:

9.1 Define a computer network

9.2 Explain different types of computer networks

9.3 Identify different computer network topologies

9.4 State four (4) advantages of computer networks

9.1 COMPUTER NETWORKS

Definition

A computer network is a collection of computer systems linked together by means of

communication lines in order to share resources. The communication lines may be ordinary cables,

telephone lines or broadcast channels. Each computer system in the network is referred to as a

node. Figure 1 below shows an example of a computer network.


Figure 1: Typical Computer Network

9.2 TYPES OF NETWORK

There are three major types of network. They are local area network (LAN), metropolitan area

network (MAN), and wide area network (WAN)

9.2.1 Local Area Network

In a LAN, the computer systems are situated in the same locality or premises, within a few metres

of each other. The computers are usually linked with ordinary cables.

9.2.2 Metropolitan Area Network

Computers in a metropolitan area network (MAN) are a few kilometres away from each other. They

are usually situated within a metropolis, community or city. The computers in this network are

connected to each other with communication equipment.

9.2.3 Wide Area Network

Computers in a WAN are spread over a wide area. They may be several kilometres away from

each other. Because of the distance, computers in this type of network are linked with telephone

lines or broadcast channels.

9.3 NETWORK TOPOLOGY


Computers in a network are connected or linked together in different ways. The structure of

physical connections in the network is called its topology. Now let us have a look at and discuss some

of these topologies.

9.3.1 Star Connection Method

In a star topology every computer is connected to a central computer. The computers in the

network also communicate with each other through the centrally placed computer. Figure 2 is an

example of a star topology structure.

Figure 2: Star topology structure

Take a look at this explanation.

Activity 9.1

Take a critical look at the above star topology structure. Can you explain what happens to the

network if the central computer is down?

Activity 9.1 Feedback:

Of course, communication among the computers becomes difficult, because all traffic must pass

through the central computer.


Read more from unit 9.3.1

9.3.2 Ring Connection Method

Computers in the network are connected together in a ring form, as shown in Figure 3 below.

Information sent by any of the computers is passed round the network until it is received by the

computer it is intended for.

Figure 3: A Ring Connection.

Activity 9.2

How would you describe the level of security of information in this network (ring topology)?

Activity 9.2 Feedback:

The topology may not be suitable for applications requiring a high level of confidentiality,

because information passes through every computer on the ring.

Read more from unit 9.3.2

9.3.3 Bus Connection Method

A bus topology consists of a single cable with a terminator at each end. All nodes present are

connected to this single cable. There is no fixed limit to the number of nodes that can be attached

to this network, but the number of connected nodes does affect the performance of the

network. All the computers in the network share the same bus. Figure 4 shows a bus topology.

Figure 4: A Bus Connection.

9.3.4 Hierarchical Connection Methods

A hierarchical network topology interconnects multiple groups that are located on separate

layers to form a larger network. Each layer concentrates on specified functions, which allows the

right equipment to be chosen for each layer.

Figure 5: A Hierarchical Connection
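One informal way to compare these topologies is to write down, for each node, the nodes it is directly connected to. The short Python sketch below is purely illustrative (the node names are invented and this is not part of the original text); it shows how a star and a ring differ in structure, and why the star depends on its central computer.

# Purely illustrative: the node names below are invented for this sketch.
# Star: every computer is connected only to the central computer ('Hub').
star = {
    'Hub': ['PC1', 'PC2', 'PC3', 'PC4'],
    'PC1': ['Hub'], 'PC2': ['Hub'], 'PC3': ['Hub'], 'PC4': ['Hub'],
}

# Ring: each computer is connected to exactly two neighbours, forming a loop.
ring = {
    'PC1': ['PC4', 'PC2'],
    'PC2': ['PC1', 'PC3'],
    'PC3': ['PC2', 'PC4'],
    'PC4': ['PC3', 'PC1'],
}

# If the central computer of the star is down, no other computer has a working
# connection left, which is why communication becomes difficult.
links_left = {node: [n for n in neighbours if n != 'Hub']
              for node, neighbours in star.items() if node != 'Hub'}
print(links_left)   # every PC is left with an empty list of connections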

9.4 ADVANTAGES OF NETWORK

The main advantages of computer networks are:

(i) Exchange of information.

(ii) Sharing of the same resources.

(iii) Passing messages from a computer in one location to a computer situated in another
location.

(iv) Providing access to a wide variety of data and information.

(v) Reducing the cost of running hardware and software for computers within the same locality.

Summary of Unit 9

In Unit 9, you have learned that:

1. A computer network is a collection of computer systems linked together to exchange
information.

2. There are three basic types of computer networks: LAN, MAN and WAN.

3. There are different network topologies, among which we have star, ring, bus and
hierarchical topologies.

4. Advantages of computer networks include information exchange, resource sharing and
reduced running costs.

Self-Assessment Questions (SAQs) for Unit 9

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 9.1 (tests learning outcome 9.1)

Define a computer network?

SAQ 9.2 (tests learning outcome 9.2)

Mention two types of computer network


SAQ 9.3 (tests learning outcome 9.3)

Name four methods of connecting computers in a network

SAQ 9.4 (tests learning outcome 9.3)

List four (4) major advantages of computer network.

Notes on the Self-Assessment Questions (SAQs) for Unit 9

SAQ 9.1: A computer network is a collection of computer systems linked together to exchange

information.

SAQ 9.2: Two types of computer networks are: LAN and WAN

SAQ 9.3: Methods of connecting computers in the network are:

(i) Star topology

(ii) Ring topology

(iii) Bus topology

(iv) Hierarchical topology.

SAQ 9.4: Advantages of computer networks are:

(i) Information exchange.

(ii) Resource sharing.

(iii) Message passing (e.g. e-mail).

(iv) Access to a wide variety of information.


References

Prasad, K. V. (2009). Principles of Digital Communication Systems and Computer Networks.
Dreamtech Press.

Tanenbaum, Andrew S. (1996). Computer Networks. Prentice Hall.

Hafner, Katie (1998). Where Wizards Stay Up Late: The Origins of the Internet.

Sodiya, A. S. (2008). Digital Communication and Computer Network - An Introduction (A
handbook).

Kleinrock, Leonard (2005). The History of the Internet.
http://www.lk.cs.ucla.edu/personal_history.html. Retrieved 2009-05-28.

Meyers, Mike (2007). All in One CompTIA A+ Certification Exam Guide (6th Ed.). McGraw
Hill.

Barry, J. R.; Lee, E. A. & Messerschmitt, D. G. (2004). Digital Communication. Kluwer Academic
Publishers.

Hillebrand, Friedhelm (Ed.) (2002). GSM and UMTS - The Creation of Global Mobile
Communications. John Wiley & Sons.


Unit 10 The Internet

Expected Duration: 2hrs

INTRODUCTION

This lecture will take you further in the study of computer networks. This unit will expose you

to the largest network in the world. It is called the Internet, the network of networks.

Learning Outcome for Unit 10

At the end of this unit, you should be able to:

10.1 Define the Internet

10.2 Describe the application areas of the Internet

10.3 Explain what the World Wide Web is and how it works

10.4 Understand the requirements for an Internet connection

10.1 THE INTERNET

10.1.1 Introduction

The Internet, sometimes called simply "the Net", is a world-wide system of computer networks

through which sharing of information is possible. The Internet is often defined as a network of

networks. Just as an interstate highway system links one city to another, the Internet links

thousands of computer networks. The Internet is also a pool of information on matters

ranging from books, education, movies and current affairs to sports, the arts and more.

10.2 APPLICATION AREAS OF INTERNET

The Internet is very useful; it has applications in different areas of human endeavour. The

following are some areas of application of the Internet:


Online banking has replaced the conventional way of banking. We are no longer required to stand in

long queues for depositing, withdrawing or updating our accounts; with just a click of a mouse we

can get the required information about our bank account.

Online education systems no longer require a student to go to an institute to register and attend

classes; in fact, a student can now not only register for and attend classes but also take the

examination for a particular class at the click of a mouse.

Online employment systems allow job seekers to register and obtain information about

vacancies with various companies.

Participating in a discussion about your favourite TV show with like-minded people across the

globe.

Sending and receiving greetings for various occasions across the globe.

Finding out which computer programming languages are ruling the industry.

Visiting an electronic zoo or a museum.

Downloading (obtaining) some interesting software and trying it out.

Publishing your portfolio over the Net.

10.3 THE WORLD-WIDE WEB

The World Wide Web (WWW) is a large-scale, on-line repository of information that users can

search using an interactive application program called a browser. The World Wide Web is an

Internet-based network of Web servers. A Web Server is the host computer that publishes

information for users to view. In other words, we can define the World Wide Web as a universal

database of knowledge.
When we connect to a Web Server we get information in the form of a PAGE. A PAGE displays

information in the form of text, graphics or both. These pages are user-friendly and may contain

links to other pages that contain more in-depth information about the specific page. The links on

these pages lead you to other pages, which may reside on the same or a different server.

The Web gets its name from the web-like structure of interlinked pages that a user navigates,

often without even realizing it. The connected text is called 'hypertext' and the page on which it is

contained is called a 'Web Page'. These web pages are files, similar to those created with a word

processor. The difference is that word processor files have extensions like .DOC (document) or

.TXT (text), whereas web documents have an .HTML (Hypertext Mark-up Language) extension. These

web documents are stored on computers connected to a network. Many such networks join

together to form the Internet. Let us briefly explain some terminologies related to the Internet.

Hypertext: Hypertext is text that 'has connections'. This special text contains the address of

another computer that is part of the WWW. When we click on this text, the browser (our gateway

to the Internet) understands it as an instruction to get that page from that computer and display it.

Link: A link is the connection from one web page to another using hypertext. These web pages

are not physically connected; a page just contains the address of the page that should be displayed.

HTTP: stands for HyperText Transfer Protocol. It is a set of rules and regulations used to send

a page or pages containing hypertext from one computer to another.
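To make the idea of hypertext and links more concrete, the short sketch below (our own illustration in Python; the page fragment and URL are invented) shows a piece of HTML in which the <a> tag is the hypertext and its href attribute holds the address of the page the link points to.

from html.parser import HTMLParser

# Our own invented page fragment: the <a> tag is the hypertext, and its href
# attribute holds the address (the link) of another page.
page = '<p>Read the <a href="http://www.example.com/next.html">next page</a>.</p>'

class LinkFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == 'a':                       # an anchor tag marks hypertext
            print(dict(attrs).get('href'))   # the address the link points to

LinkFinder().feed(page)   # prints: http://www.example.com/next.html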

Browser: A browser is an interactive program that permits a user to view information from the

World Wide Web. The information contains selectable items that allow the user to view other

information. Typically a browser is used for the following services.

- Connecting to the source computer whose address is specified in the hyper-link


- Requesting the new page from the source

- Receiving the page

- Closing the connection

- Displaying it to the user after formatting it

Some popular Web browsers are Internet Explorer, Mozilla, Netscape Navigator and Crazy

Browser.

§ Now let us take a look at this. When you go to a restaurant, you sit down and go through

the menu list. Normally you will place your order through the waiter. The waiter

will take your order to the kitchen for the cook to prepare the dishes. The waiter himself

does not prepare the food; he only conveys the customer's order to the kitchen. After the

cook has prepared the food, the waiter will in turn bring the food to the customer who

placed the order and is waiting for it to arrive. Can you relate this to how the

Internet works?

o The customer is like the user of the Internet, requesting a service. The waiter

is likened to the browser, which takes the request of the user to the server. The cook

is the server, which actually produces what the customer requested. The waiter then brings it

back to the customer, who is the user.

Address: Each computer on the Internet has a unique address of its own. This address is

contained in the hyperlink text of a document. The browser software uses this address to

connect to the server over the network.

Client: The computer that is requesting some service from another computer is called the

client.

Server: A server is the computer that actually services the requests of other computers. Another

name that is sometimes used for a server is a 'host'. The server is usually a powerful computer

with a large memory and hard disk containing many thousands of documents. The documents

can be HTML files, sound files, picture files and others.
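The client/browser/server relationship can also be seen in a few lines of code. The sketch below (our own illustration, using Python's standard urllib module and a generic example address) acts as a very small "browser": it connects to a web server, requests a page, receives it and closes the connection.

from urllib.request import urlopen

# Our own illustration of the client/server idea; the address is a generic example.
with urlopen('http://www.example.com/') as response:  # connect and send the request
    page = response.read()                            # receive the page from the server
    print(response.status)                            # e.g. 200 means the request succeeded
    print(page[:80])                                  # the first few bytes of the HTML

# The connection is closed automatically when the 'with' block ends; a real
# browser would then format the HTML and display it to the user.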

Summary of Unit 10

In Unit 10, you have learned that:

1. The Internet is the world's largest network; it is an international network called the network
of networks.

2. The Internet has applications in different areas of human endeavour, ranging from education,
banking and e-mail to hospital information systems.


3. The World Wide Web (WWW) is a large-scale, on-line repository of information that
users can search using an interactive application program called a browser.

4. Some of the commonly used Internet terminologies are: browser, client, server, hypertext
and request.

Self-Assessment Questions (SAQs) for Unit 10

Now that you have completed this study session, you can assess how well you have achieved its

Learning Outcomes by answering these questions. You can check your answers with the Notes

on the Self-Assessment Questions at the end of this Module.

SAQ 10.1 (tests learning outcome 10.1)

Define the Internet?


SAQ 10.2 (tests learning outcome 10.2)

State five (5) applications of Internet

SAQ 10.3 (tests learning outcome 10.3)

Explain the following: WWW, client and server.

Notes on the Self-Assessment Questions (SAQs) for Unit 10

SAQ 10.1: The Internet is a world-wide system of computer networks through which sharing of

information is possible

SAQ 10.2: Five areas of application of internet are:

- Banking

- Online Education

- Online Employment system

- E-mail

- Teleconferencing

SAQ 10.3: The World Wide Web (WWW) is a large-scale, on-line repository of information that

users can search using an interactive application program called a browser. A client is the
computer that requests a service from another computer, while a server is the computer that
services the requests of other computers.

References

Ryan, Johnny (2013). A History of the Internet and the Digital Future (paperback).
