CSC 111 Course Material
Introduction to Computer
Unit 1 Introduction to Computer
Introduction
Two questions frequently asked about computers are: what are they, and why is there so much interest in them? The answers to these questions partially explain the motivation for this book. There is something about computers that is both fascinating and alarming. They are fascinating when they are used in rocketry and space research and when they enable man to get to the moon and back. Many people think of them as almost human machines with "brains" that allow them to think. On the other hand, we are inclined to be alarmed by their complex mechanisms and involved operations.
In this lecture you will learn about Information and Communication Technology (ICT). As you know, we are now in the information age, and every aspect of our life depends on Information Technology (IT). Therefore, in this course you will learn about the definition of the computer system and its historical development. You will also learn about the great computer scientists who contributed to its development, and about the application of computers in banking, education, science, health, agriculture, etc. The course will also take you through the different classes of computing machines.
There is no doubt that man is highly gifted, with great capabilities and potential. In fact, man is truly an amazing being and a master of inventions. He constantly uses the power of his imagination and inventions to solve problems in his environment. A lot of technologies have been developed, such as the television, vehicle, camera and radio, all aimed at solving problems. The simple idea of the computer as just another electronic problem-solving machine does not say enough about it, because not all electronic machines are computers. You should also note that a computer is not just physical equipment that you can see or touch; it is also made up of parts you cannot easily see, like the program. Now, in a more encompassing manner, let us define the computer.
A computer is an electronic device (calculating machine) that is capable of accepting data (input), processing the data logically or arithmetically using some sets of instructions, and producing the results as information (output). In another way, a computer can be defined as an electronic machine that solves problems by carrying out a set of instructions on the data supplied to it.
To the present-day generation, the computer has different meanings to different groups of people. The use to which it is put determines the understanding attached to it. It is common for different groups to see it differently because of differences in usage. As you study along, bear in mind that a computer is not just a single machine; it is a collection of interrelated parts which are able to transmit information to one another. See the diagram below on the functional parts of the computer system, showing the system unit and other peripherals.
1.2.1 ABACUS
Historically, computing may be considered to have begun with the ABACUS, which originated about 5,000 years ago. During the Middle Ages, the abacus was used throughout the European and Arab worlds as well as in Asia. The design is simply a wooden rack holding parallel wires (rods) on which beads are strung. Calculations are performed manually by sliding the counters (beads or blocks) along the rods. The counters are divided into two sections by means of a bar perpendicular to the rods. One section has two counters, representing 0 or 5 depending upon their position along the rod. The second section has four or five counters, representing units. Each rod represents a significant digit, with the least significant digit on the right. Another computing instrument, the ASTROLABE, was also in use about 2,000 years ago for navigation.
Going by the popular saying that "necessity is the mother of invention", a young man by the name of Blaise Pascal invented the first calculating machine at the age of 19 during the 17th century, 1642 to be precise. His invention was in response to his desire to assist his father in his cumbersome business accounting work that involved a lot of calculations. Pascal's machine was able to carry out only addition and subtraction of numbers, and it utilized a mechanical gear system. He produced further machines, out of which only one was used for British currency addition (pounds and shillings); the two other machines were devices which provided access to pre-calculated tables.
In 1694 the German mathematician Leibnitz developed a more advanced mechanical calculator. His calculator, called the Stepped Reckoner, could also multiply, divide and extract square roots. This calculator's first working model was completed 100 years later, in 1794, and exhibited at the Royal Society in London.
After Leibniz's machine proved unreliable, Charles Babbage, an English inventor, designed by the 1830s the first automatic digital computer, called the Analytical Engine. The new device was able to combine the arithmetic processes of addition, subtraction, multiplication and division with decisions based on its own computations. Most of the basic elements of the modern digital computer were found in Babbage's engine, including a punched-card input/output medium, an arithmetic unit, a memory for storage of numbers and sequential control. Babbage's invention marked the beginning of modern computer architectural design. Considering this great achievement, he is referred to as the father of modern computers, although he was not able to build a complete working model of the engine during his lifetime.
The essays written during the mid-19th century by Boole were of great significance. He called attention to the analogy between the symbols of algebra and those of logic as used to represent logical forms. Boole's system, with its binary logical operators (e.g. AND, OR and NOT), became the basis of what is now known as Boolean algebra, on which electronic computer switching circuitry is based.
Herman Hollerith's use of punched cards for processing census returns was another major step in computer development. He recognised the possibility that a pattern of holes in perforated cards could be sorted and manipulated electrically by a machine specially designed to handle the numerical data represented by the holes. By the U.S. Census of 1890, Hollerith had invented a tabulating system that automated the census count. Hollerith's system accomplished in one year and seven months what it would have taken a hundred clerks seven years and eleven months to do. Hollerith left the census bureau in 1896 to form the Tabulating Machine Company, which eventually became the International Business Machine Corporation (IBM), which today stands out as one of the largest computer companies in the world.
In 1939 John V. Atanasoff, a U.S. mathematician and physicist, built what some consider to be the first electronic digital computer. Around the same period, Howard Aiken of Harvard University began work on a fully automatic large-scale calculator using standard business machine components. By 1944 the first such calculator, called the Automatic Sequence Controlled Calculator and commonly known as Mark-1, was made. Later, Mark-2 and Mark-3 were built on similar lines. Another machine called ENIAC (Electronic Numerical Integrator and Calculator), which consisted of switches and interconnecting wires, was completed in 1946; this was mainly for calculating trajectories, and could also be used in other computations. The use of paper tape for data entry into these machines was slow and did not allow the machines to operate at full speed. Similarly, there was a need to make programs available internally along with the data, to take advantage of the high speed inherent in electronic systems. A large-memory machine was designed at Cambridge by M. V. Wilkes. His machine, called EDSAC (Electronic Delay Storage Automatic Calculator), was used for the training of a whole generation of computer-oriented mathematicians at Cambridge. Between 1945 and 1950, EDVAC (Electronic Discrete Variable Computer) was designed. This machine emphasized the idea of the stored program. By 1948 a prototype machine at Manchester was completed. Later, companies like IBM, Remington Rand, ICL and many others began the commercial production of computers.
Exercise 1
i. Can you highlight the contributions of the great computer scientists you have read about?
Significance of Computer
Scientific and military applications were the first areas to which the computer was put to use to aid problem-solving calculations, especially during war. Presently it is also widely used for planning and as an aid to business. Because of its wide area of application, it can be said to be a general-purpose machine. It performs its data-processing operations accurately and at high speed; it is also called an automatic device and has the ability to perform calculations, sort files and edit information.
It must be noted that its capability to solve any given problem is limited by the instructions supplied. A problem that has no solution from the human point of view also has no solution in the computer realm. Hence it is sometimes said to be an extension of the human mind, though in speed and accuracy it performs better. This view is not entirely true, because the computer has no mind of its own: it cannot start itself, and its ability to solve problems is limited by the logic or steps supplied by the programmer. There is also a high degree of reliability in its processing operations.
It stores vast quantities of information and also retrieves any given volume within a very short time. One major advantage is its ability to take some decisions by altering the flow of instructions.
Due to the speed and accuracy of processing, computers are fast becoming more popular and there is an increase in their demand the world over. The computer currently has a pride of place in every sphere of life; science and research have been vastly accelerated by its use. In business and government, management practices have been revolutionized by the computer because of its ability to process data and present it in a more meaningful form. The development in the computer industry is so fast that the latest developments today may be outdated within two years. This continuous revolution and development in the computer industry makes it a challenging area to be explored.
Today’s computers come in a variety of shapes, sizes and costs. Larger general-purpose
computers are used by many large corporations, universities, hospitals and government agencies
to carry out sophisticated scientific and business calculations. These computers are generally
referred to as mainframes. They are very expensive (some cost millions of dollars), and they
require a carefully controlled environment (temperature, humidity, etc.). As a rule, they are not
physically accessible to the scientists, engineers and corporate accountants that use them.
Mainframes have been available since the early 1950s, but very few people had any opportunity
to use them, particularly in the earlier years. Thus it is not surprising that computers were viewed as remote and mysterious machines by the general public.
The late 1960s and early 1970s saw the development of smaller, less expensive minicomputers. Many of these machines offered the performance of earlier mainframes at a fraction of the cost.
Many businesses and educational institutions that could not afford a mainframe acquired minicomputers instead. Subsequent advances in integrated circuit technology (silicon "chips") resulted in the development of still smaller and less expensive computers called microcomputers. These machines are built entirely of integrated circuits and are therefore not much larger (or more expensive) than a conventional office typewriter. Yet they can be used for a wide variety of personal, educational, commercial and technical applications. Their use tends to complement rather than replace the use of mainframes. In fact, many large organizations utilize microcomputers as terminals or workstations that are connected to a mainframe computer (or series of mainframes) through a communication network. Of particular interest is the development of the personal computer, a small, inexpensive microcomputer that is intended to be used by only one person at a time. Many of these machines are becoming increasingly powerful, and their use is growing dramatically as their cost continues to drop. Personal computers are now used in many schools and small businesses, and it appears likely that they will soon become common household items.
Summary of Unit 1
1. A computer is an electronic device (calculating machine) that is capable of accepting data (input), processing the data logically or arithmetically using some sets of instructions, and producing results as output.
2. Computer was developed through efforts and contributions of great computer scientists.
Now that you have completed this study session, you can assess how well you have achieved its Learning Outcomes by answering these questions. You can check your answers with the Notes on the Self-Assessment Questions at the end of this unit.
(ii) Pascaline
SAQ 1.1: A computer is an electronic device (calculating machine) that is capable of accepting data (input), processing the data logically or arithmetically using some sets of instructions, and producing results as output.
SAQ 1.2: Charles Babbage proposed the architecture of the modern computer, while Blaise Pascal invented the first calculating machine.
1. The characteristics of the computer include: speed, accuracy, storage, integrity, security, etc.
2. The computer has generations, and each generation is a reflection of its developmental stages.
References
1. Jaiyesimi, S.B.; Lala, O.G.; Akinwumi A.O. and Omotosho L.O. (2004) “Computer
Unit 2 Generations of Computers
Introduction
In Unit 1 you learnt the definition and historical development of computers. This unit will take you a step further by examining the developmental stages that the computer has gone through. The computer has gone through stages of design ranging from big-size to small-size computers, from computers that generate much heat to ones that generate less heat, and from computers with low processing power to computers with very high processing power. These and some other design features are the basis for grouping computers into generations.
2.1.1 Overview
Each generation of computers is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, more efficient and reliable devices. Since the development of Mark-1, the digital computer has evolved at an extremely rapid pace. The succession of advances in computer hardware, most notably in logic circuitry and storage systems, is generally discussed in terms of the concept of generations. Each stage of development is associated with one sort of technological innovation or another. Each generation is usually better than the previous one by making possible certain operations which were not possible with the earlier generation. Generally, the five generations of computers are characterized by the technology through which electrical current flows in the processing mechanisms: the first generation used vacuum tubes, the second generation transistors, the third generation integrated circuits, the fourth generation microprocessor chips, and the fifth generation unveiled smart devices capable of artificial intelligence.
The development of the five generations of computers, from the early days to the present, can be summarised by comparing the technology used (vacuum tubes, transistors, integrated circuits, microprocessor chips and beyond), size, speed, cost and the amount of heat generated.
The first generation of modern-day computers, beginning with ENIAC (Electronic Numerical Integrator and Calculator), was ushered in by J. P. Eckert and John W. Mauchly in 1946. It was the first all-purpose, all-electronic digital computer. Figures 2.1 and 2.2 identify the vacuum tubes used for circuitry and the magnetic drums and mercury delay lines used for memory in the first computers, in place of relays, as the active logic elements. There was a substantial increase in computational speed due to the use of electron tubes; this computer was more than 1000 times faster than its electromechanical predecessors.
In 1945 the Hungarian-born mathematician John von Neumann devised a method of converting the ENIAC concept of an externally programmed machine to that of a stored-program concept. This stored-program concept led to the development of the self-modifying computer. Another notable first-generation computer, the UNIVAC-1 (Universal Automatic Computer) built in 1951, was the first commercial computer delivered to a business client, the U.S. Census Bureau. The first electronic digital computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts. UNIVAC-I was the first computer to handle both numerical and alphabetical information with equal ease.
The main problem encountered during the era of first-generation computers was that they occupied a large amount of space and used large circuits, as shown in figure 2.3. Generally, they were slow in operation and generated a lot of heat, with a problem of unreliability which was often the cause of malfunctions compared to later generations. They were also very expensive to operate. An example of a first-generation computer is:
i. ENIAC (1946)
Figure 2.4 introduces the second generation of computers, made possible by the invention in 1947 of a semiconductor device, the transistor, which replaced the vacuum tube. The transistor did not see widespread use until the late 1950s, after a series of developments made it a viable alternative to the vacuum tube. Transistors helped in building a series of processors operating at microsecond speed ranges with a lower level of generated heat, as shown in figure 2.5. By using transistors in control, arithmetic and logic circuits, along with improved magnetic core memory, manufacturers were able to produce computers that were more efficient, smaller, faster and cheaper than their predecessors. Figure 2.5 showcases a typical example of a more efficient and smaller second-generation computer.
Figure 2.5: Improved second generation of computer
This generation of computers moved from cryptic binary machine language to symbolic (assembly) language, which allowed programmers to specify instructions in words; high-level programming languages like COBOL and FORTRAN were also developed at this time in their early versions. The memory was also upgraded from the magnetic drum to magnetic core technology. The first products of the advanced second generation of computers were developed for the atomic energy industry. The transistor was a vast improvement over the vacuum tube, though transistors still subjected the computer to damage because of the great deal of heat they generated. The small size of the transistor, as presented in figure 2.6, its greater reliability and its comparatively low power consumption made the computers of the second generation far superior to the vacuum tube computers of the first generation. This generation spanned roughly the late 1950s to the mid-1960s. Examples of second-generation computers include:
i. IBM-7000
iv. IBM-7094
v. MARK III
i. Used transistors
iii. Assembly language and punch cards were used for input.
vi. Portability
During the late 1960s and 1970s, transistors were miniaturized and placed on silicon chips. Such a device (the integrated circuit) consists of hundreds of transistors, diodes and resistors on a tiny silicon chip, and its arrival marked the start of the third generation of computers. Thereafter, improvements in the integrated circuit led to Large Scale Integration (LSI), which made it possible to pack thousands of transistors and related devices on a single integrated circuit, as shown in figure 2.7. This also led to the invention of the microprocessor, which contains all the arithmetic, logic and control circuitry, called the Central Processing Unit (CPU). The CPU is the part of the digital computer that interprets and executes instructions, and the development of the CPU on integrated circuits was central to this generation.
The typical structure of the third generation is showcased in figure 2.8. This generation produced an important technological innovation that increased the speed and efficiency of computers: instead of punched cards and printouts, users interacted directly with computers through keyboards and monitors that interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory.
The construction of mainframe (large-scale) computers of higher operating speed, capacity and reliability at substantially lower cost was achieved based on the impact of the integrated circuit innovation, which further helped engineers to design minicomputers. The third generation of computers richly helped society by making computers more accessible due to their smaller size and lower cost. Examples of third-generation computers include:
i. IBM-360
iii. IBM-370
ii. The highly sophisticated technology required for the manufacturing of IC chips.
The fourth generation of computers emerged when thousands of integrated circuits were built and packed into a single silicon chip. The first microprocessor, the Intel 4004 chip developed in 1971, contained all the components of the computer, ranging from the central processing unit and memory to input/output controls, as indicated in figure 2.9.
In 1981, IBM introduced its first computer for the home user, while Apple introduced the Macintosh in the year 1984. Microprocessors also moved beyond desktop computers into many areas of life, as more and more everyday products began to use microprocessors, as illustrated in figure 2.10. The main feature attributed to this generation is the availability of VERY LARGE-SCALE INTEGRATION (VLSI), which vastly increased the circuit density of the microprocessor, memory and support chips. Note that while large-scale integrated circuits contain thousands of transistors on a silicon chip less than 0.2 inch (five mm) square, the very large-scale integrated circuit holds hundreds of thousands of transistors.
In terms of basic technology there is not much difference between the fourth generation and the third generation of computers, but this generation witnessed the flooding of the market with a wide variety of software tools and application packages, like database management systems, word processing packages, spreadsheet packages, game packages, etc., and the enhancement of networking capabilities in the area of the LAN (Local Area Network). As this generation of computers became more powerful, the Internet and GUIs also became friendlier and more useful. Fourth-generation computers come in various forms, including:
i. Desktops
ii. All-in-one
iii. Laptops
iv. Workstations
v. Nettops
vi. Tablets
vii. Smartphones
iv. All types of high-level languages can be used on this type of computer.
ii. Air conditioning is required in many cases due to the presence of ICs.
THE FIFTH GENERATION OF COMPUTERS (PRESENT AND BEYOND)
The fifth generation of computing devices is based on artificial intelligence and is still in development, though there are some applications, like voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation, molecular computing and nanotechnology will radically change the face of computers in the years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning, self-organization, reasoning, recognizing relationships and improving their performance based on experience. Manufacturers are also expected to produce voice-input devices that are capable of recognizing natural speech.
Note that the fifth generation of computers is yet to be fully in the market because of the features expected. There is hope that the simultaneous execution of several separate operations (e.g. memory, logic and control), employing numerous integrated circuits in which millions of CPU, memory and logic elements are packed, will be realized. Examples of devices associated with this generation include:
v. Laptop
vi. NoteBook
vii. UltraBook
viii. Chromebook
iii. It provides computers with more user-friendly interfaces with multimedia features
ii. They may make the human brains dull and doomed
Summary of Unit 2
1. The computer has generations, and each generation is a reflection of its developmental stages.
2. Each generation of computers has distinctive features: size, heat generation, memory size,
processing speed and the technology used in building the computer of that generation.
Now that you have completed this study session, you can assess how well you have achieved its Learning Outcomes by answering these questions. You can check your answers with the Notes on the Self-Assessment Questions at the end of this unit.
SAQ 2.1: First generation, second generation, third generation, fourth generation and fifth
generation.
References
Computer Science. Archived from the original (PDF) on May 25, 2006.
2. Keates, Fiona (2012). "A Brief History of Computing" The Repository. The Royal
Society.
5. Shapiro, E. (1983). A Subset of Concurrent Prolog and Its Interpreter, ICOT Technical Report TR-003, Institute for New Generation Computer Technology, Tokyo. Also in Concurrent Prolog: Collected Papers, E. Shapiro (ed.), MIT Press, 1987.
8. Van Emden, Maarten H., and Robert A. Kowalski (1976). "The Semantics of Predicate Logic as a Programming Language". Journal of the ACM.
10. Hendler, James (2008). "Avoiding Another AI Winter" (PDF). IEEE Intelligent Systems.
UNIT 3 COMPUTER SYSTEM
Introduction
Now that you have learned the history and generations of the computer, it is time to study in detail what a computer really looks like, its characteristics, uses, application areas and benefits. In this unit you will also learn the types of computer and the classification of computers based on size, purpose and speed.
Before we discuss the computer in detail, there is a need for us to learn what a system is. We often speak of the water system, the digestive system in biology, the computer system and other types of system. A system is a collection of interrelated parts or components that work together to achieve a goal. Most systems have input, process and output stages, as illustrated in the diagram below.
INPUT → PROCESS → OUTPUT
Input: these are the elements that enter the system for processing.
Digestion of food is the taking in of food through the mouth, the breaking down of the food into soluble forms and wastes by body enzymes, and the releasing of the wastes in the form of urine and excreta.
Input: foods
You can take a look at figure 3.2 below. It shows different parts of the human body. The parts can be likened to an I-P-O system; that is, it has input, processing and output components.
Fig 3.2: Human body system
Input:
Processing
- Brain : for thinking, memorising and controlling the activities of the body
Output
The definition of computer you read in Unit 1 of this module shows that the computer is an I-P-O system. From the definition, the computer accepts data (input), processes the data and gives out information (output).
Consider this scenario: suppose the numbers 10 and 15 are supplied to a computer with an instruction to add the two numbers. Can you show the I-P-O phases of how the computer will carry out this scenario? The computer will add the two numbers according to the given instruction and generate the required result, which is 25. The I-P-O phases of the addition operation are:
Input: 10 and 15; Process: add 10 to 15; Output: 25.
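The same input-process-output flow can be written as a very small program. The sketch below (in Python) is only an illustration of the scenario above; the variable names are hypothetical and not part of the course text.

    # Minimal illustration of the Input-Process-Output (I-P-O) model.
    # Input stage: the data supplied to the system.
    first_number = 10
    second_number = 15

    # Process stage: the instruction applied to the data (add the two numbers).
    result = first_number + second_number

    # Output stage: the information produced by the system.
    print("Output:", result)    # prints: Output: 25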
In our earlier study in Unit 1 we gave a simple definition of a computer. You have also further studied that a computer is a system. What then is a computer system? A computer system is not a single machine. It consists of a group of electronic components like the monitor, system unit, keyboard, mouse, printer and other components working together to achieve a particular goal. You can see figure 1.1 for the different components of a computer system.
Computers have some characteristics or features which distinguish them from other machines. These characteristics constitute the advantages of the computer. Below are some of these characteristics.
Speed: Computers process information at a very fast rate; the speed of processing is measured in fractions of a second, such as microseconds and nanoseconds.
Access: Computers are used mainly for information processing, but just as important is fast access to the stored, processed information. They offer the advantage of fast and easy access to stored information. The speed of retrieval, however, depends on the capacity of the storage device.
Extraordinary tasks: Computers have the ability to perform tasks that would otherwise not be feasible or cost-effective using conventional means. A good example of this is the ability to solve large and complex calculations that would be impractical by hand.
Security: Computers are provided with in-built security codes that make it impossible for outsiders to manipulate the data or records in the computer files. This security helps to check unauthorized access.
Within the computer, every piece of information is encoded as some unique combination of zeros and ones. These zeros and ones are called bits (binary digits). Each bit is represented by an electronic device that is, in some sense, either "off" (zero) or "on" (one). Most small computers have memories that are organized into 8-bit multiples called bytes. Normally 1 byte represents a character (i.e. a letter, a single digit or a punctuation symbol).
An instruction may occupy 1, 2 or 3 bytes, and a single numerical quantity may occupy anywhere from 2 to 8 bytes, depending on the precision and type of number. The size of a computer's memory is usually expressed as some multiple of 2^10 = 1024 bytes. This is referred to as 1k. Small computers have memories whose sizes typically range from 64k to 1024k (1 mega) bytes.
If the memory of a small computer is, say, 64k bytes, then as many as 64 * 1024 = 65,536 characters and/or instructions can be stored in the computer memory. If the entire memory is used to represent character data, then about 800 names and addresses can be stored within the computer at any one time (assuming 80 characters for each name and address). If the memory is used to represent numerical data rather than names and addresses, then about 16,000 individual quantities can be stored at any one time (assuming 4 bytes per number). Large computers have memories that are organized into words rather than bytes. Each word consists of a relatively large number of bits, typically 32 or 36. This allows one numerical quantity, or a small group of characters (typically four or five), to be represented within a single word of memory. Large computer memories are usually expressed as some multiple of 1k (i.e. 2^10 = 1024) words.
A large computer may have several million words of memory. Some memories have the ability to store 16,000 or 64,000 bits (pieces of information), and there are others that can store much more. Now can you calculate the capacity of such a computer in terms of data storage?
o If the memory of a large general-purpose computer is 2048k words, can you determine the number of items it can store?
• This is equivalent to 2048 * 1024 = 2,097,152 words. If the entire memory is used to represent numerical data, then roughly 2 million numbers can be stored within the computer at any one time. If the memory is used to represent character rather than numerical data, then about 8 million characters can be stored at any one time.
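The arithmetic in the two examples above can be checked with a short sketch. The Python fragment below only restates the figures already given in the text (80 characters per record, 4 bytes per number, 1k = 1024 and roughly 4 characters per word); it is an illustration, not part of the original material.

    # Capacity of a small 64k-byte memory, using the assumptions in the text.
    memory_bytes = 64 * 1024        # 64k bytes = 65,536 bytes
    print(memory_bytes // 80)       # about 800 name-and-address records (80 characters each)
    print(memory_bytes // 4)        # about 16,000 numbers (4 bytes each)

    # Capacity of a large memory of 2048k words.
    memory_words = 2048 * 1024      # 2,097,152 words, roughly 2 million numbers
    print(memory_words * 4)         # roughly 8 million characters at 4 characters per word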
Most computers also employ auxiliary memory devices (e.g. magnetic tapes, disks, solid-state memory devices) in addition to their primary memories. These devices typically range from a few hundred thousand bytes (for a small computer) to several million words (for a larger computer). Moreover, they allow for the permanent recording of information, since they can be physically mounted or dismounted from the computer and stored when not in use. However, the access time (i.e. the time required to store or retrieve information) is considerably greater for these auxiliary devices than for the primary memory.
SPEED AND RELIABILITY: Because of its extremely high speed, a computer can carry out in just a few minutes calculations that would require months, perhaps even years, if carried out by hand. Simple tasks, such as adding two numbers, can be carried out in fractions of a microsecond (1 µs = 10^-6 s). On a more practical level, the end-of-semester grades for all students in a large university can typically be processed in just a few minutes of computer time. For example, it was estimated that Hollerith's system accomplished in one year and seven months what it would have taken a hundred clerks seven years and eleven months to do.
This very high speed is accompanied by an equally high level of reliability. Thus a computer practically never makes a mistake of its own accord. Highly publicized "computer" errors, such as a person receiving an absurd monthly bill, are usually the result of a programming error or an error in data transmission rather than an error caused by the computer itself. In computer systems, the output could be described as 100% reliable if the input is correct. Hence the saying garbage-in, garbage-out (what you send in is what you get out).
Activity 3.1
Take a moment to reflect on what you have read so far. Based on your learning experience, and knowing that computers have a lot of characteristics which make them very useful for daily activities, can you mention some of the advantages they offer to individuals, organisations and even governments?
The advantages of the computer range from speed, accuracy and storage capacity to integrity and security.
Computers can be grouped into analogue, digital and hybrid types. Analogue computers operate on data represented by continuously variable physical quantities, such as temperature, pressure, flow or voltage. Digital computers, on the other hand, work with numbers, words and symbols expressed as digits in binary form.
ANALOGUE COMPUTERS
An analogue device is defined as one that operates on the principle of similarity in proportional
relations to a process modelled when values are kept constant over a specified range. A computer
of this type solves problems by operating on continuous variables rather than on discontinuous or
discrete units as do digital computers. Analogue computers are similar to a voltmeter in the way
they measure values. They translate various physical conditions such as flow, temperature,
pressure, mechanical motion, and angular position into mechanical or electrical analogue values.
These types of computer use various types of amplifiers to perform arithmetic operations on the measured quantities.
DIGITAL COMPUTERS
A digital computer processes all kinds of data in discrete form, i.e. as numbers expressed directly in the two digits 0 and 1 of the binary code. Using various techniques, these binary digits, called "bits", can be made to represent numbers, letters and symbols. Binary 0110, for example, represents the decimal number 6. By operating on binary codes, a computer is able to indicate two possible states or conditions: the state is said to be either ON or OFF, where ON stands for 1 and OFF stands for 0.
In computer programming, these sets of bits and bytes are used to develop both simple and complex sets of instructions, called programs, which assist the computer in generating solutions to scientific, business and other problems.
A digital computer also has the ability to compare, and the capacity to make decisions, by evaluating conditions such as: if a value is greater than X, perform process Y and add 1 to a counter. All the computer will do is assess the value resulting from the manipulation and draw a conclusion without human interference, while still following strictly the instructions supplied by the programmer.
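The two ideas above (binary coding and decision making) can be illustrated with a short sketch. The Python fragment below is hypothetical; the threshold X and the counter are example values, not figures from the text.

    # Binary coding: the bit pattern 0110 stands for the decimal number 6.
    value = int("0110", 2)
    print(value)                 # prints 6

    # Decision making: compare the value against X and act on the result.
    X = 5
    counter = 0
    if value > X:                # "if the value is greater than X..."
        counter = counter + 1    # "...perform process Y and add 1 to the counter"
    print(counter)               # prints 1, since 6 is greater than 5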
In some cases, the user may wish to obtain the output from an analogue computer as processed by a digital computer, or vice versa. To achieve this, he sets up a hybrid machine where the two are connected, and the analogue computer may be regarded as a peripheral of the digital computer. In such a situation, a hybrid system attempts to gain the advantages of both the digital and the analogue elements in the same machine. This kind of machine is usually a special-purpose device built for a specific task. It needs a conversion element which accepts analogue inputs and outputs digital values; such converters are called digitizers. There is also a need for a converter from digital back to analogue form. The hybrid system has the advantage of giving real-time response on a continuous basis. Complex calculations can be dealt with by the digital elements, thereby requiring a large memory, and giving accurate results after programming. They are mainly used in aerospace and industrial process-control applications.
3.3.2 CLASSIFICATION
The classification of digital computers depends on factors such as size, complexity, cost, computation and retrieval speed, and transmission capability. It must be noted that they all follow the same basic principles of operation.
Using these factors, computers can be classified into three broad categories, namely: mainframe computers, mini computers and micro computers.
It must be noted that recent developments have made the classification difficult. Recently, some mini computers and microcomputers produced are more powerful than traditional mainframes.
MAINFRAME COMPUTERS
The first and second generation computers, commonly referred to as the earliest computers, were all mainframes. Mainframe computers have the following characteristics:
(iii) They are general purpose processors capable of handling multiple simultaneous functions
such as batch processing interactive and transaction processing under the control of an operating
system.
(iv) They support a wide range of peripheral equipment, such as printers, including high-speed line printers.
(v) They are normally housed in air-conditioned rooms, surrounded by security measures, and run by professional operators.
(vi) They have large memories of say 4 Megabytes with several disk units, each holding 3-6
Banks, where large amounts of information have to be collected, sorted and distributed, also make use of mainframe computers.
MINI COMPUTERS
It is used for low-volume applications which require relatively sophisticated computational capability.
The earliest minicomputers were used for aerospace applications, and they appeared in the 1960s.
Generally, when an organization decides to decentralize its operations or distribute its computing power to various stations or locations within user departments, minicomputers are the first choice.
(vi) The main memory ranges from 256K to 512K, with the ability to expand to several megabytes (MBs).
MICRO COMPUTERS
Technological advancement that led to the production of LSI made it possible to develop micro
computers. A microcomputer is a small computer consisting of a processor on a single silicon chip, mounted on a circuit board together with memory chips (ROM and RAM chips), etc.
(iv) It has five basic components, which include the microprocessor, Random Access Memory (RAM), Read Only Memory (ROM), and input and output units.
(v) They have word length of 4, 8, 16 bits (some are 32 bits, and they are referred to as super
micro computers)
(ix) They are commonly found in homes, schools, business offices, etc.
HYBRID COMPUTER
The earliest hybrid computers were introduced in the late 1950s. The principle here was the employment of digital machines as support devices for the analogue unit. Most recent hybrid computers have the following features:
(ii) A memory for the internal storage of a master digital program and data.
(vi) Several analogue units used to provide continuous parallel computational capability
(vii) Provision of converters called Analogue to Digital Converters (ADC) for proper interfacing (these translate data from the analogue processors into digits of the binary code), and
(viii) Provision of devices that convert digitally processed information back into analogue form (Digital to Analogue Converters, DAC).
One major advantage of the hybrid system is that it offers greater precision than analogue computers do.
The computer is applied in almost every aspect of human life and operation. In other words, computers have made their presence felt in almost every sphere of life today. Some of the areas where computers are applied include:
• Banking sector
• Super markets
• Transportation
• Alarm system
• Online examination
• Open distance learning format
• Space technology
• Field of medicine
• Industrial research
Summary of Unit 3
3. There are different types of computer; analogue, digital and hybrid computer
6. Computer has its applications in different areas: banking, education, medicine, industries, etc.
Self-Assessment Questions (SAQs) for Unit 3
Now that you have completed this study session, you can assess how well you have achieved its Learning Outcomes by answering these questions. You can check your answers with the Notes on the Self-Assessment Questions at the end of this unit.
Define a system. Mention two input units, two output units and one processing unit in the human body.
Input:
- Brain : for thinking, memorising and controlling the activities of the body
Output
- Speed
- Storage
- Reliability
- Security
- banking
- transportation
- education
- broadcasting
- electricity billing
- cashless system
- Mainframe computer
- Mini computer
- Micro computer
References
Balogun, V.F., Daramola, O.A., Obe, O.O., Ojokoh, B.A., and Oluwadare S.A., (2006).
Larry Long (1984). Introduction to Computers and Information Processing. Prentice-Hall Inc.,
New Jersey.
Gray S. Popkin and Arthur H. Pike (1981). Introduction to Data Processing with BASIC,
Unit 4 Computer Hardware
Introduction
In this unit you will learn about the hardware components of the computer. The hardware components are the physical parts of the computer system.
Computer hardware is divided into two main categories: the system unit and peripherals. The
system unit contains the electronic components used to process and temporarily store data and
instructions. These components include the central processing unit, primary memory, and the
system board. Peripheral devices are hardware used for input, auxiliary storage, display, and
communication. These are attached to the system unit through a hardware interface that carries digital data to and from the system unit.
The CPU is commonly referred to as the heart of the system; without it no system can function. It is the minimum hardware with which a system can operate. The CPU has three essential sets of transistors
that work together in processing digital data: a control unit, an arithmetic logic unit, and
registers.
Control Unit
The control unit directs the flow of data and instructions within the processor and electronic
memory.
This unit coordinates the activities of the units of the system and ensures that the instructions contained in its programs are executed in the proper sequence; it also controls the activities of the various input/output devices.
Fig 4.3: Control unit
The operations carried out by the control unit while executing a single instruction may be
summarized as follows:
i. Obtain from the Program Counter the memory address of the current instruction to be obeyed.
ii. Copy the instruction from its location in memory into the Instruction Register.
iii. Increment the Program Counter so that it now contains the address of the next instruction
to be obeyed.
iv. Decode the instruction from its pattern of binary digits to determine what operation is to be performed, and then execute it.
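The four steps above can be pictured as a simple loop. The sketch below uses a made-up two-instruction machine written in Python purely as an illustration of the fetch-decode-execute cycle; it is not the instruction set of any real processor.

    # A toy fetch-decode-execute cycle for a hypothetical machine.
    memory = [("LOAD", 7), ("ADD", 3), ("HALT", None)]   # program held in memory
    program_counter = 0
    accumulator = 0

    while True:
        instruction = memory[program_counter]    # steps i-ii: fetch the current instruction
        program_counter += 1                     # step iii: increment the Program Counter
        opcode, operand = instruction            # step iv: decode the instruction
        if opcode == "LOAD":                     # ...and execute the decoded operation
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "HALT":
            break

    print(accumulator)   # prints 10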
The arithmetic logic unit (ALU) contains programmed transistors that perform mathematical and logical calculations on the data. The Arithmetic and Logic Unit consists of two parts:
i. The Arithmetic Unit: this unit performs arithmetic operations such as addition, subtraction, multiplication and division.
ii. The Logic Unit: this unit performs logical operations such as comparison between numbers and the shifting of values from one area to another.
MEMORY
Registers
The registers are special transistors that store data and instructions as they are being manipulated
by the control unit and ALU. New microprocessors also have additional high-speed memory
called cache, on the chip to store frequently used data and instructions.
Peripheral devices are hardware used for input, auxiliary storage, display, and communication.
These are attached to the system unit through a hardware interface that carries digital data to and from the processor.
These are used as a means of communication between the computer and the outside world, and include the keyboard, mouse, modems, scanners, digital cameras, network interface cards, and ports. They allow you to send information to the computer or get information from the computer.
Input Devices
An input device can be any piece of equipment that transfers information into a computer. Input devices include the following:
i. Mouse
A computer mouse is an input device that is used with a computer. Moving a mouse along a
flat surface can move the cursor to different items on the screen. Items can be moved or selected
by pressing the mouse buttons (called clicking). Today's mice have two buttons, the left button and the right button, with a scroll wheel in between the two, and many of them now connect wirelessly.
There are many types of mouse; Optical mouse, wireless mouse, mechanical mouse, trackball
mouse. A computer mouse is a handheld hardware input device that controls a cursor in a
Graphical user Interface (GUI) and can move and select text, icons, files, and folders. For
desktop computers, the mouse is placed on a flat surface such as a mouse pad or a desk and is
placed in front of your computer. The picture to the right is an example of a desktop computer
mouse with two buttons and a wheel. The mouse was originally known as the X-Y Position Indicator for a display system and was invented by Douglas Engelbart in 1963 while working at the Stanford Research Institute; it was later refined at Xerox PARC for the Alto computer. However, due to the Alto's lack of success, the first widely used application of the mouse was with the Apple Lisa computer ("mouse from FOLDOC", foldoc.org).
ii. Keyboard
A computer keyboard is an input device that uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch
cards and paper tape, interaction via teleprinter-style keyboards became the main input
method for computers. Keyboard keys (buttons) typically have characters engraved or printed on
them, and each press of a key typically corresponds to a single written symbol. However,
producing some symbols may require pressing and holding several keys simultaneously or in
sequence. While most keyboard keys produce letters, numbers or signs (characters), other keys or key combinations produce actions or execute computer commands.
In normal usage, the keyboard is used as a text entry interface for typing text and numbers into
a word processor, text editor or any other program. In a modern computer, the interpretation of
key presses is generally left to the software. A computer keyboard distinguishes each physical
key from every other key and reports all key presses to the controlling software. Keyboards are
also used for computer gaming, with either regular keyboards or keyboards with special gaming features.
iii. Scanner
Scanners capture text or images using a light-sensing device. Popular types of scanners include
flatbed, sheet fed, and handheld, all of which operate in a similar fashion: a light passes over the
text or image, and the light reflects back to a CCD (charge-coupled device). A CCD is an
electronic device that captures images as a set of analog voltages. The analog readings are then
converted to a digital code by another device called an ADC (analog-to-digital converter) and
transferred through the interface connection (usually USB) to RAM. The quality of a scan
depends on two main performance factors. The first is spatial resolution. This measures the
number of dots per inch (dpi) captured by the CCD. The second performance factor is colour
resolution, or the amount of colour information about each captured pixel. Colour resolution is
determined by bit depth, the number of bits used to record the colour of a pixel.
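To see how these two factors combine, the sketch below estimates the uncompressed size of one scan. The page size, dpi and bit depth are assumed example values, not figures taken from the text.

    # Uncompressed scan size = (dots across) x (dots down) x (bits per pixel) / 8.
    width_inches, height_inches = 8.5, 11     # assumed page size
    dpi = 300                                 # spatial resolution (dots per inch)
    bit_depth = 24                            # colour resolution (bits per pixel)

    dots = (width_inches * dpi) * (height_inches * dpi)
    size_bytes = dots * bit_depth / 8
    print(round(size_bytes / 1_000_000, 1), "MB")   # about 25.2 MB before compression

A 24-bit pixel can record 2^24 (about 16.7 million) different colours, which is why bit depth matters as much as dots per inch.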
iv. Camera
Digital cameras are a popular input source for multimedia developers. These cameras eliminate
the need to develop or scan a photo or slide. Camera images are immediately available to review
and reshoot if necessary, and the quality of the digital image is as good as a scanned image.
Digital capture is similar to the scanning process. When the camera shutter is opened to capture
an image, light passes through the camera lens. The image is focused onto a CCD, which
generates an analog signal. This analog signal is converted to digital form by an ADC and then
sent to a digital signal processor (DSP) chip that adjusts the quality of the image and stores it in
the camera's built-in memory or on a memory card. The memory card or stick has limited storage capacity. Images can be previewed on the card and, if not suitable, deleted to make space for additional images. Digital camera image quality, like scanning, is based on spatial resolution and colour resolution.
Output Devices
An output device transfers information to the outside of the computer. Output devices include the printer, speaker, display monitor and projector.
i. Printer
Printers remain an important multimedia peripheral device, despite the fact that multimedia
applications are primarily designed for display. Storyboards, system plans, schematics, budgets,
contracts, and proposals are just a few common documents that are frequently printed during
multimedia production. End users print images and web pages, as well as standard text documents.
There are two basic printing technologies: impact and nonimpact. Impact printers form images
and text by striking paper. Dot-matrix printers use a series of pins that strike the paper through an
inked ribbon. These printers are used for applications that require multiform output or high-speed
printing. They are easy to maintain and relatively inexpensive to operate. However, limited
colour and graphics capability, combined with high noise levels, make impact printers
undesirable for most printing needs. Nonimpact printers form printed output without physically
contacting the page. These devices include inkjet, photo, and laser printers.
ii. Speaker
Speaker systems are essential components of modern computers. Most early microcomputers
restricted sound output to warning sounds such as a loud beep when there was an error message.
Macintosh computers raised the bar on sound output when the first Mac introduced itself to the
world in 1984. A computer that could speak changed the prevailing belief that all computer
information needed to be in visual form. Sound capability soon became a requirement for a
multimedia computer. Sound output devices are speakers or headsets. They are plugged into the
soundboard where digital data is converted to analog sound waves. Soundboards can be a part of
the system board or added to a computer's expansion slots. Soundboard circuitry performs several basic processes: it converts digital sound data into analog form using a digital-to-analog converter, or DAC; records sound in digital form using an ADC; and amplifies the signal for delivery through the speakers or headset.
Other output devices include:
• Display monitor
• Projector
Computer Memory
Primary memory is the internal memory of the computer. It is also known as main memory and temporary memory. Primary memory holds the data and instructions on which the computer is currently working. Primary memory is volatile in nature: when the power is switched off, its contents are lost.
1. RAM
It stands for Random Access Memory. RAM is known as read/write memory. It is generally referred to as the main memory of the computer system. It is a temporary memory: the information stored in this memory is lost when the power supply to the computer is switched off. That is why RAM is called volatile memory.
Types of RAM
a) Static RAM
Static RAM, also known as SRAM, retains stored information as long as the power supply is ON. SRAMs are of higher cost and consume more power. They have a higher speed than Dynamic RAM.
b) Dynamic RAM
Dynamic RAM, also known as DRAM, loses its stored information in a very short time (a few milliseconds) even though the power supply is ON, so it has to be refreshed periodically. Dynamic RAMs are cheaper and consume less power than static RAMs.
2. ROM
It stands for Read Only Memory. ROM is a permanent type of memory. Its contents are not lost when the power supply is switched off. The content of ROM is decided by the computer manufacturer and permanently stored at the time of manufacturing. ROM cannot be overwritten by the computer.
Types of ROM
1. PROM (Programmable Read Only Memory) - PROM chips allow data to be written once and read many times; once the chip has been programmed, the information cannot be changed.
2. EPROM (Erasable Programmable Read Only Memory) - EPROM chips can be programmed time and again by erasing the information stored earlier in them. Information stored in an EPROM is erased by exposing the chip for some time to ultraviolet light.
3. EEPROM (Electrically Erasable Programmable Read Only Memory) - The EEPROM is programmed and erased electrically.
Secondary memory is the external memory of the computer. It is also known as auxiliary memory and permanent memory. It is used to store different programs and information permanently. Secondary memory is non-volatile in nature: data is stored permanently even when the power is switched off. Examples of secondary memory devices include:
1. Floppy Disks
3. Magnetic Tapes
4. Pen Drive
5. Winchester Disk
6. Optical Disk(CD,DVD)
Differences between Primary and Secondary Memory
4. Primary memory devices are more expensive than secondary storage devices; secondary memory devices are less expensive when compared to primary memory devices.
5. The memory devices used for primary memory are semiconductor memories; the secondary memory devices are magnetic and optical memories.
6. Primary memory is also known as Main memory or Internal memory; secondary memory is also known as External memory or Auxiliary memory.
7. Examples of primary memory: RAM, ROM, Cache memory, PROM, EPROM, Registers, etc.; examples of secondary memory: Hard Disk, Floppy Disk, Magnetic Tapes, etc.
Summary of Unit 4
1. The computer is made up of hardware components, and these components are the physical parts of the computer, such as the CPU, memory, input and output units.
2. Computers can be classified by size, purpose and speed.
Now that you have completed this study session, you can assess how well you have achieved its Learning Outcomes by answering these questions. You can check your answers with the Notes on the Self-Assessment Questions at the end of this unit.
SAQ 4.2: Examples of input units are: keyboard, mouse, scanner, digitising tablet and digital camera.
SAQ 4.3: Examples of Output units are: monitor, printer, graph plotter, projector and speaker.
References
Unit 5 Computer Software
Introduction
In this unit you will learn about computer software and its major operations. The software components are the intangible, non-physical parts of the computer system, which are crucial to the functioning of any computer system. Presently, software is used every day to perform basic and advanced tasks or complex computations. This unit discusses the fundamentals of computer software.
Computer software is a generic term for all sorts of programs that run on the hardware system. The hardware system on its own is just a bunch of electrical gadgets that do not perform any form of task, because it is the software that drives and directs the hardware. Computer software can be defined as a set of instructions that directs the computer to perform specific simple to complex tasks. Since the first program was written by Augusta Ada for Charles Babbage's proposed Analytical Engine, software has evolved and become more sophisticated over the years to cater for changing individual and organizational needs, while taking advantage of advances in hardware.
Software is classified into two major types, system and application programs, which are further classified into specific tools and functions. System software is machine- and software-centred, while application software is user-centred. Figure 5.1 shows the classification, where system software comprises the operating system, utility software and language translators, and application software comprises general-purpose, special-purpose and bespoke applications.
Figure 5.1: Classification of Computer Software
SYSTEM SOFTWARE
These refer to the suite of programs that facilitate the optimal use of the hardware systems and/or provide a suitable environment for the writing, editing, debugging, testing and running of user programs. Usually every computer system comes with a collection of these suites of programs, which are provided by the hardware manufacturers. They constitute an essential part of any computer system. Examples of system software are: the operating system, the language translators and the utility programs.
THE OPERATING SYSTEM
The operating system (OS) is a suite of programs that acts as an interface between the user of the computer on one hand and the hardware on the other. It provides the user with features that make it easier for him to code, test, execute, debug and maintain his programs, while managing the interaction between the software and hardware. An OS typically is installed on the computer by the manufacturer before it is sold. The OS oversees computer boot operations and tasks ranging from ones as simple as pressing a key on the keyboard to installing and running other software. If a computer system has no OS installed, it follows that most of these tasks will have to be carried out by the operator. In essence, the processor will be idle most of the time, which again will affect the throughput of the system. The major functions of an operating system include:
Input/output handling
Memory management
File management
Operating systems can be found in devices ranging from mobile phones to automobiles to personal and mainframe computers. There are many operating systems developed and in use presently, which can be categorized into proprietary and non-proprietary; the most popular ones include Windows, macOS, Android, iOS and Linux (see the classification by device below).
Mobile: Android, iOS, Windows, Blackberry, Symbian, Bada
Desktop: Windows, Unix, Mac OS, Solaris, MS DOS, Ubuntu
1. BATCH OPERATING SYSTEM: This type of operating system does not allow interactions
with the computer directly. There is an operator which takes similar jobs having the same
requirement and groups them into batches. The operator will then be responsible for sorting
the jobs with similar needs. Getting the right priorities for jobs is one major problem of this
type of system.
2. SINGLE-USER OPERATING SYSTEM: Provides the machine for only one user at a time.
3. TIME-SHARING (MULTI-USER) OPERATING SYSTEM: Allows many users, at various points, to use a particular computer system at the same time. Also referred to as multitasking; in time-sharing, each task is allotted some time to execute, so that all the tasks run smoothly.
4. DISTRIBUTED OPERATING SYSTEM: Here, several independent computers communicate with each other using a shared communication network. Distributed systems use multiple central processors to serve multiple real-time applications and multiple users. One advantage of this system is that the failure of one of the connected computers will not result in the failure of the whole system.
5. NETWORK OPERATING SYSTEM: These systems run on a server, and the server is responsible for managing data, users, groups, security and applications. The primary purpose of the network operating system is to allow shared file and printer access among the computers on a network. Examples include Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux and Mac OS X.
7. PROCESS CONTROL (REAL-TIME) SYSTEM: Designed to run with minimum operator intervention and to 'fail safe' in case of a hardware malfunction.
8. FILE INTERROGATION SYSTEM: Here there is a large set of data which is interrogated for information, and answers are provided without involving the users in the details of how the data is stored.
9. TRANSACTION PROCESSING SYSTEM: Used where transactions must be processed as they occur, e.g. airline seat reservation and banking. There can be several users accessing a data item simultaneously; the operating system gives each user the impression that he is the sole user of the system.
10. GENERAL PURPOSE SYSTEM: Used by computers having a large number of users performing a wide range of tasks. They operate in batch or multi-access mode. In batch mode you do not interact with your program when it is running, while in multi-access mode you can interact with your program while it is running. Examples include the XENIX, VAX, MVS and VM operating systems.
LANGUAGE TRANSLATORS
A programming language is the notation in which we express our computer programs. At the initial stage of computer development, programs were hard to write, read, debug and maintain. In an attempt to solve these problems, other computer languages were developed which are English-like and user-friendly. However, computers can only run programs written in machine language. There is therefore the need to translate programs written in these other languages into machine language. The suite of programs that translates programs written in these other languages into machine language is called LANGUAGE TRANSLATORS. The initial program, written in a language different from machine language, is called the SOURCE PROGRAM, and its equivalent in machine language is called the OBJECT PROGRAM. The three classes of language translators are assemblers, interpreters and compilers.
ASSEMBLERS
An assembly language is a set of notations using symbols or mnemonics that are easily readable, and is used to write computer programs. An assembler is a computer program that accepts a source program in assembly language and produces an equivalent machine language program called the object program or object code. Each machine has its own assembly language, so assembly language programs are machine dependent.
INTERPRETERS
An interpreter is a program that accepts a program in a source language, then reads, translates and executes it statement by statement.
COMPILERS
A compiler is a computer program that accepts a source program in one high-level language, then reads and translates the entire user's program into an equivalent program in machine language, called the object program or object code. Some examples of high-level languages are COBOL, FORTRAN and C. Compilation typically proceeds in stages (a minimal sketch of the first stage appears after the list below):
Lexical analysis: where the source text is broken into tokens, removing comments and whitespace.
Syntax analysis: where the given input is checked to see whether it follows the correct syntax of the programming language.
Semantic analysis: where declarations and statements of a program are checked so that their meaning is clear and consistent with the defined semantics and control structures.
Code generation: where equivalent target code is generated from the intermediate code representation.
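To make the lexical-analysis stage more concrete, below is a minimal sketch in Python; it is not taken from any particular compiler, and the token names and tiny "grammar" are invented purely for illustration. It shows how source text can be broken into tokens while whitespace is discarded.

import re

# Invented token set for the illustration; a real compiler would have many more.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),           # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"),  # identifiers
    ("OP",     r"[+\-*/=]"),      # a few operators
    ("SKIP",   r"\s+"),           # whitespace, discarded like comments would be
]

def tokenize(source):
    """Break a source string into (kind, value) tokens, dropping whitespace."""
    pattern = "|".join(f"(?P<{name}>{regex})" for name, regex in TOKEN_SPEC)
    for match in re.finditer(pattern, source):
        if match.lastgroup != "SKIP":
            yield (match.lastgroup, match.group())

print(list(tokenize("total = price * 3")))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'price'), ('OP', '*'), ('NUMBER', '3')]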
For each high-level language there is a different compiler. We can therefore talk of a COBOL compiler, a FORTRAN compiler, a C compiler, etc. A compiler also detects syntax errors, i.e. errors that arise from the wrong use of the language. Compilers are portable, e.g. a COBOL compiler on one machine accepts the same standard COBOL source programs as a COBOL compiler on another machine.
UTILITY SOFTWARE
This is a set of commonly used programs that provide general computer optimization services, e.g. anti-virus, registry cleaners, disk formatters, data generators, etc. Utility software performs routine file-handling tasks such as the following.
FILE CONVERSION
This covers the transfer of data from one medium to another, making an exact copy or simultaneously editing and validating it. For example, copying from a magnetic tape to a disk.
FILE COPY
It makes an exact copy of a file from one medium to another, or to the same medium, e.g. copying a file from one disk to another.
FILE REORGANIZATION
It involves tidying up storage by reorganizing cylinder and bucket indexes which have become disorganised.
FILE MAINTENANCE
It enables users to insert records into, and delete records from, sequential files. It also allows users to amend existing records.
SORTING
It provides certain parameters and requests the machine to arrange a set of records into a certain sequence or order, e.g. ascending or descending order of a key field (a small illustration follows below).
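As a small illustration of the sorting service described above, the Python sketch below arranges a set of records in ascending and descending order of a key field; the record layout and field names are invented for the example.

# Invented records; each record has a 'name' and a 'score' key field.
records = [
    {"name": "Ade", "score": 67},
    {"name": "Bola", "score": 82},
    {"name": "Chinedu", "score": 74},
]

ascending = sorted(records, key=lambda record: record["score"])
descending = sorted(records, key=lambda record: record["score"], reverse=True)

for record in ascending:
    print(record["name"], record["score"])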
APPLICATION SOFTWARE
These are programs that assist users with specific tasks like word editing and publishing, browsing, designing, etc. Application software may be general purpose (web browsers, word processing, etc.), special purpose (e.g. business management software) or bespoke (written for a particular customer). Common application packages include the following:
1. Accounting Packages: These applications cover sales ledger, invoicing, inventory control, payroll, fixed assets, purchase ledger and other financial and accounting processing.
2. Word Processing Packages: These are used for creating, editing and formatting text documents. Examples include Word Perfect, WordStar, Display Writer, Professional Writer, LOTUS, etc.
3. Spreadsheet Packages: A spreadsheet is a sheet of paper ruled into a grid of rows and columns on which you can store and analyse data. Examples include LOTUS 1-2-3, MS Excel, etc.
4. Utilities: They do the same job as the utility software discussed earlier. Some of their functions include undeleting and compressing files, and reading and writing a file sector by sector such that it would not work successfully if copied. Examples include NORTON, PC Tools, etc.
5. Integrated Packages: They are programs or packages that perform a variety of different processing operations that are compatible with whatever operation is being carried out. They perform a number of operations like word processing, database management and spreadsheet processing. Examples are Office Writer, Lotus Symphony, Framework, Enable, Ability, etc.
6. Graphic Packages: These are packages that enable you to create and manipulate images.
Examples include; CorelDraw, Adobe Photoshop, Adobe Illustrator, Blender, 3D Paint, etc.
7. Database Packages: These are packages used for designing, setting up and subsequently maintaining databases (including interrogation and modification, taking care of different user views). Examples include MySQL, MS Access, etc.
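To illustrate what a database package does, the short Python sketch below uses the built-in sqlite3 module purely as a stand-in for packages such as MySQL or MS Access; the table and column names are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")   # a temporary, in-memory database
conn.execute("CREATE TABLE students (name TEXT, score INTEGER)")   # design/set up
conn.executemany(
    "INSERT INTO students VALUES (?, ?)",
    [("Ade", 67), ("Bola", 82), ("Chinedu", 74)],
)

# Interrogation: retrieve every student scoring above 70.
for name, score in conn.execute(
        "SELECT name, score FROM students WHERE score > 70"):
    print(name, score)

conn.close()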
Summary of Unit 5
1. Computers are also made up of software components, which are non-physical and which direct the hardware on what tasks to perform.
2. Software is divided into system software and application software, each of which is further divided into the sub-categories discussed in this unit.
Now that you have completed this study session, you can assess how well you have achieved its
Learning Outcomes by answering these questions. You can check your answers with the Notes
Differentiate between system software and application software, giving types and
examples.
SAQ 5.1: Software, also called a program, consists of a series of related instructions, organized
for a common purpose, that tells the computer what tasks to perform and how to
perform them.
SAQ 5.2: System software controls the activities of computer hardware and the execution of computer programs, while application software consists of the users' programs that allow them to accomplish specific tasks.
Types of system software include operating systems, e.g. Android, Windows, etc., utility software, e.g. anti-virus, disk-cleaning, etc., and language translators, e.g. assemblers, interpreters and compilers. Types of application software include general purpose, e.g. web browsers such as Mozilla Firefox, etc., special purpose, e.g. Business Management Software, etc. and bespoke, e.g. air traffic control systems, Content Management Systems, etc.
References
Deitel, H. M., Deitel, P. J. and Choffnes, D. R. (2003) Operating Systems, 3rd edn., U.K.: Pearson.
Nancy C. M. (2017) Computers for Seniors, 5th edn., U.S.A.: For Dummies, pp. 1-35.
Timothy, J.O. and Linda, I.O. (2010) Computing Essentials, U.S.A: McGrawHill.
Introduction
In this unit you will learn about data processing. You will learn the differences between data and information.
Data processing is the manipulation of data into usable information. In computer terms, this is often done on databases and includes data entry and data mining. Data are raw facts and figures collected from events or other sources. Data need to be processed or organized so that they become meaningful information.
The data processing cycle is the order in which data is processed. There are four stages (a toy sketch of the cycle follows this list):
Data collection
Data input
Data processing
Data output
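The toy Python sketch below, with invented data values, walks through the four stages of the cycle in order, purely as an illustration.

# The data values below are invented purely for the illustration.
def collect():
    # Data collection: raw facts and figures gathered from some source.
    return ["23", "45", "12", "38"]

def input_stage(raw):
    # Data input: the raw data is converted into a form the computer can use.
    return [int(value) for value in raw]

def process(values):
    # Data processing: manipulation of the data into usable information.
    return {"total": sum(values), "average": sum(values) / len(values)}

def output(information):
    # Data output: the resulting information is presented to the user.
    print("Total:", information["total"], "Average:", information["average"])

output(process(input_stage(collect())))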
Data may be processed in any of the following ways.
Manual method: This involves human effort. The operations are performed using the brain to think and, perhaps, a calculator for calculation. It also involves using writing materials such as pen and paper.
Mechanical method: This involves the use of mechanical machines, such as typewriters and calculating machines, to process data.
Electronic method: This method involves the use of electronic devices like the computer system. It is suitable for processing large volumes of data, and its advantages lie in its speed and accuracy.
Real-time processing: Data entered into the computer are processed immediately and the results are produced at once.
Time-sharing: In time-sharing the system is interactive; it allows users to process data independently on a single computer at the same time. This method allows each user to share the processor's time with the other users.
Batch processing: This type of processing allows users to submit data or jobs which are accumulated over a given period of time before the processing takes place. This technique is suitable for jobs that do not require immediate output.
Distributed processing: In this method some of the processing devices and procedures are situated in different locations. The processing devices are connected together by transmission facilities.
Summary of Unit 6
2. Data are raw facts and figures collected from events or other sources.
4. There are four stages involved in the data processing cycle: data collection, data input, data processing and data output.
Now that you have completed this study session, you can assess how well you have achieved its
Learning Outcomes by answering these questions. You can check your answers with the Notes
SAQ 6.1: Data processing is the manipulation of data into usable information.
SAQ 6.2: Data are raw facts and figures collected from events or other sources, while processed data is information.
References
1. French, C. (1996). Data Processing and Information Technology (10th edn.). Thomson, p. 2.
NUMBER SYSTEMS
There are four common number systems namely: binary, octal, decimal and hexadecimal
systems. The number of digits used in the number system is referred to as the base. Thus,
decimal number system has a base of ten because it uses ten digits. Similarly, binary number
system has a base of two. The table below shows the common number systems with their digits
and bases.
Table 7.1 Common number systems with their digits and bases.
Binary:       digits 0, 1;                      base 2
Octal:        digits 0, 1, 2, 3, 4, 5, 6, 7;    base 8
Decimal:      digits 0, 1, 2, ..., 9;           base 10
Hexadecimal:  digits 0-9 and A-F;               base 16
Binary number system has only two digits: 0 and 1. It has a base of 2. Examples are 101, 1001 and 1011. These binary numbers are written as 101₂, 1001₂ and 1011₂ (or 101two, 1001two and 1011two) respectively. The subscript 2 indicates the base number.
Octal number system has eight digits: 0, 1, 2, 3, 4, 5, 6 and 7, and a base of 8. Examples of octal numbers are 176, 405, 260 and 737. They are written as 176₈, 405₈, 260₈ and 737₈ (or 176eight, 405eight, 260eight and 737eight) respectively. The subscript 8 indicates the base number.
Decimal number system uses ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 and has a base of 10. An example of a decimal number is 4756. This number may be written as 4756₁₀. 4756 means 4 thousands, 7 hundreds, 5 tens and 6 units:
(4 × 1000) + (7 × 100) + (5 × 10) + (6 × 1) = 4756.
Hexadecimal number system uses sixteen digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F. It has a base of 16. The first ten digits are equivalent to the decimal digits, while the letters A to F represent the decimal values 10 to 15; thus A = 10, B = 11, C = 12, D = 13, E = 14 and F = 15. Examples of hexadecimal numbers are 9B₁₆, 10C₁₆ and 35F₁₆. The subscript 16 indicates the base number.
The following procedure is used to convert from the decimal system to another number system:
Divide the decimal number by the new base and note the remainder.
Divide the quotient by the new base again, noting the remainder, and repeat until the quotient becomes zero.
List the remainder figures starting from the last to the first in successive order to arrive at the required answer.
Example: Convert 25₁₀ to binary (base 2).
Solution
25 ÷ 2 = 12 Rem. 1
12 ÷ 2 = 6 Rem. 0
6 ÷ 2 = 3 Rem. 0
3 ÷ 2 = 1 Rem. 1
1 ÷ 2 = 0 Rem. 1
Reading the remainders from the last to the first, 25₁₀ = 11001₂.
Now that you have been able to work through the example above, can you try this? Convert 737₁₀ to octal (base 8).
Solution
737 ÷ 8 = 92 Rem. 1
92 ÷ 8 = 11 Rem. 4
11 ÷ 8 = 1 Rem. 3
1 ÷ 8 = 0 Rem. 1
So 737₁₀ = 1341₈.
More examples: Convert 1046₁₀ and 268₁₀ to hexadecimal (base 16).
Solution
1046 ÷ 16 = 65 Rem. 6
65 ÷ 16 = 4 Rem. 1
4 ÷ 16 = 0 Rem. 4
So 1046₁₀ = 416₁₆.
268 ÷ 16 = 16 Rem. 12 = C
16 ÷ 16 = 1 Rem. 0
1 ÷ 16 = 0 Rem. 1
So 268₁₀ = 10C₁₆.
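The repeated-division method used in the examples above can be expressed as a short Python sketch; the helper function below is ours, written only for illustration.

DIGITS = "0123456789ABCDEF"

def decimal_to_base(number, base):
    """Divide by the base repeatedly and read the remainders in reverse order."""
    if number == 0:
        return "0"
    remainders = []
    while number > 0:
        number, remainder = divmod(number, base)
        remainders.append(DIGITS[remainder])
    return "".join(reversed(remainders))

print(decimal_to_base(25, 2))     # 11001
print(decimal_to_base(737, 8))    # 1341
print(decimal_to_base(1046, 16))  # 416
print(decimal_to_base(268, 16))   # 10C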
CONVERSION FROM OTHER NUMBER SYSTEMS TO THE DECIMAL SYSTEM
It is possible to convert from any number system to the decimal system. Each digit in the number is multiplied by the base raised to the power of its position, as you will see in the examples below. The results of the multiplications are then added up to arrive at the required answer.
Examples: Convert 1101₂ and 0110₂ to base ten.
Solution
1 × 2³ = 8
1 × 2² = 4
0 × 2¹ = 0
1 × 2⁰ = 1
1101₂ = 13₁₀

0 × 2³ = 0
1 × 2² = 4
1 × 2¹ = 2
0 × 2⁰ = 0
0110₂ = 6₁₀
Convert 105₈ to base ten.
Solution
1 × 8² = 64
0 × 8¹ = 0
5 × 8⁰ = 5
105₈ = 69₁₀
Now, let us test how far you have understood the subject. Convert 260₈ to a number in base ten. If you have worked it correctly, your answer should be 176. Compare your working with this:
2 × 8² = 128
6 × 8¹ = 48
0 × 8⁰ = 0
260₈ = 176₁₀
Example: Convert 416₁₆ and 35F₁₆ to base ten.
4 × 16² = 1024
1 × 16¹ = 16
6 × 16⁰ = 6
416₁₆ = 1046₁₀

3 × 16² = 768
5 × 16¹ = 80
F × 16⁰ = 15 (F = 15)
35F₁₆ = 863₁₀
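The positional expansion used in these examples can likewise be written as a short Python sketch; the helper below is illustrative only and is not part of the course text.

def to_decimal(digits, base):
    """Multiply each digit by the base raised to its position and add the results."""
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit, 16) * base ** position   # int(d, 16) maps A-F to 10-15
    return value

print(to_decimal("1101", 2))   # 13
print(to_decimal("105", 8))    # 69
print(to_decimal("260", 8))    # 176
print(to_decimal("35F", 16))   # 863
# Python's built-in int("35F", 16) performs the same expansion directly.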
Summary of Unit 7
There are basically four common number systems: binary, octal, decimal and hexadecimal.
Now that you have completed this study session, you can assess how well you have achieved its
Learning Outcomes by answering these questions. You can check your answers with the Notes
SAQ 7.1: The commonly used number systems are: binary, octal, decimal and hexadecimal.
SAQ 7.2:
9 × 16¹ = 144
B × 16⁰ = 11 (B = 11)
9B₁₆ = 155₁₀

1 × 16² = 256
0 × 16¹ = 0
C × 16⁰ = 12 (C = 12)
10C₁₆ = 268₁₀
SAQ 7.3:
0 × 2³ = 0
1 × 2² = 4
1 × 2¹ = 2
0 × 2⁰ = 0
0110₂ = 6₁₀

1 × 2² = 4
0 × 2¹ = 0
0 × 2⁰ = 0
100₂ = 4₁₀
References
Introduction
In this lecture you will learn how to measure the capacity of memory and storage media in a computer system. You will also learn the concepts of bit, byte and character in information storage.
Data in a computer is represented as a series of bits (binary digits), i.e. ones and zeroes. Since the birth of computers, bits have been the language that controls the processes that take place inside the machine.
Data and instructions are entered into the computer in alphabetic and numeric forms. These entries are converted to binary digits before the computer uses them. For convenience, the computer uses coding schemes to represent numbers, alphabets, special characters and symbols in bits. The common coding schemes are Binary Coded Decimal, Extended Binary Coded Decimal Interchange Code and the American Standard Code for Information Interchange.
(i) Binary Coded Decimal (BCD) uses 4 bits (2²) to represent the numbers 0-9.
(ii) Extended Binary Coded Decimal Interchange Code (EBCDIC) is an 8-bit coding scheme. It uses 8 bits to represent the numbers 0-9, letters and special characters. For instance, 1111 0101 represents 5 and 1100 0010 represents the uppercase letter B.
(iii) American Standard Code for Information Interchange (ASCII) uses 8 bits, giving 2⁸ = 256 possible codes, to represent the numbers 0-9, letters and special characters:
A-Z - 26 characters
0-9 - 10 characters
The remaining spaces in the 256-character code are used to store foreign alphabet letters.
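A quick way to see character codes at work is the short Python sketch below; it prints the ASCII code of a few characters and their 8-bit patterns (note that the EBCDIC pattern for 'B' given above differs from the ASCII one shown here).

# ord() gives the code of a character; format(..., "08b") shows it as 8 bits.
for character in ["B", "5", "@"]:
    code = ord(character)
    print(character, code, format(code, "08b"))
# B 66 01000010
# 5 53 00110101
# @ 64 01000000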
Binary to Denary
To convert a binary number to a denary number, simply add up the column values where a 1 appears. For example, 1100100₂ has 1s in the 64, 32 and 4 columns, so its denary value is 64 + 32 + 4 = 100.
Denary to Binary
To convert a number from denary to binary we reverse the process: place 1s into the columns whose values add up to the number, starting from the largest column, and finally place zeros in all the columns you have not filled. For example, 94 = 64 + 16 + 8 + 4 + 2, so the answer is 01011110.
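The column method can be checked with Python's built-in conversions, as the short sketch below shows.

print(int("1100100", 2))    # 100  (64 + 32 + 4)
print(int("01011110", 2))   # 94
print(format(94, "08b"))    # 01011110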
Using Binary to Store Real Numbers
Real numbers, or numbers with decimal places, are stored using scientific notation. For example, 345.765 can be written as
3.45765 × 10²
The computer then stores two separate integers, the mantissa and the exponent, each with a set number of bits, for example
1010001100101010
for the mantissa together with a shorter bit pattern for the exponent. Note that the number of bits that a computer uses to store the mantissa and the exponent affects the precision and range of the real numbers it can represent.
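As a rough illustration, the Python sketch below displays the 32 bits used for a single-precision value; the exact split into sign, exponent and mantissa assumes the common IEEE 754 layout, which the text above does not name explicitly.

import struct

value = 345.765                      # i.e. 3.45765 x 10^2
raw = struct.pack(">f", value)       # the 4 bytes of a single-precision float
bits = "".join(format(byte, "08b") for byte in raw)

print(bits)                          # all 32 bits
# Assuming the usual IEEE 754 layout: 1 sign bit, 8 exponent bits, 23 mantissa bits.
print(bits[0], bits[1:9], bits[9:])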
BITS
All information in the computer is handled using electrical components like the integrated
circuits, semiconductors, all of which can recognize only two states – presence or absence of an
electrical signal. Two symbols used to represent these two states are 0 and 1, and are known as
BITS (an abbreviation for BInary DigiTS). 0 represents the absence of a signal, 1 represents the
presence of a signal. A BIT is, therefore, the smallest unit of data in a computer and can either
store a 0 or a 1. Since a single bit can store only one of the two values, it can represent only two different things on its own; two bits together give four unique combinations:
00 01 10 11
Bits are, therefore, combined together into larger units in order to hold greater range of values.
BYTES
BYTES are typically a sequence of eight bits put together to represent a single alphabetic or numeric character. More often referred to in larger multiples, bytes are used to quantify the amount of data digitally stored (on disks, tapes) or transmitted (over the internet), and are also used to measure memory and document sizes.
Sn  Unit            Description
1   Bit             Short for BInary digIT. It is the smallest unit which can be defined in a computer.
2   Byte            A group of eight bits, handled by the computer as a unit at a time.
3   KiloByte (KB)   2¹⁰ bytes. Exactly 1024 bytes. Kilo usually means 1000 of something, but in binary 1024 is the round number. Text files and small graphic files are usually quoted in KB.
4   MegaByte (MB)   2²⁰ bytes. Exactly 1024 Kilobytes. Mega usually means 1 million of something, and in this case it is approximately 1 million bytes. Photos and music files are usually quoted in MB.
5   GigaByte (GB)   2³⁰ bytes. Exactly 1024 Megabytes. The capacity of some storage devices (DVDs, USB flash drives) is quoted in GB.
6   TeraByte (TB)   2⁴⁰ bytes. Exactly 1024 Gigabytes. This measurement is now commonly used with newer hard disk drives, mainframe memory and server hard drives.
7   PetaByte (PB)   2⁵⁰ bytes. Exactly 1024 Terabytes, approximately one quadrillion bytes (1024⁵).
Character
Character is an alphabet, a number or a symbol.
Symbols: @, (, %, -, +, ?, #
Blank spaces are also regarded as characters. The table below shows the number of bytes required to store different data items:
1 character   1 byte
1 integer     2 bytes
Word
The number of bits that a computer can process at a time is referred to as a word. Thus, a 16-bit word is equal to 2 bytes while a 32-bit word is equal to 4 bytes, depending on the processor used.
Exercise
Solution
(i) 5462 is one single precision value.
? kilobyte 1 megabyte
The storage capacity of RAM, hard drives or any other storage device is usually quoted in
Megabytes (MB) or Gigabytes (GB). Table 8.1 shows Volume capacity of common storage
media.
As well as knowing the order of the units (bits, Bytes, KB, MB, GB, TB, PB) it is important,
when doing calculations in computing, to be able to change from one unit to another. For
example, a high-definition movie might require 1,717,986,918 bytes of storage space. If you were telling a friend that you had downloaded this movie last night, you would be far more likely to say that it was 1.6 GB in size. The following rules apply:
i. To convert a small unit to a larger one we divide (for example, changing bytes to MB).
ii. To convert a large unit to a smaller one we multiply (for example, TB to MB).
What you multiply and divide by, depends on the number of places you are moving up or down.
Note:
KB to GB would be two places to the right, so you would divide by 1024 twice.
Example 1: Convert 4 MB into bytes. We are moving two steps to the left: 4 × 1024 × 1024 = 4,194,304 bytes.
Example 2: Convert 4096 GB into TB. We are moving one step to the right: 4096 ÷ 1024 = 4 TB.
Example 3: Convert 3.5 MB into bits. We are moving three steps to the left: 3.5 × 1024 × 1024 × 8 = 29,360,128 bits.
Example 4: Convert 68,719,476,736 bits into GB. We are moving four steps to the right: 68,719,476,736 ÷ 8 ÷ 1024 ÷ 1024 ÷ 1024 = 8 GB.
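The rules above can be captured in a small Python helper; this is an illustrative sketch only, and the unit list and function name are ours.

UNITS = ["bit", "B", "KB", "MB", "GB", "TB"]   # smallest to largest

def convert(value, from_unit, to_unit):
    """Multiply when moving to smaller units, divide when moving to larger ones."""
    i, j = UNITS.index(from_unit), UNITS.index(to_unit)
    while i < j:                          # moving right (to a larger unit): divide
        value /= 8 if i == 0 else 1024    # bit -> B is a factor of 8, others 1024
        i += 1
    while i > j:                          # moving left (to a smaller unit): multiply
        i -= 1
        value *= 8 if i == 0 else 1024
    return value

print(convert(4, "MB", "B"))              # 4194304
print(convert(4096, "GB", "TB"))          # 4.0
print(convert(3.5, "MB", "bit"))          # 29360128.0
print(convert(68719476736, "bit", "GB"))  # 8.0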
Summary of Unit 8
i. The capacity of storage media, disk files and computer memories is measured in bytes and multiples of bytes.
ii. A bit is a binary digit, a 0 or a 1, while a byte is a combination of 8 bits and is equivalent to a character.
iii. The three (3) data representation schemes commonly used in computers are BCD, EBCDIC and ASCII.
iv. Binary Coded Decimal (BCD) uses 4 bits (2²) to represent numbers.
v. Extended Binary Coded Decimal Interchange Code (EBCDIC) is an 8-bit coding scheme.
vi. American Standard Code for Information Interchange (ASCII) uses 8 bits to represent 256 characters.
vii. Measurements of storage capacity include the Kilobyte, Megabyte, Gigabyte, Terabyte and Petabyte.
Now that you have completed this study session, you can assess how well you have achieved its
Learning Outcomes by answering these questions. You can check your answers with the Notes
How many bits make (i) one byte, (ii) one nibble?
How many bytes are required to hold the following in the computer
memory?
Write the storage capacity of these drives: (i) DVD (ii) CD (iii) hard drive
SAQ 8.4:
Convert 8 MB into bytes. We are moving two steps to the left: 8 × 1024 × 1024 = 8,388,608 bytes.
References
Gibeson, G. A. (1991) Computer System Concept and Design. Englewood Cliffs, NJ. Prentice
Hall.
Gray N. A. B. (1987) Introduction to Computer System. Englewood Cliffs, NJ. Prentice Hall.
UNIT 9 COMPUTER NETWORK
Introduction
In this lecture you will learn about connecting computer systems together to share and exchange information. The types of computer networks will be taught. You will also learn about computer network topologies.
Definition
A computer network is a collection of computer systems linked together by a communication line in order to share resources. The communication line may be ordinary cables, telephone lines or broadcast channels. Each computer system in the network is referred to as a node.
There are three major types of network. They are local area network (LAN), metropolitan area network (MAN) and wide area network (WAN).
In a LAN, the computer systems are situated in the same locality or premises within few meters
away from each other. The computers are usually linked with ordinary cables.
Computers in a metropolitan area network (MAN) are a few kilometres away from each other. They are usually situated within the same metropolis, community or city, and are linked with communication lines that cover longer distances than those used in a LAN.
Computers in a WAN are spread over a wide area. They may be several kilometres away from each other. Because of the distance, computers in this type of network are linked with telephone lines or broadcast channels.
The arrangement of the physical connections in the network is called its topology. Now let us have a look at, and discuss, some of these topologies.
In a star topology every computer is connected to a central computer. The computers in the network also communicate with each other through this centrally placed computer. Figure 2 is an example of a star topology.
Activity 9.1
Take a critical look at the star topology structure above. Can you explain what happens to the network if the centrally placed computer fails?
In a ring topology, computers in the network are connected together in a ring form, as shown in Figure 3 below. Information sent by any of the computers is passed round the network until it is received by the computer it is addressed to.
Activity 9.2
How would you describe the level of security of information in this network (ring topology)?
The topology may not be suitable for applications requiring a high level of confidentiality.
A bus topology consists of a single cable with the terminator at each end. All present nodes are
connected to the single cable. There is no limit to the number of nodes that can be attached to
this network, but the number of connected nodes can actually affect the performance of the
network. All the computers in the network share the same bus. Figure 4 shows a bus topology
Figure 4: A Bus Connection.
A hierarchical network topology interconnects multiple groups that are located on separate layers to form a larger network. Each layer concentrates on specified functions, which allows the network to be organized and managed according to function and location.
(v) reducing cost of running hardware and software for computers within the same locality.
Summary of Unit 9
1. A computer network is a connection of computer systems linked together to exchange information.
2. There are three basic types of computer networks: LAN, MAN and WAN.
3. There are different network topologies, among which we have star, ring, bus and hierarchical topologies.
4. Networks make resource and information sharing possible.
Now that you have completed this study session, you can assess how well you have achieved its
Learning Outcomes by answering these questions. You can check your answers with the Notes
SAQ 9.1: A computer network is a connection of computer system linked together to exchange
information
SAQ 9.2: Two types of computer networks are: LAN and WAN
(ii) E-mail
Hafner, Katie. (1998). Where Wizards Stay Up Late: The Origins of The Internet.
Meyers, Mike (2007). All in One CompTIA A+ Certification Exam Guide (6th edn.). McGraw Hill.
Barry, J. R.; Lee, E. A. & Messerschmitt, D. G. (2004). Digital Communication. Kluwer Academic Publishers.
Hillebrand, Friedhelm (2002). GSM and UMTS: The Creation of Global Mobile Communications.
INTRODUCTION
This lecture will take you further in the study of computer networks. This unit will expose you to the Internet and the World Wide Web.
10.1.1 Introduction
The Internet, sometimes called simply "the Net", is a world-wide system of computer networks; a network of networks. Just as the interstate highway system links one city to another, the Internet links computers and networks across the world, giving access to information ranging from books, education, movies, current affairs, sports, arts, etc.
The Internet is very useful; it has applications in different areas of human endeavour. In banking, for example, we no longer have to stand in long queues for depositing, withdrawing or updating our accounts; with just a click of a mouse we can carry out these transactions.
Online education no longer requires a student to go to the institute and register to attend classes; in fact a student can now not only register and attend the classes but also take examinations online.
Online employment systems allow job seekers to register and obtain information about available vacancies.
Participating in a discussion about your favourite TV show with like-minded people across the globe.
The World Wide Web (WWW) is a large-scale, on-line repository of information that users can
search using an interactive application program called a browser. The World Wide Web is an
Internet-based network of Web servers. A Web Server is the host computer that publishes
information for users to view. In other words, we can define World Wide Web as a universal
database of knowledge.
When we connect to a Web Server we get information in the form of a PAGE. A PAGE displays information in the form of text, graphics or both. These pages are user-friendly and may contain links to other pages that hold more in-depth information about the specific topic. The links on these pages lead you to another page, which may reside on the same or a different server.
The Web gets its name because of the complex navigation that a user carries out without even realizing it. The connected text is called 'hypertext' and the page on which it is contained is called a 'Web Page'. These web pages are files, similar to those created with a word processor. The difference is that word processor files have extensions like .DOC (document) or .TXT (text), whereas these web documents have an .HTML (Hypertext Mark-up Language) extension. These web documents are stored on computers connected to a network. Many such networks join
together to form the Internet. Let us briefly explain some terminologies related to the internet.
Hypertext: Hypertext is text that 'has connections'. This special text contains the address of another computer that is part of the WWW. When we click on this text, the browser (a gateway to the Internet) understands it as an instruction to get that page from that computer and display it.
Link: A link is the connection from one web page to another using hypertext. These web pages are not physically connected but just contain the address of the page that should be displayed.
HTTP (Hypertext Transfer Protocol): This is the set of "rules and regulations" that is used to send a page or pages containing hypertext from one computer to another.
Browser: A browser is an interactive program that permits a user to view information from the
World Wide Web. The information contains selectable items that allow the user to view other pages or documents.
Some popular web browsers are Internet Explorer, Mozilla Firefox, Netscape Navigator and Crazy Browser.
Now let us take a look at this: when you go to a restaurant, you sit down and go through the menu. Normally you place your order through the waiter. The waiter takes your order to the kitchen for the cook to prepare the dishes. The waiter himself does not prepare the food; he only conveys the customer's order to the kitchen. After the cook has prepared the food, the waiter in turn brings it to the customer who placed the order and has been waiting for it. Can you relate this to how the Internet works?
The customer is the user of the Internet, requesting a service. The waiter is likened to the browser, which takes the request of the user to the server. The cook is the server, which actually produces what the customer requested. The waiter then brings it back to the customer, just as the browser displays the server's response to the user.
Address: Each computer on the Internet has a unique address of its own. This address is contained in the hypertext of a document. The browser software uses this address to locate the computer that holds the requested page.
Client: The computer that is requesting some service from another computer is called the client.
Server: A server is the computer that actually services the requests of other computers. Another name that is sometimes used for a server is 'host'. The server is usually a powerful computer with a large memory and hard disk containing many thousands of documents. The documents can be HTML files, sound files, picture files and others.
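To tie the analogy together, the short Python sketch below plays the role of the browser/client: it sends a request for a page to a web server and prints part of the returned document. The URL is an example address, not one taken from this unit.

from urllib import request

url = "http://example.com/"               # an example address, not from this unit
with request.urlopen(url) as response:    # the client/browser sends the request
    print(response.status)                # e.g. 200 when the server serves the page
    page = response.read().decode("utf-8", errors="replace")

print(page[:200])                         # the beginning of the returned document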
Summary of Unit 10
1. The Internet is the world's largest network; it is an international network called a network of networks.
2. The Internet has applications in different areas of human endeavour, ranging from education and banking to entertainment.
3. The World Wide Web (WWW) is a large-scale, on-line repository of information that users can search using a browser.
4. Some of the commonly used Internet terminologies are: browser, client, server, hypertext, request, etc.
Now that you have completed this study session, you can assess how well you have achieved its
Learning Outcomes by answering these questions. You can check your answers with the Notes
SAQ 10.1: The Internet is a world-wide system of computer networks through which sharing of
information is possible
- Banking
- Online Education
- Teleconferencing
SAQ 10.3: The World Wide Web (WWW) is a large-scale, on-line repository of information that users can search using a browser.
References
1. Ryan, Johnny (2013). A History of the Internet and the Digital Future.