
Computer Science I (45 hours)

Purpose of the Course:


To provide the learner with knowledge and skills of computers and their applications in
processing of different types of data.

Expected Learning Outcomes:


At the end of the course, a learner will be able to:
1. Explain how computers work – how they acquire, process, store and communicate
information.
2. Specify computer hardware and software requirements for a given engineering
application
3. Explain how information is stored in a computer storage system
4. Explain the different types of errors generated by computer systems
5. Use different types of computer software packages
Course Content (Lifted from MME - Syllabus)
1. Computer systems fundamentals;
2. Hardware: Input devices, output devices, secondary storage devices, CPU and the
control step. Machine code. Communication; Cloud Computing
3. Software: Systems software, operating systems, compiling systems,
utilities. Information storage: bits, bytes, words;
4. Binary Numbers: Binary integers and fractions, floating point; Character codes:
ASCII, EBCDIC;
5. Errors generated by computers: rounding, truncation, and cancellation errors;
6. Application Software Packages: word-processing, spreadsheets, database
management, mathematical programming and statistical software (tabulations and
regression)
7. Introduction to software development: knowledge-based systems (A.I.)
Course Content

Introduction To Computer Fundamentals


• Introduction to computers and computer science
• History, Evolution and generations of Computers
• Components of a Computer System
• Computer Security and Privacy
Computer Hardware
• Input devices
• Output devices
• CPU and GPU
• Memory - primary vs secondary memory
• Storage
Computer Software
• Types of computer software
• System Software (operating system, utility software, drivers)
• Application Software (e.g., word processors, spreadsheets, database management systems)
Computer Networks and Internet
• Introduction to Computer Networks, Types of Networks (LAN, WAN, MAN)
• Internet, Intranet, Extranet, World Wide Web
• Network Topologies (Star, Bus, Ring)
• Network Protocols (TCP/IP, HTTP, FTP)
• Network Devices (Hub, Repeater, Bridge, Switch, Router, Gateways and Brouter)
Data representation and Number system
• Decimal number system
• Binary number system
• Octal number system
• Hexadecimal number system
• Number systems conversions
• Binary arithmetic—Binary addition, binary subtraction
• Signed and unsigned numbers—Complement of binary numbers
• Binary data representation—Fixed point number representation, floating point number representation
• Binary coding schemes—EBCDIC, ASCII, Unicode
Introduction to Programming
• Program development life cycle
• Algorithm, flowchart, pseudo code
• Control structures
• Programming paradigms (Structured programming, Procedural programming,
modular programming, Object-oriented programming, Aspect-oriented
programming, Characteristics of a good program)
• Introduction to C++
Computer systems fundamentals
Introduction
There are four main interrelated disciplines that study computing. These are the pillars
of the technology field and most other tech courses are an iteration of these. Although
they are interconnected and will have a large overlap, each has its core area of study. They
include;
• Computer engineering
• Information technology
• Software engineering
• Computer science

Computer Engineering
Computer engineering is a field that mainly focuses on the hardware components of a
computer and computing systems. It involves application of both computer science and
electrical engineering in the designing and development of computer hardware and
firmware.

Information technology
This field of computing focuses more on computer systems and networks; thus, in a typical work setup, IT professionals will mainly install, maintain and improve a company’s IT infrastructure and assets. IT focuses mainly on the practical implementation, management, and support of technology systems and services, unlike computer science, which focuses on the theoretical aspects of computing. Key areas of IT include database and network management, information security, computer technical support, software development, etc.

Software engineering
The main focus of software engineering is how to design, build, test and maintain software
systems. It is less about the actual code that gets written and more about the processes
one goes through to write the code. Software engineers ensure that code is properly tested,
changes to code during updates are seamlessly and stably managed, and teams are
working together with a common set of standards and practices.

Computer Science
This is the study of computers and computational systems and their practical and theoretical applications for processing information. Computer science builds on the theory of computation, which has deep roots in logic, mathematics, and philosophy from hundreds of years before computers existed. In fact, the first computer science departments grew out of mathematics departments. Theoretical disciplines include algorithms, information theory and the theory of computation, whereas practical disciplines include the design and implementation of hardware and software.

Computer science plays a crucial role in the field of mechanical engineering, offering tools
and methodologies that enhance design, analysis, and manufacturing
processes. Integrating computer science into mechanical engineering enhances
innovation, efficiency, and the overall capability to solve complex engineering problems.
The combination of computing and mechanical engineering is key to pushing the boundaries of what we can build and how efficiently it can be done. Key areas where computers and computer science are applied in mechanical engineering include:
1. Computer Aided Engineering - This includes both CAD and CAM. Computer
science helps mechanical engineers simplify complex design and manufacturing
tasks allowing for simplified efficient solutions to current engineering problems.
2. Finite Element Analysis - This is the process of predicting an object's behavior
based on calculations made with the finite element method. Application of
computers to FEA helps in improving designs by analyzing the performance of
various design alternatives.
3. Control Systems/Embedded systems - Integrating hardware and software to
control machines and devices, Programming and implementing control systems
for automated processes in manufacturing and robotics.
4. Mechatronics - Combines mechanical engineering with electronics, computer
science, and control engineering to design and manufacture smart systems and
products.
5. Robotics - Development of robots and automated systems for various
applications, from manufacturing to healthcare.
6. Additive Manufacturing (3D Printing) - Using specialized software to design
parts that are then produced using 3D printing technology. Predicting and
optimizing the printing process to ensure high-quality output.
7. Data Analysis and Machine Learning - Predictive Maintenance: Using data from
sensors and machine learning algorithms to predict equipment failures and
schedule maintenance. Analyzing data to optimize manufacturing processes,
reduce waste, and improve product quality.
8. Internet of Things (IoT) - Implementing IoT devices to collect data from
machines and processes, leading to more efficient and adaptive manufacturing
systems. Enabling remote monitoring and control of mechanical systems.
9. Software Development - Developing custom software tools to address specific
engineering problems or to streamline workflows. Writing software to simulate
physical systems or to model complex phenomena.
10. Virtual Reality (VR) and Augmented Reality (AR) - Using VR and AR for virtual
prototyping and visualization of mechanical designs. Developing immersive
training programs for operators and engineers.
11. Smart Manufacturing - The rise of smart manufacturing relies heavily on
integrating mechanical systems with software for real-time data processing,
automation, and decision-making.

Introduction to Computers.
Nowadays, computers are an integral part of our lives. They are found across almost all aspects of life and are steadily finding their way into almost every device and piece of equipment we interact with day to day.
A computer can be defined as an electronic machine that accepts data from the user,
processes the data by performing calculations and operations on it as per a stored
program, and generates the desired output results. Their ability to process data at
lightning speed with extreme accuracy and their shrinking size has enabled them to
infiltrate all aspects of our modern lives.
The term computer comes from the word compute which means to calculate; therefore,
all computers ever do is to take in some input in the form of binary numbers or logic
inputs, perform arithmetic and logic operations on the input and produce the desired
output.
From the definition, it is clear that a computer has four key features:
1. Input
2. Processing
3. Storage
4. Output

Input

In order to process data, a computer needs to be able to receive data and instructions. This is carried out using several input devices connected to the computer. The input devices receive instructions from the user and, using various software, convert these instructions into binary digits which a computer can understand and work with. They include the mouse, touchscreen, microphone, keyboard, etc.

Output
Once a computer has processed its data, it communicates this data using one of the various output devices connected to it. The output can be in different forms such as images, graphics, audio, video, etc. Output devices are those devices that give us the processed result from the input data given to a computer system. Output devices include monitors, printers, speakers, projectors, etc.
Storage/Memory
Memory is required to save data, information and computer programs. Memory is divided into cells, each with its own unique location/address. Data in memory is stored and retrieved by the processes called writing and reading, respectively. A memory unit consists of data lines, address selection lines, and control lines that specify the direction of transfer. The total number of bits a memory can store is its capacity.
There are two types of computer memory: primary memory (volatile) and secondary memory (non-volatile).

Processing

This is the key feature of a computer. It is carried out by its CPU which stands for Central
Processing Unit and is "the brain of the computer". A CPU is the primary component of a
computer that performs most of the processing and controls the operation of all
components running inside a computer. A CPU has two main sections, the Control Unit
and the Arithmetic and Logic Unit.
[Block diagram: interactions between the various parts of a computer]
CPU Functions
A CPU performs four main functions regardless of its type and size. The main functions
are
1. Fetch - The CPU retrieves an instruction from memory (RAM). When a program
is first run, instructions regarding the program are loaded from its storage (hard
disk) to memory for faster access. During executions instructions are fetched
from RAM.
2. Decode - Fetched instructions are interpreted by the control unit, which then determines the next course of action.
3. Execute- The control unit directs the movement of data while the ALU performs
arithmetic and logic operations.
4. Store - The result of the operation is written back to memory or a register.
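
To make the four steps above concrete, here is a minimal C++ sketch of a fetch-decode-execute-store loop. The instruction format, opcodes and memory layout are invented purely for illustration; real CPUs use far richer instruction sets.

// Minimal sketch of the fetch-decode-execute-store cycle described above.
// The opcodes and memory layout are made up for illustration only.
#include <cstdint>
#include <iostream>
#include <vector>

enum Opcode : uint8_t { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

struct Instruction {
    Opcode op;
    uint8_t addr;   // memory cell the instruction operates on
};

int main() {
    std::vector<int> memory = {7, 5, 0};          // data memory: two inputs, one result cell
    std::vector<Instruction> program = {          // program memory
        {LOAD, 0},   // accumulator = memory[0]
        {ADD, 1},    // accumulator += memory[1]
        {STORE, 2},  // memory[2] = accumulator
        {HALT, 0}
    };

    int accumulator = 0;   // register holding intermediate results
    std::size_t pc = 0;    // program counter

    while (true) {
        Instruction inst = program[pc++];              // FETCH: read next instruction, advance PC
        switch (inst.op) {                             // DECODE: interpret the opcode
            case LOAD:  accumulator = memory[inst.addr]; break;       // EXECUTE
            case ADD:   accumulator += memory[inst.addr]; break;
            case STORE: memory[inst.addr] = accumulator; break;       // STORE result back
            case HALT:  std::cout << "Result: " << memory[2] << "\n"; // prints 12
                        return 0;
        }
    }
}

Running the toy program adds 7 and 5 and stores the result back in memory, mirroring how a real CPU repeatedly fetches, decodes, executes and stores.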

Data versus Information


Data can be defined as raw facts fed into the computer for processing. On its own, data has little meaning to the user and may include numbers, letters and symbols.
Information, on the other hand, refers to already processed data that is useful for decision making.
Processing is the act of converting from one form to another, i.e., from data (raw facts) to information (processed data).
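
As a small illustration of this data-to-information idea, the C++ sketch below turns raw marks (data) into an average and a pass rate (information). The marks and the pass threshold of 50 are made-up illustrative values.

// Processing: raw marks (data) become an average and a pass rate (information).
#include <iostream>
#include <vector>

int main() {
    std::vector<int> marks = {45, 67, 82, 38, 90};    // raw facts (data)

    int total = 0, passes = 0;
    for (int m : marks) {
        total += m;
        if (m >= 50) ++passes;                        // assumed pass mark of 50
    }

    double average = static_cast<double>(total) / marks.size();
    double passRate = 100.0 * passes / marks.size();

    // Processed output (information useful for decision making)
    std::cout << "Average mark: " << average << "\n";
    std::cout << "Pass rate: " << passRate << "%\n";
    return 0;
}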

Characteristics of a computer
The characteristics of a computer can be described as the features or qualities of the
computer. Understanding these characteristics helps in harnessing the full potential of
computers and helps to specify computing needs for a specific engineering task. They
include
− Speed - A computer can process data very fast, at the rate of billions of instructions
per second. Some calculations that would have taken hours and days to complete
otherwise, can be completed in a few seconds using the computer.
− Accuracy: Computers perform tasks with a high degree of accuracy and precision.
Once programmed correctly, they consistently produce reliable results without
human errors, making them indispensable for tasks that require precise calculations
or data processing.
− Storage: Computers have the ability to store vast amounts of data in various forms,
including text, images, videos, and more. This data can be accessed quickly when
needed and retained for long periods without degradation.
− Diligence: Computers do not experience fatigue or boredom, enabling them to
perform repetitive tasks tirelessly for extended periods. This characteristic makes
them ideal for automating routine processes and handling large volumes of data
without a decline in performance.
− Versatility: Computers are versatile machines capable of performing a wide range
of tasks, from basic arithmetic calculations to complex simulations and multimedia
editing. With the right software and hardware, they can adapt to different
applications and serve diverse user needs.
− Automation: Computers excel at automation, allowing them to streamline
workflows and increase efficiency by executing tasks automatically based on
predefined instructions or algorithms. This automation capability is essential for
optimizing productivity in various industries and domains.
− Communication: Computers can communicate with each other and with external
devices through networks and interfaces. This enables data exchange, collaboration,
and remote access, facilitating interconnectedness and information sharing on a
global scale.
− Scalability: Computers can scale in terms of processing power, storage capacity, and
connectivity to accommodate growing demands. This scalability ensures that
computing systems can adapt to evolving requirements and handle increasing
workloads effectively.
− Reliability: While computers are complex systems composed of numerous
components, they are designed to operate reliably under normal conditions.
Redundancy, error-checking mechanisms, and fault-tolerant features help minimize
the risk of system failures and ensure uninterrupted operation.
− Security: With the increasing importance of data privacy and cybersecurity,
computers incorporate various security measures to protect sensitive information
and prevent unauthorized access or malicious attacks. These measures include
encryption, authentication, firewalls, and antivirus software.

Advantages of Computer
1. Speed − Computers can execute programs quickly. Thousands of instructions can
execute in milliseconds or seconds.
2. Accuracy − Computers can perform very complex computations accurately in a
very short period of time. If a user inputs the correct input to the computer, it gives
accurate results that can be used in decision-making.
3. Storage − Computers can store large amounts of data permanently. The data is
saved in files, which can be accessed at any time; these files are saved for a long
time period until a user deletes them.
4. Power of Remembering − A computer stores data permanently. It forgets or loses
certain information only when asked to do so.
5. Versatility − A computer is a versatile device. It can run different programs simultaneously.
6. Diligence − A computer can do the assigned task diligently. A computer can work for hours without getting tired. Hence, it can do thousands of complex computations with the same accuracy.
7. Automation − A computer is an automated device. It works without human
intervention.
8. No I.Q. − A computer does not have its own I.Q.; it carries out the predetermined
tasks and does not take its own decisions.
9. No Feelings − A computer does not have emotions. It works as per the given
instructions by users.

Disadvantages of computer
1. Health Issues − Working long hours on computers can lead to health issues. Students playing games and accessing related applications for long periods of time can develop serious health problems.
2. Virus and hacking attacks − Viruses are unwanted programs that enter computers
through networks or the internet. These programs may steal information or
damage computers. Sometimes this locks the application programs of the
computer to affect its working.
3. No IQ − Computers cannot make their own decisions. Its functioning depends on
human interventions.
4. Negative effect on the environment − The increasing use of computers and
automated devices has posed a major threat to the environment.
5. Crashed Networks − Hackers may destroy the network, which affects the overall
working of the existing system. In today’s time, most of the data is on servers, so
destroying the network may be a serious threat to communication.
6. Online cybercrimes − The practice of using a computer to facilitate unlawful activities, including fraud, trafficking in illegal content and intellectual property, identity theft, and privacy violations. The relevance of cybercrime, particularly over the Internet and dark web, has increased as computers have become widely used in business, entertainment, and government.
7. Data and information violation − A breach of confidentiality occurs when
information is given to a third party without the data owner's authorization. The
owner of the data has the right to file for legal action to recover the potential losses.

History of computers
The history of computers dates back more than 200 years, to the 19th century. As mathematics and the number-crunching needs of commerce grew in complexity, a need emerged for machines to address these problems.
The abacus is regarded as one of the earliest calculating tools, developed and used by Asian merchants to rapidly evaluate simple mathematical functions. It could perform all four basic mathematical operations, and experienced users could handle more complex calculations.
Mechanical calculators followed, and in the 19th century Charles Babbage designed the difference engine and later the analytical engine, which are considered the precursors to the modern computer. Ada Lovelace, who worked with Babbage, discovered additional capabilities of the analytical engine and wrote ‘notes’ describing an algorithm that could be used to calculate a sequence of Bernoulli numbers using the engine. This achievement led to her recognition as the first computer programmer, even though no programming language had yet been developed.
The early 20th century saw the development of analogue computers and their various applications, including military calculations in World War II. In 1936, Alan Turing developed the concept of the Turing Machine, which laid the theoretical foundation for computer science.

In 1945, a US government-funded project developed the ENIAC (Electronic Numerical Integrator and Computer), one of the first programmable electronic computers. The inventors of ENIAC went on to develop the UNIVAC I, an upgrade and the first general-purpose electronic computer designed for business applications. It used 6,103 vacuum tubes, weighed 7.3 tons, consumed 145 kW of power, and performed 1,905 operations per second using a 2.25 MHz clock. Its processing unit measured 4.3 m by 2.4 m by 2.6 m high.

The introduction of the integrated circuit (IC) in the 1960s led to a revolution in the computing world known as the microprocessor revolution. It became possible to build more powerful and energy-efficient computers.

In 1971, Intel built the 4004, the first commercially available microprocessor, which made it possible to build personal computers. Later, MITS, an American electronics company, built the Altair 8800 using Intel's more advanced 8080 microprocessor. Bill Gates and Paul Allen offered to write software for the Altair using the new BASIC language, and this led to the evolution of proprietary software and of Microsoft as a company. The 8080 was an 8-bit processor with a 2 MHz clock; it had 6,000 transistors, and its package measured about 7 mm x 50 mm x 15 mm. Today, chips the size of a fingernail contain tens of billions of transistors, and wafer-scale processors contain trillions.

In 1965 Gordon Moore, the co-founder of Intel, observed that the number of transistors
in an integrated circuit (IC) doubled about every two years with minimal rise in cost.
This is known as Moore’s Law.
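
As a rough, back-of-the-envelope illustration of Moore’s Law, the short C++ program below starts from the commonly quoted transistor count of the Intel 4004 (about 2,300 in 1971) and doubles it every two years. The printed figures are projections from the rule of thumb, not measured data.

// Moore's Law as a simple doubling every two years (illustrative projection only).
#include <cstdint>
#include <iostream>

int main() {
    std::uint64_t transistors = 2300;            // Intel 4004, 1971 (commonly quoted figure)
    for (int year = 1971; year <= 2021; year += 2) {
        std::cout << year << ": ~" << transistors << " transistors\n";
        transistors *= 2;                        // double every two years
    }
    return 0;
}

By 2021 the projection reaches tens of billions of transistors, which is roughly the order of magnitude of today’s large chips.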

In 1976, Steve Jobs and Steve Wozniak co-founded Apple Computer on April Fools' Day and unveiled the Apple I, the first computer with a single circuit board and ROM (Read-Only Memory).

The 1980s saw the standardization of personal computers, in particular due to the popularity of the IBM PC. Apple's Macintosh computer popularized the graphical user interface.
In 1991, the World Wide Web was released, which revolutionized information sharing and communication. In 1995, Windows 95 was released, bringing the GUI and multitasking to a whole new level. Computers became more user-friendly and accessible to the masses.
In the 21st century, smartphones, cloud computing and increased connectivity changed computing by making it portable and touch-based.

Generations of Computer
The history of computers can be better assessed by grouping the evolution in computing
into five distinct generations, each representing a significant leap in technological
advancements and breakthroughs. The distinct generations are

First generation (1940s-1950s)


Computers mainly used vacuum tubes to perform the arithmetic and logic operations.
Magnetic drums were used for memory and input was done through punched cards.
Programming these machines was difficult and time consuming. They were bulky,
expensive and consumed a lot of electricity.

Second Generation (1950s-1960s)


Vacuum tubes were replaced by transistors, and this revolutionized computer design, making computers faster, smaller and more energy-efficient. Assembly language was developed during this generation, along with early high-level programming languages such as COBOL and FORTRAN. Input was still mainly through punched cards and magnetic tape.

Third Generation (1960s-1970s)


Improved IC designs meant smaller and faster computers. High-level languages like C and Pascal were developed during this period. Magnetic disks were used for storage, and keyboards and monitors were introduced as the primary input/output devices. More advanced peripherals were also introduced during this era.

Fourth Generation (1970s-Present)


We are currently between the fourth and fifth generations. Computers use advanced
Microprocessors and high-level and object-oriented languages are predominantly in use.
Keyboards, mice, monitors, printers, touchscreens, microphones, speakers and various
sensors are used as primary input/output devices.
Solid-state drives (SSD) have replaced the traditional HDD which used magnetism to
store data. Volatile memory has advanced and currently DDR4 and DDR5 are the most
commonly used.

Fifth Generation (Present and Beyond)


Fifth-generation computers focus on artificial intelligence and quantum computing.
These systems are designed to mimic human intelligence, solve complex problems, and
process vast amounts of data quickly. Innovations in AI, machine learning, natural
language processing, and quantum mechanics are at the forefront of this generation.
They use advanced languages and frameworks for AI and ML (e.g., Python, R, TensorFlow). Storage is done through cloud storage, advanced SSDs of up to 64 TB, and other emerging technologies. Input/output is through voice recognition, natural language processing, and virtual/augmented reality interfaces. They are highly compact and diverse (from powerful servers to wearable devices).
Some examples include; AI-driven systems, quantum computers (e.g., Google’s
Sycamore, IBM’s Q System One)

Classifications of Computers
Computers can be classified according to the following factors
• Physical size & processing power
• Purpose for which they are designed
• Functionality (Method/mode of operation)

Classification of computers according to physical size and processing power


Based on physical size computers can be classified into four main groups
• Supercomputers
• Mainframe computers
• Minicomputers
• Microcomputers

Supercomputers
Supercomputers are the fastest, largest, most expensive and most powerful computers available. Their characteristics are: they are the fastest computers, largest in size, most expensive, have huge processing power, are very heavy, generate a lot of heat and use multiple processors. They are operated by computer specialists. A supercomputer can be operated by over 500 users at the same time.
Applications: They find application in scientific research, defense and weapons analysis, nuclear energy research, weather forecasting, petroleum research, etc. These tasks use large amounts of data, which need to be manipulated within a very short time.
Examples of supercomputers are the CRAY T3D, NEC-500 and CDC 6600.

Mainframe computers.
Mainframes are less powerful & less expensive than supercomputers. Mainframe executes
many programs concurrently and supports many simultaneous executions of programs.
They are mostly found in government and big organizations such as banks, hospitals,
airports etc.
Characteristics include: large storage capacity, large in size, multi-user, multi-processing, and support for a variety of peripherals.
Applications: Mainframe computers are mostly found in government departments, big organizations and companies which have large information processing needs. For example, they are used in banks and hospitals for preparing bills, payrolls, etc.; in communication networks such as the Internet, where they act as servers; and by airline reservation systems, where information on all the flights is stored.
Examples of mainframes: IBM 360 and 4381, ICL 39 Series, CDC Cyber series, BINAC, UNIVAC.
Minicomputers.
A minicomputer is physically smaller than a mainframe. However, it can support the same peripheral devices supported by a mainframe. Their characteristics include: multi-user (e.g., can be operated by 6 users at a time); easier to manufacture and maintain compared to mainframes; cheaper than mainframes; they handle smaller amounts of data, are less powerful and have less memory than mainframes; and they are slow compared to mainframe computers.
Applications: used in scientific laboratories, research institutions, engineering plants and automatic processing. They are also well adapted for functions such as accounting, word processing and database administration.

Microcomputers.
Microcomputers are the PCs mostly found today in homes, schools and many small offices. They are called personal computers (PCs) because they are designed to be used by one person at a time. Their characteristics include: cheaper than both mini and mainframe computers; very fast (high processing speeds); small in size, hence they occupy less office space; more energy efficient (consume less power); and more reliable than the early mainframe computers.
Applications: training and learning institutions such as schools, small business enterprises, and communication centers (as terminals).
The following are the various types of microcomputers in operation today arranged in
descending order according to size.
• Desktop computer: designed to be placed on top of an office desk.
• Notebook or laptop: portable and convenient for mobile users.
• Personal Digital Assistant (PDA): small enough to fit in the pocket.
• Tablets and mobile phones.

Classification according to purpose


Computers can be classified according to the tasks they perform as
• general or
• special purpose computers.

General purpose computers


They are the most common types of computers in use today. Their flexibility enables them to be applied in a wide range of applications such as document processing, performing calculations, accounting, and data and information management.
Examples of general-purpose computers: Mainframes, Minicomputers, Microcomputers
& Laptops used in most offices & schools.

Special-purpose computer.
A special-purpose computer is designed to handle/accomplish a particular specific task
only. Such computers cannot perform any other task except the one they were meant to
do. Therefore, the programs which are used in a special-purpose computer are fixed
(hard-wired) at the time of manufacture.
For example: In a computer Network, the Front-End Processor (FEP) is only used to
control the communication of information between the various workstations and the host
computer.
A Special-purpose computer is dedicated to a single task; hence it can perform it quickly
& very efficiently.
Examples of special-purpose computers:
• Robots used in a manufacturing industry for production only.
• Mobile phones used for communication only.
• Calculators that carry out calculations only.
• Computers used in Digital watches.
• Computers used in Petrol pumps.

Dedicated computer
A Dedicated computer is a general-purpose computer that is committed to some
processing tasks; though capable of performing a variety of tasks in different application
environments.
E.g., the computer can be dedicated to carrying out Word processing tasks only.

Classification according to functionality


Usually, there are two forms of data: digital data and analogue data. Computers can be classified according to the type of data they can process as:
• Digital computers.
• Analogue computers
• Hybrid computers.

Digital Computers
This is the most commonly used type of computer. Digital computers process data that is
discrete in nature. Discrete data also known as digital data is usually represented using a
two-state square waveform. It can process both numeric & alphabetic data within the
computer, e.g., 0, 1, 2, 3…, A, B, C….
Their operation is based on 2 states, “ON” & “OFF” or on digits “1” & “0”. Therefore, any
data to be manipulated by a digital computer must first be converted to digital form. Most
of the devices found at homes today are digital in nature.
Examples: a television with a button which is pressed to increase or decrease the volume, digital watches, calculators, etc.

Analogue computers.
An Analogue computer is a computer that operates on continuous data. They carry out
their data processing by measuring the amount of change that occurs in physical
attributes/quantities, such as changes in electrical voltage, speed, currents, pressure,
length, temperature, humidity, etc.
An Analogue computer is usually a special-purpose device that is dedicated to a single
task. For example, they are used in specialized areas such as in: Scientific or engineering
experiments, Military weapons, controlling manufacturing processes like monitoring &
regulating furnace temperatures and pressures, Weather stations to record & process
physical quantities, e.g., wind, cloud speed, temperature, etc.
The output from analogue computers is in the form of smooth graphs produced by a
plotting pen or a trace on a Cathode Ray Tube (CRT) from which the information can be
read.
Analogue computers are very accurate & efficient since they are dedicated to a single task.
They are very fast since most of them use multiple processors.
Examples of analogue devices:
• Thermometer. It uses a volume of Mercury to show temperature. The
Thermometer is calibrated to give an exact temperature reading.
• A Petrol pump measures the rate of flow of Gasoline (petrol) & converts the volume
delivered to 2 readings; one showing the volume & the other showing the cost.
• A Post-office scale converts the weight of a parcel delivered into a charge for
posting.
• A Monitor with knobs that are rotated to increase brightness.
• A Television with knobs that are rotated to increase or decrease the volume.

Hybrid computers.
Hybrid computers are designed to process both analogue & digital data. They combine
both the functional capabilities of the digital and analogue computers. Hybrid computers
are designed by interconnecting the elements of a digital computer & analogue computer
directly into one processor, using a suitable interfacing circuitry. Hybrid computers are
more expensive.
Example;
In a hospital Intensive Care Unit, an analogue device may be used to measure the
functioning of a patient’s heart, temperature and other vital signs. These measurements
may then be converted into numbers and sent to a digital device, which may send an
immediate signal to the nurses’ station if any abnormal readings are detected.

Computing Environments
Computing environments refer to the various configurations and setups of computer
systems and networks that enable users to perform tasks, run applications, and manage
data. These environments can vary widely based on the type of infrastructure, purpose,
and security requirements.
Types of Computing Environments
• Personal Computing Environment: Typically involves individual devices such as
desktops, laptops, tablets, and smartphones. Used for personal tasks like web
browsing, document editing, gaming, and communication. Users are responsible for
maintaining security, often using antivirus software and secure passwords.
• Client-Server Environment: Involves a central server that provides services or
resources to client devices (computers, tablets, etc.). Commonly used in business
settings where a server manages resources like files, databases, and applications.
Security measures include access controls, authentication, and server-side security
configurations.
• Distributed Computing Environment: Multiple computers work together to achieve a
common goal, distributing tasks across various nodes or systems. Used in high-
performance computing, scientific simulations, and data analysis. Security involves
ensuring secure communication between nodes and protecting data integrity.
• Cloud Computing Environment: Delivers computing services (e.g., servers, storage,
databases, networking, software) over the internet. Types include public, private,
hybrid, and multi-cloud setups. Benefits include scalability, cost efficiency, and
accessibility. Cloud service providers include - Amazon Web Services (AWS),
Microsoft Azure and Google Cloud.
• Virtual Computing Environment: Uses virtualization technology to create virtual
machines (VMs) that emulate physical computers. Often used in data centers and for
testing, development, and isolated application environments.
• Embedded Computing Environment: Systems embedded within devices (e.g.,
appliances, cars, medical equipment) that perform dedicated tasks. Typically involve
real-time operating systems (RTOS) tailored for specific functions. Security challenges
include firmware vulnerabilities, physical access, and ensuring secure updates.
• Mobile Computing Environment: Focuses on devices such as smartphones, tablets,
and wearable devices that provide computing on the go. Heavily reliant on wireless
networks and mobile apps.
• Edge Computing Environment: Data processing occurs near the source of data
generation (e.g., IoT devices) rather than in centralized data centers. Used in
applications requiring low latency and real-time processing.
Computer security and privacy
Computer security is essential to protect data, maintain system integrity, and ensure the
availability of services. The increasing dependence on technology has made it crucial to
safeguard systems from unauthorized access, data breaches, and cyber-attacks.
Importance of Computer Security
• Protect sensitive data, such as personal, financial, and business information.
• Prevent unauthorized access, data theft, and manipulation.
• Maintain system functionality and performance.
• Safeguard organizational reputation and trust.
• Comply with legal and regulatory requirements.
Security threats come in various forms and can compromise the safety of computer
systems. Some common security threats include:
• Phishing: Fraudulent attempts to obtain sensitive information, such as usernames,
passwords, and credit card details, by disguising as a trustworthy entity.
• Social Engineering: Manipulating individuals into revealing confidential
information.
• Denial of Service (DoS) Attacks: Disrupting services by overwhelming a network
or system with traffic.
• Insider Threats: Security risks originating from within the organization, often by
employees or contractors.
• Malware (Viruses, Worms, Trojans) - Malware is malicious software designed to
harm, exploit, or otherwise compromise computer systems. Types of malware
include:
• Viruses: Code that attaches itself to legitimate programs, replicates, and
spreads, often corrupting or destroying data.
• Worms: Self-replicating malware that spreads through networks without user
intervention, often consuming bandwidth or overloading systems.
• Trojans: Malware disguised as legitimate software, which, once activated, can
steal data, provide unauthorized access, or cause harm.
Measures to ensure cyber protection and general computer security
• Strengthen Passwords and Use Multi-Factor Authentication (MFA) - Create strong passwords by using complex passwords with a mix of upper and lower case letters, numbers, and special characters (a small code sketch of such a check appears after this list). Use a password manager and generate a unique password for each account. MFA adds an extra layer of security.
• Keep Software and Systems Updated - Regularly Update all software with the latest
patches and security updates.
• Use Antivirus and Anti-Malware Software - Install reliable security software from trusted antivirus and anti-malware suppliers to detect and prevent threats. Run regular scans: schedule scans to check for malware or suspicious activity regularly.
• Secure Your Network - Use Strong Wi-Fi Passwords: Set up strong passwords for your
home or office Wi-Fi and change default router settings. Enable Network Encryption
e.g., use WPA3 or WPA2 encryption to secure your wireless network.
• Be Cautious with Emails and Links - Avoid Phishing Scams, be wary of unsolicited
emails or messages asking for personal information. Do not click on links or download
attachments from unknown sources. Check the URL before entering login credentials
or sensitive information. Look for secure connections (HTTPS) and double-check for
misspellings or unfamiliar domains.
• Back Up Data Regularly - Create Regular Backups: Back up important data to external
hard drives, cloud storage, or other secure locations regularly.
• Etc.
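
As an illustration of the password-complexity guidance above, here is a minimal C++ sketch that checks whether a password mixes upper case, lower case, digits and special characters and meets a length threshold. The rules and the minimum length of 12 are illustrative assumptions, not a definitive security policy.

// Illustrative password-complexity check (rules and minimum length are assumptions).
#include <cctype>
#include <iostream>
#include <string>

bool isStrongPassword(const std::string& pw) {
    if (pw.size() < 12) return false;                  // assumed minimum length
    bool upper = false, lower = false, digit = false, special = false;
    for (unsigned char c : pw) {
        if (std::isupper(c)) upper = true;
        else if (std::islower(c)) lower = true;
        else if (std::isdigit(c)) digit = true;
        else special = true;                           // anything else counts as special
    }
    return upper && lower && digit && special;
}

int main() {
    std::cout << std::boolalpha
              << isStrongPassword("password123") << "\n"       // false: too short, no upper case/special
              << isStrongPassword("Str0ng!Passw0rd") << "\n";  // true: mixes all four classes
    return 0;
}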

Introduction to Computer Networks


A computer network is a collection of interconnected devices that communicate with each
other to share resources, data, and applications. Networking allows multiple devices, such
as computers, servers, and peripherals, to connect and exchange information, enhancing
communication, collaboration, and efficiency.
Types of Networks
Networks can be categorized based on their size, range, and purpose:
• Local Area Network (LAN) - This covers a small geographic area, such as a
home, office, or school. It is typically used to connect computers, printers, and
other devices within a single location. It is characterized by high-speed
connections and low latency; usually managed by a single organization.
• Wide Area Network (WAN) - This type of network spans a large geographical
area, often globally, connecting multiple LANs. It uses public or private
transmission systems, like leased lines or satellites. The Internet is the most
prominent example of a WAN.
• Metropolitan Area Network (MAN) - This covers a larger area than a LAN
but smaller than a WAN, usually a city or a large campus. It is used by businesses,
government bodies, and educational institutions for regional connectivity.
• Personal Area Network (PAN) - A small network covering a very short range,
usually a few meters, like Bluetooth or USB connections. Commonly used for
connecting personal devices such as smartphones, tablets, and laptops.
• Internet, Intranet, Extranet
Internet - This is a global network connecting millions of private, public, academic, business, and government networks. It uses standard protocols (like TCP/IP) for communication, enabling access to web services, email, and more.
Intranet - A private network accessible only within an organization, used to
share internal resources, documents, and services. Uses the same technologies as
the Internet (e.g., web browsers, servers) but is restricted to authorized users.
Extranet - An extended version of an intranet that allows external users (such as
partners, vendors, or customers) limited access to an organization’s resources.
Facilitates collaboration while maintaining security and control.
• World Wide Web (WWW) is a vast collection of interlinked hypertext
documents and resources accessed via the Internet using web browsers. It uses
HTTP/HTTPS protocols to transmit data and is built on technologies like HTML,
CSS, and JavaScript. The web allows users to access information, communicate,
shop, and interact with content worldwide.

Network Topologies
Network topology refers to the physical or logical arrangement of nodes (devices) in a
network. Nodes usually include devices such as switches, routers and software with switch
and router features. There are different types of network topologies;
• Star Topology - All devices connect to a central hub or switch. If the central
device fails, the entire network goes down, but individual node failures do not
affect others.
• Bus Topology - All devices share a common communication line (bus). Easy to
set up, but a single point of failure in the main cable can disrupt the entire
network.
• Ring Topology - Devices are connected in a circular manner, with each device
having exactly two neighbors. Data travels in one direction; a failure in any device
can affect the entire network unless a redundant ring is used.
• Mesh Topology - Every device is connected to multiple other devices, providing
multiple paths for data. Highly reliable and resilient but complex and costly to
implement.
Network Devices
Various devices facilitate the functioning and management of computer networks:
• Hub:
− A basic device that connects multiple Ethernet devices, making them act as
a single network segment.
− Broadcasts incoming data to all connected devices.
• Repeater:
− Extends the range of a network by amplifying the signal over long
distances.
• Bridge:
− Connects and filters traffic between two or more network segments,
reducing overall network traffic.
• Switch:
− More intelligent than a hub, it sends data only to the intended device,
improving network efficiency.
• Router:
− Directs data packets between different networks, such as between a home
network and the Internet.
− Routes traffic efficiently using IP addresses.
• Gateway:
− Acts as a "gate" between two networks, often translating protocols between
them.
• Brouter (Bridge Router):
− Combines the functions of both a bridge and a router, capable of routing
data for known protocols and bridging data for unknown ones.
Network Protocols Definition
Network protocols are a set of rules outlining how connected devices communicate
across a network to exchange information easily and safely. Protocols serve as a
common language for devices to enable communication irrespective of differences in
software, hardware, or internal processes.

Types of network protocols


Various types of network protocols can be categorized into the following three broad
categories to help organizations operate seamlessly across different business scenarios:

1. Network Communication Protocols: These protocols determine the rules and formats used to transfer data across networks. Communication protocols govern various aspects of analog and digital communications, such as syntax, authentication, semantics, and error detection, among others (a small sketch showing TCP and HTTP in action appears after this list). Some key network communication protocols include:
o Hyper-Text Transfer Protocol (HTTP): Commonly referred to as the protocol of
the internet that allows communication between a server and browser.
o Transmission Control Protocol (TCP): A reliable, connection-oriented protocol
that helps in the sequential transmission of data packets to ensure data reaches
the destination on time without duplication.
o Internet Protocol (IP): Facilitates routing the data packets across networks. IP contains addressing and control information to deliver packets across a network. It works along with TCP: while IP ensures the packets are delivered to the right address, TCP arranges them in the right order.
o User Datagram Protocol (UDP): Unlike TCP, UDP is a connectionless protocol
that doesn’t ensure a connection between the application and server before
transmitting a message. It’s effective for use cases such as broadcasts or multicast
connections.
o File Transfer Protocol (FTP): Allows file sharing between servers by establishing
two TCP connections, one for data transfer and the other for control. The data
transfer connection transfers the actual files while the control connection
transfers control information such as passwords to ensure data retrieval in case
of data loss.

2. Network Security Protocols: These protocols ensure safe data transmission over network connections. Network security protocols define the procedures to secure data from any unauthorized access. These protocols leverage encryption and cryptography to safeguard data. Here are the most widely used network security protocols:
o Secure File Transfer Protocol (SFTP): Helps securely transfer files across a
network by using public-key encryption and authenticating the client and server.
o Hyper-Text Transfer Protocol Secure (HTTPS): Overcomes the limitation of
HTTP by ensuring the security of data transmitted between the browser and
server through data encryption. HTTPS is a secure version of HTTP.
o Secure Socket Layer (SSL): Primarily helps secure internet connections and
safeguard sensitive data using encryption. SSL protocol enables both server-
client communication and server-server communication.
3. Network Management Protocols: Network managers require standard policies and procedures to manage and monitor the network and maintain smooth communication. Network management protocols ensure quick troubleshooting and optimal performance across the network. The following are essential network management protocols:
o Simple Network Management Protocol (SNMP): Helps administrators manage
network devices by monitoring endpoint information to proactively track
network performance and pinpoint network glitches for quick troubleshooting.
o Internet Control Message Protocol (ICMP): Helps diagnose network connectivity
issues. Network devices employ ICMP for sending error messages, highlighting
congestion and timeouts, and transmitting other operational information to
assist in network troubleshooting.
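
To connect the protocol descriptions above to working code, here is a hedged C++ sketch that opens a TCP connection and sends a minimal HTTP GET request over it. It uses POSIX sockets (so it builds on Linux or macOS, not standard C++ alone), and the host example.com is just a placeholder; real applications would normally use a library and HTTPS.

// Minimal sketch: TCP (transport protocol) carrying an HTTP request (application protocol).
#include <iostream>
#include <string>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    addrinfo hints{}, *res = nullptr;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;                    // stream socket = TCP
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;   // DNS lookup

    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock < 0 || connect(sock, res->ai_addr, res->ai_addrlen) != 0) return 1;  // TCP connection
    freeaddrinfo(res);

    // HTTP is plain text sent over the TCP connection.
    std::string request = "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
    send(sock, request.c_str(), request.size(), 0);

    char buffer[1024];
    ssize_t n;
    while ((n = recv(sock, buffer, sizeof(buffer), 0)) > 0)   // read the HTTP response
        std::cout.write(buffer, n);

    close(sock);
    return 0;
}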
THE COMPUTER SYSTEM
A computer is an electronic device that accepts data as input, processes the input data by performing mathematical and logical operations on it, and gives the desired output. A computer system consists of various components that work together to perform the desired tasks. These components can be broadly grouped as:

• Humanware
• Hardware and
• Software.

Humanware
Humanware is the human user of a computer and can be seen as a method of adding a
human facet into the building of computers and development of computer programs. The
main goal of developing humanware is to make hardware and software as functional as
possible.
While software and hardware make up an actual computer, understanding humanware is
necessary when designing user interfaces (UI) and enhancing user experience (UX).
Humanware is the combination of hardware and software elements that make human
interaction with a device as good as possible.
In any design work, user considerations come first as users lie at the center of all business
operations. Often, developing humanware begins by defining who the computer’s
potential users are, what they are interested in, and what they need before designing the
infrastructure.

Hardware
These are the physical parts of a computer, i.e., the machine itself and its connected physical devices such as the monitor, keyboard, mouse, etc. Using these devices, we can control computer operations like input and output. A computer hardware setup will typically have a system unit and its connected peripherals (I/O devices).
The hardware components of a computer or personal computer can be categorized into 4 primary categories:
a. Display Devices
b. Input/Output Devices
c. System Unit
d. Other Peripheral Devices

a. Display Devices
A display device is a personal computer component that enables the user to view the text and graphical data produced by the computer.
system unit via a cable, and they have controls to adjust the settings for the device. They
vary in size and shape, as well as the technology used.

b. Input Output (I/O) Devices


These are devices that enable us to interact with the computer. With the help of input
devices, the user enters the data or instructions into the computer. This information or
data is accepted by the input devices and converted into a computer-acceptable format,
which is again sent to the computer system for processing. After processing, the computer
relays the processed information via appropriate output device.

c. System Unit
A system unit is the main component of a personal computer, which houses the other devices necessary for the computer to function. It comprises a chassis and other internal components such as:
• System board (motherboard)
• Microprocessor (CPU)
• Graphics Processing Unit (GPU)
• Memory modules
• Disk drives
• Network Interface Card (NIC)
• Adapter cards
• Power supply
• A fan or other cooling device, and ports
Motherboard
The main circuit board that houses the CPU, memory, and other essential components. It includes connectors for additional peripherals. It contains a chipset that manages data flow between the CPU, memory, and other peripherals. It also contains the BIOS/UEFI, firmware that initializes hardware during the booting process and provides runtime services for operating systems and programs. All the components of the computer are directly or indirectly connected to the motherboard.

CPU (Central Processing Unit)


The CPU is the heart or the "brain" of the computer and it performs calculations and
executes instructions given to the computer. Its speed and efficiency are crucial for the
overall performance of the computer system. With advancements in technology, CPUs
continue to evolve, offering greater power, efficiency, and capabilities.
Components of the CPU

1. Control Unit (CU)


This is the part of the CPU that controls what happens inside the CPU, ensuring it performs its tasks accurately. It directs the operation of the processor and ensures instructions are executed in the right order. It coordinates the operations of a computer's memory, arithmetic logic unit, and input/output devices.
Its main functions include fetching instructions from memory, decoding them, and executing them by coordinating with other parts of the CPU. The functions of the CU can be summarized as:

1. Read the code for the next instruction to be executed.
2. Increment the program counter so it points to the next instruction.
3. Read whatever data the instruction requires from cells in memory.
4. Provide the necessary data to an ALU or register.
5. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
2. Arithmetic Logic Unit (ALU)
The ALU is the part of the CPU that is responsible for carrying out all arithmetic and logical operations. As any information can be numerically encoded, general-purpose processing of information and all computational tasks take place in the ALU. Its efficiency and capability thus directly influence the overall performance of a computer. Advancements in technology like pipelining and parallelism have greatly improved the abilities of the ALU, enabling faster and more complex computing tasks.
Arithmetic operations conducted by the ALU include addition, subtraction, multiplication and division, while logical operations include AND, OR, NOT, XOR, etc. (a small sketch of these operations appears after this list).
3. Registers
As the ALU runs operations at extremely high speeds, fetching instructions and
information from storage or memory would mean slowing down its operations. To
improve on this, the CPU has its own memory that is relatively faster for holding
data temporarily during execution. These small sized memories are called registers.
4. Cache Memory
This is a high-speed memory located close to the CPU for holding frequently used data and instructions to speed up access.
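
As referenced in the ALU item above, the C++ snippet below mirrors the ALU's basic operations using the language's arithmetic and bitwise operators on two small binary values; the values are arbitrary examples.

// ALU-style operations expressed with C++ arithmetic and bitwise operators.
#include <bitset>
#include <iostream>

int main() {
    unsigned a = 0b1100, b = 0b1010;   // two 4-bit binary values (12 and 10)

    std::cout << "a + b   = " << (a + b) << "\n";                 // arithmetic: 22
    std::cout << "a - b   = " << (a - b) << "\n";                 // arithmetic: 2
    std::cout << "a AND b = " << std::bitset<4>(a & b) << "\n";   // 1000
    std::cout << "a OR  b = " << std::bitset<4>(a | b) << "\n";   // 1110
    std::cout << "a XOR b = " << std::bitset<4>(a ^ b) << "\n";   // 0110
    std::cout << "NOT a   = " << std::bitset<4>(~a) << "\n";      // 0011 (low 4 bits)
    return 0;
}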

CPU Performance Factors


1. Clock Speed
This is the operating speed of a CPU expressed in cycles per second. The unit of measurement is the hertz, abbreviated Hz. One hertz represents one cycle per second. A typical computer will be rated at 1-4 GHz, translating to 1-4 billion cycles per second.
Any information being processed by a CPU is first numerically encoded, i.e., converted to zeros and ones. This reduces all processing work done by the CPU to arithmetic and logical operations on the zeros and ones. Each operation requires one clock cycle to execute.
A higher clock speed translates to more instructions executed in one second, and thus a faster CPU, but other factors like CPU architecture also play a significant role.
2. Cores
A core is a processor within the CPU that can execute instructions and process data independently. Modern CPUs have multiple cores, each capable of executing its own thread of instructions simultaneously, thus increasing the number of instructions executed per clock cycle and allowing multitasking.
3. Threads
A thread is the smallest sequence of programmed instructions that can be managed independently by an operating system. Computers often support multi-threading, which allows a single core to execute multiple threads simultaneously (see the sketch after this list).
4. Instruction Set Architecture (ISA)
This is a subset of computer architecture and defines the set of instructions that the
CPU can execute and defines how a CPU is controlled by software. The ISA serves as
the boundary between software and hardware. Common ISAs include x86 mostly
used in personal computers, ARM for mobile phones, and RISC-V.
5. Microarchitecture
This refers to how a given instruction set architecture is implemented in a
processor. This dictates how instructions are processed and executed. Popular
microarchitectures include Intel’s Core, AMD’s Zen, ARM’s Cortex.
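
As referenced in the threads item above, here is a minimal C++ sketch of splitting work across several std::thread workers so that a multi-core CPU can run them in parallel. The array size and the worker count of four are arbitrary illustrative choices.

// Splitting a large sum across worker threads (compile with a flag such as -pthread).
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t N = 10'000'000;
    std::vector<int> data(N, 1);                      // ten million ones

    const unsigned workers = 4;                       // assumed number of hardware threads
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> threads;

    for (unsigned w = 0; w < workers; ++w) {
        threads.emplace_back([&, w] {
            std::size_t begin = w * N / workers;      // each worker sums its own slice
            std::size_t end   = (w + 1) * N / workers;
            partial[w] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        });
    }
    for (auto& t : threads) t.join();                 // wait for all worker threads

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "Sum = " << total << "\n";           // prints 10000000
    return 0;
}

Each worker sums an independent slice, so on a multi-core CPU the slices can genuinely run at the same time.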

Storage Devices/Memory
1. Memory
Primary Memory (RAM): Also known as main memory, this is temporary storage that the CPU uses to hold data and instructions currently in use.
Characteristics:
• Volatile: Loses its contents when the power is turned off.
• Speed: Faster access time than secondary memory, allowing for quick data access
and processing during runtime.
Types:
• RAM: Used for temporary storage of data and instructions during processing. Data
can be read or written in any order, making it very efficient for processing tasks.
They vary in size, typically a few gigabytes. The more RAM a system has, the more
data it can handle simultaneously.
Types of RAM
− DRAM (Dynamic RAM): Needs to be refreshed thousands of times per second
to maintain data.
− SRAM (Static RAM): Faster and more reliable than DRAM and used for cache
memory, but more expensive and less dense.
• Cache Memory: A smaller, faster type of volatile memory located inside the CPU,
used to speed up access to frequently used data.

Secondary Memory:
This refers to non-volatile storage that retains data even when the power is turned off. It
is used for long-term data storage. Characteristics include;
• Non-volatile: Data remains stored even without power.
• Speed: Generally slower than primary memory but offers much larger storage
capacity.
Types:
− Hard Disk Drives (HDDs): Traditional magnetic storage devices.
− Solid State Drives (SSDs): Faster, flash-based storage devices.
− Optical Discs: Such as CDs and DVDs, used for media storage.
− USB Flash Drives: Portable storage devices that use flash memory.
Difference between primary and secondary memory
a) Primary memory is used for immediate data processing, while secondary memory is
used for long-term storage of data and applications.
b) Primary memory is volatile, while secondary memory is non-volatile.
c) Primary memory is significantly faster than secondary memory, making it essential
for active tasks.

Graphics Processing Unit (GPU)


This is a specialized processor designed to accelerate rendering images and video,
enabling smooth graphics and visual effects in applications like gaming, 3D modeling,
and video editing. Unlike a CPU, which handles general-purpose tasks, a GPU is
optimized for parallel processing, making it highly efficient at performing multiple
calculations simultaneously.
Modern GPUs are also used for non-graphical tasks, such as machine learning and
scientific computations, thanks to their ability to process large amounts of data in parallel.
GPUs can either be integrated (built into the CPU) or dedicated (separate cards
installed in the computer).

Power Supply
All of a computer system's parts are powered by a power source. Typically, a power cord
connects the computer tower to an electrical outlet. The power supply unit
converts electrical power from the outlet into usable power for the computer's various
components.

Cooling system
A computer cooling system is essential for maintaining optimal temperatures of the
computer’s components, especially the CPU and GPU, which generate significant heat
during operation. Effective cooling prevents overheating, improves performance, extends
the lifespan of components, and ensures system stability. Several types of cooling systems
are used, e.g., air cooled, liquid cooled etc. A fan is usually attached to enhance cooling.

SOFTWARE

A computer system consists of hardware and software. The computer hardware cannot
perform any task on its own. It needs to be instructed about the tasks to be performed.
Software is a set of programs that instructs the computer about the tasks to be performed.
Software tells the computer how the tasks are to be performed; hardware carries out these
tasks. Different sets of software can be loaded on the same hardware to perform different
kinds of tasks. For example, a user can use the same computer hardware for writing a
report or for running a payroll program.

Types of Software
Software can be broadly classified in two categories:
1. System Software, and
2. Application Software.

System software
This is low-level software that manages and controls a computer's hardware and
provides basic services or platforms to higher-level software. System software controls
the basic functions of a computer and hides the complexity of the computer system from the
user and from application software, e.g., operating systems, compilers, interpreters, etc. It
runs in the background of your device, at the most basic level, while you use other
application software. This is why system software is also called "low-level software".
The most prominent features of system software are:
• Close to the system
• Fast in speed
• Difficult to design
• Difficult to understand
• Less interactive
• Smaller in size
• Difficult to manipulate
• Generally written in low-level language
The purposes/functions of the system software are:
1. Hardware Communication: System software serves as an interface between the
hardware and software components of a computer, enabling them to communicate
and work together.
2. Resource Management: System software manages computer resources such as
memory, CPU usage, and storage, optimizing their utilization and ensuring that
the system operates efficiently.
3. Security: System software provides security measures such as firewalls, antivirus
software, and encryption, protecting the system and its data from malware,
viruses, and other security threats.
4. User Interface: System software provides a user interface that allows users to
interact with the computer or computing device and perform various tasks.
5. Application Support: System software supports the installation and running of
applications and software on the system.
6. Customization: System software allows for customization of the system settings
and configuration, giving users greater control over their computing environment.

System software may be broadly divided into two categories based on their functionality
− System software for computer management
− System software for developing software

1. System software for computer management


System software for the management and functionality of a computer relates to the
functioning of different components of the computer, like, processor, input and output
devices etc. System software is required for managing the operations performed by the
components of a computer and the devices attached to the computer. It provides support
for various services, as requested by the application software. System software under this
category include
• Operating system,
• device drivers, and
• system utilities

2. System software for developing software


System software for the development of application software provides services and
software tools required for the development and execution of application software. They
include;
• The programming language software,
• translator software,
• loader, and
• linker

1. Operating System
An operating system (OS) is a type of system software that manages a computer’s
hardware and software resources. It provides common services for computer programs.
An OS acts as a link between the software and the hardware. It controls and keeps a record
of the execution of all other programs that are present in the computer, including
application programs and other system software.
The main functions of operating systems are as follows:
• Resource Management: The operating system manages and allocates memory,
CPU time, and other hardware resources among the various programs and
processes running on the computer.
• Process Management: The operating system is responsible for starting, stopping,
and managing processes and programs. It also controls the scheduling of processes
and allocates resources to them.
• Memory Management: The operating system manages the computer’s primary
memory and provides mechanisms for optimizing memory usage.
• Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such as
access controls and encryption.
• File Management: The operating system is responsible for organizing and
managing the file system, including the creation, deletion, and manipulation of
files and directories.
• Device Management: The operating system manages input/output devices such as
printers, keyboards, mice, and displays. It provides the necessary drivers and
interfaces to enable communication between the devices and the computer.

Type of operating system

− Single-User, Single Task Operating System: These operating systems work on single
task & single user at a time e.g. DOS.
− Single-User, Multi-Task Operating System: These operating systems allow a single
user to run more than one task, processing the tasks concurrently.
− Multiuser Operating System: In these OS, multiple users are allowed to access the
same data or information at a time via a network. E.g., Unix, Linux, Windows7.
− Multiprocessing Operating System: Here, processing is shared across two or more
processors. Processing and its management take place in parallel; hence these
operating systems are also called parallel processing systems. E.g., Linux, UNIX and Windows
− Embedded Operating System: These are embedded in a device, which is located in
ROM E.g., OS of microwaves, washing machine.
− Distributed Operating System: In these OS, the computers work in cooperation with
each other

2. Language Processor
An important function of system software is to convert all user instructions into machine-
understandable language. When we talk of human-machine interaction, languages are of
three types −
− Machine-level language − This language is nothing but a string of 0s and 1s that the
machines can understand. It is completely machine dependent.
− Assembly-level language − This language introduces a layer of abstraction by
defining mnemonics. Mnemonics are English like words or symbols used to denote a
long string of 0s and 1s. For example, the word "READ" can be defined to mean that
a computer has to retrieve data from the memory. The complete instruction will also
tell the memory address. Assembly level language is machine dependent.
− High level language − This language uses English like statements and is completely
independent of machines. Programs written using high level languages are easy to
create, read and understand.

A program written in a high-level programming language like Java, C++, etc. is called
source code. The set of instructions in machine-readable form is called object code or
machine code. System software that converts source code to object code is called a
language processor. There are three types of language processors −

a) Assembler − Converts assembly level program into machine level program.


b) Interpreter − Converts high level programs into machine level programs line by
line.
c) Compiler − Converts high level programs into machine level programs at one go
rather than line by line.

Difference Between a Compiler and an Interpreter


Compiler and Interpreter are used to convert a program written in high-level language to
machine language; however, they work differently. The key differences between a
compiler and an interpreter are as follows:
a. Interpreter looks at a source code line-by-line. Compiler looks at the entire source
code.
b. Interpreter converts a line into machine-executable form, executes that line, and
proceeds to the next line. Compiler converts the entire source code into object code,
which is then executed by the user.
c. For a given source code, once it is compiled, the object code is created. This object code
can be executed multiple times by the user. However, the interpreter executes line-by-
line, so executing the program using an interpreter means that during each execution,
the source code is first interpreted and then executed.
d. During execution of an object code, the compiler is not required. However, for
interpretation, both interpreter and the source code is required during execution
(because source code is interpreted during execution).
e. Since an interpreter interprets line-by-line, the interpreted code runs slower than the
compiled code.
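As a hedged illustration of the source-code/object-code distinction above, the short Python sketch below compiles a statement into Python bytecode and then executes it. Python is used here purely for illustration (its bytecode is interpreted by a virtual machine rather than executed directly by the CPU), and the statement and the names source, code_obj and namespace are arbitrary choices for the example.

```python
import dis

# A line of high-level source code (an arbitrary example statement)
source = "result = (3 + 4) * 2"

# Translate the source into a code object - Python's analogue of object code (bytecode)
code_obj = compile(source, "<example>", "exec")

# Show the low-level instructions that the Python virtual machine will interpret
dis.dis(code_obj)

# Execute the compiled code and read back the result
namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # 14
```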

3. Device Drivers
Device drivers are a class of system software that minimizes the need for system
troubleshooting. Internally, the operating system communicates with hardware elements.
Device drivers make it simple to manage and regulate this communication.
To operate the hardware components, the operating system comes with a variety of device
drivers. The majority of device drivers, including those for the mouse, keyboard, etc., are
pre-installed in the computer system by the computer manufacturers.

4. Firmware
Firmware consists of operational programs stored on the computer motherboard in
non-volatile chips (such as ROM, EPROM or flash memory) that initialize the hardware
and assist the operating system in working with it. Managing and controlling a device's
basic actions is the main purpose of any firmware. There are two main types of
firmware chips:
• BIOS (Basic Input/Output System) chips
• UEFI (Unified Extensible Firmware Interface) chips

5. Utility Software
System Software and application software interact through utility software. A third-party
product called utility software is created to lessen maintenance problems and find
computer system defects. It is included with your computer’s operating system. Listed
below are some particular attributes of utility software:
• Anti-virus utility to scan computers for viruses
• Data Compression utility to compress the files.
• Cryptographic utility to encrypt and decrypt files.
• Disk Compression utility to compress contents of a disk for increasing the capacity
• Disk Partitioning to divide a single drive into multiple logical drives. Each drive is
then treated as an individual drive and has its own file system.
• Disk Cleaners to find files that have not been used for a long time, helping the user
decide what to delete when the hard disk is full.

6. Linker
A linker is a special program that combines the object files generated by the
compiler/assembler with other pieces of code to produce an executable file (for example,
a file with a .exe extension on Windows). The linker searches for and appends all libraries
needed for the execution of the file. It determines the memory space that will hold the
code from each module, and it merges two or more separate object programs and
establishes links among them.
7. Loader
It is a special program that takes the executable file produced by the linker, loads it into
main memory, and prepares the code for execution by the computer. The loader allocates
memory space to the program and also resolves symbolic references between objects. It is
in charge of loading programs and libraries in the operating system. Embedded computer
systems often don't have loaders; in them, code is executed directly from ROM.

Application software
Application software is used by the users to perform specific tasks. The user may choose
the appropriate application software, for performing a specific task, which provides the
desired functionality. The application software interacts with the system software and
the users of the computer. Figure below shows the hierarchy of software, hardware and
users.

Figure: Software hierarchy

Application software may be a single program or a set of programs. A set of programs that
are written for a specific purpose and provide the required functionality is called software
package. Application software is written for different kinds of applications—graphics,
word processors, media players, database applications, telecommunication, accounting
purposes etc.
Some examples of application software packages are;
• Word Processing Software: For writing letter, reports, documents etc. (e.g. MS-
WORD).
• Image Processing Software: For assisting in drawing and manipulating graphics
(e.g. Adobe Photoshop).
• Accounting Software: For assisting in accounting information, salary, tax returns
(Tally software).
• Spreadsheet Software: Used for creating budget, tables etc. (e.g. MS-Excel).
• Presentation Software: To make presentations, slide shows (e.g. MS-PowerPoint)
• Suite of Software having Word Processor, Spreadsheet and Presentation
Software: Some examples are MS-Office, Google Docs, Sun Openoffice, Apple
iWork.
• CAD/CAM Software: To assist in engineering and architectural design (e.g. AutoCAD).
• Geographic Information Systems: It captures, stores, analyzes, manages, and
presents data, images and maps that are linked to different locations. (e.g.
arcGIS)
• Web Browser Software: To access the World Wide Web to search documents,
sounds, images etc. (e.g. Internet Explorer, Netscape Communicator, Chrome).

SOFTWARE ACQUISITION
Different kinds of software are made available for use to users in different ways. The
user may have to purchase the software, can download for free from the Internet, or can
get it bundled along with the hardware. Nowadays with the advent of Cloud computing,
many applications software are also available on the cloud for use through the Internet,
e.g., Google Docs. The different ways in which the software are made available to users
are:
• Retail Software is off-the-shelf software sold in retail stores. It comes with
printed manuals and installation instructions. For example, Microsoft Windows
operating system.
• OEM Software stands for “Original Equipment Manufacturer” software. It refers
to software which is sold, and bundled with hardware. Microsoft sells its
operating system as OEM software to hardware dealers. OEM software is sold at
reduced price, without the manuals, packaging and installation instructions. For
example, Dell computers are sold with the “Windows 7” OS pre-loaded on them.
• Demo Software is designed to demonstrate what a purchased version of the
software is capable of doing and provides a restricted set of features. To use the
software, the user must buy a fully- functional version.
• Shareware is a program that the user is allowed to try for free, for a specified
period of time, as defined in the license. It is downloadable from the Internet.
When the trial period ends, the software must be purchased or uninstalled.
• Freeware is software that is free for personal use. It is downloadable from the
Internet. The commercial use of this software may require a paid license. The
author of the freeware software is the owner of the software, though others may
use it for free. The users abide by the license terms, where the user cannot make
changes to it, or sell it to someone else.
• Public Domain Software is free software. Unlike freeware, public domain
software does not have a copyright owner or license restrictions. The source code
is publicly available for anyone to use. Public domain software can be modified by
the user.
• Open-Source Software is software whose source code is available and can be
customized and altered within the specified guidelines laid down by the creator.
Unlike public domain software, open-source software has restrictions on their
use and modification, redistribution limitations, and copyrights. Linux, Apache,
Firefox, OpenOffice are some examples of open-source software
Number Systems

The Number System is the mathematical notation for representing numbers of a given set by
using digits or other symbols in a consistent manner. All the Mathematical concepts and formulas
are based on the Number system.

The value of any digit in a number is determined by the digit itself, its position in the number,
and the base of the number system. Numbers are represented in a unique manner, which allows
us to perform arithmetic operations such as addition, subtraction, multiplication and division.

Number systems are fundamental in computer science because they provide the basis for how
data is represented, processed, and communicated in computing systems. They are the
foundation of all operations in computer science, from the lowest level of hardware interaction to
high-level software development. Understanding these systems is essential for anyone working
in the field.

Various numbering systems are in use based on the base value and the number of allowed
digits. The four most common number system types are:

• Decimal number system


• Binary number system
• Octal number system
• Hexadecimal number system

Table of different number systems


Decimal Number System

The decimal number system uses ten digits: 0 to 9 with the base number as 10. The
decimal number system is the system that we generally use to represent numbers in real
life. If any number is represented without indicating its base, it means that its base is 10.

Octal Number System

Octal Number System, also known as base-8, is one in which the base value is 8. It uses 8 digits,
i.e., 0-7, for the creation of octal numbers. In computing, the octal system was historically used
because of its relationship with binary (base-2) numbers, which are fundamental to computer
systems. Each octal digit corresponds to exactly three binary digits (bits) and can thus be used to
shorten long strings of binary numbers. Although rarely used these days, octal still finds
application in some character-escape notations and in Unix and Linux file permissions.

Hexadecimal Number System


The hexadecimal number system has 16 as its base. It has sixteen symbols:
0,1,2,3,4,5,6,7,8,9 and A, B, C, D, E, F, where A-F represent the decimal numbers 10-15
respectively. This system is used in computers to shorten the long strings of the binary system,
e.g., the binary number '10101111' is represented as 'AF' in hex. Hex numbers find various
applications in computer science, including addressing memory, specifying colours when
programming, debugging, and writing machine-code instructions in low-level programming
languages like assembly.
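As a small, hedged illustration of the AF example above, most programming languages accept hexadecimal and binary literals directly; the Python snippet below (any similar language would do) shows the same value written several ways.

```python
print(0xAF)        # 175 - hexadecimal literal
print(0b10101111)  # 175 - the equivalent binary literal
print(hex(175))    # '0xaf'
print(bin(0xAF))   # '0b10101111'
```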

Binary Number System

As the name suggests, this type of number system has a base value of 2 (binary). This system uses
only two digits, i.e., 0 and 1, for creating numbers; for example, 14 can be written as 1110 and 50
can be written as 110010. Each digit in a binary number is called a bit (from "binary digit"), so the
binary number 101 has 3 bits. Binary is used extensively in computing because it is easy to
implement on computers, which operate on digital electronic circuits with two states: on (1) and
off (0). This forms the basis of all data representation, including text, images, and sound.
Understanding different number systems is crucial for optimizing algorithms, particularly those
that involve low-level programming, cryptography, or hardware design, where the choice of
number system can impact efficiency and performance.

Conversion of Number Systems

A number can be converted from one number system to another number system using number
system formulas. Like binary numbers can be converted to octal numbers and vice versa, octal
numbers can be converted to decimal numbers and vice versa, and so on.

1. Binary to Decimal
To convert a binary number to a decimal:
• Multiply each bit by 2 raised to the power of its position (starting from 0 on the right).
• Sum all the results.
Example: Convert 1011₂ to decimal
1011₂ = (1×2³) + (0×2²) + (1×2¹) + (1×2⁰)
      = (1×8) + (0×4) + (1×2) + (1×1)
      = 8 + 0 + 2 + 1
      = 11₁₀

2. Decimal to Binary
To convert a decimal number to binary:
• Divide the number by 2.
• Record the remainder.
• Repeat the process with the quotient until the quotient is 0.
• The binary number is the sequence of remainders read from bottom to top.
Example: Convert 13₁₀ to binary
• 13 ÷ 2 = 6 remainder 1
• 6 ÷ 2 = 3 remainder 0
• 3 ÷ 2 = 1 remainder 1
• 1 ÷ 2 = 0 remainder 1 So, 13₁₀ = 1101₂.
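The two procedures above translate directly into code. The following is a minimal Python sketch (the function names are illustrative choices) of binary-to-decimal conversion by positional weights and decimal-to-binary conversion by repeated division.

```python
def binary_to_decimal(b: str) -> int:
    """Sum bit x 2^position, starting from position 0 on the right."""
    value = 0
    for position, bit in enumerate(reversed(b)):
        value += int(bit) * (2 ** position)
    return value

def decimal_to_binary(n: int) -> str:
    """Repeatedly divide by 2 and read the remainders from bottom to top."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # record the remainder
        n //= 2                        # continue with the quotient
    return "".join(reversed(remainders))

print(binary_to_decimal("1011"))  # 11
print(decimal_to_binary(13))      # '1101'
```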

3. Binary to Octal
To convert binary to octal:
• Group the binary digits into sets of three, starting from the right. Add leading zeros if
necessary.
• Convert each group to its octal equivalent.
Example: Convert 101101₂ to octal
• Group: 101 101
• Convert: 101₂ = 5₈ and 101₂ = 5₈
• So, 101101₂ = 55₈.

4. Octal to Binary
To convert octal to binary:
• Convert each octal digit to its 3-bit binary equivalent.
Example: Convert 57₈ to binary
• 5₈ = 101₂
• 7₈ = 111₂
• So, 57₈ = 101111₂.
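A short sketch of the same grouping idea in Python, here simply delegating to the built-in base conversions rather than grouping the bits by hand:

```python
# Binary -> octal and octal -> binary via the built-in base conversions
print(format(int("101101", 2), "o"))  # '55'      (binary 101101 -> octal 55)
print(format(int("57", 8), "b"))      # '101111'  (octal 57 -> binary 101111)
```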

5. Binary to Hexadecimal
To convert binary to hexadecimal:
• Group the binary digits into sets of four, starting from the right. Add leading zeros if
necessary.
• Convert each group to its hexadecimal equivalent.
Example: Convert 11010110₂ to hexadecimal
• Group: 1101 0110
• Convert: 1101₂ = D₁₆ and 0110₂ = 6₁₆
• So, 11010110₂ = D6₁₆.

6. Hexadecimal to Binary
To convert hexadecimal to binary:
• Convert each hexadecimal digit to its 4-bit binary equivalent.
Example: Convert A3₁₆ to binary
• A₁₆ = 1010₂
• 3₁₆ = 0011₂
• So, A3₁₆ = 10100011₂.
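The same approach works for hexadecimal; a minimal Python sketch:

```python
# Binary -> hexadecimal and hexadecimal -> binary
print(format(int("11010110", 2), "X"))  # 'D6'
print(format(int("A3", 16), "08b"))     # '10100011'  (padded to 8 bits)
```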

7. Decimal to Octal
To convert a decimal number to octal:
• Divide the number by 8.
• Record the remainder.
• Repeat the process with the quotient until the quotient is 0.
• The octal number is the sequence of remainders read from bottom to top.
Example: Convert 73₁₀ to octal
• 73 ÷ 8 = 9 remainder 1
• 9 ÷ 8 = 1 remainder 1
• 1 ÷ 8 = 0 remainder 1 So, 73₁₀ = 111₈.

8. Octal to Decimal
To convert an octal number to decimal:
• Multiply each digit by 8 raised to the power of its position (starting from 0 on the right).
• Sum all the results.
Example: Convert 127₈ to decimal

127₈ = (1×8²) + (2×8¹) + (7×8⁰)
     = (1×64) + (2×8) + (7×1)
     = 64 + 16 + 7
     = 87₁₀

9. Hexadecimal to Decimal
To convert a hexadecimal number to decimal:
• Multiply each digit by 16 raised to the power of its position (starting from 0 on the right).
• Sum all the results.
Example: Convert 1A₁₆ to decimal

1A₁₆ = (1×16¹) + (10×16⁰)
     = (1×16) + (10×1)
     = 16 + 10
     = 26₁₀

10. Decimal to Hexadecimal


To convert a decimal number to hexadecimal:
• Divide the number by 16.
• Record the remainder.
• Repeat the process with the quotient until the quotient is 0.
• The hexadecimal number is the sequence of remainders read from bottom to top.
Example: Convert 254₁₀ to hexadecimal
• 254 ÷ 16 = 15 remainder 14 (E₁₆)
• 15 ÷ 16 = 0 remainder 15 (F₁₆) So, 254₁₀ = FE₁₆.
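The division and positional-weight methods in steps 7-10 can be cross-checked with the same built-ins; a short Python sketch:

```python
# Decimal <-> octal
print(format(73, "o"))   # '111'
print(int("127", 8))     # 87

# Decimal <-> hexadecimal
print(format(254, "X"))  # 'FE'
print(int("1A", 16))     # 26
```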

Binary Coded Decimal (BCD)


BCD is a way of encoding decimal numbers in a binary format. In BCD, each decimal digit is
represented by its corresponding 4-bit binary equivalent. This form of encoding is inefficient
compared to pure binary representation but finds application where accuracy and ease of
conversion between decimal and binary are more important than space efficiency.

In BCD, a 4-bit binary number is used to represent each digit of a decimal number (0-9) . For
example:
• Decimal 0 is represented as 0000 in BCD.
• Decimal 1 is represented as 0001 in BCD.
• Decimal 9 is represented as 1001 in BCD.

Therefore, the decimal number 35 would be represented as 0011 0101 in BCD (where 3 is
0011 and 5 is 0101).
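A minimal sketch of BCD encoding in Python (the function name is an illustrative choice): each decimal digit is formatted as its own 4-bit group.

```python
def decimal_to_bcd(n: int) -> str:
    """Encode each decimal digit of n as a separate 4-bit binary group."""
    return " ".join(format(int(digit), "04b") for digit in str(n))

print(decimal_to_bcd(35))   # '0011 0101'
print(decimal_to_bcd(109))  # '0001 0000 1001'
```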

BCD finds various applications where accuracy and ease of conversion are more important than
efficiency, e.g.:
• Digital Clocks: In digital clocks, each digit of the time is often stored in BCD to facilitate
easy conversion to a human-readable format.
• Calculators: Early calculators used BCD to represent numbers to avoid errors due to
binary floating-point arithmetic.
• POS Systems: BCD is often used in point-of-sale (POS) systems to handle currency,
ensuring exact representation of monetary values.
• Banking systems: BCD is particularly useful in financial and commercial applications
where precision is crucial. Since each decimal digit is represented individually, there is
no approximation error, unlike in floating-point representations.

Character Encoding

As we know by now, computers do not understand normal written language, be it English
characters in the alphabet or Chinese characters. A computer only understands and takes in data
and instructions as sequences of 0s and 1s. Therefore, characters and instructions given to
computers in the form of a computer program have to be converted to 0s and 1s. Character
encoding is the mapping, or converting, of a series of characters, i.e., letters, numbers,
punctuation, symbols, etc., to unique binary codes that a computer can understand and work
with. Encoding makes it possible for a computer to handle all types of data including numbers,
text, photos, audio, and video files.

There are various types of character encoding in use. They include;

• ASCII
• Extended ASCII
• ISO 8859
• UTF-8
• UTF-16
• UTF-32

ASCII (American Standard Code for Information Interchange):


• One of the earliest character encodings, ASCII uses 7 bits to represent 128 characters,
including English letters, digits, and some control characters. Early computers handled
data in bytes (groups of eight bits); ASCII was limited to 7 bits, reserving the eighth bit
for error checking (parity).

Extended ASCII:
• Is an extension of ASCII and uses 8 bits to allow for 256 characters, including additional
symbols, accented letters, and other characters not in standard ASCII. It was used in
early computer systems to support more languages.

ISO 8859:
• A series of encodings based on 8-bit characters, with different versions (ISO 8859-1,
ISO 8859-2, etc.) to cover different languages and scripts. For example, ISO 8859-1
(Latin-1) covers Western European languages.

UTF-8 (Unicode Transformation Format - 8-bit):


• A variable-length encoding that can represent every character in the Unicode character
set. UTF-8 uses 1 to 4 bytes per character:
o 1 byte for characters in the ASCII range (U+0000 to U+007F).
o 2 to 4 bytes for other characters.
• UTF-8 is backward-compatible with ASCII and is the most widely used character
encoding on the web.
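A short Python sketch of these ideas (the sample string is an arbitrary choice): ASCII characters occupy one byte in UTF-8, characters outside ASCII occupy more, and decoding with the wrong encoding produces garbled text.

```python
text = "Héllo"               # the accented é is outside 7-bit ASCII

print(ord("A"))              # 65 - the ASCII code point of 'A'

encoded = text.encode("utf-8")
print(encoded)               # b'H\xc3\xa9llo' - é takes two bytes in UTF-8
print(len(text), len(encoded))  # 5 characters, 6 bytes

# Decoding the UTF-8 bytes with a different encoding corrupts the text
print(encoded.decode("latin-1"))  # 'HÃ©llo'
```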

UTF-16:
• A variable-length encoding that uses 2 or 4 bytes per character. It’s often used internally
by systems and programming languages like Java and Windows.

UTF-32:
• A fixed-length encoding that uses 4 bytes for each character. It’s straightforward but less
space-efficient, as it uses more memory compared to UTF-8 or UTF-16.

Importance of Character Encoding

Text Display and Storage: Proper character encoding ensures that text is displayed
correctly across different systems and devices. Without the correct encoding, special
characters or symbols might appear as garbled text; if text is encoded in one encoding
and decoded in another, it can become corrupted or unreadable (gibberish).

Data Interchange: Character encoding is crucial for the exchange of data between
systems, particularly in the global context where multiple languages and scripts are
used. UTF-8's wide adoption has helped standardize this process.

Handling Multiple Languages: Unicode, with its various encodings like UTF-8 and
UTF-16, allows for the representation of virtually all characters from all written
languages, enabling globalization and internationalization of software and data.

Binary Arithmetic

Binary arithmetic is an essential part of all digital computers and many other digital systems. It
involves performing arithmetic operations using binary numbers. These operations are
fundamental in computer systems because computers use binary to represent all data and
instructions.
There are four main types of binary operations which are:

• Binary Addition.
• Binary Subtraction.
• Binary Multiplication.
• Binary Division.

Binary Addition
It is the most common binary operation and is key for other binary operations
including subtraction, multiplication, division. There are four rules of binary
addition.

Binary addition is no different in principle from base-10 addition. The only potential
problem is remembering that (1 + 1) gives 0 carry 1, i.e. 10₂ (decimal 1 + 1 = 2), and
that (1 + 1 + 1) gives 1 carry 1, i.e. 11₂ (decimal 1 + 1 + 1 = 3).

Example − Addition
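A minimal Python sketch of the carry rules applied column by column (the function and variable names are illustrative choices):

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings using the rules 1+1 = 0 carry 1 and 1+1+1 = 1 carry 1."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)    # pad to the same length
    result, carry = [], 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))        # the sum bit for this column
        carry = total // 2                   # the carry into the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("1011", "110"))  # '10001'  (11 + 6 = 17)
```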

Binary Subtraction
Binary subtraction can be performed directly (as for base-10) but it is tricky due
to the ‘borrowing’ process. The two's complement (discussed later) approach is
easier and less error prone, and is therefore recommended. It is based upon the
fact that for two numbers A and B then A − B = A + (−B) where −B is the
complement of B. So rather than subtracting B directly from A, the complement
of B is added to A. All that we need is a way of generating the complement of a
number.

This can be achieved (in any base) by taking the number whose complement is
required away from the next highest power of the base. So the complement of
an n-digit number, p, in base-m is given by:
mⁿ − p

There are four rules of binary subtraction.


Example − Subtraction

Binary Multiplication
Binary multiplication is similar to decimal multiplication. It is simpler than decimal
multiplication because only 0s and 1s are involved. There are four rules of binary
multiplication.

Example − Multiplication
Binary Division
Binary division is similar to decimal division. It is called the long division
procedure.

Example − Division

Complement arithmetic

Complements are used in digital computers in order to simplify the subtraction
operation and for logical manipulation. Complementing is a powerful yet simple
technique which minimizes the hardware needed to implement signed arithmetic
operations in a digital machine: the process of subtraction becomes one of addition.
For each radix-r system (where the radix r is the base of the number system) there
are two types of complements.

1. Radix Complement: the radix complement is referred to as the r's complement.
2. Diminished Radix Complement: the diminished radix complement is referred to as the
(r-1)'s complement.

Binary system complements


As the binary system has base r = 2, the two types of complements for the
binary system are the 2's complement and the 1's complement.
1's complement
The 1's complement of a number is found by changing all 1's to 0's and all 0's
to 1's. This is called taking complement or 1's complement. Example of 1's
Complement is as follows.

2's complement
The 2's complement of binary number is obtained by adding 1 to the Least
Significant Bit (LSB) of 1's complement of the number.
2's complement = 1's complement + 1
Example of 2's Complement is as follows.
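A minimal Python sketch of both complements for a fixed bit width (the function names are illustrative choices):

```python
def ones_complement(bits: str) -> str:
    """1's complement: change every 1 to 0 and every 0 to 1."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits: str) -> str:
    """2's complement: 1's complement plus 1, kept within the same bit width."""
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (2 ** width), f"0{width}b")

print(ones_complement("0101"))  # '1010'
print(twos_complement("0101"))  # '1011'  (the 4-bit two's complement of 5, i.e. -5)
```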

Signed vs. Unsigned Binary Numbers

Binary numbers can be represented in signed and unsigned forms. Signing of binary
numbers is necessitated by the need to represent negative numbers in binary
arithmetic. To indicate a negative number, a sign bit is added to the binary number,
making it a signed binary number. The sections below describe the various forms of
binary number representation.
Unsigned binary numbers

Just as in decimal number systems, unsigned binary numbers are always assumed to have a
positive sign preceding them, and thus all unsigned binary numbers are assumed to be positive
numbers by default.

Signed binary numbers

Signed numbers contain a sign flag; this representation distinguishes positive and negative
numbers. It carries both the sign bit and the magnitude of the number.

Representation of Signed Binary Numbers:

There are three types of representations for signed binary numbers.

• Sign-Magnitude
• One's Complement
• Two's Complement.

The most common method for representing signed binary numbers is the two's complement. In
sign-magnitude representation, the MSB is reserved for the sign, where 1 represents a negative
number and 0 a positive number.
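To make the distinction concrete, the hedged Python sketch below interprets the same 8-bit pattern first as an unsigned number and then as a two's-complement signed number (the names are illustrative only).

```python
def as_unsigned(bits: str) -> int:
    """Read the pattern as an ordinary (unsigned) binary number."""
    return int(bits, 2)

def as_signed(bits: str) -> int:
    """Read the pattern as a two's-complement signed number: an MSB of 1 means negative."""
    value = int(bits, 2)
    if bits[0] == "1":
        value -= 2 ** len(bits)
    return value

pattern = "11111011"
print(as_unsigned(pattern))  # 251
print(as_signed(pattern))    # -5
```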
