ANNAMALAI UNIVERSITY
DIRECTORATE OF DISTANCE EDUCATION

M.B.A. (E - Business)
M.B.A. International Business
M.B.A. Human Resource Management
M.B.A. Marketing Management
M.B.A. Financial Management
Second Semester

Computer Applications in Management


Units: I - VI

Copyright Reserved
(For Private Circulation Only)
COMPUTER APPLICATIONS IN MANAGEMENT


Editorial Board

Dr. N.Ramgopal
Dean
Faculty of Arts
Annamalai University

Dr. R. Singaravel
Director
Directorate of Distance Education
Annamalai University

Dr. P. Vijayan
Director, Academic Affairs
Annamalai University

Dr. G. Udayasuriyan
Professor and Head
Dept. of Business Administration
Annamalai University

Dr. S. Arulkumar
Associate Professor and Dy. Coordinator
Management Wing, DDE
Annamalai University

Internals

Dr. V. Velmurugan
Associate Professor
Management Wing, DDE
Annamalai University

Dr. S. Partheeban
Assistant Professor
Management Wing, DDE
Annamalai University

Externals

Dr. S. Kaliamoorthy
Director
Alagappa Institute of Management
Alagappa University, Karaikudi

Dr. B. Rajasekaran
Professor of Business Administration
Manonmaniam Sundaranar University
Tirunelveli
Lesson Writer

Dr.J.John Adaikalam
Associate Professor
Dept. of Business Administration
Annamalai University


SYLLABUS
Unit–1 Computer Fundamentals
Organization of computers – Generations of computers – Types of computers –
Input /Output devices – Storage devices – Software: Systems software, Application
software – Overview of Linux, Unix, Windows, Freewares – Programming
Languages/ Assembly Languages – Compiler – Interpreter.
Unit–2 Software Development Process and Applications
Software Development Process: File Design & Report Design – Data File Types/
Organization; Master, Transaction File. Application and uses of MS Office: Word,
Power Point, Excel, Access.
Unit–3 Data Processing Concepts
Basics of Data Processing – Modes of Data Processing – Data Hierarchy – Data
Processing Systems – Management of Data Processing System in Business
Organization – Application portfolio Development – Program Development Cycle –
Flow Chart.
Unit–4 Computer Networks
Data Communications: Networking Concepts, Classification – LAN, MAN,
WAN – Wireless LAN – Internet, Intranet, Extranet – Virtual Private Networks (VPN)
– Peer-to-Peer, Client Server – Networking topologies – Virus – Meaning – Types –
Antivirus – Benefits – Detection and Elimination.
Unit-5 Implementing and Managing IT
IT strategic Alignment – Competitive Forces Model – Value Chain Model –
Strategic Resources and Capabilities – IT Planning – Managing IS Department –
Evaluating IT Investment: Benefits, Costs and Issues – IT Economics Strategies –
Managerial Issues.
Unit-6 Business and IT consulting
Business converging with IT – IT Consulting Convergence: An Integrated
Stepped Approach – Consulting Process – Proposal Development – Contract –
Execution – Implementation – Planning – Closing and Collecting. Consultant Client
Relationship – Consulting Themes.



CONTENTS
Lesson No. Title
1 Introduction to Computers
2 Input Output and Storage Devices
3 Introduction to Software
4 Programming Languages
5 System Development Process
6 SDLC Models
7 System, Input and Output Design
8 Integration of Application & Text Processing
9 Data Processing
10 Hierarchy of Data
11 Application Portfolio Development
12 Flow Chart & Evaluation
13 Networking - Introduction
14 Classification of Networks
15 Networking Topologies
16 Advanced Networking and Virus
17 IT Strategic Alignment
18 Information Technology Planning
19 Managing of IS Department
20 Evaluating IT Investment
21 Introduction to Business Convergence
22 BIT Consulting Process
23 Executing the Consulting Management
24 BIT Consulting Themes
LESSON - 1
INTRODUCTION TO COMPUTER
1.1 INTRODUCTION
Today, almost all of us make use of computers in one way or the other. The
computer finds applications in various fields such as engineering, medicine,
commerce and research. Not only in these sophisticated areas, but also in our
daily lives, computers have become indispensable. They are present everywhere:
in the devices that we use daily like cars, games, washing machines and
microwaves, and in day-to-day activities like banking, reservations, electronic
mail and the internet.
1.2 OBJECTIVES
• To get introduced to the computer
• To study the organization of computer
• To understand the evolution of computer
• To become familiar with the types of computers
1.3 CONTENTS
1.3.1 Computer and Its Elements
1.3.2 Characteristics of Computer
1.3.3 Advantages and Limitations of Computer
1.3.4 Generation of Computer
1.3.5 Types of Computer
1.3.1 COMPUTER AND ITS ELEMENTS
The word computer is derived from the word compute. Compute means to
calculate. The computer was originally defined as a super fast calculator. It had the
capacity to solve complex arithmetic and scientific problems at very high speed. But
nowadays in addition to handling complex arithmetic computations, computers
perform many other tasks like accepting, sorting, selecting, moving, comparing
various types of information. They also perform arithmetic and logical operations on
alphabetic, numeric and other types of information. This information provided by
the user to the computer is data. The information in one form which is presented to
the computer is the input information or input data.
Information in another form is presented by the computer after performing a
process on it. This information is the output information or output data.
The set of instructions given to the computer to perform various operations is
called as the computer program. The process of converting the input data into the
required output form with the help of the computer program is called as data
processing. The computers are therefore also referred to as data processors.
Therefore a computer can now be defined as a fast and accurate data
processing system that accepts data, performs various operations on the data, has
the capability to store the data and produce the results on the basis of detailed step
by step instructions given to it. The terms hardware and software are almost always
used in connection with the computer.
Hardware
The hardware is the machinery itself. It is made up of the physical parts or
devices of the computer system like the electronic Integrated Circuits (ICs),
magnetic storage media and other mechanical devices like input devices, output
devices etc. All these various hardware are linked together to form an effective
functional unit. The various types of hardware used in the computers, has evolved
from vacuum tubes of the first generation to Ultra Large Scale Integrated Circuits of
the present generation.
Software
The computer hardware itself is not capable of doing anything on its own. It
has to be given explicit instructions to perform the specific task. The computer
program is the one which controls the processing activities of the computer. The
computer thus functions according to the instructions written in the program.
Software mainly consists of these computer programs, procedures and other
documentation used in the operation of a computer system. Software is a collection
of programs which utilize and enhance the capability of the hardware
Elements of a computer
A computer is a very effective and efficient machine that performs in a few
minutes activities which would otherwise take several days if performed
manually, and even then with doubts about the accuracy and finish. The computer
may be faster and more accurate, but it cannot compete with the human brain.
However, there are some similarities between the human and the computer which
make the computer easier to understand.

Human | Computer
Human beings have ears, nose, eyes etc. | Computers have input devices such as keyboard, scanner, touch screen, mouse etc. to get information.
We remember things. | The computer also stores information.
We recollect certain information as required. | The computer also retrieves information when required.
We express ourselves by speech, writing etc. | The computer expresses itself through the screen, printouts etc., which we call output.
We watch, hear and learn certain things and analyze them. | With the help of software, the computer can also analyze information and draw conclusions.
We store and analyze information in the brain. | The computer's "brain" is the RAM in the CPU (Central Processing Unit), where it analyses data.

The computer has storage devices like floppies (now not in usage), hard disks,
compact disks to store and retrieve information. However computer does not
understand emotions, it does not understand meaning beyond words, it cannot
read between the lines like the human. We learn many things unknowingly, certain
things knowingly; we call it as upbringing. But computers can learn everything only
knowingly. We learn many things on our own, but computer has to be taught to do
everything.
The basic parts of computer system are:
• Input Unit
• The Central Processing Unit (Storage Unit, Control Unit, Arithmetic Logic
Unit)
• Output Unit

The Input Unit


Input devices are the devices which are used to feed programs and data to the
computer. The input connects the external environment with the computer system.
The input devices are the means of communication between the user and the
computer system. Typical input devices include the keyboard, floppy disks, mouse,
microphone, light pen, joy stick, magnetic tapes etc. The way in which the data is
fed into the computer through each of these devices is different. However, a
computer can accept data only in a specific form. Therefore these input devices
transform the data fed to them into a form which can be accepted by the
computer. These devices are a means of communication and interaction between
the user and the computer system.

Various input devices like the keyboard provide input to the computer; whenever
a key is pressed, the letter or key is automatically translated into binary
codes and then transmitted to either memory or the processor. The information is
stored in memory for further use.
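To make this translation concrete, here is a small illustrative sketch in Python. It is not the actual firmware of any keyboard: real keyboards transmit hardware scan codes, and the character's ASCII value is used below as a stand-in for such a code.

# Illustrative sketch: how a key press might be translated to a binary code.
# The ASCII value of the character stands in for a hardware scan code.

def key_to_binary(key: str) -> str:
    """Return the 8-bit binary pattern for a single typed character."""
    return format(ord(key), "08b")

memory = []  # stand-in for main memory
for key in "HELLO":
    code = key_to_binary(key)
    memory.append(code)          # store the code for further use
    print(f"key '{key}' -> {code}")

print("contents of memory:", memory)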
Thus the functions of the input unit are:
• Accept information (data) and programs.
• Convert the data in a form which the computer can accept.
• Provide this converted data to the computer for further processing.
The Central Processing Unit:
This is the brain of any computer system. The central processing unit or CPU
is made up of three parts:
• The control unit
• The arithmetic logic unit
• The primary storage unit
The Control Unit
The Control Unit controls the operations of the entire computer system. It gets
the instructions from the programs stored in the primary storage unit,
interprets these instructions and subsequently directs the other units to
execute them. Thus it manages and coordinates the entire computer system.
The Arithmetic Logic Unit
The Arithmetic Logic Unit (ALU) actually executes the instructions and
performs all the calculations and decisions. The data is held in the primary storage
unit and transferred to the ALU whenever needed. Data can be moved from the
primary storage to the arithmetic logic unit a number of times before the entire
processing is complete. After the completion, the results are sent to the output
storage section and the output devices.
CPU Components
Memory Address Register (MAR) – Specifies the address for the next read or write.
Memory Buffer Register (MBR) – Contains data to be written into memory or receives data read from memory.
Program Counter (PC) – Stores the address of the next instruction to be executed.
General Purpose Registers (R1, R2 etc.) – Used for storing information at the time of execution.
Instruction Register (IR) – Stores the instruction before decoding.
Instruction Decoder (ID) – Decodes the instruction before execution.
Arithmetic & Logic Unit (ALU) – Does all the arithmetic and logical computations.
Control Unit (CU) – Generates control signals to control every action inside the
computer.
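To make the roles of these registers concrete, the following Python sketch simulates one tiny program through the fetch-decode-execute cycle. The two-field instruction format and the three opcodes are invented for the example and do not belong to any real processor.

# Toy fetch-decode-execute cycle showing how MAR, MBR, PC, IR and the ALU
# cooperate. Memory holds (opcode, operand) pairs; the formats are invented.
memory = {
    0: ("LOAD", 10),   # R1 <- memory[10]
    1: ("ADD", 11),    # R1 <- R1 + memory[11]
    2: ("HALT", None),
    10: 7,             # data
    11: 5,             # data
}

PC, R1 = 0, 0          # Program Counter and a general-purpose register
running = True
while running:
    MAR = PC                    # MAR specifies the address for the next read
    MBR = memory[MAR]           # MBR receives the data read from memory
    IR = MBR                    # IR holds the instruction before decoding
    PC += 1                     # PC now points at the next instruction
    opcode, operand = IR        # the instruction decoder splits the fields
    if opcode == "LOAD":
        R1 = memory[operand]
    elif opcode == "ADD":
        R1 = R1 + memory[operand]   # the ALU performs the arithmetic
    elif opcode == "HALT":
        running = False

print("R1 =", R1)  # prints: R1 = 12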
The Primary Storage Unit
This is also called Main Memory. Before the actual processing starts, the data
and the instructions fed to the computer through the input units are stored in
this primary storage unit. Similarly, the data which is to be output from the
computer system is also temporarily stored in the primary memory. It is also the
area where intermediate results of calculations are stored, and it holds the
computer programs during execution. Thus the primary storage unit:
• Stores data and programs during actual processing
• Stores temporary results of intermediate processing
• Stores results of execution temporarily
The main function of the memory unit is to store data and programs. The
programs must be stored in the device during execution. Inside the system,
memory plays a vital role in the execution of a set of instructions. Memory can
be further classified into:

Primary Memory: The data or set of instructions are stored in primary storage
before processing and the data is transferred to ALU where further processing is
done. These are expensive and also known as Main Memory.
Secondary Memory: The data or set of instructions are stored permanently; the
user can use them whenever required in future. Secondary memory is cheaper than
primary memory.
The Output Unit
The output devices give the results of the processing and computations to the
outside world. The output units accept the results produced by the computer,
convert them into a human-readable form and supply them to the users. The more
common output devices are printers, plotters, display screens, magnetic tape
drives etc.
The output unit provides the processed results of the operations performed;
devices like printers, monitors etc. provide the desired output.

How efficient a computer is depends on how quickly it executes tasks, and this
performance depends on a few factors:
Since programs are written in a high-level language, a compiler translates the
high-level language into machine language, so the quality of the compiler
strongly affects performance.
The speed of the computer also depends on the hardware design and the machine
instruction set.
Therefore, for optimum results it is important to design the compiler, the
hardware and the machine instruction set in a coordinated way.
The hardware comprises a processor and memory, usually connected by a bus. The
execution time of a program depends on the computer system, and the processor
time depends on the hardware. Cache memory is a part of the processor.

The flow of program instructions and data between processor and memory:
The program and data are read from the input device and stored in main memory.
Instructions are fetched one by one over the bus from memory into the processor,
and a copy is placed in the cache memory for future use whenever required.
The processor and a small cache memory are fabricated on a single integrated
circuit chip, which makes the processing speed very high.
If the movement of instructions between main memory and the processor is
minimized, the program executes faster; this is what the cache memory achieves.
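A rough way to see the benefit is to count the cost of instruction fetches with and without a cache. The access costs below (1 time unit for the cache, 100 for main memory) are invented purely for illustration.

# Illustrative sketch of why a cache speeds up instruction fetch.
# The access costs (1 unit for cache, 100 for main memory) are invented.
CACHE_COST, MEMORY_COST = 1, 100

cache = {}          # address -> instruction, a copy kept near the processor
total_cost = 0

# A program loop fetches the same few instruction addresses repeatedly.
fetch_sequence = [0, 1, 2, 3] * 1000

for address in fetch_sequence:
    if address in cache:            # cache hit: cheap
        total_cost += CACHE_COST
    else:                           # cache miss: fetch from main memory
        total_cost += MEMORY_COST
        cache[address] = f"instr@{address}"  # keep a copy for future use

print("cost with cache:   ", total_cost)
print("cost without cache:", len(fetch_sequence) * MEMORY_COST)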
1.3.2 CHARACTERISTICS OF COMPUTER
Much of the world runs on computers and computers profoundly changed
human life mostly for better. The characteristics of computers are
Speed
A computer is a very fast device. It can carry out instructions at a very high
speed, obediently, uncritically and without exhibiting any emotions. It can
perform in a few seconds the amount of work that a human being could do in an
entire year, even working day and night and doing nothing else.

Some calculations that would otherwise take hours or days to complete can be
finished in a few seconds using the computer. The speed of a computer is
commonly expressed in MIPS (millions of instructions per second) or in terms of
its clock frequency in MHz or GHz.
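As a worked example with invented figures, a processor that completes 500 million instructions per second is rated at 500 MIPS, and the time taken by one instruction follows by simple division.

# Worked example with invented figures: converting an instruction rate
# into a MIPS rating and a per-instruction time.
instructions_per_second = 500_000_000

mips = instructions_per_second / 1_000_000
seconds_per_instruction = 1 / instructions_per_second

print(f"rating: {mips:.0f} MIPS")                                       # 500 MIPS
print(f"time per instruction: {seconds_per_instruction * 1e9:.0f} ns")  # 2 ns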
Accuracy
The accuracy of a computer is consistently high, and the degree of accuracy of a
particular computer depends on its instructions and the type of processor. For a
given computer, every calculation is performed with the same accuracy. For
example, the computer can give the result of dividing any two numbers accurately
to 10 decimal places.
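For instance, the following division is printed to 10 decimal places, and repeating the computation always yields exactly the same digits.

# A division carried out to 10 decimal places; the result is identical
# every time the computation is repeated.
result = 22 / 7
print(f"22 / 7 = {result:.10f}")   # 22 / 7 = 3.1428571429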
Versatility
Versatility is one of the most wonderful things about the computer. The multi-
processing capability of a computer makes it quite versatile in nature. One moment, it
is preparing the results of particular examination, the next moment it is busy
preparing electricity bills, and in between it may be helping an office secretary to
trace an important letter in seconds.
It can perform different types of tasks with same ease. All that is required to
change its talent is to slip in a new program into it. Briefly, a computer is capable of
performing almost any task provided that the task can be reduced to a series of
logical steps.
Reliability
Computers provide very high speed accompanied by an equally high level of
reliability. Thus computers never make mistakes of their own accord.
Power of Remembering
A computer can store and recall any amount of information because of its
secondary storage capability. Every piece of information can be retained as long
as desired by the user, and it can be recalled almost instantaneously. Even
after several years, the information recalled will be as accurate as on the day
when it was fed to the computer.
No I.Q
A computer may seem a magical device, yet it can only perform tasks that a
human being can; the difference is that it performs these tasks with unthinkable
speed and accuracy.
It has no intelligence of its own. Its I.Q. is zero, at least till today. It can
only do what it is programmed to do. Hence, only the user can determine what
tasks a computer will perform. Computers have no sense of meaning, cannot
perceive and are only able to make simple robotic decisions about the data they
receive.
Common Data Used
A single data item can be involved in several different procedures, or be
accessed, updated and inspected by a number of different users; without proper
management, this can hinder the work of those who need access to the data. As
times change, more and more facilities are being added to computers, but in
practical life many tasks are still limited to these basic operations.

Diligence
Being a machine, the computer does not suffer from the human trait of tiredness,
nor does it lose concentration even after working continuously for a long time.
This characteristic is especially useful for those jobs where same tasks are
done again and again. It can perform long and complex calculations with same
speed and accuracy from the start till the end.
Storage
Computers have many storage devices which can store a tremendous amount of
data. Data storage is an essential function of the computer. Secondary storage
devices like hard disks and compact disks can store a large amount of data
permanently.
1.3.3 ADVANTAGES AND LIMITATIONS OF A COMPUTER
I. Advantages
a. Speed up Work Efficiency
This is by far the biggest advantage of using computers. They have replaced the
use of manpower in carrying out tedious and repetitive work. Work that can take
days to complete manually can be done in a few minutes using a computer. This is
made possible by the fact that data, instructions and information move very fast in
the electric circuits of computers. They process trillions of instructions within a
second.
b. Large and Reliable Storage Capacity
Computers can store huge volumes of data. To put this into perspective,
physical files that can fill a whole room can be stored in one computer once they
are digitized. Better yet, access to the stored information is super-fast. It takes
micro-seconds for data to be transferred from storage to memory in a computer.
The same cannot be said for the retrieval of physical files.
With a computer, you can store videos, games, applications, documents etc.
that you can access whenever required. Better yet, storage can be backed up fast
and efficiently.
c. Connection with Internet
The Internet is probably the most outstanding invention in history. Computers
allow you to connect to the Internet and access this global repository of knowledge.
With the Internet, you can communicate faster with people across the globe.
You can send email, hold voice and video calls or use IM services. The Internet
also allows for instant sharing of files. You can also connect with friends and family
on social networks and even learn a new language online. The Internet is a great
educational resource where you can find information on virtually anything.
One of the biggest breakthroughs on the Internet is probably e-commerce. We can
shop in the comfort of our homes and have the items delivered to our doorstep.

d. Consistency
You always get the same result for the same process when using a computer.
For example if you created a document on one computer, you can open it on
another without making any special adjustments. This consistency makes it
possible to save and edit a document from different computers in different parts of
the world. Collaboration is therefore easier.
Whatever job you need done, you can always rest assured that the computer
will get it just right. There will be no variations in results achieved from the same
process. This makes computers ideal for doing tedious and repetitive work.
II. Limitations
a. Health Risk
Improper and prolonged use of a computer might lead to disorders or injuries of
the elbows, wrist, neck, back, and eyes. As a computer user you can avoid these
injuries by working in a workplace that is well designed, using a good sitting
position and taking proper work breaks. Technology overload and computer
addiction are the major behavioural health risks. Addiction comes when you are
obsessed with a computer.
Technology overload comes when you are overloaded with computers and mobile
phones. Both technology overload and computer addiction are avoidable if the
habits are noticed and a follow-up is done.
b. Violation of Privacy
When using the Internet on your computer, you run the risk of leaking your
private information. This is especially so if you happen to download malicious
software into your computer. Trojans and Malware can infiltrate your system and
give identity thieves access to your personal information.
Of particular interest to identity thieves are your bank and credit card
details. Make sure to install reliable antivirus software to keep malware and
Trojans at bay. You should also avoid clicking on suspicious-looking links when
using the Internet.
c. Impact on Environment
The manufacturing process of computers and computer waste are harmful to the
environment. When computer junk is discarded on open grounds, it releases
harmful substances like lead and mercury into the environment; both are toxic
and can cause serious health problems on exposure. Disposed computers can also
cause fires.
d. Data Security
This is one of the most controversial aspects of computers today. The safety
and integrity of data is key for any business. However, data stored in a computer
can be lost or compromised in a number of ways.
There are instances where the computer could crash wiping out all data that
had been stored therein. Hackers could also gain access into your computer and
compromise the integrity of your data. This is why you should always have a
backup. Moreover, you should put measures in place to keep your data safe from
hackers.
1.3.4 GENERATION OF COMPUTERS
The computer has evolved from a large-sized simple calculating machine to a
smaller but much more powerful machine. The evolution of computer to the current
state is defined in terms of the generations of computer. Each generation of
computer is designed based on a new technological development, resulting in
better, cheaper and smaller computers that are more powerful, faster and efficient
than their predecessors. Currently, there are five generations of computer. In the
following subsections, we will discuss the generations of computer in terms of:
• The technology used by them (hardware and software),
• Computing characteristics (speed, i.e., number of instructions executed per
second),
• Physical appearance, and
• Their applications.
First Generation (1940 to 1956): Using Vacuum Tubes
Hardware Technology: The first generation of computers used vacuum tubes for
circuitry and magnetic drums for memory. The input to the computer was through
punched cards and paper tapes. The output was displayed as printouts.
Software Technology: The instructions were written in machine language.
Machine language uses 0s and 1s for coding of the instructions. The first
generation computers could solve one problem at a time.
Computing Characteristics: The computation time was in milliseconds.
Physical Appearance: These computers were enormous in size and required a
large room for installation.
Application: They were used for scientific applications as they were the fastest
computing device of their time.
Examples: UNIVersal Automatic Computer (UNIVAC), Electronic Numerical
Integrator and Calculator (ENIAC), and Electronic Discrete Variable Automatic
Computer (EDVAC).
The first generation computers used a large number of vacuum tubes and thus
generated a lot of heat. They consumed a great deal of electricity and were
expensive to operate. The machines were prone to frequent malfunctioning and
required constant maintenance. Since first generation computers used machine
language, they were difficult to program.
ENIAC's first task was a series of complex calculations that helped determine
the feasibility of the hydrogen bomb, although the machine itself was built for
general-purpose use. It weighed about 30 tons, occupied about 15,000 sq. ft.,
used about 18,000 vacuum tubes, consumed about 140 kW of power and could perform
about 5,000 additions per second.

Von Neumann Machine


1945: the stored-program concept was first implemented for EDVAC. Key concepts:
Data to be processed and instructions to be executed on those data are stored
in a single read-write memory.
The contents of this memory are addressable by location, without regard to the
type of data contained there.
Execution occurs in a sequential fashion (unless explicitly modified) from one
instruction to the next.

Institute for Advanced Studies (IAS) Computer
Begun in 1946 at Princeton, the IAS computer was the prototype for all
subsequent general-purpose computers. With rare exceptions, all of today's
computers have this same general structure and are thus referred to as von
Neumann machines. The structure consists of:
A main memory, which stores both data and instructions.
An ALU capable of operating on binary data.
A control unit, which interprets the instructions in memory and causes them
to be executed.
First commercial computers
1950: UNIVAC, commissioned by the Census Bureau for the 1950 census
calculations.
Late 1950s: UNIVAC II, with greater memory and higher performance; same basic
architecture as UNIVAC, and the first example of upward compatibility.
1953: IBM 701, primarily for science.
1955: IBM 702, primarily for business.

Second Generation (1956 to 1963): Using Transistors


Hardware Technology: Transistors replaced the vacuum tubes of the first
generation of computers. Transistors allowed computers to become smaller, faster,
cheaper, energy efficient and reliable. The second generation computers used
magnetic core technology for primary memory. They used magnetic tapes and
magnetic disks for secondary storage. The input was still through punched cards
and the output using printouts. They used the concept of a stored program, where
instructions were stored in the memory of computer.
Software Technology: The instructions were written using the assembly
language. Assembly language uses mnemonics like ADD for addition and SUB for
subtraction for coding of the instructions. It is easier to write instructions in
assembly language, as compared to writing instructions in machine language.
High-level programming languages, such as early versions of COBOL and FORTRAN
were also developed during this period.
Computing Characteristics: The computation time was in microseconds.
Physical Appearance: Transistors are smaller in size compared to vacuum
tubes, thus, the size of the computer was also reduced.
Application: The cost of commercial production of these computers was very
high, though less than the first generation computers. The transistors had to be
assembled manually in second generation computers.
Examples: PDP-8, IBM 1401 and CDC 1604.
Second generation computers generated a lot of heat but much less than the
first generation computers. They required less maintenance than the first
generation computers.
Third Generation (1964 to 1971): Using Integrated Circuits
Hardware Technology: The third generation computers used the Integrated
Circuit (IC) chips. In an IC chip, multiple transistors are placed on a silicon chip.
Silicon is a type of semiconductor. The use of IC chip increased the speed and the
efficiency of computer, manifold. The keyboard and monitor were used to interact
with the third generation computer, instead of the punched card and printouts.
Software Technology: The keyboard and the monitor were interfaced through
the operating system. Operating system allowed different applications to run at the
same time. High-level languages were used extensively for programming, instead of
machine language and assembly language.
Computing Characteristics: The computation time was in nanoseconds.
Physical Appearance: The size of these computers was quite small compared to
the second generation computers.
Application: Computers became accessible to mass audience. Computers
were produced commercially, and were smaller and cheaper than their
predecessors.

Examples: IBM 370, PDP 11.


The third generation computers used less power and generated less heat than
the second generation computers. The cost of the computer reduced significantly,
as individual components of the computer were not required to be assembled
manually. The maintenance cost of the computers was also less compared to their
predecessors.
Fourth Generation (1971 to present): Using Microprocessors
Hardware Technology: They use the Large Scale Integration (LSI) and the
Very Large Scale Integration (VLSI) technology. Thousands of transistors are
integrated on a small silicon chip using LSI technology. VLSI allows hundreds of
thousands of components to be integrated in a small chip. This era is marked by
the development of microprocessor. Microprocessor is a chip containing millions of
transistors and components, and, designed using LSI and VLSI technology. This
generation of computers gave rise to Personal Computer (PC). Semiconductor
memory replaced the earlier magnetic core memory, resulting in fast random access
to memory. Secondary storage device like magnetic disks became smaller in
physical size and larger in capacity. The linking of computers is another key
development of this era. The computers were linked to form networks that led to the
emergence of the Internet. This generation also saw the development of pointing
devices like mouse, and handheld devices.
Software Technology: Several new operating systems like the MS-DOS and
MS-Windows developed during this time. This generation of computers supported
Graphical User Interface (GUI). GUI is a user-friendly interface that allows user to
interact with the computer via menus and icons. High-level programming languages
are used for the writing of programs.
Computing Characteristics: The computation time is in picoseconds.
Physical Appearance: They are smaller than the computers of the previous
generation. Some can even fit into the palm of the hand.
Application: They became widely available for commercial purposes. Personal
computers became available to the home user.
Examples: The Intel 4004 chip was the first microprocessor. The components
of the computer like Central Processing Unit (CPU) and memory were located on a
single chip. In 1981, IBM introduced the first computer for home use. In 1984,
Apple introduced the Macintosh.
The microprocessor has resulted in the fourth generation computers being
smaller and cheaper than their predecessors. The fourth generation computers are
also portable and more reliable. They generate much lesser heat and require less
maintenance compared to their predecessors. GUI and pointing devices facilitate
easy use and learning on the computer. Networking has resulted in resource
sharing and communication among different computers.

Fifth Generation (Present and Beyond): Using Artificial Intelligence


The goal of fifth generation computing is to develop computers that are capable
of learning and self-organization. The fifth generation computers use Super Large
Scale Integrated (SLSI) chips that are able to store millions of components on a
single chip. These computers have large memory requirements.
This generation of computers uses parallel processing that allows several
instructions to be executed in parallel, instead of serial execution. Parallel
processing results in faster processing speed. The Intel dual core microprocessor
uses parallel processing.
The fifth generation computers are based on Artificial Intelligence (AI). They try
to simulate the human way of thinking and reasoning. Artificial Intelligence
includes areas like Expert System (ES), Natural Language Processing (NLP), speech
recognition, voice recognition, robotics, etc.
1.3.5 CLASSIFICATION OF COMPUTERS
The digital computers that are available nowadays vary in their sizes and
types. The computers are broadly classified into four categories based on their size
and type: (1) Microcomputers, (2) Minicomputers, (3) Mainframe computers, and
(4) Supercomputers.
Micro Computer System
These are also known as personal computers and are the ones mostly found in big
and small offices; they are normally standalone computers, known as PCs or
desktop computers. Microcomputers are small and inexpensive, designed for
individual use. A microcomputer contains two types of memory: RAM and ROM.
Microcomputers are small, low-cost and single-user digital computer. They
consist of CPU, input unit, output unit, storage unit and the software. Although
microcomputers are stand-alone machines, they can be connected together to
create a network of computers that can serve more than one user. IBM PC based on
Pentium microprocessor and Apple Macintosh are some examples of
microcomputers. Microcomputers include desktop computers, notebook computers
or laptop, tablet computer, handheld computer, smart phones and netbook.
Desktop Computer or Personal Computer (PC) is the most common type of
microcomputer. It is a stand-alone machine that can be placed on the desk.
Externally, it consists of three units—keyboard, monitor, and a system unit
containing the CPU, memory, hard disk drive, etc. It is not very expensive and is
suited to the needs of a single user at home, small business units, and
organizations. Apple, Microsoft, HP, Dell and Lenovo are some of the PC
manufacturers
Notebook Computers or Laptop resemble a notebook. They are portable and
have all the features of a desktop computer. The advantage of the laptop is that it is
small in size (can be put inside a briefcase), can be carried anywhere, has a battery
backup and has all the functionality of the desktop. Laptops can be placed on the
lap while working (hence the name). Laptops are costlier than desktop machines.
Netbook These are smaller notebooks optimized for low weight and low cost,
and are designed for accessing web-based applications. Starting with the earliest
netbook in late 2007, they have gained significant popularity now. Netbooks deliver
the performance needed to enjoy popular activities like streaming videos or music,
emailing, Web surfing or instant messaging. The word netbook was created as a
blend of Internet and notebook.
Tablet Computer has features of the notebook computer but it can accept
input from a stylus or a pen instead of the keyboard or mouse. It is a portable
computer. Tablet computers are a new kind of PC.
Handheld Computer or Personal Digital Assistant (PDA) is a small computer
that can be held on the top of the palm. It is small in size. Instead of the keyboard,
PDA uses a pen or a stylus for input. PDAs do not have a disk drive. They have a
limited memory and are less powerful. PDAs can be connected to the Internet via a
wireless connection. Casio and Apple are some of the manufacturers of PDA. Over
the last few years, PDAs have merged into mobile phones to create smart phones.
Minicomputer systems
Minicomputers are digital computers, generally used in multi-user systems.
They have higher processing speed and larger storage capacity than
microcomputers. Minicomputers can support 4–200 users simultaneously. The
users can access the minicomputer through their PCs or terminal. They are used
for real-time applications in industries, research centers, etc. PDP 11, IBM (8000
series) are some of the widely used minicomputers.
Main frame computer systems
Mainframe computers are multi-user, multi-programming and high
performance computers. They operate at a very high speed, have very large storage
capacity and can handle the workload of many users. Mainframe computers are
large and powerful systems generally used in centralized databases. The user
accesses the mainframe computer via a terminal that may be a dumb terminal, an
intelligent terminal or a PC. A dumb terminal cannot store data or do processing of
its own. It has the input and output device only. An intelligent terminal has the
input and output device, can do processing, but, cannot store data of its own. The
dumb and the intelligent terminal use the processing power and the storage facility
of the mainframe computer. Mainframe computers are used in organizations like
banks or companies, where many people require frequent access to the same data.
Some examples of mainframes are the CDC 6600 and the IBM ES/9000 series.
Super computer systems
Supercomputers are the fastest and the most expensive machines. They have
high processing speed compared to other computers. The speed of a supercomputer
is generally measured in FLOPS (Floating point Operations Per Second). Some of
the faster supercomputers can perform trillions of calculations per second.

Supercomputers are built by interconnecting thousands of processors that can
work in parallel.
Supercomputers are used for highly calculation-intensive tasks, such as,
weather forecasting, climate research (global warming), molecular research,
biological research, nuclear research and aircraft design. They are also used in
major universities, military agencies and scientific research laboratories. Some
examples of supercomputers are IBM Roadrunner, IBM Blue gene and Intel ASCI
red. PARAM is a series of supercomputer assembled in India by C-DAC (Center for
Development of Advanced Computing), in Pune. PARAM Padma is the latest
machine in this series. The peak computing power of PARAM Padma is 1 Tera FLOP
(TFLOP).
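For a feel of what the FLOPS unit measures, the sketch below times a batch of floating-point additions on an ordinary machine to produce a crude, order-of-magnitude estimate. It is only an illustration, not a standard benchmark such as LINPACK.

# Crude FLOPS estimate: time a known number of floating-point additions.
# This is only an illustration, not a standard supercomputer benchmark.
import time

n = 10_000_000
start = time.perf_counter()
total = 0.0
for _ in range(n):
    total += 1.000001            # one floating-point addition per iteration
elapsed = time.perf_counter() - start

print(f"{n} additions in {elapsed:.2f} s ~ {n / elapsed:,.0f} FLOPS")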
Computer Architecture
Computer architecture refers to those attributes of a system visible to a
programmer, i.e., those attributes that have a direct impact on the logical
execution of a program. Examples of architectural attributes include:
• Instruction set design
• Instruction format
• Number of bits used to represent various types of data
• Different addressing mechanisms to access data
Computer organization:
It refers to the operational units and their interconnections that realize the
architectural specifications. Organizational attributes include those hardware
details transparent to the programmer, such as control signals, interfaces
between the computer and peripherals, and the memory technology used.
Difference between architecture and organization:
Computer architecture is the study of the system from the programmer's or
user's point of view: it gives an overall description of the system and its
working principles without going into much detail.
Computer organization is the study of the system from the hardware point of
view, with emphasis on how the system is implemented; it basically throws light
on the designer's point of view.
For a given set of instructions, it is enough for the programmer or user to
know what instructions are available, which is the concern of architecture. The
system designer, however, must worry about how that set of instructions is
implemented, and the implementation algorithm is the emphasis of organizational
studies.
Example: Consider two different models from the same vendor, say Intel-based
machines. Both models (a laptop and a desktop) have the same processor, such as
a Core 2 Duo. That means both models understand the same instruction set, since
each processor understands a fixed number of instructions; hence their
architecture is the same. Due to the placement of the various hardware
components, however, one model (the laptop) is slim and the other is bulky;
hence their organization is different.
1.4 REVISION POINTS
• Computer meaning
• Input and output
• Central Processing Unit
• Generation of Computers
• Types of Computers
1.5 INTEXT QUESTIONS
1. What do you mean by a computer?
2. List out the characteristics of computers.
3. Explain the organization of computer.
4. Write a short note on CPU.
5. List out the developments in 5th gen computers.
1.6 SUMMARY
A computer is a fast and accurate data processing system that accepts data,
performs various operations on the data, and has the capability to store data
and produce results.
Input devices are the devices which are used to feed programs and data to
the computer.
The Arithmetic Logic Unit (ALU) actually executes the instructions and
performs all the calculations and decisions
The output devices give the results of the process and computations to the
outside world.
The goal of fifth generation computing is to develop computers that are
capable of learning and self-organization.
Supercomputers are used for weather forecasting, climate research (global
warming), molecular research, biological research, nuclear research and
aircraft design.
1.7 TERMINAL EXERCISE
1. The CPU has _________, ___________ and _____________.
2. The first generation computer used _____________ in it.
3. VLSI means _____________________________
4. ___________________ is an example for micro computer.
5. A super computer is used in _____________ purpose.

1.8 SUPPLEMENTARY MATERIALS


1. http://gpnanakpur.ac.in/wp-content/uploads/2019/04/computerfundamentals-e-notes.pdf
2. https://opentextbc.ca/computerstudies/chapter/classification-of-generations-of-computers/
1.9 ASSIGNMENTS
1. Discuss the characteristics of different generation of computers.
1.10 SUGGESTED READING/REFERENCE
1. Kumar. K and Rajkumar. S, Computer Application in Business, Tata
McGraw Hill, New Delhi, 2009.
2. http://www.igntu.ac.in/eContent/IGNTU-eContent-799042056712-MA-Linguistics-4-HarjitSingh-ComputationalLinguistics-1.pdf
1.11 LEARNING ACTIVITIES
1. Observe the computers around you and identify the type and generation of
the computer and discuss with your friends.
1.12 KEYWORDS
Computer
Central Processing Unit
Control Unit
Memory Unit
Micro computer
Super computer


LESSON - 2
INPUT OUTPUT AND STORAGE DEVICES
2.1 INTRODUCTION
Computer is a machine that has many peripheral devices attached to it. A
peripheral device is a hardware device attached to the computer that is used to
put information into it or get information out of it. The computer needs to
receive data; to store programs, data and results; and to produce data (output)
whenever requested by the user. A peripheral can be either internal or external,
though we more commonly think of external devices when talking about
peripherals.
Most commonly, when we talk about peripheral devices we are talking about input
and output devices, but storage devices are also important peripherals. There is
a wide variety of input, output and memory devices in common use, each
specialising in a certain task.
2.2 OBJECTIVES
• To understand the meaning of input, output and storage devices.
• To realize the varieties of input, output and storage devices used in various
computers
• To know the advantages and disadvantages of various input, output and
storage devices.
2.3 CONTENTS
2.3.1 Input Devices
2.3.2 Output Devices
2.3.3 Storage Devices
2.3.1 INPUT DEVICES
An input device is a device that sends data to a computer and allows you to
interact with it, control it, and add new information. If a computer had no
input devices, it could still operate on its own, but its settings, bug fixes,
or other user interactions could not be carried out.
In terms of computing, an input device is any hardware equipment used to
send data to computers thereby facilitating us to interact with it.
It has the capability of converting the raw data into the format or language
which is computer-readable and finally delivers the translated data to the Central
Processing Unit (CPU) for further processing.
The input devices are categorized based on their mode of input, such as
keyboard devices, point-and-draw devices, and speech recognition devices.
Let us discuss some of the input devices that are commonly used in today’s
world.

Keyboard
A computer keyboard is an input device used to enter characters and
functions into the computer system by pressing buttons, or keys. It is the primary
device used to enter text. A keyboard typically contains keys for individual letters,
numbers and special characters, as well as keys for specific functions. A keyboard
is connected to a computer system using a cable or a wireless connection.
Most keyboards have a very similar layout. The individual keys for letters,
numbers and special characters are collectively called the character keys. The
layout of these keys is derived from the original layout of keys on a typewriter. The
most widely used layout in the English language is called QWERTY, named after
the sequence of the first six letters from the top left.
Mouse
The mouse is a small, movable device that lets you control a range of things on
a computer. Most types of mouse have two buttons, and some will have a wheel in
between the buttons. Most types of mouse connect to the computer with a cable,
and use the computer's power to work.
Some types of mouse are wireless. That means they do not permanently
connect to a computer with a cable. These types of mouse either need batteries to
run or require a recharging cable.
The mouse usually sits on the desk to the left or right of the keyboard. If you
write with your left hand, you should have the mouse to the left of the keyboard. If
you write with your right hand, have the mouse to the right of the keyboard.
A mouse is used to point at objects you see on the screen. By pointing at an
object, you tell the computer that you want to do something with that object.
The mouse pointer, or cursor, represents the mouse on the computer screen.
When you move the mouse across the top of a table, the cursor moves on the
computer screen in the same direction.
The cursor usually looks like an arrow, but it can change shape depending on
what it's pointing at. It's good to note that it's the very tip of the arrow that is the
sensitive part when clicking something on the screen.
Light pen
Light Pen (similar to the pen) is a pointing device which is used to select a
displayed menu item or draw pictures on the monitor screen. It consists of a
photocell and an optical system placed in a small tube. When its tip is moved over
the monitor screen, and pen button is pressed, its photocell sensing element
detects the screen location and sends the corresponding signals to the CPU.
Uses
• Light pens can be used to input coordinate positions by providing the
necessary arrangements.
• Depending on background colour or intensity, a light pen can be used as a
locator.
• It is used as a standard pick device with many graphics systems.
• It can be used as a stroke input device.
• It can be used as a valuator.
Track Ball
A trackball is a computer cursor control device used in many notebook and
laptop computers. The trackball is usually located in front of the keyboard toward
the user. Essentially, the trackball is an upside-down mouse that rotates in place
within a socket. The user rolls the ball to direct the cursor to the desired place on
the screen and can click one of two buttons (identical to mouse buttons) near the
trackball to select desktop objects or position the cursor for text entry.
Joy Stick
A joystick is an input device that controls a character or machine in a
computer program, such as a plane in a flight simulator. They look similar to the
control device you would find on an arcade game, but nearly always include extra
buttons for additional functionality.
The first joystick was invented at the U.S. Naval Research Laboratory by C. B.
Mirick and patented in 1926. It was a two-axis electronic joystick, similar to
the joysticks in use today, and was originally designed for remotely piloting
aircraft.
Scanner
The scanner is an input device. Data or text written on paper is fed to the
scanner, which converts the written information into an electronic format that
is stored in the computer. The input documents can contain text, handwritten
material, pictures, etc.
By storing a document in the computer, it becomes safe for a long period of
time and is preserved for the future. We can change the document when we need
to, and print it when needed.
Scanning can be of black-and-white or coloured pictures. On a stored picture,
2D or 3D rotations, scaling and other operations can be applied.
Types of image Scanner:
Flat Bed Scanner: It resembles a photocopy machine, with a glass plate on its
top that is covered by a lid. The document to be scanned is kept on the glass
plate, and light is passed along the underside of the plate, moving from left to
right. Scanning is done line by line, and the process is repeated until the
complete document is scanned. A document of 4" x 6" can be scanned within 20-25
seconds.
Hand Held Scanner: It has a number of LEDs (Light Emitting Diodes) arranged in
a small case. It is called a handheld scanner because it is held in the hand
while it performs scanning. For scanning, the scanner is moved over the document
from top to bottom with its light on, and it must be dragged very slowly. If the
dragging of the scanner over the document is not steady, the conversion will not
be correct.

Graphic Tablet
The digitizer is an operator input device which contains a large, smooth board
(similar in appearance to a mechanical drawing board) and an electronic tracking
device, which can be moved over the surface to follow existing lines. The
electronic tracking device contains a switch for the user to record the desired
x and y coordinate positions. The coordinates can be entered into the computer
memory or stored on an off-line storage medium such as magnetic tape.
Advantages
• Drawing can easily be changed.
• It provides the capability of interactive graphics.
Disadvantages
• Costly
• Suitable only for applications which require high-resolution graphics.
Microphone
A microphone is an input device that was developed by Emile Berliner in 1877.
It is used to convert sound waves into electric waves or input the audio into
computers. It captures audio by converting sound waves into an electrical signal,
which may be a digital or analog signal. This process can be implemented by a
computer or other digital audio devices.
Use of a microphone on the computer
• It is used for voice recording.
• It offers users the option of voice recognition.
• It allows users to record the sound of musical instruments.
• It enables users to chat online.
• It allows the use of VoIP (Voice over Internet Protocol).
• It is also used for Computer gaming.
• Furthermore, it can record voice for singing, podcasts, and dictation.
Magnetic Ink Character Reader
A magnetic ink character recognition line (MICR) is a line of characters on a
check printed with a unique ink that allows the characters to be read by a reader-
sorter machine. Introduction of the MICR reader-sorter process allowed check
processing to be automated while making it more difficult to counterfeit checks.
Optical Character Reader
Optical Character Recognition, or OCR, is a technology that enables you to
convert different types of documents, such as scanned paper documents, PDF files
or images captured by a digital camera into editable and searchable data.
It is a computer peripheral device enabling letters, numbers, or other
characters usually printed on paper to be optically scanned and input to

a storage device, such as magnetic tape. The device uses the process
of optical character recognition
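In practice, OCR is usually performed with a software library rather than a dedicated device. As one example, the open-source Tesseract engine can be called from Python through the pytesseract package. This is a minimal sketch: it assumes Tesseract and pytesseract are installed, and scanned_page.png is a hypothetical image file supplied by the user.

# Minimal OCR sketch using the open-source Tesseract engine.
# Assumes: pip install pytesseract pillow, the Tesseract engine installed,
# and an input image named scanned_page.png (hypothetical file).
from PIL import Image
import pytesseract

image = Image.open("scanned_page.png")      # the scanned paper document
text = pytesseract.image_to_string(image)   # recognize characters in the image

print(text)  # the editable, searchable text recovered from the scan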
Bar Code Reader
A barcode scanner, also called a point-of-sale (POS) scanner or a price
scanner, is a device used to capture and read information contained in a barcode.
The scanner consists of a light source, a lens and a light sensor that translates
optical impulses into electrical ones. They also contain decoder circuitry analyzing
the barcode’s image data provided by the sensor and sending that content to a
computer.
A barcode scanner works by directing a beam of light across the barcode and
measuring the amount of light that is reflected back. The dark bars on the barcode
will reflect less light than the white spaces between them. The scanner then
converts the light energy into electrical energy, which is then converted into data by
the decoder and forwarded to a computer.
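The decoder's job can be illustrated with a toy example. Here a scan line is modelled as a string of dark (1) and light (0) readings, and a small code table turns each 7-bit reflectance pattern into a digit. The table below is modelled on the EAN-13 left-hand patterns for the digits 0-2, but the point is only the principle.

# Toy barcode decoding: reflectance pattern -> bits -> digits.
# The 7-bit code table is illustrative (modelled on EAN-13 left-hand
# patterns for 0, 1 and 2); real symbologies define the full standard.
CODE_TABLE = {
    "0001101": "0",
    "0011001": "1",
    "0010011": "2",
}

def decode(scanline: str) -> str:
    """Split the scan line into 7-bit groups and look each one up."""
    digits = []
    for i in range(0, len(scanline), 7):
        group = scanline[i:i + 7]
        digits.append(CODE_TABLE.get(group, "?"))
    return "".join(digits)

# 1 = dark bar (little light reflected), 0 = light space (much reflected)
print(decode("0001101" + "0011001" + "0010011"))  # prints: 012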
The most common kinds of scanner used to read one-dimensional barcodes are pen
wands, slot scanners, Charge Coupled Device (CCD) scanners, image scanners and
laser scanners.
A pen wand scanner contains no moving parts and is known for its durability and
low cost. The pen needs to stay in direct contact with the barcode, be held at a
specific angle and be moved across the barcode at a certain speed.
A slot scanner remains stationary and the item with the barcode is pulled
through the slot by hand. Slot scanners are usually used to scan barcodes on
identification cards.
A CCD scanner has a better read-range and doesn’t involve contact with the
barcode. This makes it ideal for use in retail sales. Typically, a CCD scanner is used
as a “gun” type interface and has to be held no more than one inch from the
barcode. Several different readings are made to reduce the possibility of errors,
every time a barcode is scanned. A disadvantage to the CCD scanner is that it can’t
read a barcode that is wider than its input screen.
An image scanner, also known as a camera reader, uses a small video camera
to capture an image of the barcode and then it uses sophisticated digital image
processing techniques to decode the barcode. An image scanner can read a barcode
from about 3 to 9 inches away and usually costs less than a laser scanner.
A laser scanner can either be handheld or stationary and doesn’t need to be
close to the barcode in order to read it effectively. The scanner uses a system of
mirrors and lenses that allow it to read barcodes regardless of their position and it
can easily read up to 24 inches away from the barcode. A laser scanner may
perform up to 500 scans per second, to reduce the possibility of errors. Specialized
long-range scanners are capable of reading a barcode up to 30 feet away.
2D barcode
A 2D (two-dimensional) barcode is a graphical image that stores information
both horizontally, as is the case of one-dimensional barcodes, and vertically. As a
result of their construction, 2D barcodes can store up to 7,089 characters,
significantly greater storage than is possible with the 20-character capacity of a
one-dimensional barcode.
Optical Mark Reader
An optical scanner is a computer input device that uses a light beam to scan codes, text, or graphic images directly into a computer or computer system. Bar-
code scanners are used widely at point-of-sale terminals in retail stores. A
handheld scanner or bar-code pen is moved across the code, or the code itself is
moved by hand across a scanner built into a checkout counter or other surface,
and the computer stores or immediately processes the data in the bar code. After
identifying the product through its bar code, the computer determines its price and
feeds that information into the cash register. Optical scanners are also used
in fax machines and to input graphic material directly into personal computers.
Touch Panels
A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the screen. A touch screen registers input when a finger or other object comes in contact with the screen.
When the wave signals are interrupted by some contact with the screen, that location is recorded. Touch screens have long been used in military applications.
Voice Systems (Voice Recognition)
Voice Recognition is one of the newest, most complex input techniques used to
interact with the computer. The user inputs data by speaking into a microphone.
The simplest form of voice recognition is a one-word command spoken by one
person. Each command is isolated with pauses between the words.
Voice Recognition is used in some graphics workstations as input devices to
accept voice commands. The voice-system input can be used to initiate graphics
operations or to enter data. These systems operate by matching an input against a
predefined dictionary of words and phrases.
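As a rough sketch of how an application consumes voice input, the snippet below uses the third-party SpeechRecognition package (with PyAudio for microphone access); the choice of Google's free web recognizer is just one of several available back-ends:

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:      # requires the PyAudio package
        print("Say a command...")
        audio = recognizer.listen(source)

    try:
        # Match the spoken input against an online recognition service.
        command = recognizer.recognize_google(audio)
        print("You said:", command)
    except sr.UnknownValueError:
        print("The audio could not be understood.")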
Advantages
• More efficient device
• Easy to use
• Unauthorized speakers can be identified
Disadvantages
• Very limited vocabulary
• Voices of different operators can't be distinguished
2.3.2 OUTPUT DEVICES
Computer output devices receive information from the computer, and carry
data that has been processed by the computer to the user. Output devices provide
data in myriad different forms, some of which include audio, visual, and hard copy
media. The devices are usually used for display, projection, or for physical
reproduction. Monitors and printers are two of the most commonly-known output
devices used with a computer.
Computer output devices are all peripheral hardware, and are connected to
a computer by cables, or by wireless networking.
A computer can still function without an output device. However, without an
output device, there’s no way to determine what the computer is doing. There is no
indicator of errors, nor of the need for additional input. For example, if you detach
your monitor from your computer, the computer will still function, but it’s not going
to be very helpful.
Following are Output Devices commonly used in a computer.
• Monitors
• Printers
• Plotters
Monitor
CRT: This is one of the most traditional computer monitors. Big, bulky, and
perfect as training weights. Even though CRT monitors are rare and no longer
produced in large quantities, there are still some good sides to them.
At the heart of the CRT monitor are a cathode ray tube at the rear end and a
fluorescent screen at the front. The science geek-sounding cathode ray tube is
essentially a vacuum tube with an electron gun, which through another science
geek process called thermionic emission, generates a strong beam of electrons.
With the use of various electromagnetic devices, that beam of electrons can be
controlled then fired out of the tube at high speeds. The electrons finally hit the
fluorescent screen at various points to create an image.
Pros
• Because CRT monitors are deemed as “obsolete technology”, you can
probably find a used one at very affordable prices… or for free.
• No such thing as a “dead pixel” on a CRT monitor. Will go through that
below in LCD/LED monitors.
• Very wide viewing angle; You can clearly see the image on a CRT monitor
from almost any angle.
Cons
• Glare from the fluorescent screen.
• Big, bulky, occupies a lot of space.
• Power-hungry.
• If you are sharp, you will spot some flickering on a CRT monitor due to the
refresh rate.
• Leave the same image on the screen for too long, and the electrons will burn
it permanently onto the fluorescent screen.
Liquid Crystal Display (LCD)
This is the “flat piece of paper display” that took the market by storm,
otherwise also known as the “flat panel display”. Small, lightweight, somewhat
flexible, and power-saving – It is easy to see why they became so popular quickly.
There are a few layers on an LCD monitor.
• At the rear is a layer of cold cathode fluorescent (CCFL) backlighting.
• In the middle is a layer of liquid crystals sandwiched between polarizing
filters.
• Finally, in front is a layer of red, green, blue color filters.
Images are displayed on the screen by polarizing the light in varying amounts
and passing it through the liquid crystal layer.
Pros
• Compact, lightweight, space-saving. Perfect for both desktop and mobile
devices.
• LCD monitors don’t consume as much power as CRT monitors… To be
exact, they consume less than half of what CRTs do.
• Depending on the type of LCD screen, matte screens do not glare as CRTs
do.
• Minimal image flicker.
• No problems with image burn.
Cons
• Dead pixels can happen when the liquid crystals are manufactured badly.
These dead pixels will not respond to any signals, and will always be “stuck”
to one color… Which is very annoying when you even have one on the
screen.
• Due to the use of polarizing filters, the direction of the light is limited and we
will have problems with the viewing angle. In the older LCD monitors, you
can only see what is on the screen when sitting right in front of it.
Light-Emitting Diode Display (LED)
If you have been looking for a new monitor, or just poking around out of
curiosity, you will notice that people have been using the terms LED monitor and
LCD monitor interchangeably.
This is kind of confusing, but yes, LED monitors are essentially LCDs, with the exception that LED monitors no longer use a fluorescent backlight, but LED lights.
Common LED/LCD panel types include:
Twisted Nematic (TN) – Most affordable, high refresh rate, but bad color
reproduction and viewing angle.
In-plane Switching (IPS) – Slightly more expensive, but good viewing angles
and colors. Slightly lower refresh rate.
Multi-domain Vertical Alignment (MVA) – A cross between TN and IPS.
Patterned Vertical Alignment (PVA) – Also a cross between TN and IPS.
Projector
This is not really a “conventional monitor”, but you have probably seen one of
these around – Projectors are commonly used in schools, offices, and even in movie
theaters.
Remember how LCD monitors work, by controlling and bending light with liquid crystals to display the image? Modern LCD projectors pretty much employ the same idea. But instead of using “power saving” LED lights, projectors use a powerful light source such as halogen to project the image out onto a solid surface.
Pros
• Able to project images onto a wide area, turns an empty wall into a useful
screen.
• Used to be expensive, but getting affordable these days.
• Some projectors are portable these days, good for presentations and movie nights.
Cons
• No good in bright areas.
• Image quality and colors are definitely not as good as CRTs and LCDs…
Unless you are willing to spend thousands on a “movie grade” projector.
• The bright light source usually heats up quickly.
Comparison of CRT, LCD/LED, Projector
Size: CRT – big, bulky and heavy; LCD/LED – slim and lightweight; Projector – varies, from portable projectors to insanely large ones.
Price: CRT – can probably get a used one for cheap; LCD/LED – varies, the newer LED displays can be quite expensive; Projector – varies.
Viewing angle: CRT – very wide angle, definitely better than LCDs; LCD/LED – depends, IPS panels have decently good viewing angles; Projector – very wide angle, since it is a projection.
Glare: CRT – glass screen that causes glare; LCD/LED – depends, but matte LCD panels do not glare; Projector – none, unless you project onto glass.
Flicker: CRT – slower refresh rate, you might see flickers; LCD/LED – negligible; Projector – negligible.
Burn: CRT – causes image burn if you leave the same image on for too long; LCD/LED – not affected; Projector – the bulb burns out if you leave it on for too long.
Power: CRT – power-hungry; LCD/LED – uses about half the power of CRTs; Projector – power-hungry.
Printers
Printer is the most important output device, which is used to print data on
paper.
Types of Printers: There are many types of printers, which are classified on various criteria, as described below.
Impact Printers
The printers that print the characters by striking against the ribbon and onto
the papers are known as Impact Printers.
These Printers are of two types:
• Character Printers
• Line Printers
Non-Impact Printers
The printers that print the characters without striking against the ribbon and
onto the papers are called Non-Impact Printers. These printers print a complete
page at a time, therefore, also known as Page Printers.
Page Printers are of two types:
• Laser Printers
• Inkjet Printers
Dot Matrix Printers
A dot matrix printer prints in the form of dots. The printer has a head which contains nine pins, arranged one below the other. Each pin can be activated independently; when a pin is not activated, its tip stays inside the head, and when it is activated, it strikes out of the print head. Characters are typically formed on a 5 × 7 dot matrix.
Advantage
• Dot matrix printers print output as dots, so they can print characters of any shape. This allows the printer to print special characters, charts, graphs, etc.
• Dot Matrix Printers come under the category of impact printers. The printing
is done when the hammer pin strikes the inked ribbon. The impressions are
printed on paper. By placing multiple copies of carbon, multiple copies of
output can be produced.
• It is suitable for printing of invoices of companies.
Daisy Wheel Printers
The print head lies on a wheel, and the pins corresponding to characters fan out like the petals of a daisy; that is why it is called a daisy wheel printer.
Advantages
• More reliable than DMPs
• Better quality
Disadvantages
• Slower than DMPs
Drum Printers
These are line printers, which print one line at a time. The printer consists of a solid, cylindrical drum with characters embossed on it in the form of vertical bands; within each band the characters run around the circumference. Each line on the drum consists of 132 characters, and because there are 96 lines, the total number of characters is (132 × 96) = 12,672. The drum also contains a number of hammers.
Chain Printers
These are also line printers, used to print one line at a time. Basically, the chain consists of links, and each link contains one character. The printer can follow any character set style, i.e., 48, 64 or 96 characters. The printer also consists of a number of hammers.
Advantages
• The chain or band can be changed easily if damaged.
• It allows printing of different forms.
• Different scripts can be printed using this printer.
Disadvantages
• It cannot print charts and graphs.
• It cannot print characters of any desired shape.
• The chain printer is an impact printer; the hammer strikes, so it is noisy.
Non-Impact Printers
Inkjet Printers
These printers use a special ink called electrostatic ink. The printer head has special nozzles that drop ink onto the paper; the head contains up to 64 nozzles. The dropped ink is deflected by an electrostatic plate fixed outside the nozzle, and the deflected ink settles on the paper.
Advantages
• These produce higher quality output as compared to dot matrix printers.
• High-quality output can be produced using the 64-nozzle print head.
• Inkjet printers can print characters in a variety of shapes.
• They can print special characters.
• They can print graphs and charts.
Disadvantages
• Inkjet Printers are slower than dot matrix printers.
• The cost of inkjet is more than a dot matrix printer.
Laser Printers
These are non-impact page printers. They use laser light to produce the dots needed to form the characters to be printed on a page, hence the name laser printers.
The output is generated in the following steps:
Step 1: The bits of data sent by the processing unit act as triggers to turn the laser beam on and off.
Step 2: The output device has a drum which is cleaned and given a positive electric charge. To print a page, the modulated laser beam coming from the laser scans back and forth across the surface of the drum. The beam alters the charge on just those parts of the drum surface that it exposes, creating a difference in electric charge between exposed and unexposed areas of the drum surface.
Step 3: The laser exposed parts of the drum attract an ink powder known as
toner.
Step 4: The attracted ink powder is transferred to paper.
Step 5: The ink particles are permanently fixed to the paper by using either
heat or pressure technique.
Step 6: The drum rotates back to the cleaner where a rubber blade cleans off
the excess ink & prepares the drum to print the next page.
Plotters
Plotters are a special type of output device, suitable for applications such as:
• Architectural plans of buildings.
• CAD applications like the design of mechanical components of aircraft.
• Many engineering applications.
Advantages
• It can produce high-quality output on large sheets.
• It is used to produce high-precision drawings.
• It can produce graphics of various sizes.
• The speed of producing output is high.
Drum Plotter
It consists of a drum; the paper on which the design is made is kept on the drum, which can rotate in both directions. The plotter comprises one or more pens and penholders mounted perpendicular to the drum surface. The pens are kept in the holders, which can move from left to right as well as right to left. The graph plotting program controls the movement of the pens and the drum.
Flatbed Plotter
It is used to draw complex designs, graphs and charts. The flatbed plotter can be kept on a table. The plotter consists of pens and holders; there can be one or more pens and pen-holding mechanisms, and each pen can draw characters of various sizes. Each pen has ink of a different color, and the different colors help to produce multicolor designs on a document. The plotting area is also variable; it can vary from A4 up to 21' × 52'.
Flatbed plotters are used to draw cars, ships, airplanes, shoe and dress designs, and road and highway designs.
Graphics Software
There are two types of graphics software.
1. General Purpose Packages: Basic functions in a general package include those for generating picture components (straight lines, polygons, circles and other figures), setting color and intensity values, selecting views, and applying transformations. Examples of general purpose packages are GL (Graphics Library), GKS, PHIGS, PHIGS+, etc.
2. Special Purpose Packages: These packages are designed for non-programmers, so that such users can use the graphics package without knowing its inner details. Examples of special purpose packages are:
• Painting programs
• Packages used for business purposes
• Packages used for medical systems
• CAD packages
2.3.3 STORAGE DEVICES
A memory is just like a human brain. It is used to store data and instructions.
Computer memory is the storage space in the computer, where data is to be
processed and instructions required for processing are stored. The memory is
divided into large number of small parts called cells. Each location or cell has a
unique address, which varies from zero to memory size minus one. For example, if
the computer has 64k words, then this memory unit has 64 * 1024 = 65536
memory locations. The address of these locations varies from 0 to 65535.
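Worked in code, the same arithmetic looks like this (a small Python sketch of the example above):

    words = 64 * 1024             # a 64K-word memory has 65,536 locations
    lowest_address = 0
    highest_address = words - 1   # 65,535
    print(words, "locations, addressed from", lowest_address, "to", highest_address)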
Memory is primarily of three types –
• Cache Memory
• Primary Memory/Main Memory
• Secondary Memory
Cache Memory
Cache memory is a very high speed semiconductor memory which can speed
up the CPU. It acts as a buffer between the CPU and the main memory. It is used to
hold those parts of data and program which are most frequently used by the CPU.
The parts of data and programs are transferred from the disk to cache memory by
the operating system, from where the CPU can access them.
Advantages
The advantages of cache memory are as follows -
• Cache memory is faster than main memory.
• It consumes less access time as compared to main memory.
• It stores the program that can be executed within a short period of time.
• It stores data for temporary use.
Disadvantages
The disadvantages of cache memory are as follows –
• Cache memory has limited capacity.
• It is very expensive.
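Although hardware cache is built into the processor, the same idea (keep frequently used results in a small, fast store) can be illustrated in software. The sketch below is only an analogy, using Python's built-in functools.lru_cache:

    from functools import lru_cache

    @lru_cache(maxsize=128)   # small, fast, limited capacity, like a cache
    def expensive_lookup(n):
        print("computing", n, "...")   # printed only on a cache miss
        return n * n

    expensive_lookup(4)   # miss: computed and stored
    expensive_lookup(4)   # hit: returned instantly from the cache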
Primary Memory (Main Memory)
Primary memory holds only those data and instructions on which the
computer is currently working. It has a limited capacity and data is lost when
power is switched off. It is generally made up of semiconductor devices. These
memories are not as fast as registers. The data and instruction required to be
processed resides in the main memory. It is divided into two subcategories RAM
and ROM.
Characteristics of Main Memory
• These are semiconductor memories.
• Usually volatile memory.
• Data is lost in case power is switched off.
• It is the working memory of the computer.
• Faster than secondary memories.
• A computer cannot run without the primary memory.
Read Only Memory
ROM stands for Read Only Memory. The memory from which we can only read
but cannot write on it. This type of memory is non-volatile. The information is
stored permanently in such memories during manufacture. A ROM stores such
instructions that are required to start a computer. This operation is referred to
as bootstrap. ROM chips are not only used in the computer but also in other
electronic items like washing machine and microwave oven.
Various types of ROMs and their characteristics.
MROM (Masked ROM)
The very first ROMs were hard-wired devices that contained a pre-programmed
set of data or instructions. These kinds of ROMs are known as masked ROMs, and they are inexpensive.
PROM (Programmable Read Only Memory)
PROM is read-only memory that can be modified only once by a user. The user
buys a blank PROM and enters the desired contents using a PROM programmer. Inside
the PROM chip, there are small fuses which are burnt open during programming. It
can be programmed only once and is not erasable.
EPROM (Erasable and Programmable Read Only Memory)
EPROM can be erased by exposing it to ultra-violet light for a duration of up to
40 minutes. Usually, an EPROM eraser achieves this function. During
programming, an electrical charge is trapped in an insulated gate region. The
charge is retained for more than 10 years because the charge has no leakage path.
For erasing this charge, ultra-violet light is passed through a quartz crystal window
(lid). This exposure to ultra-violet light dissipates the charge. During normal use,
the quartz lid is sealed with a sticker.
EEPROM (Electrically Erasable and Programmable Read Only Memory)
EEPROM is programmed and erased electrically. It can be erased and
reprogrammed about ten thousand times. Both erasing and programming take
about 4 to 10 ms (millisecond). In EEPROM, any location can be selectively erased
and programmed. EEPROMs can be erased one byte at a time, rather than erasing
the entire chip. Hence, the process of reprogramming is flexible but slow.
Advantages of ROM
The advantages of ROM are as follows −
• Non-volatile in nature
• Cannot be accidentally changed
• Cheaper than RAMs
• Easy to test
• More reliable than RAMs
• Static and do not require refreshing
• Contents are always known and can be verified
Random Access Memory
RAM (Random Access Memory) is the internal memory of the CPU for storing
data, program, and program result. It is a read/write memory which stores data
until the machine is working. As soon as the machine is switched off, data is
erased.
Access time in RAM is independent of the address, that is, each storage
location inside the memory is as easy to reach as other locations and takes the
same amount of time. Data in the RAM can be accessed randomly but it is very
expensive.
RAM is volatile, i.e. data stored in it is lost when we switch off the computer or
if there is a power failure. Hence, a backup Uninterruptible Power System (UPS) is
often used with computers. RAM is small, both in terms of its physical size and in
the amount of data it can hold.
RAM is of two types −
• Static RAM (SRAM)
• Dynamic RAM (DRAM)
Static RAM (SRAM)
The word static indicates that the memory retains its contents as long as
power is being supplied. However, data is lost when the power gets down due to
volatile nature. SRAM chips use a matrix of 6-transistors and no capacitors.
Transistors do not require power to prevent leakage, so SRAM need not be refreshed
on a regular basis.
There is extra space in the matrix, hence SRAM uses more chips than DRAM
for the same amount of storage space, making the manufacturing costs higher.
SRAM is thus used as cache memory and has very fast access.
Characteristic of Static RAM
• Long life
• No need to refresh
• Faster
• Used as cache memory
• Large size
• Expensive
• High power consumption
Dynamic RAM (DRAM)
DRAM, unlike SRAM, must be continually refreshed in order to maintain the
data. This is done by placing the memory on a refresh circuit that rewrites the data
several hundred times per second. DRAM is used for most system memory as it is
cheap and small. All DRAMs are made up of memory cells, which are composed of
one capacitor and one transistor.
Characteristics of Dynamic RAM
• Short data lifetime
• Needs to be refreshed continuously
• Slower as compared to SRAM
• Used as RAM
• Smaller in size
• Less expensive
• Less power consumption
Secondary Memory
This type of memory is also known as external memory or non-volatile. It is
slower than the main memory. These are used for storing data/information
permanently. The CPU does not access these memories directly; instead, they are accessed via input-output routines. The contents of secondary memories are first transferred to the main memory, and then the CPU can access them. For example,
disk, CD-ROM, DVD, etc.
Characteristics of Secondary Memory
• These are magnetic and optical memories.
• It is known as the backup memory.
• It is a non-volatile memory.
• Data is permanently stored even if power is switched off.
• It is used for storage of data in a computer.
• A computer may run without secondary memory.
• Slower than primary memories.
Types of Storage
External Hard Drive
These are hard drives similar to the type that is installed within a desktop
computer or laptop computer. The difference being that they can be plugged in to
the computer or removed and kept separate from the main computer. They typically
come in two sizes:
Desktop External Hard drive: Uses a 3.5 inch hard drive similar to that used
in desktop computers.
Portable External Hard drive: Uses a 2.5 inch hard drive similar to that used
in laptops.
Desktop External Hard Drives are generally cheaper than Portable External
Hard Drives for the same storage space. Desktop External Hard Drives are usually
faster and more robust.
Capacity: 160GB to 3TB (approx 3000GB)
Connection: Most common connections to the computer are through a USB
2.0 or USB3.0 connection. May also be available in a SATA or eSATA connector.
Advantages
• Very good option for local backups of large amounts of data.
• The cheapest storage option in terms of dollars per GB (a cost-per-GB sketch follows below).
• Very reliable when handled with care.
Disadvantages
• Can be very delicate. May be damaged if dropped or through electrical surge.
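The cost-per-GB comparison mentioned above can be worked out with simple arithmetic; the prices in this Python sketch are purely hypothetical figures for illustration:

    # (capacity in GB, price in dollars) -- hypothetical figures only
    drives = {
        "Desktop external HDD (2 TB)":  (2000, 80.0),
        "Portable external HDD (1 TB)": (1000, 60.0),
        "External SSD (256 GB)":        (256, 70.0),
    }

    for name, (gb, price) in drives.items():
        print(name, "costs about", round(price / gb, 3), "dollars per GB")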
Solid State Drive (SSD)
Solid State Drives look and function similar to traditional mechanical/
magnetic hard drives but the similarities stop there. Internally, they are completely
different. They rely solely on semiconductors and electronics for data storage, making them more reliable and robust than traditional magnetic drives. Since they have no moving parts and no rotating platters, they use less power than traditional hard drives and are much faster too.
With the fall in the prices of solid state drives and their lower power usage, SSDs are used extensively in laptops and mobile devices. External SSDs are also a viable option for data backups.
Capacity: 64GB to 256GB.
Connections: USB 2.0/3.0 and SATA.
Advantages
• Faster read and write performance.
• More robust and reliable than traditional magnetic hard drives.
• Highly portable. Can be easily taken offsite.
Disadvantages
• Still relatively expensive when compared to traditional hard drives.
• Storage space is typically less than that of traditional magnetic hard drives.
Network Attached Storage (NAS)
A NAS is simply one or more regular IDE or SATA hard drives plugged into an array storage enclosure and connected to a network router or hub through an
Ethernet port. Some of these NAS enclosures have ventilating fans to protect the
hard drives from overheating.
Advantages
• Very good option for local backups especially for networks and small
businesses.
• As several hard drives can be plugged in, NAS can hold very large amounts
of data
• Can be set up with redundancy (RAID), increasing the reliability and/or the read and write performance. Depending on the RAID level used, the NAS can still function even if one hard drive in the RAID set fails, or two hard drives can be set up to double the read and write speed of a single hard drive.
• The drive is always connected and available to the network making the NAS
a good option for implementing automated scheduled backups.
Disadvantages
• Significantly more expensive than using single External Hard Drives.
• Difficult to bring offsite, making it very much a local backup, and hence still susceptible to events like theft, floods, fire, etc.
USB Thumb Drive or Flash Drive
These are similar to solid state drives, except that they are much smaller in size and capacity. They have no moving parts, making them quite robust. They are extremely portable and can fit on a keychain. They are ideal for backing up a small amount of data that needs to be brought with you on the go.
Capacity: 4GB to 64GB.
Advantages:
• The most portable storage option. Can fit on a keychain making it an offsite
backup when you bring it with you.
• Much more robust than traditional magnetic hard drives.
Disadvantages
• Relatively expensive per GB, so can only be used for backing up a small amount of data.
Optical Drive (CD/ DVD)
CDs and DVDs are ideal for storing songs, movies, media or software for distribution, or for giving to a friend, due to the very low cost per disk. They do
not make good storage options for backups due to their shorter lifespan, small
storage space and slower read and write speeds.
Capacity CD: 650MB to 900MB
Capacity DVD: 4.7GB to 17.08GB
Advantages
• Low cost per disk
Disadvantages
• Relatively shorter life span than other storage options.
• Not as reliable as other storage options like external hard disk and SSD. One
damaged disk in a backup set can make the whole backup unusable.
Cloud Storage
Cloud storage is storage space in a commercial data center, accessible from any computer with Internet access. It is usually provided by a service provider. A limited amount of storage space may be provided free, with more space available for a subscription fee. Examples of service providers are Amazon S3, Google Drive, SkyDrive, etc.
Advantages
• A very good offsite backup. Not affected by events and disasters such as
theft, floods, fire etc
Disadvantages
• More expensive than traditional external hard drives. Often requires an
ongoing subscription.
• Requires an Internet connection to access the cloud storage.
• Much slower than other local backups.
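As an illustration of programmatic access, the sketch below uploads a backup file to Amazon S3 using the third-party boto3 SDK; the bucket and file names are hypothetical, and real use requires AWS credentials configured on the machine:

    import boto3   # the AWS SDK for Python (third-party package)

    # "report.pdf" and "my-backup-bucket" are hypothetical names.
    s3 = boto3.client("s3")
    s3.upload_file("report.pdf", "my-backup-bucket", "backups/report.pdf")
    print("Backup uploaded to cloud storage.")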
2.4 REVISION POINTS
• Types of input devices and its uses
• Types of output devices and its uses
• Storage devices commonly used in the computers
2.5 INTEXT QUESTIONS
1. What do you mean by i/o devices?
2. List any five input and output devices and explain.
3. What are the various types of RAM?
4. What do you mean by external storage devices?
2.6 SUMMARY
An input device is any hardware equipment used to send data to a computer, thereby facilitating our interaction with it.
Commonly used input devices are the keyboard, mouse and scanner.
Computer output devices receive information from the computer, and carry
data that has been processed by the computer to the user.
Commonly used output devices are monitor, printer and plotter.
Computer memory is the storage space in the computer, where data is to be
processed and instructions required for processing are stored.
Main classifications of memory are primary, secondary and cache memory.
2.7 TERMINAL EXERCISE
1. ______________ devices are used to feed data to the computer.
2. Primary memory of the computer is __________________
3. _____________ memory is volatile in nature.
4. EPROM means _____________________
5. Device used to print architectural diagram of a building is ______________
2.8 SUPPLEMENTARY MATERIALS
1. https://teachcomputerscience.com/output-devices/
2. https://www.classmate4u.com/computer-input-devices/
3. http://theteacher.info/index.php/f451-fundamentals/3-1-1-
components/notes/50-i-o-s
2.9 ASSIGNMENTS
1. Explain the commonly used i/o and storage devices in a computer.
2.10 SUGGESTED READING/REFERENCE
1. Peter Norton, Introduction to Computers, 6th edition, Tata McGraw Hill,
New Delhi.
2. Alexis Leon and Mathews Leon, Fundamentals of Information Technology, Leon Tech World, New Delhi, 2008.
2.11 LEARNING ACTIVITIES
1. Observe the input, output and storage devices that you come across in your
daily life. Study the characteristics of each and discuss with your
classmates.
2.12 KEYWORDS
Input devices
Output devices
RAM – Random Access Memory
ROM – Read Only Memory
Cloud Storage
Network Attached Storage.
LESSON - 3
INTRODUCTION TO SOFTWARE
3.1 INTRODUCTION
When we speak about a computer, we usually think of the physical components of the computer and do not give much importance to its virtual or invisible components. Thanks to the influence of mobile devices, nowadays we do think about the software of a system. Without software, a computer is of no use and is considered e-waste. Many computers are discarded because of software problems, which can be rectified, and not because of hardware issues.
Software are the set of instructions that tell a computer what to do.
Software comprises the entire set of programs, procedures, and routines associated
with the operation of a computer system. The term was coined to differentiate these
instructions from hardware—i.e., the physical components of a computer system. A
set of instructions that directs a computer’s hardware to perform a task is called a
program, or software program.
3.2 OBJECTIVES
• To enrich the concept of software and its importance
• To throw insights about the types of software.
• To introduce operating system and enhance the knowledge about various
types of operating system
3.3 CONTENTS
3.3.1 Software
3.3.2 Types of Software
3.3.3 Operating System
3.3.4 UNIX
3.3.5 LINUX
3.3.6 Windows OS
3.3.7 Freeware
3.3.1 SOFTWARE
As defined in the introduction, software is the set of instructions that tells a computer what to do: the entire set of programs, procedures, and routines associated with the operation of a computer system, as distinct from the hardware. A set of instructions that directs a computer's hardware to perform a task is called a program, or software program.
Software is typically stored on an external long-term memory device, such as a
hard drive or magnetic diskette. When the program is in use, the computer reads it
from the storage device and temporarily places the instructions in random access
memory (RAM). The process of storing and then performing the instructions is
called “running,” or “executing,” a program. By contrast, software programs and
procedures that are permanently stored in a computer’s memory using a read-only
(ROM) technology are called firmware, or “hard software.”
The two main types of software are system software and application software.
System software controls a computer’s internal functioning, chiefly through
an operating system, and also controls such peripherals as monitors, printers, and
storage devices. Application software, by contrast, directs the computer to execute
commands given by the user and may be said to include any program that
processes data for a user. Application software thus includes word processors,
spreadsheets, database management, inventory and payroll programs, and many
other “applications.” A third software category is that of network software, which
coordinates communication between the computers linked in a network.
Software is the collection of data, programs, procedures, routines and
instructions that tell a computer or electronic device how to run, work and execute
specific tasks. This is in contrast to hardware, which is the physical system and
components that perform the work.
Software can take the form of one line of code or, like Microsoft’s Windows
operating system, span into millions.
Software also works with other software to join as a cohesive system. Your
smart phone is a collection of thousands of software components designed to work
together.
Code languages and styles vary in size and scope. The software used to work a
modern microwave is very different from the code that runs an Apple Mac.
3.3.2 TYPES OF SOFTWARE
Different types of software include:
• Application Software
• System Software
• Firmware
• Programming Software
• Driver Software
• Freeware
• Shareware
• Open Source Software
• Closed Source Software
• Utility Software
Application Software
Application software is a software program or group of programs designed for
end-users. There are many types of application software.
Types of Application Software and Examples
• Word Processing Software: Google Docs, Microsoft Word, WordPad and
Notepad
• Database Software: MySQL, Microsoft SQL Server, Microsoft Access, Oracle,
IBM DB2 and FoxPro
• Spreadsheet Software: Google Sheets, Apple Numbers and Microsoft Excel
• Multimedia Software: Media Player, Winamp, QuickTime and VLC Media
Player
• Presentation Software: Google Slides, Microsoft PowerPoint, Keynote, Prezi
• Enterprise Software: customer relationship management (CRM) software (HubSpot, Microsoft Dynamics 365), project management tools (Jira, Monday), marketing automation tools (Marketo, HubSpot), enterprise resource planning (ERP) software (SAGE, Oracle, Microsoft Dynamics), treasury management system (TMS) software (SAP S/4HANA Finance, Oracle Treasury), business intelligence (BI) software (SAP Business Intelligence, MicroStrategy, Microsoft Power BI)
• Information Worker Software: Documentation tools, resource management
tools
• Communication Software: Zoom, Google Meet, Skype
• Educational Software: Dictionaries – Encarta, Britannica; Mathematical:
MATLAB; Others: Google Earth, NASA World Wind
• Simulation Software: Flight and scientific simulators
• Content Access Software: Accessing content through media players, web
browsers
• Application Suites: Apache OpenOffice, Microsoft Office365, Apple’s iWork,
LibreOffice, G-Suite, Oracle E-Business Suite
• Software for Engineering and Product Development: IDE or Integrated
Development Environments
• Email Software: Microsoft Outlook, Gmail, Apple Mail
Benefits of Application Software
Applications are the lifeblood of our digital devices.
Mobile app developers create solutions to let businesses sell and market
themselves online. Financial applications run the stock market. The banking
system uses applications to transfer money and log transactions.
If a business needs a digital solution it usually comes in the form of an app.
System Software
System software provides a platform for other software and includes the
programs managing the computer itself, such as the computer’s operating system,
file management utilities and disk operating system (or DOS). The system’s files
consist of libraries of functions, system services, drivers for printers and other
hardware, system preferences and other configuration files. The programs in
system software encompass assemblers, compilers, file management tools, system
utilities and debuggers.
While application software is non-essential and won’t shut down your device
by being uninstalled, system software is essential and creates a platform that apps
sit inside.
Examples of System Software
System software runs things in the background and operating systems are an
example of system software.
For desktop computers, laptops and tablets:
• Microsoft Windows
• MacOS (for Apple devices)
• GNU/Linux
For smart phones:
• Apple’s iOS
• Google’s Android
• Windows Phone OS
Other examples include game engines, computational science software,
industrial automation software and software as a service application.
Other than operating systems, some people also classify programming software
and driver software as types of system software.
Benefits of System Software
Open-source operating systems let businesses create their own OS.
Firmware
Firmware is software that’s stored on a computer’s motherboard or chipset.
Its job is to ensure the device starts and works correctly. When you switch on your laptop,
the Basic Input Output System (BIOS) wakes everything up.
It checks the drive for errors then queries if the operating system is present. If
so, it then turns control over to the likes of Windows 10.
Programming Software
Software gets developed by using programming software.
Most code is written in English using a specific format or syntax. High-level programming languages are then converted into machine code, and it takes another type of software, called a compiler, to achieve this.
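Python itself illustrates the idea on a small scale: its built-in compile() function turns source text into a bytecode object, which the interpreter's virtual machine then executes. A minimal sketch:

    source = 'print("total:", 2 + 3)'

    # compile() converts the source text into a code object (Python bytecode).
    code_object = compile(source, "<demo>", "exec")

    # The virtual machine then executes the compiled code.
    exec(code_object)   # prints: total: 5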
Programming software, also known as a programming tool or software
development tool, is a program that assists software developers or programmers
with creating, debugging and maintaining other programs and applications.
Programming software is considered to be a subset of system software, although
there is debate regarding this.
Examples of Programming Software
It’s possible to write computer languages like Java or PHP in plain-text editor
apps, but more robust, industry-standard applications are available. Compilers,
assemblers, debuggers, interpreters etc. are examples of programming software.
Integrated development environments (IDEs) are combinations of all these software.
Most software developers use programming software apps like:
• GitHub
• GitLab
• Android Studio
• Visual Studio Code
• Eclipse
• XCode
• Notepad++
• Atom
They’re termed IDEs or integrated development environments and
programmers use them for a reason.
Benefits of Programming Software
IDEs have tools like highlighting errors in code. They also contain in-built
compilers and preview how the app will work before testing it on a phone.
Trying to track down a bug over thousands of lines of code is next to
impossible in a simple text editor. Yet an IDE makes that easier resulting in quicker
development time and robust software.
Driver Software
Driver software communicates with and controls the hardware devices and peripherals attached to a computer. It does this by gathering input from the OS
(operating system) and giving instructions to the hardware to perform an action or
other designated task.
Internal components like the hard drive and processor each require its own
driver. If the wrong software’s installed the device won’t work correctly.
In older versions of Windows, device drivers became the bane of office life.
New peripherals like a printer required the correct driver. When the CD went
missing it took forever to find the right driver software online.
Thankfully Windows and other operating systems install and manage drivers
behind the scenes. The result is an optimised and working machine.
Examples of Driver Software
All hardware devices require drivers. For example:
• Graphic cards
• Network cards
• Mouse and keyboard
When you insert a USB flash drive into your computer, the OS recognizes it as
a new device. The driver then gets installed automatically to make it functional.
Benefits of Driver Software
Drivers are part of the system software category. Without them, nothing would
work.
Hardware manufacturers are usually responsible for creating driver software.
However, Linux and Chromebook often get overlooked because of their small market share.
Thankfully the coding community comes to the rescue.
Someone writes the code to make the device work correctly on their system.
They then share the driver online for others to download and use.
Freeware
Freeware sounds like free software or open-source software but there’s a
difference.
Freeware software does not expose or share its source code. Yet the software
owner does not charge others to use it.
Freeware licences vary as to what the software can be used for and who can
share it.
Some developers only allow their freeware for private or personal use.
Businesses need a paid license or get written permission. An example of this
is GPT-3 – and only approved developers and marketers can get access to the
program.
Always read the small print and be wary of the copyright of freeware licenses.
Examples of Freeware
Freeware software examples cover a wide base of useful applications from
audio to virtual machines.
Benefits of Freeware
You pay nothing for fully developed software. You can uninstall it if you don’t
like the features. There are no companies ‘forcing’ you to upgrade.
Freeware also helps the online community to share and grow. Developers can
showcase their talents while businesses can avail of some excellent apps.
Shareware
Like freeware, shareware is free to use and share with others, but only for a
short time.
It acts as an evaluation. You can try some or all of the features before
committing to a purchase.
Examples of Shareware
WinZip is one of the most established shareware apps.
It started in 1991 when compression software wasn’t included in Windows.
Nearly thirty years later, it still sees high download volumes. The free trial is time-
limited but all versions include encryption.
Benefits of Shareware
Shareware lets you try the software for free before purchasing a full licence.
Some give a limited feature set or are time-locked. “Try before you buy” is a
great way to check if the software is right for your business’s needs.
Open Source Software
Open source means you can explore the actual code that the app was written
in.
Strict software licences restrict what another developer is able to do with the
code. However, the ethos behind open-source is to encourage development.
Open source means evolving the code to make it better for everyone.
Examples of Open Source Software
The Linux OS is the perfect example of open-source software.
Developers can download the source code and edit it as they see fit. New
flavours of Linux help target a certain need as a result.
Benefits of Open Source Software
Github.com is the top destination for coders to save and share their code.
Repositories are often open source and developers can find the right solution to
their issues easily. They can clone whole projects or download elements for free.
Closed Source Software
Most applications are closed source in that they do not expose the original
code.
Licenses are stringent. No unauthorized copying or cracking is allowed. The
app can be commercial or private but it requires payment of some kind to use.
Examples of Closed Source Software
Any app that hides or encrypts its source code is considered closed-source.
For example, Skype allows video conferencing. It’s owned by Microsoft and
although free to use, the corporation charges high-volume users a fee.
Benefits of Closed Source Software
Closed source software is designed for commercial use.
Developers are often employed to write code. That costs money. Therefore
companies don’t want their investment shared without recompense.
Utility Software
Utility software is designed to analyze and optimize a device.
These apps are usually bundled with an OS. They track performance and alert
the system if there’s a problem like overheating.
Examples of Utility Software
The Windows Task Manager shows all open processes in Windows. It details
performance over time and showcases how much memory each uses.
Utilities also include anti-virus software and backup apps.
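A small sketch of what such a monitoring utility does under the hood, assuming the third-party psutil package is installed:

    import psutil   # third-party package for querying system resources

    print("CPU usage:", psutil.cpu_percent(interval=1), "%")

    memory = psutil.virtual_memory()
    print("Memory used:", memory.percent, "%")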
Benefits of Utility Software
Keeping a close eye on overheating is essential as is scanning for malware.
Utility software helps to keep things stable. It’s designed to make your system
run smooth and not get damaged due to overuse.
3.3.3 OPERATING SYSTEM
An Operating System (OS) is system software that acts as an interface between
computer hardware components and the user. Every computer system must have
at least one operating system to run other programs. Applications like Browsers,
MS Office, Notepad Games, etc., need some environment to run and perform its
tasks.
The OS helps you to communicate with the computer without knowing how to
speak the computer's language. It is not possible for the user to use any computer
or mobile device without having an operating system.
History of OS
• Operating systems were first developed in the late 1950s to manage tape storage.
• The General Motors Research Lab implemented the first OS in the early 1950s for their IBM 701.
• In the mid-1960s, operating systems started to use disks.
• In the late 1960s, the first version of the Unix OS was developed.
• The first OS built by Microsoft was DOS. It was built in 1981 by purchasing the 86-DOS software from a Seattle company.
• The present-day popular OS Windows first came into existence in 1985, when a GUI was created and paired with MS-DOS.
Types of Operating System (OS)
Following are the popular types of OS (Operating System):
• Batch Operating System
• Multitasking/Time Sharing OS
• Multiprocessing OS
• Real Time OS
• Distributed OS
• Network OS
• Mobile OS
Batch Operating System
Some computer processes are very lengthy and time-consuming. To speed up processing, jobs with similar needs are batched together and run as a group.
The user of a batch operating system never directly interacts with the
computer. In this type of OS, every user prepares his or her job on an offline device
like a punch card and submit it to the computer operator.
Multi-Tasking/Time-sharing Operating systems
A time-sharing operating system enables people located at different terminals (shells) to use a single computer system at the same time. The processor time (CPU time) which is shared among multiple users is termed time sharing.
Real time OS
In a real time operating system, the time interval to process and respond to inputs is very small. Examples: military software systems and space software systems are real time OS examples.
Distributed Operating System
Distributed systems use many processors located in different machines to
provide very fast computation to its users.
Network Operating System
Network Operating System runs on a server. It provides the capability to serve
to manage data, user, groups, security, application, and other networking
functions.
Mobile OS
Mobile operating systems are those OS that are especially designed to power smartphones, tablets, and wearable devices.
The most famous mobile operating systems are Android and iOS, but others include BlackBerry, webOS, and watchOS.
Functions of Operating System
Some typical operating system functions may include managing memory, files,
processes, I/O system & devices, security, etc.
Below are the main functions of an operating system:
Process management: Process management helps OS to create and delete
processes. It also provides mechanisms for synchronization and communication
among processes.
Memory management: The memory management module performs the task of allocation and de-allocation of memory space to programs in need of this resource.
File management: It manages all the file-related related activities such as
organization storage, retrieval, naming, sharing, and protection of files.
Device Management: Device management keeps track of all devices. The module responsible for this task is known as the I/O controller. It also performs the task of allocation and de-allocation of the devices.
I/O System Management: One of the main objectives of any OS is to hide the peculiarities of hardware devices from the user.
Secondary-Storage Management: Systems have several levels of storage, which include primary storage, secondary storage, and cache storage. Instructions and data must be stored in primary storage or cache so that a running program can reference them.
Security: The security module protects the data and information of a computer system against malware threats and unauthorized access.
Command interpretation: This module interprets commands given by the user and uses system resources to process those commands.
Networking: A distributed system is a group of processors which do not share
memory, hardware devices, or a clock. The processors communicate with one
another through the network.
Job accounting: Keeping track of the time and resources used by various jobs and users.
Communication Management: Coordination and assignment of compilers, interpreters, and other software resources for the various users of the computer systems.
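To make the process-management function concrete, the sketch below asks the operating system to create a new process, run a command in it, and wait for it to finish; process creation, scheduling and cleanup are all handled by the OS (a Python sketch, purely for illustration):

    import os
    import subprocess
    import sys

    print("This program runs as process", os.getpid())

    # Ask the OS to create a child process, run a command in it,
    # and wait for it to finish.
    result = subprocess.run([sys.executable, "--version"],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)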
Features of Operating System (OS)
Here is a list of important features of an OS:
• Protected and supervisor mode
• Disk access and file systems
• Device drivers
• Networking
• Security
• Program execution
• Memory management
• Virtual memory
• Multitasking
• Handling I/O operations
• Manipulation of the file system
• Error detection and handling
• Resource allocation
• Information and resource protection
Characteristics of Operating System
• Allows you to hide details of hardware by creating an abstraction
• Easy to use with a GUI
• Offers an environment in which a user may execute programs/applications
• The operating system must make sure that the computer system is convenient to use
• The operating system acts as an intermediary among applications and the hardware components
• It provides the computer system resources in an easy-to-use format
• It acts as an intermediary between all the hardware and software of the system
• If any issue occurs in the OS, you may lose all the contents which have been stored in your system
• An operating system's software is quite expensive for a small organization, which adds a burden on them (example: Windows)
• It is never entirely secure, as a threat can occur at any time
Kernel in Operating System
The kernel is the central component of a computer operating system. The only job performed by the kernel is to manage the communication between the software and the hardware. A kernel is at the nucleus of a computer; it makes the communication between the hardware and software possible. While the kernel is the innermost part of an operating system, a shell is the outermost one.
Features of Kernel
• Low-level scheduling of processes
• Inter-process communication
• Process synchronization
• Context switching
Types of Kernels
There are many types of kernels in existence, but among them, the two most popular kernels are:
1. Monolithic
A monolithic kernel is a single code or block of the program. It provides all the
required services offered by the operating system. It is a simplistic design which
creates a distinct communication layer between the hardware and software.
2. Microkernels
Microkernel manages all system resources. In this type of kernel, services are
implemented in different address space. The user services are stored in user
address space, and kernel services are stored under kernel address space. So, it
helps to reduce the size of both the kernel and operating system.
3.3.4 UNIX
UNIX can be described as an operating system that is used in both workstations and servers. It has been important for the development of the internet and
the reconfiguration of computer systems. Unix was developed by Ken Thompson, Dennis Ritchie and other people working at AT&T laboratories. It was mainly intended to help programmers develop software, rather than to serve non-programmers; however, it is now used by everyone because of its simplicity. It is designed to be accessed by multiple people at a time, in a multitasking, time-sharing configuration.
UNIX Working
UNIX operating systems are built around several design principles, outlined below:
• The first is the use of plain text for data storage.
• The second is the hierarchical file system.
• The third is treating devices and inter-process communication as files.
• The fourth is the use of a large number of software tools, in other words small programs connected through a command interpreter using pipes.
UNIX Application Programming Interface
Many operating systems have a simple view of how applications behave. An application written for the operating system reads data from disk, terminal, printer or tape, does the required processing, and at the other end produces the required output on the disk, terminal, printer or tape. Generally, operating systems have well-implemented built-in facilities to support such applications.
As application demands rise, the need for more sophisticated features such as network access, inter-process communication and multitasking also increases. In many systems these features are poorly documented, cannot be understood easily, and are callable only from assembly language. Programs written with such features are tough to handle and maintain.
Because the UNIX OS is written in the C language, the “input-processing-output” application can be written and run easily without writing assembly language.
UNIX Architecture
The architecture of UNIX is basically divided into four main layers-
• Kernel
• Hardware
• Shell
• Files and directories
The first layer, the kernel, deals with all the hardware connections; all applications and commands are associated with it. Hardware is the second layer. The third layer, known as the shell, acts as the bridge between the user, user commands and predefined UNIX commands. Last but not least are the files and directories, which can be thought of as the UNIX commands, since these are UNIX programs in most cases. The final layer is the user. This means the whole operating system is visible to the user through the shell itself.
Kernel
Amongst the four layers, the kernel is the most powerful one. The kernel mainly contains utilities along with the master control program. The kernel program has the power to start or stop a program and even handle the file system. It also decides which program is to be selected when two programs try to access the same device at the same time. As the kernel has special access to the OS, this leads to the division of space between user-space and kernel-space.
The kernel structure is designed in such a way that it supports the primary UNIX requirements, which are divided into two categories, listed below:
• Process management.
• File management.
Process Management
Resource allocation for the CPU, memory and services are a few of the things that are handled under process management.
File Management
File management deals with managing all the data in files needed by the
process while communicating with devices and regulating data transmission.
The main operations done by the kernel are
• Kernel ensures the running of user-given programs is done on time.
• Plays a role in memory allocation.
• Manages the swapping between memory and disk.
• Transports data between peripherals.
• The kernel also requests service from the process.
That is the reason why the kernel is called the heart of the UNIX system. The kernel itself can be defined as a small program that contains enough data structures to pass arguments to a call, receive results from it, and process them on behalf of the calling process.
Hardware
Hardware can be defined as the system components that can be seen and touched,
such as the keyboard and monitor. In the OS architecture, hardware also includes
speakers, clocks and other devices.
Shell
The shell can easily be defined as the software program which acts as a
communication bridge between the kernel and the user. When the user gives
commands, the shell reads them, interprets them and then sends a request to
execute the program. When the program has been executed, the shell again sends
a request to display the output to the user on screen. The shell can also be called a
command interpreter. As noted above, the shell calls the kernel; there are almost
100 built-in calls.
Various tasks which the shell asks the kernel to perform are listed below (a brief
sketch in Python follows the list):
• File opening.
• File writing.
• Executing programs.
• Obtaining detailed information about the program.
• Termination of the process.
• Getting information about time and date.
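As a rough sketch of such requests, the example below uses Python's os module,
whose functions are thin wrappers around the corresponding UNIX system calls;
the file name is purely illustrative:

import os

# Ask the kernel to create and open a file for writing (file opening).
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT)

# Ask the kernel to write bytes through the file descriptor (file writing).
os.write(fd, b"hello from a system call\n")

# Close the descriptor when the work is finished.
os.close(fd)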
Unix Files and Directories
This layer includes user-written programs and shell programs, as well as the
libraries of UNIX.
Directories
Directories in UNIX have a name, a path, files and folders. The rules for files and
folders are the same. Both are stored in an upside-down hierarchical tree
structure. The main operations on directories are as follows:
• Displaying home directories.
• Copying files to other directories.
• Renaming directories.
• Deleting directories.
Files
These are the files that contain data, text and program instructions. The main
operations on files are as follows (a combined sketch follows the list):
• Storing user information, such as an image drawn or some content written.
• Files are mostly located under a directory.
• A file does not store the data of other files.
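The sketch below mirrors these file and directory operations using Python's
standard library in place of the equivalent UNIX commands; all names are
illustrative:

import os
import shutil

with open("note.txt", "w") as f:        # create a small file to work with
    f.write("sample data")

print(os.path.expanduser("~"))          # display the home directory
os.makedirs("reports", exist_ok=True)   # create a directory
shutil.copy("note.txt", "reports/")     # copy a file to another directory
os.rename("reports", "archive")         # rename the directory
shutil.rmtree("archive")                # delete the directory and its contents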
3.3.5 LINUX
Linux is an operating system or a kernel. It is distributed under an open
source license. Its functionality is quite similar to that of UNIX.
Linux is an operating system or a kernel which germinated as an idea in the
mind of young and bright Linus Torvalds when he was a computer science student.
He used to work on the UNIX OS (proprietary software) and thought that it needed
improvements. However, when his suggestions were rejected by the designers of
UNIX, he thought of launching an OS which will be receptive to changes,
modifications suggested by its users.
As time passed by, he collaborated with other programmers in places like MIT
and applications for Linux started to appear. So around 1991, a working Linux
operating system with some applications was officially launched, and this was the
start of one of the most loved and open-source OS options available today. The
earlier versions of Linux were not so user-friendly, as they were used mainly by
computer programmers, and Linus Torvalds never had it in mind to commercialize
his product.
This definitely curbed Linux's popularity as the commercially oriented
Windows operating system became famous. Nonetheless, the open-source aspect of
the Linux operating system made it more robust.
The main advantage of Linux was that programmers were able to use the
Linux Kernel to design their own custom operating systems. With time, a new range
of user-friendly OS's stormed the computer world. Now, Linux is one of the most
popular and widely used Kernel, and it is the backbone of popular operating
systems like Debian, Knoppix, Ubuntu, and Fedora. Nevertheless, the list does not
end here as there are thousands of OS's based on Linux which offer a variety of
functions to the users. The Linux kernel is normally used in combination with the
GNU project of Dr. Richard Stallman, so all modern distributions of Linux are
actually GNU/Linux distributions.
Properties of Linux
Linux Pros
A lot of the advantages of Linux are a consequence of Linux' origins, deeply
rooted in UNIX, except for the first advantage, of course:
1. Linux is free:
If you want to spend absolutely nothing, you don't even have to pay the price
of a CD. Linux can be downloaded in its entirety from the Internet completely for
free. No registration fees, no costs per user, free updates, and freely available
source code in case you want to change the behavior of your system.
2. Most of all, Linux is free as in free speech:
The license commonly used is the GNU Public License (GPL). The license says
that anybody who may want to do so, has the right to change Linux and eventually
to redistribute a changed version, on the one condition that the code is still
available after redistribution.
3. Linux is portable to any hardware platform:
A vendor who wants to sell a new type of computer and who doesn't know
what kind of OS his new machine will run (say the CPU in your car or washing
machine), can take a Linux kernel and make it work on his hardware.
4. Linux is secure and versatile:
The security model used in Linux is based on the UNIX idea of security, which
is known to be robust and of proven quality. But Linux is not only fit for use as a
fort against enemy attacks from the Internet: it will adapt equally to other
situations, utilizing the same high standards for security. Your development
machine or control station will be as secure as your firewall.
5. Linux is scalable:
From a Palmtop with 2 MB of memory to a petabyte storage cluster with
hundreds of nodes: add or remove the appropriate packages and Linux fits all. You
don't need a supercomputer anymore, because you can use Linux to do big things
using the building blocks provided with the system. If you want to do little things,
such as making an operating system for an embedded processor or just recycling
your old 486, Linux will do that as well.
6. The Linux OS and most Linux applications have very short debug-times:
Because Linux has been developed and tested by thousands of people, both
errors and people to fix them are usually found rather quickly. It sometimes
happens that there are only a couple of hours between discovery and fixing of a
bug.
Linux Cons
• There are far too many different distributions:
"Quot capites, tot rationes", as the Romans already said: the more people,
the more opinions. At first glance, the amount of Linux distributions can be
frightening, or ridiculous, depending on your point of view. But it also
means that everyone will find what he or she needs. You don't need to be an
expert to find a suitable release.
When asked, generally every Linux user will say that the best distribution is
the specific version he is using. So which one should you choose? Don't
worry too much about that: all releases contain more or less the same set of
basic packages. On top of the basics, special third party software is added
making, for example, TurboLinux more suitable for the small and medium
enterprise, RedHat for servers and SuSE for workstations. However, the
differences are likely to be very superficial. The best strategy is to test a
couple of distributions; unfortunately not everybody has the time for this.
Luckily, there is plenty of advice on the subject of choosing your Linux. A
quick search on Google, using the keywords "choosing your distribution"
brings up tens of links to good advice. The Installation HOWTO also
discusses choosing your distribution.
• Linux is not very user friendly and confusing for beginners:
It must be said that Linux, at least the core system, is less user-friendly to
use than MS Windows and certainly more difficult than MacOS, but... In
light of its popularity, considerable effort has been made to make Linux even
easier to use, especially for new users.
• Is an Open Source product trustworthy?
How can something that is free also be reliable? Linux users have the choice
whether to use Linux or not, which gives them an enormous advantage
compared to users of proprietary software, who don't have that kind of
freedom. After long periods of testing, most Linux users come to the
conclusion that Linux is not only as good, but in many cases better and faster
than the traditional solutions. If Linux were not trustworthy, it would
have been long gone, never knowing the popularity it has now, with millions
of users. Now users can influence their systems and share their remarks
with the community, so the system gets better and better every day.
3.3.6 WINDOWS OS
The oldest of all Microsoft’s operating systems is MS-DOS (Microsoft Disk
Operating System). MS-DOS is a text-based operating system. Users have to type
commands rather than use the friendlier graphical user interfaces (GUIs) available
today. Despite its very basic appearance, MS-DOS is a very powerful operating
system. There are many advanced applications and games available for MS-DOS. A
version of MS-DOS underpins Windows. Many advanced administration tasks in
Windows can only be performed using MS-DOS.
Microsoft Windows is one of the most popular Graphical User Interface (GUI).
Multiple applications can execute simultaneously in Windows, and this is known
as “Multitasking”.
Windows Operating System uses both Keyboard and mouse as input devices.
Mouse is used to interact with Windows by clicking its icons. Keyboard is used to
enter alphabets, numerals and special characters.
Some of the functions of Windows Operating System are:
• Access applications (programs) on the computer (word processing, games,
spread sheets, calculators and so on).
• Load any new program on the computer.
• Manage hardware such as printers, scanners, mouse, digital cameras etc.,
• File management activities (For example creating, modifying, saving, deleting
files and folders).
• Change computer settings such as colour scheme, screen savers of your
monitor, etc.
Windows versions through the years
1985: Windows 1.0
The history of Microsoft Windows dates back to 1985, when Microsoft released
Microsoft Windows Version 1.01. Microsoft’s aim was to provide a friendly user-
interface known as a GUI (graphical user interface) which allowed for easier
navigation of the system features. Windows 1.01 never really caught on. The
release was a shaky start for the tech giant. Users found the software unstable.
(The amazing thing about Windows 1.01 is that it fitted on a single floppy disk).
However, the point-and-click interface made it easier for new users to operate a
computer. Windows 1.0 offered many of the common components found in today's
graphical user interface, such as scroll bars and "OK" buttons.
1987: Windows 2.0 and 2.11
Windows 2.0 was faster, more stable and had more GUI features. The GUI
was very slightly improved but still looked too similar to Windows 1.01. The system
introduced the control panel and ran the first versions of Excel and Word. Windows
2.0 supported extended memory, and Microsoft updated it for compatibility with
Intel's 80386 processor. It was during this time that Microsoft became the largest
software vendor in the world, just as computers were becoming more commonplace.
The fact that Windows systems were user-friendly and relatively affordable was a
contributing factor to the growing PC market.
1990: Windows 3.0
Windows 3.0 supported 16 colors and included the casual games familiar to
most Windows users: Solitaire, Minesweeper and Hearts. Games that required more
processing power still ran directly on MS-DOS. Exiting to DOS gave games direct
hardware access and made more system resources available. Microsoft made an
enormous impression with Windows 3.0 and 3.1. Graphics and functionality were
drastically improved. The Windows 3 family provided multimedia capabilities as
well as vastly improved graphics and application support. Building on the success
of Windows 3.x, Microsoft released Microsoft Windows 3.11 for Workgroups. This
gave Windows the ability to function on a network.
1993: Windows New Technology (NT)
Windows NT's release marked the completion of a side project to build a new,
advanced OS. NT was 32-bit and had a hardware abstraction layer. DOS was
available through the command prompt, but it did not run the Windows OS.
Microsoft designed NT as a workstation OS for businesses rather than home users.
1995: Windows 95
In 1995 Windows went through a major revamp and Microsoft Windows 95
was released. This provided greatly improved multimedia and a much more
polished user interface. The now familiar desktop and Start Menu appeared.
Internet and networking support was built in. Although Windows 95 was a home
user operating system, it proved to be very popular in schools and businesses.
Windows 95 facilitated hardware installation with its Plug and Play feature.
Microsoft also unveiled 32-bit color depth, enhanced multimedia capabilities and
TCP/IP network support.
1998: Windows 98
Microsoft Windows 98 was very similar to Windows 95, it offered a much tidier
display and enhanced multimedia support. Microsoft improved speed and Plug and
Play hardware support in Windows 98. The company also debuted USB support
and the Quick Launch bar in this release. DOS gaming began to wane as Windows
gaming technology improved. The popularity of the OS made it an attractive target
for malware. Microsoft integrated web technology into the Windows user interface
and built its own web browser into the desktop.
2000: Windows Millennium Edition (ME)
Windows ME (Millennium Edition) was the last use of the Windows 95
codebase. Its most notable new feature was System Restore. Many customers found
this release to be unstable. Some critics said ME stood for "mistake edition."
Microsoft released the professional desktop OS Windows 2000 (initially called
NT 5.0) in the same year for the business market. Improvements to the overall
operating system allowed for easier configuration and installation. Microsoft based
this OS on the more stable Windows NT code. Some home users installed Windows
2000 for its greater reliability. Microsoft updated Plug and Play support, which
spurred home users to switch to this OS. One big advantage of Windows 2000 was
that operating system settings could be modified easily without the need to restart
the machine. Windows 2000 proved to be a very stable operating system that
offered enhanced security and ease of administration.
2001: Windows XP
Microsoft delivered Windows XP as the first NT-based system with a version
aimed squarely at the home user. Home users and critics rated XP highly. The
system improved Windows appearance with colorful themes and provided a more
stable platform. Microsoft virtually ended gaming in DOS with this release:
DirectX enabled features in 3D gaming that OpenGL had difficulty with.
first Windows support for 64-bit computing, but it was not very well supported,
lacking drivers and applications to run.
2006: Windows Vista
Microsoft hyped Windows Vista after the company spent a lot of resources to
develop a more polished appearance. Vista had interesting visual effects but the OS
was slow to start and run.
Vista's flaws -- coupled with the fact that many older computers lacked the
resources to run the system -- led to many home and business users staying with
XP.
2009: Windows 7
Microsoft built Windows 7 on the Vista kernel. Windows 7 picked up Vista's
visual capabilities but featured more stability. To many end users, the biggest
changes between Vista and Windows 7 were faster boot times, a new user interface
and the addition of Internet Explorer 8. With true 64-bit support and more DirectX
features, Windows 7 proved to be a popular release for Windows users.
2012: Windows 8
Microsoft released Windows 8 with a number of enhancements and debuted its
tile based Metro user interface. Windows 8 took better advantage of multicore
processing, solid-state drives (SSD), touchscreens and other alternate input
methods.
2015: Windows 10
Microsoft announced Windows 10 in September 2014, skipping Windows 9,
and launched it in July 2015. Version 10 includes the Start menu, which was absent
from Windows 8. A responsive design feature called Continuum adapts the interface
depending on whether the user works with a touch screen or a keyboard and
mouse for input. New features like an onscreen back button simplified touch input.
Microsoft designed the OS to have a consistent interface across devices including
PCs, laptops and tablets.
Difference between Firmware and Operating System
• Definition: Firmware is a kind of programming that is embedded on a chip in a
device and controls that specific device. An operating system (OS) provides
functionality over and above that which is provided by the firmware.
• Changeability: Firmware is encoded by the manufacturer of the IC and cannot
be changed. An OS is a program that can be installed by the user and can be
changed.
• Storage: Firmware is stored on non-volatile memory. An OS is stored on the
hard drive.
3.3.7 FREEWARE
Freeware (not to be confused with free software) is programming that is offered
at no cost and is a common class of small applications available for downloading
and use in most operating systems. Because it may be copyrighted, you may or
may not be able to reuse it in programming you are developing. The least restrictive
"no-cost" programs are un-copyrighted programs that are in the public domain.
When reusing public domain software in your own programs, it's good to know the
history of the program so that you can be sure it really is in the public domain.
Freeware means there are no paid licenses required to use the application, no
fees or donations necessary, no restrictions on how many times you can download
or open the program, and no expiration date.
However, it can still be restrictive in some ways. Free software, on the other
hand, is completely and totally void of restrictions and allows the user to do
absolutely whatever they want with the program.
Freeware vs. Free Software
Freeware is cost-free software and free software is copyright-free software. In
other words, freeware is software under copyright but available at no cost; free
software is software with no limitations or constraints, but might not be free in the
sense that there’s no price attached to it.
Free software can be modified and changed at the will of the user. This means
that the user can make changes to the core elements of the program, re-write
whatever they want, overwrite things, completely repurpose the program, fork it
into new software, etc.
For free software to truly be free requires the developer to release the program
without restrictions, which is normally accomplished by giving away the source
code. This type of software is often called open-source software, or free and open-
source software (FOSS).
Free software is also 100% legally redistributable and can be used to make a
profit. This is true even if the user didn’t spend anything for the free software or if
they make more money from the free software than what they paid for it. The idea
here is that the data is totally and completely available for whatever the user wants.
The following are considered the required freedoms that a user must be
granted for the software to be considered free software (Freedoms 1-3 require
access to the source code):
Freedom 0: You're able to run the program for any purpose.
Freedom 1: You can study how the program works, and change it to make it
do whatever you want.
Freedom 2: You're given the ability to share and make copies of the software
so that you can help others.
Freedom 3: You can improve on the program, and release your improvements
(and modified versions) to the public so that everyone benefits.
Some examples of free software include GIMP, LibreOffice, and Apache HTTP
Server.
A freeware application may or may not have its source code freely available.
The program itself costs nothing and is completely usable without charge, but that
doesn’t mean that the program is editable and can be transformed to create
something new, or inspected to learn more about the inner-workings.
Freeware might also be restrictive. For example, one program might be free
only for private use and stop working if it’s found to be used for commercial
purposes, or maybe the software is restricted in functionality because there’s a paid
edition available that includes more advanced features.
Unlike the rights given to free software users, freeware users’ freedoms are
granted by the developer; some developers might give more or less access to the
program than others. They also might restrict the program from being used in
particular environments, lock down the source code, etc.
CCleaner, Skype, and AOMEI Backupper are examples of freeware.
Why Developers Release Freeware
Freeware often exists to advertise a developer's commercial software. This is
usually done by giving out a version with similar but limited features. For example,
this edition might have advertisements or some features might be locked down until
a license is provided.
Some programs might be available at no cost because the
installer file advertises other paid-for programs that the user might click on to
generate revenue for the developer.
Other freeware programs might not be profit-seeking but instead, are provided
to the public for free for educational purposes.
3.4 REVISION POINTS
• System software
• Application software
• Operating system
• UNIX, LINUX
• Free ware
3.5 INTEXT QUESTIONS
1. Explain the various types of software.
2. Define OS.
3. What are the characteristics of UNIX?
4. List down the advantages of LINUX.
5. Write a short note on freeware.
3.6 SUMMARY
Application software is a software program or group of programs designed
for end-users.
System software provides a platform for other software and includes the
programs managing the computer itself.
Freeware software does not expose or share its source code. Yet the software
owner does not charge others to use it.
An Operating System (OS) is system software that acts as an interface
between computer hardware components and the user.
UNIX is designed to be accessed by multiple users at a time, supporting
multitasking in a time-sharing configuration.
Programmers were able to use the Linux kernel to design their own custom
operating systems.
3.7 TERMINAL EXERCISE
1. ________________ is an interface between user and the computer.
2. GUI means ________________
3. Set of instruction to run a computer is ______________
4. _____________ is a free operating system
5. _____________ has the power to start or stop a program and even handle the
file system
3.8 SUPPLEMENTARY MATERIALS
1. https://www.coderus.com/software-101-a-complete-guide-to-the-different-
types-of-software/
2. https://edu.gcfglobal.org/en/computerbasics/understanding-operating-
systems/1/
3.9 ASSIGNMENTS
1. What do you mean by an operating system? List down various operating
systems and explain its characteristics.
3.10 SUGGESTED READING/REFERENCE
1. Parameshwaran. R, Computer Application in Business, S Chand Company
Ltd. New Delhi, 2004.
2. https://www.tutorialspoint.com
3.11 LEARNING ACTIVITIES
1. Check the operating systems used in your computer, class lab, offices and
mobile. Study the features of each operating system.
3.12 KEYWORDS
Operating system
GUI - Graphical user interface
Kernel
Process management
File management
LESSON - 4
PROGRAMMING LANGUAGES
4.1 INTRODUCTION
Computer programming languages allow us to give instructions to a computer
in a language the computer understands. Just as many human-based languages
exist, there are arrays of computer programming languages that programmers can
use to communicate with a computer. The portion of the language that a computer
can understand is called a “binary.” Translating programming language into binary
is known as “compiling.” Each language, from C Language to Python, has its own
distinct features, though many times there are commonalities between
programming languages.
These languages allow computers to quickly and efficiently process large and
complex swaths of information. For example, if a person is given a list of
randomized numbers ranging from one to ten thousand and is asked to place them
in ascending order, chances are that it will take a sizable amount of time and
include some errors.
4.2 OBJECTIVES
• To understand the purpose of computer language
• To understand the evolution of programming language
• To know the programming languages that are commonly used
4.3 CONTENTS
4.3.1 Computer Language
4.3.2 Evolution of Programming Languages
4.3.3 Most Commonly Used Programming Language
4.3.4 Compiler
4.3.5 Interpreter
4.3.1 COMPUTER LANGUAGE
A computer language is a method of communication with a computer. Types of
computer languages include:
• Construction language, all forms of communication by which a human can
specify an executable problem solution to a computer
o Command language, a language used to control the tasks of the
computer itself, such as starting other programs
o Configuration language, a language used to write configuration files
o Programming language, a formal language designed to communicate
instructions to a machine, particularly a computer
• Markup language, a grammar for annotating a document in a way that is
syntactically distinguishable from the text, such as HTML
• Modeling language, a formal language used to express information or
knowledge, often for use in computer system design
o Hardware description language, used to model integrated circuits
• Page description language, describes the appearance of a printed page at a
higher level than an actual output bitmap
• Query language, a language used to make queries in databases and
information systems
• Simulation language, a language used to describe simulations
• Specification language, a language used to describe what a system should
do
• Style sheet language, a computer language that expresses the presentation
of structured documents, such as CSS
• Transformation language, designed to transform some input text in a certain
formal language into a modified output text that meets some specific goal
4.3.2 EVOLUTION OF PROGRAMMING LANGUAGES
A programming language is mainly used to develop desktop applications,
websites, and mobile applications.
Types of programming language
1. Low-level programming language
Low-level language is machine-dependent (0s and 1s) programming language. The
processor runs low-level programs directly without the need of a compiler or
interpreter, so programs written in a low-level language can run very fast.
Low-level language is further divided into two parts:
i. Machine Language
Machine language is a type of low-level programming language. It is also called
machine code or object code. Machine language is easier to read because it is
normally displayed in binary or hexadecimal (base 16) form.
require a translator to convert the programs because computers directly
understand the machine language programs.
The advantage of machine language is that it helps the programmer to execute
the programs faster than the high-level programming language.
ii. Assembly Language
Assembly language (ASM) is also a type of low-level programming language that is
designed for specific processors. It represents the set of instructions in a symbolic
and human-understandable form. It uses an assembler to convert the assembly
language to machine language.
Each personal computer has a microprocessor that manages the computer's
arithmetical, logical, and control activities.
Each family of processors has its own set of instructions for handling various
operations such as getting input from keyboard, displaying information on screen
and performing various other jobs. These set of instructions are called 'machine
language instructions'.
A processor understands only machine language instructions, which are
strings of 1's and 0's. However, machine language is too obscure and complex for
using in software development. So, the low-level assembly language is designed for
a specific family of processors that represents various instructions in symbolic code
and a more understandable form.
Advantages of Assembly Language
Having an understanding of assembly language makes one aware of −
• How programs interface with OS, processor, and BIOS;
• How data is represented in memory and other external devices;
• How the processor accesses and executes instruction;
• How instructions access and process data;
• How a program accesses external devices.
Other advantages of using assembly language are −
• It requires less memory and execution time;
• It allows hardware-specific complex jobs in an easier way;
• It is suitable for time-critical jobs;
• It is most suitable for writing interrupt service routines and other memory
resident programs.
2. High-level programming language
High-level programming language (HLL) is designed for developing user-
friendly software programs and websites. This programming language requires a
compiler or interpreter to translate the program into machine language (execute the
program).
The main advantage of a high-level language is that it is easy to read, write,
and maintain.
High-level programming language includes Python, Java, JavaScript, PHP, C#,
C++, Objective C, Cobol, Perl, Pascal, LISP, FORTRAN, and Swift programming
language.
A high-level language is further divided into three parts:
i. Procedural Oriented programming language
Procedural Oriented Programming (POP) language is derived from structured
programming and based upon the procedure call concept. It divides a program into
small procedures called routines or functions.
Procedural Oriented programming language is used by a software programmer to
create a program using a programming editor or IDE, such as Adobe Dreamweaver
or Microsoft Visual Studio.
The advantage of a POP language is that it helps programmers to easily track the
program flow, and code can be reused in different parts of the program.
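A minimal sketch of the procedural style in Python, with the program divided into
small routines; the figures and names are invented for illustration:

# Procedural style: the program is a sequence of calls to small routines.
def gross_salary(basic, allowance):
    return basic + allowance

def net_salary(gross, tax_rate):
    return gross * (1 - tax_rate)

gross = gross_salary(30000, 5000)
print("Net salary:", net_salary(gross, 0.1))   # the routines can be reused elsewhere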
ii. Object-Oriented Programming language
Object-Oriented Programming (OOP) language is based upon the objects. In
this programming language, programs are divided into small parts called objects. It
is used to implement real-world concepts like inheritance, polymorphism and
abstraction in the program, to make the program reusable, efficient and
easy to use.
The main advantage of object-oriented programming is that OOP is faster and
easier to execute, maintain, modify, as well as debug.
Example: C++, Java, Python, C#, etc.
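A brief Python sketch of these ideas, using invented example classes to show
inheritance and polymorphism:

class Account:                        # a simple base class
    def __init__(self, balance):
        self.balance = balance

    def interest(self):
        return self.balance * 0.04

class SavingsAccount(Account):        # inheritance: reuses Account
    def interest(self):               # polymorphism: overrides the base method
        return self.balance * 0.06

for acct in (Account(1000), SavingsAccount(1000)):
    print(type(acct).__name__, acct.interest())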
iii. Natural language
Natural language is a part of human languages such as English, Russian,
German, and Japanese. It is used by machines to understand, manipulate, and
interpret human language. It is used by developers to perform tasks such as
translation, automatic summarization, Named Entity Recognition (NER),
relationship extraction, and topic segmentation.
The main advantage of natural language is that it helps users to ask questions
about any subject and get a direct response within seconds.
3. Middle-level programming language
Middle-level programming language lies between the low-level programming
language and high-level programming language. It is also known as the
intermediate programming language and pseudo-language.
A middle-level programming language's advantages are that it supports the
features of high-level programming, it is a user-friendly language, and closely
related to machine language and human language.
4.3.3 MOST COMMONLY USED PROGRAMMING LANGUAGE
As we all know, the programming language makes our life simpler. Currently,
all sectors (like education, hospitals, banks, automobiles, and more) completely
depend upon the programming language.
There are dozens of programming languages used by the industries. Some
most widely used programming languages are given below -
Python
Python is one of the most widely used user-friendly programming languages. It
is an open-source and easy to learn programming language developed in the 1990s.
It is mostly used in machine learning, artificial intelligence, Big Data, GUI-based
desktop applications, and robotics (a short example follows the lists below).
Advantages
• Python is easy to read, easy to understand, and easy to write.
• It integrates with other programming languages like C, C++, and Java.
• Python executes code line-by-line, so it is easy for the programmer to find
the error that occurred in the code.
• Python is platform-independent means you can write code once and run it
anywhere.
Disadvantages
• Python is not suitable for developing mobile applications and games.
• Python works with the interpreter. That's why it is slower than other
programming languages like C and C++.
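A short sketch of Python's readable style (the data is invented for illustration):

sales = [12000, 8500, 15200]     # monthly sales figures
total = sum(sales)
average = total / len(sales)
print("Total:", total, "Average:", round(average, 2))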
Java
Java is a simple, secure, platform-independent, reliable, architecture-neutral
high-level programming language developed by Sun Microsystems in 1995. Now,
Java is owned by Oracle. It is mainly used to develop bank, retail, information
technology, android, big data, research community, web, and desktop applications.
Advantages
• Java is easy to write, compile, learn, and debug as compared to other
programming languages.
• It provides an ability to run the same program on different platforms.
• It is a highly secured programming language because in java, there is no
concept of explicit pointers.
• It is capable of performing multiple tasks at the same time.
Disadvantages
• Java consumes more memory and is slower than other programming
languages like C or C++.
• It does not provide a backup facility.
C
C is a popular, simple, and flexible general-purpose computer programming
language. Dennis M. Ritchie developed it in 1972 at AT&T. It is a combination of both
low-level programming language as well as a high-level programming language. It is
used to design applications like Text Editors, Compilers, Network devices, and
many more.
Advantages
• C language is easy to learn.
• It is a fast, efficient, portable, easy-to-extend, powerful, and flexible
programming language.
• It is used to perform complex calculations and operations such as MATLAB.
• It provides dynamic memory allocation to allocate memory at the run time.
Disadvantages
• In the C programming language, it is very difficult to find errors.
• C does not support OOP concepts such as constructors, destructors,
abstraction, polymorphism, encapsulation, and namespaces.
C++
C++ is one of the thousands of programming languages that we use to develop
software. C++ programming language is developed by Bjarne Stroustrup in 1980. It
is similar to the C programming language but also includes some additional
features such as exception handling, object-oriented programming, type checking,
etc.
Advantages
• C++ is a simple and portable structured programming language.
• It supports OOPs features such as Abstraction, Inheritance, Encapsulation.
• It provides high-level abstraction, is useful for low-level programming, and is
efficient for general-purpose use.
• C++ is more compatible with the C language.
Disadvantages
• C++ programming language is not secured as compared to other
programming languages like Java or Python.
• C++ does not support garbage collection.
• It is difficult to debug large as well as complex web applications.
C#
C# (pronounced as C sharp) is a modern, general-purpose, and object-oriented
programming language used with XML based Web services on the .NET platform. It
is mainly designed to improve productivity in web applications. It is easier to learn
for those users who have sufficient knowledge of common programming languages
like C, C++, or Java.
Advantages
• C# is a modern, type-safe, easy, fast, and open-source programming
language that is easily integrated with Windows.
• The maintenance cost of C# (C sharp) is lower than that of the C++
programming language.
• C# is a pure object-oriented programming language.
• C# includes a strong memory backup facility. That's why it avoids the
problem of memory leakage.
Disadvantages
• C# is less flexible because it is completely based on Microsoft .Net
framework.
• In C#, it is difficult to write, understand, debug, and maintain multithreaded
applications.
JavaScript
JavaScript is a type of scripting language that is used on both client-side as
well as a server-side. It is developed in the 1990s for the Netscape Navigator web
browser. It allows programmers to implement complex features to make web pages
alive. It helps programmers to create dynamic websites, servers, mobile
applications, animated graphics, games, and more.
Advantages
• JavaScript helps us to add behavior and interactivity on the web page.
• It can be used to decrease the loading time from the server.
• It has the ability to create attractive, dynamic websites, and rich interfaces.
• JavaScript is a simple, versatile, and lightweight programming language.
• JavaScript and its syntax are easy to understand.
Disadvantages
• JavaScript is completely based on the browser.
• It does not support multiple inheritance.
• It is less secure compared to other programming languages.
R
Currently, R programming is one of the popular programming languages that
is used in data analytics, scientific research, machine learning algorithms, and
statistical computing. It is developed in 1993 by Ross Ihaka and Robert Gentleman.
It helps marketers and data scientists to easily analyze, present, and visualize data.
Advantages
• R programming provides extensive support for Data Wrangling.
• It provides an easy-to-use interface.
• It runs on any platform like Windows, Linux, and Mac.
• It is an open-source and platform-independent programming language.
Disadvantages
• R programming does not support 3D graphics.
• It is slower than other programming languages.
PHP
PHP stands for Hypertext Preprocessor. It is an open-source, powerful server-
side scripting language mainly used to create static as well as dynamic websites. It
is developed by Rasmus Lerdorf in 1994. Inside PHP, we can also
write HTML, CSS, and JavaScript code. To save a PHP file, the file extension .php is used.
Advantages
• PHP is a more secure and easy-to-use programming language.
• It supports powerful online libraries.
• It can be run on a variety of operating systems such as Windows, Linux, and
Mac.
• It provides excellent compatibility with cloud services.
Disadvantages
• PHP is not capable of handling a large number of applications and not
suitable for large applications.
• It is quite difficult to maintain.
Go
Go or Golang is an open-source programming language. It is used to build
simple, reliable, and efficient software. It is developed by Robert Griesemer, Rob
Pike, and Ken Thompson in 2007.
Advantages
• Go language is easy to learn and use.
• It comes with in-built testing tools.
• Go is a fast programming language.
Disadvantages
• Go language does not support generics.
• It does not support error handling.
• It lacks frameworks.
Ruby
Ruby is an open-source, general-purpose, and pure object-oriented
programming language released in 1993. It is used in front-end and back-end web
development. It is mainly designed to write CGI (Common Gateway Interface)
scripts.
Advantages
• Ruby supports various GUI (Graphical User Interface) tools like GTK and
OpenGL.
• It is used to develop both internet as well as intranet applications.
• The code written in Ruby is small and contains fewer lines.
Disadvantages
• Ruby is slower than other programming languages.
• It is very difficult for programmers to debug the code written in Ruby.
4.3.4 COMPILER
A compiler is a software program that is responsible for changing initial
programmed code into a more basic machine language closer to the “bare metal” of
the hardware, and more readable by the computer itself. A high-level source
code that is written by a developer in a high-level programming language gets
translated into a lower-level object code by the compiler, to make the result
“digestible” to the processor.
Formally, the output of the compilation is called object code or sometimes an
object module. The object code is machine code that the processor can perform one
instruction at a time.
Compilers are needed because of the way that a traditional processor executes
object code. The processor uses logic gates to route signals on a circuit board,
manipulating binary high and low signals to work the computer’s arithmetic logic
unit. But that’s not how a human programmer builds the code: unlike this basic,
binary machine language, the initial high-level code consists of variables,
commands, functions, calls, methods and other assorted fixtures represented in a
mixture of arithmetic and lexical syntax. All of that needs to be put into a form that
the computer can understand in order to execute the program.
A compiler executes four major steps:
Scanning: The scanner reads one character at a time from the source code
and keeps track of which character is present in which line.
Lexical Analysis: The compiler converts the sequence of characters that
appear in the source code into a series of strings of characters (known as tokens),
which are associated by a specific rule by a program called a lexical analyzer. A
symbol table is used by the lexical analyzer to store the words in the source code
that correspond to the token generated.
Syntactic Analysis: In this step, syntax analysis is performed, which involves
preprocessing to determine whether the tokens created during lexical analysis are
in proper order as per their usage. The correct order of a set of keywords, which can
yield a desired result, is called syntax. The compiler has to check the source code to
ensure syntactic accuracy.
Semantic Analysis: This step consists of several intermediate steps. First, the
structure of tokens is checked, along with their order with respect to the grammar
in a given language. The meaning of the token structure is interpreted by
the parser and analyzer to finally generate an intermediate code, called object code.
The object code includes instructions that represent the processor action for a
corresponding token when encountered in the program. Finally, the entire code is
parsed and interpreted to check if any optimizations are possible. Once
optimizations can be performed, the appropriate modified tokens are inserted in the
object code to generate the final object code, which is saved inside a file.
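To see what the scanning and lexical analysis steps produce, Python's built-in
tokenize module can display the token stream for a line of source code. This is a
minimal sketch, with an invented statement, using Python's own front end as a
stand-in for the lexical analyzer described above:

import io
import tokenize

source = "total = price * quantity + 10\n"

# The lexical analyzer turns the stream of characters into tokens.
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))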
4.3.5 INTERPRETER
An interpreter is a computer program that directly executes instructions
written in a programming or scripting language, without requiring them previously
to have been compiled into a machine language program. An interpreter generally
uses one of the following strategies for program execution:
• Parse the source code and perform its behavior directly;
• Translate the source code into some efficient intermediate representation and
immediately execute this (illustrated below);
• Explicitly execute stored precompiled code made by a compiler which is part of
the interpreter system.
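Python itself illustrates the second strategy: its interpreter first translates source
code into an intermediate representation (bytecode) and then executes it. The
standard library's dis module displays that intermediate form; a small sketch with
an invented function:

import dis

def area(radius):
    return 3.14 * radius * radius

# Show the bytecode (intermediate representation) that the
# interpreter actually executes.
dis.dis(area)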
Compilers and interpreters are programs that help convert a high-level language
(source code) into machine code that can be understood by computers.
Computer programs are usually written in high-level languages. A high-level
language is one that can be understood by humans.
words and phrases from the languages in common use – English or other languages
for example. However, computers cannot understand high level languages as we
humans do. They can only understand the programs that are developed in binary
systems, known as machine code. To start with, a computer program is usually
written in high level language described as a source code. These source codes must
be converted into machine language and here comes the role of compilers and
interpreters.
Differences between Interpreter and Compiler
• An interpreter translates just one statement of the program at a time into
machine code. A compiler scans the entire program and translates the whole
of it into machine code at once.
• An interpreter takes very little time to analyze the source code, but the overall
time to execute the process is much slower. A compiler takes a lot of time to
analyze the source code, but the overall time taken to execute the process is
much faster.
• An interpreter does not generate an intermediary code, and hence is highly
efficient in terms of memory. A compiler always generates an intermediary
object code, which needs further linking, so more memory is needed.
• An interpreter keeps translating the program until the first error is
encountered; it then stops working, which makes debugging easy. A compiler
generates its error messages only after it scans the complete program, so
debugging is relatively harder.
• Interpreters are used by programming languages like Ruby and Python, for
example. Compilers are used by programming languages like C and C++, for
example.
4.4 REVISION POINTS
• Computer programming
• Programming Language
• Machine language
• Assembly Language
• Compiler and interpreter
4.5 INTEXT QUESTIONS
1. What are the requisites for computer programming?
2. Write a short note on assembly language.
3. Explain the features of any five high level languages.
4. Write down the difference between an interpreter and compiler.
4.6 SUMMARY
A computer language is a method of communication with a computer
Low-level language is machine-dependent (0s and 1s) programming language
Assembly language represents the set of instructions in a symbolic and
human-understandable form
Middle-level programming language is known as the intermediate
programming language and pseudo-language.
High-level programming language (HLL) is designed for developing user-
friendly software programs and websites
4.7 TERMINAL EXERCISE
1. Machine language uses ___ and ____ for programming.
2. ___________ translates just one statement of the program at a time into
machine code
3. ____________ scans the entire program and translates the whole of it into
machine code at once
4. PHP stands for ______________
5. Object-Oriented Programming (OOP) language is based upon the
______________
4.8 SUPPLEMENTARY MATERIALS
1. https://www.computerscience.org/resources/computer-programming-
languages/
2. https://en.wikipedia.org/wiki/Computer_language
4.9 ASSIGNMENTS
1. Write an essay about commonly used high level programming language.
4.10 SUGGESTED READING/REFERENCE
1. Rajaram. V, Introduction to Computer, 7th edition, Tata McGraw Hill, 2013.
2. http://gpnanakpur.ac.in/wp-
content/uploads/2019/04/computerfundamentals-e-notes.pdf
4.11 LEARNING ACTIVITIES
1. Search various resources and find how the applications and software are
developed. Have a discussion with programming professionals to learn the
updates.
4.12 KEYWORDS
Machine language
Assembly language
OOP – Object Oriented Programming
Compiler
Interpreter
LESSON - 5
SYSTEM DEVELOPMENT PROCESS
5.1 INTRODUCTION
Systems development is the process of defining, designing, testing and
implementing a new software application or program. It can include the internal
development of customized systems, the creation of database systems or the
acquisition of third party developed software.
Organizations aim to produce high quality systems that meet or exceed
customer expectations, based on customer requirements, by delivering systems
which move through each clearly defined phase, within scheduled time-frames and
cost estimates. They also adhere to important phases that are essential for
developers, such as planning, analysis, design and implementation. We shall
discuss some of the models used for software development process in this lesson.
5.2 OBJECTIVES
• To understand the common steps in software development
• To study various models used for software development
5.3 CONTENTS
5.3.1 Software Development Life Cycle
5.3.2 Waterfall Model
5.3.3 Iterative Model
5.3.4 Spiral Model
5.3.5 V-Model
5.3.6 Big Bang Model
5.3.1 SOFTWARE DEVELOPMENT LIFE CYCLE
Software Development Life Cycle (SDLC) is a process used by the software
industry to design, develop and test high-quality software. The SDLC aims to
produce high-quality software that meets or exceeds customer expectations and
reaches completion within time and cost estimates.
• SDLC is the acronym of Software Development Life Cycle.
• It is also called as Software Development Process.
• SDLC is a framework defining tasks performed at each step in the software
development process.
• ISO/IEC 12207 is an international standard for software life-cycle
processes. It aims to be the standard that defines all the tasks required for
developing and maintaining software.
What is SDLC?
SDLC is a process followed for a software project, within a software
organization. It consists of a detailed plan describing how to develop, maintain,
replace and alter or enhance specific software. The life cycle defines a methodology
for improving the quality of software and the overall development process.
[Figure: graphical representation of the various stages of a typical SDLC]
A typical Software Development Life Cycle consists of the following stages −
Stage 1: Planning and Requirement Analysis
Requirement analysis is the most important and fundamental stage in SDLC.
It is performed by the senior members of the team with inputs from the customer,
the sales department, market surveys and domain experts in the industry. This
information is then used to plan the basic project approach and to conduct product
feasibility study in the economical, operational and technical areas.
Planning for the quality assurance requirements and identification of the risks
associated with the project is also done in the planning stage. The outcome of the
technical feasibility study is to define the various technical approaches that can be
followed to implement the project successfully with minimum risks.
Stage 2: Defining Requirements
Once the requirement analysis is done the next step is to clearly define and
document the product requirements and get them approved from the customer or
the market analysts. This is done through an SRS (Software Requirement
Specification) document which consists of all the product requirements to be
designed and developed during the project life cycle.
Stage 3: Designing the Product Architecture
SRS is the reference for product architects to come out with the best
architecture for the product to be developed. Based on the requirements specified in
SRS, usually more than one design approach for the product architecture is
proposed and documented in a DDS - Design Document Specification.
This DDS is reviewed by all the important stakeholders and based on various
parameters as risk assessment, product robustness, design modularity, budget and
time constraints, the best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product
along with its communication and data flow representation with the external and
third party modules (if any). The internal design of all the modules of the proposed
architecture should be clearly defined with the minutest of the details in DDS.
Stage 4: Building or Developing the Product
In this stage of SDLC the actual development starts and the product is built.
The programming code is generated as per DDS during this stage. If the design is
performed in a detailed and organized manner, code generation can be
accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization and
programming tools like compilers, interpreters, debuggers, etc. are used to generate
the code. Different high level programming languages such as C, C++, Pascal, Java
and PHP are used for coding. The programming language is chosen with respect to
the type of software being developed.
Stage 5: Testing the Product
This stage is usually a subset of all the stages as in the modern SDLC models,
the testing activities are mostly involved in all the stages of SDLC. However, this
stage refers to the testing only stage of the product where product defects are
reported, tracked, fixed and retested, until the product reaches the quality
standards defined in the SRS.
Stage 6: Deployment in the Market and Maintenance
Once the product is tested and ready to be deployed it is released formally in
the appropriate market. Sometimes product deployment happens in stages as per
the business strategy of that organization. The product may first be released in a
limited segment and tested in the real business environment (UAT- User acceptance
testing).
Then based on the feedback, the product may be released as it is or with
suggested enhancements in the targeting market segment. After the product is
released in the market, its maintenance is done for the existing customer base.
SDLC Models
There are various software development life cycle models defined and designed
which are followed during the software development process. These models are also
referred to as "Software Development Process Models". Each process model follows a
series of steps unique to its type to ensure success in the process of software
development.
Following are the most important and popular SDLC models followed in the
industry −
• Waterfall Model
• Iterative Model
• Spiral Model
• V-Model
• Big Bang Model
• Agile Model
• RAD (Rapid Application Development) Model
• Prototyping Models
5.3.2 SDLC - WATERFALL MODEL
The Waterfall Model was the first Process Model to be introduced. It is also
referred to as a linear-sequential life cycle model. It is very simple to understand
and use. In a waterfall model, each phase must be completed before the next phase
can begin and there is no overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software
development.
The waterfall Model illustrates the software development process in a linear
sequential flow. This means that any phase in the development process begins only
if the previous phase is complete. In this waterfall model, the phases do not
overlap.
Waterfall Model - Design
Waterfall approach was first SDLC Model to be used widely in Software
Engineering to ensure success of the project. In "The Waterfall" approach, the whole
process of software development is divided into separate phases. In this Waterfall
model, typically, the outcome of one phase acts as the input for the next phase
sequentially.
[Figure: the different phases of the Waterfall Model]
The sequential phases in Waterfall model are −
Requirement Gathering and analysis − All possible requirements of the
system to be developed are captured in this phase and documented in a
requirement specification document.
System Design − The requirement specifications from first phase are studied
in this phase and the system design is prepared. This system design helps in
specifying hardware and system requirements and helps in defining the overall
system architecture.
Implementation − With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next phase.
Each unit is developed and tested for its functionality, which is referred to as Unit
Testing.
Integration and Testing − All the units developed in the implementation
phase are integrated into a system after testing of each unit. Post integration the
entire system is tested for any faults and failures.
Deployment of system − Once the functional and non-functional testing is
done; the product is deployed in the customer environment or released into the
market.
Maintenance − There are some issues which come up in the client
environment. To fix those issues, patches are released. Also to enhance the product
some better versions are released. Maintenance is done to deliver these changes in
the customer environment.
All these phases are cascaded to each other in which progress is seen as
flowing steadily downwards (like a waterfall) through the phases. The next phase is
started only after the defined set of goals are achieved for previous phase and it is
signed off, so the name "Waterfall Model". In this model, phases do not overlap.
Waterfall Model - Application
Every software developed is different and requires a suitable SDLC approach to
be followed based on the internal and external factors. Some situations where the
use of Waterfall model is most appropriate are −
• Requirements are very well documented, clear and fixed.
• Product definition is stable.
• Technology is understood and is not dynamic.
• There are no ambiguous requirements.
• Ample resources with required expertise are available to support the
product.
• The project is short.
Waterfall Model - Advantages
The advantage of waterfall development is that it allows for departmentalization and control. A schedule can be set with deadlines for each stage of development, and a product can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing,
installation, troubleshooting, and ends up at operation and maintenance. Each
phase of development proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows −
• Simple and easy to understand and use
• Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well
understood.
• Clearly defined stages.
• Well understood milestones.
• Easy to arrange tasks.
• Process and results are well documented.
Waterfall Model - Disadvantages
The disadvantage of waterfall development is that it does not allow much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well documented or thought through in the concept stage.
The major disadvantages of the Waterfall Model are as follows −
• No working software is produced until late during the life cycle.
• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for projects where requirements are at a moderate to high risk of changing; risk and uncertainty are therefore high with this process model.
• It is difficult to measure progress within stages.
• Cannot accommodate changing requirements.
• Adjusting scope during the life cycle can end a project.
• Integration is done as a "big bang" at the very end, which doesn't allow identifying any technological or business bottlenecks or challenges early.
5.3.3 SDLC - ITERATIVE MODEL
In the Iterative model, the iterative process starts with a simple implementation of a small set of the software requirements and iteratively enhances the evolving versions until the complete system is implemented and ready to be deployed.
An iterative life cycle model does not attempt to start with a full specification of
requirements. Instead, development begins by specifying and implementing just
part of the software, which is then reviewed to identify further requirements. This
process is then repeated, producing a new version of the software at the end of each
iteration of the model.
Iterative Model - Design
Iterative process starts with a simple implementation of a subset of the
software requirements and iteratively enhances the evolving versions until the full
system is implemented. At each iteration, design modifications are made and new
functional capabilities are added. The basic idea behind this method is to develop a
system through repeated cycles (iterative) and in smaller portions at a time
(incremental).
The following illustration is a representation of the Iterative and Incremental model
Iterative and Incremental development is a combination of both iterative design or iterative method and incremental build model for development. "During software development, more than one iteration of the software development cycle may be in progress at the same time." This process may be described as an "evolutionary acquisition" or "incremental build" approach.
In this incremental model, the whole requirement is divided into various
builds. During each iteration, the development module goes through the
requirements, design, implementation and testing phases. Each subsequent release
of the module adds function to the previous release. The process continues till the
complete system is ready as per the requirement.
The key to a successful use of an iterative software development lifecycle is
rigorous validation of requirements, and verification & testing of each version of the
software against those requirements within each cycle of the model. As the software
evolves through successive cycles, tests must be repeated and extended to verify
each version of the software.
Iterative Model - Application
Like other SDLC models, Iterative and incremental development has some
specific applications in the software industry. This model is most often used in the
following scenarios −
• Requirements of the complete system are clearly defined and understood.
• Major requirements must be defined; however, some functionalities or
requested enhancements may evolve with time.
• There is a time-to-market constraint.
• A new technology is being used and is being learnt by the development team
while working on the project.
• Resources with needed skill sets are not available and are planned to be
used on contract basis for specific iterations.
• There are some high-risk features and goals which may change in the
future.
Iterative Model - Pros and Cons
The advantage of this model is that there is a working model of the system at a very early stage of development, which makes it easier to find functional or design flaws. Finding issues at an early stage of development enables corrective measures to be taken within a limited budget.
The disadvantage with this SDLC model is that it is applicable only to large
and bulky software development projects. This is because it is hard to break a
small software system into further small serviceable increments/modules.
The advantages of the Iterative and Incremental SDLC Model are as follows −
• Some working functionality can be developed quickly and early in the life
cycle.
• Results are obtained early and periodically.
• Parallel development can be planned.
• Progress can be measured.
• Less costly to change the scope/requirements.
• Testing and debugging during smaller iteration is easy.
• Risks are identified and resolved during iteration; and each iteration is an
easily managed milestone.
• Easier to manage risk, as the high-risk part is done first.
• With every increment, operational product is delivered.
• Issues, challenges and risks identified from each increment can be
utilized/applied to the next increment.
• Risk analysis is better.
• It supports changing requirements.
• Initial Operating time is less.
• Better suited for large and mission-critical projects.
• During the life cycle, software is produced early which facilitates customer
evaluation and feedback.
The disadvantages of the Iterative and Incremental SDLC Model are as follows −
• More resources may be required.
• Although the cost of change is lower, it is not very suitable for frequently changing requirements.
• More management attention is required.
• System architecture or design issues may arise because not all
requirements are gathered in the beginning of the entire life cycle.
• Defining increments may require definition of the complete system.
• Not suitable for smaller projects.
• Management complexity is more.
• The end of the project may not be known, which is a risk.
• Highly skilled resources are required for risk analysis.
• Project progress is highly dependent upon the risk analysis phase.
5.3.4 SDLC - SPIRAL MODEL
The Spiral model combines the idea of iterative development with the systematic, controlled aspects of the waterfall model. It is a combination of the iterative development process model and the sequential linear development model (i.e., the waterfall model), with a very high emphasis on risk analysis. It allows incremental releases of the product or incremental refinement through each iteration around the spiral.
Spiral Model - Design
The spiral model has four phases. A software project repeatedly passes
through these phases in iterations called Spirals.
Identification
This phase starts with gathering the business requirements in the baseline
spiral. In the subsequent spirals as the product matures, identification of system
requirements, subsystem requirements and unit requirements are all done in this
phase.
This phase also includes understanding the system requirements by
continuous communication between the customer and the system analyst. At the
end of the spiral, the product is deployed in the identified market.
Design
The Design phase starts with the conceptual design in the baseline spiral and
involves architectural design, logical design of modules, physical product design
and the final design in the subsequent spirals.
Construct or Build
The Construct phase refers to the production of the actual software product at every spiral. In the baseline spiral, when the product is just thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.
Then, in the subsequent spirals, with higher clarity on requirements and design details, a working model of the software called a build is produced with a version number. These builds are sent to the customer for feedback.
Evaluation and Risk Analysis
Risk Analysis includes identifying, estimating and monitoring the technical
feasibility and management risks, such as schedule slippage and cost overrun.
After testing the build, at the end of first iteration, the customer evaluates the
software and provides feedback.
The following illustration is a representation of the Spiral Model, listing the activities in each phase.
Based on the customer evaluation, the software development process enters
the next iteration and subsequently follows the linear approach to implement the
feedback suggested by the customer. The process of iterations along the spiral
continues throughout the life of the software.
Spiral Model Application
The Spiral Model is widely used in the software industry as it is in sync with the natural development process of any product, i.e., learning with maturity, which involves minimum risk for the customer as well as the development firm.
The following points explain the typical uses of a Spiral Model −
• When there is a budget constraint and risk evaluation is important.
• For medium to high-risk projects.
• Long-term project commitment because of potential changes to economic
priorities as the requirements change with time.
• The customer is not sure of their requirements, which is usually the case.
• Requirements are complex and need evaluation to get clarity.
• New product line which should be released in phases to get enough
customer feedback.
• Significant changes are expected in the product during the development
cycle.
Spiral Model - Pros and Cons
The advantage of the spiral lifecycle model is that it allows elements of the product to be added when they become available or known. This assures that there is no conflict with previous requirements and design.
This method is consistent with approaches that have multiple software builds
and releases which allows making an orderly transition to a maintenance activity.
Another positive aspect of this method is that the spiral model forces an early user
involvement in the system development effort.
On the other hand, it takes very strict management to complete such products, and there is a risk of running the spiral in an indefinite loop. So, the discipline of change and the extent of taking change requests are very important to develop and deploy the product successfully.
The advantages of the Spiral SDLC Model are as follows −
• Changing requirements can be accommodated.
• Allows extensive use of prototypes.
• Requirements can be captured more accurately.
• Users see the system early.
• Development can be divided into smaller parts and the risky parts can be
developed earlier which helps in better risk management.
The disadvantages of the Spiral SDLC Model are as follows −
• Management is more complex.
• End of the project may not be known early.
• Not suitable for small or low risk projects and could be expensive for small
projects.
• Process is complex.
• Spiral may go on indefinitely.
• Large number of intermediate stages requires excessive documentation.
5.3.5 SDLC - V-MODEL
The V-model is an SDLC model where execution of processes happens in a
sequential manner in a V-shape. It is also known as Verification and Validation
model.
The V-Model is an extension of the waterfall model and is based on the
association of a testing phase for each corresponding development stage. This
means that for every single phase in the development cycle, there is a directly
associated testing phase. This is a highly-disciplined model and the next phase
starts only after completion of the previous phase.
V-Model - Design
Under the V-Model, the corresponding testing phase of the development phase is planned in parallel. So, there are Verification phases on one side of the ‘V’ and Validation phases on the other side. The Coding Phase joins the two sides of the V-Model.
The following illustration depicts the different phases in a V-Model of the SDLC.
V-Model - Verification Phases
There are several Verification phases in the V-Model, each of which is explained in detail below.
Business Requirement Analysis
This is the first phase in the development cycle where the product
requirements are understood from the customer’s perspective. This phase involves
detailed communication with the customer to understand their expectations and exact requirements. This is a very important activity and needs to be managed well, as most customers are not sure about what exactly they need.
The acceptance test design planning is done at this stage as business
requirements can be used as an input for acceptance testing.
System Design
Once you have the clear and detailed product requirements, it is time to design the complete system. The system design will have the understanding and detailing of the complete hardware and communication setup for the product under development. The system test plan is developed based on the system design. Doing this at an earlier stage leaves more time for the actual test execution later.
Architectural Design
Architectural specifications are understood and designed in this phase.
Usually more than one technical approach is proposed and based on the technical
and financial feasibility the final decision is taken. The system design is broken
down further into modules taking up different functionality. This is also referred to
as High Level Design (HLD).
The data transfer and communication between the internal modules and with
the outside world (other systems) is clearly understood and defined in this stage.
With this information, integration tests can be designed and documented during
this stage.
Module Design
In this phase, the detailed internal design for all the system modules is
specified, referred to as Low Level Design (LLD). It is important that the design is
compatible with the other modules in the system architecture and the other
external systems. Unit tests are an essential part of any development process and help eliminate the maximum faults and errors at a very early stage. These
unit tests can be designed at this stage based on the internal module designs.
Coding Phase
The actual coding of the system modules designed in the design phase is taken
up in the Coding phase. The best suitable programming language is decided based
on the system and architectural requirements.
The coding is performed based on the coding guidelines and standards. The
code goes through numerous code reviews and is optimized for best performance
before the final build is checked into the repository.
Validation Phases
The different Validation Phases in a V-Model are explained in detail below.
Unit Testing
Unit tests designed in the module design phase are executed on the code
during this validation phase. Unit testing is the testing at code level and helps
eliminate bugs at an early stage, though all defects cannot be uncovered by unit
testing.
Integration Testing
Integration testing is associated with the architectural design phase.
Integration tests are performed to test the coexistence and communication of the
internal modules within the system.
System Testing
System testing is directly associated with the system design phase. System
tests check the entire system functionality and the communication of the system
under development with external systems. Most of the software and hardware
compatibility issues can be uncovered during this system test execution.
Acceptance Testing
Acceptance testing is associated with the business requirement analysis phase
and involves testing the product in user environment. Acceptance tests uncover the
compatibility issues with the other systems available in the user environment. It
also discovers the non-functional issues such as load and performance defects in
the actual user environment.
V-Model ─ Application
V-Model application is almost the same as that of the waterfall model, as both models are of the sequential type. Requirements have to be very clear before the project starts, because it is usually expensive to go back and make changes. This model is used in the medical development field, as it is a strictly disciplined domain.
The following pointers describe some of the most suitable scenarios in which to use the V-Model.
• Requirements are well defined, clearly documented and fixed.
• Product definition is stable.
• Technology is not dynamic and is well understood by the project team.
• There are no ambiguous or undefined requirements.
• The project is short.
V-Model - Pros and Cons
The advantage of the V-Model method is that it is very easy to understand and
apply. The simplicity of this model also makes it easier to manage. The
disadvantage is that the model is not flexible to changes and just in case there is a
requirement change, which is very common in today’s dynamic world, it becomes
very expensive to make the change.
The advantages of the V-Model method are as follows −
• This is a highly-disciplined model and Phases are completed one at a time.
• Works well for smaller projects where requirements are very well
understood.
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
The disadvantages of the V-Model method are as follows −
• High risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for the projects where requirements are at a moderate to high
risk of changing.
• Once an application is in the testing stage, it is difficult to go back and
change a functionality.
• No working software is produced until late during the life cycle.
5.3.6. SDLC - BIG BANG MODEL
The Big Bang model is an SDLC model where we do not follow any specific process. The development just starts with the required money and effort as the input, and the output is the software developed, which may or may not be as per customer requirements. The Big Bang Model does not follow a process/procedure, and very little planning is required. Even the customer is not sure about what exactly they want, and the requirements are implemented on the fly without much analysis.
Usually this model is followed for small projects where the development teams
are very small.
Big Bang Model ─ Design and Application
The Big Bang Model focuses all possible resources on software development and coding, with very little or no planning. The requirements are understood and implemented as they come. Any changes required may or may not need to revamp the complete software.
This model is ideal for small projects with one or two developers working
together and is also useful for academic or practice projects. It is an ideal model for
the product where requirements are not well understood and the final release date
is not given.
Big Bang Model - Pros and Cons
The advantage of this Big Bang Model is that it is very simple and requires very little or no planning. It is easy to manage, and no formal procedures are required.
However, the Big Bang Model is a very high-risk model, and changes in the requirements or misunderstood requirements may even lead to complete reversal or scrapping of the project. It is ideal for repetitive or small projects with minimum risks.
The advantages of the Big Bang Model are as follows −
• This is a very simple model.
• Little or no planning is required.
• Easy to manage.
• Very few resources are required.
• Gives flexibility to developers.
• It is a good learning aid for newcomers or students.
The disadvantages of the Big Bang Model are as follows −
• Very high risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Can turn out to be very expensive if requirements are misunderstood.
5.4 REVISION POINTS
• Requirement analysis
• Product architecture
• Waterfall Model
• Iterative Model
• Spiral Model
• V-Model
• Big Bang Model
5.5 INTEXT QUESTIONS
1. What do you mean by software development?
2. Explain the steps in software development process.
3. Write a short note on requirement analysis.
4. Discuss the features of iterative model
5. Write a brief note on big bang model.
5.6 SUMMARY
The outcome of the technical feasibility study is to define the various technical approaches.
The Software Requirement Specification (SRS) document consists of all the product requirements to be designed and developed.
The DDS gives information on various parameters such as risk assessment, product robustness, design modularity, and budget and time constraints.
Developers must follow the coding guidelines defined by their organization, and programming tools like compilers, interpreters, debuggers, etc. are used to generate the code.
The Waterfall Model was the first process model to be introduced and is referred to as a linear-sequential life cycle model.
5.7 TERMINAL EXERCISE
1. ______________ helps in specifying hardware and system requirements and
helps in defining the overall system architecture
2. ___________ tests are performed to test the coexistence and communication
of the internal modules within the system.
3. Departmentalization and control is followed in ______________ model
4. Evolving versions are used in ______________ model to develop systems
5. ______________ model combines iterative and waterfall model.
5.8 SUPPLEMENTARY MATERIALS
1. https://www.tutorialspoint.com/sdlc/sdlc_quick_guide.htm
2. https://www.javatpoint.com/software-engineering-sdlc-models
5.9 ASSIGNMENTS
1. Explain the different models of software development lifecycle.
5.10 SUGGESTED READING/REFERENCE
1. Ritendra Goel & Kakkar D.N., Computer Application in Management,
New Age Publishing, New Delhi, 2013.
2. https://www.innovativearchitects.com/KnowledgeCenter/basic-IT-
systems/8-SDLC-models.aspx
5.11 LEARNING ACTIVITIES
1. Form a group with your like-minded classmates and perform a requirement analysis, listing the steps to execute for a system development.
5.12 KEYWORDS
DDS - Design Document Specification
SRS - Software Requirement Specification
UAT - User Acceptance Testing
Requirement analysis
Product architecture
LESSON - 6
SDLC MODELS
6.1 INTRODUCTION
There are various software development life cycle models defined and designed which are followed during the software development process. These models are also referred to as "Software Development Process Models". Each process model follows a series of steps unique to its type to ensure success in the process of software development.
Following are some of the popular SDLC models followed in the industry; they are discussed in detail in this lesson −
• Agile Model
• RAD Model
• Prototyping Model
6.2 OBJECTIVES
• To understand the steps and features of various models
• To study the advantages and disadvantages of each model
6.3 CONTENTS
6.3.1 Agile Model
6.3.2 RAD Model
6.3.3 Prototyping Model
6.3.1 SDLC - AGILE MODEL
The Agile SDLC model is a combination of iterative and incremental process models with a focus on process adaptability and customer satisfaction through rapid delivery of a working software product. Agile methods break the product into small incremental builds. These builds are provided in iterations. Each iteration typically lasts from about one to three weeks. Every iteration involves cross-functional teams working simultaneously on various areas like −
• Planning
• Requirements Analysis
• Design
• Coding
• Unit Testing and
• Acceptance Testing.
At the end of the iteration, a working product is displayed to the customer and
important stakeholders.
What is Agile?
The Agile model believes that every project needs to be handled differently and that existing methods need to be tailored to best suit the project requirements. In Agile, the tasks are divided into time boxes (small time frames) to deliver specific features for a release.
An iterative approach is taken, and a working software build is delivered after each iteration. Each build is incremental in terms of features; the final build holds all the features required by the customer.
Here is a graphical illustration of the Agile Model −
The Agile thought process started early in software development and became popular with time due to its flexibility and adaptability.
The most popular Agile methods include Rational Unified Process (1994),
Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software
Development, Feature Driven Development, and Dynamic Systems Development
Method (DSDM) (1995). These are now collectively referred to as Agile
Methodologies, after the Agile Manifesto was published in 2001.
Following are the Agile Manifesto principles −
• Individuals and interactions − In Agile development, self-organization and
motivation are important, as are interactions like co-location and pair
programming.
• Working software − Demo working software is considered the best means of
communication with the customers to understand their requirements,
instead of just depending on documentation.
• Customer collaboration − As the requirements cannot be gathered
completely in the beginning of the project due to various factors, continuous
customer interaction is very important to get proper product requirements.
• Responding to change − Agile Development is focused on quick responses
to change and continuous development.
Agile Vs Traditional SDLC Models
Agile is based on the adaptive software development methods, whereas the
traditional SDLC models like the waterfall model is based on a predictive approach.
Predictive teams in the traditional SDLC models usually work with detailed
planning and have a complete forecast of the exact tasks and features to be
delivered in the next few months or during the product life cycle.
Predictive methods entirely depend on the requirement analysis and
planning done in the beginning of cycle. Any changes to be incorporated go
through a strict change control management and prioritization.
Agile uses an adaptive approach where there is no detailed planning and
there is clarity on future tasks only in respect of what features need to be
developed. There is feature driven development and the team adapts to the
changing product requirements dynamically. The product is tested very frequently,
through the release iterations, minimizing the risk of any major failures in future.
Customer Interaction is the backbone of this Agile methodology, and open
communication with minimum documentation are the typical features of Agile
development environment. The agile teams work in close collaboration with each
other and are most often located in the same geographical location.
Agile Model - Pros and Cons
Agile methods have recently been widely accepted in the software world. However, this approach may not be suitable for all products. Here are some pros and cons of the Agile model.
The advantages of the Agile Model are as follows −
• Is a very realistic approach to software development.
• Promotes teamwork and cross training.
• Functionality can be developed rapidly and demonstrated.
• Resource requirements are minimum.
• Suitable for fixed or changing requirements
• Delivers early partial working solutions.
• Good model for environments that change steadily.
• Minimal rules, documentation easily employed.
• Enables concurrent development and delivery within an overall planned
context.
• Little or no planning required.
• Easy to manage.
• Gives flexibility to developers.
The disadvantages of the Agile Model are as follows −
• Not suitable for handling complex dependencies.
• More risk of sustainability, maintainability and extensibility.
• An overall plan, an agile leader and agile PM practice is a must without
which it will not work.
• Strict delivery management dictates the scope, functionality to be delivered,
and adjustments to meet the deadlines.
• Depends heavily on customer interaction, so if customer is not clear, team
can be driven in the wrong direction.
• There is a very high individual dependency, since there is minimum
documentation generated.
• Transfer of technology to new team members may be quite challenging due
to lack of documentation.
6.3.2 SDLC - RAD MODEL
The RAD (Rapid Application Development) model is based on prototyping
and iterative development with no specific planning involved. The process of writing
the software itself involves the planning required for developing the product.
Rapid Application Development focuses on gathering customer requirements
through workshops or focus groups, early testing of the prototypes by the customer
using iterative concept, reuse of the existing prototypes (components), continuous
integration and rapid delivery.
What is RAD?
Rapid application development is a software development methodology that
uses minimal planning in favor of rapid prototyping. A prototype is a working model
that is functionally equivalent to a component of the product.
In the RAD model, the functional modules are developed in parallel as
prototypes and are integrated to make the complete product for faster product
delivery. Since there is no detailed preplanning, it makes it easier to incorporate the
changes within the development process.
RAD projects follow the iterative and incremental model and have small teams comprising developers, domain experts, customer representatives and other IT resources working progressively on their component or prototype.
The most important aspect for this model to be successful is to make sure that
the prototypes developed are reusable.
RAD Model Design
RAD model distributes the analysis, design, build and test phases into a series
of short, iterative development cycles.
Following are the various phases of the RAD Model −
Business Modeling
The business model for the product under development is designed in terms of
flow of information and the distribution of information between various business
channels. A complete business analysis is performed to find the vital information
for business, how it can be obtained, how and when is the information processed
and what are the factors driving successful flow of information.
Data Modeling
The information gathered in the Business Modeling phase is reviewed and analyzed to form sets of data objects vital for the business. The attributes of all data sets are identified and defined. The relations between these data objects are established and defined in detail in relevance to the business model.
Process Modeling
The data object sets defined in the Data Modeling phase are converted to
establish the business information flow needed to achieve specific business
objectives as per the business model. The process model for any changes or
enhancements to the data object sets is defined in this phase. Process descriptions
for adding, deleting, retrieving or modifying a data object are given.
Application Generation
The actual system is built and coding is done by using automation tools to
convert process and data models into actual prototypes.
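To make the idea of application generation concrete, the following minimal Python sketch mechanically converts a hypothetical data model into prototype class code. The model shape, entity names and generated-code style are illustrative assumptions, not the output of any real RAD tool.
```python
# A sketch of automated application generation: a data model produced
# by the earlier phases (hypothetical here) is converted mechanically
# into prototype code.

DATA_MODEL = {  # assumed output of the Data Modeling phase
    "Customer": ["customer_id", "name", "city"],
    "Order": ["order_id", "customer_id", "amount"],
}

def generate_class(entity, fields):
    """Emit Python source for a simple record class for one entity."""
    lines = [f"class {entity}:"]
    lines.append("    def __init__(self, " + ", ".join(fields) + "):")
    for field in fields:
        lines.append(f"        self.{field} = {field}")
    return "\n".join(lines)

for entity, fields in DATA_MODEL.items():
    print(generate_class(entity, fields))
    print()
```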
Testing and Turnover
The overall testing time is reduced in the RAD model as the prototypes are
independently tested during every iteration. However, the data flow and the
interfaces between all the components need to be thoroughly tested with complete
test coverage. Since most of the programming components have already been
tested, it reduces the risk of any major issues.
The following illustration describes the RAD Model in detail.
RAD Model Vs Traditional SDLC
The traditional SDLC follows a rigid process model with high emphasis on requirement analysis and gathering before the coding starts. It puts pressure on the customer to sign off the requirements before the project starts, and the customer doesn't get the feel of the product as there is no working build available for a long time.
The customer may need some changes after he gets to see the software.
However, the change process is quite rigid and it may not be feasible to incorporate
major changes in the product in the traditional SDLC.
The RAD model focuses on iterative and incremental delivery of working
models to the customer. This results in rapid delivery to the customer and
customer involvement during the complete development cycle of product reducing
the risk of non-conformance with the actual user requirements.
RAD Model - Application
RAD model can be applied successfully to the projects in which clear
modularization is possible. If the project cannot be broken into modules, RAD may
fail.
The following pointers describe the typical scenarios where RAD can be used −
• RAD should be used only when a system can be modularized to be delivered
in an incremental manner.
• It should be used if there is a high availability of designers for Modeling.
• It should be used only if the budget permits use of automated code
generating tools.
• RAD SDLC model should be chosen only if domain experts are available
with relevant business knowledge.
• Should be used where the requirements change during the project and working prototypes are to be presented to the customer in small iterations of 2-3 months.
RAD Model - Pros and Cons
The RAD model enables rapid delivery as it reduces the overall development time due to the reusability of components and parallel development. RAD works well only if highly skilled engineers are available and the customer is committed to achieving the targeted prototype in the given time frame. If commitment is lacking on either side, the model may fail.
The advantages of the RAD Model are as follows −
• Changing requirements can be accommodated.
• Progress can be measured.
• Iteration time can be short with use of powerful RAD tools.
• Productivity with fewer people in a short time.
• Reduced development time.
• Increases reusability of components.
• Quick initial reviews occur.
• Encourages customer feedback.
• Integration from very beginning solves a lot of integration issues.
The disadvantages of the RAD Model are as follows −
• Dependency on technically strong team members for identifying business
requirements.
• Only systems that can be modularized can be built using RAD.
• Requires highly skilled developers/designers.
• High dependency on Modelling skills.
• Inapplicable to cheaper projects as cost of Modelling and automated code
generation is very high.
• Management complexity is more.
• Suitable for systems that are component based and scalable.
• Requires user involvement throughout the life cycle.
• Suitable for project requiring shorter development times.
6.3.3 SDLC - SOFTWARE PROTOTYPE MODEL
Software Prototyping refers to building software application prototypes which display the functionality of the product under development but may not actually hold the exact logic of the original software.
Software prototyping is becoming very popular as a software development model, as it enables understanding of customer requirements at an early stage of development. It helps get valuable feedback from the customer and helps software designers and developers understand what exactly is expected from the product under development.
What is Software Prototyping?
Prototype is a working model of software with some limited functionality. The
prototype does not always hold the exact logic used in the actual software
application and is an extra effort to be considered under effort estimation.
Prototyping is used to allow the users to evaluate developer proposals and try them out before implementation. It also helps in understanding requirements which are user specific and may not have been considered by the developer during product design.
Following is a stepwise approach to designing a software prototype.
Basic Requirement Identification
This step involves understanding the very basic product requirements, especially in terms of the user interface. The more intricate details of the internal design and external aspects like performance and security can be ignored at this stage.
Developing the initial Prototype
The initial prototype is developed in this stage, where the very basic requirements are showcased and user interfaces are provided. These features may not work in exactly the same manner internally in the actual software developed; workarounds are used to give the same look and feel to the customer in the prototype.
Review of the Prototype
The prototype developed is then presented to the customer and the other
important stakeholders in the project. The feedback is collected in an organized
manner and used for further enhancements in the product under development.
Revise and Enhance the Prototype
The feedback and the review comments are discussed during this stage, and some negotiations happen with the customer based on factors like time and budget constraints and the technical feasibility of the actual implementation. The changes accepted are again incorporated in the new prototype developed, and the cycle repeats until the customer's expectations are met.
Prototypes can have horizontal or vertical dimensions. A Horizontal prototype
displays the user interface for the product and gives a broader view of the entire
system, without concentrating on internal functions. A Vertical prototype on the
other side is a detailed elaboration of a specific function or a sub system in the
product.
The purpose of both horizontal and vertical prototype is different. Horizontal
prototypes are used to get more information on the user interface level and the
business requirements. It can even be presented in the sales demos to get business
in the market. Vertical prototypes are technical in nature and are used to get
details of the exact functioning of the sub systems. For example, database
requirements, interaction and data processing loads in a given sub system.
Software Prototyping - Types
There are different types of software prototypes used in the industry. Following
are the major software prototyping types used widely −
Throwaway/Rapid Prototyping
Throwaway prototyping is also called rapid or close-ended prototyping. This type of prototyping uses very little effort with minimum requirement analysis to build a prototype. Once the actual requirements are understood, the prototype is discarded and the actual system is developed with a much clearer understanding of user requirements.
Evolutionary Prototyping
Evolutionary prototyping, also called breadboard prototyping, is based on building actual functional prototypes with minimal functionality in the beginning. The prototype developed forms the heart of the future prototypes on top of which the entire system is built. By using evolutionary prototyping, the well-understood requirements are included in the prototype, and further requirements are added as and when they are understood.
Incremental Prototyping
Incremental prototyping refers to building multiple functional prototypes of the
various sub-systems and then integrating all the available prototypes to form a
complete system.
Extreme Prototyping
Extreme prototyping is used in the web development domain. It consists of
three sequential phases. First, a basic prototype with all the existing pages is
presented in the HTML format. Then the data processing is simulated using a
prototype services layer. Finally, the services are implemented and integrated to the
final prototype. The name Extreme Prototyping draws attention to the second phase of the process, where a fully functional UI is developed with very little regard for the actual services.
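The following minimal Python sketch illustrates the second phase of Extreme Prototyping under stated assumptions: the page-rendering UI code is fully functional, while the services layer is only simulated with canned data. All names and values here are illustrative.
```python
# Sketch of Extreme Prototyping's second phase: the page-building UI
# code is real, but the services layer merely simulates data
# processing by returning canned data. All names are illustrative.

CANNED_ORDERS = [  # stand-in for the real (not yet built) service
    {"order_id": 1, "item": "Keyboard", "amount": 450.0},
    {"order_id": 2, "item": "Mouse", "amount": 250.0},
]

def get_orders_stub(customer_id):
    """Prototype services layer: ignores its input, returns fixed data."""
    return CANNED_ORDERS

def render_orders_page(customer_id):
    """Fully functional UI layer: builds an HTML table from the service."""
    rows = "".join(
        f"<tr><td>{o['order_id']}</td><td>{o['item']}</td><td>{o['amount']}</td></tr>"
        for o in get_orders_stub(customer_id)
    )
    return f"<table>{rows}</table>"

print(render_orders_page(customer_id=42))
```
Once the real services exist, only get_orders_stub needs replacing; the UI built in the second phase carries over unchanged.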
Software Prototyping - Application
Software Prototyping is most useful in development of systems having high
level of user interactions such as online systems. Systems which need users to fill
out forms or go through various screens before data is processed can use
prototyping very effectively to give the exact look and feel even before the actual
software is developed.
Software that involves a great deal of data processing, where most of the functionality is internal with very little user interface, does not usually benefit from prototyping. Prototype development could be an extra overhead in such projects and may need a lot of extra effort.
Software Prototyping - Pros and Cons
Software prototyping is used in typical cases, and the decision should be taken very carefully so that the effort spent in building the prototype adds considerable value to the final software developed. The model has its own pros and cons, discussed as follows.
The advantages of the Prototyping Model are as follows −
• Increased user involvement in the product even before its implementation.
• Since a working model of the system is displayed, the users get a better
understanding of the system being developed.
• Reduces time and cost as the defects can be detected much earlier.
• Quicker user feedback is available leading to better solutions.
• Missing functionality can be identified easily.
• Confusing or difficult functions can be identified.
The Disadvantages of the Prototyping Model are as follows −
• Risk of insufficient requirement analysis owing to too much dependency on
the prototype.
• Users may get confused between the prototypes and the actual systems.
• Practically, this methodology may increase the complexity of the system as
scope of the system may expand beyond original plans.
• Developers may try to reuse the existing prototypes to build the actual
system, even when it is not technically feasible.
• The effort invested in building prototypes may be too much if it is not
monitored properly.
6.4 REVISION POINTS
• Steps in Agile model
• Steps in RAD
• Steps in Prototyping
6.5 INTEXT QUESTIONS
1. Explain the manifesto principles of Agile model.
2. Compare Agile Vs Traditional models of SDLC.
3. Write a short note on Rapid Application Development.
4. Explain the phases of RAD model.
5. Explain the software prototyping types.
6.6 SUMMARY
Agile SDLC model is a combination of iterative and incremental process
models with focus on process adaptability and customer satisfaction by rapid
delivery of working software product.
Agile is based on adaptive software development methods.
In the RAD model, the functional modules are developed in parallel as prototypes and are integrated.
Prototyping is used to allow the users to evaluate developer proposals and try them out before implementation.
6.7 TERMINAL EXERCISE
1. _______________ is the backbone of this Agile methodology
2. _______________ follow iterative and incremental model
3. _____________ is a working model
4. ________________ can have horizontal or vertical dimensions.
5. Predictive methods entirely depend on the ___________________________ done
in the beginning of cycle.
6.8 SUPPLEMENTARY MATERIALS
1. https://www.javatpoint.com/software-engineering-sdlc-models
2. https://www.innovativearchitects.com/KnowledgeCenter/basic-IT-
systems/8-SDLC-models.aspx
6.9 ASSIGNMENTS
1. Discuss the basic requirements, steps and types of prototyping model of
software development.
6.10 SUGGESTED READING/REFERENCE
1. Ritendra Goel & Kakkar D.N., Computer Application in Management, New
Age Publishing, New Delhi, 2013.
2. https://www.tutorialspoint.com/sdlc/sdlc_quick_guide.htm
6.11 LEARNING ACTIVITIES
1. Form a group with your interested friends and discuss the features, advantages and disadvantages of various SDLC models.
6.12 KEYWORDS
Adaptive software development methods
Requirement analysis and planning
Predictive methods
Throwaway prototyping
Evolutionary prototyping
Extreme prototyping
LESSON - 7
SYSTEM, INPUT AND OUTPUT DESIGN
7.1 INTRODUCTION
System design is the process that bridges the gap between the problem domain and the existing system in a manageable way. Information systems in business are file and database oriented. Data are accumulated into files that are processed or maintained by the system. In an information system, input is the raw data that is processed to produce output. During input design, the developers must consider input devices such as PC, MICR, OMR, etc.
Outputs from computer systems are required primarily to communicate the
results of processing to users. Without quality output, the entire system may
appear to be so unnecessary that users will avoid using it, possibly causing it to
fail. The term output applies to any information produced by an information
system. Let us discuss the design of system, form, input and output in detail in this
chapter.
7.2 OBJECTIVES
• To understand the concept of system and its importance.
• To get insights in the system design concept and process
• To infuse the knowledge of input and form design.
• To learn about output design and its importance
• To introduce database design.
7.3 CONTENTS
7.3.1 System Design
7.3.2 File Design
7.3.3 Input Design
7.3.4 Output Design
7.3.5 Form Design
7.3.6 Database Design
7.3.1 SYSTEM DESIGN
System design is the phase that bridges the gap between the problem domain and the existing system in a manageable way. This phase focuses on the solution domain, i.e. “how to implement?”
It is the phase where the SRS document is converted into a format that can be
implemented and decides how the system will operate.
In this phase, the complex activity of system development is divided into
several smaller sub-activities, which coordinate with each other to achieve the main
objective of system development.
Inputs to System Design
System design takes the following inputs −
• Statement of work
• Requirement determination plan
• Current situation analysis
• Proposed system requirements including a conceptual data model, modified
DFDs, and Metadata (data about data).
Outputs from System Design
System design gives the following outputs −
• Infrastructure and organizational changes for the proposed system.
• A data schema, often a relational schema.
• Metadata to define the tables/files and columns/data-items.
• A function hierarchy diagram or web page map that graphically describes
the program structure.
• Actual or pseudocode for each module in the program.
• A prototype for the proposed system.
Types of System Design
Logical Design
Logical design pertains to an abstract representation of the data flow, inputs,
and outputs of the system. It describes the inputs (sources), outputs (destinations),
databases (data stores), procedures (data flows) all in a format that meets the user
requirements.
While preparing the logical design of a system, the system analyst specifies the user needs at a level of detail that virtually determines the information flow into and out of the system and the required data sources. Data flow diagrams and E-R diagram modeling are used.
Physical Design
Physical design relates to the actual input and output processes of the system.
It focuses on how data is entered into a system, verified, processed, and displayed
as output.
It produces the working system by defining the design specification that
specifies exactly what the candidate system does. It is concerned with user
interface design, process design, and data design.
It consists of the following steps −
• Specifying the input/output media, designing the database, and specifying
backup procedures.
• Planning system implementation.
• Devising a test and implementation plan, and specifying any new hardware
and software.
• Updating costs, benefits, conversion dates, and system constraints.
Architectural Design
It is also known as high level design that focuses on the design of system
architecture. It describes the structure and behavior of the system. It defines the
structure and relationship between various modules of system development
process.
Detailed Design
It follows Architectural design and focuses on development of each module.
Conceptual Data Modeling
It is a representation of organizational data which includes all the major entities and relationships. System analysts develop a conceptual data model for the current system that supports the scope and requirements for the proposed system.
The main aim of conceptual data modeling is to capture as much meaning of the data as possible. Most organizations today carry out conceptual data modeling using the E-R model, which uses special notation to represent as much meaning about data as possible.
Entity Relationship Model
It is a technique used in database design that helps describe the relationship
between various entities of an organization.
Terms used in E-R model
• ENTITY − It specifies distinct real world items in an application. For
example: vendor, item, student, course, teachers, etc.
• RELATIONSHIP − They are the meaningful dependencies between entities. For example, a vendor supplies items and a teacher teaches courses; here, supplies and teaches are relationships.
• ATTRIBUTES − They specify the properties of entities and relationships. For example, vendor code, student name.
The E-R model uses special symbols to denote the following elements −
• Entity
• Weak Entity
• Relationship
• Identity Relationship
• Attributes
• Key Attributes
• Multivalued Attribute
• Composite Attribute
• Derived Attributes
• Total Participation of E2 in R
• Cardinality Ratio 1:N for E1:E2 in R
Three types of relationships can exist between two sets of data: one-to-one, one-to-
many, and many-to-many.
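To make the mapping concrete, the following minimal sketch uses Python's built-in sqlite3 module to express the entities, attributes and a one-to-many "supplies" relationship as a relational schema. The table and column names are illustrative assumptions.
```python
# Sketch of the vendor-supplies-item relationship as a relational
# schema, using Python's built-in sqlite3 module. Table and column
# names are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE vendor (
    vendor_code INTEGER PRIMARY KEY,   -- key attribute
    vendor_name TEXT                   -- attribute
)""")
con.execute("""CREATE TABLE item (
    item_code   INTEGER PRIMARY KEY,
    item_name   TEXT,
    -- the foreign key realises the one-to-many "supplies" relationship
    vendor_code INTEGER REFERENCES vendor (vendor_code)
)""")
con.execute("INSERT INTO vendor VALUES (1, 'Acme Supplies')")
con.executemany("INSERT INTO item VALUES (?, ?, ?)",
                [(10, 'Stapler', 1), (11, 'Paper', 1)])
for row in con.execute("""SELECT v.vendor_name, i.item_name
                          FROM vendor v JOIN item i
                          ON v.vendor_code = i.vendor_code"""):
    print(row)  # ('Acme Supplies', 'Stapler'), then ('Acme Supplies', 'Paper')
```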
File Organization
It describes how records are stored within a file.
There are four file organization methods −
• Serial − Records are stored in chronological order (in order as they are input
or occur). Examples − Recording of telephone charges, ATM transactions,
Telephone queues.
• Sequential − Records are stored in order based on a key field which
contains a value that uniquely identifies a record. Examples − Phone
directories.
• Direct (relative) − Each record is stored based on a physical address or location on the device. The address is calculated from the value stored in the record’s key field; a randomizing routine or hashing algorithm does the conversion (see the sketch after this list).
• Indexed − Records can be processed both sequentially and non-sequentially
using indexes.
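As a minimal illustration of the direct (relative) method referred to above, the Python sketch below converts a record's key field into a relative address with a simple hashing routine. The bucket count and key values are assumptions, and collision handling is omitted for brevity.
```python
# Sketch of direct (relative) file organization: a hashing routine
# converts the record's key into a relative address, so the record is
# located without scanning. Bucket count and keys are assumptions;
# collision handling is omitted for brevity.

NUM_BUCKETS = 7

def address_of(key):
    """Randomizing/hashing routine: key field -> relative address."""
    return key % NUM_BUCKETS

buckets = [None] * NUM_BUCKETS            # one slot per relative address
for key, name in [(1001, "Arun"), (1002, "Beena"), (1010, "Chitra")]:
    buckets[address_of(key)] = (key, name)

# The record with key 1002 is fetched directly from its computed slot.
print(buckets[address_of(1002)])          # -> (1002, 'Beena')
```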
Comparison of file organisation
File Access
One can access a file using either Sequential Access or Random Access. File access methods allow computer programs to read or write records in a file.
Sequential Access
Every record on the file is processed starting with the first record until End of
File (EOF) is reached. It is efficient when a large number of the records on the file
need to be accessed at any given time. Data stored on a tape (sequential access) can
be accessed only sequentially.
Direct (Random) Access
Records are located by knowing their physical locations or addresses on the
device rather than their positions relative to other records. Data stored on a CD
device (direct-access) can be accessed either sequentially or randomly.
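The contrast between the two access methods can be sketched with ordinary Python file handling, as below. The file name and the fixed-length record layout are illustrative assumptions.
```python
# Sketch contrasting sequential and direct (random) access on a plain
# file of fixed-length records. File name and layout are assumptions.

RECORD_LEN = 15  # 14 data characters + newline

with open("accounts.dat", "w") as f:
    for i in range(5):
        f.write(f"ACC{i:04d}:{i * 100:06d}\n")  # one 15-byte record

# Sequential access: process every record from the first until EOF.
with open("accounts.dat") as f:
    for record in f:
        pass  # each record is handled in turn

# Direct (random) access: compute the position of record number 3 and
# jump straight to it with seek(), skipping the earlier records.
with open("accounts.dat") as f:
    f.seek(3 * RECORD_LEN)
    print(f.read(RECORD_LEN).rstrip())  # -> ACC0003:000300
```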
Types of Files used in an Organization System
Following are the types of files used in an organization system −
• Master file − It contains the current information for a system. For example,
customer file, student file, telephone directory.
• Table file − It is a type of master file that changes infrequently and stored
in a tabular format. For example, storing Zipcode.
• Transaction file − It contains the day-to-day information generated from business activities. It is used to update or process the master file (see the sketch after this list). For example, addresses of the employees.
• Temporary file − It is created and used whenever needed by a system.
• Mirror file − They are the exact duplicates of other files. Help minimize the
risk of downtime in cases when the original becomes unusable. They must
be modified each time the original file is changed.
• Log files − They contain copies of master and transaction records in order
to chronicle any changes that are made to the master file. It facilitates
auditing and provides mechanism for recovery in case of system failure.
• Archive files − Backup files that contain historical versions of other files.
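As a minimal sketch of the master/transaction relationship mentioned in the list above, the following Python fragment applies a day's transactions to a master file and chronicles each change in a log, in the spirit of the log files described. All field names and values are illustrative assumptions.
```python
# Sketch of updating a master file from a transaction file, with a log
# of the changes for audit/recovery. Names and values are assumptions.

master = {"C001": 5000, "C002": 1200}   # master file: current balances

transactions = [                        # transaction file: day's activity
    ("C001", -750),                     # withdrawal
    ("C002", +300),                     # deposit
    ("C001", +100),
]

log = []                                # log file: chronicles each change
for customer, amount in transactions:
    before = master[customer]
    master[customer] += amount
    log.append((customer, before, master[customer]))

print(master)  # {'C001': 4350, 'C002': 1500}
print(log)     # [('C001', 5000, 4250), ('C002', 1200, 1500), ('C001', 4250, 4350)]
```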
Documentation Control
Documentation is a process of recording information for any reference or operational purpose. It helps users, managers, and IT staff who require it. It is important that the prepared document is updated on a regular basis to trace the progress of the system easily.
After the implementation of the system, if the system is working improperly, documentation helps the administrator understand the flow of data in the system to correct the flaws and get the system working.
Programmers or systems analysts usually create program and system
documentation. Systems analysts usually are responsible for preparing
documentation to help users learn the system. In large companies, a technical
support team that includes technical writers might assist in the preparation of user
documentation and training materials.
Advantages
• It can reduce system downtime, cut costs, and speed up maintenance tasks.
• It provides the clear description of formal flow of present system and helps
to understand the type of input data and how the output can be produced.
• It provides effective and efficient way of communication between technical
and nontechnical users about system.
• It facilitates the training of new users so that they can easily understand the flow of the system.
• It helps the user to solve the problems such as troubleshooting and helps
the manager to take better final decisions of the organization system.
• It provides better control to the internal or external working of the system.
Types of Documentations
When it comes to System Design, there are following four main documentations −
• Program documentation
• System documentation
• Operations documentation
• User documentation
Program Documentation
• It describes inputs, outputs, and processing logic for all the program
modules.
• The program documentation process starts in the system analysis phase
and continues during implementation.
• This documentation guides programmers, who construct modules that are
well supported by internal and external comments and descriptions that
can be understood and maintained easily.
Operations Documentation
Operations documentation contains all the information needed for processing
and distributing online and printed output. Operations documentation should be
clear, concise, and available online if possible.
It includes the following information −
• Program, systems analyst, programmer, and system identification.
• Scheduling information for printed output, such as report, execution
frequency, and deadlines.
• Input files, their source, output files, and their destinations.
• E-mail and report distribution lists.
• Special forms required, including online forms.
• Error and informational messages to operators and restart procedures.
• Special instructions, such as security requirements.
User Documentation
It includes instructions and information to the users who will interact with the
system. For example, user manuals, help guides, and tutorials. User
documentation is valuable in training users and for reference purpose. It must be
clear, understandable, and readily accessible to users at all levels.
The users, system owners, analysts, and programmers, all put combined efforts to
develop a user’s guide.
A user documentation should include −
• A system overview that clearly describes all major system features,
capabilities, and limitations.
• Description of source document content, preparation, processing, and,
samples.
• Overview of menu and data entry screen options, contents, and processing
instructions.
• Examples of reports that are produced regularly or available at the user’s
request, including samples.
• Security and audit trail information.
• Explanation of responsibility for specific input, output, or processing
requirements.
• Procedures for requesting changes and reporting problems.
• Examples of exceptions and error situations.
• Frequently asked questions (FAQs).
• Explanation of how to get help and procedures for updating the user
manual.
System Documentation
System documentation serves as the technical specifications for the IS and
how the objectives of the IS are accomplished. Users, managers, and IS owners rarely need to reference system documentation. System documentation provides the basis
for understanding the technical aspects of the IS when modifications are made.
• It describes each program within the IS and the entire IS itself.
• It describes the system’s functions, the way they are implemented, each
program's purpose within the entire IS with respect to the order of
execution, information passed to and from programs, and overall system
flow.
• It includes data dictionary entries, data flow diagrams, object models,
screen layouts, source documents, and the systems request that initiated
the project.
• Most of the system documentation is prepared during the system analysis
and system design phases.
• During systems implementation, an analyst must review system
documentation to verify that it is complete, accurate, and up-to-date, including any changes made during the implementation process.
7.3.2 FILE DESIGN
Information systems in business are file and database oriented. Data are
accumulated into files that are processed or maintained by the system. The systems
analyst is responsible for designing files, determining their contents and selecting a
method for organising the data.
A file is organized to ensure that records are available for processing. It should be designed in line with the activity and volatility of the information and the nature of the storage media and devices. Other considerations are (1) cost of file media (highest for disk, lowest for tape), (2) inquiry requirements (real-time versus batch processing), and (3) file privacy, integrity, security, and confidentiality.
File Components
The following are the components of a file.
Basic file Related Keywords
Byte: - It is the smallest addressable unit in a computer. A byte is a set of 8 bits and represents a character.
Element: - It is a combination of one or more bytes. It is referred to as a field.
A field is actually a physical space on tape or disk. A roll number, age, name of
employee etc. are examples of it.
File: - It is a collection of similar records. The records will have the same fields but different values in each record. The size of a file is limited by the size of the memory available. The data is stored in files according to user requirements. Some records are processed daily whereas others are updated at random. Depending upon the way the data will be used, the file is organized accordingly.
Data Item: An individual element of data is called a data item, also known as a field or simply an item. For example, a bank cheque consists of the following data items: cheque number, date, payee, numeric amount, script amount, note, bank identification, account number, and signature.
Record: The complete set of related data pertaining to an entry, such as a bank cheque, is a record and is treated as a single unit. The bank cheque is therefore a record consisting of the separate fields related to the payment transaction. Each field has a defined length and type (alphabetic, alphanumeric, or numeric). Related data elements are combined into a record. An employee has a record with his name, designation, basic pay, allowances, deductions, etc. as its fields. A record may have a unique key to identify it, e.g. the employee number. Records are represented as logical and physical records. A logical record maintains a logical relationship among all the data items in the record; it is the way the program or user sees the data. In contrast, a physical record is the way data are recorded on a storage medium.
Records are further divided into fixed-length and variable-length records.
Fixed-length records: When the number and size of the data items in a record are constant for every record, the record is called a fixed-length record. The advantage of fixed-length records is that they are always of the same size. Thus, the system does not have to determine how long the record is or where it stops and the next one begins, saving processing time.
Variable-length records: Variable-length records are less common in most business applications than fixed-length designs because the latter are easier to
manage and meet most application needs. Record size may vary because the
individual data items vary in length (each record can have a different number of
bytes) or because the number of data items in a record changes from one
occurrence to another.
Record Key
To distinguish one specific record from another, systems analysts select one
data item in the record that is likely to be unique in all records of a file and use it
for identification purposes.
This item, called the record key, key attribute, or simply key, is already part of
the record and is not an additional data added to it. It is used just for the purpose
of identification.
Common examples of record keys are the part number in an inventory record,
the chart number in a patient medical record, the student number in a university
record, or the serial number of a manufactured product. Each of these record keys
has various other uses in the organization.
Entity
An entity is any person, place, thing, or event of interest to the organisation
and about which data are captured, stored, or processed. Patients and tests are
entities of interest in hospitals, while banking entities include customers and
cheques.
File and Database
File: A file is a collection of related records. Each record in a file is included
because it pertains to the same entity. A file of cheques, for example, consists only
of cheques. Inventory records and invoice do not belong in a cheque file, since they
pertain to different entities.
Database: A database is an integrated collection of file. Records for different
entities are typically stored in a database (whereas files store records for a single
entity). In a university database, records for students, courses, and faculty are
interrelated in the same database.
File Organization: Records are stored in files using a file organization. This
file organization determines how the records will be Stored, Located and Retrieved
easily. There are different types of file organization which are explained below.
Sequential Organization
Sequential organisation is the simplest way to store and retrieve records in a
file. In a sequential file, records are stored one after the other without concern for
the actual value of the data in the records. The first record stored is placed at the
beginning of the file. The second is stored right after the first, (there are no unused
positions), the third after the second, and so on. This order never changes in
sequential file organisation, unlike the other organisations to be discussed
Sequential organization simply means storing and sorting in physical,
contiguous blocks within files on tape or disk. Records are also in sequence within
each block. To access a record, previous records within the block are scanned.
Thus sequential record design is best suited for “get next” activities, reading one
record after another without a search delay.
In a sequential organization, records can be added only at the end of the file. It
is not possible to insert a record in the middle of the file, without rewriting the file.
In a data base system, however, a record may be inserted anywhere in the file,
which would automatically resequence the records following the inserted record.
Another approach is to add all new records at the end of the file and later sort the
file on a key (name, number, etc.). Obviously, in a 60,000-record file it is less time-consuming to insert the few records directly than to sort the entire file.
In a sequential file update, transaction records are in the same sequence as in
the master file. Records from both files are matched, one record at a time, resulting
in an updated master file. For example, the system changes the customer’s city of
residence as specified in the transaction file (on floppy disk) and corrects it in the
master file. A “C” in the record number specifies “replace”; an “A,” “add”; and a “D,”
“delete.”
In a personal computer with two disk drives, the master file is loaded on a
diskette into drive A, while the transaction file is loaded on another diskette into
drive B. Updating the master file transfers data from drive B to A, controlled by the
software in memory.
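The logic of such a sequential update can be sketched in a few lines of Python. This is a simplified illustration only: the record layouts, keys, and city values are assumed, both files are assumed to be already sorted by the record key, and every "C" and "D" transaction is assumed to match a master record.

# Master and transaction files, both sorted by customer number (assumed data).
master = [(100, "Chennai"), (200, "Delhi"), (300, "Mumbai")]
transactions = [(200, "C", "Kolkata"),   # "C" = replace the matching record
                (250, "A", "Pune"),      # "A" = add a new record
                (300, "D", None)]        # "D" = delete the matching record

def update_master(master, transactions):
    updated, m, t = [], 0, 0
    while m < len(master) or t < len(transactions):
        # Copy master records whose keys come before the next transaction key.
        if t == len(transactions) or (m < len(master) and master[m][0] < transactions[t][0]):
            updated.append(master[m])
            m += 1
            continue
        key, code, city = transactions[t]
        t += 1
        if code == "A":                  # insert the new record
            updated.append((key, city))
        elif code == "C":                # replace the matching master record
            updated.append((key, city))
            m += 1
        elif code == "D":                # skip (delete) the matching record
            m += 1
    return updated

print(update_master(master, transactions))
# [(100, 'Chennai'), (200, 'Kolkata'), (250, 'Pune')]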
Reading data in Sequential Organization
To read a sequential file, the system always starts at the beginning of the file
and reads its way up to the record, one record at a time. For example, if a
particular record happens to be the tenth one in a file, the system starts at the first
record and reads ahead one record at a time until the tenth is reached. It cannot go
directly to the tenth record in a sequential file without starting from the beginning.
In fact, the system does not know it is the tenth record. Depending on the nature of
the system being designed the file starts reading from the very beginning.
Sequential Organization (Searching Record)
Records are accessed in order of their appearance in the file. To find location of
cheque 1258 in a sequential file, we will call the cheque number 1258, the search
key. The program controls all the processing steps that follow. The first record is
read and its cheque number compared with the search key: 1240 (Let it be first)
versus 1258. Since the cheque number and search key do not match, the process is
repeated. The cheque number for the next record is 1244, and it also does not
match the search key. The process of reading and comparing records continues
until the cheque number and the search key match. Once the key number matches
the record is accessed. If the file does not contain a cheque numbered 1258, the
reading and comparing process continues until the end of the file is reached.
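This reading-and-comparing loop can be sketched in Python as follows; the cheque records and the field name used here are assumed for illustration.

# A sequential file of cheque records (data assumed for illustration).
cheques = [{"cheque_no": 1240}, {"cheque_no": 1244}, {"cheque_no": 1258}]

def sequential_search(records, search_key):
    # Always start at the beginning and read one record at a time.
    for record in records:
        if record["cheque_no"] == search_key:
            return record        # cheque number and search key match
    return None                  # end of file reached without a match

print(sequential_search(cheques, 1258))   # {'cheque_no': 1258}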
Direct-Access Organisation
In contrast to sequential organisation, processing a direct-access file does not
require the system to start at the first record in the file. Direct-access files are
keyed files. They associate a record with a specific key value and a particular
storage location. All records are stored by key at addresses rather than by position;
if the program knows the record key, it can determine the location address of a
record and retrieve it independently of every other record in the file.
In direct – access file organization, records are placed randomly throughout
the file. Records need not be in sequence because they are updated directly and
rewritten back in the same location. New records are added at the end of the file or
inserted in specific locations based on software commands.
Records are accessed by addresses that specify their disk locations. An
address is required for locating a record, for linking records, or for establishing
relationships. Addresses are of two types: absolute and relative. An absolute
address represents the physical location of the record. It is usually stated in the
format of sector/track/record number. For example, 3/14/6 means go to sector 3,
track 14 of that sector, and the sixth record of the track. One problem with
absolute addresses is that they become invalid when the file that contains the
records is relocated on the disk. One way around this is to use pointers for the
updated records.
A relative address gives a record location relative to the beginning of the file. There must be fixed-length records for reference. Another way of locating a record is by the number of bytes it is from the beginning of the file. As with relative addressing, if the file is moved, pointers need not be updated, because the relative location of the record remains the same regardless of the file location.
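As a small worked example, a relative address for fixed-length records reduces to simple arithmetic; the record length below is an assumed value, and record numbering is assumed to start at zero.

RECORD_LENGTH = 80                    # assumed fixed record length in bytes

def relative_byte_offset(record_number):
    # Position of a record relative to the beginning of the file.
    return record_number * RECORD_LENGTH

print(relative_byte_offset(5))        # the 6th record starts at byte 400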
Thus each file organization method has advantages and limitations. Many
applications by their nature are best done sequentially. Payroll is a good example.
The system goes through the employee list, extracts the information and prepares
pay slips. There are no lengthy random-access seeks. In contrast, real-time
applications where response requirements are measured in seconds are candidates
for random-access design. Systems for answering inquiries, booking airline or stadium seats, updating checking or savings accounts in a bank, or interacting with a terminal are examples of random-access design.
Direct-Access Organisation (Direct Addressing)
In the cheque example, the direct access of records is demonstrated by using a
storage area that has a space reserved for every cheque number from 1240 to 1300.
The system uses the cheque number as a physical record key. Cheque number
1248 is stored at address 1248, the location reserved for the cheque with that
number. To retrieve that cheque from storage in a computer system, the program is
instructed to use the number 1248 as the search key. It knows that the key serves
as the address and thus goes directly to the assigned location for the record with
the key of 1248 and retrieves the record. The attractive feature of direct
organisation is that records are retrieved much more quickly than when the file
must be searched from the beginning. When storage is assigned for the file, it starts
at the lowest key value and extends to the highest key value.
Direct Access Organization (Drawbacks-Direct Accessing)
Storage must be allocated for every possible key value, even though much of it may go unused. Another major problem prohibiting the use of direct addressing arises when the keys for the records do not match storage addresses. Even if the analyst wants to use direct addressing, it is impossible to do so if key values and addresses do not correspond. For example, if keys contain characters (e.g., a key of AB1CD), direct addressing is not possible, since there is no address corresponding to AB1CD.
Direct Access Organization (Hash Addressing)
When direct addressing is not possible but direct access is necessary, the
analyst specifies the alternative access method of hashing. Hashing (also called key
transformation or randomising) refers to the process of deriving a storage address
from a record key. An algorithm (an arithmetic procedure) is devised to change a
key value into another value that serves as a storage address. (The data value in
the record itself does not change.). There is no perfect hashing algorithm, although
some are much better than others when it comes to minimising synonyms. In
practice, synonyms occur when the hashing procedure is applied on different keys
and produces the same address in storage. A separate overflow area is set aside to
provide for record storage when synonyms occur. When a record is stored, the
hashing algorithm is performed and the address derived. The program accesses
that storage area, and, if it is unused, the record is stored there. If there is already
a record stored there, the new record is written in the overflow area. When the
system must retrieve a record, the hashing algorithm is performed and the storage
address determined. Then the record in the storage area is checked. If it is not the
correct one (meaning that a synonym occurred earlier), the system automatically
goes to the overflow area and retrieves the record for processing.
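A minimal Python sketch of hash addressing with a separate overflow area is given below. The table size and the division-remainder hashing algorithm are assumptions made for illustration; real systems use more refined algorithms and storage structures.

TABLE_SIZE = 7
prime_area = [None] * TABLE_SIZE   # main storage area, one slot per address
overflow_area = []                 # separate area set aside for synonyms

def hash_address(key):
    # Division-remainder hashing: derive a storage address from the key.
    return key % TABLE_SIZE

def store(key, data):
    address = hash_address(key)
    if prime_area[address] is None:
        prime_area[address] = (key, data)    # prime-area slot is free
    else:
        overflow_area.append((key, data))    # synonym: slot already taken

def retrieve(key):
    address = hash_address(key)
    slot = prime_area[address]
    if slot is not None and slot[0] == key:
        return slot[1]
    for stored_key, data in overflow_area:   # a synonym occurred earlier
        if stored_key == key:
            return data
    return None

store(1240, "cheque 1240")
store(1247, "cheque 1247")    # 1240 % 7 == 1247 % 7, so this is a synonym
print(retrieve(1247))         # found in the overflow area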
Indexed Organisation
A third way of accessing records is through an index. The basic form of index
included a record key and the storage address for a record. To find a record when
the storage address is unknown (as with direct address and hashing structures), it
is necessary to scan the records. However, the search will be faster if an index is
used, since it takes less time to search an index than an entire file of data.
Characteristics of Indexed file Organisation:
An index is a separate file from the master file to which it pertains. Each
record in the index contains only two items of data: a record key and a storage
address. To find a specific record when the file is stored under an indexed
organisation, the index is first searched to find the key of the record wanted. When
it is found, the corresponding storage address is noted and then the program
accesses the record directly. This method uses a sequential scan of the index,
followed by direct access to the appropriate record. The index helps speed the
search compared with a sequential file, but it is slower than direct addressing.
When the master file is not in any specific order, this method of file organisation is called indexed non-sequential organisation. There is one entry in the index for every record in the master file.
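The two-step lookup (search the index first, then access the record directly) can be sketched in Python as follows; the index contents and record data are assumed for the example.

# Index file: record key -> storage address (one entry per master record).
index = {1240: 0, 1244: 1, 1258: 2}

# Master file, addressed by position (record contents assumed).
master_file = ["cheque 1240 ...", "cheque 1244 ...", "cheque 1258 ..."]

def indexed_read(key):
    address = index.get(key)      # step 1: search the index for the key
    if address is None:
        return None               # key not present in the index
    return master_file[address]   # step 2: access the record directly

print(indexed_read(1258))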
Indexed Sequential Organisation
This is the most widely used method in information systems; it creates a pseudo-sequential file. Groups of records are stored in blocks with a capacity for a
specified amount of data. For example, the blocks can store up to 3150 pieces of
data. The first block, starting at address 1345, is in sequential order. The master
file stores individual blocks of records in sequential order. This is not a sequential
file, however, since all the records are not stored in physically adjacent positions;
think of it as a file of separate, full or partially full blocks, each in sequential order. The adjacent blocks are not necessarily in ascending order.
Like sequential organization, keyed sequential organization stores data in
physically contiguous blocks. The difference is in the use of indexes to locate
records. To understand this method, we need to distinguish among three areas in
disk storage: prime area, overflow area and index area. The prime area contains file
records stored by key or ID numbers. All records are initially stored in the prime
area. The overflow area contains records added to the file that cannot be placed in
logical sequence in the prime area. The index area is more like a data dictionary. It
contains keys of records and their locations on the disk. A pointer associated with
each key is an address that tells the system where to find a record.
In an airline reservation file, the index area might contain pointers to the
Chicago and Delhi flights. The Chicago flight points to the Chicago flight
information stored in the prime area. The Delhi flight points to the Delhi flight
information in the prime area. Lack of space to store the Brisbane flight in
sequential order makes it necessary to load it in the overflow area. The overflow
pointer places it logically in sequential order in the prime area. The same
arrangement applies to the other flights.
Indexed-sequential organization reduces the magnitude of the sequential
search and provides quick access for sequential and direct processing. The primary
drawback is the extra storage space required for the index. It also takes longer to
search the index for data access or retrieval.
Chaining
File organization requires that relationships be established among data items. It must show how characters form fields, fields form records, records form files, and files relate to one another. Establishing relationships is done through chaining or the use of pointers. The example on the airline reservation file showed how pointers link one record to another. In an inventory application, for instance, a part number retrieves a single record; a better way is to chain related records by linking a pointer to each. The pointer gives the address of the next part of the same class. The search method applies similarly to other parts in the file.
Inverted File
The other type of data structure commonly used in database management
systems is an inverted file. This approach uses an index to store information about
the location of records having particular attributes. In a fully inverted file, there is
one index for each type of data item in the data set. Each record in the index
contains the storage address of each record in the file that meets the attribute.
Some data items in a database will probably never be used to retrieve data.
Therefore, no index will be built for those data items. If not all attributes are
indexed, the database is only partially inverted, which is the more common data structure.
Like the indexed-sequential storage method, the inverted list organization
maintains an index. The two methods differ, however, in the index level and record
storage. The indexed-sequential method has a multiple index for a given key, whereas the inverted list method has a single index for each key type. In an inverted list, records are not necessarily stored in a particular sequence. They are placed in the data storage area, but indexes are updated for the record keys and locations.
Data for our flight reservation system has a separate index area and a data location area. The index area may contain the flight number and a pointer to the record
present in the data location area. The data location area may have record numbers
along with all the details of the flight such as the flight number, flight description,
and flight departure time. These are all defined as keys, and a separate index is
maintained for each. In the data location area, flight information is in no particular
sequence. Assume that a passenger needs information about the Delhi flight. The
agent requests the record with the flight description “Delhi flight”. The Data Base
Management System (DBMS) then reads the single-level index sequentially until it
finds the key value for the Delhi flight. This value may have two records associated
with it. The DBMS essentially tells the agent the departing time of the flight.
Looking at inverted-list organization differently, suppose the passenger requests information on a Delhi flight that departs at 8:15. The DBMS first searches the flight description index for the value “Delhi flight.” It finds both records. Next it searches the flight departure index for these values. It finds that one of them departs at 10:10, but the other departs at 8:15. The latter record in the data location area is displayed for follow-up.
It can be seen that inverted lists are best for applications that request specific
data on multiple keys. They are ideal for static files because additions and deletions
cause expensive pointer updating.
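The multi-key retrieval just described can be sketched in Python; the flight records and the index contents below are assumed for illustration only.

# Data location area: records in no particular sequence (data assumed).
flights = {
    0: {"description": "Delhi flight", "departure": "10:10"},
    1: {"description": "Chicago flight", "departure": "09:00"},
    2: {"description": "Delhi flight", "departure": "8:15"},
}

# One inverted index per key type: attribute value -> record numbers.
by_description = {"Delhi flight": [0, 2], "Chicago flight": [1]}
by_departure = {"10:10": [0], "09:00": [1], "8:15": [2]}

def find(description, departure):
    # Intersect the record-number lists from the two single-key indexes.
    matches = (set(by_description.get(description, []))
               & set(by_departure.get(departure, [])))
    return [flights[number] for number in matches]

print(find("Delhi flight", "8:15"))   # only the 8:15 Delhi flight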
7.3.3 INPUT DESIGN
Introduction
Input Specification describes the manner in which data enter the systems for
processing. Input design features can ensure reliability of system and produce
results from accurate data. The input design also determines whether the user can
interact efficiently with the system.
In an information system, input is the raw data that is processed to produce
output. During the input design, the developers must consider the input devices
such as PC, MICR, OMR, etc.
Therefore, the quality of system input determines the quality of system output.
Well designed input forms and screens have following properties −
• It should serve a specific purpose effectively, such as storing, recording, and retrieving information.
• It ensures proper completion with accuracy.
• It should be easy to fill and straightforward.
• It should focus on user’s attention, consistency, and simplicity.
All these objectives are obtained using the knowledge of basic design principles regarding −
o What are the inputs needed for the system?
o How end users respond to different elements of forms and screens.
Objectives for Input Design
The objectives of input design are −
• To design data entry and input procedures
• To reduce input volume
• To design source documents for data capture or devise other data capture
methods
• To design input data records, data entry screens, user interface screens, etc.
• To use validation checks and develop effective input controls.
Controlling the Amount of Input: Data preparation and data entry operations depend on people. Because labour costs are high, the cost of preparing and entering data is high, so reducing data requirements can lower costs. The computer may sit idle while data are being prepared and input for processing. By reducing input requirements, the analyst can speed the entire process from data capture to processing.
Avoiding Delay: Avoiding processing delays resulting from data preparation or
data entry operations should be one of the objectives of the analyst in designing
input.
Avoiding Errors in Data: The rate at which errors occur depends on the quantity of data, since the smaller the amount of data, the fewer the opportunities for errors. The analyst can reduce the number of errors by reducing the volume of data that must be entered for each transaction.
Avoiding Extra Steps: When the volume of transactions cannot be reduced, the analyst must be sure the process is as efficient as possible. Input designs that cause extra steps should be avoided.
Keeping the Process Simple: There should not be so many controls on errors that people have difficulty using the system. The system should be comfortable to use while still providing the error control methods.
Data Input Methods
It is important to design appropriate data input methods to prevent errors
while entering data. These methods depend on whether the data is entered by
customers in forms manually and later entered by data entry operators, or data is
directly entered by users on the PCs.
A system should prevent user from making mistakes by −
• Clear form design by leaving enough space for writing legibly.
• Clear instructions to fill in the form.
• Clear form design.
• Reducing key strokes.
• Immediate error feedback.
Some of the popular data input methods are −
• Batch input method (Offline data input method)
• Online data input method
• Computer readable forms
• Interactive data input
Input Integrity Controls
Input integrity controls include a number of methods to eliminate common
input errors by end-users. They also include checks on the value of individual fields, both for format and for the completeness of all inputs.
Audit trails for data entry and other system operations are created using transaction logs, which give a record of all changes introduced in the database and provide security and a means of recovery in case of failure.
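A minimal sketch of such field-level integrity checks in Python follows; the field names, formats, and rules are assumed examples, not a prescribed standard.

import re

def validate_input(form):
    errors = []
    # Completeness check: every required field must be present and non-empty.
    for field in ("name", "account_no", "amount"):
        if not form.get(field):
            errors.append(f"{field} is required")
    # Format checks on individual fields (rules assumed for illustration).
    if form.get("account_no") and not re.fullmatch(r"\d{10}", form["account_no"]):
        errors.append("account_no must be exactly 10 digits")
    if form.get("amount"):
        try:
            if float(form["amount"]) <= 0:
                errors.append("amount must be positive")
        except ValueError:
            errors.append("amount must be numeric")
    return errors   # an empty list means the input passed all checks

print(validate_input({"name": "Ravi", "account_no": "12345", "amount": "-5"}))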
7.3.4 OUTPUT DESIGN
The design of output is the most important task of any system. During output
design, developers identify the type of outputs needed, and consider the necessary
output controls and prototype report layouts.
One of the most important features of an information system for users is the
output it produces. Outputs from computer systems are required primarily to
communicate the results of processing to users. Without quality output, the entire
system may appear to be so unnecessary that users will avoid using it, possibly
causing it to fail. The term output applies to any information produced by an
information system
Objectives of Output Design
Outputs should convey information about past activities, current status, or projections of the future (e.g., a report on stock in hand shows current status; an exception report for electricity billing lists the number of houses found locked in an area); signal important events, opportunities, problems, or warnings; trigger an action (e.g., a reorder-level report, whether printed or displayed); and confirm an action (e.g., a report of goods received).
The objectives of output design are −
• To develop output design that serves the intended purpose and eliminates
the production of unwanted output.
• To develop the output design that meets the end users requirements.
• To deliver the appropriate quantity of output.
• To form the output in appropriate format and direct it to the right person.
• To make the output available on time for making good decisions.
Some Key Output Questions
• Who will receive the output?
• What is its planned use?
• How much detail is needed?
• When and how often is the output needed?
• By what method?
Contents of the Outputs
Data Items: The name of each data item, along with its characteristics, should be recorded in a standard form:
• Whether it is alphabetic or numeric
• Valid and specific range of values, e.g. minimum, maximum, fixed values or ranges
• Size of the data item
• Position of the decimal point, arithmetic sign, or any other indicator
The objective is to prevent the same data item being referred to by various names or the same name being used to describe different items.
Data Totals: There is often a need to provide totals at various levels. Their source must be identified and they must be defined and registered as data items. The systems analyst must specify:
• At what level(s) they are required, e.g. subtotal, grand total
• The position, e.g. at the end of a line
• What will cause them to occur, e.g. a change of key or any other condition
Data Editing: It is not always desirable to print or display data as it is held on a computer. The systems analyst must know whether the form in which data is stored is suitable for the output, so any editing required must be specified, e.g.
• Decimal points to be inserted or not.
• Where the currency symbol should appear as prefix or suffix.
• Alignment of items e.g., right, left.
Output Media: The systems analyst also has to determine the most appropriate medium for the outputs. This involves consideration of a wide range of devices, including
• Line Printer
• Graph plotter
• VDU
• Magnetic Media
• Microfilm
The following considerations are made while selecting an output medium:
• Suitability of the device to the particular application.
• The need for hard copy and number of copies required.
• The response time required.
• The location of users
• The S/W and H/W available.
• The cost.
Developing A Printed Output Layout: The design of printed output will
determine its usefulness to the recipient. An output layout is the arrangement of
items on the output medium. When analysts design an output layout, they are
building a mock-up of the actual report or document as it will appear after the
system is in operation. The layout should show the location and position of the
following.
• All variable information
o Item details
o Summaries and totals
o Separators e.g. dash & underline, control breaks
• All pre-printed details
o Headings
o Document name
o Organisation name and address
o Instructions
o Notes & comments
Common notations used in designing an output layout:
• Variable information
o X to denote that an alphabetic or special character (e.g. *, /) will be printed or displayed.
o 9 to denote a number will be printed.
• Constant information
o The information written on the form as it should appear when printed.
Designing Printed Output
Headings: In every report, the title of the report and the date and time should be included to tell users what they are working with and on what date it was prepared. The
page number provides quick reference for the users who work with data found at
various locations throughout the report.
Column Headings: Before actually marking in the data fields, enter the
column headings. It is a good practice to use an underline, dash or some other
symbol to separate the column headings from the start of data. Every column should have a heading that describes its contents.
Data & Details: Enter the description of the data below the column headings,
using the X and 9 conventions explained earlier and indicate size of data item.
Summaries: Some report designs specify summary information, column totals or subtotals. Label all titles and headings as you wish them to appear, denote variable data by X or 9, and indicate the maximum length of the field.
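The following short Python sketch shows how such a layout (a title line with the date, column headings separated by a dashed line, detail lines, and a grand total) might be produced; the report data and column widths are assumed for illustration.

from datetime import date

# Assumed sample data: (item number, description, quantity).
rows = [(101, "Pens", 120), (102, "Paper", 80), (103, "Ink", 45)]

print(f"INVENTORY REPORT    Date: {date.today()}    Page: 1")
print(f"{'ITEM NO':<10}{'DESCRIPTION':<20}{'QUANTITY':>10}")
print("-" * 40)
for item_no, description, quantity in rows:
    print(f"{item_no:<10}{description:<20}{quantity:>10}")
print("-" * 40)
print(f"{'GRAND TOTAL':<30}{sum(q for _, _, q in rows):>10}")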
Guidelines for Report Design
• Reports and documents should be designed to read from left to right and top
to bottom.
• The most important items should be easiest to find e.g. in an inventory
report Item Number is the most important item. It is placed in the first
column.
• All pages should have a title and page number and show the date the output
was prepared.
• All columns should be labelled.
• Abbreviations should be avoided.
Let us now go through various types of outputs −
External Outputs
External outputs are created for recipients outside the organization and are often designed for printers. They enable the system to trigger actions on the part of their recipients or to confirm actions to their recipients.
Some external outputs are designed as turnaround outputs, which are implemented as a form and re-enter the system as an input.
Internal outputs
Internal outputs are present inside the system, and used by end-users and
managers. They support the management in decision making and reporting.
There are three types of reports produced by a management information system −
• Detailed Reports − They present information with little or no filtering or restriction, generated to assist management planning and control.
• Summary Reports − They contain trends and potential problems which are
categorized and summarized that are generated for managers who do not
want details.
• Exception Reports − They contain exceptions, filtered data to some
condition or standard before presenting it to the manager, as information.
Output Integrity Controls
Output integrity controls include routing codes to identify the receiving
system, and verification messages to confirm successful receipt of messages that
are handled by network protocol.
Printed or screen-format reports should include a date/time for report printing
and the data. Multipage reports contain report title or description, and pagination.
Pre-printed forms usually include a version number and effective date.
7.3.5 FORMS DESIGN
Both forms and reports are the product of input and output design and are business documents consisting of specified data. The main difference is that forms provide fields for data input, whereas reports are purely used for reading. Examples include order forms, employment and credit applications, etc.
• During form designing, the designers should know −
o who will use them
o where would they be delivered
o the purpose of the form or report
• During form design, automated design tools enhance the developer’s ability
to prototype forms and reports and present them to end users for
evaluation.
Objectives of Good Form Design
A good form design is necessary to ensure the following −
• To keep the screen simple by giving proper sequence, information, and clear
captions.
• To meet the intended purpose by using appropriate forms.
• To ensure the completion of form with accuracy.
• To keep the forms attractive by using icons, inverse video, or blinking
cursors etc.
• To facilitate navigation.
Types of Forms
Flat Forms
• It is a single copy form prepared manually or by a machine and printed on a
paper. For additional copies of the original, carbon papers are inserted
between copies.
• It is a simplest and inexpensive form to design, print, and reproduce, which
uses less volume.
Unit Set/Snap out Forms
• These are papers with one-time carbons interleaved into unit sets for either
handwritten or machine use.
• Carbons may be either blue or black, standard grade medium intensity.
Generally, blue carbons are best for handwritten forms while black carbons
are best for machine use.
Continuous strip/Fanfold Forms
• These are multiple unit forms joined in a continuous strip with perforations
between each pair of forms.
• It is a less expensive method for large volume use.
No Carbon Required (NCR) Paper
• They use carbonless papers which have two chemical coatings (capsules),
one on the face and the other on the back of a sheet of paper.
• When pressure is applied, the two capsules interact and create an image.
7.3.6 DATA BASE DESIGN
A decade ago, the database was unique to large corporations with mainframes. Today it is recognized as a standard part of MIS and is available for virtually every size of computer. Before the database concept became operational, users had programs that handled their own data independent of other users. It was a conventional file
environment with no data integration or sharing of common data across
applications. In a database environment, common data are available and used by
several users. Instead of each program (or user) managing its own data, data across
applications are shared by authorized users with the data base software managing
the data as an entity. A program now requests data through the data base
management system (DBMS), which determines data sharing.
Database: - It is a set of interrelated files. The files in combination tend to link
to a common solution. For example, a student attendance file, a student result file,
a student admission file, etc. are related to academic software pertaining to students.
Objectives of Data Base
The general theme behind a database is to handle information as an integrated
whole. There is none of the artificiality that is normally embedded in separate file or
applications. A database is a collection of interrelated data stored with minimum
redundancy to serve many users quickly and efficiently. The general objective is to
make information access easy, quick, inexpensive and flexible for the user. In data
base design, several specific objectives are considered:
Controlled redundancy: - Redundant data occupies space and, therefore, is
wasteful. If versions of the same data are in different phases of updating, the
system often gives conflicting information. A unique aspect of data base design is
storing data only once, which controls redundancy and improves system
performance.
Ease of learning and use: - A major feature of a user- friendly database
package is how easy it is to learn and use. Related to this point is that a database
can be modified without interfering with established ways of using the data.
Data independence: - An important database objective is changing hardware
and storage procedures or adding new data without having to rewrite application
programs. The database should be “tunable” to improve performance without
rewriting programs.
More information at low cost: - Using, storing and modifying data at low cost
are important. Although hardware prices are falling, software and programming
costs are on the rise. This means that programming and software enhancements
should be kept simple and easy to update.
Accuracy and integrity: - The accuracy of a database ensures that data
quality and content remain constant. Integrity controls detect data inaccuracies
where they occur.
Recovery from failure: - With multi-user access to a database, the system
must recover quickly after it is down with no loss of transactions. This objective
also helps maintain data accuracy and integrity.
Privacy and security: - For data to remain private, security measures must
be taken to prevent unauthorized access. Database security means that data are
protected from various forms of destruction; users must be positively identified and
their actions monitored.
Performance: - This objective emphasizes response time to inquiries suitable
to the use of the data. How satisfactory the response time is depends on the nature
of the user-data base dialogue. For example, inquiries regarding airline seat
availability should be handled in a few seconds. On the other extreme, inquiries
regarding the total sale of a product over the past two weeks may be handled
satisfactorily in 50 seconds.
In a data base environment, the DBMS is the software that provides the
interface between the data file on disk and the program that requests processing.
The DBMS stores and manages data. The procedure is as follows:
1. The user requests a sales report through the application program. The
application program uses a data manipulation language (DML) to tell the
DBMS what is required.
2. The DBMS refers to the data model, which describes the view in a language
called the data definition language (DDL). The DBMS uses DDL to determine
how data must be structured to produce the user’s view.
3. The DBMS requests the input/output control system (IOCS) to retrieve the
information from physical storage as specified by the application program.
The output is the sales report.
To summarize,
1. DML manipulates data; it specifies what is required.
2. DDL describes how data are structured.
3. The DBMS manages data according to DML requests and DDL descriptions.
The DBMS also performs several important functions:
1. Storing, retrieving, and updating data.
2. Creating program and data independence. Either one can be altered independently of the other.
3. Enforcing procedures for data integrity. Data are immune from deliberate alteration because the programmer has no direct method of altering physical databases.
4. Reducing data redundancy. Data are stored and maintained only once.
5. Providing security facilities for defining users and enforcing authorization. Access is limited to authorized users by passwords or similar schemes.
6. Reducing physical storage requirements by separating the logical and physical aspects of the database.
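These roles can be illustrated with Python's built-in sqlite3 module, in which SQL statements act as both the DDL and the DML; the table and column names here are assumed for the example.

import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: describes how the data are structured.
conn.execute("CREATE TABLE sales (item TEXT, amount REAL)")

# DML: manipulates the data; it specifies what is required.
conn.execute("INSERT INTO sales VALUES ('pens', 120.0)")
conn.execute("INSERT INTO sales VALUES ('paper', 80.5)")

# The DBMS manages storage and retrieval according to these requests.
for row in conn.execute("SELECT item, amount FROM sales"):
    print(row)

conn.close()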
7.4 REVISION POINTS
• System design
• E-R model
• Documentation
• File components
• Input and output design
• Form and database design
7.5 INTEXT QUESTIONS
1. Explain the steps in system design.
2. Explain the terms and symbol used in E-R model.
3. Explain the different types of documentation for a system.
4. List the components of a file.
5. Write short note on : Input design, Output design and database design.
7.6 SUMMARY
System design is the phase that bridges the gap between problem domain
and the existing system in a manageable way.
Operations documentation contains all the information needed for
processing and distributing online and printed output.
System documentation serves as the technical specifications for the IS and
how the objectives of the IS are accomplished.
File organization determines how the records will be stored, located and
retrieved easily.
The input design also determines whether the user can interact efficiently
with the system.
During output design, developers identify the type of outputs needed, and
consider the necessary output controls and prototype report layouts.
7.7 TERMINAL EXERCISE
1. ____________ specifies distinct real world items in an application.
2. _____________ are the meaningful dependencies between entities.
3. _______________ describes the structure and behavior of the system
4. _____________ document serves as the technical specifications.
5. Identifying the type of outputs needed, and designing the necessary output
controls are carried out in ____________
7.8 SUPPLEMENTARY MATERIALS
1. https://economictimes.indiatimes.com/definition/systems-design
2. https://www.slideshare.net/ShifaliSharma1/itftfile-design
7.9 ASSIGNMENTS
1. Discuss the different types of file organization and its characteristics with
suitable example.
7.10 SUGGESTED READING/REFERENCE
1. System Analysis, Design, and Development: Concepts, Principles, and
Practices, Charles S. Wasson, Wiley Publication.
2. https://www.nasa.gov/seh/4-design-process
7.11 LEARNING ACTIVITIES
1. Try to contact system professionals and discuss about the system
development process and other related activities.
7.12 KEYWORDS
File design
E-R Model: Entity Relationship model
File organization
Entity
Input design
Output design
Form design
Chaining.
LESSON - 8
INTEGRATION OF APPLICATIONS AND TEXT PROCESSING
8.1 INTRODUCTION
Application integration is the process of enabling individual applications to
work with one another. Application integration helps bridge the gap between
existing on-premises systems and fast-evolving cloud-based enterprise applications.
Microsoft Office is a suite of desktop productivity applications that is designed
specifically to be used for office or business use. It is a proprietary product of
Microsoft Corporation and was first released in 1990. Microsoft Office is available in
35 different languages and is supported by Windows, Mac and most Linux variants.
It mainly consists of Word, Excel, PowerPoint, Access, OneNote, Outlook and
Publisher applications.
Microsoft has dominated the business world for a long time. It is on every
computer, in every office, no matter if that is a library or a Fortune 500 company.
We use Microsoft Word to create the newsletters for our businesses and collate our
data with Microsoft Excel. As familiar as we are with these programs, you should
know that they’re always being improved in order to help us do more.
8.2 OBJECTIVES
• To know and understand the concept of application integration and its uses.
• To get knowledge of Ms-Office
• To improve the understanding of Microsoft Office, Powerpoint, excel and
Access.
• To learn the various operation in Ms-Office software.
8.3 CONTENTS
8.3.1 Application Integration
8.3.2 Text Processing Software
8.3.3 Ms-Office
8.3.4 Ms-Word
8.3.5 Ms-PowerPoint
8.3.6 Ms-Excel
8.3.7 Ms-Access
8.3.1 INTEGRATION OF APPLICATIONS
Application integration is the process of enabling individual applications—each
designed for its own specific purpose—to work with one another. By merging and
optimizing data and workflows between multiple software applications,
organizations can achieve integrations that modernize their infrastructures and
support agile business operations.
Application integration helps bridge the gap between existing on-premises
systems and fast-evolving cloud-based enterprise applications. Through seamlessly
interconnected processes and data exchanges, application integration allows
enterprises to orchestrate a variety of functions across their entire infrastructures,
enabling businesses to operate more effectively and efficiently.
Application integration concepts
When an organization considers moving forward with application integration,
there are various components required to orchestrate processes between two or
more applications successfully.
Application Programming Interface (API)
An API is a set of functions and procedures that specify how software
components should interact. They allow developers to easily and quickly access the
functionality of other software through well-defined data structures and have, as a
result, become a popular way of integrating applications, data, and services in recent years.
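As a small illustration, the snippet below calls a hypothetical REST API using the third-party requests library; the endpoint, token, and field names are invented for the example and do not refer to any real service.

import requests   # third-party HTTP library (pip install requests)

# Hypothetical endpoint and credentials, for illustration only.
response = requests.get(
    "https://api.example.com/v1/customers/42",
    headers={"Authorization": "Bearer <token>"},
)
response.raise_for_status()           # fail loudly on an HTTP error
customer = response.json()            # well-defined JSON data structure
print(customer.get("name"))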
Events and actions
An event is an occurrence in your connected applications—such as a payment
being received. An event then triggers an action or series of actions, which can
include standard functionality—like creating, retrieving, or updating datasets—or be application-specific—such as a new case being created in Salesforce.
Data mapping
Data mapping specifies the information exchange that's to be used. For
example, when you complete and submit contact forms in one application, this
event can trigger actions that map those form fields to other corresponding datasets
on other applications, categorizing the information entered into first name, last
name, status, etc. This simplifies the process of exporting data for easier grouping
and analysis.
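A simple sketch of such a mapping in Python follows; all of the form and target field names here are assumed for the example.

# Mapping of source form fields to the target application's fields
# (field names assumed for illustration).
FIELD_MAP = {"Name": "full_name", "E-mail": "email", "Status": "status"}

def map_form(submission):
    record = {target: submission.get(source, "")
              for source, target in FIELD_MAP.items()}
    # Derive first and last name from the single source name field.
    parts = record.pop("full_name").split(" ", 1)
    record["first_name"] = parts[0]
    record["last_name"] = parts[1] if len(parts) > 1 else ""
    return record

print(map_form({"Name": "Deepika Y", "E-mail": "d@example.com", "Status": "new"}))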
Application vs. data integration
In discussions about the importance of integrated applications and services in
an enterprise setting, the terms "application integration" and "data integration" are
sometimes used interchangeably. However, the concept and use cases behind each
of these processes are very different.
What is data integration?
Data integration is the practice of locating and retrieving information from
various sources while combining that information into a unified structure and view.
Also referred to as batch-based data processing, data integration involves collecting
an extensive amount of data over time, storing it, and then eventually processing it
in batches. Data integration isn't necessarily conducted in real-time. It is commonly
used after processes have been completed, so it can provide administrators the data
portability they need to analyze application performance over time, eliminate
redundancies, and ensure data consistency and quality.
How is application integration different?
Application integration is different from data integration in that it directly links
multiple applications at a functional level. Application data may be linked in near
real-time, allowing organizations to create dynamic and highly adaptable
applications and services.
There are a number of proven application integration styles that can be used
independently or in combination—from modern API-led integration where business
IT assets can be exposed and discovered in a rapid and secure way, to more
traditional Service Oriented Architectures like the Enterprise Service Bus, where
you integrate different applications by putting a communication layer or ‘bus’
between them. This allows enterprises to orchestrate integration flows that trigger
after events occur in one application to the next. Because of these real-time
interactions, enterprises are able to stay agile, responding to new data feeds or
performance issues as they occur, rather than after the fact.
Benefits of application integration
There are many complexities that integration can resolve, but what are the
benefits? Integration provides value on both an organizational and an operational level, if you choose the right integration tool.
Organizational benefits
Integrating your applications across various clouds is an important step
toward synchronizing your data. However, you need an integration tool that allows
deployment of integration runtimes within multiple clouds. This allows you to
deploy close to your applications, resulting in lower latency times as processes run
directly within the cloud and lower costs from not needing to move data in and out
of platforms.
Operational benefits
The right application tool can also yield important timesaving, cost-cutting,
and performance-enhancing operational benefits:
• Access any data anywhere: With organizations diversifying their
application landscape (e.g., adopting SaaS applications, building new
solutions in the cloud) data is increasingly dispersed across multiple
environments. Integration tools that deploy across these environments
enable access from any system to any sort of data in any format.
• Resolve ‘endpoint individuality’: Each system or application has its own
idiosyncrasies that must be accounted for in any integration—error
handling, authentication protocols, load management, performance
optimization and more. Integration tools that handle these factors ‘out of
the box’ yield tremendous gains in productivity over coding and a higher
level of enterprise-class resiliency.
• Let integrators focus on integration: Purpose-built tooling can help
integrators focus less on the surrounding infrastructure and more on
building business logic. By addressing error recovery, fault tolerance, log
capture, performance analysis, message tracing, and transactional update
and recovery, an integration tool enables users to create integration flows more quickly, without requiring a deep knowledge of the various platforms and
domains.
Uses of Integration
As more and more organizations concentrate on deploying agile integration
strategies, modernizing legacy systems is a primary focus. Industry-specific
examples include the following:
• Banking: By integrating customer accounts, loan application services,
and other back-end systems with their mobile app, a bank can provide
services via a new digital channel and appeal to new customers.
• Manufacturing: Factories use hundreds or even thousands of devices to
monitor all aspects of the production line. By connecting the devices to
other systems (e.g., parts inventories, scheduling applications, systems
that control the manufacturing environment), manufacturers can uncover
insights that help them identify production problems and better balance
quality, cost, and throughput.
• Healthcare: By integrating a hospital patient’s record with an electronic
health record (EHR) system, anyone who treats the patient has access to
the patient’s history, treatments, and records from the primary care
physician and specialists, insurance providers, and more. As the patient
moves through different areas of the hospital, the relevant caregivers can
easily access the information they need to treat the patient most effectively.
Organizations in any industry can leverage mission-critical systems through integration:
• ERP systems: Enterprise resource planning (ERP) systems serve as a hub
for all business activities in the organization. By integrating ERP with
supporting applications and services, organizations can streamline and
automate mission-critical business processes, such as payment
processing, supply chain functions, sales lead tracking, and more.
• CRM platforms: When combined with other tools and services, customer
relationship management (CRM) platforms can maximize productivity and
efficiency by automating a number of sales, marketing, customer support,
and product development functions.
8.3.2 TEXT PROCESSING SOFTWARE
In computing, the term text processing refers to the theory and practice of
automating the creation or manipulation of electronic text. Text usually refers to all
the alphanumeric characters specified on the keyboard of the person engaging the
practice, but in general text means the abstraction layer immediately above the
standard character encoding of the target text. The term processing refers to
automated (or mechanized) processing, as opposed to the same manipulation done
manually.
Text processing involves computer commands which invoke content, content
changes, and cursor movement, for example to
• search and replace
• format
• generate a processed report of the content of, or
• filter a file or report of a text file.
Text processing, unlike an algorithm, is a manually administered sequence of simpler macros: pattern-action expressions and filtering
mechanisms. In either case the programmer's intention is impressed indirectly
upon a given set of textual characters in the act of text processing. The results of a
text processing step are sometimes only hopeful, and the attempted mechanism is
often subject to multiple drafts through visual feedback, until the regular
expression or markup language details, or until the utility options, are fully
mastered.
Text processing is concerned mostly with producing textual characters at the
highest level of computing, where its activities are just below the practical uses of
computing—the manual transmission of information.
Ultimately all computing is text processing, from the self-compiling textual
characters of an assembler, through the automated programming language
generated to handle a blob of graphical data, and finally to the metacharacters of
regular expressions which groom existing text documents.
Text processing is its own automation.
8.3.3 MICROSOFT OFFICE
Microsoft Office (or simply Office) is a family of client software, server software, and services
developed by Microsoft. It was first announced by Bill Gates on August 1, 1988, in
Las Vegas. The first version of Office contained Microsoft Word, Microsoft Excel,
and Microsoft PowerPoint. Over the years, Office applications have grown
substantially closer with shared features such as a common spell checker, data
integration etc. Office is produced in several versions targeted towards different
end-users and computing environments. The original, and most widely used
version, is the desktop version, available for PCs running the Windows, Linux and
Mac OS operating systems. Office Online is a version of the software that runs
within a web browser, while Microsoft also maintains Office apps for Android and
iOS.
Microsoft Office is a suite of desktop productivity applications that is designed
specifically to be used for office or business use. It is a proprietary product of
Microsoft Corporation and was first released in 1990. Microsoft Office is available in
35 different languages and is supported by Windows, Mac and most Linux variants.
It mainly consists of Word, Excel, PowerPoint, Access, OneNote, Outlook and
Publisher applications.
Microsoft Office was primarily created to automate the manual office work with
a collection of purpose-built applications.
Microsoft Office is the most efficient suite of applications for document
creation, communication and business information analysis. For many functions,
the business platform has evolved from paper to the Web. Microsoft Office extends
desktop productivity to the web, streamlining the way you work and making it
easier to share, access and analyze information so you get better results. Office
2000 offers a multitude of new features. Of particular importance for this release
are the features that affect the entire suite. These Office-wide, or shared features
hold the key to the new realm of functionality enabled by Office. Office offers a new
Web-productivity work style that integrates core productivity tools with the Web to
streamline the process of sharing information and working with others. It makes it
easier to use an organization's intranet to access vital business information and
provides innovative analysis tools that help users make better, timelier business
decisions. Office delivers new levels of resiliency and intelligence, enabling users
and organizations to get up and running quickly, stay working and achieve great
results with fewer resources.
Microsoft has dominated the business world for a long time. It is on every
computer, in every office, no matter if that is a library or a Fortune 500 company.
We use Microsoft Word to create the newsletters for our businesses and collate our
data with Microsoft Excel. As familiar as we are with these programs, you should
know that they’re always being improved in order to help us do more. The latest
features for Microsoft Office 365 not only maintain the comfortable framework that
we are used to, but they also bring with them incredible new features that we never
knew we needed. These ten new features of Microsoft Office will raise your
productivity, creativity, and efficiency in the workplace and at home.
Features of M.S. Office
Simultaneous Collaboration
This feature allows you to collaborate in real time with your colleagues and
staff and is available through Microsoft Word and PowerPoint Presentations.
Simultaneous Collaboration seamlessly shows what each team member is doing on
the document. It is important to note that while PowerPoint is compatible with
Simultaneous Collaboration, real-time typing is not yet available in PowerPoint
presentations.
Simple Sharing
Sharing documents should be easy. This new Microsoft Office feature offers straightforward document sharing with a simple Share button, located in the ribbon in Microsoft Word, Excel and PowerPoint. Team members can not only quickly share documents, spreadsheets, and presentations, but also access and change permissions.
must be shared through SharePoint or OneDrive.
Share Large Files As A URL
Large files are no longer a stalling point for businesses. Now if you need to
share a large file you can quickly do so with OneDrive, by sending it as a private
URL. This saves you a significant amount of time, as you no longer have to wait for
the file to upload. Additionally, the recipient no longer has to wait for the file to
download. Any large files that are attached to an email will automatically convert to
a link that the recipient can then open or download.
Helpful Versioning
No one is perfect. Sometimes you delete a section of a document, or
accidentally change the formatting, which is where versioning comes into play. You can quickly revert the document to a previous version by clicking on File > History
to view and select from a list of all prior versions. Additionally, it is beneficial to
view various versions when you are tracking changes in drafts. While this isn’t a
new feature, it is newly improved. Microsoft knows how important this feature is, so
they have worked to ensure that this feature functions more efficiently each time.
Smart Lookup
The Smart Lookup feature provides you the ability to look up a word right from
inside your Microsoft word document, saving you the time of opening up a web page
or a dictionary. In the time that it would take to open a webpage, you would already
be back to your writing. To use this feature, you only need to highlight the word,
right-click, and select “Smart Lookup.”
Outlook Groups
Outlook has a new feature called Groups, which offers users a quick and
efficient way to work as a team without pre-created distribution lists. You can now
create a group of your colleagues or friends, giving this new group its own shared
inbox, calendar, file repository, and OneNote notes. This feature is incredibly
beneficial for task management and file sharing when working in teams.
New Charts in Microsoft
How many meetings have you sat through while staring at the same bar
graphs, with the same dull colors? Microsoft now offers six new chart types:
Treemap, Waterfall, Pareto, Histogram, Box and Whisker, and Sunburst. Each new
chart comes with a new layout and new possibilities. For example, Treemap
provides a hierarchical view of data, while Waterfall gives you a running total of
items as they are added and subtracted. Microsoft Word, Excel, and PowerPoint are
all compatible with these new charts.
One-Click Forecasting
Data is only important if it is used. One-click Forecasting, a new feature for
Excel, helps ensure that you use all the data that you are already collecting. This
feature allows users to view quick predictions based on selected portions of the
Excel Spreadsheet, which are made possible through Microsoft’s Exponential
Smoothing algorithm. This algorithm provides explicit short-term forecasts based
on the collective data within the spreadsheet.
Skype Integration
Skype is now integrated into Word and Excel, which means that while team
members are working on a shared document, they have all of Skype’s capabilities at
their fingertips. Now you can call, text, or screen share, right from either the Word
Document or the Excel Spreadsheet. This capability provides team members the
ability to effectively communicate, without having to leave their work, which saves
you crucial time. Every time you have to stop working to save the spreadsheet,
share the document, and plan a meeting, you are losing valuable time, and putting
a dent in your workflow. Skype integration allows you to continue working without
interruption.
Cross-Device Compatibility
When collaborating, it’s essential that you have strong cross-device support.
Microsoft has worked hard to ensure a smooth transition across platforms and apps
by integrating them evenly with one another. This means that no matter which
platform and app you are using, you can be sure they are all on the same page.
A Common User Interface
While learning one application of the suite, you also learn the operational basics of the other applications, even though each application maintains some uniqueness. Consistency in MS-Office applications is in the form of:
• Toolbars
• Menus
• Dialog Boxes
• Customizable features and operational features are similar too.
Quick Access to Other Applications
MS-Office provides the Microsoft Office Shortcut Bar, which is used for
the following:
• Create a new file based on templates and wizards
• Opening existing files and automatically launching the related
applications
• Add tasks, make appointments, record tasks and add contacts and
journal entries.
• Create a new Outlook Message.
• Switch between and launch Microsoft Office Applications.
Sharing Data across Applications
Microsoft Office provides several means of sharing data between applications:
• Copying – copies the data from the source application to the target application using the clipboard.
• Linking – links the data from the source document to the target document; the data itself remains stored with the source document.
• Embedding – embeds the data from the source document into the target document; the data is saved with the target document.
• Microsoft Office extends the data sharing beyond application integration
by providing workgroup integration with the Microsoft Outlook. Users can
mail documents, spreadsheets, presentations and data files from within
the source applications.
Providing a Common Language:
Providing a common language has been a more challenging goal for Microsoft Office. It provides a common macro programming language for all the applications – Visual Basic for Applications (VBA).
Advanced presentation features
While PowerPoint is still one of the most popular and commonly-used
presentation solutions available, there are plenty of others who view it as dated—
particularly with more tech-savvy options like Prezi available. In order to stay
relevant, Microsoft has announced plans to incorporate more advanced presentation features in Office 2019. These include things like enhanced Morph and Zoom capabilities to help you create a more sophisticated and dynamic presentation. Those features are already included in Microsoft 365 subscriptions, but are not available to people who are currently operating with older versions of Office.
Improved inking features
Those who use Microsoft Surface devices are probably already big fans of the digital pen that allows them to draw, note, and doodle directly onto their device's screen. Office 2019 will introduce all new inking capabilities across all apps—such as pressure sensitivity, tilt effects that adjust the ink's thickness depending on the angle of the pen, and even a roaming pencil case, which allows users to store and organize their favorite pens, pencils, and highlighters to roam with them across their different devices.
Easier email management
Microsoft has teased several new features to take some of the hassle and
headaches out of email management. According to Microsoft, these include things
like:
• Updated contact cards
• Microsoft Office Groups
• @mentions
• Focused inbox
• Travel package cards
Microsoft is hopeful that these additions will help users manage their email far
more efficiently and effectively.
Power Map in Excel: Turning data into a map
Power Map is part of the powerful and interactive data visualization features in
Excel, which are enhanced with Power BI, the solution for analyzing, visualizing
and sharing data insights. You can turn rows of data into a 3D interactive map with
Power Map, which includes the ability to filter data using three different filters: List,
Range, or Advanced.
Editing and converting to PDF made easy


Finished editing your document, spreadsheet, presentation or publication? Converting it to a PDF is made simple with the integrated Save as PDF feature. If we want to edit a PDF, we can convert it into a Word document without the formatting issues that come with copy-pasting.
50GB of email storage and Management
Each user gets 50GB of storage with Exchange Online for email, calendar,
tasks, notes, contacts and attachments. We can save more space in your mailbox
by using the OneDrive cloud storage to share attachments. Enable the Reading Pane in Outlook to reply to email without even opening it. The Reading Pane will
allow you to preview the email content, hit Reply and send your response, in the
same main Outlook window. Combine this feature with the Conversations View and
you’ll see at a glance if you’ve already replied to an email. You won’t have to open
different windows for each message.
Working anywhere from any device
You get the Office Web Apps with your Office or Microsoft 365 subscription, so
you can work on your cloud files from any up-to-date Web browser on an Internet
connected computer. You don’t have to install the Office software on the computer.
You also get access to Office Mobile Apps so you can use Word, Excel, PowerPoint,
Outlook and OneDrive directly from your smart phone or tablet while you’re on the
move! Download them from your Android or iOS device store.
Robust security and reliability
With Microsoft taking on the responsibility for security and reliability, users can also implement their own security features for their Office documents.
With all these new features it is easy to see why Microsoft programs are
integrated into nearly every business. These new features allow you to increase
your productivity at work, whether it means real-time collaborating, or Skype
screen-sharing an Excel Spreadsheet for payroll. As with any Microsoft project, you
have access to ample support and videos to help get you acclimated to these new
features.
The primary applications that are included in Microsoft Office include the
following:
• Microsoft Word—A word processing application to create and edit letters,
articles, newsletters, flyers, and any other text-based document.
• Microsoft Excel—A spreadsheet application used primarily to record and
manipulate numbers with calculations, functions, links, and other
operations.
• Microsoft PowerPoint—A presentation application to create and present
slide shows with text, graphics, and videos.
• Microsoft OneNote—A note-taking application to record notes, screen
captures, documents and images.
In addition, Microsoft has three more applications that are included in an extended version of Microsoft Office. These additional applications are
• Microsoft Outlook—An email client application that handles multiple email
accounts. It also includes a powerful address book.
• Microsoft Access—A database management system application. Access is a
powerful tool to build and use a database system.
• Microsoft Publisher—A desktop publishing application which provides more
advanced publishing tools than those included in Word.
Finally, there are two more applications that are not usually considered part of
the Office Suite, but which also use that same layout and ribbon that are used in
the Office Suite. These two applications are
• Microsoft Visio—A graphical and drawing tool that is used for graphical
design of software, databases, office layouts, flowcharts, engineering
diagrams, and so forth.
• Microsoft Project—A project scheduling and control application that tracks
tasks, critical paths, resources, and completion times.
Microsoft has moved to a consistent layout and ribbon menu on all of these
applications. Therefore, as you learn the capabilities and shortcuts to use the
ribbon in Word, you will automatically learn many of the menu items available in
these other applications.
Besides desktop applications, Microsoft Office is available to use online or from
cloud under a lighter (Office Web Apps) and full (Office 365) version. As of 2016, Microsoft Office 2016 was the latest version, available in different variants including Office Home & Student 2016, Office Home & Business 2016 and Office Professional 2016, plus the online/cloud Office 365 Home Premium.
8.3.4 MS-WORD
One of the most widely used programs of Microsoft Office suite, MS Word is a
word processor developed by Microsoft.
Since MS Word is one of the most used programs of the Office Suite, some
basic information regarding its creation and development has been given below:
• Charles Simonyi, a developer and Richard Brodie, a software engineer, were
the two creators of MS Word
• This program was initially named “Multi-Tool Word” but was later renamed as MS Word
• It was introduced in 1983
• Word for Windows is available standalone or as a part of MS Office suite
• MS Word for Mac was introduced by Microsoft as Word 1.0 in 1985
• The default extension for a Word file is “.doc” (“.docx” from Word 2007 onwards)
Ms-Word not only supports word processing features but also DTP features.
Some of the important features of Ms-Word are listed below:
1. Using Word you can create documents and edit them later, as and when required, by adding more text, modifying the existing text, or deleting/moving some part of it.
2. Changing the size of the margins can reformat the complete document or part of the text.
3. Font size and type of fonts can also be changed. Page numbers and Header
and Footer can be included.
4. Spelling can be checked and correction can be made automatically in the
entire document. Word count and other statistics can be generated.
5. Text can be formatted in columnar style as we see in the newspaper. Text
boxes can be made.
6. Tables can be made and included in the text.
7. Word also allows the user to mix the graphical pictures with the text.
Graphical pictures can either be created in word itself or can be imported
from outside like from Clip Art Gallery.
8. Word also has the facility of macros. Macros can be attached to function/special keys, to a toolbar or to a menu (see the sketch after this list).
9. It also provides online help of any option.
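As an illustration of the macro facility mentioned in point 8 above, here is a minimal sketch of a Word macro written in Visual Basic for Applications (VBA), the suite's common macro language. The macro name and formatting values are illustrative, not Word defaults:

    Sub FormatHeading()
        ' Apply bold, 14-point Arial to whatever text is currently selected.
        ' Run it from the Visual Basic Editor, or attach it to a key, toolbar or menu.
        With Selection.Font
            .Bold = True
            .Size = 14
            .Name = "Arial"
        End With
    End Sub

Such a macro can also be produced automatically with Word's macro recorder and then edited by hand.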
Different elements and categories which are available in MS Word doc:
• Home
This has options like font colour, font size, font style, alignment, bullets, line
spacing, etc. All the basic elements which one may need to edit their
document is available under the Home option.
• Insert
Tables, shapes, images, charts, graphs, header, footer, page number, etc.
can all be entered in the document. They are included in the “Insert”
category.
• Design
The template or the design in which you want your document to be created
can be selected under the Design tab. Choosing an appropriate tab will
enhance the appearance of your document.
• Page Layout
Under the Page Layout tab comes options like margins, orientation,
columns, lines, indentation, spacing, etc.
• References
This tab is the most useful for those who are creating a thesis or writing
books or lengthy documents. Options like citation, footnote, table of
contents, caption, bibliography, etc. can be found under this tab.
• Review
Spell check, grammar, Thesaurus, word count, language, translation,
comments, etc. can all be tracked under the review tab. This acts as an
advantage for those who get their documents reviewed on MS Word.
Apart from all the above-mentioned features, the page can be set in different
views and layouts, which can be added and optimised using the View tab on the
Word document. Margins and scales are also available for the benefit of the users.
Uses of MS Word
Given below are the different fields in which MS Word is used and simplifies
the work of an individual:
• In Education: It is considered as one of the simplest tools which can be
used by both teachers and students. Creating notes is easier using MS Word
as they can be made more interactive by adding shapes and images. It is
also convenient to make assignments on MS Word and submitting them
online
• In Workplace: Submitting letters, bills, creating reports, letterheads, sample
documents, can all easily be done using MS Word
• Creating & Updating Resume: One of the best tools to create your resumes
and is easy to edit and make changes in it as per your experience
• For Authors: Since separate options are available for bibliography, table of
contents, etc., it is the best tool which can be used by authors for writing
books and adjusting it as per the layout and alignment of your choice
8.3.5 MS-POWER POINT
A PowerPoint presentation is a presentation created using Microsoft
PowerPoint software. The presentation is a collection of individual slides that
contain information on a topic. PowerPoint presentations are commonly used in
business meetings and for training and educational purposes. Microsoft PowerPoint
is a software product used to perform computer based presentations. There are
various circumstances in which a presentation is made: teaching a class,
introducing a product to sell, explaining an organizational structure, etc. The
preparation and the actual delivery of each are quite different. PowerPoint typically
comes with a set of preloaded themes for you to choose from. These can range from
simple color changes to complete format layouts with accompanying font text.
Themes can be applied through the whole presentation or a single slide. Using the
page setup allows you to optimize the presentation for the display size; for instance,
you should use a larger screen ratio when displaying on a projector compared to a
computer screen.
Features of MS PowerPoint
Microsoft first rolled out MS PowerPoint in 1987. PowerPoint software features
and formatting options include a wizard that walks you through the presentation creation process, and design templates: prepackaged background designs and font styles that will be applied to all slides in a presentation. When viewing a
presentation, slide progression can be manual, using the computer mouse or
keyboard to progress to the next slide, or slides can be set up to progress after a
specified length of time. Slide introductions and transitions can be added to the
slides.
With each version, new features in PowerPoint become available that help
make creating presentations easier. Microsoft PowerPoint is one of the most popular
programs for making presentations. With an intuitive graphical interface, in-built
editing tools, and more, it transformed presenting information for students,
businesses, and everything in between. Nevertheless, the program keeps evolving.
With PowerPoint 2019 comes a plethora of new features, helping even the greenest
of users make visually engaging presentations.
There are several new features in PowerPoint that were missing in previous
releases. This just goes to show that Microsoft does listen to what their users want.
Read on to learn about the new features in PowerPoint!
1. Insert Vectors
Tired of fuzzy images? Now your PowerPoint slides can boast the clarity and
sharpness of scalable vector graphic (SVG) pictures. You can edit vectors, such as
changing their color and size. PowerPoint 2019 can also handle SVG images with
filters on them. Insert your vector image in the SVG format, like you normally do
with other pictures.
A Format menu will appear in the ribbon. Here you can find different options
to play around with. Transform your vector into line art, use an eyedropper to select
a color from your slide, or convert it to an Office Shape. This allows the vector’s
disassembly for arranging as you please!
2. Conduct Slide Shows with Digital Pens
Presenting your slides is now easier than before! Use a compatible digital pen
like a wireless remote for a comparatively hands-free presenting experience. Of
course, you will need to update to the latest Windows 10 version. You also need a
digital pen (e.g. the Surface Pen) and a computer that supports Bluetooth. First,
enable Bluetooth on both devices. Next, pair your computer and pen through
Settings > Devices > Bluetooth & Other Devices.
Once paired, adjust the settings for the pen’s shortcut button. Navigate to
Settings > Devices > Pen & Windows Ink. Here you can see settings for choosing
how many clicks of the shortcut button launch which action. Check the box for the
option “Allow apps to override the shortcut button behavior.”
Now you are all set! When you start your presentation, activate the slide show. Tap
the button once to go forward, then briefly hold it down to go backward.
3. Morph Transition
Want smoother animations? PowerPoint 2019 brings you the Morph effect for
sleeker transitions. Two slides should have a minimum of one object in common
with each other. Make a copy of the first slide you want to Morph, then on the copy
modify the object.
Navigate to the Transitions tab, choose Morph. Play around with the Effect
Options until the Morph effect works to your preference.

4. Zoom for PowerPoint


For more dynamic cuts in between slides, try Zoom. This is one of the new
features for PowerPoint that allows navigation between selected slides using a
zooming effect.
You can explain a point on one slide, then jump to an image on another slide
ahead to describe their interconnection with Zoom. To find this tool, head to the
Insert tab > Links section, and select the arrow next to the Zoom for options.
5. Introducing: The Text Highlighter
PowerPoint 2019 has a Text Highlighter tool now – just like MS Word. A small
feature, but popular enough to include in this release. Go to the Home tab and
select the highlight option. Choose your preferred color, then glide the cursor over the text you want to accentuate.

6. Use 3D Models
One of the many new features in PowerPoint for graphics includes inserting 3D
models. PowerPoint 2019 makes it simple to put a 3D model in your presentation.
Users can also rotate the 3D model 360 degrees for maximum impact. Head over to
the Insert tab > Illustrations section > 3D Models to locate and insert your file.

7. Image Backgrounds are Easier to Remove


Those who make image-heavy PPTs will breathe a sigh of relief! PowerPoint 2019 perceives the overall background sections automatically. While users could mark areas for removal before, now they can draw free-form outlines. This makes for more proportionate background removal compared to the straight lines in previous PowerPoint versions!
Insert your image and click on the picture formatting options that appear on
the tabbed ribbon and select Background Removal.
8. Export in 4K Resolution
Prefer a video for a smooth presenting experience? Now you can choose 4K
resolution! Simply go to File > Export > Create a Video, and select the Ultra HD (4K)
option.

9. Recording and Inserting Media


If you like adding narrative elements in your presentations, the new features in PowerPoint will delight you. PowerPoint 2019 has a built-in recording facility for audio, video, and inking gestures, and even the computer screen!
You can find these options together in the Insert tab in the “Media” section. All
you need is a microphone, speakers, and a soundcard for audio, or an optional
webcam for video. Simply capture slide timings, narration, and inking gestures for
your PowerPoint. The recording is like any other PPT – save it as a video or view it
as a Slide Show.
10. Real-time collaboration


PowerPoint 2019 makes it easy to share and collaborate with other presenters.
You can even see who is working while you are, with an indicator showing which slide they are on.
First, save your PPT file in OneDrive. Then, navigate from File > Share > Share with People, and select the contacts you want to collaborate with. You can also use the keyboard shortcut Alt + Z + S to send requests inviting people to work with you.
Introduction to Ms-Excel
MS Excel or Microsoft Excel is one of the most popular and widely used
spreadsheet programs introduced by the Microsoft Corporation. This powerful
spreadsheet program comes included with the Microsoft Office Suite. Almost all
professionals and people in business use this spreadsheet program, whether they
are just entry-level users or advanced users. It allows users to improve their
spreadsheet skills with its simple user interface and help them organize their data
efficiently. Since Excel has been a spreadsheet solution for users for a very long
time, it has gained many essential features over time. So, we must know some of its
essential features and improve our skills and overall efficiency to work on it.
Microsoft Excel is a general-purpose electronic spreadsheet used to organize,
calculate, and analyze data. The tasks you can complete with Excel range from preparing a simple family budget or a purchase order to managing a complex accounting ledger for a medium-size business.
8.3.6 MS EXCEL
We discuss the most popular features of MS Excel below. However, the features are not limited to these; based on learning stages, the number of features keeps on increasing.
will not be wrong to say that the features of MS Excel are almost countless. There
are a number of features that are available in Excel to make your task easier. Some
of the main features are:
• AutoFormat - lets you choose from many preset table formatting options.
• AutoSum - helps you to add the contents of a cluster of adjacent cells.
• List AutoFill - automatically extends cell formatting when a new item is
added to the end of a list.
• AutoShapes toolbar - allows you to draw a number of geometric shapes, arrows, flowchart elements, stars and more. With these shapes you can draw your own graphs.
• Drag and Drop - feature will help you to reposition the data and text by
simply dragging the data with the help of mouse.
• Charts - features will help you in presenting a graphical representation of
your data in the form of Pie, Bar, Line charts and more.
• PivotTable - flips and sums data in seconds and allows you to perform data
analysis and generating reports like periodic financial statements, statistical
reports, etc. You can also analyse complex data relationships graphically.
• Shortcut Menus - commands that are appropriate to the task that you are
doing appear by clicking the right mouse button.
Let us now discuss the essential features of MS Excel in detail:
Header and Footer
MS Excel allows users to insert a header and footer into their spreadsheet document files. A header is the top margin of each page in an Excel worksheet, while a footer is the bottom margin of each page. These are valuable components for Excel sheets as they appear on every page of the document. Users can enter any text or numbers in the header and footer of their Excel document, for example the title of the document, the user/author name, page numbers, etc.
Apart from this, the main advantage of header and footer in Excel is that this
feature allows users to insert a watermark into their Excel documents.
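Headers and footers can also be set programmatically. A minimal VBA sketch, assuming the active worksheet is the one to be printed (&F, &P and &N are Excel's placeholder codes for the file name, the page number and the total page count):

    Sub AddHeaderFooter()
        With ActiveSheet.PageSetup
            .CenterHeader = "&F"             ' file name in the centre header
            .CenterFooter = "Page &P of &N"  ' page number / total pages in the footer
        End With
    End Sub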

Shortcut Keys
The use of shortcut keys in Excel is one of the main features of this powerful
spreadsheet program. MS Excel has an extensive range of shortcut keys that help
users reduce their working time. The keyboard shortcuts are essential alternatives to using a mouse or a touch screen, and perform most Excel commands instantly.
Since Excel has a pretty long list of shortcut keys, we discuss a few essential Excel shortcut keys below:

Shortcut Key(s) – Action
Ctrl + N – New Workbook
Ctrl + C – Copy Selected Contents
Ctrl + V – Paste Contents from Clipboard
Ctrl + Alt + V – Launch 'Paste Special' dialog box
Ctrl + X – Cut Selected Contents
Ctrl + Y – Redo Previous Change(s)
Ctrl + Z – Undo Last Change(s)
Ctrl + P – Print Current Workbook/Worksheet(s)

Inserting New Worksheets


By default, Excel typically starts with 1 to 3 worksheets whenever we create a new workbook. However, we can insert as many sheets as we need depending on our system resources and capability. Inserting and deleting worksheets in Excel is easy, and this can be performed while working on the worksheets from within the Excel program at any point in time.
To insert a new worksheet in any workbook, we need to click on the 'PLUS' sign at the bottom of the Excel window.

Additionally, we can press the shortcut keys Shift + F11 on the keyboard to
insert a new worksheet instantly.
Besides, we can also delete any desired worksheet with ease. First, we must right-click on the desired worksheet name and then select the 'Delete' option from the menu options.
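The same insert and delete operations are available from VBA. A minimal sketch; the sheet name "Summary" is hypothetical:

    Sub AddAndRemoveSheets()
        Dim ws As Worksheet
        Set ws = Worksheets.Add(After:=Worksheets(Worksheets.Count)) ' insert at the end
        ws.Name = "Summary"
        Application.DisplayAlerts = False   ' suppress the delete confirmation prompt
        Worksheets("Summary").Delete
        Application.DisplayAlerts = True
    End Sub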

Find and Replace Command


MS Excel allows users to use the 'Find and Replace' command to find any
specific data (text and numbers both) from the worksheet and further replace the
data with the new one if desired. This feature is mainly beneficial while working on vast amounts of data. Using this feature, users quickly find the needed data and efficiently reduce the work process time.
To use this feature, we need to navigate to Home > Find and Select. Additionally, we can also use the following shortcuts for quick access to

Shortcut Key(s) – Action
Ctrl + F – Display the 'Find and Replace' window with the Find tab selected
Ctrl + H – Display the 'Find and Replace' window with the Replace tab selected
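The same replace operation can also be driven from a macro. A minimal VBA sketch, assuming we want to replace every occurrence of one value on the active sheet (the values searched for are illustrative):

    Sub ReplaceEverywhere()
        ' Replace every occurrence of "2021" with "2022", matching partial cell contents.
        ActiveSheet.Cells.Replace What:="2021", Replacement:="2022", LookAt:=xlPart
    End Sub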
Protecting with the Password


Protecting the data in Excel is very easy. Excel has a built-in feature to enable users to password-protect their desired documents. Once a user has enabled the password for any specific Excel file, it cannot be opened on any device/system without the particular password. This eventually keeps the data protected from unauthorized access and makes it easier to share the documents over email or the Internet.
To protect Excel documents with a password, we need to navigate to File > Protect Workbook and select the desired protection tactics.
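For illustration, the open password can also be applied when saving from VBA. A minimal sketch; the file path and password below are hypothetical:

    Sub SaveWithPassword()
        ' Save the active workbook so it cannot be opened without the password.
        ActiveWorkbook.SaveAs Filename:="C:\Reports\Confidential.xlsx", _
            FileFormat:=xlOpenXMLWorkbook, Password:="secret123"
    End Sub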
Data Sorting and Filtering
Instead of using the 'Find and Replace' feature, users can also use 'Data
Sorting and Filtering' to quickly find the desired data based on the specific
parameters. Then, users can either reorder the data or pick out just the needed
data from the vast data sets. This particular feature eventually saves time and
makes spreadsheets look neat, clean, and effective.
In particular, the sorting feature helps us to manage the order of the data in different manners, such as lowest to highest, highest to lowest, alphabetically, etc. We must navigate to Home > Sort and Filter and then select the appropriate sorting option from the list to use the sorting feature. Additionally, we can right-click on the selected range of data and then select the 'Sort' option from the list.
Besides, filtering of data in Excel is one of the most used options. It allows users to filter the data using advanced techniques. We must navigate to Home > Sort and Filter and then select the appropriate filtering option from the list to use the filtering feature. Alternatively, we can use the keyboard shortcut Ctrl + Shift + L to quickly filter any data available under the filter section.

Once the filtering


iltering feature is applied to the data, Excel displays the drop-down
drop
icon on the top of the cells in each corresponding column. This looks like this:
Users can apply the 'Sorting and Filtering' feature on one or more columns.
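Both operations can be scripted as well. A minimal VBA sketch, assuming a headed data range in A1:C100 whose first column contains region names (all ranges and values are illustrative):

    Sub SortAndFilter()
        With ActiveSheet
            ' Sort by column B, highest to lowest; the first row is a header.
            .Range("A1:C100").Sort Key1:=.Range("B1"), Order1:=xlDescending, Header:=xlYes
            ' Then show only the rows whose first column equals "South".
            .Range("A1:C100").AutoFilter Field:=1, Criteria1:="South"
        End With
    End Sub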
Built-in Formulae
Excel has a wide range of built-in formulae that allow users to perform different operations on the data in worksheets. Using functions and formulae to manipulate numbers and get desired results is one of the most powerful features of MS Excel. It contains more than 450 functions and formulae, enabling users to perform basic to complex operations efficiently.
To access the formulae in Excel, we are required to navigate to the 'Formulas' tab.
The basic formulae include SUM, AVERAGE, MIN, PRODUCT, etc. Let us
understand this feature with the following example:
Suppose we have specific numerical values in cell A1 and cell A2, and we want
to add these two values and get the result in cell A3. Thus, we apply the sum
function in cell A3, i.e., =SUM(A1,A2). This way, Excel displays the sum of values
from cell A1 and A2 in cell A3.

Similarly, we can use other operations on the data in spreadsheets.
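A few more commonly used worksheet formulas, shown here for illustration (the cell references are arbitrary):

    =AVERAGE(B2:B11)            returns the arithmetic mean of the range
    =MAX(B2:B11), =MIN(B2:B11)  return the largest and smallest values
    =COUNT(B2:B11)              counts the numeric cells in the range
    =IF(B2>=50,"Pass","Fail")   returns one of two values based on a condition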


Paste Special Feature
The 'Paste Special' option is another useful feature in MS Excel. It offers various individual options whenever we try to paste any contents from the clipboard. By default, if we press the shortcut 'Ctrl + V' or select the 'Paste' option from the right-click menu list, Excel pastes the contents from the clipboard exactly as they were copied. Therefore, if we want to paste only some specific part from the clipboard, such as the values without formatting or the formatting without values, we can use the 'Paste Special' feature.
This particular feature in Excel allows users to increase the overall workplace productivity and reduce the processing time to some extent. Let us understand this feature with an example:
Suppose we have the following data where we have the sum formula in the A5
cell for the range of cells from A1 to A4.
This means that cell A5 is dependent on the above cells to display the
corresponding results (value). Therefore, if we copy the results from cell A5 and
paste it on another cell using the 'Paste' option, Excel will only paste the
corresponding formula but not the value.
If we want to paste values only, we must copy cell A5 and then use the 'Paste Special' feature in the cell where we want to paste the values. In our case, we paste the values in cell C5. When we select 'Paste Special' or press its shortcut keys, Alt + E + S, we get the Paste Special window.

We have many options here to paste the contents as desired. However, since we want to paste only the values, we must select the 'Values' option and then click the 'OK' button.
Thus, Excel pastes only the value from cell A5 into cell C5.
Similarly, we can use other options as per our requirements.
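The same values-only paste can be performed from VBA, continuing the A5/C5 example above:

    Sub PasteValuesOnly()
        Range("A5").Copy
        Range("C5").PasteSpecial Paste:=xlPasteValues  ' paste the result, not the formula
        Application.CutCopyMode = False                ' clear the copy marquee
    End Sub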
Pivot Tables
Pivot Tables help summarize vast amounts of data from the database, organized so that the first row contains a heading and the others contain values or categories. Besides, there should be no blank rows in the selected range of data. This feature is beneficial to analyze and compare data easily.
To insert Pivot Tables in Excel, we must first select the range of cells or table and then navigate to Insert > Tables > Recommended PivotTables.
Once the user applies Pivot Tables in Excel, Excel then creates the PivotTables
in a new worksheet and displays the different fields, allowing users to rearrange the
data as desired.
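A PivotTable can also be created programmatically. A minimal VBA sketch, assuming a headed source range in A1:B100 whose columns are named "Region" and "Sales" (both names are hypothetical):

    Sub MakePivot()
        Dim pc As PivotCache, ws As Worksheet
        ' Cache the source data, then build the PivotTable on a new sheet.
        Set pc = ActiveWorkbook.PivotCaches.Create( _
            SourceType:=xlDatabase, SourceData:=ActiveSheet.Range("A1:B100"))
        Set ws = Worksheets.Add
        pc.CreatePivotTable TableDestination:=ws.Range("A3"), TableName:="SalesPivot"
        With ws.PivotTables("SalesPivot")
            .PivotFields("Region").Orientation = xlRowField  ' one row per region
            .PivotFields("Sales").Orientation = xlDataField  ' summed sales as values
        End With
    End Sub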
Conditional Formatting
Conditional Formatting in Excel is another helpful feature that allows users to change the formatting of a cell based on the contents of a cell or range of cells. This particular feature is mainly beneficial to focus on essential aspects of specific desired values in spreadsheets. For example, conditional formatting enables users to fill in different colors to highlight the essential aspects of data in spreadsheets.
Users can also apply basic font and cell formatting such as font style, size, and other font attributes.

To highlight the contents, users get various rules and styles and can even
create their custom rules as per needs.
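A custom rule can also be added through VBA. A minimal sketch that fills cells green when the value is 50 or more (the range and threshold are illustrative):

    Sub HighlightHighMarks()
        Dim fc As FormatCondition
        Set fc = Range("B2:B31").FormatConditions.Add( _
            Type:=xlCellValue, Operator:=xlGreaterEqual, Formula1:="=50")
        fc.Interior.Color = RGB(198, 239, 206)  ' light green fill for matching cells
    End Sub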
Charts and Graphics
Excel allows users to create different types of charts based on the data in sheets. Users can also use different built-in shapes and images if desired. In addition to this, Excel also enables users to use mixed charts, meaning that we can use/combine two styles of charts in the same worksheet.
For example, we can use a line chart and a column chart on the same range of data. This feature is mainly beneficial where users need to highlight two different types of information or a range of values that changes significantly.
To insert charts and other graphics objects, we are required to navigate to the Insert tab.
Auto-Fill Data
Although it is a minor feature, it is very useful for regular users. Using the Auto-fill feature, users can fill data in a series: for example, values from 1 to 10 or even more, weekdays, month names, dates, etc.
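The same series fill is available from VBA. A minimal sketch that seeds a 1, 2 pattern and extends it down to row 10:

    Sub FillSeries()
        Range("A1").Value = 1
        Range("A2").Value = 2
        ' Extend the pattern: A3 becomes 3, A4 becomes 4, and so on.
        Range("A1:A2").AutoFill Destination:=Range("A1:A10"), Type:=xlFillDefault
    End Sub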
8.3.7 MS-ACCESS
Regular Microsoft Office users are not as familiar with Microsoft Access as they
are with Microsoft Word, Excel or PowerPoint.
Since Microsoft Access is a relational database application included in the Microsoft Office Suite, which allows users to enter, manage, and run reports on a larger scale, it is most suitable for those who need to quickly organize a large amount of data.
It is layered somewhere between Excel, which is ideal for individuals with small data storage needs, and the SQL Servers required by larger teams and corporations.
With the help of Microsoft Access, users can effectively manage important information by storing it conveniently for future reference, reporting, and analysis. As the name suggests, users will have access to organized information in their database with minimal effort.
Microsoft Access Features
• Ideal for individual users and smaller teams
• Easier than a client-server database to understand and use
• Import and export to Microsoft Office and other applications
• Ready templates for regular users to create and publish data
• Allows building and publishing Web databases effortlessly
• A user-friendly feature 'Tell Me' for assistance
• Allows developers to create custom solutions using VBA code
• Hide/Show option for the Ribbon
• Report View eliminates extra reports
• Allows outputting reports in PDF format
Ideal for individual users and smaller teams
Microsoft Access is a simple desktop application that does not require any
specific hardware or license to function. Hence, it is ideal for individual users and
smaller teams who do not require larger and complicated databases for an extra
price.
Microsoft Access offers numerous advantages over database solutions such as SQL Server and Oracle where the need for a database is not very high. Many smaller teams can now avoid the purchase of massive resources for their basic and simple needs with the help of Microsoft Access.
Further, they do not need anyone to administer or monitor Access which
regular client-server databases may require.
Moreover, it comes with an option of allowing users to turn their database into
the application, which was introduced after Microsoft Access 2007.
It is easier than a client-server database to understand and use
Personal computer applications like Microsoft Access have taken over many
applications which were earlier relegated to a client-server database in the past.
Users of Access need not undergo any special training to acquire the skills
necessary to use the application.
Many users have self-tutored themselves to use Microsoft Access for their
individual needs. Creating and administering a database is quite simple and easy
with the help of a readily available, well-organized set of templates which will be
discussed further in this article.
Users of Microsoft Excel will find Access easier to master since it is similar to Excel, though the fundamental concepts of databases are much different from those of Excel.
Learning and managing this powerful application using the most effective
techniques and tools available is possible even for new users who have little or no
knowledge of managing databases.
Microsoft Access stores different data sets known as Tables that are potentially
related, which, when combined together, increases data flexibility.
By relating and joining tables together, Access, a Relational Database
Management System (RDBMS) on the whole, provides more flexible reporting and
functionality within this application to the users, which makes their task simple
and easy.
Import and export to other Microsoft Office and other applications


One of the most useful features of Microsoft Access is its ability to interface
with data from many other programs.
It is also easier to combine data created in other programs, transfer data between two other programs, or accumulate and store data over the long term, occasionally exporting data to other programs such as Excel for analysis.
With Microsoft Access, there are a number of ways to copy an object, such as a table or form, from one database to another effortlessly. Besides copying and pasting an object, Microsoft Access also allows exporting an object, which comes with more options.
With Access, it is convenient to export the table definition along with the data in the table, or to export just the table definition, which gives a blank copy of the table, as required. You can also save the details of the operation as an export specification for future use.

The Import and Export Link group displays icons for all the data formats that Microsoft Access can import or export data to. On clicking More, users can see more formats that Microsoft Access can work with.
For further convenience of the users, the import/export wizard helps users with the task and with saving the details of that operation as a specification.
Ready templates for regular users to create and publish data
Microsoft Access helps users in creating and managing databases even if they have minimal experience in the field. This is made possible with the help of several Microsoft Access templates, which have everything ready for use.
On opening a specific template file, the user finds a new database with tables, forms, macros, reports, and other necessary fields already created, saving time and effort.
The templates gallery conveniently comprises both desktop and web-based templates for the user to choose from. For creating an Access database for personal use, the best option would be to go with a desktop template.
For creating databases for publishing on a SharePoint server, it is recommended that the user choose the Web-based templates.
For example, the Desktop Customer Service Template from Microsoft Access 2013 helps users to create a customer service database to manage multiple issues, tracking assignments, priority, status, and resolution with ease.
The Microsoft Access templates for employee and vendor project marketing help users create an extensive marketing project database to track time-sensitive deliverables, employee roles, and their priced vendors.
Allows building and publishing Web databases effortlessly
Users of Microsoft Access can either design their own database or create a
database using a readily available template as per their requirement.
Those who are tech-savvy and familiar with Web Databases would ideally
design their own database by creating a blank database on which they would create
the tables that their database would need on Access.
Those who need help or are not aware of what tables their project would
require can make use of the templates available for them. Microsoft Access
templates have a huge compilation for some commonly used databases that users
would require.
Even new users can create a database using a template by following these steps:
1. Open Access and open the backstage view by clicking on FILE.
2. Find the required template among the templates you see there. If the required template is not found, users can search Microsoft Office online to find additional templates.
3. Tap or click the template which is suitable for the purpose and ensure that the selected template is specific either for a desktop database or for Web publishing.
4. Enter a file name and select a location to store the created database.
5. Tap or click the Create button to create the database.
It is as easy as that.
A user-friendly feature, 'Tell Me', for assistance
The new user-friendly feature ‘Tell Me’ being introduced in Microsoft Access
2016 works like an assistant helping users complete the task quickly.
The feature is available as a text box on the ribbon in Microsoft Access 2016
that says Tell me what you want to do with a bulb beside it.
Users can enter words and phrases in the text field related to what they want
to do next and quickly get to features they want to use or actions they want to
perform. It also provides help related to what is being searched for.
For example, when the word ‘filter’ is entered, all the filter-related options in the application will crop up. Users do not have to hunt any feature down through a maze of menus with the Tell Me bar available.
Moreover, unlike help assistants of the past, this feature doesn’t just tell the user how to perform a specific function; it offers a simple way to actually do it.
Allows developers to create custom solutions using VBA code
Visual Basic for Applications (VBA) is a programming language that can be
used with Microsoft Access. Developers using Access can create custom solutions
for their database using VBA code, an effective programming language that consists
of a code/command for specific programs.
The instructions will cause actions to take place automatically when the
program is executed. This powerful feature allows developers to extend basic custom end-user solutions to a professional solution by using advanced automation, data validation, error trapping, and multi-user support in their databases.
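As a minimal sketch of such a custom solution (the report name and field below are hypothetical), a few lines of Access VBA can open a filtered report with simple error trapping:

    Sub OpenFilteredReport()
        On Error GoTo ErrHandler                          ' simple error trapping
        ' Open the report in print preview, restricted to one customer.
        DoCmd.OpenReport "rptInvoices", acViewPreview, , "CustomerID = 42"
        Exit Sub
    ErrHandler:
        MsgBox "Could not open the report: " & Err.Description
    End Sub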
Hide/Show option for the Ribbon
The Microsoft Access window consists of a variety of components helping users
to work more efficiently. The important components are the Navigation Pane,
Access work area, ribbon, shortcut menus, and Quick Access Toolbar.
Toolbar Some of
these components are common to other Office apps, whereas others are unique to
Microsoft Access.
The ribbon contains five tabs used to accomplish various tasks on the
computer related to organizing and managing the contents of the open window in
Microsoft Access.
It is located near the top of the window below the title bar and provides easy,
central access to the tasks performed while creating a database.
The ribbon available in Microsoft Access consists of tabs, groups, and commands. Each tab contains a collection of groups, and each group contains related functions. It can be further customized to suit the user's requirements.
At times the ribbon, which initially displays several main tabs, can be a bit
irritating. Now users have the option to hide the ribbon when not required to have a
clutter-free screen.
The ribbon can be minimized in Access by double-clicking a tab, and this setting is remembered for future sessions. Now users do not have to worry about the ribbon infringing on their work area.
Report View Eliminates Extra Reports
With Microsoft Access, users can choose four different ways to view reports:
• Report view
• Print Preview
• Layout view
• Design view.
The report view shows the report on the screen as users would prefer to see it.
A very useful feature of Microsoft Access, the new Report View allows users to
perform ad hoc filters on a report similar to the way they can filter forms.
Users have a choice to pick which fields they desire to have on their reports by choosing their preference from more than one table or query, and by filtering on specific column values, words that begin with or contain particular letters, or a data range.
The resulting reports show exactly what the viewer wants to see, with
summaries automatically recalculated. Moreover, this requires no special
programming skills on behalf of the user.
Further, developers can add grouping levels, set the order for their records and sort records either in ascending or descending order. Finally, the user will see the report exactly as they want it, with all the extra unwanted fields eliminated.
With little effort, the readability of the reports will be enhanced, and they
become more viewer-friendly.
By setting the Hide Duplicates property to Yes, viewers can get rid of unwanted duplicate values in reports when needed.
Output Reports in PDF format
Today more and more users are sharing databases through electronic image formats, popularly known as fixed formats, like PDF by Adobe Systems and XPS by Microsoft.
Access allows users to create reports in electronic image files through the
EXTERNAL DATA tab on the ribbon. These reports can be viewed even by users
who do not have Access installed on their system since the PDF file can be opened
on Adobe Reader.
With the reports and database shared in PDF format, the applications of Access have been enhanced significantly. Users love this output type, which is welcome as a useful addition, making it convenient and simple to distribute reports via email, which saves paper and contributes to green computing.
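The export can also be automated. A minimal Access VBA sketch; the report name and output path are hypothetical:

    Sub ExportReportAsPdf()
        ' Write the report straight to a PDF file on disk.
        DoCmd.OutputTo acOutputReport, "rptInvoices", acFormatPDF, "C:\Reports\Invoices.pdf"
    End Sub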
With all the above features and many more, no wonder that Microsoft Access is
today the world’s leading database system, with millions of users and hundreds of
thousands of developers benefiting from it.
8.4 REVISION POINTS
• Application programme interface
• Text Processing
• Common user interface
• Features of Ms-Word
• Features of PowerPoint
• Features of Excel
8.5 INTEXT QUESTIONS
1. Enumerate the benefits of application integration.
2. Write a short note on text processing software.
3. Explain the features of Ms-Office.
4. Explain the features of PowerPoint.
5. List down the uses of Access.
6. What is conditional formatting?
8.6 SUMMARY
Application integration allows enterprises to orchestrate a variety of
functions across their entire infrastructures, enabling businesses to operate
more effectively and efficiently.
Text processing involves computer commands which invoke content, content
changes, and cursor movement.
Microsoft Office is the most efficient suite of applications for document
creation, communication and business information analysis
Microsoft Office provides a common macro programming language for all the applications – Visual Basic for Applications (VBA).
PowerPoint software features and formatting options include a wizard that
walks you through the presentation creation process.
Microsoft Excel is a general-purpose electronic spreadsheet used to organize,
calculate, and analyze data
Microsoft Access is a relational database application which allows users to enter, manage, and run reports on a larger scale; it is most suitable for those who need to quickly organize a large amount of data.
165

8.7 TERMINAL EXERCISE


1. An _________ triggers an action or series of actions,
2. _____________ refers to the theory and practice of automating the creation or
manipulation of electronic text
3. Microsoft Office was first released in the year ______________
4. _____________ of Ms-Word the most useful for creating a thesis or writing
books or lengthy documents
5. Short cut key for opening new workbook is _______________
8.8 SUPPLEMENTARY MATERIALS
1. https://www.howtogalaxy.com/top-10-new-features-of-ms-powerpoint/
2. https://www.javatpoint.com/features-of-ms-excel
3. https://byjus.com/govt-exams/microsoft-word/
8.9 ASSIGNMENTS
1. Write down the features of Ms-Word, Ms-Excel, Ms-PowerPoint, Ms-Access.
8.10 SUGGESTED READING/REFERENCE
1. Ed Bott, Practical Microsoft Windows 2002 Professional, PHT, Microsoft
Press, Mumbai, 2002.
2. http://www.rncollegehajipur.in/rn/uploads/products/BBA%20SEM%20IV
%20MS%20Office%20by%20Dr.%20Rakesh%20Ranjan.pdf
3. https://www.comtech-networking.com/blog/item/608-10-microsoft-office-
features-you-will-love/
8.11 LEARNING ACTIVITIES
1. Prepare a word document with tables and charts.
2. Prepare a mark sheet for 30 students with conditional formatting.
3. Try to make a PPT presentation of your assignment.
4. Prepare a data base and use it with mail merge.
8.12 KEYWORDS
Data Mapping
Page set-up
Page layout
Slide transition
Inserting media
Pivot tables
Database
Reports.

LESSON - 9
DATA PROCESSING
9.1 INTRODUCTION
Data in its raw form is not useful to any organization. Data processing is the
method of collecting raw data and translating it into usable information. It is
usually performed in a step-by-step process by a team of data scientists and data
engineers in an organization. The raw data is collected, filtered, sorted, processed,
analyzed, stored, and then presented in a readable format. There are many types
and methods of data processing. Let us discuss these topics in this chapter.
9.2 OBJECTIVES
• To understand the concept of data processing
• To get an insight in the application of data processing
• To analyse the data processing cycle
• To recognize the types of data processing.
9.3 CONTENTS
9.3.1 Data Processing
9.3.2 Data Processing Cycle
9.3.3 Types of Data Processing
9.3.4 Data Processing Methods
9.3.1 DATA PROCESSING
Data processing means manipulation of data by a computer. It includes the
conversion of raw data to machine-readable form, flow of data through
the CPU and memory to output devices, and formatting or transformation of
output. Any use of computers to perform defined operations on data can be
included under data processing. In the commercial world, data processing refers to
the processing of data required to run organizations and businesses.
Data in its raw form is not useful to any organization. Data processing is the
method of collecting raw data and translating it into usable information. It is
usually performed in a step-by-step process by a team of data scientists and data
engineers in an organization. The raw data is collected, filtered, sorted, processed,
analyzed, stored, and then presented in a readable format.
Data processing is crucial for organizations to create better business strategies
and increase their competitive edge. By converting the data into a readable format
like graphs, charts, and documents, employees throughout the organization can
understand and use the data.
Processing of data is becoming a popular topic because of the various new
laws and uses associated with the data. Big companies and MNCs are collecting
data by various means, which comprise personal information, customer data,
health information, contact information, location data etc. Due to collection of this
data, there is an increasing concern over how it is collected and how it will be used.
Collecting, storing and processing sensitive information such as income, medical records, spatial information, etc. is becoming a concern worldwide. New laws
are being framed to regulate what data is collected and how it is processed and
keeping in mind the user privacy.
Applications of Data Processing
• Commercial Data Processing: Commercial data processing involves a large
volume of input data, relatively few computational operations, and a large
volume of output. For example, an insurance company needs to keep
records on tens or hundreds of thousands of policies, print and mail bills,
and receive and post payments.
• Data Analysis: In a science or engineering field, the terms data processing
and information systems are considered too broad, and the more specialized
term data analysis is typically used. Data analysis makes use of specialized
and highly accurate algorithms and statistical calculations that are less
often observed in the typical general business environment.
• Real World Applications: With the implementation of proper security
algorithms and protocols, it can be ensured that the inputs and the
processed information are safe and stored securely without unauthorized access or changes. With properly processed data, researchers can write scholarly materials and use them for educational purposes. The same can be applied to the evaluation of economic and similar areas and factors. The healthcare industry can retrieve information quickly and even save lives. Apart from that, illness details and records of treatment techniques can make finding solutions less time-consuming and help in reducing the suffering of patients.
• Almost all fields: It is impossible to think of any area which is untouched by data processing or its use, be it agriculture, manufacturing, the service industry, the meteorological department, urban planning, transportation systems, banking or educational institutions. Data processing is required everywhere, with varied levels of complexity.
9.3.2 DATA PROCESSING CYCLE
The data processing cycle consists of a series of steps where raw data (input) is
fed into a process (CPU) to produce actionable insights (output). Each step is taken
in a specific order, but the entire process is repeated in a cyclic manner. The first
data processing cycle's output can be stored and fed as the input for the next
cycle.
1. Data collection
Collecting data is the first step in data processing. Data is pulled from
available sources, including data lakes and data warehouses. It is important that
the data sources available are trustworthy and well-built so the data collected (and
later used as information) is of the highest possible quality.
The type of raw data collected has a huge impact on the output produced. Hence,
raw data should be gathered from defined and accurate sources so that the
subsequent findings are valid and usable. Raw data can include monetary figures,
website cookies, profit/loss statements of a company, user behavior, etc.
2. Data preparation
Once the data is collected, it then enters the data preparation stage. Data
preparation, often referred to as “pre-processing” is the stage at which raw data is
cleaned up and organized for the following stage of data processing. During
preparation, raw data is diligently checked for any errors. The purpose of this step
is to eliminate bad data (redundant, incomplete, or incorrect data) and begin to
create high-quality data for the best business intelligence.
Data preparation or data cleaning is the process of sorting and filtering the
raw data to remove unnecessary and inaccurate data. Raw data is checked for
errors, duplication, miscalculations or missing data, and transformed into a
suitable form for further analysis and processing. This is done to ensure that only
the highest quality data is fed into the processing unit.
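For illustration, the cleaning step described above can be sketched in a few lines of Python. The record fields (name, age) and the cleaning rules are assumptions made for this example, not a fixed standard.

# A minimal data-preparation sketch: drop duplicate and incomplete records.
raw_records = [
    {"name": "Asha", "age": 34},
    {"name": "Asha", "age": 34},     # duplicate entry
    {"name": "Ravi", "age": None},   # missing value
    {"name": "Mala", "age": 29},
]

seen = set()
clean_records = []
for record in raw_records:
    if record["age"] is None:        # eliminate incomplete data
        continue
    key = (record["name"], record["age"])
    if key in seen:                  # eliminate redundant data
        continue
    seen.add(key)
    clean_records.append(record)

print(clean_records)                 # only Asha and Mala survive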
3. Data input
The clean data is then entered into its destination and translated into a language that the system can understand. Data input is the first stage in which raw data
begins to take the form of usable information. The raw data is converted into
machine readable form and fed into the processing unit. This can be in the form of
data entry through a keyboard, scanner or any other input source.
4. Processing
During this stage, the data input to the computer in the previous stage is actually processed for interpretation. The raw data is subjected to various data processing methods, often using machine learning and artificial intelligence algorithms, to generate the desired output. This step may vary slightly from process to process depending on the source of the data being processed (data lakes, social networks, online databases, connected devices, etc.) and the intended use of the output (examining advertising patterns, medical diagnosis from connected devices, determining customer needs, etc.).
5. Data output/interpretation
The output/interpretation stage is the stage at which data is finally usable to
non-data scientists. It is translated, readable, and often in the form of graphs,
videos, images, plain text, etc. Members of the company or institution can now
begin to self-serve the data for their own data analytics projects. This output can be
stored and further processed in the next data processing cycle.
6. Data storage
The final stage of data processing is storage. After all of the data is processed,
it is then stored for future use. While some information may be put to use
immediately, much of it will serve a purpose later on. Plus, properly stored data is a
necessity for compliance with data protection legislation like General Data
Protection Regulation. When data is properly stored, it can be quickly and easily
accessed by members of the organization when needed.
The data and metadata is stored for further use. This allows for quick access
and retrieval of information whenever needed, and also allows it to be used as input
in the next data processing cycle directly.
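To tie the six stages together, here is a small, hypothetical Python sketch of one pass through the cycle. The sample readings and the simple average used as "processing" are invented for illustration only.

# One pass through the data processing cycle, in miniature.
import json

def collect():                          # 1. collection (hard-coded sample data)
    return ["120", "135", "", "128"]

def prepare(raw):                       # 2. preparation: filter out bad entries
    return [int(x) for x in raw if x.strip().isdigit()]

def process(values):                    # 3-4. input and processing
    return {"count": len(values), "average": sum(values) / len(values)}

def output(result):                     # 5. output/interpretation
    print(f"Average of {result['count']} readings: {result['average']:.1f}")

def store(result, path="result.json"):  # 6. storage for the next cycle
    with open(path, "w") as f:
        json.dump(result, f)

result = process(prepare(collect()))
output(result)
store(result)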
9.3.3. TYPES OF DATA PROCESSING
There are different types of data processing based on the source of data and
the steps taken by the processing unit to generate an output. There is no one-size-
fits-all method that can be used for processing raw data.
There are a number of methods and techniques which can be adopted for processing data, depending upon the requirements, time availability, and the software and hardware capability of the technology being used. The main types are described below.
Batch Processing
This is one of the most widely used types of data processing, also known as serial/sequential, stacked/queued or offline processing. The fundamental idea of this type of processing is that the jobs of different users are processed in the order received. Once the stacking of jobs is complete, they are sent for processing while maintaining the same order. Processing a large volume of data in this way helps reduce the processing cost, making data processing economical. Batch processing is a method where the information to be organized is sorted into groups to allow for efficient and sequential processing.
Batch processing can be defined as the concurrent, simultaneous, or sequential execution of an activity. Simultaneous batch processing occurs when jobs are executed by the same resource for all cases at the same time. Sequential batch processing occurs when jobs are executed by the same resource for different cases one immediately after another. Concurrent batch processing occurs when jobs are executed by the same resource but partially overlapping in time. Batch processing is used mostly in financial applications or in places where additional levels of security are required. In this kind of processing, the computational time is relatively low because the output is extracted by applying a function to the whole data set at once. It is able to complete work with very little human intervention.
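As a rough illustration, sequential batch processing can be sketched in Python as below; the batch size and the payroll-style calculation are assumptions made for the example.

# Sequential batch processing: jobs are stacked, then run in the order received.
hours_worked = [40, 38, 45, 40, 42, 36, 44]    # one job per employee
BATCH_SIZE = 3
RATE = 500                                      # assumed pay per hour

for start in range(0, len(hours_worked), BATCH_SIZE):
    batch = hours_worked[start:start + BATCH_SIZE]
    payroll = [hours * RATE for hours in batch]  # one function applied to the whole batch
    print(f"Batch {start // BATCH_SIZE + 1}: {payroll}")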
Real time processing
As the name suggests, this method is used for carrying out real-time processing. It is required where results must be displayed immediately or in the lowest possible time. The data fed to the software is used almost instantaneously for processing. This type of processing requires an internet connection, and data is stored and used online. No lag is expected or acceptable in this type, and the receiving and processing of transactions are carried out simultaneously. This method is costlier than batch processing because better hardware and software capabilities are needed. Examples include banking systems and ticket booking for flights, trains, movies and rental agencies. This technique can respond almost immediately to various signals to acquire and process information. It involves high maintenance and upfront costs attributed to very advanced technology and computing power. The time saved is maximum in this case, as the output is seen in real time, for example in banking transactions.
Real-time processing helps reduce the time lag between occurrence and processing to almost nil. Huge chunks of data are being poured into the systems of organizations, so storing and processing them in a real-time environment changes the scenario.
Most organizations want to have real-time insights into the data so as to
understand the environment within or outside their organization fully. This is
where the need for a system arises that would be able to handle real-time data
processing and analytics. This type of processing provides results as and when it
happens. The most common method is to take the data directly from its source,
which may also be referred to as stream, and draw conclusions without actually
transferring or downloading it. Another major technique in real-time processing is data virtualization, where meaningful information is pulled for the needs of data processing while the data remains in its source form.
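A toy Python sketch of the idea follows: each transaction is handled the moment it arrives rather than being queued for later. The account balance and the withdrawal amounts are invented for illustration.

# Real-time processing sketch: each event is handled the moment it arrives.
balance = 10000

def on_withdrawal(amount):
    global balance
    if amount > balance:
        return "DECLINED"            # the answer must come back immediately
    balance -= amount
    return f"DISPENSED {amount}, balance now {balance}"

for amount in [2500, 4000, 9000]:    # transactions arriving one by one
    print(on_withdrawal(amount))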
Online Processing
This processing method is a part of the automatic processing method. It is at times known as direct or random-access processing. Under this method, the job received by the system is processed at the same time it is received. This can be considered as, and is often mixed up with, real-time processing. This system features random and rapid input of transactions and user-defined/demanded direct access to databases/content when needed. This is a method that utilizes internet
connections and equipment directly attached to a computer. This allows the data to
be stored in one place and used at an altogether different place. Cloud
computing can be considered as an example which uses this type of processing. It
is used mainly for information recording and research.
In the parlance of today’s database systems, “online” signifies “interactive, within the bounds of patience.” Online processing is the opposite of “batch” processing. Online processing can be built out of a number of relatively simple operators, much as traditional query processing engines are built. Online analytical operations typically involve major fractions of large databases. It should, therefore, be surprising that today’s online analytical systems provide interactive performance. The secret to their success is precomputation.
In most Online Analytical Processing systems, the answer to each point and
click is computed long before the user even starts the application. In fact, many
Online processing systems do that computation relatively inefficiently, but since the
processing is done in advance, the end-user does not see the performance problem.
This type of processing is used when data is to be processed continuously, and it is
fed into the system automatically.
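The precomputation idea can be sketched as follows; the sales figures and grouping keys are assumptions made for the example.

# OLAP-style precomputation sketch: aggregates are computed once, in advance,
# so each interactive "point and click" is just a dictionary lookup.
from collections import defaultdict

sales = [("North", "Q1", 120), ("North", "Q2", 150),
         ("South", "Q1", 90), ("South", "Q2", 110)]

totals = defaultdict(int)
for region, quarter, amount in sales:   # done long before any query arrives
    totals[(region, quarter)] += amount
    totals[(region, "ALL")] += amount   # roll-up by region

# Later, each user click resolves instantly:
print(totals[("North", "Q2")])    # 150
print(totals[("South", "ALL")])   # 200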
Distributed Processing
This method is commonly utilized by remote workstations connected to one big
central workstation or server. ATMs are good examples of this data processing
method. All the end machines run on a fixed software located at a particular place
and make use of exactly same information and sets of instruction.
Multiprocessing
This is perhaps the most widely used type of data processing. It is used almost everywhere and forms the basis of all computing devices relying on processors. Multiprocessing makes use of more than one CPU. The task or set of operations is divided between the available CPUs simultaneously, thus increasing efficiency and throughput. The jobs which need to be performed are broken down and sent to different CPUs working in parallel within the mainframe. The result and benefit of this type of processing are a reduction in the time required and an increase in output. Moreover, the CPUs work independently; since they are not dependent on one another, the failure of one CPU does not halt the complete process, as the other CPUs continue to work. Examples include the processing of data and instructions in computers, laptops, mobile phones, etc.
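A minimal sketch using Python's standard multiprocessing module is given below; the workload (squaring numbers) merely stands in for any task that can be divided among CPUs.

# Multiprocessing sketch: one task divided among several CPUs.
from multiprocessing import Pool

def work(n):
    return n * n                  # stand-in for a CPU-heavy operation

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # four workers running in parallel
        results = pool.map(work, range(10))  # the job is split among them
    print(results)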
Time sharing
Time-based use of the CPU is the core of this data processing type. A single CPU is used by multiple users. All users share the same CPU, but the time allocated to each user may differ. Processing takes place at different intervals for different users as per the allocated time. Since multiple users can use it, this type is also referred to as a multi-access system. This is done by providing each user a terminal linked to the main CPU, and the available CPU time is divided between all the users as scheduled.
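The scheduling idea can be simulated in a few lines of Python; the time slice and the user jobs are invented for illustration.

# Time-sharing sketch: one CPU, several users, fixed time slices (round robin).
from collections import deque

jobs = deque([("user1", 5), ("user2", 3), ("user3", 4)])  # (user, work units left)
SLICE = 2                                                  # CPU time per turn

while jobs:
    user, remaining = jobs.popleft()
    done = min(SLICE, remaining)
    print(f"CPU runs {user} for {done} unit(s)")
    if remaining - done > 0:
        jobs.append((user, remaining - done))  # rejoin the queue for the next turn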
Commercial Data Processing
Commercial data processing means applying standard relational databases, and it includes the usage of batch processing. It involves providing huge data as input into the system and creating a large volume of output while using fewer computational operations. It basically combines commerce and computers to make the data useful for a business. The data that is processed through this system is usually standardized and therefore has a much lower chance of errors.
Much manual work is automated through the use of computers to make it easy and error-proof. Computers are used in business to take raw data and process it into a form of information that is useful to the business. Accounting programs are prototypical examples of data processing applications. Information Systems (IS) is the field that studies such organizational computer systems.
Scientific Data Processing
Unlike commercial data processing, scientific data processing involves a large
use of computational operations but lower volumes of inputs as well as outputs.
The computational operations include arithmetical and comparison operations. In
this type of processing, any chance of error is not acceptable, as it would lead to wrongful decision-making. Hence the process of validating, sorting, and
standardizing the data is done very carefully, and a wide variety of scientific
methods are used to ensure no wrong relationships and conclusions are reached.
This takes a longer time than commercial data processing. Common examples of scientific data processing include processing, managing, and distributing science data products; facilitating the scientific analysis of algorithms, calibration data, and data products; and maintaining all software and calibration data under strict configuration control.
The types of data processing and their uses can be summarized as follows:
• Batch Processing: Data is collected and processed in batches. Used for large amounts of data. Eg: payroll system.
• Real-time Processing: Data is processed within seconds when the input is given. Used for small amounts of data. Eg: withdrawing money from an ATM.
• Online Processing: Data is automatically fed into the CPU as soon as it becomes available. Used for continuous processing of data. Eg: barcode scanning.
• Multiprocessing: Data is broken down into frames and processed using two or more CPUs within a single computer system. Also known as parallel processing. Eg: weather forecasting.
• Time-sharing: Allocates computer resources and data in time slots to several users simultaneously.
9.3.4 DATA PROCESSING METHODS
There are three main data processing methods - manual, mechanical and
electronic.
Manual Data Processing
Data is processed manually without using any machine or tool to get the
required results. In manual data processing, all the calculations and logical
operations are performed manually on the data. Similarly, data is transferred
manually from one place to another. This method of data processing is very slow,
and errors may also occur in the output. Data is still processed manually in many small business firms as well as government offices and institutions. In an educational institute, for example, mark sheets, fee receipts, and other financial calculations (or transactions) are performed by hand. The entire process of data collection, filtering, sorting, calculation and other logical operations is done with human intervention, without the use of any electronic device or automation software. It is a low-cost method and requires few tools, but it produces many errors, incurs high labour costs and consumes a lot of time.
This method is avoided as far as possible because it is error-prone, labour-intensive and very time-consuming. This type of data processing belongs to the very primitive stage, when technology was either not available or not affordable. With the advancement of technology, dependency on manual methods has drastically decreased. Manual processing is also expensive and requires large manpower depending on the amount of data to be processed. An example is the sale of goods in a small shop.
Mechanical Data Processing
Data is processed mechanically through the use of devices and machines.
These can include simple devices such as calculators, typewriters, printing press,
etc. Simple data processing operations can be achieved with this method. This
method of data processing is faster and more accurate than manual data
processing. These devices are faster than the manual mode but still belong to the early stages of data processing. With the invention and evolution of more complex machines with better computing power, this type of processing also started fading away. Examination boards and printing presses use mechanical data processing devices frequently. Any device which facilitates data processing can be considered under this category. The output from this method is still very limited. It produces far fewer errors than manual data processing, but the growth of data has made this method more complex and difficult.
Electronic Data Processing
Data is processed with modern technologies using data processing software
and programs. A set of instructions is given to the software to process the data and
yield the output. Electronic data processing, also known as EDP, is a frequently used term for automatic information processing. It uses computers to collect, record, manipulate, classify and summarize data. EDP can be described as the processing of data using electronic means such as computers, calculators, servers and other similar electronic data processing equipment. A
computer is the best example of an EDP system. Use of a data processing
system ensures accurate and rapid data processing. This method is the most
expensive but provides the fastest processing speeds with the highest reliability and
accuracy of output.
Examples of Data Processing
Data processing occurs in our daily lives whether we may be aware of it or not.
Here are some real-life examples of data processing:
• A stock trading software that converts millions of stock data into a simple
graph
• An e-commerce company uses the search history of customers to
recommend similar products
• A digital marketing company uses demographic data of people to strategize
location-specific campaigns
• A self-driving car uses real-time data from sensors to detect if there are
pedestrians and other cars on the road
Different types of output files obtained as “processed” data
• Plain text file – These constitute the simplest form of processed data. Most
of these files are user readable and easy to comprehend. Very negligible or
no further processing is required in these types of files. These are exported as
notepad or WordPad files.
• Table/ spreadsheet – This file format is most suitable for numeric data.
Having digits in rows and columns allows the user to perform various
operations. For example, filtering and sorting in ascending/descending order make the data easy to understand and use. Various mathematical operations can
be applied when using this file output.
• Charts & Graphs – Option to get the output in the form of charts and
graphs is handy and now forms standard features in most of the software.
This option is beneficial when dealing with numerical values reflecting
trends and growth/decline. Ample charts and graphs are available to match diverse requirements. At times, a situation arises where a user-defined option is needed. In case no inbuilt chart or graph is available, the option to create your own charts, i.e., custom charts/graphs, comes in handy.
• Maps/Vector or image file – When dealing with spatial data the option to
export the processed data into maps, vector and image files is of great use.
Having the information on maps is of particular use for urban planners who
work on different types of maps. Image files are obtained when dealing with
graphics and do not constitute any human readable input.
• Other formats/ raw files – These are the software specific file formats
which can be used and processed by specialized software. These output files
may not be a complete product and may require further processing. Thus, some steps may need to be performed multiple times.
9.4 REVISION POINTS
• Data processing
• Application of data processing
• Data processing cycle
• Types of data processing
• Methods of data processing
9.5 INTEXT QUESTIONS
1. Enumerate the application of data processing.
2. List-down the steps in data processing cycle and explain.
3. Write a short note on commercial data processing.
4. What do you mean by scientific data processing?
5. Explain the methods of data processing.
9.6 SUMMARY
• Processing of data is becoming a popular topic because of the various new laws and uses associated with the data.
• The data processing cycle consists of a series of steps where raw data (input) is fed into a process (CPU) to produce actionable insights (output).
• In manual data processing, all the calculations and logical operations are performed manually on the data. Similarly, data is transferred manually from one place to another.
9.7 TERMINAL EXERCISE
1. __________________ often referred to as “pre-processing” is the stage at which
raw data is cleaned up
2. ___________________ can be defined as concurrent, simultaneous, or
sequential execution of an activity.
3. ____________________ is required where the results are displayed immediately
or in lowest time possible.
4. Example for distributed processing is ______________
5. A set of instructions is given to the software to process the data and yield
output is called _________________
9.8 SUPPLEMENTARY MATERIALS
1. https://www.jigsawacademy.com/blogs/data-science/types-of-data-
processing/
2. https://www.talend.com/resources/what-is-data-processing/
9.9 ASSIGNMENTS
1. Explain the various types of data processing.
9.10 SUGGESTED READING/REFERENCE
1. https://www.simplilearn.com/what-is-data-processing-article
2. Ramesh Behi, Information Technology for Management, Tata McGraw Hill,
New Delhi, 2012.
9.11 LEARNING ACTIVITIES
1. Identify the type of data processing that you face in various departments of
banks, colleges, schools and government offices. Have a discussion about that type among your friends and teachers.
9.12 KEYWORDS
Batch processing
Real-time processing
Online processing
Distributed processing.
Time sharing
Multiprocessing
Image file.
LESSON - 10
HIERARCHY OF DATA
10.1 INTRODUCTION
Data is the most important input of an organization. It has to be processed to
obtain information. The importance of data processing is now realised, as it is necessary to keep every bit of collected data in order. This makes the data easy to use, store and work with. Moreover, digitized data can be accessed anywhere via email, the cloud and other data storage apps and devices.
10.2 OBJECTIVES
• To understand the hierarchy of data
• To realize the data processing system and its types
• To realize the importance of data processing systems
10.3 CONTENTS
10.3.1 Hierarchy of Data
10.3.2 Data Processing Systems
10.3.3 Types of Data Processing Systems
10.3.4 Management of Data Processing System in Business Organization
10.3.5 Importance of Data Processing
10.3.1 HIERARCHY OF DATA
Data are the principal resources of an organization. Data stored in computer
systems form a hierarchy extending from a single bit to a database, the major
record-keeping entity of a firm. Each higher rung of this hierarchy is organized from
the components below it.
Data are logically organized into:
• Bits (characters)
• Fields
• Records
• Files
• Databases
Bit (Character) - a bit is the smallest unit of data representation (value of a bit
may be a 0 or 1). Eight bits make a byte which can represent a character or a
special symbol in a character code.
Field - a field consists of a grouping of characters. A data field represents an
attribute (a characteristic or quality) of some entity (object, person, place, or event).
Record - a record represents a collection of attributes that describe a real-
world entity. A record consists of fields, with each field describing an attribute of
the entity.
File - a group of related records. Files are frequently classified by the
application for which they are primarily used (employee file). A primary key in a
file is the field (or fields) whose value identifies a record among others in a data file.
Database - is an integrated collection of logically related records or files. A
database consolidates records previously stored in separate files into a common
pool of data records that provides data for many applications. The data is managed
by systems software called database management systems (DBMS). The data stored
in a database is independent of the application programs using it and of the types
of secondary storage devices on which it is stored.
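The hierarchy can be made concrete with a small Python sketch; the employee fields and values are assumptions made for illustration.

# The data hierarchy in miniature: characters -> fields -> records -> file -> database.
field = "E101"                                    # a field: a grouping of characters
record = {"emp_id": "E101", "name": "Asha",       # a record: attributes of one entity
          "dept": "Sales"}
employee_file = [                                 # a file: a group of related records
    record,
    {"emp_id": "E102", "name": "Ravi", "dept": "Accounts"},
]
database = {"employees": employee_file,           # a database: an integrated
            "departments": [{"dept": "Sales"}]}   # collection of related files

# "emp_id" acts as the primary key: it identifies a record among the others.
by_key = {r["emp_id"]: r for r in database["employees"]}
print(by_key["E102"]["name"])                     # Ravi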
10.3.2 DATA PROCESSING SYSTEMS
A data processing system is a combination of machines, people, and
processes that for a set of inputs produces a defined set of outputs. The inputs and
outputs are interpreted as data, facts, information etc. depending on the
interpreter's relation to the system.
A data processing system may involve some combination of the following operations; a brief sketch follows the list:
• Conversion – converting data to another form or language.
• Validation – ensuring that supplied data is clean, correct and useful.
• Sorting – arranging items in some sequence and/or in different sets.
• Summarization – reducing detailed data to its main points.
• Aggregation – combining multiple pieces of data.
• Analysis – the collection, organization, analysis, interpretation and presentation of data.
• Reporting – listing detailed or summary data or computed information.
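Here is a compact, assumed example in Python that chains several of these operations together; the marks data is invented, and each step is labelled with the operation it stands for.

# A miniature data processing system: convert -> validate -> sort -> summarize -> report.
raw = ["72", "sixty", "85", "90", ""]             # data arriving as strings

marks = [int(x) for x in raw if x.isdigit()]      # conversion + validation
marks.sort(reverse=True)                          # sorting
summary = {                                       # summarization / aggregation
    "count": len(marks),
    "highest": marks[0],
    "average": sum(marks) / len(marks),
}
print("Report:", summary)                         # reporting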
10.3.3 TYPES OF DATA PROCESSING SYSTEMS
Scientific data processing
Scientific data processing "usually involves a great deal of computation
(arithmetic and comparison operations) upon a relatively small amount of input
data, resulting in a small volume of output."
Commercial data processing
Commercial data processing "involves a large volume of input data, relatively
few computational operations, and a large volume of output." Accounting programs
are the prototypical examples of data processing applications. Information Systems
(IS) is the field that studies such organizational computer systems.
Data analysis
"Data analysis is a body of methods that help to describe facts, detect
patterns, develop explanations, and test hypotheses." For example, data analysis
might be used to look at sales and customer data to "identify connections between
products to allow for cross-selling campaigns."
By service type
Transaction processing systems
A transaction process system and transaction processing are often contrasted
with a batch process system and batch processing, where many requests are all
executed at one time. The former requires the interaction of a user, whereas batch
processing does not require user involvement. In batch processing the results of
each transaction are not immediately available. Additionally, there is a delay while
the many requests are being organized, stored and eventually executed. In
transaction processing there is no delay and the results of each transaction are
immediately available. During the delay time for batch processing, errors can occur.
Although errors can occur in transaction processing, they are infrequent and
tolerated, but do not warrant shutting down the entire system.
To achieve performance, reliability and consistency, data must be readily accessible in a data warehouse, backup procedures must be in place, and a recovery process must be in place to deal with system failure, human failure, computer viruses, software application errors or natural disasters.
Information storage and retrieval systems
An information storage and retrieval system (ISRS) is a network with a built-in
user interface that facilitates the creation, searching, and modification of stored
data. An ISRS is typically a peer-to-peer (P2P) network operated and maintained
by private individuals or independent organizations, but accessible to the general
public. Some, but not all, ISRSs can be accessed from the Internet. (The largest
ISRS in the world is the Internet itself.)
Characteristics of an ISRS include lack of centralization, graceful degradation
in the event of hardware failure, and the ability to rapidly adapt to changing
demands and resources. The lack of centralization helps to ensure that
catastrophic data loss does not occur because of hardware or program failure, or
because of the activities of malicious hackers. Graceful degradation is provided by
redundancy of data and programming among multiple computers. The physical and
electronic diversity of an ISRS, along with the existence of multiple operating
platforms, enhances robustness, flexibility, and adaptability. (These characteristics
can also result in a certain amount of chaos.) In addition to these features, some
ISRSs offer anonymity, at least in theory, to contributors and users of the
information.
A significant difference between an ISRS and a database management system (DBMS) is the fact that an ISRS is intended for general public use, while a DBMS
is likely to be proprietary, with access privileges restricted to authorized entities. In
addition, an ISRS, having no centralized management, is less well-organized than a
DBMS.
Command and control systems
A command, control, and communication (C3) system is an information
system employed within a military organization. It is a general phrase that
incorporates strategic and tactical systems. Consequently, a combat direction
system, tactical data system, or warning and control system may each be
considered C3 systems. The following definitions of terms are commonly accepted
in the military context.
1. Command--The functional exercise of authority, based upon knowledge, to
attain an objective or goal.
2. Control--The process of verifying and correcting activity such that the
objective or goal of command is accomplished.
3. Communications--The ability and function of providing the necessary liaison
to exercise effective command between tactical or strategic units of
command.
4. Strategic--Broadly defined methods of accomplishing a goal.
5. Tactical--Narrowly defined methods of accomplishing objectives en route to a
strategic goal.
Process control systems
Process control systems (PCS), sometimes called industrial control systems
(ICS), function as pieces of equipment along the production line
during manufacturing that test the process in a variety of ways, and return data for
monitoring and troubleshooting. Many types of process control systems exist,
including supervisory control and data acquisition (SCADA), programmable logic
controllers (PLC), or distributed control systems (DCS), and they work to gather and
transmit data obtained during the manufacturing process.
The PCS can be a relatively simple item with a sensor, often called a primary
transducer, that receives an input, along with a controller that processes the input,
and a receiver that processes an output.
More complicated PCS devices are robotic and perform many tasks. The PCS
devices can communicate their data to a company’s enterprise resource planning
(ERP) computer application through middleware software called a manufacturing
execution system (MES).
Message switching systems
Message switching is a connectionless network switching technique where the
entire message is routed from the source node to the destination node, one hop at a
time. It was a precursor of packet switching.
Message switching treats each message as an individual unit. Before sending the
message, the sender node adds the destination address to the message. It is then
delivered entirely to the next intermediate switching node. The intermediate node
stores the message in its entirety, checks for transmission errors, inspects the
destination address and then delivers it to the next node. The process continues till
the message reaches the destination.
In the switching node, the incoming message is not discarded if the required
outgoing circuit is busy. Instead, it is stored in a queue for that route and retransmitted when the required route is available. This is called a store-and-forward network.
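The store-and-forward behaviour can be simulated in a few lines of Python; the node names and the route are invented for the example.

# Message switching sketch: the whole message is stored and forwarded hop by hop.
route = ["A", "B", "C", "D"]                # source A, destination D
message = {"dest": "D", "body": "HELLO"}

for node, next_node in zip(route, route[1:]):
    # each node stores the entire message, then forwards it to the next hop
    print(f"{node}: stored message for {message['dest']}, forwarding to {next_node}")
print(f"{route[-1]}: delivered '{message['body']}'")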
10.3.4 MANAGEMENT OF DATA PROCESSING SYSTEM IN BUSINESS ORGANIZATION
Data processing systems comprise the interaction of people, processes, and
equipment to generate usable information from raw data. Thus, data processing
system management involves the administration of the people and equipment
aspects of the system including all the processes outlined as follows:
• Data conversion—changing the data into the required format that can be
processed.
• Data cleanup—removing irregularities in the data before processing.
• Organizing—categorizing the data into sets/groups.
• Analysis—discovering and generating valuable information from the data.
• Reporting—presenting the information.
Management of the different aspects of the data processing system involves
planning where resources necessary in the system are allocated to the different
functions. Management also includes the process of organizing all the functions to
ensure seamless operation of the system. Control is an oversight role of
management and ensures that the data processing system works as expected and
delivers the output required. Control also ensures that any issues affecting the
system are realized and addressed.
A data processing system is a combination of processes that, for any given
amount of inputted data, produces a corresponding amount of outputted data. This
processing can be manual, automatic, or electronic, though most modern data
processing is done through the use of computers. As such, the management of data
usually entails highly specific and specialized knowledge concerning the programs
that collect and organize data sets. This is especially true in the age of "Big Data," a
term that refers to the modern introduction of data sets that are so large and
complex that traditional methods of storing and analyzing data are rendered
obsolete. In these cases, the data is usually organized into individualized
components that are more easily decipherable by data analysts, specialists who are
employed by businesses for the purpose of data management.
Importance of data processing in business
The importance of data processing includes increased productivity and profits, better decisions, and more accurate and reliable information. Cost reduction, ease of storage, distribution and report making, followed by better analysis and presentation, are further advantages. The need to process data is now widely realized and reflected in every field of work. Whether the work is done in a business atmosphere or for educational research purposes, data management systems are used by every business. Data processing is a multidimensional process which is involved in almost every field of human life.
Generally speaking, the term “Data Processing” is used where you have to
collect innumerable data files from different sources. You have to arrange them in a
way that is practically beneficial for the purpose for which you gathered all that material. It is a task of synchronizing data collected from different sources and converting it into an organized form. This makes it easy to understand and retrieve specific information at any time.
There are various data processing methods which include manual data
processing, mechanical data processing and electronic data processing. Data
processing is one of the most important daily tasks especially when dealing with big
data and performing data mining. All those fields where huge volumes of data can be expected, like education, banking or transportation, now realise the importance of data processing. With the emergence of fields like data science, data
analysis, big data etc. the need to process data and to understand the importance
of processing the data is crucial.
Nowadays more and more data is collected for academic, scientific research,
private and personal use, institutional use and commercial use. This collected information needs to be stored, sorted, filtered, analysed, presented and sometimes even transferred
for it to be of any use. This process can be simple or complex depending on the
scale at which collection is done and the complexity of the results which are
required to be obtained. The time consumed in obtaining the desired result depends
on the operations which need to be performed on the collected data and on the
nature of the output file required to be obtained. This problem becomes starker when dealing with very large volumes of data.
The need for processing becomes more and more critical in such cases. In
such cases, data mining and data management come into play without which
optimal results cannot be obtained. Each stage starting from collection to
presentation has a direct effect on the output and usefulness of the processed data.
Sharing a dataset with a third party must be done carefully and as per a written agreement and service agreement. This prevents data theft, misuse and loss of data.
Processed data is easy to arrange by type and information, and it saves a lot of space. This also helps in making sure that all staff and workers can learn and understand it easily. They can then apply it in their work, which would otherwise take up more time and end up producing decreased output. This can harm the interests of the business or organization.
Most businesses and fields require data for providing a good quality of service.
Collecting data and its implications are very important aspects of managing and ensuring statistical authenticity. It is particularly essential for services concerned
with financial technologies. This is so because transactions and payment details
need to be properly stored for easy access by customers as well as the company
officials upon need. Processing is not limited to computers and can be done
manually as well.
The invention of computer technology was one of the most important events of
all time. With the improvement of computer technology and ease of use, it has
become a popular technology in the hands of many. Data processing has also
become popular, with computer systems making it easier to handle; for specialised requirements, companies look for a data scientist. Data processing is a
field that has numerous applications in most fields like business, education,
healthcare, research and more. The importance is increasing with the increase in
advancement in areas like data science, machine learning, artificial intelligence,
data quality and data security. When discussing data processing, it is good to know about data collection, business intelligence, data analysis, data sources, quality information, etc.
All of the departments for which data processing is the most important daily task have much more to do in other areas of their working field as well. There are some priorities which they need to address at any cost, and then there is some pending work to complete as well. Studies show that it is common practice to waste a lot of time doing manual data processing. The modern world calls for automated data processing and the increased use of structured data. This is done by using different reliable apps so that the remaining fields can have their required manpower to complete daily tasks.
10.3.5 IMPORTANCE OF DATA PROCESSING
Data is being collected by almost everyone either knowingly or unknowingly.
Collection of data is the first step but processing of data is another vital activity.
Companies, institutes & various groups all over the world are engaged in the work
of data processing. While talking about the importance of data processing, it is equally important to be aware of the related aspects, right from the methods of data collection, the data processing cycle, the information processing cycle, the methods and types of data processing, and data presentation and analysis, up to data management best practices. Some of the many purposes that call for efficient data processing are mentioned below:
Yields better results & increases productivity
A company or organisation which possesses data, or has access to the required data, is undoubtedly at an advantage. Data is not simply numbers and tables but an advantage which an entity can hold over its competitors. Data can be processed in different forms to obtain the required information; without data it would be impossible to take a good decision. A decision taken after the data is analysed inspires confidence, as the statistics and required details are available to the group. Having access to structured data which can be used to obtain meaningful information offers any company a competitive advantage. Real-time processing of data is the foundation and backbone of many companies, such as those in the banking and record-keeping sectors.
Having analysed the data as per requirements, you gain insight into the areas in which improvements need to be made. Data visualisation and data mapping are particularly important so that key areas can be prioritized and addressed accordingly. The required area can be selected to boost productivity in terms of sales, to gain a better understanding of a topic, or for further processing of data. Similarly, you can identify the areas which require minimal intervention so that any extra workforce can be assigned to other work. Potential areas with maximum benefits can be identified, and investment can be made in those key areas to improve efficiency and profit.
Report Making is simplified:
In almost all activities, data is used heavily for the purpose of collecting certain values and making reports. Preparing a report used to take a long time when manual data processing was employed. Now the whole process of report making has been simplified and is done speedily. Once you have processed and placed all data according to a certain framework, you can gather the needed data with a few clicks. In many cases, preparing a report has become a matter of minutes. Your report looks more organised and authentic, and it contains relevant information obtained after logical processing of the data. Data presentation and analysis are also simplified and become more effective.
Speed, Accurate and more reliable
It is important to make sure that the collection of facts and figures is done speedily and without errors. When data is collected and filtered through computers, there is little or no chance of error. It is almost guaranteed that the subsequent processes will be carried out with the maximum possible accuracy: if the input data is accurate, then the output is accurate. Processing can be done at a greater speed and with higher accuracy when the right combination of software is used. Another benefit of data processing is the major advantage it gives when working in a competitive environment, where it is not uncommon for competitors to have access to the same data. Data and information of better quality are more reliable. Predictive modeling, data cleaning, data validation and batch processing are necessary for accurate data.
Storage and distribution is easy when data is processed:
When we have piles of data, we need a huge space to store it, and there is a large chance of missing information and confusion. When the data is processed through computers, you do not need an extra room to store all those hard files and papers. All of your data is processed and labelled through a completely computerized setup, so you do not get confused at any stage. It is easier to retrieve information from processed data than from unprocessed data. Having your data stored in digital form rather than in hard copies is another aspect which highlights the importance of data processing.
Cost Reduction
Data, once collected, acts as an asset for any group, and having it stored provides easy access whenever required. This eliminates the need to collect data again and again. Moreover, it is very easy and convenient to make copies of data stored in digital form. Sending or transferring the data is also much easier, which eases the use of this data for research purposes. This directly helps in cost reduction. The cost or loss which a company might incur because of a lack of information is also drastically reduced, as processed data enables it to take wise and informed decisions, again saving huge costs.
Digitization has made processes quite cost-effective. That is why students are
using computers and laptops to prepare their assignments. Some even use online
essay writing services and keep all their records in digital form instead of printed ones. Teachers are also keeping records on computers instead of taking piles of papers
with them to check every day.
Safe and secure
Having processed data in digital form fulfils another very essential requirement: keeping information secure. Since the value of data has increased over time, incidents of data theft are not unheard of. Once the data is processed, it is easier, and essential, to keep it secure. This can be done by the use of various paid and free software tools, which prevent unauthorized access to the data and thus keep it safe. You can also encrypt your data if the need is felt.
The importance of data processing is now realised, as it is necessary to keep every bit of collected data in order. This makes it easy to use, store and work with. Moreover, digitized data can be accessed anywhere via email, the cloud and other data storage apps and devices.
Other benefits and merits of Data Processing are:
1. Data processing makes it easier to validate actions, changes and transactions, and reduces the dependence on computational power for assembling them on demand from raw form. Transaction processing systems are highly dependent on real-time data processing.
2. Insurance claims can be easily handled and settled with properly processed data, which saves time for the police authorities as well. Keeping and managing health records and creating electronic health records are now possible due to batch processing and powerful, reliable data warehouses.
3. Data processing can also be made to include image processing and make it
easier to present any data to users in a readable format that is liked by
them.
4. Invoices can be easily generated for services which have been used and
make the customer experience better.
5. Data processing in the form of word processing can help in making
documents which are readable and likeable by readers, and can make them even more engaging.
Another benefit of data processing is that governments can use the processed data to save time during surveys and also provide services and
amenities to places based on the geographical and economic information. With safe
and reliable data, reports can be easily made and troubleshooting processes can be
made easier and less time-consuming.
From sorting of data to aggregation of a similar type of data, data processing
can help a lot in making everything organized and ensure a smooth workflow that
satisfies both the users of the data as well as managers of the data. With the
current trends, it is a very good field and can hope to engage more people to make
itself into a good industry.
10.4 REVISION POINTS
• Hierarchy of data
• Types of data processing system
• Data processing in Business
10.5 INTEXT QUESTIONS
1. Define: bit, byte, data, record, file, and database
2. Write a short note on data analysis.
3. Explain the information storage and retrieval system
4. Explain the importance of process control system
5. Enumerate the importance of data processing in business.
10.6 SUMMARY
• Database is an integrated collection of logically related records or files.
• Scientific data processing usually involves a great deal of computation.
• Data processing system management involves the administration of the people and equipment aspects of the system, including all the processes.
• Data processing yields better results and increases productivity.
• Storage and distribution are easy when data is processed.
10.7 TERMINAL EXERCISE
1. _____________ consists of grouping of characters.
2. _______________ is a combination of machines, people, and processes that
for a set of inputs produces a defined set of outputs
3. _____________ ensures that supplied data is clean, correct and useful.
4. In ____________ system many requests are all executed at one time
5. _____________ is categorizing the data into sets/groups.
10.8 SUPPLEMENTARY MATERIALS
1. https://www.techopedia.com/definition/707/transaction-process-system-
tps
2. http://www.gvpcew.ac.in/Material%203%20Units/4%20IT%20IRS.pdf
3. https://fas.org/man/dod-101/navy/docs/fun/part20.htm
10.9 ASSIGNMENTS
1. Explain various data processing system based on their service.
10.10 SUGGESTED READING/REFERENCE
1. Alexis Leon and Mathew Leon, Fundamentals to information Technology,
Leon Tech World, New Delhi, 2008.
2. https://www.talend.com/resources/what-is-data-processing/
3. https://www.simplilearn.com/what-is-data-processing-article
10.11 LEARNING ACTIVITIES
1. Form a team and discuss the use of information in various departments of an organization and its processing.
10.12 KEYWORDS
Primary key
Transaction processing system
Process control system
Message switching system
LESSON - 11
APPLICATION PORTFOLIO DEVELOPMENT
11.1 INTRODUCTION
Application portfolio refers to an organization’s collection of software
applications and software-based services, which it uses to attain its goals or
objectives. Managing these resources is often referred to as application portfolio
management (APM).
11.2 OBJECTIVES
• To understand the concept of application portfolio
• To get verse with programme development cycle
11.3 CONTENTS
11.3.1 Application Portfolio Development
11.3.2 Programme Development Cycle
11.3.1 APPLICATION PORTFOLIO DEVELOPMENT
Application portfolio development and tools are increasingly important as CIOs and IT managers find new ways to keep their older infrastructure in line while still building further software and applications. Although CIOs and IT leaders need to stay on top of development, they may need help, or a solution that will monitor continuously. It can be somewhat of an uphill battle to stay online and keep operations in line.
In a perfect world, applications will update themselves and everything will fall
in line – or IT departments will have an unlimited budget and an unlimited number of employees with all of the best skills. Unfortunately, that isn't really possible. The
reality is that budgets are what they are and businesses can’t continually hire new
people while keeping the older ones as well.
Application portfolio development solutions help to improve how data and
applications are managed throughout development – without going over budget or
slowing down development.
When developing your application portfolio, there are many things that you need to keep track of, and with so many fires burning, it can be difficult to do so. With application portfolio development, specialized software can be used to assess the health and effectiveness of your applications. Such tools uncover which apps are working toward your business goals, which ones need to be repaired in order to work, and which ones are just draining your budget and need to be retired. They serve to look deep into the apps and see what they are capable of doing, and what they cannot do. When it comes to application modernization, development solutions can analyze which apps will be problematic and how to bring them up to date effectively: either address individual apps and change them, or develop a new solution.
Another part of application portfolio development is application migration. You may need to change the programming language of your apps, whether to make them work better or simply to replace outdated infrastructure. If you are trying to migrate to the cloud or another storage site, these solutions can make the move that much easier.
How Application Portfolio Development Helps
Application portfolio development helps you with creating roadmaps for future
updates and new application building. This will help you to determine what
changes need to be made to your application portfolio, when to make those
changes, and what the costs will be for each and every change. You will be able to
rationalize each of those steps as well as see how things shift when you make any
changes.
Application portfolio development solutions will reduce the complexity in the
choices that you make, of course, but also the complexity in the application
portfolio itself. You will be able to easily identify which applications can be retired, which ones are redundant, and which ones are working well. When you complete these steps, your processes are streamlined and you will be able to better distribute resources as well as allocate them in a more effective way. Combined with the other benefits of application portfolio development, you will have reduced costs for the organization overall.
As you build and develop your application portfolio, you will be able to better
align your goals with the mission of your company.
Understanding portfolio development, you will be able to:
Baseline Your Portfolio: Get a fact-based assessment of your critical applications as quickly as possible.
Segment to Prioritize: Identify applications that are good candidates for changes: merges, retirements, modernizations.
Set Objectives: Understand what you need to do to get to the next level, and build a plan to get there.
Continuous Monitoring: Maintain a connection to your key metrics to track health and improvement continuously.
11.3.2 PROGRAM DEVELOPMENT PROCESS
The various stages in the development of a computer program are:
1. Problem Definition
2. Program Design
3. Coding
4. Debugging
5. Testing
6. Documentation
7. Maintenance
Problem Definition
• The first step in the process of program development is the thorough
understanding and identification of the problem for which is the program or
software is to be developed.
• In this step the problem has to be defined formally.
• All the factors like input/output, processing requirements, memory requirements, error handling, and interfacing with other programs have to be taken into consideration at this stage.
Program Design
• The next stage is the program design. The software developer makes use of
tools like algorithms and flowcharts to develop the design of the program.
o Algorithm
o Flowchart
Coding
• Once the design process is complete, the actual computer program is
written, i.e. the instructions are written in a computer language.
• Coding is generally a very small part of the entire program development process and also a less time-consuming activity in reality.
• In this process all the syntax errors i.e. errors related to spelling, missing
commas, undefined labels etc. are eliminated.
• For effective coding, some of the guidelines which are applied are listed below, with a brief illustration after the list:
o Use of meaningful names and labels of variables,
o Simple and clear expressions,
o Modularity with emphasis on making modules generalized,
o Making use of comments and indenting the code properly,
o Avoiding jumps in the program to transfer control.
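A small before-and-after illustration of these guidelines, written in Python; the interest calculation is an invented example.

# Before: unclear names, no comments, a "magic" expression.
def f(a, b, c):
    return a * b * c / 100

# After: a meaningful name, a comment, and a clear expression.
def simple_interest(principal, rate_percent, years):
    """Return simple interest for the given principal, rate and period."""
    return principal * rate_percent * years / 100

print(simple_interest(10000, 7.5, 2))   # 1500.0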
Debugging
• At this stage the errors in the programs are detected and corrected.
• This stage of program development is an important process. Debugging is
also known as program validation.
• Some common errors which might occur in programs include the following (two are illustrated after the list):
o Uninitialized variables.
o Reversing of order of operands.
o Confusion of numbers and characters.
o Inverting of conditions, e.g., jumping on zero instead of on not zero.
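Two of these errors are shown below in Python form, together with their corrections; the values are arbitrary.

# Common error 1: using a variable before initializing it.
# total = total + 5                  # NameError: 'total' was never initialized
total = 0                            # fix: initialize first
total = total + 5

# Common error 2: reversed order of operands.
balance, withdrawal = 1000, 300
# remaining = withdrawal - balance   # wrong order gives -700
remaining = balance - withdrawal     # fix: gives 700
print(total, remaining)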
Testing
• The program is tested on a number of suitable test cases.
• A test plan of the program has to be done at the stage of the program design
itself.
• This ensures a thorough understanding of the specifications.
• The most trivial and the most special cases should be identified and tested.
• It is always useful to include the maximum and minimum values of all
variables as test data, as the sketch below illustrates.
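A minimal sketch of such test data in Python, assuming a hypothetical discount rule (10% off on amounts of 1000 or more): the cases cover the trivial value, the boundary itself, and an extreme value.

    def discount(amount):
        # Hypothetical rule: 10% off on amounts of 1000 or more.
        return amount * 0.9 if amount >= 1000 else amount

    assert discount(0) == 0              # minimum / trivial case
    assert discount(999) == 999          # just below the boundary
    assert discount(1000) == 900.0       # exactly on the boundary
    assert discount(10**9) == 9 * 10**8  # very large value
    print("all test cases passed")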
Documentation
• Documentation is a very essential step in the program development.
• Documentation helps the users and the people who maintain the software.
• This ensures that future modifications, if required, can be done easily. It is
also required during redesigning and maintenance.
Maintenance
• Updating and correction of the program for changed conditions and field
experience is accounted for in maintenance.
• Maintenance becomes essential in the following situations:
o Change in specification,
o Change in equipment,
o Errors which are found during the actual execution of the program.
11.4 REVISION POINTS
• Application portfolio development
• Programme development cycle
11.5 INTEXT QUESTIONS
1. What do you mean by application portfolio?
2. State the importance of application portfolio.
3. Write a short note on coding.
4. List down the importance of documentation.
11.6 SUMMARY
Application portfolio development solutions help to improve how data and
applications are managed throughout development – without going over
budget or slowing down development.
Application portfolio development helps you create roadmaps for future
updates and new application building.
11.7 TERMINAL EXERCISE
1. Baseline portfolio means ______________
2. _____________________ is maintaining connection to your key metrics to track
health and improvement continuously.
3. ________________ is also known as program validation.
11.8 SUPPLEMENTARY MATERIALS
1. https://www.techopedia.com/definition/163/application-portfolio
2. https://fas.org/man/dod-101/navy/docs/fun/part20.htm
11.9 ASSIGNMENTS
1. Discuss the steps in programme development cycle.
11.10 SUGGESTED READING/REFERENCE
1. Ramesh Behi, Information Technology for Management, Tata McGraw Hill,
New Delhi, 2012.
2. Kumar. K and Rajkumar. S, Computer Application in Business, Tata
McGraw Hill, New Delhi, 2009.
11.11 LEARNING ACTIVITIES
1. Try to develop a programme in your own language for a problem or purpose.
11.12 KEYWORDS
Application portfolio
Algorithm
Flowchart
Coding
Debugging
LESSON - 12
FLOW CHART AND EVALUATION
12.1 INTRODUCTION
A flowchart is a type of diagram that represents a workflow or process. A
flowchart can also be defined as a diagrammatic representation of an algorithm, a
step-by-step approach to solving a task.
The flowchart shows the steps as boxes of various kinds, and their order by
connecting the boxes with arrows. This diagrammatic representation illustrates a
solution model to a given problem. Flowcharts are used in analyzing, designing,
documenting or managing a process or program in various fields.
12.2 OBJECTIVES
• To understand the purpose of flow chart
• To recognize the symbols used in flowchart
• To appreciate the factors to evaluate software
12.3 CONTENTS
12.3.1 Flowchart
12.3.2 Symbols for Creating a Flowchart
12.3.3 Intermediate & Advanced Flowchart Symbols
12.3.4 Input & Output Symbols
12.3.5 Merging & Connecting Symbols
12.3.6 Evaluation of Software Package
12.3.1 FLOWCHART
A flowchart is a diagram that depicts a process, system or computer algorithm.
They are widely used in multiple fields to document, study, plan, improve and
communicate often complex processes in clear, easy-to-understand diagrams.
Flowcharts, sometimes spelled as flow charts, use rectangles, ovals, diamonds and
potentially numerous other shapes to define the type of step, along with connecting
arrows to define flow and sequence. They can range from simple, hand-drawn
charts to comprehensive computer-drawn diagrams depicting multiple steps and
routes. If we consider all the various forms of flowcharts, they are one of the most
common diagrams on the planet, used by both technical and non-technical people
in numerous fields. Flowcharts are sometimes called by more specialized names
such as Process Flowchart, Process Map, Functional Flowchart, Business Process
Mapping, Business Process Modeling and Notation (BPMN), or Process Flow
Diagram (PFD).
12.3.2 SYMBOLS FOR CREATING A FLOWCHART
1. The Oval
An End or Beginning While Creating a Flowchart
The oval, or terminator, is used to represent the start and end of a process.
2. The Rectangle
A Step in the Flowcharting Process
The rectangle is your go-to symbol once you've started flowcharting. It
represents any step in the process you're diagramming and is the workhorse of the
flowchart diagram. Use rectangles to capture process steps like basic tasks or
actions in your process.
3. The Arrow
Indicate Directional Flow
The arrow is used to guide the viewer along their flowcharting path. And while
there are many different types of arrow tips to choose from, we recommend sticking
with one or two for your entire flowchart. This keeps your diagram looking clean,
but also allows you to emphasize certain steps in your process.
4. The Diamond
Indicate a Decision
The diamond symbolizes that a decision is required to move forward. This
could be a binary, this-or-that choice or a more complex decision with multiple
choices.
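These four symbols map directly onto program constructs. The minimal Python sketch below (an even/odd check, chosen purely for illustration) shows the mapping; the top-to-bottom flow of the code plays the role of the arrows.

    # Oval (start): execution begins here.
    number = int(input("Enter a number: "))   # an input step

    # Diamond (decision): a binary, this-or-that choice.
    if number % 2 == 0:
        print("even")    # Rectangle (process step) on one branch
    else:
        print("odd")     # Rectangle (process step) on the other branch
    # Oval (end): execution stops here.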
12.3.3 INTERMEDIATE & ADVANCED FLOWCHART SYMBOLS
As we know, flowcharts are made up of a sequence of actions, data, services,
and/or materials. They illustrate where data is being input and output, where
information is being stored, what decisions need to be made, and which people
need to be involved. In addition to the basic flowchart conventions, rules, and
symbols, these intermediate flowchart symbols will help you describe your process
with even more detail.
Document Symbols
Single and multiple document icons show that there are additional points of
reference involved in your flowchart. These might be used to indicate items like
“create an invoice” or “review testing paperwork.”
Data Symbols
Data symbols clarify where the data your flowchart references is being stored.
12.3.4 INPUT & OUTPUT SYMBOLS
Input and output symbols show where and how data is coming in and out
throughout your process.
12.3.5 MERGING & CONNECTING SYMBOLS
Merging and connector symbols make it easier to connect flowcharts that span
multiple pages.
Additional Useful Flowchart Symbols
Below are a few additional symbols that demonstrate flowcharting competence
when put to good use.
12.3.6 EVALUATION OF SOFTWARE PACKAGE
A software evaluation is a type of assessment that seeks to determine if
software or a combination of software programs is the best possible fit for the needs
of the organisation. The idea is to look closely at the resources and tools provided
by the software that is either currently in use or is being examined as a possible
addition to programs already in use by that organisation. Based on a prepared list
of criteria along with some practical experimentation, a software evaluation makes
it possible to determine if the products would be helpful to the organisation or if
some other combination of software products would serve to better advantage.
There are several factors to consider with any software evaluation. One has to
do with compatibility of the software with the hardware resources already in place
on the client’s network or computer equipment. Here, the focus is on the type
of operating system the software requires in order to function, as well as the
amount of memory and capacity that the hardware currently provides. This is
particularly important if there is no budget for hardware and memory upgrades
that would accommodate the software under consideration.
Another key factor in software evaluation is how well the proposed software
package will interact with other applications already in place. For example, if a
proposed word processing program were found to be unable to easily import and
export data from the sales database currently in use by the company, this would
mean additional time spent preparing mailing pieces to customers and prospects.
In like manner, if software used to process customer orders will not download to
the accounting software, this can add more manual steps to the preparation of
invoices. The right combination of software programs can streamline essential
functions, allowing employees more time to devote to other activities that help to
enhance the process of revenue generation.
Software evaluation is necessary to make sure that all software used by an
individual or business is actually increasing the efficiency of the operation rather
than creating additional work loads. While individuals and companies can conduct
this type of evaluation on their own, there are also consultants who can engage in
product and software assessment for a client, making suggestions for any changes
or additions that would be in the best interests of the client. This approach can
often uncover issues that would be overlooked otherwise, ultimately saving the
company a great deal of money in terms of labor and other types of operational
costs.
Software Evaluation: Criteria-based Assessment
The assessment involves checking whether the software, and the project that
develops it, conforms to various characteristics or exhibits various qualities that are
expected of sustainable software. The more characteristics that are satisfied, the
more sustainable the software. Please note that not all qualities have equal weight
e.g. having an OSI-approved open source licence is of more importance than
avoiding TAB characters in text files.
In performing the evaluation, you may want to consider how different user
classes affect the importance of the criteria. For example, for Usability-
Understandability, a small set of well-defined, accurate, task-oriented user
documentation may be comprehensive for Users but inadequate for Developers.
Assessments specific to user classes allow the requirements of these specific user
classes to be factored in and so, for example, show that a project rates highly for
Users but poorly for Developers, or vice versa.
Scoring can also be affected by the nature of the software itself e.g. for
Learnability one could envisage an application that has been well-designed, offers
context-sensitive help etc. and consequently is so easy to use that tutorials aren’t
needed. Portability can apply to both the software and its development
infrastructure e.g. the open source software OGSA-DAI2 can be built, compiled and
tested on Unix, Windows or Linux (and so is highly portable for Users and User-
Developers). However, its Ruby test framework cannot yet run on Windows, so
running integration tests would involve the manual setup of OGSA-DAI servers (so
this is far less portable for Developers and, especially, Members).
The assessment criteria are grouped as follows. Each note asks: to what
extent is/does the software…?
Usability
• Understandability: Easily understood?
• Documentation: Comprehensive, appropriate, well-structured user
documentation?
• Buildability: Straightforward to build on a supported system?
• Installability: Straightforward to install on a supported system?
• Learnability: Easy to learn how to use its functions?
Sustainability and maintainability
• Identity: Project/software identity is clear and unique?
• Copyright: Easy to see who owns the project/software?
• Licencing: Adoption of appropriate licence?
• Governance: Easy to understand how the project is run and the
development of the software managed?
• Community: Evidence of current/future community?
• Accessibility: Evidence of current/future ability to download?
• Testability: Easy to test correctness of source code?
• Portability: Usable on multiple platforms?
• Supportability: Evidence of current/future developer support?
• Analysability: Easy to understand at the source level?
• Changeability: Easy to modify and contribute changes to developers?
• Evolvability: Evidence of current/future development?
• Interoperability: Interoperable with other required/related software?
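Since the criteria carry unequal weights, an overall score is usually computed as a weighted sum. The short Python sketch below shows the idea; the chosen weights and scores are entirely hypothetical.

    # Hypothetical weights: licencing matters more than, say, portability.
    weights = {"usability": 3, "licencing": 5, "portability": 2, "testability": 2}

    # Hypothetical assessor scores for one package, each out of 10.
    scores = {"usability": 7, "licencing": 10, "portability": 4, "testability": 6}

    weighted_total = sum(weights[c] * scores[c] for c in weights)
    maximum = sum(w * 10 for w in weights.values())
    print(weighted_total, "/", maximum)   # prints 91 / 120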
Software Evaluation: Tutorial-based Assessment
A tutorial-based evaluation is a reflection of your subjective experiences in
learning, building, installing, configuring and using the client’s software. The result
is a pragmatic and candid report based on your experiences, explaining what you
achieved with the software, following its documentation and using the related
resources available e.g. web pages, tutorials, e-mail archives etc. It also includes
the issues and shortcomings you observed, and can also include suggested
workarounds and fixes for these issues. As a result, your experiences, when written
up, can serve the clients as a practical guide for getting the software to work.
Your evaluation centres around carrying out typical tasks using the software.
The nature of the tasks will depend upon the software and you should ask the
client about any tasks they particularly want the evaluation to consider. Ideally,
these should be representative of the core functionality of the software and its
intended purpose. They should also cover the tasks of getting into a position
whereby the software can be used e.g. downloading and installing it for users, or,
for developers, setting up a development environment and getting the source code.
If you’ve not already explicitly agreed upon the user classes from whose perspective
the evaluation will be undertaken, then the tasks will implicitly determine these:
• User evaluations evaluate software from a user’s perspective. This
evaluation focuses on the usability of the software as-is without the need
for writing code. This can include using a web portal, a command-line tool,
or a graphical user interface.
• User-developer evaluations evaluate software from the perspective of a
user who is developing against APIs offered by the software and which
allow users to develop and use their own components with the software.
This includes how easy it is to install, configure and use the software, set
up a development environment, and develop and use components.
• Developer evaluations evaluate software from the perspective of someone
who wishes to change the software itself e.g. to fix bugs, add features or
recode components. An example of such a developer would be a new
developer joining the original developers of the software.
• Member evaluations are the same as developer evaluations in that they
evaluate software from the perspective of someone who wishes to change
the existing software. However, their scope may be beyond that of a
developer evaluation and cover certain resources not available to
developers. This might apply for example, to a project that releases their
code as open source but has many project resources available only to
members of their project e.g. test scripts or additional source code.
Your evaluation consists of doing the tasks, following any available
documentation in a logical (or suggested) fashion to achieve eventual successful use
of the software. Successful use is defined as being able to execute the task(s)
successfully. Specific directions about evaluations from each of these perspectives
are given below.
You should agree at the outset with the client how you should ask for help and
report issues and bugs. Ideally, you'll both agree to use the same mechanisms as
offered to actual users: if, for example, users must e-mail issues, then do that; if
developers use an issue tracker, then ask for access to do that. This allows you
both to assess how the client handles support, but also ensures that the client
becomes aware of problems as they occur and, if bug/issue trackers are used, that
the client has these logged.
12.4 REVISION POINTS
• Flowchart symbols
• Programme evaluation criteria
12.5 INTEXT QUESTIONS
1. What are the symbols for creating a flow chart?
2. What is the purpose of rectangle box in a flow chart?
3. What are the symbols used for data representation?
4. What do you mean by criteria based evaluation of software?
5. What do you mean by tutorial based evaluation of software?
12.6 SUMMARY
A flowchart is a diagram that depicts a process, system or computer
algorithm.
They are widely used in multiple fields to document, study, plan, improve
and communicate often complex processes in clear, easy-to-understand
diagrams
A software evaluation is a type of assessment that seeks to determine if
software or a combination of software programs is the best possible fit for
the needs of the organization
Criteria-based Assessment involves checking whether the software, and the
project that develops it, conforms to various characteristics or exhibits
various qualities that are expected of sustainable software
A tutorial-based evaluation is a reflection of your subjective experiences in
learning, building, installing, configuring and using the client’s software
12.7 TERMINAL EXERCISE
1. _______________ symbol is used for connecting the flowcharts.
2. A diamond symbol represents _________________
3. A ___________________ is a reflection of your subjective experiences
4. _______________ and ________________ are the variables for criteria based
evaluation
12.8 SUPPLEMENTARY MATERIALS
1. https://www.techopedia.com/definition/163/application-portfolio
2. https://www.gliffy.com/blog/guide-to-flowchart-symbols
12.9 ASSIGNMENTS
1. Discuss the advantages and disadvantages of various programme evaluation
methods
12.10 SUGGESTED READING/REFERENCE
1. Alexis Leon and Mathew Leon, Fundamentals to information Technology,
Leon Tech World, New Delhi, 2008.
2. Parameshwaran. R, Computer Application in Business, S Chand Company
Ltd. New Delhi, 2004.
12.11 LEARNING ACTIVITIES
1. Develop a flowchart for getting five numbers as input and finding the biggest
among the five.
12.12 KEYWORDS
Flowcharts
Usability
Sustainability
Maintainability
LESSON - 13
NETWORKING - INTRODUCTION
13.1 INTRODUCTION
Networking refers to connecting computers electronically for the purpose of
sharing information. Resources such as files, applications, printers and software
are commonly shared in a network. The advantage of networking can be seen
clearly in terms of security, efficiency, manageability and cost effectiveness, as it
allows collaboration between users over a wide range. Basically, a network
consists of hardware components such as computers, hubs, switches, routers and
other devices which form the network infrastructure. These are the devices that
play an important role in data transfer from one place to another using different
technologies such as radio waves and wires. Let us discuss some concepts of
networking in this chapter.
13.2 OBJECTIVES
• To understand the basics of networking
• To learn about the components of networking
• To gain knowledge on benefits of networking
13.3 CONTENTS
13.3.1 Networking and its Components
13.3.2 Benefits of Networks
13.3.1 NETWORKING AND ITS COMPONENTS
A computer network consists of two or more computing devices that are
connected in order to share the components of a network, its resources and the
information you store there. The most basic computer network (which consists of
just two connected computers) can expand and become more usable when
additional computers join and add their resources to those being shared.
The first computer, yours, is commonly referred to as your local computer. It is
more likely to be used as a location where you do work, a workstation, than as a
storage or controlling location, a server. As more and more computers are
connected to a network and share their resources, the network becomes a more
powerful tool, because employees using a network with more information and more
capability are able to accomplish more through those added computers or
additional resources.
The real power of networking computers becomes apparent if you envision
your own network growing and then connecting it with other distinct networks,
enabling communication and resource sharing across both networks. That is, one
network can be connected to another network and become a more powerful tool
because of the greater resources. For example, you could connect the network you
and your classmates develop for this course to similarly constructed networks from
other introductory networking classes if you wanted them to share your information
and networked resources. Those classes could be within your own school, or they
could be anywhere in the world. Wherever that newly joined network is, the
communication and resource sharing activities in that new network could then be
shared with anyone connected to your network. All you have to do is join that new
network’s community or allow its members to join yours.
In addition, a company’s cost of doing business can be reduced as a result of
sharing data and resources. Instead of having individual copies of the data at
several locations around the company, and needing to keep all of them similarly
updated, a company using a network can have just one shared copy of that data
and share it, needing to keep only that one set of data updated. Furthermore,
sharing networked resources (like printers) means that more people can use a
particular resource and a wider variety of resources (like different printers) can be
used by each network user. Any time a company can do more with less, or buy
fewer items to do the same job, its total costs are reduced, and it is able to make
more money than it spends.
Components of network
Networks are made up of various devices—computers, switches, routers—
connected together by cables or wireless signals. Understanding the basics of how
networks are put together is an important step in building a network in a
community or neighborhood.
The basic components are as follows:
• Clients and servers—how services such as e-mail and web pages connect
using networks.
• IP addresses—how devices on a network can be found.
• Network hubs, switches and cables—the hardware building blocks of any
network.
• Routers and firewalls—how to organize and control the flow of traffic on a
network.
Clients and Servers
An important relationship on networks is that of the server and the client. A
server is a computer that holds content and services such as a website, a media
file, or a chat application. A good example of a server is the computer that holds the
website for Google’s search page: http://www.google.com. The server holds that
page, and sends it out when requested.
A client is a different computer, such as your laptop or cell phone, that
requests to view, download, or use the content. The client can connect over a
network to exchange information. For instance, when you request Google’s search
page with your web browser, your computer is the client.
In the example below, two computers are connected together with an Ethernet
cable. These computers are able to see each other and communicate over the cable.
The client computer asks for a website from the server computer. The website is
delivered from the server, and displayed on the client’s web browser.
Most requests and content delivery on networks are similar to, or are based
on, a client to server relationship. On a network, the server can be located almost
anywhere, and if the client has the address, it can access the content on the server.
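To make the request/response exchange concrete, here is a minimal sketch using Python's standard library (Python 3.8 or later). The page content, the loopback address 127.0.0.1 and port 8080 are arbitrary choices for illustration, not part of any real website.

    import socket, threading

    # The server holds the content and sends it out when requested.
    server_socket = socket.create_server(("127.0.0.1", 8080))

    def serve_one_request():
        connection, _ = server_socket.accept()
        connection.recv(1024)                       # read the client's request
        connection.sendall(b"<html>hello</html>")   # deliver the page
        connection.close()

    threading.Thread(target=serve_one_request).start()

    # The client connects over the network and asks for the content.
    with socket.create_connection(("127.0.0.1", 8080)) as client:
        client.sendall(b"GET / HTTP/1.0\r\n\r\n")
        print(client.recv(1024))                    # prints b'<html>hello</html>'

    server_socket.close()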
IP Addresses
In order to send and direct data across a network, computers need to be able
to identify destinations and origins. This identification is an IP (Internet Protocol)
address. An IP address is just a set of four numbers, each between 0 and 255,
separated by dots. An example of an IP address is 173.194.43.7.
An IP address is similar to a street address. Parts of the address describe
where in the world the building is located, another part narrows it down to a state
or city, then the area within that state or city, then the location on the street.
Below we can see 192.168.1 Street. On it are three houses:
The complete addresses for each of these houses are: 192.168.1.20,
192.168.1.21, and 192.168.1.22.
There are different classifications, or types, of IP addresses. A network can be
public, or it can be private. Public IP addresses are accessible anywhere on the
Internet. Private IP addresses are not, and most are typically hidden behind a
device with a public IP address.
Here we can see an example: a street with two buildings with public IP
addresses, representing computers with addresses that are visible to the entire
Internet. These buildings might be anywhere in the world, but their addresses are
complete, so we know exactly where they are and can send messages to them.
To see an example of how public and private IP addresses are commonly used,
let’s take another look at 192.168.1 Street. We have a new building on the street.
That building has a public IP address, and a private IP address. There is also a
fence that blocks the rest of the Internet from seeing and passing messages to
addresses on the street.
The postal building controls messages that travel between the Internet and the
street, keeping track of messages that leave the street, and directs return messages
to the right house. On the street, it has the address 192.168.1.1, and on the
Internet it has the address 74.10.10.50.
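The distinction between public and private addresses can be checked programmatically. This minimal Python sketch uses the standard ipaddress module to classify the example addresses used above.

    import ipaddress

    # Classify the example addresses from this section.
    for text in ["173.194.43.7", "192.168.1.20", "74.10.10.50"]:
        address = ipaddress.ip_address(text)
        kind = "private" if address.is_private else "public"
        print(text, "is a", kind, "address")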
Network Hubs and Switches
Traditionally, computers are connected to each other using cables, creating a
network. The cable used most often is Ethernet, which consists of four pairs of
wires inside of a plastic jacket. It is physically similar to phone cables, but can
transport much more data.
But cables and computers alone do not make a good network, so one early
solution was to use a network hub. The Ethernet cables from the computers
connect to the device much as all of the spokes of a bike wheel come together in
the center hub.
An example of how a hub works is shown below. Computer A wants to send a
message to computer B. It sends the message through the Ethernet cable to the
hub, then the hub repeats the message to all of the connected computers.
A network using a hub can slow down if many computers are sending
messages, since they may try and send messages at the same time and confuse the
hub. To help with this problem, networks began to use another device called
a switch. Instead of repeating all messages that come in, a switch only sends the
message to the intended destination. This eliminates the unnecessary repetition of
the hub.
Using a switch, computer A sends a message to computer B; the other
computers do not see the message. Those computers can send other messages at
the same time without interfering.
Switches do have a limitation though: they only know about the addresses of
equipment that is plugged directly into them. So, you can only send messages to a
small number of devices, however many ports the switch has! If you need to send a
message to a computer on another network, it will need to be sent through a router,
which we discuss next.
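The difference between the two devices can be summarized in a few lines of Python. This is only a toy model of the forwarding behaviour, not real network code.

    # A hub repeats every message to all connected computers.
    def hub_send(computers, sender, message):
        for name in computers:
            if name != sender:
                print(name, "receives:", message)

    # A switch forwards the message only to its intended destination.
    def switch_send(computers, sender, receiver, message):
        if receiver in computers and receiver != sender:
            print(receiver, "receives:", message)

    computers = ["A", "B", "C"]
    hub_send(computers, "A", "hello B")           # B and C both receive it
    switch_send(computers, "A", "B", "hello B")   # only B receives it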
Routers and Firewalls
Routers do the majority of the hard work on a network - they make the
decisions about all the messages that travel on the network, and whether to pass
messages to and from outside networks. There are three main functions:
Separate and Bridge: Routers separate networks into sections, or bridge
different networks together, as we see in the example above: the private network of
192.168.1 Street is bridged to the Internet with a public IP address.
Assign IPs: They can assign IP addresses. In the example of 192.168.1 Street,
if a new house is built on the street, it would get whatever is the next highest house
number available. In the case of routers, they assign IP addresses using DHCP
(Dynamic Host Configuration Protocol).
Firewall and Protect: They can filter messages or keep users out of private
networks. Most routers have a firewall built in. This is a software function that
keeps unwanted messages from reaching the computers on the inside, or private
part, of the network.
Let us take another look at 192.168.1 Street, and the postal service building
we included when it had a public address for the entire street. As it turns out, that
postal service building is acting as a router.
In this case, the postal service building is routing messages between the rest of
the Internet, using its public address, and the street with private addresses.
Definitions
DHCP (Dynamic Host Configuration Protocol): It assigns IP addresses to client
devices, such as desktop computers, laptops, and phones, when they are plugged
into Ethernet or connect to wireless networks.
Ethernet: A type of networking protocol; it defines the types of cables and
connections that are used to wire computers, switches, and routers together. Most
often Ethernet cabling is Category 5 or 6, made up of twisted pair wiring similar to
phone cables.
Hub: A network device that repeats the traffic it receives to all connected
devices.
Switch: A network device that sends traffic it receives to a specific connected
device, such as a single desktop computer or laptop.
Router: A network device that can bridge between different networks,
determine what traffic can pass between them, and perform other functions on a
network, such as assigning IP addresses.
Firewall: A function typically performed by routers, this filters traffic between
networks and can protect them from interference or attacks.
13.3.2 BENEFITS OF NETWORKS
Once computers were connected by networks, information sharing increased
dramatically. People found that more data helped them make better decisions, and
companies started saving money. Many original networks were designed to facilitate
communication, but they were quickly expanded as businesses noticed increased
productivity and reduced costs.
Sharing Information
The initial reason for developing most computer networks was to assist users
with sharing their increased output, especially between computers in the same
general vicinity, as shown in Figure 1.4. However, users wanted not only to share
information with others, they wanted to communicate about that information after
someone else had it, too. In addition to transmitting the user’s original information,
computer networks enabled those users to discuss what was being transmitted,
and this resulted in even more communication. Additional network
communications techniques thus came into being, such as e-mail and video
conferencing. Furthermore, with the increases in the sizes of networks, sharing no
longer had to be concerned with proximity. The use of networks has effectively
erased distance and time constraints. You can communicate almost instantly to
anywhere in the world that is connected to your network.
Networks are an effective way to communicate. Using networks, companies
can send the same information to large numbers of employees or customers quickly
and efficiently. Examples include company newsletters and announcements for
employees, as well as advertisements and purchase information for customers.
Also, individual employees are more likely to communicate with larger numbers of
individuals both inside and outside the company using e-mail, an electronic means
of communicating that is similar to mail but done on computers over networks,
usually over the Internet.
Sharing Resources
The ability to share resources was another reason networks were created, and
it is still one of the main purposes for using networks. The inevitable technology
creep extends the computer user’s involvement in technology because companies
expect employees to learn new systems as they are installed. Companies also look
for ways to make the best use of their investments by sharing the purchased
resources among multiple departments. Some of the resources that are commonly
shared over computer networks are as follows.
Peripherals
Additional components that attach to a computer, called peripherals, like
printers, scanners, and speakers, are connected to a computer to expand its use.
When there are multiple users and computers, it soon becomes apparent that the
peripheral devices are seldom fully utilized. Money can be saved if some of these
peripherals are shared, instead of having to purchase a separate set for each
computer. Networking enables the sharing of peripherals.
The ability to share printers was very often enough of a cost savings for
companies to invest in implementing and supporting a simple network. The
company could then also realize additional cost savings as it shared additional
peripheral devices, such as faxes, modems, scanners, plotters, and virtually any
other device that connects to computers. Sharing peripherals often ends up
producing significant cost savings and more than justifies the expense of adding a
network.
Storage
Large amounts of storage capacity, usually in fast, very powerful computers,
can be set up to act as storage locations for data. The access to it could be
controlled by the person storing the data. Data can be stored centrally so that it is
accessible to any user who needs it.
Applications
Cost and space savings are achieved when computer users can centrally store
their software applications. Applications, such as those used for preparing taxes,
creating text documents, or playing computer games, have grown in complexity and
size and often take up considerable local storage. Installing an application once on
a network and then sharing it cuts down on the storage space required when
multiple users need the same application.
Assisting Collaboration
Having digital information, and the ability to share it instantly with others over
networks, means that multiple people can work on the same process collectively.
Early computer users found that once they created something and sent it
out for review, the comments returned often led to important adjustments that
would improve the original product. Such collaboration assisted the widespread use
of computers because it provided a tangible benefit that businesses could associate
with the increased costs of installing computers in the first place.
Many software makers took this early form of collaboration into consideration
and added that feature to the capabilities of their software. The newest versions of
the applications included in Microsoft’s Office suite (such as Word, Access, Excel,
and PowerPoint) allow multiple users to access and make changes to the same
document at the same time. That way, all users can work together on the original
document, and changes made by any collaborating member are immediately posted
within the document.
Facilitating Centralized Management
Maintenance personnel started using networks to assist with the management
tasks associated with the network’s operation and maintenance.
Standardization and interoperability were developed, which meant computers
worked the same way and could work with each other. Having more similarities
meant lower support costs. These savings were usually due to economies of scale
brought about by buying more similar computers and obtaining a lower per-unit
cost. Companies purchase similar equipment to obtain the benefit of savings. The
network could be used to help maintain similar components and further increase
efficiency and reduce the total amount companies would spend on a particular
component over that equipment’s usable lifetime, called total cost of ownership
(TCO).
Managing Software
Using the network helps reduce software costs. Centralizing the installation of
the software also reduces operating costs because the installations can be
accomplished remotely, over the network. The computer programs that are needed
to perform the installations are stored on servers and made accessible over the
network. The maintenance personnel would then simply log on to the network from
a client computer and install the needed applications using the installation
software stored on the server.
Within the past few years, even more savings have been achieved by having
the centralized server initiate the software installations or updates on the client
computers without the need for maintenance personnel to actually visit any of the
clients.
Maintaining the Network
Purchasing similar equipment for use on the network meant that network
maintenance costs are reduced because there are fewer dissimilar components.
Maintenance workers no longer have to attend numerous training sessions on
many different components, which meant they could spend more time maintaining
the actual components.
Backing-up Data
A network minimizes the time spent backing up necessary files. In the event
of a hardware or software failure that causes information or applications to be lost,
vital information and necessary applications can be restored if sufficient backups
exist. The backup process is normally a regular activity in a company, and all
transactions between scheduled backups are recorded so that the files can be
restored as completely as possible. Technicians can access the backup files and
recorded transactions from a central location without having to physically visit the
source computers.
13.4 REVISION POINTS
• Networking definition
• Components of network
• Terms definition
• Benefits of networking
13.5 INTEXT QUESTIONS
1. Define network
2. Itemize the components of networking.
3. Write short note on IP address.
4. What do you mean by hub?
5. What are the resources shared by networking?
13.6 SUMMARY
A computer network consists of two or more computing devices that are
connected in order to share the components of a network, its resources and
the information you store there.
Networks are made up of various devices—computers, switches, routers—
connected together by cables or wireless signals
An IP address is a set of four numbers, each between 0 and 255, separated
by dots; it is used to send and direct data across a network
The initial reason for developing most computer networks was to assist
users with sharing their increased output
The ability to share resources was another reason networks were created,
and it is still one of the main purposes for using networks.
13.7 TERMINAL EXERCISE
1. ______________ is used to find device in the networking
2. __________________ are used to organize and control the flow of traffic on a
network
3. DHCP means _____________________
4. _______________ defines the types of cables and connections that are used to
wire computers, switches, and routers together
5. Additional components that attach to a computer is called _______________
13.8 SUPPLEMENTARY MATERIALS
1. https://commotionwireless.net/docs/cck/networking/learn-networking-
basics/
2. https://www3.nd.edu/~cpoellab/teaching/cse40814_fall14/networks.pdf
13.9 ASSIGNMENTS
1. Write an essay about the benefits of networking in a bank.
13.10 SUGGESTED READING/REFERENCE
1. Ramesh Behi, Information Technology for Management, Tata McGraw Hill,
New Delhi, 2012.
2. Kumar. K and Rajkumar. S, Computer Application in Business, Tata
McGraw Hill, New Delhi, 2009.
13.11 LEARNING ACTIVITIES
1. Get into a library (or any other networked premises) and have a discussion
about the networking of the library and the benefits of networking.
13.12 KEYWORDS
Networking
IP address
Router and firewall
Ethernet
Hub
Sharing resources
LESSON - 14
CLASSIFICATION OF NETWORKS
14.1 INTRODUCTION
Networking refers to connecting computers electronically for the purpose of
sharing information. Resources such as files, applications, printers and software
are commonly shared in a network. The advantage of networking can be seen
clearly in terms of security, efficiency, manageability and cost effectiveness, as it
allows collaboration between users over a wide range. In this chapter we shall
discuss the classification of networks and their features.
14.2 OBJECTIVES
• To study about the classification of networks
• To understand the merits and demerits of each network.
14.3 CONTENTS
14.3.1 Classification Based on their Geography
14.3.2 Wireless LAN
14.3.3 Bluetooth
14.3.4 Internet
14.3.5 Intranet
14.3.6 Extranet
14.3.1 CLASSIFICATION BASED ON THEIR GEOGRAPHY
Networks are frequently classified according to the geographical boundaries
the network spans. Two basic geographical designations for networks—local area
network (LAN) and wide area network (WAN)—are the most common. A third
designation, metropolitan area network (MAN), is also used, although its use has
become clouded (because it might not be a clear-cut classification anymore) as
networks continue connecting to the Internet.
LAN (Local Area Network)
• Local Area Network is a group of computers connected to each other in a
small area such as building, office.
• LAN is used for connecting two or more personal computers through a
communication medium such as twisted pair, coaxial cable, etc.
• This network usually has the highest speed components and fastest
communications equipment.
• It is less costly as it is built with inexpensive hardware such as hubs,
network adapters, and ethernet cables.
• The data is transferred at an extremely faster rate in Local Area Network.
• Local Area Network provides higher security.
MAN (Metropolitan Area Network)
• A metropolitan area network is a network that covers a larger geographic
area by interconnecting a different LAN to form a larger network.
• Government agencies use MAN to connect to the citizens and private
industries.
• In MAN, various LANs are connected to each other through a telephone
exchange line.
• The most widely used protocols in MAN are RS-232, Frame Relay, ATM,
ISDN, OC-3, ADSL, etc.
• It has a higher range than Local Area Network (LAN).
Uses of Metropolitan Area Network:
• MAN is used in communication between the banks in a city.
• It can be used in an Airline Reservation.
• It can be used in a college within a city.
• It can also be used for communication in the military.
WAN (Wide Area Network)
• A Wide Area Network is a network that extends over a large geographical
area such as states or countries.
• A Wide Area Network is a much bigger network than a LAN.
• A Wide Area Network is not limited to a single location, but it spans over a
large geographical area through a telephone line, fibre optic cable or satellite
links.
• The internet is one of the biggest WANs in the world.
• A Wide Area Network is widely used in the field of Business, government,
and education.
Examples of Wide Area Network
Mobile Broadband: A 4G network is widely used across a region or country.
Last mile: A telecom company provides internet services to customers in
hundreds of cities by connecting their homes with fiber.
Private network: A bank provides a private network that connects its 44
offices. This network is made by using a telephone leased line provided by the
telecom company.
Advantages of Wide Area Network
Following are the advantages of the Wide Area Network:
• Geographical area: A Wide Area Network covers a large geographical area.
If a branch of our office is in a different city, we can connect with it through
a WAN. The internet provides a leased line through which we can connect
with another branch.
• Centralized data: In the case of a WAN network, data is centralized.
Therefore, we do not need to buy separate email, file or backup servers for
each site.
• Get updated files: Software companies work on the live server. Therefore,
the programmers get the updated files within seconds.
• Exchange messages: In a WAN network, messages are transmitted fast. Web
applications like Facebook, WhatsApp and Skype allow you to communicate
with friends.
• Sharing of software and resources: In a WAN network, we can share
software and other resources like hard drive space and RAM.
• Global business: We can do business over the internet globally.
• High bandwidth: If we use the leased lines for our company then this gives
the high bandwidth. The high bandwidth increases the data transfer rate
which in turn increases the productivity of our company.
Disadvantages of Wide Area Network
The following are the disadvantages of the Wide Area Network:
• Security issue: A WAN network has more security issues as compared to
LAN and MAN networks, as many technologies are combined together,
which creates security problems.
• Needs Firewall & antivirus software: The data is transferred on the
internet which can be changed or hacked by the hackers, so the firewall
needs to be used. Some people can inject the virus in our system so
antivirus is needed to protect from such a virus.
• High Setup cost: The installation cost of a WAN network is high, as it
involves purchasing routers and switches.
• Troubleshooting problems: It covers a large area so fixing the problem is
difficult.
14.3.2 WIRELESS LAN
Wireless LAN stands for Wireless Local Area Network. It is also called LAWN
(Local Area Wireless Network). WLAN is one in which a mobile user can connect to a
Local Area Network (LAN) through a wireless connection.
The IEEE 802.11 group of standards defines the technologies for wireless
LANs. For path sharing, 802.11 standard uses the Ethernet protocol and CSMA/CA
(carrier sense multiple access with collision avoidance). It also uses an encryption
method i.e. wired equivalent privacy algorithm.
Wireless LANs provide high speed data communication in small areas such as
building or an office. WLANs allow users to move around in a confined area while
they are still connected to the network.
In some instances wireless LAN technology is used to save costs and avoid
laying cable, while in other cases, it is the only option for providing high-speed
internet access to the public. Whatever the reason, wireless solutions are popping
up everywhere.
Advantages of WLANs
• Flexibility: Within radio coverage, nodes can communicate without further
restriction. Radio waves can penetrate walls, senders and receivers can be
placed anywhere (also non-visible, e.g., within devices, in walls etc.).
• Planning: Only wireless ad-hoc networks allow for communication without
previous planning, any wired network needs wiring plans.
• Design: Wireless networks allow for the design of independent, small devices
which can for example be put into a pocket. Cables not only restrict users
but also designers of small notepads, PDAs, etc.
• Robustness: Wireless networks can handle disasters, e.g., earthquakes,
flood etc. whereas, networks requiring a wired infrastructure will usually
break down completely in disasters.
• Cost: The cost of installing and maintaining a wireless LAN is on average
lower than the cost of installing and maintaining a traditional wired LAN, for
two reasons. First, after providing wireless access to the wireless network via
an access point for the first user, adding additional users to a network will
not increase the cost. And second, wireless LAN eliminates the direct costs of
cabling and the labor associated with installing and repairing it.
• Ease of Use: Wireless LAN is easy to use and the users need very little new
information to take advantage of WLANs.
Disadvantages of WLANs
• Quality of Services: Quality of service in wireless LANs is typically lower
than in wired networks. The main reasons for this are the lower bandwidth
due to limitations in radio transmission, higher error rates due to
interference, and higher delay/delay variation due to extensive error
correction and detection mechanisms.
• Proprietary Solutions: Due to slow standardization procedures, many
companies have come up with proprietary solutions offering standardization
functionality plus many enhanced features. Most components today adhere
to the basic standards IEEE 802.11a or 802.11b.
• Restrictions: Several govt. and non-govt. institutions world-wide regulate
the operation and restrict frequencies to minimize interference.
• Global operation: Wireless LAN products are sold in all countries so,
national and international frequency regulations have to be considered.
• Low Power: Devices communicating via a wireless LAN typically run on
battery power and must consume it sparingly, so the LAN design should
take this into account and implement special power saving modes and
power management functions.
• License free operation: LAN operators don't want to apply for a special
license to be able to use the product. The equipment must operate in a
license free band, such as the 2.4 GHz ISM band.
• Robust transmission technology: If a wireless LAN uses radio transmission,
many other electrical devices can interfere with it (such as vacuum
cleaners, train engines, hair dryers, etc.). Wireless LAN transceivers cannot
be adjusted for perfect transmission in a standard office or production
environment.
Fundamentals of WLANs
HiperLAN
• HiperLAN stands for High performance LAN. While all of the previous
technologies have been designed specifically for an adhoc environment,
HiperLAN is derived from traditional LAN environments and can support
multimedia data and asynchronous data effectively at high rates (23.5
Mbps).
• A LAN extension via access points can be implemented using standard
features of the HiperLAN/1 specification. However, HiperLAN does not
necessarily require any type of access point infrastructure for its operation.
• HiperLAN was started in 1992, and standards were published in 1995. It
employs the 5.15 GHz and 17.1 GHz frequency bands and has a data rate of
23.5 Mbps with coverage of 50 m and mobility < 10 m/s.
• It supports a packet-oriented structure, which can be used for networks
with or without a central control (BS-MS and ad-hoc). It supports 25 audio
connections at 32kbps with a maximum latency of 10 ms, one video
connection of 2 Mbps with 100 ms latency, and a data rate of 13.4 Mbps.
• HiperLAN/1 is specifically designed to support adhoc computing for
multimedia systems, where there is no requirement to deploy a centralized
infrastructure. It effectively supports MPEG or other state of the art real time
digital audio and video standards.
• The HiperLAN/1 MAC is compatible with the standard MAC service
interface, enabling support for existing applications to remain unchanged.
• HiperLAN 2 has been specifically developed to have a wired infrastructure,
providing short-range wireless access to wired networks such as IP and
ATM.
The main differences between HiperLAN types 1 and 2 are as follows:
• Type 1 has a distributed MAC with QoS provisions, whereas type 2 has a
centralized schedule MAC.
• Type 1 is based on Gaussian minimum shift keying (GMSK), whereas type 2
is based on OFDM.
• HiperLAN/2 automatically performs handoff to the nearest access point. The
access point is basically a radio BS that covers an area of about 30 to 150
meters, depending on the environment. MANETs can also be created easily.
The goals of HiperLAN are as follows:
• QoS (to build multiservice network)
• Strong security
• Handoff when moving between local area and wide areas
• Increased throughput
• Ease of use, deployment, and maintenance
• Affordability
• Scalability
One of the primary features of HiperLAN/2 is its high speed transmission rates
(up to 54 Mbps). It uses a modulation method called OFDM to transmit analog
signals. It is connection oriented, and traffic is transmitted on bidirectional links for
unicast traffic and unidirectional links toward the MSs for multicast and broadcast
traffic.
This connection oriented approach makes support for QoS easy, which in turn
depends on how the HiperLAN/2 network interconnects with the fixed network
using Ethernet, ATM, or IP.
The HiperLAN/2 architecture shown in the figure allows for interoperation
with virtually any type of fixed network, making the technology both network and
application independent.
HiperLAN/2 networks can be deployed at "hot spot" areas such as airports and
hotels, as an easy way of offering remote access and internet services.
Home RF Technology
• A typical home needs a network inside the house for access to a public
network telephone and internet, entertainment networks (cable television,
digital audio and video with the IEEE 1394), transfer and sharing of data
and resources (printer, internet connection), and home control and
automation.
• The device should be able to self-configure and maintain connectivity with
the network. The devices need to be plug and play enabled so that they are
available to all other clients on the network as soon as they are switched on,
which requires automatic device discovery and identification in the system.
• Home networking technology should also be able to accommodate any and
all lookup services, such as Jini. Home RF products allow you to
simultaneously share a single internet connection with all of your computers
- without the hassle of new wires, cables or jacks.
• A network consists of resource providers, which are gateways to different
resources like phone lines, cable modem, satellite dish, and so on, and the
devices connected to them such as cordless phone, printers and fileservers,
and TV.
• The goal of Home RF is to integrate all of these into a single network suitable
for all applications and to remove all wires and utilize RF links in the
network suitable for all applications.
• This includes sharing PC, printer, fileserver, phone, internet connection, and
so on, enabling multiplayer gaming using different PCs and consoles inside
the home, and providing complete control on all devices from a single mobile
controller.
• With Home RF, a cordless phone can connect to PSTN but also connect
through a PC for enhanced services. Home RF makes an assumption that
simultaneous support for both voice and data is needed.
Advantages of Home RF
• In Home RF all devices can share the same connection, for voice or data at
the same time.
• Home RF provides the foundation for a broad range of interoperable
consumer devices for wireless digital communication between PCs and
consumer electronic devices anywhere in and around the home.
• The working group includes Compaq Computer Corp., Ericsson Enterprise
Networks, IBM, Intel Corp., Motorola Corp. and others.
• A specification for wireless communication in the home called the shared
wireless access protocol (SWAP) has been developed.
IEEE 802.11 Standards
IEEE 802.11 is a set of standards for the wireless area network (WLAN), which
was implemented in 1997 and was used in the industrial, scientific, and medical
(ISM) band. IEEE 802.11 was quickly implemented throughout a wide region, but
under its standards the network occasionally receives interference from devices
such as cordless phones and microwave ovens. The aim of IEEE 802.11 is to
provide wireless network connections for fixed, portable, and moving stations
within ten to hundreds of meters with one medium access control (MAC) and
several physical layer specifications. The major protocols in the family include
IEEE 802.11a, 802.11b, 802.11g and 802.11n; their most significant differences lie
in the specification of the PHY layer.
14.3.3 BLUETOOTH
Bluetooth is one of the major wireless technologies developed to achieve WPAN
(wireless personal area network). It is used to connect devices of different functions
such as telephones, computers (laptop or desktop), notebooks, cameras, printers,
and so on.
Architecture of Bluetooth
• Bluetooth devices can interact with other Bluetooth devices in several ways,
as shown in the figure. In the simplest scheme, one of the devices acts as
the master and serves (up to) seven other slaves.
• A network with a master and one or more slaves associated with it is known
as a piconet. A single channel (and bandwidth) is shared among all devices
in the piconet.
• Each of the active slaves has an assigned 3-bit active member address.
Many other slaves can remain synchronized to the master while remaining
inactive; these are referred to as parked nodes.
• The master regulates channel access for all active nodes and parked nodes.
If two piconets are close to each other, they have overlapping coverage
areas.
• This scenario, in which nodes of two piconets intermingle, is called a
scatternet. Slaves in one piconet can participate in another piconet as either
a master or slave through time division multiplexing.
• In a scatternet, the two (or more) piconets are not synchronized in either
time or frequency. Each of the piconets operates in its own frequency
hopping channel, and any devices in multiple piconets participate at the
appropriate time via time division multiplexing.
• The Bluetooth baseband technology supports two link types: the
synchronous connection oriented (SCO) type, used primarily for voice, and
the asynchronous connectionless (ACL) type, essentially for packet data.
14.3.4 INTERNET
Internet is called the network of networks. It is a global communication system
that links together thousands of individual networks. In other words, internet is a
collection of interlinked computer networks, connected by copper wires, fiber-optic
cables, wireless connections, etc. As a result, a computer can virtually connect to
other computers in any network. These connections allow users to interchange
messages, to communicate in real time (getting instant messages and responses), to
share data and programs and to access limitless information.
Basics of Internet Architecture
Internet architecture is a meta-network, which refers to a congregation of
thousands of distinct networks interacting with a common protocol. In simple
terms, it is referred to as an internetwork that is connected using protocols. The protocol used is TCP/IP. This protocol connects any two networks that differ in hardware,
software and design.
Process
TCP/IP provides end to end transmission, i.e., each and every node on one
network has the ability to communicate with any other node on the network.
Layers of Internet Architecture
Internet architecture consists of three layers −
Internet Protocol (IP)
In order to communicate, we need our data to be encapsulated as Internet Protocol (IP) packets. These IP packets travel across a number of hosts in a network through routing to reach the destination. However, IP does not support error detection and error recovery, and is incapable of detecting loss of packets.
Transmission Control Protocol (TCP)
TCP stands for "Transmission Control Protocol". It provides end to end transmission of data, i.e., from source to destination. It is a very complex protocol as it supports recovery of lost packets.
Application Protocol
The third layer in internet architecture is the application layer, which has different protocols on which the internet services are built. Some examples of internet services include email (SMTP facilitates the email feature), file transfer (FTP facilitates the file transfer feature), etc.
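To see how these layers cooperate, the following minimal Python sketch (illustrative only; example.com is a placeholder host and network access is assumed) opens a TCP connection and then speaks the application-layer protocol HTTP over it, while IP handles the routing underneath:

    import socket

    # Transport layer: open a reliable TCP connection to port 80 of the host.
    # IP, below TCP, routes the packets; TCP recovers any lost ones.
    sock = socket.create_connection(("example.com", 80))

    # Application layer: send a plain HTTP request over the TCP stream.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    # Read the reply until the server closes the connection.
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()

    print(b"".join(chunks)[:200])  # status line and the first few headers

Each layer is visible here: the socket relies on IP for delivery, on TCP for end to end reliability, and the bytes we send follow the HTTP application protocol.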
Internet-Based Services
Some of the basic services available to Internet users are −
• Email − A fast, easy, and inexpensive way to communicate with other
Internet users around the world.
• Telnet − Allows a user to log into a remote computer as though it were a
local system.
• FTP − Allows a user to transfer virtually every kind of file that can be stored on a computer from one Internet-connected computer to another.
• UseNet news − A distributed bulletin board that offers a combination news
and discussion service on thousands of topics.
• World Wide Web (WWW) − A hypertext interface to Internet information
resources.
WWW
WWW stands for World Wide Web. A technical definition of the World Wide Web is − All the resources and users on the Internet that are using the Hypertext Transfer Protocol (HTTP).
A broader definition comes from the organization that Web inventor Tim Berners-
Lee helped found, the World Wide Web Consortium (W3C): The World Wide Web is
the universe of network-accessible information, an embodiment of human
knowledge.
In simple terms, The World Wide Web is a way of exchanging information
between computers on the Internet, tying them together into a vast collection of
interactive multimedia resources.
HTTP
HTTP stands for Hypertext Transfer Protocol. This is the protocol being used to
transfer hypertext documents that makes the World Wide Web possible.
A standard web address such as Yahoo.com is called a URL, and here the prefix http indicates its protocol.
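As an illustration of the protocol in action, Python's standard urllib module can fetch a hypertext document over HTTP; this is a minimal sketch assuming network access, with example.com as a stand-in address:

    from urllib.request import urlopen

    # urlopen performs the full HTTP request/response cycle for us.
    with urlopen("http://example.com") as response:
        print(response.status)                    # e.g. 200 (OK)
        print(response.headers["Content-Type"])   # e.g. text/html
        body = response.read()                    # the hypertext document

    print(len(body), "bytes received")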
URL
URL stands for Uniform Resource Locator, and is used to specify addresses on
the World Wide Web. A URL is the fundamental network identification for any
resource connected to the web (e.g., hypertext pages, images, and sound files).
A URL will have the following format −
protocol://hostname/other_information
The protocol specifies how information is transferred from a link. The protocol
used for web resources is HyperText Transfer Protocol (HTTP). Other protocols
compatible with most web browsers include FTP, telnet, newsgroups, and Gopher.
The protocol is followed by a colon, two slashes, and then the domain name.
The domain name is the computer on which the resource is located.
Links to particular files or subdirectories may be further specified after the
domain name. The directory names are separated by single forward slashes.
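This format can be verified with Python's urllib.parse module, which splits a URL into exactly these parts; the URL below is an illustrative placeholder:

    from urllib.parse import urlparse

    parts = urlparse("http://www.example.com/docs/index.html")
    print(parts.scheme)   # 'http'             -> the protocol
    print(parts.netloc)   # 'www.example.com'  -> the domain name (host)
    print(parts.path)     # '/docs/index.html' -> subdirectory and file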
Website
Website is a collection of various pages written in HTML markup language.
There are millions of websites available on the web. Each page available on the
website is called a web page and first page of any website is called home page for
that site.
Web Server
Every Website sits on a computer known as a Web server. This server is always
connected to the internet. Every Web server that is connected to the Internet is
given a unique address made up of a series of four numbers between 0 and 255
separated by periods. For example, 68.178.157.132 or 68.122.35.127.
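As a small sketch, Python's ipaddress module can confirm that an address really consists of four numbers in the valid range (the second candidate below is deliberately out of range):

    import ipaddress

    for candidate in ["68.178.157.132", "300.1.2.3"]:
        try:
            addr = ipaddress.ip_address(candidate)
            print(f"{candidate} is a valid IPv{addr.version} address")
        except ValueError:
            print(f"{candidate} is not a valid IP address")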
When you register a Web address, also known as a domain name, such as
tutorialspoint.com you have to specify the IP address of the Web server that will
host the site.
We will see different types of Web servers in a separate chapter.
Web Browser
Web Browsers are software installed on your PC. To access the Web you need a web browser, such as Netscape Navigator, Microsoft Internet Explorer or Mozilla Firefox.
Currently you must be using some sort of Web browser while navigating through websites. On the Web, navigating through pages of information is commonly known as browsing or surfing.
We will see different types of Web browsers in a separate chapter.
SMTP Server
SMTP stands for Simple Mail Transfer Protocol. An SMTP server takes care of delivering emails from one server to another. When you send an email to an email address, it is delivered to its recipient by an SMTP server.
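A hedged sketch of handing a message to an SMTP server in Python follows; the server name, port and credentials are placeholders that would come from your mail provider:

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "sender@example.com"        # placeholder addresses
    msg["To"] = "recipient@example.com"
    msg["Subject"] = "Hello"
    msg.set_content("Delivered via an SMTP server.")

    # smtp.example.com and the login details below are hypothetical.
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()                     # encrypt the session
        server.login("sender@example.com", "password")
        server.send_message(msg)              # the SMTP server relays the mail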
ISP
ISP stands for Internet Service Provider. ISPs are the companies that provide you with the internet connection service needed to connect to the internet.
You will buy space on a Web Server from any Internet Service Provider. This
space will be used to host your Website.
HTML
HTML stands for Hyper Text Markup Language. This is the language in which
we write web pages for any Website. Even the page you are reading right now is
written in HTML.
This is a subset of Standard Generalized Mark-Up Language (SGML) for
electronic publishing, the specific standard used for the World Wide Web.
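As a minimal illustration, Python's built-in http.server module can act as a tiny Web server that delivers the HTML files in the current directory; this sketch is for local experimentation only, not a production server:

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve the HTML files in the current directory at http://localhost:8000/
    server = HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler)
    server.serve_forever()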
Hyperlink
A hyperlink or simply a link is a selectable element in an electronic document
that serves as an access point to other electronic resources. Typically, you click the
hyperlink to access the linked resource. Familiar hyperlinks include buttons, icons,
image maps, and clickable text links.
DNS
DNS stands for Domain Name System. When someone types in your domain
name, www.example.com, your browser will ask the Domain Name System to find
the IP that hosts your site. When you register your domain name, your IP address
should be put in a DNS along with your domain name. Without doing it your
domain name will not be functioning properly.
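The lookup described above can be reproduced in Python; this one-line sketch asks the system resolver, which in turn queries DNS, for the IP address behind a name (example.com is a placeholder):

    import socket

    # Resolve a domain name to the IP address registered for it in DNS.
    print(socket.gethostbyname("www.example.com"))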
W3C
W3C stands for World Wide Web Consortium which is an international
consortium of companies involved with the Internet and the Web.
The W3C was founded in 1994 by Tim Berners-Lee, the original architect of
the World Wide Web. The organization's purpose is to develop open standards so
that the Web evolves in a single direction rather than being splintered among
competing factions. The W3C is the chief standards body for HTTP and HTML.
14.3.5 INTRANET
Intranets are private networks used by organizations to distribute
communications exclusively to their workforce, and they’ve been used for decades
by enterprises for internal communications.
Intranets are run, created, and updated by a dedicated intranet or digital workplace
team. These teams use a variety of cross-functional skills to run the intranet.
The Spark Trajectory Intranet and Digital Workplace skills matrix shows how
intranet and digital workplace teams have a variety of skill sets that stem from
technology and IT management, content and communication, user experience
design, and social and collaboration management.
All these skill sets lend themselves to creating a tool that incorporates
communication, database management, and design. The Sparks skills matrix also
highlights how the department and its responsibilities are not as fixed or defined as
HR or IT. Because the intranet team is responsible for creating, uploading to, and managing the intranet, it can be difficult to uncover where exactly new features and tools need to be added.
The intranet is not the digital workplace, the goal of which is to break down communication barriers and foster efficiency, innovation, and growth. It is not a
one-size-fits-all solution but rather a best-in-class set of platforms and tools that
make work happen seamlessly. A successful digital workplace uses intelligent
workflows to make everything work on-demand and with less friction.
Internet Vs Intranet
The difference between the internet and the intranet is simple: the internet is a
public network that is not owned by any entity, while an intranet is privately owned
and not accessible to just anyone who can get online.
Advantages of an intranet
Despite the fact that new technology is emerging to advance the field of
communications, for many companies, there are still key advantages to having an
intranet made and managed by a dedicated in-house team. Here are three strategic
benefits of having a company intranet.
Easy storage of files and information
Every organization has hundreds, if not thousands, of differently formatted
files floating among email threads, Google Drive, or hard drives on laptops or
desktops. Having a company intranet makes it easy to store and access all your
files in one central location. Any communication that happens on an intranet is
also saved for as long as the intranet is up. This makes it easier for individuals to
search for past posts from their company intranet.
Easy ways to communicate among employees
Intranets usually feature user profiles similar to your LinkedIn profile. They
contain a photo, job title and description, and contact information. Any employee
with access to the intranet can discover new colleagues and message them through
the intranet. This facilitates increased collaboration and helps establish a workforce
network.
A good Content Management System (CMS)
Intranets started off in the 1990s as a simple welcome page with sparse
information about an enterprise. Since then, they have evolved to include
discussion forums, blogs, and social features. This has enabled internal
communicators to be able to use their intranet to share key messages and
organizational updates. CMS intranet enables internal communicators to easily
create and share content across the system.
Disadvantages of an intranet
Obviously, tools that have been in use for decades can also come to feel clunky
and outdated, particularly if they’re not updated or modified to keep pace with the
company’s needs. Here are five common disadvantages to having an intranet for
many companies.
Information is difficult to find
Intranets have traditionally been structured in an outdated format that forces
employees to search through thousands of pages or folders to find the information
that they need. Intranets don’t pull relevant and current stories in the way that we
are used to receiving information. Content is often out of date because publishing
to intranets can be difficult and time-consuming. Moreover, you can't target employees or personalize communications, so the information is often neither relevant nor timely.
With information being hard to find, workers may get disillusioned with
keeping up with intranet communications. This negatively impacts the employee
experience and lessens the effect of your messages on broader business objectives.
Desk-less workers can’t access intranets easily
Intranets are limited in their ability to reach all employees as they are
sometimes only accessible on the desktop. Remote workers, such as retail workers,
nurses, warehouse staff, and truck drivers, may not have the ability to access
intranets. Frontline workers may not have a company laptop, home access to the
intranet, or regular access to the internet. Even if these workers wait until they
have a computer to access the intranet, the information on the intranet is often out
of date or disorganized, making it difficult for employees to quickly find the content
they need. In order to reach every worker, you need to make sure that your
communication tools support remote and field workers.
Mobile isn’t supported
It is no secret that as a society we are addicted to our cell phones. Over 85% of millennials have their own smartphone. Many intranets do not support mobile or
need to be updated with the next mobile release. If you already have an intranet, it
can be tempting to wait until the next update that has a mobile feature instead of
committing to a communications makeover. But by waiting to make the switch
over, you are losing an easy, accessible way to communicate with your entire
workforce. Mobile communications, especially native apps, are convenient for every
worker and more likely to keep them engaged.
Updates to intranets can take a long time
Communication technology and software are constantly improving and
evolving. We are accustomed to regularly scheduled software updates, new releases,
and exciting new features. In comparison, it can take years for new intranet
versions to come out. Microsoft, for example, is on a three-year release cycle. This
lag prevents internal communicators with intranets from getting access to the latest
strategies to keep workers informed and up to date. With these big new releases,
employees need to learn complicated new features and adjust their old workflows.
New releases may not integrate well with other software that you use or be tedious
to install if you wait too long between releases.
Intranet portal solutions are not user-friendly
If you’re thinking of an intranet refresh or turning to an intranet software
solution, it may not be the best investment. This is just an overlay to your existing
intranet, so all your same problems are just covered over with a prettier interface.
You’ll be stuck with the same bad search, outdated technology, and lack of mobile
responsiveness.
14.3.6 EXTRANET
An extranet is a private network that only authorized users can access. These
authorized users may include business partners, suppliers, and even some
customers. They can use the extranet to exchange information with each other
without having to enter the host company’s main network.
An extranet is like a secure file room located somewhere off the company
premises. Only those issued a key can enter and browse through the filing cabinets.
Benefits of Using an Extranet
Extranets offer various benefits to users, especially companies, regardless of
size, which include:
1. Increased efficiency
Businesses, small and large alike, typically work with multiple external
vendors and partners to produce outputs or complete tasks. An extranet can help
them manage the task workflow effectively, allowing them to ensure the prompt
completion of deliverables.
2. Improved collaboration and knowledge sharing
Sharing documents and real-time updates seamlessly is possible among all
company stakeholders through extranets. They also offer a safe place to do so
without worrying about external security risks or the leakage of confidential data.
The improved collaboration results in innovation and improvements in overall
processes.
3. Enhanced communication and engagement
Extranets serve as an avenue for the management to communicate updates,
make announcements, and share news to all stakeholders. That leads to increased
engagement and active participation from all of an organization’s members.
Disadvantages of Using an Extranet
1. Costs
One of the downsides of extranet use is that it is expensive to set up and
maintain. The costs include payment for hardware and software, as well as training and salaries for the IT staff who may need to build it from the ground up.
Maintenance costs can also rack up. Companies that don’t have enough
financial capacity to establish dedicated extranets can opt for third-party-managed
or -hosted extranets. Smaller companies can subscribe to cloud-based extranet
services, which often come with plug-and-play features. An example of such a
service is Microsoft Sharepoint. Note, though, that while users don’t need to hire
people to build their extranets from scratch, they’d still require servers to run
applications and staff to keep these running.
2. Security
Another drawback of using third-party extranets has to do with security.
Unauthorized users can gain access to these if their providers have lax security
measures in place. That could lead to the loss of proprietary and other confidential
information. That is why in-house staff typically manage extranets—doing so limits
access to authorized users, alleviating risks of data loss.
Intranet vs extranet
The distinction between an intranet and an extranet is that while an intranet
is limited in access to an organization, an extranet may be accessed
by authorized third parties such as customers, vendors, or partners.
14.4 REVISION POINTS
• LAN, WAN, MAN, WLAN
• Internet
• Intranet
• Extranet
14.5 INTEXT QUESTIONS
1. Explain the network classification based on geography.
2. List-down the uses of Metropolitan Area Network.
3. Specify advantages of WLAN.
4. Write short note on HyperLAN.
5. What is the difference between internet and intranet?
14.6 SUMMARY
LAN is used for connecting two or more personal computers through a
communication medium such as twisted pair, coaxial cable, etc.
A Wide Area Network is not limited to a single location, but it spans over a
large geographical area through a telephone line, fibre optic cable or satellite
links.
Wireless LANs provide high speed data communication in small areas such
as building or an office.
Internet is a collection of interlinked computer networks, connected by copper wires, fiber-optic cables, wireless connections, etc.
Telnet allows a user to log into a remote computer as though it were a local
system.
FTP allows a user to transfer virtually every kind of file that can be stored
on a computer from one Internet-connected computer to another.
14.7 TERMINAL EXERCISE
1. _________________ is a group of computers connected to each other in a small
area such as building, office
2. _____________ is one of the biggest WAN in the world.
3. ________________ standards defines the technologies for wireless LANs
4. ______________ is called the network of networks
5. TCP stands for ____________________________
14.8 SUPPLEMENTARY MATERIALS
1. https://www3.nd.edu/~cpoellab/teaching/cse40814_fall14/networks.pdf
2. https://www.tutorialspoint.com
14.9 ASSIGNMENTS
1. Explain the advantages and disadvantages of Intranet and Extranet
14.10 SUGGESTED READING/REFERENCE
1. Parameshwaran. R, Computer Application in Business, S Chand Company
Ltd. New Delhi, 2004.
2. Rajaram. V, Introduction to Computer, 7th edition, Tata McGraw Hill, 2013.
14.11 LEARNING ACTIVITIES
1. Try to explore the evolution of networking and internet.
14.12 KEYWORDS
LAN – Local Area Network
WAN – Wide Area Network
MAN – Metropolitan Area Network
TCP – Transmission Control Protocol
IP – Internet Protocol
Application protocol
Telnet
FTP – File Transfer Protocol
WWW - World Wide Web
HTTP – Hyper-Text Transfer Protocol
URL – Uniform Resource Locator
Web browser
HTML – Hyper Text Mark-up Language
LESSON - 15
NETWORKING TOPOLOGIES
15.1 INTRODUCTION
Networking is the core of any organization today for data and information
sharing. We have discussed the basics of networking in earlier chapters. There are
many methods to connect a computer or peripheral to a network. In this chapter we
shall discuss about the topologies that are used to incorporate a computer in a
network.
15.2 OBJECTIVES
• To understand the various topology in networking.
• To study the advantages and disadvantages of various topologies.
15.3 CONTENTS
15.3.1 Definition
15.3.2 Bus Topology
15.3.3 Ring Topology
15.3.4 Star Topology
15.3.5 Tree Topology
15.3.6 Mesh Topology
15.3.7 Hybrid Topology
15.3.1 DEFINITION
Topology defines the structure of the network of how all the components are
interconnected to each other. There are two types of topology: physical and logical
topology.
Physical topology is the geometric representation of all the nodes in a network.
15.3.2 BUS TOPOLOGY
Features
• The bus topology is designed in such a way that all the stations are
connected through a single cable known as a backbone cable.
• Each node is either connected to the backbone cable by drop cable or
directly connected to the backbone cable.
• When a node wants to send a message over the network, it puts the message on the network. All the stations available in the network will receive the message whether it has been addressed to them or not.
• The bus topology is mainly used in 802.3 (ethernet) and 802.4 standard
networks.
• The configuration of a bus topology is quite simpler as compared to other
topologies.
• The backbone cable is considered as a "single lane" through which the
message is broadcast to all the stations.
• The most common access method of the bus topologies is CSMA (Carrier
Sense Multiple Access).
CSMA: It is a media access control used to control the data flow so that data
integrity is maintained, i.e., the packets do not get lost. There are two alternative
ways of handling the problems that occur when two nodes send the messages
simultaneously.
• CSMA CD: CSMA CD (Collision Detection) is an access method used to detect the collision. Once the collision is detected, the sender will stop transmitting the data. Therefore, it works on "recovery after the collision".
• CSMA CA: CSMA CA (Collision Avoidance) is an access method used to
avoid the collision by checking whether the transmission media is busy or
not. If busy, then the sender waits until the media becomes idle. This
technique effectively reduces the possibility of the collision. It does not work
on "recovery after the collision".
Advantages of Bus topology
• Low-cost cable: In bus topology, nodes are directly connected to the cable
without passing through a hub. Therefore, the initial cost of installation is
low.
• Moderate data speeds: Coaxial or twisted pair cables are mainly used in bus-based networks that support up to 10 Mbps.
• Familiar technology: Bus topology is a familiar technology as the installation and troubleshooting techniques are well known, and hardware components are easily available.
• Limited failure: A failure in one node will not have any effect on other
nodes.
Disadvantages of Bus topology
• Extensive cabling: A bus topology is quite simpler, but still it requires a lot
of cabling.
• Difficult troubleshooting: It requires specialized test equipment to
determine the cable faults. If any fault occurs in the cable, then it would
disrupt the communication for all the nodes.
• Signal interference: If two nodes send the messages simultaneously, then
the signals of both the nodes collide with each other.
• Reconfiguration difficult: Adding new devices to the network would slow
down the network.
• Attenuation: Attenuation is a loss of signal strength that leads to communication issues. Repeaters are used to regenerate the signal.
15.3.3 RING TOPOLOGY
• Ring topology is like a bus topology, but with connected ends.
• The node that receives the message from the previous computer will
retransmit to the next node.
• The data flows in one direction, i.e., it is unidirectional.
• The data flows in a single loop continuously known as an endless loop.
• It has no terminated ends, i.e., each node is connected to other node and
having no termination point.
• The data in a ring topology flow in a clockwise direction.
• The most common access method of the ring topology is token passing.
o Token passing: It is a network access method in which token is
passed from one node to another node.
o Token: It is a frame that circulates around the network.
Working of Token passing
• A token moves around the network, and it is passed from computer to
computer until it reaches the destination.
• The sender modifies the token by putting the address along with the data.
• The data is passed from one device to another device until the destination
address matches. Once the token received by the destination device, then it
sends the acknowledgment to the sender.
• In a ring topology, a token is used as a carrier.
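The token passing procedure above can be mimicked with a short Python simulation; the four station names are arbitrary placeholders:

    NODES = ["A", "B", "C", "D"]           # stations arranged in a ring

    def token_ring_send(src, dst, data):
        """Circulate the frame node by node until the address matches."""
        i = NODES.index(src)
        while True:
            i = (i + 1) % len(NODES)       # pass to the next node in the ring
            print(f"frame now at node {NODES[i]}")
            if NODES[i] == dst:            # destination reached: acknowledge
                return f"{dst} received {data!r}; acknowledgment sent to {src}"

    print(token_ring_send("A", "C", "hello"))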
Advantages of Ring topology
• Network Management: Faulty devices can be removed from the network
without bringing the network down.
• Product availability: Many hardware and software tools for network
operation and monitoring are available.
• Cost: Twisted pair cabling is inexpensive and easily available. Therefore, the
installation cost is very low.
• Reliable: It is a more reliable network because the communication system is
not dependent on the single host computer.
Disadvantages of Ring topology
• Difficult troubleshooting: It requires specialized test equipment to
determine the cable faults. If any fault occurs in the cable, then it would
disrupt the communication for all the nodes.
• Failure: The breakdown in one station leads to the failure of the overall
network.
• Reconfiguration difficult: Adding new devices to the network would slow
down the network.
• Delay: Communication delay is directly proportional to the number of
nodes. Adding new devices increases the communication delay.
15.3.4 STAR TOPOLOGY
• Star topology is an arrangement of the network in which every node is connected to the central hub, switch or a central computer.
• The central computer is known as a server, and the peripheral devices attached to the server are known as clients.
• Coaxial cable or RJ-45 cables are used to connect the computers.
• Hubs or Switches are mainly used as connection devices in a physical star
topology.
• Star topology is the most popular topology in network implementation.
Advantages of Star topology
• Efficient troubleshooting: Troubleshooting is quite efficient in a star topology as compared to bus topology. In a bus topology, the manager has to inspect kilometers of cable. In a star topology, all the stations are connected to the centralized network. Therefore, the network administrator has to go to a single station to troubleshoot the problem.
• Network control: Complex network control features can be easily
implemented in the star topology. Any changes made in the star topology are
automatically accommodated.
• Limited failure: As each station is connected to the central hub with its own cable, failure in one cable will not affect the entire network.
• Familiar technology: Star topology is a familiar technology as its tools are
cost-effective.
• Easily expandable: It is easily expandable as new stations can be added to the open ports on the hub.
• Cost effective: Star topology networks are cost-effective as they use inexpensive coaxial cable.
• High data speeds: It supports a bandwidth of approx 100Mbps. Ethernet
100BaseT is one of the most popular Star topology networks.
Disadvantages of Star topology
• A Central point of failure: If the central hub or switch goes down, then all
the connected nodes will not be able to communicate with each other.
• Cable: Sometimes cable routing becomes difficult when a significant amount of routing is required.
of routing is required.
15.3.5 TREE TOPOLOGY
• Tree topology combines the characteristics of bus topology and star topology.
• A tree topology is a type of structure in which all the computers are
connected with each other in hierarchical fashion.
• The top-most node in tree topology is known as a root node, and all other nodes are the descendants of the root node.
• Only one path exists between two nodes for data transmission. Thus, it forms a parent-child hierarchy.
Advantages of Tree topology
• Support for broadband transmission: Tree topology is mainly used to
provide broadband transmission, i.e., signals are sent over long distances
without being attenuated.
• Easily expandable: We can add a new device to the existing network. Therefore, we can say that tree topology is easily expandable.
• Easily manageable: In tree topology, the whole network is divided into
segments known as star networks which can be easily managed and
maintained.
• Error detection: Error detection and error correction are very easy in a tree
topology.
• Limited failure: The breakdown in one station does not affect the entire
network.
• Point-to-point wiring: It has point-to-point wiring for individual segments.
Disadvantages of Tree topology
• Difficult troubleshooting: If any fault occurs in the node, then it becomes
difficult to troubleshoot the problem.
• High cost: Devices required for broadband transmission are very costly.
• Failure: A tree topology mainly relies on the main bus cable, and failure in the main bus cable will damage the overall network.
• Reconfiguration difficult: If new devices are added, then it becomes
difficult to reconfigure.
15.3.6 MESH TOPOLOGY
• Mesh technology is an arrangement of the network in which computers are interconnected with each other through various redundant connections.
• There are multiple paths from one computer to another computer.
• It does not contain the switch, hub or any central computer which acts as a
central point of communication.
• The Internet is an example of the mesh topology.
• Mesh topology is mainly used for WAN implementations where
communication failures are a critical concern.
• Mesh topology is mainly used for wireless networks.
• Mesh topology can be formed by using the formula:
Number of cables = n * (n - 1) / 2
where n is the number of nodes in the network.
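As a quick check of this formula, a small Python function reproduces the cable counts for a fully connected mesh:

    def mesh_cables(n):
        # n*(n-1)/2 point-to-point links in a fully connected mesh of n nodes
        return n * (n - 1) // 2

    for n in (4, 10, 50):
        print(n, "nodes need", mesh_cables(n), "cables")
    # 4 nodes need 6, 10 nodes need 45, 50 nodes need 1225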
Mesh topology is divided into two categories:
• Fully connected mesh topology
• Partially connected mesh topology
• Full Mesh Topology: In a full mesh topology, each computer is connected to all the computers available in the network.
• Partial Mesh Topology: In a partial mesh topology, not all but certain
computers are connected to those computers with which they communicate
frequently.
Advantages of Mesh topology
• Reliable: Mesh topology networks are very reliable, as the breakdown of any one link does not affect communication between the connected computers.
• Fast Communication: Communication is very fast between the nodes.
• Easier Reconfiguration: Adding new devices would not disrupt the communication between other devices.
Disadvantages of Mesh topology
• Cost: A mesh topology contains a large number of connected devices such
as a router and more transmission media than other topologies.
• Management: Mesh topology networks are very large and very difficult to
maintain and manage. If the network is not monitored carefully, then the
communication link failure goes undetected.
• Efficiency: In this topology, redundant connections are high that reduces
the efficiency of the network.
15.3.7 HYBRID TOPOLOGY
• The combination of various different topologies is known as Hybrid topology.
• A Hybrid topology is a connection between different links and nodes to
transfer the data.
• When two or more different topologies are combined together, the result is termed a Hybrid topology; connecting similar topologies with each other will not result in a Hybrid topology. For example, if there exists a ring topology in one branch of ICICI bank and a bus topology in another branch of ICICI bank, connecting these two topologies will result in a Hybrid topology.
Advantages of Hybrid Topology
• Reliable: If a fault occurs in any part of the network will not affect the
functioning of the rest of the network.
• Scalable: Size of the network can be easily expanded by adding new devices
without affecting the functionality of the existing network.
• Flexible: This topology is very flexible as it can be designed according to the
requirements of the organization.
• Effective: Hybrid topology is very effective as it can be designed in such a
way that the strength of the network is maximized and weakness of the
network is minimized.
Disadvantages of Hybrid topology
• Complex design: The major drawback of the Hybrid topology is the design of
the Hybrid network. It is very difficult to design the architecture of the
Hybrid network.
• Costly Hub: The Hubs used in the Hybrid topology are very expensive as
these hubs are different from usual Hubs used in other topologies.
• Costly infrastructure: The infrastructure cost is very high as a hybrid
network requires a lot of cabling, network devices, etc.
15.4 REVISION POINTS
• Definitions of various topologies
• Advantages and disadvantages of various topologies
15.5 INTEXT QUESTIONS
1. Define topology.
2. List down the features of bus topology.
3. Write down the advantages of tree topology.
4. Record the features of mesh topology.
5. What are the advantages of hybrid topology?
15.6 SUMMARY
Topology defines the structure of the network of how all the components are
interconnected to each other
The bus topology is designed in such a way that all the stations are
connected through a single cable known as a backbone cable
The most common access method of the ring topology is token passing
Star topology is an arrangement of the network in which every node is
connected to the central hub, switch or a central computer.
Mesh technology is an arrangement of the network in which computers are
interconnected with each other through various redundant connections
15.7 TERMINAL EXERCISE
1. The backbone cable is considered as a ___________ through which the
message is broadcast to all the stations.
2. CSMA stands for ________________________
3. The data flows is unidirectional in ________________
4. _________________ are mainly used as connection devices in a physical star topology.
5. _________________ combines the characteristics of bus topology and star
topology.
15.8 SUPPLEMENTARY MATERIALS
1. https://www.javatpoint.com/fundamentals-of-wlans
2. https://www.tutorialspoint.com
15.9 ASSIGNMENTS
1. Write an essay about the various topologies for networking. List-down its
advantages and disadvantages.
15.10 SUGGESTED READING/REFERENCE
1. Peter Norton, Introduction to Computers, 6th edition, Tata McGraw Hill,
New Delhi.
2. Alexis Leon and Mathew Leon, Fundamentals to information Technology,
Leon Tech World, New Delhi, 2008.
15.11 LEARNING ACTIVITIES
1. Identify a place where networking is done. Find out which kind of topology is
followed and why.
15.12 KEYWORDS
Topology
Token passing
CSMA
Client
Server
LESSON - 16
ADVANCED NETWORKING AND VIRUS
16.1 INTRODUCTION
Networking is the core of any organization today for data and information
sharing, and we have discussed the basics of networking in earlier chapters. Today's businesses are run using sophisticated, high-speed and complex networks. Here we shall discuss the advanced modes of networking and their uses to business.
Since networking is exposed to many users, there are a lot of threats associated with it. One form of threat is the virus. Let us have a look into it in this chapter.
16.2 OBJECTIVES
• To understand the advanced methods of networking
• To study the pros and cons of advanced networking methods.
• To get knowledge about virus and its impact.
16.3 CONTENTS
16.3.1 Virtual Private Networking
16.3.2 Peer to Peer Networking
16.3.3 Client Server Networking
16.3.4 Virus
16.3.5 Anti-Virus
16.3.1 VIRTUAL PRIVATE NETWORK
A virtual private network, or VPN, is an encrypted connection over the Internet
from a device to a network. The encrypted connection helps ensure that sensitive
data is safely transmitted.
VPN stands for virtual private network. A virtual private network (VPN) is a
technology that creates a safe and encrypted connection over a less secure network,
such as the internet. A virtual private network is a way to extend a private network using a public network such as the internet. As the name suggests, it is a virtual "private network", i.e., a user can be part of a local network while sitting at a remote location. It makes use of tunneling protocols to establish a secure connection.
Surfing the web or transacting on an unsecured Wi-Fi network means you
could be exposing your private information and browsing habits. That’s why a
virtual private network, better known as a VPN, should be a must for anyone
concerned about their online security and privacy.
The encryption and anonymity that a VPN provides helps protect your online
activities: sending emails, shopping online, or paying bills. VPNs also help keep
your web browsing anonymous.
VPNs essentially create a data tunnel between your local network and an exit
node in another location, which could be thousands of miles away, making it seem
as if you’re in another place. This benefit allows online freedom, or the ability to
access your favorite apps and websites while on the go.
A VPN can hide a lot of information that can put your privacy at risk. Here are five
of them.
1. Your browsing history
2. Your IP address and location
3. Your location for streaming
4. Your devices
5. Your web activity — to maintain internet freedom
Features of VPN
1. VPN ensures security by providing an encrypted tunnel between the client and the VPN server.
2. VPN is used to bypass many blocked sites.
3. VPN facilitates anonymous browsing by hiding your IP address.
4. Search engine optimization (SEO) can also be informed by analyzing data from VPN providers, which provide country-wise statistics on the browsing of a particular product. This method of SEO is used widely by many internet marketing managers to form new strategies.
How to choose a VPN
A smart way to stay secure when using public Wi-Fi is to use a VPN solution.
But what’s the best way to choose a virtual private network? Here are some
questions to ask when you’re choosing a VPN provider.
1. Do they respect your privacy? The point of using a VPN is to protect your
privacy, so it’s crucial that your VPN provider respects your privacy, too.
They should have a no-log policy, which means that they never track or log
your online activities.
2. Do they run the most current protocol? OpenVPN provides stronger
security than other protocols, such as PPTP. OpenVPN is an open-source
software that supports all the major operating systems.
3. Do they set data limits? Depending on your internet usage, bandwidth
may be a large deciding factor for you. Make sure their services match your
needs by checking to see if you’ll get full, unmetered bandwidth without data
limits.
4. Where are the servers located? Decide which server locations are
important to you. If you want to appear as if you’re accessing the Web from a
certain locale, make sure there’s a server in that country.
5. Will you be able to set up VPN access on multiple devices? If you are like
the average consumer, you typically use between three and five devices.
Ideally, you’d be able to use the VPN on all of them at the same time.
6. How much will it cost? If price is important to you, then you may think
that a free VPN is the best option.
16.3.2 PEER TO PEER NETWORK
The peer to peer computing architecture contains nodes that are equal participants in data sharing. All the tasks are equally divided between all the nodes. The nodes interact with each other as required and share resources.
Characteristics of Peer to Peer Computing
The different characteristics of peer to peer networks are as follows −
Peer to peer networks are usually formed by groups of a dozen or less
computers. These computers all store their data using individual security but also
share data with all the other nodes.
The nodes in peer to peer networks both use resources and provide resources.
So, if the nodes increase, then the resource sharing capacity of the peer to peer
network increases. This is different than client server networks where the server
gets overwhelmed if the nodes increase.
Since nodes in peer to peer networks act as both clients and servers, it is difficult to provide adequate security for the nodes. This can lead to denial of service attacks.
Most modern operating systems such as Windows and Mac OS contain
software to implement peer to peer networks.
Advantages of Peer to Peer Computing
Some advantages of peer to peer computing are as follows −
Each computer in the peer to peer network manages itself. So, the network is quite easy to set up and maintain.
In the client server network, the server handles all the requests of the clients. This provision is not required in peer to peer computing, and the cost of the server is saved.
It is easy to scale the peer to peer network and add more nodes. This only
increases the data sharing capacity of the system.
None of the nodes in the peer to peer network are dependent on the others for
their functioning.
Disadvantages of Peer to Peer Computing
Some disadvantages of peer to peer computing are as follows −
It is difficult to back up the data as it is stored in different computer systems and there is no central server.
It is difficult to provide overall security in the peer to peer network as each system is independent and contains its own data.
system is independent and contains its own data.
16.3.3 CLIENT-SERVER NETWORKS
Client-server networks are computer networks that use a dedicated computer (server) to store data, manage/provide resources and control user access.
The server acts as a central point on the network to which the other computers connect.
A computer that connects to the server is called a client.
A client-server network is usually preferred over a peer-to-peer network, which doesn't have a central server to manage the network.
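The client-server pattern can be sketched with Python's socket module; the address and messages below are placeholders, and the server and client would normally run on separate machines or in separate processes:

    import socket

    HOST, PORT = "127.0.0.1", 5000        # hypothetical local test address

    def run_server():
        """Dedicated server: waits for a client and answers its request."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            conn, _addr = srv.accept()    # a client has connected
            with conn:
                request = conn.recv(1024)
                conn.sendall(b"server reply to: " + request)

    def run_client():
        """Client: connects to the central server and sends a request."""
        with socket.create_connection((HOST, PORT)) as cli:
            cli.sendall(b"list the shared files")
            print(cli.recv(1024).decode())

Running run_server() in one terminal and run_client() in another shows every request flowing through the central server, which is exactly the dependence that a peer-to-peer network avoids.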
Network server functions
A client-server network may have more than one server, each dedicated to handling a specific function.
handling a specific function.
Functions may include:
• Data storage
• Handling security
• Hosting shared applications
• Managing an internet connection
• Scheduling and running backups
• Email services
• Print jobs
• Domain name services
• Storing usernames and passwords to control access
• Assigning levels of access to resources
• Monitoring network traffic
Benefits of a client-server network
• Generally more secure than peer-to-peer networks
• One client computer crashing does not affect the other computers
• Easier to recover files as backups can be controlled centrally by the network
administrator
• Files and resources are easier to share and control from server
• Improved levels of security as files are centralised
• It’s easier to administrate the whole network using a server
• Faster performance as each computer is only fulfilling one role
• Security is potentially cheaper and easier when done centrally
• Individual users do not have to worry about backups or security
• Larger networks can be created
Drawbacks of a client-server network
• Servers can be expensive to buy and maintain
• A network technician will often be required
• Trickier to set up with specialist knowledge needed
• Over-all set up cost is more expensive than a peer-to-peer network
• Server failure will probably disrupt all computers on the network
16.3.4 VIRUS
A computer virus is a type of malicious code or program written to alter the
way a computer operates and is designed to spread from one computer to another.
A virus operates by inserting or attaching itself to a legitimate program or document
that supports macros in order to execute its code. In the process, a virus has the
potential to cause unexpected or damaging effects, such as harming the system
software by corrupting or destroying data.
Once a virus has successfully attached to a program, file, or document, the
virus will lie dormant until circumstances cause the computer or device to execute
its code. In order for a virus to infect your computer, you have to run the infected
program, which in turn causes the virus code to be executed.
A virus can remain dormant on your computer, without showing major signs
or symptoms. However, once the virus infects your computer, the virus can infect
other computers on the same network. Stealing passwords or data, logging
keystrokes, corrupting files, spamming your email contacts, and even taking over
your machine are just some of the devastating and irritating things a virus can do.
While some viruses can be playful in intent and effect, others can have
profound and damaging effects. This includes erasing data or causing permanent
damage to your hard disk.
A computer virus attack can produce the following symptoms.
• Frequent pop-up windows. Pop-ups might encourage you to visit unusual
sites. Or they might prod you to download antivirus or other software
programs.
• Changes to your homepage. Your usual homepage may change to another
website, for instance. Plus, you may be unable to reset it.
• Mass emails being sent from your email account. A criminal may take
control of your account or send emails in your name from another infected
computer.
• Frequent crashes. A virus can inflict major damage on your hard drive. This
may cause your device to freeze or crash. It may also prevent your device
from coming back on.
• Unusually slow computer performance. A sudden change of processing
speed could signal that your computer has a virus.
• Unknown programs that start up when you turn on your computer. You
may become aware of the unfamiliar program when you start your
computer. Or you might notice it by checking your computer’s list of active
applications.
• Unusual activities like password changes. This could prevent you from
logging into your computer.
Different types of computer viruses
1. Boot sector virus: This type of virus can take control when you start — or
boot — your computer. One way it can spread is by plugging an infected
USB drive into your computer.
2. Web scripting virus: This type of virus exploits the code of web browsers
and web pages. If you access such a web page, the virus can infect your
computer.
3. Browser hijacker: This type of virus “hijacks” certain web browser
functions, and you may be automatically directed to an unintended website.
4. Resident virus: This is a general term for any virus that inserts itself in a
computer system’s memory. A resident virus can execute anytime when an
operating system loads.
5. Direct action virus: This type of virus comes into action when you execute
a file containing a virus. Otherwise, it remains dormant.
6. Polymorphic virus: A polymorphic virus changes its code each time an
infected file is executed. It does this to evade antivirus programs.
7. File infector virus: This common virus inserts malicious code into
executable files — files used to perform certain functions or operations on a
system.
8. Multipartite virus: This kind of virus infects and spreads in multiple ways.
It can infect both program files and system sectors.
9. Macro virus: Macro viruses are written in the same macro language used for
software applications. Such viruses spread when you open an infected
document, often through email attachments.
16.3.5 ANTI-VIRUS
Antivirus software is a type of program designed and developed to protect
computers from malware like viruses, computer worms, spyware, botnets, rootkits,
key loggers and such. Antivirus programs function to scan, detect and remove
viruses from your computer. There are many versions and types of anti-virus
programs that are on the market. However, the prime objective of any antivirus
program is to protect computers and remove viruses once detected.
Most antivirus programs incorporate both automated and manual filtering
abilities. The instant scanning option may check files downloaded from the Internet, discs that are inserted into the PC, and files that are created by software
installers. The programmed scanning process may likewise check the entire hard
drive on a day-to-day basis. The manual scanning system enables you to check
single documents or even to scan the complete network at whatever point you feel it
is necessary.
Features of an Effective Antivirus
The following features of any antivirus are to be looked for when you decide on installing one:
• Proactive scanning for malwares, and deleting once detected
• Default-Deny Protection – Default-Deny protection that is implemented to
prevent the entry of suspicious files by default.
• Auto Sandbox Technology – A virtual environment where suspicious and
unknown files are secluded and run to check for any malicious activity
without interfering with the normal operations.
• Containment Technology – Validates and authorizes the programs that are
executable and ensures that the processes are run without effecting the
regular operations of the system.
• Host Intrusion Protection System (HIPS) – This feature works on a protocol-
based intrusion prevention system that oversees all the application and
program activities that are processed in the system. The HIPS terminates
any malicious activities once found. This prevents the malware from
infecting the operating system, registry keys or personal data or system
memory.
16.4 REVISION POINTS
• Virtual private networking
• Peer to peer networking
• Client server networking
• Virus
• Anti-Virus
16.5 INTEXT QUESTIONS
1. What are the information that a VPN can hide?
2. What are the characteristics of peer to peer network?
3. Write down the functions of server in a client server network.
4. Enumerate the symptoms of virus attack.
5. Itemize the features of effective anti-virus software.
16.6 SUMMARY
A virtual private network, or VPN, is an encrypted connection over the
Internet from a device to a network. The encrypted connection helps ensure
that sensitive data is safely transmitted.
The nodes in peer to peer networks both use resources and provide
resources
Client-server networks are computer networks that use a dedicated
computer (server) to store data, manage/provide resources and control user
access
A computer virus is a type of malicious code or program written to alter the
way a computer operates and is designed to spread from one computer to
another
Antivirus software is a type of program designed and developed to protect
computers from malware like viruses, computer worms, spyware.
16.7 TERMINAL EXERCISE
1. ______________ create a data tunnel between your local network and an exit
node
2. _______________________ architecture contains nodes that are equal
participants in data sharing.
3. A computer that connects to the server is called a ______
4. ____________ type of virus take control when you start or boot your
computer.
5. _______________ is a general term for any virus that inserts itself in a
computer system’s memory.
16.8 SUPPLEMENTARY MATERIALS
1. https://www3.nd.edu/~cpoellab/teaching/cse40814_fall14/networks.pdf
2. https://www.tutorialspoint.com
16.9 ASSIGNMENTS
1. Explain the special features of advanced networking technologies.
16.10 SUGGESTED READING/REFERENCE
1. Parameshwaran. R, Computer Application in Business, S Chand Company
Ltd. New Delhi, 2004.
2. Rajaram. V, Introduction to Computer, 7th edition, Tata McGraw Hill, 2013.
16.11 LEARNING ACTIVITIES
1. Identify the places where VPN, Peer to Peer networking and Client Server Networking are used. For what purposes are they being used there?
16.12 KEYWORDS
Virtual private networking
Search engine optimization
Server computer
Client
Virus
Spam
Boot sector virus
Web scripting virus
Auto sandbox technology
Containment technology.
LESSON -17
IT STRATEGIC ALIGNMENT
17.1 INTRODUCTION
Information technology is the use of computers to store and process facts and
figures into a useful, organized, form. “Data” is the raw material: numbers and
facts. “Information” is the raw material organized in a useful way. Numbers are
data. A telephone book full of numbers is information. To emphasize the role of
communications some people use the acronym ICT which stands for Information
and Communication Technology.
IT sector is so well evolved that it is directly or indirectly influencing the
working of various other sectors and industries. IT sector acts as a supporting
figure for various sectors such as healthcare, aviation, education, manufacturing
sector, telecommunications sector, various government initiatives and departments
etc.
The IT industry is not limited to software development alone; it can also be applied in libraries, hospitals, banks, shops, prisons, hotels, airports, train stations and many other places through database management systems, or through custom-made software as seen fit.
17.2 OBJECTIVES
• To understand the concepts of Information technology
• To recognize the meaning IT strategy
• To get insights of IT strategy and business alignment
• To study the various models for IT strategies.
17.3 CONTENTS
17.3.1 Information Technology Features
17.3.2 IT Strategy
17.3.3 IT-Business Alignment Model
17.3.4 IT and Porter’s Five Forces
17.3.5 Value Chain Model
17.3.6 Strategic Resources and Capabilities
17.3.1 INFORMATION TECHNOLOGY: FEATURES
i. The Importance of Information Storage & Retrieval Systems in an Organization
Information is a critical business resource and like any other critical resource
must be properly managed. Constantly evolving technology, however, is changing
the way that even very small businesses manage vital business information. An
information or records management system -- most often electronic -- designed to
capture, process, store and retrieve information is the glue that holds a business
together.
ii. Regulatory Compliance
Unlike a public company, a privately held business isn’t subject to most federal
and state government compliance requirements. Despite this, many choose to
comply voluntarily, both to provide transparency and enhance the business’s public
image. In addition, small-business owners must store and maintain tax information
so, in case of an audit, the information is readily accessible. A well-organized
information storage and retrieval system that follows compliance regulations and
tax record-keeping guidelines significantly increases a business owner’s confidence that the business is fully complying.
iii. Efficiency and Productivity
Any time a business owner or employees spend searching through stacks of loose files or trying to locate missing or misfiled records is inefficient, unproductive and can prove costly to a small business.
and retrieval system, including an effective indexing system, not only decreases the
chances information will be misfiled but also speeds up the storing and retrieval of
information. The resulting time-saving benefit increases office efficiency and
productivity while decreasing stress and anxiety.
iv. Improve Working Environment
It can be disheartening to anyone walking through an office area to see vital
business documents and other information stacked on top of file cabinets or in
boxes next to office workstations. Not only does this create a stressful and poor
working environment, but if customers see this, it can cause them to form a negative perception of the business.
cabinets, aisles and workstations are clear and neatly organized to see how
important it is for even a small business to have a well-organized information
storage and retrieval system.
v. Electronic Vs. Manual System
Although a very small business may choose to institute a manual system, the
importance of electronic information storage and retrieval systems lie in the fact
that electronic systems reduce storage space requirements and decrease equipment
and labor costs. In contrast, a manual system requires budgetary allotments for
storage space, filing equipment and administrative expenses to maintain an
organized filing system. Additionally, it can be significantly easier to provide and
monitor internal controls designed to deter fraud, waste and abuse as well as
ensure the business is complying with information privacy requirements with an
electronic system.
17.3.2 IT STRATEGY
IT strategy (information technology strategy) is a comprehensive plan that
outlines how technology should be used to meet IT and business goals. An IT
strategy is a written document that details the multiple factors that affect the
organization's investment in and use of technology. Ideally, this strategy should
support and shape an organization's overall business strategy.
IT has become a critical part of almost every department. As a result,
businesses have high expectations of their technology investment. New technologies
must now reduce costs, increase security, enhance productivity or improve
workflows and communications.
But despite the important role IT strategy plays, it often ends up as an
afterthought, bolted on to the business strategy rather than forming an integral
part of it.
This is often because IT is seen as a separate entity, one in which technology
drives investments instead of the business’ own aims and objectives.
This divided approach, by both the business as a whole and by IT, is
counterproductive, and usually only serves to increase the disconnect between the
two.
IT strategies should cover all facets of technology management, including cost
management, human capital management, hardware and software management,
vendor management and risk management.
Executing an IT strategy requires strong IT leadership; the chief information
officer (CIO) and chief technology officer (CTO) need to work closely with business,
budget and legal departments as well as with other lines of business and user
groups to achieve its success.
Organizations formalize their IT strategy in a written document or balanced
scorecard strategy map. The plan and its documentation should be flexible enough
to change in response to new organizational circumstances, market and industry
conditions, business priorities and objectives, budgetary constraints, available skill
sets and core competencies, technology advances, and user needs.
IT strategies are also called technology strategies or IT strategic
plans. IT strategies should also be designed to be agile. For example, IT strategies for
some organizations had to change in 2020 due to the pandemic.
Basics of an IT strategy
A strong IT strategy provides a blueprint of how technology supports and
shapes the organization's overall business strategy. Its strategic goals should mirror
business projects (aka business alignment) and take into account the needs of key
stakeholders including employees, customers and business partners.
The strategy should offer a look at the organization's current technology
posture and provide an idea of where IT should head over the next three to five
years.
There are different models that can help executives construct an IT strategy.
Most contain certain key elements including:
• A high-level overview of the IT department that covers its mission, core
values, objectives and approaches to accomplishing its goals.
• Current budgets and spending forecasts for a multi-year timeline.
• An outline of current and future IT projects and initiatives with timelines
and milestones.
• A catalog of existing enterprise architecture; IT department capabilities and
capacities; and future needs and requirements with details about
infrastructure, staffing and other necessary resources.
• An analysis of IT's strengths and weaknesses.
• A list of the internal and external forces (such as market and industry
trends) that shape current technology requirements and innovations. This
includes the future forces expected to shape IT.
• A prediction of the potential opportunities and vulnerabilities that will
necessitate technology responses to best position the organization for
success.
17.3.3 IT-BUSINESS ALIGNMENT MODEL
Aligning the IT and business strategies involves more than combining them
into a single document. Under this model, every aspect of the IT strategy should
support the goals of the business.
This means that every IT-related investment, activity, service or project must
create or optimise business value. Additionally, to further improve alignment, IT
should be looking to achieve business-related metrics, not IT-related ones.
Alignment is not the sole responsibility of IT though. Leadership teams need to
be aware of their IT resources, something often neglected. They must also
understand how technology plays a role in their business’ effectiveness. And how it
can impact their competitiveness as well as the speed of response to market
change.
The key to an aligned IT and business strategy is mutual leadership and
accountability. IT must increase accountability for their results, but they must also
hold the business accountable for IT.
An IT strategy has become a critical element for organizational leadership in
recent decades. Its growing importance mirrors the rise of technology as a critical
element for business success. The importance of an IT strategy has been amplified
as organizations focus on digital transformation.
Technology is essential for creating new business models, products and
services; enhancing customer service and customer experience; increasing sales;
enabling workers and improving productivity; and supporting interactions with
vendors and other business partners. As such, organizations must craft a
technology strategy to accomplish these goals and to compete against other
organizations with the same objectives.
Some organizations may decide to forgo a separate IT strategy, particularly
platform companies and other businesses whose product is based on technology
offerings. Instead, these organizations may fold IT strategies into the overall
business strategy to create a single unified document.
Although the IT strategy by nature needs to address complex technical details,
it should not be considered a technical document. Instead, it should be considered
a business document. As such, it should be written in clear, concise language
that's free of technical jargon.
Business–IT alignment integrates information technology with the strategy,
mission, and goals of the organization.
Key characteristics to achieve this alignment are:
• The organization must view information technology as an instrument to
transform the business. This includes exploring other revenue streams and
integrating other facets of their business with each other. For example, using
one central data warehouse to combine two separate but partnering
businesses.
• An organization must treat customer service, both external and internal, as
being of the utmost importance. Communication between the organization and
their customers must not be lost.
• An organization must rotate both IT and business professionals across
different departments and job functions. They must have the knowledge and
experience of both sides of the business so that understanding and
communication are achieved.
• An organization must provide clear and specific goals to both the IT and
business employees. This will create the integration of both entities to
achieve a common goal.
• Ensure that IT and business employees understand how the company
makes or loses money. This is important so that money is not carelessly
poured into the IT department with no return on that investment.
• Organizations must create a vibrant and inclusive company culture. There
must not only be informational unity, but unity across the company as a whole.
How to create an IT strategy
Just as there are varying models for the document itself, there are multiple
ways to approach creating an IT strategy. Commonalities do exist, however. For
example, an initial review of the organization's existing strategic IT plan and related
documents is a good first step in any IT strategy.
This first step should be followed by an assessment of how the organization is
meeting established objectives, milestones, benchmarks and relevant key
performance indicators. The assessment should identify the technology currently in
use and the gaps that exist between these current IT operations and the objectives
and strategic goals outlined in the ongoing strategic plans.
Senior IT leaders then need to collaborate with their business-side
counterparts to develop the IT strategy further. Resources such as research reports
should be consulted in order to understand the business and technology trends
that will impact the organization's market.
This creates a groundwork for IT executives to develop short- and long-term
objectives, budget projections, technology predictions, and the perceived future
opportunities and vulnerabilities that go into the technology strategy. At this point,
an organization should have the corresponding summaries needed for the final
document.
A strong IT strategy relies not just on creating the plan, but also on proper
implementation of it. These documents won't do any good if they're ignored after
completion.
The documents should be used to guide tactical technology decisions, thereby
helping the IT department align its day-to-day operations with the overall business
model and mission.
However, adherence to the IT strategy should not be overly rigid. The fast
pace of technology advancement and innovation requires organizations
to be agile if they want to seize unforeseen developments. This will help an
organization be more competitive and better serve its market.
The technology strategy needs to be flexible. CIOs, CTOs and other executives
must also be nimble, and they should expect to reassess and redevelop the
technology strategy at least annually and possibly revisit it even more frequently.
Revisits of IT strategies should be done to verify tactical plans align with the
technology strategy, and to verify the technology strategy remains aligned with the
overall organizational mission -- as it changes in response to shifting dynamics.
Benefits of aligning your IT strategy with your business strategy
Utilising an IT-business alignment model can help improve your business’s
performance, leading to more efficient processes, faster response times and more
efficient supply chains. This is because all aspects of a business are not only
working towards common goals but are aware of what other areas are doing.
• Everyone is working towards a common goal
• IT supports the business strategy, adds value and drives success
• Easier to control and manage risk and compliance issues
• Meets the IT demands consistently and efficiently
• Increases the agility of the business and allows it to react to changes
• Creates greater integrations and collaboration between departments
What are the potential drawbacks?
It’s important to remember that alignment is not a one-off, box-ticking activity.
Alignment is something which requires regular reviews, assessments and
adjustments.
Even businesses that achieve alignment can experience disconnect over time.
As needs and priorities change, other departments will react. They’ll develop their
own strategies which work for them, but with little thought to how they must
adjust IT to support these developments.
If this continues, pain-points and organisational friction will arise, but the
cause is rarely identified. The business strategy has developed, but IT has
stagnated.
To summarize
The leadership team should review the IT strategy alongside business strategy
each quarter. This helps determine how technology is helping – or hindering –
growth.
This process benefits greatly from an external consultant. For companies
without an internal IT team, the consultant provides important technical insight.
And where companies do have an internal IT team, the consultant provides a much
needed objective point of view.
Although aligning the IT and business strategy takes time and cooperation, it
is an exercise worth completing. When IT is working to support business goals it
leads to happier, more productive teams, smarter investments and a greater return.
17.3.4 IT AND PORTER’S FIVE FORCES
IT is changing the rules of business and competition in three ways. Firstly,
because of new technology, business models are changing. This, in turn, is
changing the industry characteristics. Secondly, using IT, new strategies are being
formulated to service not only the customers, but also to meet objectives for all
stakeholders. An early adoption of technology helps the firm to maintain its
competitive advantage and remain top of mind for potential customers.
This forces competitors to devise similar technology-enabled strategies and become
“me too” in the process. And lastly, because of technology, new business models are
being created, which would not have been possible otherwise. In the subsequent
paragraphs, these three impacts are discussed in detail.
According to Porter, a firm is influenced by five forces, which also affect its
sustainable growth and profitability. These forces are suppliers’ bargaining power,
customers’ bargaining power, threat of new entrants (meaning how difficult it is for a
new player to enter the industry), threat of new products/services that can
substitute existing products/services at a cheaper rate, and rivalry among the
existing players in the industry. However, it
is seen that these five forces have different degrees of impact in different industries.
Even an individual force has a different level of impact on different firms in different
industries. Over a period of time the individual forces and their collective impact
will change as the government policies and macroeconomic and environment
conditions change. These changes can affect the attractiveness of the industry.
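To make the framework concrete, a simple five-forces assessment can be
expressed as a small data structure. The Python sketch below is illustrative only;
the ratings are hypothetical assumptions, not drawn from any study of a real
industry:

    # Hypothetical sketch: rate each of Porter's five forces from 1 (weak)
    # to 5 (strong) for an industry under analysis. A higher combined score
    # suggests a less attractive, more fiercely contested industry.
    five_forces = {
        "suppliers' bargaining power": 2,
        "buyers' bargaining power":    4,
        "threat of new entrants":      3,
        "threat of substitutes":       2,
        "industry rivalry":            5,
    }

    total = sum(five_forces.values())
    print(f"Combined competitive pressure: {total} out of 25")

Re-scoring the same dictionary at regular intervals is one simple way to track how
government policy or macroeconomic changes shift the individual forces over time.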
IT and Buying Power
With IT these forces can be controlled and monitored, which can become a big
factor in maintaining industry characteristics. For example, with technology-
enabled systems, purchase orders can be automated, vendor selection can be
faster, and inventory management can be automated. These features along with
automated billing capability will help an organization engaged in engineering
projects, such as Larsen & Toubro, to reduce lead time for procurement, improve
vendor relationship, and be able to meet service-level agreements made with
different stakeholders. Thus the power of buyers has increased tremendously.
IT and Entry Barrier
Banks and Financial institutions (FI) have been providing services to
customers through traditional manual systems for a long time. Customers had to
be physically available at the bank counters and in financial service providers’
offices for carrying out business transactions. With the introduction of technology,
online banking has become a de facto feature. Similarly, FIs provide online
and mobile investment facilities for customers, which allow them to take
investment decisions from any part of the world without physical presence in banks
and FI offices. However, these features require complex software and a high degree
of computer networking for cloud computing. All these need to be managed and
maintained by skilled personnel. Training the skilled personnel is a must as they
need to constantly update themselves with the latest in technology (such as cloud
computing, software as service, green technology). Also the end users of these
online systems need to be trained so that they can use the systems and increase
their productivity. These require a heavy amount of investment in technology,
training, and human resources. The heavy investment has increased the barrier to
enter this industry.
IT and Threats of Substitutes
In the automobile industry, there is a constant need to introduce new models and
better quality vehicles to meet the changing taste of customers. Introduction of
MPFI (Multi-Point Fuel Injection) and CRDI (Common Rail Direct Injection) engines
have increased product life cycles of many models. Also, there is a constant need to
upgrade features of existing products to induce existing customers to upgrade their
vehicles. These require rapid changes in design, ability to manufacture new as well
as modified vehicles as per the changed design specifications. Traditional
manufacturing systems will not be able to cope with the demand for meeting high
quality with reduction in manufacturing cycle time. Introduction of flexible
manufacturing systems made design easier while shortening the manufacturing cycle.
Thus, with the help of technology the organizations can manufacture new products
with shorter lead time and reduce threat of substitutes.
IT and Industry Rivalry
In service industries such as Airlines, Railways, Banking services, the pricing
strategies and service levels are almost similar. There is little to differentiate
between two service organizations and this makes competition in the industry fierce
to get the share of the pie. With technology adoption, each organization has been
trying consciously to reduce the operation time with respect to servicing a customer
request, delivering the service with service-level agreement, and finally preparing
invoice. Each time a firm introduces technology to get an edge with respect to its
competitors other players immediately invest in similar technologies to remain
competitive. Thus, technology has increased the level of competition among players
in the same industry to get a portion of the pie.
IT and Selling Power
Gone are the days when suppliers and buyers were not well informed about a
firm’s products, services and its strategies. With the help of the Internet, buyers,
suppliers, financial analysts, customers are able to know details about the firm’s
strategies and quality of products and services. Several blogs sites and review sites
are available to comment on the value delivered by the firm. This not only makes
the entire business process transparent to buyers and sellers, but also has an
impact on relationships with buyers and sellers as well as on linkages throughout the supply
channel. For example, for Maruti, introduction of ERP (enterprise resource
planning) has changed relationship with its buyers and suppliers. Earlier, even
though there was enough trust, still the buyers and suppliers did not have much
insight into the production planning process of Maruti. With ERP, Maruti has
become more transparent, which has helped buyers and sellers to plan accordingly.
As a result, Maruti could demand raw materials at a competitive price and buyers
could meet customer demands for new vehicles. A similar example is Nano, a
vehicle manufactured by Tata Motors, where bargaining power of buyers and sellers
has dramatically changed because of ERP. Buyers (in this case vehicle retail
outlets) are able to predict accurately the delivery time for their customers, and
sellers have been able to renegotiate the prices of raw materials because of
increased volume of production.
Role of Managers in IT-Enabled Strategy
Firstly, the managers need to understand industry characteristics and impact
of IT on them. They should be able to predict the role of IT in their industry and the
way it might affect industry characteristics in future. IT might change each force
separately and also the combined effect of all these five forces. Even the boundaries
of business models in the industry might change. Thought leaders in industries will
be able to see the foreseeable changes and become ready for them. They even might
use IT to change the characteristics so that their competitors are forced to follow
the same.
Secondly, managers should think of changes in the business models through
collaboration with organizations within as well as outside the industry. Unless
managers foresee these changes in business boundaries, they would not be able to
take full advantage of technology. In many of the recent benchmarking exercises in
IT industry, competitors have collaborated with each other through online forums
to share best practices with each other and gain customers’ confidence.
Thirdly, managers should manage the change that will be necessitated
because of technology adoption. They should devise a plan that will prioritize
technology investment in different departments/functions, develop business case
for investment and prepare a roadmap for implementing new technology. Business
managers should work with IT managers to decide architecture, integration of
applications, and choice of right technology so that business alignment is achieved
while maintaining competitive advantage.
Finally, managers should use IT to create a learning organization. A
technology-enabled learning organization should use technology to capture learning
from business transactions, create a knowledge repository, and then share these
best practices throughout the organization.
17.3.5 VALUE CHAIN MODEL
A value chain is a business model that describes the full range of activities
needed to create a product or service. For companies that produce goods, a value
chain comprises the steps that involve bringing a product from conception to
distribution, and everything in between—such as procuring raw materials,
manufacturing functions, and marketing activities.
A company conducts a value-chain analysis by evaluating the detailed
procedures involved in each step of its business. The purpose of a value-chain
analysis is to increase production efficiency so that a company can deliver
maximum value for the least possible cost.
Because of ever-increasing competition for unbeatable prices, exceptional
products, and customer loyalty, companies must continually examine the value
they create in order to retain their competitive advantage. A value chain can help a
company to discern areas of its business that are inefficient, then implement
strategies that will optimize its procedures for maximum efficiency and profitability.
In addition to ensuring that production mechanics are seamless and efficient,
it's critical that businesses keep customers feeling confident and secure enough to
remain loyal.
The overarching goal of a value chain is to deliver the most value for the least
cost in order to create a competitive advantage.
"Competitive advantage cannot be understood by looking at a firm as a whole.
It stems from the many discrete activities a firm performs in designing, producing,
marketing, delivering, and supporting its product" (Michael E. Porter).
In other words, it's important to maximize value at each specific point in a firm's
processes.
Elements in Porter's Value Chain
Rather than looking at departments or accounting cost types, Porter's Value
Chain focuses on systems, and how inputs are changed into the outputs purchased
by consumers. Using this viewpoint, Porter described a chain of activities common
to all businesses, and he divided them into primary and support activities, as
shown below.
[Diagram: Porter's Value Chain, showing the primary activities and the support activities]
Primary Activities
Primary activities relate directly to the physical creation, sale, maintenance
and support of a product or service. They consist of the following:
Inbound logistics – These are all the processes related to receiving, storing,
and distributing inputs internally. Your supplier relationships are a key factor in
creating value here.
Operations – These are the transformation activities that change inputs into
outputs that are sold to customers. Here, your operational systems create value.
Outbound logistics – These activities deliver your product or service to your
customer. These are things like collection, storage, and distribution systems, and
they may be internal or external to your organization.
Marketing and sales – These are the processes you use to persuade clients to
purchase from you instead of your competitors. The benefits you offer, and how well
you communicate them, are sources of value here.
Service – These are the activities related to maintaining the value of your
product or service to your customers, once it's been purchased.
Support Activities
These activities support the primary functions above. In our diagram, the
dotted lines show that each support, or secondary, activity can play a role in each
primary activity. For example, procurement supports operations with certain
activities, but it also supports marketing and sales with other activities.
Procurement (purchasing) – This is what the organization does to get the
resources it needs to operate. This includes finding vendors and negotiating best
prices.
Human resource management – This is how well a company recruits, hires,
trains, motivates, rewards, and retains its workers. People are a significant source
of value, so businesses can create a clear advantage with good HR practices.
Technological development – These activities relate to managing and
processing information, as well as protecting a company's knowledge base.
Minimizing information technology costs, staying current with technological
advances, and maintaining technical excellence are sources of value creation.
Infrastructure – These are a company's support systems, and the functions
that allow it to maintain daily operations. Accounting, legal, administrative, and
general management are examples of necessary infrastructure that businesses can
use to their advantage.
Companies use IT in these primary and support activities as "building blocks"
to create a valuable product or service for their competitive advantage.
17.3.6 STRATEGIC RESOURCES AND CAPABILITIES
Strategic resources are those allowing you to achieve strategic goals and
therefore generate sustainable competitive advantage. There are 4 'generic' kinds of
resources:
Human resource: The heart of the issue with Human Resources is the skills-
base of the business. What skills does the business already possess? Are they
sufficient to meet the needs of the chosen strategy? Could the skills-base be flexed
/ stretched to meet the new requirements?
Financial resource: Financial resources concern the ability of the business to
"finance" its chosen strategy. For example, a strategy that requires significant
investment in new products, distribution channels, production capacity and
working capital will place great strain on the business finances. Such a strategy
needs to be very carefully managed from a finance point-of-view.
Physical resource: The category of physical resources covers a wide range of
operational resources concerned with the physical capability to deliver a strategy.
This includes production facility, marketing facility and information technology
facility.
Intangible resource: patents, know-how, relationships, etc.
However, these must satisfy the following criteria to be of strategic importance
(i.e., to be core competences): value, rarity, inimitability, and non-substitutability.
Resource-based theory explains strategic resources and capabilities best.
Resource-based theory contends that the possession of strategic
resources provides an organization with a golden opportunity to develop competitive
advantages over its rivals (Barney, 1991).
These competitive advantages in turn can help the organization enjoy strong profits,
especially over time.
According to resource-based theory, organizations that own “strategic
resources” have important competitive advantages over organizations that do not.
Some resources, such as cash and trucks, are not considered to be strategic
resources because an organization’s competitors can readily acquire them. Instead,
a resource is strategic to the extent that it is valuable, rare, difficult to imitate, and
non-substitutable.
Resource-based theory can be confusing because the term resource is used
in many different ways within everyday common language. It is important to
distinguish strategic resources from other resources. To most individuals, cash is
an important resource. Tangible goods such as one’s car and home are also vital
resources. When analyzing organizations, however, common resources such as
cash and vehicles are not considered to be strategic resources. Resources such as
cash and vehicles are valuable, of course, but an organization’s competitors can
readily acquire them. Thus an organization cannot hope to create an enduring
competitive advantage around common resources.
A strategic resource is an asset that is valuable, rare, difficult to imitate,
and non-substitutable.
Valuable resources aid in improving the organization’s effectiveness and
efficiency while neutralizing the opportunities and threats of competitors.
Difficult-To-Imitate resources often involve legally protected intellectual
property such as trademarks, patents, or copyrights. Other difficult-to-imitate
resources, such as brand names, usually need time to develop fully.
Rare resources are those held by few or no other competitors.
Non-Substitutable resources exist when the resource combination of other
firms cannot duplicate the strategy provided by the resource bundle of a particular
firm.
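To make the four criteria concrete, the short Python sketch below tests whether a
resource qualifies as strategic. It is a minimal illustration; the resources and
their ratings are hypothetical assumptions, not drawn from any real firm:

    # A resource is strategic only if it meets all four criteria:
    # valuable, rare, inimitable (difficult to imitate), and non-substitutable.
    def is_strategic(valuable, rare, inimitable, non_substitutable):
        return all([valuable, rare, inimitable, non_substitutable])

    # Cash is valuable but neither rare nor inimitable, so it is not strategic.
    print(is_strategic(valuable=True, rare=False,
                       inimitable=False, non_substitutable=False))   # False

    # A long-established brand name may satisfy all four criteria.
    print(is_strategic(valuable=True, rare=True,
                       inimitable=True, non_substitutable=True))     # True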
Resources can be divided into two main types:
Tangible resources are resources that can be readily seen, touched, and
quantified. Physical assets such as a firm’s property, plant, and equipment are
considered to be tangible resources, as is cash.
Intangible resources are quite difficult to see, touch, or quantify. Intangible
resources include, for example, the knowledge and skills of employees, a firm’s
reputation, and a firm’s culture.
Resources are money-derived. You can spend money and immediately acquire
a resource. For example, a physical server is a resource because you can buy one
and have it shipped to you.
Resources and capabilities are the basic building blocks that organizations use
to create strategies. These two building blocks are tightly linked—capabilities tend
to arise from using resources over time.
While resources refer to what an organization owns, capabilities refer to what
the organization can do. More specifically, capabilities refer to the firm’s ability to
bundle, manage, or otherwise exploit resources in a manner that provides value
added and, hopefully, advantage over competitors.
Capabilities are what the organization can do based on the resources it
possesses, another key concept within resource-based theory. A good and easy-to-
remember way to distinguish resources and capabilities is this: resources refer to
what an organization owns, capabilities refer to what the organization can do.
Capabilities tend to arise or expand over time as a firm takes actions that build on
its strategic resources. Capabilities are important in part because they are how
organizations capture the potential value that resources offer. Customers do not
simply send money to an organization because it owns strategic resources. Instead,
capabilities are needed to bundle, manage, and otherwise exploit resources in a
manner that provides value added to customers and creates advantages over
competitors.
Some firms develop a dynamic capability, the unique ability to improve,
update, or create new capabilities, especially in reaction to changes in its
environment. A firm that enjoys a dynamic capability is skilled at continually
adjusting its array of capabilities to keep pace with changes in its environment.
Capabilities are things that organizations develop with time. One cannot
spend money and immediately acquire a capability. Project management, service
management, IT governance, and process improvement are all capabilities. IT
management capabilities, though costly to develop, improve IT effectiveness and
reduce IT costs.
17.4 REVISION POINTS
• Information technology
• IT strategy and its basics
• IT-Business model alignment
• Porter’s five force model
• Value chain model
17.5 INTEXT QUESTIONS
1. Explain the features of Information Technology.
2. What are the key elements required to construct IT strategy?
3. List out the benefits of aligning your IT strategy with your business strategy.
4. Explain Porter's five forces model for IT strategy.
5. What are the characteristics of strategic resource?
17.6 SUMMARY
Information technology is the use of computers to store and process facts
and figures into a useful, organized form.
IT strategy (information technology strategy) is a comprehensive plan that
outlines how technology should be used to meet IT and business goals.
Every IT-related investment, activity, service or project must create
or optimise business value
Utilising an IT-business alignment model can help improve your business’s
performance
According to Porter, a firm is influenced by five forces, which also affect its
sustainable growth and profitability
A value chain is a business model that describes the full range of activities
needed to create a product or service
A strategic resource is an asset that is valuable, rare, difficult to imitate,
and non-substitutable
17.7 TERMINAL EXERCISE
1. ______________ is a written document that details the multiple factors that
affect the organization's investment in and use of technology
2. ______________requires regular reviews, assessments and adjustments.
3. Activities that relate directly to the physical creation, sale, maintenance and
support of a product or service is called _______________ in value chain.
4. ____________ refer to what the organization can do.
17.8 SUPPLEMENTARY MATERIALS
1. https://searchcio.techtarget.com/definition/IT-strategy-information-
technology-strategy
2. https://www.quostar.com/blog/why-align-business-strategy-it-strategy/
17.9 ASSIGNMENTS
1. Explain the value chain model for IT strategy.
17.10 SUGGESTED READING/REFERENCE
1. https://www.researchgate.net/publication/226349758_IT_and_Porter's_Co
mpetitive_Forces_Model_and_Strategies
2. https://opentextbc.ca/strategicmanagement/chapter/resource-based-
theory/#f4.2desc
17.11 LEARNING ACTIVITIES
1. Select an organization of your interest and analyse it using different strategy
models. Understand how they are using it for competitive advantage.
17.12 KEYWORDS
Strategy
Bargaining power,
Value chain
Resource
Capabilities
LESSON - 18
INFORMATION TECHNOLOGY PLANNING
18.1 INTRODUCTION
Information technology planning is a discipline within the information
technology and information systems domain and is concerned with making
the planning process for information technology investments and decision-making a
quicker, more flexible, and more thoroughly aligned process.
18.2 OBJECTIVES
• To understand the concept of IT planning
• To recognize the steps in IT planning.
18.3 CONTENTS
18.3.1 IT planning
18.3.2 Establish Leadership and Support
18.3.3 Assess Your Resources
18.3.4 Define Your Needs
18.3.5 Explore Solutions
18.3.6 Write the Plan
18.3.7 Get Funding and Implement the Plan
18.3.1 IT PLANNING
IT planning guides the use of resources for IT systems and services used
throughout the organisation.
IT planning has three components: IT governance, IT leadership development,
and IT strategic planning.
IT governance defines the processes, components, structures, and participants
for making decisions regarding the use of IT.
IT leadership development defines who will lead and drive IT strategies to a
successful conclusion. It also prepares and develops the current and next
generation of IT leaders across the organization.
Information Technology planning is a process that takes time and resources in
order to understand what is appropriate for the organization. Program directors and
their management staff may use this resource to further their understanding of
what is involved in technology planning.
• Establish leadership and support
• Assess your resources
• Define your needs
• Explore solutions
• Write the plan
• Get funding
• Implement the plan
Effective technology planning is an involved process. It takes a commitment of
time and resources from senior managers and other staff. In order to make good
decisions, an organization also needs to understand key aspects of technology.
But through technology planning, organizations can make significant gains.
Sound technology management leads to greater productivity, increased staff
morale, and improved service to clients through having machines that work,
networks that give access to information, and applications that are appropriate for
an organization's mission.
Information can transform organizations by giving them the tools to
understand the environment they're working in, to measure the effectiveness of
their actions, and to counter opposing information from other groups and policy
makers. Technology is uniquely positioned to harness the power of information.
Technology planning is a process
1. Establish leadership and support.
2. Assess your resources.
3. Define your needs.
4. Explore solutions.
5. Write the plan.
6. Get funding.
7. Implement the plan.
18.3.2 ESTABLISH LEADERSHIP AND SUPPORT
Setting up an Information Technology team and ensuring management and staff
buy-in will allow you to get started with the whole organization behind you.
A tech plan isn't written in a day. The process behind the writing is the most
important part, and the process is all about how staff work together to find the best
solutions.
Information Technology Team: It is crucial that the Information Technology
plan be a product of the whole organization, not just one staff person's brainchild.
Nonprofit Information Technology experts all recommend that you set up a
Information Technology team to lead your Information Technology planning
process, if you do not have a team already. An IT team should be made up of a wide
range of staff members. It is very important to have your executive director or
another person in management involved. Your team might be composed of a board
member, the executive director, a project manager, an administrative assistant, an
accountant and a development director, as well as your system administrator, if
you have one. Set up a regular meeting schedule to review progress on the plan.
Make sure to distribute responsibilities and set clear expectations so that each
person is involved in the process.
Lead Person: It is crucial to have one person who is designated to lead the
Information Technology team and coordinate the whole process. That person need
not be someone who is already in a management position, but should be someone
with leadership capabilities and relative comfort with Information Technology.
Management Support: It is next to impossible to do an Information Technology
plan and carry it out without active support from management. Management is the
key to financial support and funding for the plan. It also makes a huge difference if
you can convince your management to stand up and talk to staff about the plan.
One strategy for convincing management is to describe the current costs of not
doing a plan. Let them know how many hours of staff time are wasted, and how
much money is lost trying to make the current system work. If your organization
requires a major Information Technology overhaul, management will appreciate a
plan which is broken into implementation phases, so that they are not faced with
funding the entire initiative in one budget year. Even if management is reluctant,
they should be consulted and informed at every major step.
18.3.3 ASSESS YOUR RESOURCES
The first step in planning is to assess your existing Information Technology.
What do you have in place? How well is it working?
The key is to spend some time asking yourself what is working, and what
needs improvement. What Information Technology do you have in place in your
organization? What Information Technology skills does your staff have? Who does
your organization rely on for Information Technology support?
One part of assessment is taking a basic inventory of the computers and
software in your organization. A hardware inventory worksheet can give you a sense
of the overall capacity and range of workstations in your organization. A software
inventory worksheet can give you an overview of the software resources and how
they are distributed on different computers.
By taking this step, you can help avoid buying redundant technologies or
incompatible technologies, and you can help assess whether any of your current
Information Technology is obsolete.
In the hardware inventory worksheet, you will want to write down the following
items for each computer:
• User
• Brand
• Model
• Serial Number
• Monitor type
• Processor type and speed
• RAM
• Hard disk capacity
• Available hard disk space
• Operating system
• Modem or network card (if any)
• Ports available (USB, FireWire, SCSI, etc.)
• Floppy, CD, or DVD drive (Be specific: indicate the type of floppy drive or
whether you have a CD, CD-R, CD-RW, DVD, DVD-R, DVD+RW, DVD-RW,
or DVD-RAM drive)
• Any additional equipment attached to the computer
• Other equipment such as network printers, switches, firewalls, modems, etc.
In the software inventory worksheet, you will want to mark down major
software packages that you use, along with their version numbers.
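As an illustration of how such worksheets can be structured, the Python sketch
below models one row of a hardware inventory. It is a minimal example; the field
names and the sample values are hypothetical assumptions, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class HardwareRecord:
        """One row of the hardware inventory worksheet."""
        user: str
        brand: str
        model: str
        serial_number: str
        processor: str
        ram_gb: int
        disk_capacity_gb: int
        disk_free_gb: int
        operating_system: str

    # A hypothetical entry for a single workstation.
    ws1 = HardwareRecord(
        user="Accounts clerk", brand="Dell", model="OptiPlex 3090",
        serial_number="ABC123", processor="Intel Core i5, 2.6 GHz",
        ram_gb=8, disk_capacity_gb=512, disk_free_gb=210,
        operating_system="Windows 11",
    )
    print(ws1)

The same record layout works equally well as columns in a spreadsheet; the point
is that every machine is described with the same fields, so gaps and obsolete
equipment stand out.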
There's more to an assessment than listing your hardware and software. For
example, you need to document your network set-up, access policies, and
protocols; document your services, including centralized databases, email, and
groupware; and document your management practices, from staffing to written
policies.
The most important part of assessment is to ask yourself some questions
about how well your systems are currently working.
18.3.4 DEFINE YOUR NEEDS
Why do you need Information Technology? What will new Information
Technology help you do that you can't do already? Defining your needs will enable
you to choose the most efficient solutions.
The trick to defining your needs is to describe what you want to do with
Information Technology, not what you think you need to buy. Consider the
problems you might run into in your organization--new policies to institute,
procedures you need to follow to find new funding, and new staff members to work
into your organization's structure. Then consider all the potential tools, including
Information Technology tools that you might use to solve these problems.
Start by thinking more abstractly, then begin to discuss how Information
Technology might help you solve your problems and help your organization better
fulfill its mission. What might your staff members be able to accomplish with a new
intranet? What new capability will make a critical difference to productivity?
Put together a good Information Technology team, one that represents all the
major program and administrative areas of the organization--including a decision
maker who is involved in strategic planning--and technical staff. Remember that a
team full of people who have technical skills is not necessarily the best equipped to
think of Information Technology in terms of your organization's mission. It also
helps if the Information Technology team gathers input from staff about their
needs. You can get staff input through a survey, or through individual interviews.
The more you can connect your Information Technology needs to your larger
mission as an organization, the better your plan will be. Its recommendations will
be more useful and meaningful, as well as more convincing to potential funders.
As you define your needs, develop a sense of what your priorities are. What is
mission-critical for the next month, and what can wait half a year? For instance, a
nonprofit might decide that backing up all data takes first priority, while developing
a website for funders can wait a few months.
Also look to other organizations in your sector to learn about best practices in
Information Technology. While you don't want to follow other organizations blindly,
keeping abreast of changes in your field is essential to being able to take advantage
of Information Technology in a timely manner.
18.3.5 EXPLORE SOLUTIONS
The next step is to research existing Information Technology options and
decide on ones that meet your needs at a minimum cost.
Once you have assessed your resources and defined your needs, the next step
is to make a concrete plan for how to meet those needs. This phase of Information
Technology planning requires the most technical knowledge.
If you have not already been working with a consultant, you may want to hire
one at this point. Make sure any consultants you hire know what your budget
range is. Tell them what support resources you will have available so they do not
recommend a system that requires extensive maintenance if you do not have the
staff time or expertise for it.
Deciding on concrete solutions that fit within your budget can be the most
difficult part of Information Technology planning. It's important to make sure that
all the solutions you pick are compatible. For instance, if you want a new database,
a new back-up system, and a new network, you will have to make sure that the
database can be shared across the type of network you are getting, and the back-up
system can copy the database when it is open, if necessary. Information Technology
is interdependent and there are dozens of options with different price tags for each
Information Technology decision, so negotiating your priorities can get very tricky.
The important thing is to go back to your original vision of how Information
Technology can help you accomplish your mission. What are the key new functions
you want Information Technology to fill? Consider price, of course, but don't get
locked into an inexpensive Information Technology that won't grow with you and
won't work with future technologies.
Before you decide on a solution or defer to a consultant, make sure you have a
solid understanding of the different options. Start with background information and
further resources to answer overarching Information Technology questions:
• What type of network do you need?
• What hardware purchases or upgrades should you make?
• What software should you choose?
• What is the best way to connect to the Internet?
• How should you go about getting your organization a presence on the web?
• How can you take care of your database needs?
• What is the best way to train your staff?
• How can you take care of regular system administration needs?
18.3.6 WRITE THE PLAN
Your written plan should document your resources, needs, and solutions, as
well as your budget.
There are many different forms that nonprofits use to write their Information
Technology plans. No matter which form you choose, there are several key
elements:
Organizational profile
Who are you as an organization? How are you organized and what kind of
services do you provide? Your Information Technology plan should open with a
short description of your organization that will set the context for your Information
Technology needs. A good organizational profile should include your mission and
vision as well as practical details on your current work. This overview will ground
the Information Technology plan in your mission and will remind everyone, from
your staff and board to potential funders, that this isn't just about Information
Technology. It's about equipping your organization to better serve its mission.
Information Technology vision
Your organization's Information Technology vision (together with a brief section
about the current state of Information Technology in your organization) can take
the form of an executive summary. How will Information Technology further your
organization's mission? What is your long-range vision for Information Technology
use? While this isn't necessarily an essential part of the plan, this section can give
an overview of your Information Technology goals as they relate to your
organization's mission.
If you include this section, it should draw heavily on the work you did early in
the Information Technology planning process to define your needs.
Projects
The body of your Information Technology plan is a description of the
Information Technology projects you will undertake. For each project, you will want
to provide the following information:
Description: A brief description of what it is you plan to do. For example, if
your project is "Establish a Local Area Network," the beginning of your description
might read:
• "Nonprofit A currently has 10 independent workstations, with no network
connecting them. This project will set up a peer-to-peer, local area network
for these 10 computers, using Windows 98 as a network operating system."
Benefits: What will this project enable your organization to do that you could
not do before? It helps to describe the benefits up front, so that it is clear how
much of a priority the project is. A description of the benefits will also be useful if
you show your Information Technology plan to funders, since it may convince them
of the importance of the Information Technology investment you describe. See the
article on How Information Technology is Funded: The Basics for more information
on writing an Information Technology Funding Proposal.
Two sample benefits of setting up a Local Area Network might include:
• A local area network will allow Nonprofit A to set up cost-effective shared
access to the Internet and email. Nonprofit A's current usage of email and
the web is limited to one-at-a-time use. The possibility of simultaneous,
continuous access would allow staff members to use the Internet frequently
for research and to use email as an efficient communication tool with
colleagues in other organizations and with clients, members, and
participating organizations.
• Nonprofit A has several departments that all keep data about the same
clients. Currently, these departments have no way to access each other's
records. A local area network would allow all staff to input and access data
in one database which is shared across the network. Having one database
will save time that is now wasted in multiple entries of the same data and
will create one accurate source for all client information.
Tasks: What tasks will it take to complete this project? Listing the tasks will
be extremely useful in establishing a timeline and beginning the implementation.
Three sample tasks associated with setting up a local area network might be:
• Lay down wiring
• Configure workstations
• Train staff to access shared files over the network
Cost: At the end of your description of a project, list the costs associated with
each step. For instance, one cost associated with setting up a local area network
might be:
• Purchase five 10BaseT Ethernet cards
Budget
No Information Technology plan is complete without a budget. Creating a
budget is the only way to tell whether or not your plan is practical.
The budget should include estimated costs for all aspects of the projects you
have listed. A common mistake is to include only hardware and software purchases
in the budget. Actually, a good rule of thumb is that approximately 70 percent of
your Information Technology spending should go to technical support and training,
and only 30 percent to Information Technology purchases.
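As a quick worked example of this rule of thumb (the total figure below is a
hypothetical assumption):

    # Applying the 70/30 rule of thumb to a hypothetical annual IT budget.
    total_budget = 100_000
    support_and_training = 0.70 * total_budget    # 70,000 for support/training
    purchases = 0.30 * total_budget               # 30,000 for hardware/software
    print(support_and_training, purchases)

In other words, for roughly every unit of currency spent on equipment and
software, budget about two more for the people and training needed to keep it
useful.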
If you plan to hire a system administrator, factor in the system administrator's
salary. If you plan to use a consultant for regular troubleshooting and
maintenance, you should estimate the frequency and cost of the consultant's
services.
Be sure to include staff assignments and time budgets. Although this may be
harder to estimate than your other costs, it can help you determine when it makes
more sense to hire outside help.
If your Information Technology plan is divided into different phases, you may
want to divide your budget into phases as well. This will allow you to distinguish
immediate investments from more long-term projects.
Timeline or critical path
A timeline would include the phases of work and the deadlines for
implementation of your plan.
For nonprofits that need to raise the funds before they can implement a
Information Technology plan, a critical path may be more appropriate; similar to a
timeline, it would indicate the order in which the different aspects of the project
need to be completed.
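To illustrate the idea, the short Python sketch below computes the minimum
project duration from a set of tasks with durations and prerequisites. The tasks
and figures are hypothetical assumptions used only for illustration:

    from functools import cache

    # Hypothetical tasks: name -> (duration in days, prerequisite tasks).
    tasks = {
        "raise_funds":        (30, []),
        "lay_wiring":         (5,  ["raise_funds"]),
        "configure_machines": (4,  ["raise_funds"]),
        "train_staff":        (3,  ["lay_wiring", "configure_machines"]),
    }

    @cache
    def earliest_finish(name):
        # A task can start only after all of its prerequisites have finished.
        duration, prereqs = tasks[name]
        start = max((earliest_finish(p) for p in prereqs), default=0)
        return start + duration

    # The project cannot finish before its longest dependency chain does.
    print(max(earliest_finish(t) for t in tasks))   # 38 days

Here the chain raise_funds -> lay_wiring -> train_staff is the critical path:
delaying any task on it delays the whole plan, while configure_machines has one
day of slack.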
18.3.7 GET FUNDING
You can now use your Information Technology plan as a key element in
seeking Information Technology funding.
Implement the Plan
Setting a timeline, assigning responsibilities, and evaluating your progress will
make your plan a reality.
As the poet Robert Burns once said, "The best laid plans of mice and men
often go awry." The sad truth is that many Information Technology plans sit on a shelf
and are never carried out. Implementation is not automatic. It requires conscious
planning in its own right. Even if you have been working with a consultant all along
to do your Information Technology plan, managing the implementation is your
organization's task. Only you can think through who can carry out different aspects
of the plan and when they will do it.
The following elements are key to a successful implementation:
Designate a point person. One person should be in charge of overseeing the
process. This is not necessarily a technical role, but a management role. This
person may also communicate with and oversee consultants who implement parts
of the plan. They will report back periodically to the Information Technology team
and to management.
Break projects into tasks. Make sure the individual steps are clear so you can
monitor progress.
Assign responsibilities. Make clear which staff member will carry out which
task.
Establish a timeline. Set milestones and target dates for different phases of
your plan.
Evaluate your success. Evaluation should be built into any planning process,
and Information Technology planning is no exception. Decide beforehand what
indicators of success you will look for. Build evaluation checkpoints into your
timeline.
Update your Information Technology plan. An Information Technology plan
should be a living, breathing document. As new needs and priorities come up,
modify the plan accordingly! If one Information Technology project does not help
you as you hoped, you are free to go back to the plan to rethink and rewrite.
18.4 REVISION POINTS
• Features of Information technology
• Steps in information technology planning process.
18.5 INTEXT QUESTIONS
1. What are the three components of IT planning?
2. What do you mean by assessing the resources?
3. How to define your IT needs?
4. Write a short note on organizational profile.
5. List out the key elements for successful implementation of IT planning.
18.6 SUMMARY
IT planning guides the use of resources for IT systems and services used
throughout the organisation.
Information Technology planning is a process that takes time and resources
in order to understand what is appropriate for the organization
IT planning team might be composed of a board member, the executive
director, a project manager, an administrative assistant, an accountant and
a development director, as well as your system administrator.
One part of assessment is taking a basic inventory of the computers and
software in your organization
The trick to defining your needs is to describe what you want to do with
Information Technology
The budget should include estimated costs for all aspects of the projects you
have listed.
18.7 TERMINAL EXERCISE
1. _____________ defines the processes, components, structures, and
participants for making decisions regarding the use of IT
2. ________________ development defines who will lead and drive IT strategies to
a successful conclusion
3. _______________________ is a process that takes time and resources in order
to understand what is appropriate for the organization
4. _________________ should be with leadership capabilities and relative comfort
with Information Technology.
18.8 SUPPLEMENTARY MATERIALS
1. https://clintonwhitehouse2.archives.gov/OMB/inforeg/infotech.html
2. https://paginas.fe.up.pt/~acbrito/laudon/ch1/chpt1-4main.htm
18. 9 ASSIGNMENTS
1. Discuss the steps in information technology planning process.
18.10 SUGGESTED READING/REFERENCE
1. Alexis Leon and Mathew Leon, Fundamentals to information Technology,
Leon Tech World, New Delhi, 2008.
2. Ramesh Behi, Information Technology for Management, Tata McGraw Hill,
New Delhi, 2012.
18.11 LEARNING ACTIVITIES
1. Consider the case of nationalized banks. How it has been computerized?
Connect the planning steps that you read in this chapter with the
conversion process.
18.12 KEYWORDS
IT governance
IT leadership
Organizational profile
Information technology vision
Budget
Critical path
LESSON - 19
MANAGING IS DEPARTMENT
19.1 INTRODUCTION
Firms organize their Information Services function in very different ways,
reflecting the nature of their business, their general structure and business
strategy, their history, and the way they wish to provide information services to the
business units. Utmost care should be taken to manage the vulnerabilities of
information system. If something goes wrong then the entire oranisation will have
to face the consequences or adverse effects. In this chapter let us discuss about hoe
information system department is managed in corporate.
19.2 OBJECTIVES
• To know about the people in-charge of information system
• To understand the major operations in information system
• To become familiar with the vulnerabilities in information management
• To create awareness of various controls for information system
19.3 CONTENTS
19.3.1 Managing Information System
19.3.2 Managing Information Systems Operations
19.3.3 Threats to Security, Privacy, and Confidentiality in IS Operations
19.3.4 Risk Assessment in Safeguarding Information Systems
19.3.4.1 General Controls
19.3.4.2 Application Controls
19.3.5 Auditing Information Systems
19.3.1 MANAGING INFORMATION SYSTEM
The Information Services (IS) department is the unit responsible for providing
or coordinating the delivery of computer-based information services in an
organization. These services include:
1. Developing, operating, and maintaining organizational information
systems
2. Facilitating the acquisition and adaptation of software and hardware
3. Coordinating the delivery of many of these services, rather than providing all
of them itself.
Firms organize their Information Services function in very different ways,
reflecting the nature of their business, their general structure and business
strategy, their history, and the way they wish to provide information services to the
business units. Most IS departments remain centralized. In place of the
traditional functional structure, a more contemporary structure of a centralized
IS unit has emerged; this structure is far better suited to servicing a firm's
business units with specialized consulting and end-user oriented services.
Centralized IS departments are giving way in many firms to the IS function
decentralized to the business units of the firm. In a decentralized structure,
the corporate IS department is principally responsible for:
• The corporate information system infrastructure - the telecommunications
network and the management of corporate databases
• Developing and maintaining corporate information systems standards
• Supervising systems integrators who perform information services for the firm
under outsourcing arrangements
• Interacting with vendors to ensure quantity discounts and other benefits of
corporate scale.
Many companies have created a senior management position, the Chief
Information Officer (CIO), to oversee the use of information technology.
Responsibilities include ensuring the coordination of the overall corporate
information technology effort.
The primary advantage of decentralization is that it creates departmental IS
groups who report directly to the heads of their business units. These members are
familiar with the unit's specific needs and are responsive to its concerns. Members
of the Information Service units possess a wide variety of skills. Most of these
people combine their technology expertise with an understanding of the corporate
business lines they serve.
Trend: With the increasing role of outsourcing and acquisition of software
packages, the IS units of most firms are expected to become smaller over time, yet
their specialists will have to offer enhanced expertise in both technology and
business processes.
Information Systems Specialists
Two principal occupations of IS specialists are analysts and
programmers.
1. Responsibilities of analysts include:
• Analyze the users' information requirements, develop systems
prototypes, and often design information systems based on the
requirements specification
• Play the key role in translating business problems and opportunities into
information systems
• Provide a liaison between the users and other IS specialists
• Act as problem solvers who can perform a variety of tasks relating to
defining, developing, and exploiting information systems. They must
combine business knowledge with a keen understanding of the potential of
technology in order to communicate effectively with end users on the one
hand and technical specialists or programmers on the other.
• Senior analysts act as project managers during system development
2. Responsibilities of the programmers include:
• Implement the analyst's specifications. A systems designer translates
these specifications of what the system is expected to do into high-level
specifications for the needed system components.
• Develop and test the programs that satisfy the requirements established
by the analysts, using the design specifications worked up by the
designer.
• Maintain the programs. These applications programmers are supported
by systems programmers who maintain systems software and have
considerable technical expertise.
Many organizations have created a senior management position, the Chief
Information Officer (CIO) who is responsible for information services. The CIO has
the following responsibilities:
1. Coordinates the entire corporate IS effort and has the primary responsibility
for linking the IS plans and implementation to the company's business
plans.
2. Focuses the attention of top corporate management on the contribution that
information technology can make to the firm's business.
19.3.2 MANAGING INFORMATION SYSTEMS OPERATIONS
The objective of the IS operations staff is to keep information systems running
smoothly: to process transactions with an acceptable response time, deliver reports
on time, and ensure reliable and efficient operation of data centers and
telecommunications networks. In the face of the general trend toward distribution of
the information processing function and the growth of end-user computing,
corporate data centers retain their vital role as repositories of corporate databases.
Like any other major corporate asset, information systems must be controllable.
Functions of IS Operations
The principal concern of IS operations is to ensure that information services
are delivered in an uninterrupted, reliable, and secure fashion. Challenges include:
1. Managing the distributed hardware environment
2. Variety of possible organizational designs for the IS function itself
(centralization and decentralization, partial or total outsourcing).
3. Design of the system must be selected to match corporate objectives and
then this design must be modified as the objectives change.
Major functions of IS operations include:
1. Data entry
2. Operations of computer systems in data centers
3. Operational support for the equipment in the hands of end users and
support of end users with information centers and help desks
4. Maintenance of wide area telecommunications links and local area networks
5. Maintenance of databases, including periodic reorganizations for efficiency's
sake
6. Production control in data centers
7. Production support
8. Ensuring the physical security of operations, including the operation of
firewalls against unauthorized access over the Internet
9. Controlled distribution of information output, such as reports, perhaps in an
electronic form
10. Dealing with vendors and consultants, in particular, supervising the vendors
to whom services have been outsourced.
11. Planning the necessary processing and telecommunications capacities
12. Protecting the systems from a variety of threats to its security
19.3.3 THREATS TO SECURITY, PRIVACY, AND CONFIDENTIALITY IN IS OPERATIONS
It is necessary for an organization to identify the nature of possible threats to
its information systems and establish a set of measures, called controls, to ensure
their security (and, beyond that, to also ensure the privacy and confidentiality of
information stored in the systems). It is then necessary to continually verify
these controls through the auditing process.
Information system security is the integrity and safety of its resources and
activities.
Privacy is an individual's right to retain certain information about himself or
herself without disclosure. Comprehensive security safeguards are a prerequisite
for the privacy of individuals with respect to the information stored about them in
information systems.
Confidentiality is the status accorded to data, limiting its use and
dissemination. Thus, we can keep certain data confidential to enforce our privacy
policies.
Information Systems Security and Threats to It
The security of information systems is maintained by measures taken to
prevent threats to these systems or to detect and correct the effects of any damage.
Information system security aims to protect corporate assets or, at least, to limit
their loss. Security measures limit access to information to authorized individuals;
there can be no privacy or confidentiality of data records without adequate security.
Security threats have four principal sources which include:
1. Human error
2. Computer abuse or crime
3. Natural and political disasters
4. Failures of hardware or software
Computer Crime and Abuse
Computer crime is defined as any illegal act in which a computer is used as
the primary tool. Computer abuse is unethical use of a computer.
Security threats related to computer crime or abuse include:
1. Impersonation: Gaining access to a system by identifying oneself as another
person. Having defeated the identification and authentication controls
employed by the system, the impersonator enjoys the privileges of a
legitimate user.
2. Trojan horse method: Concealing within an authorized program a set of
instructions that will cause unauthorized actions.
3. Logic bomb: Unauthorized instructions, often introduced with the Trojan
horse technique, which stay dormant until a specific event occurs (or until a
specific time comes, as the instructions may keep checking the computer's
internal clock), at which time they effect an unauthorized act.
4. Computer viruses: Segments of code that are able to perform malicious acts
and insert copies of themselves into other programs in the system and onto
the diskettes placed in the "infected" PC. Because of this replication, a
virus will progressively infect "healthy" programs and systems. Close
relatives of viruses are worms: independent programs that make and
transmit copies of themselves through telecommunications networks.
Computer viruses have become a pervasive threat in personal computing.
5. Denial of service: Rendering the system unusable by legitimate users.
6. Data diddling: Changing data before or during input, often to change the
contents of a database.
7. Salami Technique: Diverting small amounts of money from a large number
of accounts maintained by the system. These small amounts will not be
noticed.
8. Spoofing: Configuring a computer system to masquerade as another system
over the network in order to gain unauthorized access to the resources the
system being mimicked is entitled to.
9. Superzapping: Using a systems program that can bypass regular system
controls to perform unauthorized acts.
10. Scavenging: Unauthorized access to information by searching through the
residue after a job has been run on a computer. Techniques range from
searching wastebaskets or dumpsters for printouts to scanning the contents
of a computer's memory.
11. Data leakage: A variety of methods for obtaining the data stored in a
system. The data may be encoded into an innocuous report in sophisticated
ways, for example, as the number of characters per line.
12. Wiretapping: Tapping computer telecommunications lines to obtain
information.
Some of the techniques listed may be used for a direct gain of financial
resources, others for industrial espionage, while yet others simply for destructive
purposes.
Probably the most important unrecognized threat today is the theft of portable
computers, with access codes and information in their memories. Also to be
considered are the losses due to the theft of intellectual property, such as software,
product development information, customer information, or internal corporate
documents.
Computer Viruses
Computer viruses are the most frequently encountered threats to end-user
computing and the best-known form of computer threat. A computer virus is a
piece of program code that attaches copies of itself to other programs and thus
replicates itself.
Characteristics of computer viruses:
1. The attacked program may work properly, but, at some point, will perform a
malicious or destructive act intended by the attacker who wrote the virus.
2. Although a computer virus may attack a multi-user system with shared disk
facilities, viruses are best known for their rapid spread in a personal
computer environment. In this environment, they proliferate through
infected diskettes or programs downloaded from the Internet or other
networks.
3. Most viruses are insidious, and their presence is not obvious after the
infection. In the meantime, they infect other programs.
4. Two principal types of viruses are boot infectors and program infectors.
• Boot infectors replace the contents of the first sector of the diskette or
hard disk. These are the viruses that most commonly occur in personal
computing.
• Program infectors copy themselves into the executable files stored on
the hard disk.
Protection against viruses requires the following measures:
1. Only original manufacturers' diskettes or reliable Internet sites should be
used for any program introduced into the system. Pirated software can also
lead to the spread of viruses.
2. Commercial antiviral software should be used regularly to scan the system.
Moreover, recent versions of such software should be used, since new
virus strains are constantly being spread by attackers.
3. To guard against viruses in files downloaded from the Internet, one should
use utilities which can work with browsers.
4. Regular backup of files will help restore them if a virus is detected
5. A contingency plan for a virus attack is necessary
19.3.4 RISK ASSESSMENT IN SAFEGUARDING INFORMATION SYSTEMS
In a distributed systems environment, with virtually every employee of an
organization having some form of access to systems, security threats are an
extremely serious concern. Multiple connections to the Internet open the field to
interlopers all over the world.
Methods of assessing vulnerabilities include:
1. Risk assessment procedure: a methodical evaluation of the probability of
losses due to security exposures and the extent of these losses. Risk is
defined as the product of the amount that may be lost due to a security
exposure and the probability that such a loss will occur. This probability can
be estimated from the frequency of such occurrences in the past (a minimal
sketch of this calculation appears after this list).
2. Scenario analysis: a method of system control which involves simulated
attacks on the system in order to determine its vulnerability
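The risk formula above lends itself to a small worked example. The following sketch in Python computes expected loss as the product the text describes; the exposure names, loss amounts, and probabilities are invented purely for illustration, not drawn from this lesson.

# Minimal sketch of the risk assessment formula; all figures are
# hypothetical illustrations, not real data.

def risk(loss_amount, probability):
    """Risk = amount that may be lost due to an exposure, multiplied
    by the probability that such a loss will occur."""
    return loss_amount * probability

# Probabilities estimated from the frequency of past occurrences,
# e.g. three incidents observed over ten years -> 0.3 per year.
exposures = [
    ("Data center power failure", 500_000, 0.30),
    ("Portable computer theft",    20_000, 0.80),
    ("Major virus outbreak",      150_000, 0.25),
]

for name, loss, prob in exposures:
    print(f"{name}: expected annual loss = {risk(loss, prob):,.0f}")

Ranking exposures by this expected loss suggests where spending on controls is likely to pay off first.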
Information Systems Controls: General Controls
The Role of Information Systems Controls
To ensure secure operations of information systems and thus safeguard assets
and the data stored in these systems, and to ensure that applications achieve their
objectives in an efficient manner, an organization needs to institute a set of policies,
procedures, and technological measures, collectively called controls.
IS controls may be designed to:
1. Prevent an error or an attack from taking effect
2. Detect a violation
3. Detect and correct an exceptional situation.
Information systems controls are classified as:
1. General controls - controls applying to the whole of an organization's IS
activity
2. Application controls - controls which are specific to a given application
(payroll)
19.3.4.1 GENERAL CONTROLS
General controls cover all the systems of an organization or one of its
subunits.
Administrative Controls
Administrative controls aim to ensure that the entire control framework is
instituted, continually supported by management, and enforced with proper
procedures, including audits.
Administrative controls include:
1. Published controls policy
2. Formal procedures
3. Screening of personnel
4. Continuing supervision of personnel
5. Separation of duties
Systems Development and Maintenance Controls
Internal IS auditors should be involved through the entire systems
development process. They should:
1. Participate in major milestones and sign off on the appropriate deliverables.
They need to ensure that the system is secure, and also auditable.
2. Participate in the post-implementation review that follows the system being
placed in operation.
3. Must check that the appropriate system documentation is developed and
maintained
4. During systems maintenance, ensure that only authorized changes are made
to the system and that the appropriate version of the system goes into
operation
Operations Controls
Operations controls are the policies, procedures, and technology established to
ensure that data centers are operated in a reliable fashion. Included among these
controls are:
1. Controls over access to the data center
2. Control over operations personnel
3. Control over maintenance of computer equipment
4. Control over archival storage
Physical Protection of Data Centers
Operations controls in data centers must be supplemented by a set of controls
that will protect these centers from the elements and from environmental attacks.
Some of these controls include:
1. Environmental controls (air conditioning, humidification etc.) as required by
the equipment.
2. Emergency power sources must be available. A battery-based
uninterruptible power supply (UPS) should be installed to provide
continuous operation in case of total or partial power failure.
3. The more sensitive the data, the greater the requirement for shielding
electromagnetic radiation so that it cannot be detected outside the data center.
Hardware Controls
A computer's central processor contains circuitry for detection and, in some
cases, correction of certain processing errors. Some of these include:
1. Parity check, in which each byte of data in storage contains an additional bit,
called a parity bit, which helps detect an erroneous change in the value of a
single bit during processing (a minimal sketch appears after this list).
2. Processor hardware usually has at least two states:
• Privileged state - in which any operation can be performed. A user cannot
enter privileged state, as it is reserved for system software.
• User state - in which only some operations can be done.
3. Fault-tolerant computer systems - these systems continue to operate after
some of their processing components fail. Fault-tolerant computer systems
are built with redundant components; they generally include several
processors in a multiprocessing configuration. If one of the processors fails,
the others can provide degraded, yet effective, service.
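To make the parity check concrete, here is a minimal sketch in Python (purely illustrative; in practice the check is wired into the memory hardware). An even-parity bit is chosen so that the total count of 1-bits in a byte, including the parity bit, is even; if a single bit later flips, the count becomes odd and the error is detected.

# Minimal even-parity sketch; real parity checking runs in hardware.

def parity_bit(byte):
    """Even-parity bit for an 8-bit value: 1 when the count of 1-bits
    is odd, so that the total including the parity bit becomes even."""
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    """True if the byte still matches the parity bit recorded for it."""
    return parity_bit(byte) == stored_parity

value = 0b10110010              # four 1-bits -> parity bit 0
p = parity_bit(value)
assert check(value, p)          # unchanged data passes
corrupted = value ^ 0b00000100  # a single bit flipped in storage
assert not check(corrupted, p)  # the change is detected (not corrected)

A single parity bit detects any odd number of flipped bits but cannot say which bit changed; error-correcting memory extends the same idea.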
Identification, Authentication, and Firewalls
Controlling Access to Corporate Computer Systems
In today's computing environment, users as well as interlopers may attempt to
access a computer system from virtually anywhere. We need to ensure that only
authorized accesses take place.
Characteristics of identification and authentication
1. A user first identifies themselves to the system, typically with a name or an
account number
2. The system then looks up the authentication information stored for the
identified user and does a double-check
3. The system requests the user to provide a password or another means by
which they can be authenticated.
A variety of security features are implemented to increase the effectiveness of
passwords. The features include (a minimal sketch appears after this list):
1. Regular and frequent password changes
2. Use of a combination of letters and digits
3. Prevention of the use of a common word, easily associated with the user
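These password features translate naturally into an automated check. The sketch below is a minimal illustration; the 90-day change interval and the list of common words are assumptions chosen for the example, not values prescribed in this lesson.

# Minimal password-policy sketch; the 90-day limit and the word list
# are illustrative assumptions.
from datetime import date, timedelta

COMMON_WORDS = {"password", "welcome", "admin", "letmein"}
MAX_AGE = timedelta(days=90)   # "regular and frequent password changes"

def violations(password, last_changed, today=None):
    today = today or date.today()
    problems = []
    if password.lower() in COMMON_WORDS:
        problems.append("common word, easily associated with the user")
    if not (any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password)):
        problems.append("must combine letters and digits")
    if today - last_changed > MAX_AGE:
        problems.append("password change overdue")
    return problems

print(violations("welcome", date(2024, 1, 1), today=date(2024, 6, 1)))
# -> all three rules are violated for this password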
Biometric security features are also implemented. These systems rely on
personal characteristics. Features include:
1. Voice verification
2. Fingerprints
3. Hand geometry
4. Signature dynamics
5. Keystroke analysis
6. Retina scanning
7. Face recognition
8. Genetic pattern analysis
A firewall is a hardware and software facility that prevents unauthorized access
to a firm's intranet from the public Internet, while still allowing the firm's users
to access the Internet. The purpose of a firewall is to ensure that only authorized
traffic passes through.
Encryption: Controlling Access to Information
A different way to prohibit access to information is to keep it in a form that is
not intelligible to an unauthorized user. Encryption is the transformation of data
into a form that is unreadable to anyone without an appropriate decryption key.
Encryption is gaining particular importance as electronic commerce over
telecommunications networks is gaining momentum.
Encryption renders access to encoded data useless to an interloper who has
managed to gain access to the system by masquerading as a legitimate user, or to
an industrial spy who can employ a rather simple receiver to pick up data sent over
a satellite telecommunications link. Thus, the technique is important not only in
the protection of the system boundary but also in the communications and
database controls.
The two most important encryption techniques are:
1. Private-key encryption, exemplified by the Data Encryption Standard (DES)
2. Public-key encryption
Encryption is scrambling data, or any text in general, into a cipher that can be
decoded only if one has the appropriate key (i.e., bit pattern). It renders the encoded
data useless to an interloper. The major disadvantage of the DES is that keys must
be distributed in a secure manner. Since the keys must be changed frequently, this
represents significant exposure. Also, a prior relationship between the sender and
the receiver is necessary in order for them to share the same private key.
In a public-key system, two keys are needed to ensure secure transmission;
one is the encoding key and the other is the decoding key. Because the secret
decoding key cannot be derived from the encoding key, the encoding key can be
made public; therefore, such systems do not require secure distribution of keys
between parties prior to their communication. The drawback of public-key
encryption and decryption is that they are more time-consuming than private-key
systems and can significantly degrade the performance of transaction processing
systems.
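The contrast between the two techniques can be seen in a short sketch. The example below uses the third-party Python package cryptography (pip install cryptography) as a stand-in: Fernet, an AES-based symmetric scheme, plays the role of the private-key DES discussed above, and RSA illustrates public-key encryption. This is an illustration of the concepts, not a method prescribed by the text.

# Symmetric vs. public-key encryption with the "cryptography" package.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Private-key (symmetric): one shared key both encrypts and decrypts,
# so the key itself must be distributed securely - the DES drawback.
shared_key = Fernet.generate_key()
f = Fernet(shared_key)
token = f.encrypt(b"quarterly payroll totals")
assert f.decrypt(token) == b"quarterly payroll totals"

# Public-key (asymmetric): anyone may encrypt with the public key, but
# only the holder of the private key can decrypt - no prior exchange.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(b"wire transfer order", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"wire transfer order"

A common compromise in practice is to use the slower public-key step only to exchange a fresh symmetric key, and then to encrypt the bulk of the traffic with that key.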
Controls of Last Resort: Disaster Recovery Planning
Two controls of last resort should be available:
1. Adequate insurance for the residual risk
2. A disaster recovery plan
A disaster recovery plan specifies how a company will maintain the
information services necessary for its business operations in the face of disaster.
In disaster recovery planning, the first task is to identify the necessary
business functions to be supported by the plan, since covering less vital functions
is, in general, too costly.
A disaster recovery plan for these functions should contain four components:
1. An emergency plan: specifies the situation when a disaster is to be declared
and the actions to be taken by various employees
2. A backup plan: Specifies how information processing will be carried out
during the emergency. It details how backup computer tapes or disks are to
be maintained and specifies the facility, called the recovery site, where they
can be run on very short notice. Also, backup telecommunications
facilities need to be specified. Some companies maintain a
telecommunications link between their data centers and the recovery site in
order to have access to the latest data if disaster strikes.
Alternatives for a recovery site include:
• A company owned backup facility, distant geographically from the data
center.
• A reciprocal agreement with a company that runs a compatible
computer system
• A hot site or a shell (cold site) offered by a disaster recovery firm under
contract. A hot site is a facility that operates computers compatible with
the client's, who may use the site within 24 hours of disaster. Shells (or
cold sites) are computer-ready buildings, available to accept equipment
on very short notice.
3. A recovery plan: Specifies how processing will be restored on the original
site, including detailed personnel responsibilities
4. A test plan: specifies how the other components of the disaster-recovery
plan will be tested.
19.3.4.2 APPLICATIONS CONTROLS
Application controls are controls implemented specifically for a particular
information system, for example, accounts payable or an order processing system.
Both the automated and the manual aspects of processing need to be controlled.
The principal areas of concern of application control are:
1. Telecommunications
2. Input
3. Output
4. Database
Principal measures undertaken in application control include:
Input Controls: The purpose of input controls is to prevent the entry of
incomplete, erroneous, or otherwise inappropriate data into the information system.
These controls must ensure the following results:
1. Accuracy of data
2. Completeness of input
3. Validation of input
Processing Controls
The primary concern is to ensure that systems processing does not contain
errors. Processing controls include (a short sketch of two of these checks
appears after this list):
1. Cross footing
2. Reasonableness check
3. Rounding off
4. Functional checks
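Two of these checks can be illustrated in a few lines. In the sketch below, the payroll figures and the 0-10,000 bound are invented for illustration: cross-footing verifies that the grand total computed from row totals equals the grand total computed independently from column totals, and a reasonableness check flags a value outside its expected range.

# Minimal sketch of two processing controls; all figures are invented.

rows = [              # department payroll: (basic, allowances, deductions)
    (4000.0, 800.0, -300.0),
    (5200.0, 950.0, -410.0),
]

def cross_foot(rows):
    """Cross-footing: the grand total of row sums must equal the grand
    total of column sums computed independently from the same figures."""
    row_total = sum(sum(r) for r in rows)
    col_total = sum(sum(col) for col in zip(*rows))
    return abs(row_total - col_total) < 0.01

def reasonable(net_pay, low=0.0, high=10_000.0):
    """Reasonableness check: flag a net pay outside the expected range."""
    return low <= net_pay <= high

assert cross_foot(rows)
assert reasonable(sum(rows[0]))    # 4,500 -> plausible
assert not reasonable(450_000.0)   # keyed with extra zeros -> flagged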
Database Controls
Information systems files and databases hold the very data we seek to protect
from destruction and from improper access or modification. The following are the
principal measures for safeguarding data stored in systems.
1. Backup and recovery
2. File handling controls
3. Access authorization
Telecommunications Controls
Telecommunications are the most vulnerable component of information
systems. The technique for securing telecommunications is to render any
intercepted information useless to the attacker by encrypting it.
Output Controls
Output controls are largely manual procedures aimed at ensuring that the
information presented in reports and screens is of high quality, complete, and
available only to authorized individuals.
19.3.5 AUDITING INFORMATION SYSTEMS
An audit process consists of two fundamental steps:
1. Compliance testing
2. Substantive testing
What is an Information Systems Audit?
The effectiveness of information systems controls is evaluated through a
process known as IS auditing.
Information systems are audited by external auditors, who render their
opinion on the veracity of corporate financial statements, and by internal auditors,
who work for the organization itself. In addition to performing financial audits to
determine the financial health of various corporate units, internal auditors
perform operational audits to evaluate the effectiveness and efficiency of IS
operations.
A trend has developed toward strengthening internal auditing as a means
of management control. An independent audit department exists in most of the
country's large businesses. Such a department now often includes a group that
performs information systems audits as well.
Information systems have to be auditable by design. This means that every
transaction can be traced to the total figures it affects, and each total figure can be
traced back to the transactions which gave rise to it. In other words, an audit trail
must exist, making it possible to establish where each transaction originated and
how it was processed. Transaction logs provide a basic audit trail (a minimal
sketch appears below).
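As a concrete illustration of a transaction log serving as an audit trail, the sketch below appends one record per transaction so that any account total can later be traced back to the entries that produced it. The field names and the file format are assumptions made for this example, not a prescribed audit standard.

# Minimal transaction-log sketch; the record fields are illustrative.
import json
from datetime import datetime, timezone

LOG_FILE = "transactions.log"   # hypothetical append-only log file

def log_transaction(txn_id, user, account, amount):
    """Append one record per transaction: who did what, to which
    account, for how much, and when - the basis of the audit trail."""
    record = {
        "txn_id": txn_id,
        "user": user,
        "account": account,
        "amount": amount,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(LOG_FILE, "a") as log:
        log.write(json.dumps(record) + "\n")

def trace(account):
    """List every logged transaction behind an account's total."""
    with open(LOG_FILE) as log:
        return [e for e in map(json.loads, log) if e["account"] == account]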
How is an Information Systems Audit Conducted?
IS auditors primarily concentrate on evaluating information system controls,
on the assumption that if a system has adequate controls that are consistently
applied, then the information produced by it is also reliable. They perform both
scheduled and unscheduled audits.
Characteristics of the compliance auditing include:
1. Auditors study the information system and its documentation, inputs and
outputs, and interview the key users and IS personnel. They study both
the general and application controls in detail.
2. Auditors select a sample of the transactions processed by the system and
trace their processing from the original documents on to the totals they
affect.
3. Auditors replicate the processing done by the system, and if the results they
obtain are in compliance with those produced by the system, they gain some
confidence in the controls the system is supposed to have.
Characteristics of substantive test auditing include:
1. Substantive testing is used to independently validate the totals contained in
the financial records.
2. The extent of testing depends on the results of compliance testing. If controls
are found operative, then a limited substantive testing will be sufficient. In
areas where controls were inadequate, extensive validation of financial totals
is necessary.
19.4 REVISION POINTS
• Managing information system overview
• Threats to information system
• Protection using anti-virus
• Risk Assessment in Safeguarding Information Systems
• Controls for safeguarding information system
• Auditing information system
19.5 INTEXT QUESTIONS
1. What are the responsibilities of programmer and analyst?
2. What are the major functions of IS operations?
3. Write a short note on computer crime and abuse.
4. Enumerate the purposes for which information system controls are developed.
5. Write a short note on processing control.
6. Write a note on IS auditing.
19.6 SUMMARY
The corporate IS department is principally responsible for the corporate
information system infrastructure - telecommunications network and
management of corporate databases.
The objective of the IS operations staff is to keep information systems
running smoothly: to process transactions with an acceptable response time,
deliver reports on time, and ensure reliable and efficient operation of data
centers and telecommunications networks
Computer crime is defined as any illegal act in which a computer is used as
the primary tool. Computer abuse is unethical use of a computer
To ensure secure operations of information systems and thus safeguard
assets and the data stored in these systems, and to ensure that applications
achieve their objectives in an efficient manner, an organization needs to
institute a set of policies, procedures, and technological measures,
collectively called controls
The effectiveness of information systems controls is evaluated through a
process known as IS auditing.
19.7 TERMINAL EXERCISE
1. ______________ and _______________ are the primary role of information
system specialist.
2. ____________ is an individual's right to retain certain information about
himself or herself without disclosure
3. ______________ is the status accorded to data, limiting its use and
dissemination
4. ____________________ involves simulated attacks on the system in order to
determine its vulnerability
5. __________________ are the policies, procedures, and technology ensure that
data centers are operated in a reliable fashion
6. __________________ is a hardware and software facility that prevents access
to a firm's Intranet from the public Internet
19.8 SUPPLEMENTARY MATERIALS
1. http://umsl.edu/~joshik/msis480/chapt14.htm
2. https://opentextbc.ca/strategicmanagement/chapter/creating-
organizational-control-systems/
19.9 ASSIGNMENTS
1. Write an essay about the different types of controls used to safeguard
information system.
19.10 SUGGESTED READING/REFERENCE
1. Alexis Leon and Mathew Leon, Fundamentals to information Technology,
Leon Tech World, New Delhi, 2008.
2. Ramesh Behi, Information Technology for Management, Tata McGraw Hill,
New Delhi, 2012.
19.11 LEARNING ACTIVITIES
1. Visit a computer lab of your school or college. Find out the various IS
controls used by them to safeguard the system.
19.12 KEYWORDS
Analyst
Privacy
Threat
Confidentiality
Super zapping
Wiretapping
General controls
Application controls
Auditing information system
LESSON - 20
EVALUATING IT INVESTMENT
20.1 INTRODUCTION
Evaluating an IT investment is the weighing-up process of rationally assessing
the value of any in-house IT assets, and of any acquisition of software or
hardware, that is expected to improve the business value of an organization's
information systems.
20.2 OBJECTIVES
• To get in-depth knowledge about IT investment
• To study the phases of investment control process
• To discuss various investment criteria variable
• To study the techniques for evaluating IT proposal
20.3 CONTENTS
20.3.1 IT Investment – An overview
20.3.2 Organizational Attributes for Successful IT Investments
20.3.3 Phases of the Investment Control Process
20.3.3.1 Selection
20.3.3.2 Control
20.3.3.3 Evaluation
20.3.4 Techniques for Evaluating IT Investment Proposals
20.3.5 Managerial Issues of Information Systems
20.3.1 IT INVESTMENT – AN OVERVIEW
Introduction
The IT investment process of an organisation should match the culture and
organizational structure of the organisation. The overriding objective is that senior
managers be able to systematically maximize the benefits of IT investments through
use of the IT investment process.
The investment process, depicted in Figure 1 below, consists of three phases:
selection, control and evaluation. As Figure 1 indicates, the three phases of the
investment process occur in a continuous cycle of selection, control, and
evaluation. Information from each phase flows freely among all of the other phases
with the exception of evaluation. The evaluation component of the process has a
unidirectional information flow to the selection component. The evaluation
component is used to verify or modify the criteria used during selection.
Select: create a portfolio of IT project investments that maximizes mission
performance, using a standard set of criteria for consistent comparison of projects.
Control: measure ongoing IT projects against their projected costs, schedule,
and benefits and take action to continue, modify, or cancel them.
Evaluate: determine the actual return on investment of an implemented
investment against the organisation's mission and adapt the existing process to
reflect "lessons learned".
The control and evaluation phases are conducted throughout the year and
their results are fed into the selection phase, which in turn feeds back to the
control and evaluation phases.
20.3.2 ORGANIZATIONAL ATTRIBUTES FOR SUCCESSFUL IT INVESTMENTS
While each phase of the investment process has its own requirements for
successful implementation, there are some overall organizational attributes that are
critical to successful investment evaluation. These shared, critical attributes are:
senior management attention, overall mission focus, and a comprehensive portfolio
approach to IT investment.
Critical Attribute #1: Senior management attention
Organisation processes should include the following elements:
• Senior program managers, with authority to make key business and funding
decisions on IT projects, are continuously involved in the process.
• A disciplined and structured management forum is used to make IT
investment decisions, with the authority to approve, cancel, or delay
projects, mitigate risks, and validate expected returns.
• Program, Information Resource Management (IRM), and financial managers
with clearly defined roles, responsibilities, and accountability for the success
of IT projects. Mechanisms to achieve this include establishing service
agreements between providers (IRM/Chief Financial Officer (CFO)) and
consumers (line management) of information technology, incorporating
IRM/CFO issues and requirements into program plans, and routinely
involving the IRM/CFO offices in operational decisions.
Critical Attribute #2: Overall mission focus
Organisation processes should:
• Link strategic planning to the organisation's mission goals and customer
needs as required by the Government Performance and Results Act (GPRA)
of 1993 (Public Law 103-62). This includes developing long-term general
goals, setting specific annual performance targets, and annually evaluating
actual performance against these targets.
• Develop mission-related IT measures that link the IRM strategic plan with
the organisation strategic plan. For example, mission goals should be
translated into objective, results-oriented measures of performance, both
quantitative and qualitative, which can form the basis for measuring the
impact of information technology investments.
• Determine whether the function to be supported by the investment should
be performed in the private sector rather than by an organisation of the
Federal government.
• Determine whether the organisation proposing to perform the function is the
most appropriate organisation.
• Examine the work processes involved to ensure they are efficient, effective,
and will take full advantage of the proposed automation.
• Use mission benefit, not project completion on time and within budget, as an
important measure of success for any IT project.
• Identify all major existing or planned information systems and define their
relationship to one another and to the organisation's mission.
Critical Attribute #3: Comprehensive approach to IT investment
Organisations should:
• Define a portfolio that includes IT projects in every phase (initial concept,
new, ongoing, or fully operational) and for every type (mission critical, cross-
functional, infrastructure, administrative, and R&D) of IT system.
• Develop levels of review, documentation requirements, and selection criteria
appropriate to the phase and type of IT system.
• Define money thresholds that can be used to channel projects to the
appropriate organisation decision levels to best accommodate organization
wide versus unit specific impact. Most important is the use of a consistent
set of investment decision practices throughout the organisation. Some best
practice organizations submit projects to thorough investment reviews when
costs exceed between 0.5 and 2 percent of the organization's IT budget.
• Develop criteria for identifying projects of a critical nature that fall below the
dollar threshold but should be included in the investment review process.
Each attribute contributes to properly implementing the three phases of the
investment process. Senior managers and those helping to install the investment
process in each organisation should keep these elements in mind during review of
the details of the selection, control, and evaluation phases.
20.3.3 PHASES OF THE INVESTMENT CONTROL PROCESS
20.3.3.1 PHASE ONE: SELECTION
The selection phase creates a portfolio of IT project investments designed to
improve overall organizational performance. This phase combines rigorous
technical evaluations of project proposals with executive management business
knowledge, direction, and priorities. Key to this phase is the use of uniform,
consistent decision criteria that will allow organisation executives to make
comparisons of costs, benefits, risks, and returns across project proposals. The four
step selection process is:
Step 1 -- screen IT project proposals;
Step 2 -- analyze risks, benefits, and costs;
Step 3 -- prioritize projects based on risk and return; and
Step 4 -- determine the right mix of projects and make the final cut.
Step 1: Screen Project Proposals
IT proposals should be screened for the level of review as well as relevance and
feasibility.
A mature investment screening process should prescribe the amount of
documentation and level of analytical rigor depending on the project's type (i.e.,
mission critical, infrastructure, etc.) and phase (i.e., initial concept, new, ongoing
and operational). For instance, when senior managers analyze initial concept
proposals the questions and documentation would be different from that required
for a project that is ready to be awarded and implemented.
If a project proposal does not meet all the essential requirements necessary for
its type and phase, it should be returned to the originating business unit sponsor
indicating problems, issues, or documentation that needs further work or
clarification.
Following are some of the questions that can be used to screen projects for
relevancy to the organisation's mission and for technical and organizational
feasibility. If the answer to any of these questions is no, a project should not receive
consideration and should be returned to the originating unit. Projects that meet
these criteria should continue to Step 2 where more rigorous analysis is performed.
Step 2: Analyze Project Risks, Benefits, and Costs
At this point, the proposals should be reduced to those with the highest
potential to support the organisation's critical mission and/or operations.
A detailed evaluation of each proposal's supporting analyses should be
conducted and summarized so that senior management can begin examining
tradeoffs among competing proposals that are to occur in the next step. At this
stage, a technical review team should evaluate the soundness of the project's
benefit-cost and risk analyses. In particular, the review team should examine how
the project is expected to improve program or operational performance and the
performance measures that will be used to monitor expected versus actual results.
Step 3: Prioritize Projects Based on Risk and Return
During this phase, IT projects are rigorously compared against one another to
create a prioritized list of all investments under consideration.
After completing analysis, the organisation should develop a ranked listing of
information technology projects. This listing should use expected risks and benefits
to identify candidate projects with the greatest chances of effectively and efficiently
supporting key mission objectives within given budget constraints.
One approach to devising a ranked listing of projects is to use a scoring
mechanism that provides a range of values associated with project strengths and
weaknesses for risk and return issues. Table 1, below, shows an example of how
individual risk and return factors might be scored. This example is a hybrid table
drawn from multiple best practices organizations. Higher scores are given to
projects that meet or exceed positive aspects of the decision criteria. Additionally, in
this example, weights have been attached to criteria to reflect their relative
importance in the decision process. In order to ensure consistency, each of the
decision criteria should have operational definitions based on quantitative or
qualitative measures.
Table 1: Example of Decision Criteria and Scoring Process Used to Rank IT Projects
(each criterion is scored on a 1-to-10 scale for each IT project, 1 through n, and
multiplied by its weight)

Overall Risk Factors (weights sum to 100%):
• Investment Size (weight 40%) - How large is the proposed technology
investment, especially in comparison to the overall IT budget?
Score: 1 = Large ... 10 = Small.
• Project Longevity (weight 30%) - Do projects adopt a modular approach that
combines controlled systems development with rapid prototyping techniques?
Are projects as narrow in scope and brief in duration as possible to reduce risk
by identifying problems early and focusing on projected versus realized results?
Score: 1 = Non-modular ... 10 = Modular.
• Technical Risk (weight 30%) - How will the proposed technology be integrated
into existing systems? Will the proposed investment take advantage of
Commercial Off-The-Shelf (COTS) software and systems? How will the complexity
of the systems architecture and software design affect the development of the
project? Score: 1 = Experimental/Custom ... 10 = Established industry standard.

Overall Return Factors (weights sum to 100%):
• Business Impact or Mission Effectiveness (weight 25%) - How will the
technology investment contribute toward improvement in organizational
performance in specific outcome-oriented terms? Score: 1 = Low ... 10 = High.
• Customer Needs (weight 15%) - How well does the technology investment
address identified internal and/or external customer needs and demands for
increased service quality and timeliness or reductions in costs?
Score: 1 = Low ... 10 = High.
• Return on Investment (weight 20%) - Are the return on investment figures
using benefit-cost analysis thresholds reliable and technically sound?
Score: 1 = Risky estimates ... 10 = Known benefit.
• Organizational Impact (weight 25%) - How broadly will the technology
investment affect the organization (i.e., the number of offices, users, work
processes, and other systems)? Score: 1 = Low ... 10 = High.
• Expected Improvement (weight 15%) - Is the proposed investment being used
to support, maintain, or enhance existing operational systems and processes
(tactical) or designed to improve future capability (strategic)? Are any projects
required by law, court ruling, Presidential directive, etc.? Is the project required
to maintain critical operations - payroll, beneficiary checks, human safety, etc. -
at a minimal operating level? What is the expected magnitude of the performance
improvement expected from the technology investment?
Score: 1 = Tactical (improves existing process) ... 10 = Strategic (provides new
capability).

Total Risk-Adjusted Score = Weighted Sum of Overall Risk Factors +
Weighted Sum of Overall Return Factors
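The arithmetic behind Table 1 is a weighted sum. The sketch below computes the total risk-adjusted score from the criterion scores and the weights in the table; the 1-to-10 scores for the sample project are invented for illustration.

# Weighted scoring from Table 1; the sample project's scores are invented.

RISK_WEIGHTS = {"investment size": 0.40, "project longevity": 0.30,
                "technical risk": 0.30}
RETURN_WEIGHTS = {"business impact": 0.25, "customer needs": 0.15,
                  "return on investment": 0.20, "organizational impact": 0.25,
                  "expected improvement": 0.15}

def weighted(scores, weights):
    """Weighted sum of the 1-to-10 criterion scores."""
    return sum(scores[criterion] * w for criterion, w in weights.items())

project = {
    "investment size": 7, "project longevity": 6, "technical risk": 8,
    "business impact": 9, "customer needs": 7, "return on investment": 5,
    "organizational impact": 8, "expected improvement": 6,
}

risk_score = weighted(project, RISK_WEIGHTS)      # 7*0.4 + 6*0.3 + 8*0.3 = 7.0
return_score = weighted(project, RETURN_WEIGHTS)  # = 7.2
total = risk_score + return_score                 # risk-adjusted score = 14.2
print(f"risk={risk_score:.1f}, return={return_score:.1f}, total={total:.1f}")

Computing such a score for every proposal yields the ranked listing described above; the weights should be tuned to the organisation's own priorities.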
A scoring and ranking process such as the one depicted in Table 1 may be
used more than once and in more than just this step to "winnow" the number of
projects that will be considered by an executive decision-making body down to the
best possible choice.
An outcome of such a ranking process might produce three groups of projects:
• Likely winners -- One group, typically small, is a set of projects with high
returns and low risk that are likely "winners."
• Likely drop-outs -- At the opposite end of the spectrum, a group of high
risk, low return projects usually develops that would have little chance of
making the final cut.
• Projects that warrant a closer look -- In the middle is usually the largest
group. These projects have either a high return/high risk or a low
return/low risk profile. Analytical and decision-making energy should be
focused on prioritizing these projects in the middle group, where decisions
will be more difficult to make.
At the end of this step, senior managers should have a prioritized list of IT projects
and proposals with supporting documentation and analysis.
Step 4: Determine the Right Mix of Projects and Make the Final Cut
During this phase, an executive level decision making body determines which
projects will be funded based on the analyses completed in the previous steps.
Determining the right mix of projects to fund is ultimately a management
decision that considers the technical soundness of projects, their contribution to
mission needs, performance improvement priorities, and overall funding levels that
will be allocated to information technology.
Senior management should consider the following balancing factors when
arriving at a final resource allocation and project mix.
• Strategic improvements vs. maintenance of current operations
Efforts to modernize programs and improve their mission performance may
require significant investments in new information systems. Agencies also
have operational systems on which the agencies depend to operate their
programs as currently structured. These older systems may need to be
maintained. A balance should be struck between continuing to invest in
older systems and modernizing or replacing them. It may be helpful to track
over time the percentage of funding spent on strategic/development vs.
maintenance/operations projects.
• New projects vs. ongoing projects
The senior managers who choose the final mix of projects to be funded must
periodically re-examine projects that have already been approved to ensure
that they should still be supported. There may be concerns about a project's
implementation, such as greater-than-expected delays, cost overruns, or
failures to provide promised benefits. If new projects are more consistent
with an organisation's strategic initiatives, offer greater benefits for
equivalent cost, or present fewer risks, the old projects may need to be
canceled.
• High vs. low risk
If a portfolio is managed only to minimize risk, senior management may
unnecessarily constrain an organisation's ability to achieve results. High
risk, high return projects can significantly enhance the value to the public of
an organisation's IT spending, provided the organisation has the capability
and carefully manages the risks. Most organizations, however, can only
handle a limited number of such projects. As a result, senior management
must consciously help balance the amount of risk in the portfolio against
the organisation's capabilities and ability to manage risk.
• Impact of one project on another
Now that federal agencies are trying to integrate their systems, every new
project proposal is likely to affect, or be affected by, other project proposals,
ongoing projects, or current systems. Senior management must recognize
the context in which the new project will be placed and make decisions
accordingly. For example, one best practice company has established as a
risk the number of dependencies between a new project and other
projects/systems.
• Other complicating factors
Other complicating factors can heavily influence how senior management
makes a final cut for approved IT projects.
• Opportunity costs
Consider the impact on long range investment opportunities if all of the
current projects are funded. Will large current costs preclude or delay better
future opportunities? Will large current capital expenditures create even
larger maintenance costs in the future?
• External funding
IT projects sometimes rely on funding and resources from outside agencies
or private organizations. If any project under consideration requires critical
components from outside the organisation, then the value of the
organisation's investment may be lost if the commitment by the outside
party later shifts.
• Budget constraints
How much does the organisation have available for IT investments for this
budget year and for the next several years? Besides budget year spending
levels and out-year estimates for the organisation, the analysis should
examine if there are other sources of funding for the projects. The
organisation should identify these other sources in its investment proposals.
What projects will fit under the spending levels this budget year and in out-
years? Senior management can take the final list of projects with their
associated costs and determine which projects fit within the spending
parameters this budget year and/or in out-years. A project may have a
relatively high priority, but resource constraints may preclude funding it this
budget year. Senior management can then decide that the project be
approved, but that its start date be delayed until funds are available,
assuming it still matches the organisation priority needs in the coming
years.
After consideration of all of the factors mentioned above, senior management
should have enough information to make knowledgeable investment decisions.
Senior management should also designate how many times a project is to be
reviewed based on the level of risk and any steps that the project team must take to
mitigate that risk. For example, one best practices organization requires that senior
management only approve projects after a review schedule has been established,
(e.g., reviewed once a month for high risk, or once a quarter for lower risk), and
specific requirements have been given to the project team to ensure that they
mitigate risks, (e.g., develop a risk management plan).
Project review schedules, risk mitigation plans and the cost-benefit plans from
prior steps all feed directly into the next section of the investment process --
control.
20.3.3.2 PHASE TWO: CONTROL
While agencies select proposals once a year, the control phase is an ongoing
activity to review new and ongoing projects, as well as operational systems. During
the control phase, senior management regularly monitors the progress of ongoing IT
projects against projected cost, schedule, performance and delivered benefits. The
frequency of the reviews may vary, but should not wait until the annual budget
preparation and deliberation process. How often and to what extent individual
projects should be reviewed should have been established as the last step in the
Selection phase. Rather than avoiding problems and concerns emerging from
unexpected risks, this phase accentuates the need for management accountability
by creating pre-arranged checkpoints for projects and forcing corrective action
when necessary. If a project is late, over cost, or not being developed according to
expectations, then senior management must decide whether to continue, modify, or
cancel it. The steps in this phase are to:
Step 1 -- monitor projects/systems against projected costs, schedule, and
performance; and
Step 2 -- take action to correct any deficiencies.
Before an organization can fully implement the control steps, uniform
mechanisms for collecting, automating, and processing data on expected versus
actual costs, schedules, and returns should be in place for all projects.
Step 1: Monitoring Projects/Systems against Projected Costs, Schedule, and Performance
Senior managers need to compare the preliminary results being achieved by a
project against its projected costs, benefits and risks, and to identify actual or
potential managerial, organizational, or technical problems.
Senior management should be able to judge whether a project is on track to
achieve its projected mission benefits. The key is to use a set of performance
measures consistently so that senior program managers are provided early warning
of potential or actual problems. It is essential to refresh these measures as costs,
benefits, and risks become better known to ensure the continued viability of an
information system prior to and during implementation.
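Such monitoring reduces to comparing actuals against projections. The sketch below flags projects whose cost or schedule variance exceeds a tolerance for senior management review; the 10 percent tolerance and the project figures are illustrative assumptions, not values from this lesson.

# Minimal cost/schedule variance sketch; tolerance and figures invented.

TOLERANCE = 0.10   # flag variances above 10% for management review

def variance(actual, projected):
    """Relative overrun: positive when the actual exceeds the projection."""
    return (actual - projected) / projected

def review_flags(project):
    flags = []
    if variance(project["actual_cost"], project["projected_cost"]) > TOLERANCE:
        flags.append("cost overrun")
    if variance(project["actual_months"], project["projected_months"]) > TOLERANCE:
        flags.append("schedule slippage")
    return flags

payroll_system = {"projected_cost": 1_200_000, "actual_cost": 1_450_000,
                  "projected_months": 18, "actual_months": 22}
print(review_flags(payroll_system))   # -> ['cost overrun', 'schedule slippage']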
Examples of problems that could affect a project or system include 1) lack of
input by program management into the requirements phase of a project, 2) a
project that was intended to be cross-functional becomes stove-piped because other
offices in the organisation do not support it, 3) new requirements have been added,
and 4) it is more difficult to use the technology than was anticipated.
Senior program managers in federal agencies often pay most of their attention
to new projects and carry ongoing projects as necessary budget items. In best
practice organizations, however, ongoing projects are reviewed continually along
with new projects and go/no-go decisions are made. No project should be allowed to
continue indefinitely through failure. Project continuance should be periodically
challenged.
Based on a schedule developed during the selection phase, each project/system
should be reviewed with at least the following considerations in mind:
• How do current costs compare against projected costs?
• How does the current schedule compare against the projected schedule?
• How does the current performance of the deliverables compare against
projected measures?
• If we were starting over, would we fund this proposal today?
• Have new requirements "crept" into the project?
• Have business conditions changed significantly since the project was
approved?
• Is the project still technically feasible?
• Is the project dependent on other projects? Are they late?
• Does the project still support the architecture?
• Is the project necessary for the successful completion of other projects?
Senior program management should be able to develop a well-informed picture of
current and potential problems for each ongoing IT project.
Step 2: Taking Action to Correct Deficiencies
The action should result in the deliberate continuation, modification, or
cancellation of each project.
The prior step, pertaining to monitoring of projects, should pinpoint projects
that senior management need to make decisions on. What action to take is a
management decision.
Senior management should ensure that:
• The solution to problems should not be the sole province of the IRM
organization. Even when senior management is aware of problems with
projects or systems, the solution to the problem is too often left with the
information systems organization. Senior managers should ensure that
program officials are involved in the solution, since in many instances it may
be the business side of the organization which provides a solution.
• All management decisions are documented along with data supporting the
required changes. Common problems and their solutions, which are applicable
to one IT project, should be evaluated as to how they apply to other IT projects
under management's purview. To avoid replication of effort for analysis,
documentation of management decisions is critical. Federal agencies often
treat each budget year as isolated and provide funding for whatever can be
supported each year rather than evaluating the IT projects with a historical
perspective. By contrast, leading organizations revise their selection
processes and IT funding decisions based upon the outcomes produced from
the previous year.
To use an example, many federal agencies are prototyping IT projects before
moving into the implementation stage. Monitoring the mission results gained by the
prototype allows senior program management to make informed decisions about
whether to stop or modify a project at this stage, rather than letting the project
continue on into implementation automatically.
Proper control of IT investments enables senior management to mitigate the risk of
schedule and cost overruns, and of developing a product that does not meet the goals
originally intended. This process is highly dependent on facts provided through
continual measurement of new and ongoing projects. The data fed from the
Selection process to the Control process supports this requirement, as do the
measurements taken throughout the life of a project.
20.3.3.3 PHASE THREE: EVALUATION
Evaluation is conducted after a system has been implemented, and is an
assessment of the project's success or failure. Using post implementation reviews,
data is collected, recorded, and analyzed to compare expected results against actual
benefits and returns. Figure 1, shown previously, depicts the evaluation phase in
relation to the other two phases. Evaluation is used to 1) decide whether future
changes are necessary which can help address serious performance gaps, and 2)
make decisions about modifications to the organization's existing evaluation
process and selection criteria. This phase is comprised of three steps:
Step 1 -- Conduct Post Implementation Reviews
Step 2 -- Decide on Adjustments
Step 3 -- Lessons Learned
Step 1: Conduct Post Implementation Reviews
Conduct and review the results of post implementation reviews, focusing on
anticipated versus actual results in terms of cost, schedule, performance, and
mission improvement outcomes. Determine the causes of major differences between
plans and end results.
Most federal agencies accept that recently implemented systems are a fait
accompli and move on from there. This point of view is contrary to the investment
management philosophy of managing the entire IT portfolio. The primary tool to
assess a project in best practice organizations is the post-implementation review.
Questions to ask include:
• How effective was the project in meeting the original objectives?
• How well did the project meet the planned implementation dates?
• What mission benefits has the project achieved, and do they match the
benefits projected? If not, why not?
• Were the original business assumptions that justified the system valid?
• What lessons did the team learn from this project?
The post-implementation review should inform senior management's decision
whether to continue, modify, or cancel operational systems.
Step 2: Decide on Adjustments
Using the results of the post implementation review as a baseline, decide
whether to continue without adjustment, to modify the system to improve
performance or, if necessary, to consider alternatives to the implemented system.
Even with the best system development process, it is quite possible that a new
system will have problems or even major flaws that must be taken care of in order
for the organisation to get the full benefit of its investment. The post
implementation review should provide executive management with useful
information on how best to modify a system, or to work around the flaws in a
system, in order to improve performance and to bring the system further in
alignment with the needs of its customers.
Step 3: Lessons Learned
Using the collective results of post implementation reviews across completed
systems, modify the organization's existing investment selection and control
processes based on lessons learned.
The information from post implementation reviews helps senior management
develop better decision criteria during the Selection process and improve the
evaluation of ongoing projects during the Control process.
20.3.4 TECHNIQUES FOR EVALUATING IT INVESTMENT PROPOSALS
(a) Cost benefits analysis:
The traditional technique of cost benefit analysis is very useful to measure the
quantifiable benefits of the IT application. Such benefits are more prominent in
the case of operational or tactical applications. The costs are the measurement of
resources required to obtain the benefits associated with the application. The
benefits may be in terms of cost savings, cost avoidance, increase in revenue and
the intangible benefits.
Many a time, it is difficult to assign a representative value to the intangible
benefits, such as improvements in resource utilisation, communication and
availability of information for decision making. And quite often the intangible
benefits are more dominant than the tangible ones.
It is, therefore, suggested that surrogate values, determined by consensus,
may be used as estimates of the intangible benefits. Another difficulty with the
cost-benefit analysis is that all the requirements and the impacts of the
applications may not be known or reasonably estimated right before the IT
infrastructure is developed for the application.
In such a case, the very reason for conducting the cost-benefit analysis may be
questionable. However, it is suggested that for most of the applications where the
amount of investment in IT infrastructure is substantial, such an analysis must be
done. The traditional tools for evaluating investment proposals such as Return on
Investment (ROI), Payback period method, Net Present Value (NPV) etc. may be
used to compare different proposals and determine business justification. This
method is very useful in substitutive and complementary applications.
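As an illustration of how these traditional tools can be applied to an IT proposal, the short Python sketch below computes NPV, a simple ROI and the payback period for a hypothetical stream of cash flows. The amounts and the 10% discount rate are assumed purely for the example.

# Hypothetical appraisal: Rs. 10,00,000 outlay followed by four years of
# net benefits, discounted at an assumed 10% per annum.
cash_flows = [-1_000_000, 300_000, 350_000, 400_000, 450_000]  # years 0..4
rate = 0.10

# Net Present Value: discount each year's cash flow back to today.
npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Simple ROI: total net benefit relative to the initial investment.
invested = -cash_flows[0]
roi = (sum(cash_flows[1:]) - invested) / invested

# Payback period: first year in which cumulative cash flow turns non-negative.
cumulative, payback = 0, None
for t, cf in enumerate(cash_flows):
    cumulative += cf
    if cumulative >= 0:
        payback = t
        break

print(f"NPV: Rs. {npv:,.0f}   ROI: {roi:.0%}   Payback: year {payback}")
# A positive NPV (about Rs. 1.7 lakh here) supports business justification.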
(b) Value linking:
Value linking focuses on improvement in business performance and not just
on cost savings. The improvement may be in terms of accuracy and quick
performance of activities, thereby increasing the capability/capacity of the
enterprise in handling greater volumes of activities with the same or a higher degree of
effectiveness. For example, an improved billing system shall ensure savings in the time
which is otherwise lost in corrections. It would also ensure that fewer
customers are lost due to bad service.
Consequently, revenue loss due to inability to identify errors in billing is
minimised. It will improve the reliability of sales information and thus improve the
planning and control process. However, it may not reduce manpower requirements
for the billing process in the short run.
The value linking method is very useful in substitutive applications as it focuses
on the impact of IT infrastructure on the ability of the enterprise to sustain
accelerated growth in activities without a corresponding increase in other
infrastructure costs.
(c) Value acceleration:
Value acceleration is used to assess the financial implications of time saved in
the business process with the help of IT infrastructure. For example, improved
billing system may result in bills being ready and dispatched in advance of due date
resulting in interest savings.
In many retail stores accepting payment through credit cards, such a saving
which is accelerated by time savings may be substantial with the increase in
volume and value of such transactions. Similarly, the costs involved in the completion
of a project may be reduced by reducing the time overruns involved in the
implementation of the project.
Timely availability of information regarding stock position would enable the
purchase department to negotiate with suppliers resulting in potential savings.
Such benefits may sometimes more than justify investment on IT infrastructure for
otherwise very routine and lower level activities.
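A rough, purely illustrative calculation shows how such time savings translate into money; the billing volume, days saved and interest rate below are invented assumptions, not figures from the text.

# Hypothetical value-acceleration estimate: interest saved when bills go
# out (and cash comes in) a week earlier each month.
monthly_billing = 5_000_000   # Rs. billed per month (assumed)
days_saved = 7                # billing cycle shortened by a week (assumed)
annual_rate = 0.12            # cost of working capital (assumed)

# Interest saved = amount * rate * (days gained / 365), every month.
monthly_saving = monthly_billing * annual_rate * days_saved / 365
print(f"Interest saved per month: Rs. {monthly_saving:,.0f}")
print(f"Interest saved per year:  Rs. {monthly_saving * 12:,.0f}")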
(d) Value restructuring:
Value restructuring focuses on the business values associated with
restructuring a job or function. It aims at measuring the benefits of an application
that stem from organisational change. Such benefits are obtained by combining,
eliminating or redefining responsibilities in different departments.
The restructuring may result in reduced response time, increased productivity,
job enrichment, improved communication, etc. By moving lower order
activities to IT infrastructure for automation, it is possible to make the organisation
leaner (with reduced workforce) and more competitive. The benefits of value
restructuring are more profound in functions such as legal advisory services,
perspective planning, R&D, etc.
(e) Innovation:
Information technology may help innovate the business activity by creating
new/alternative functions, products and services, and by opening up new niche
markets offering a competitive edge to the enterprise. In such a case, the ROI is
perhaps less important, and the value of being first, or the risk of not being there or
having to face failure, becomes more important, because in such cases the question
is less of cost and more of survival.
Such applications are strategic, and quantification of benefits from such
applications is difficult. Equally difficult is quantifying the costs of changes in
business processes that may be necessitated by the innovation.
20.3.5 THE MANAGERIAL ISSUES OF INFORMATION SYSTEMS
Although information technology is advancing at a blinding pace, there is
nothing easy or mechanical about building and using information systems. There
are major challenges confronting managers:
The information systems investment challenge: How can organizations
obtain business value from their information systems? Earlier in this chapter we
described the importance of information systems as investments that produce value
for the firm. We showed that not all companies realize good returns from
information systems investments. It is obvious that one of the greatest challenges
facing managers today is ensuring that their companies do indeed obtain
meaningful returns on the money they spend on information systems. It’s one thing
to use information technology to design, produce, deliver, and maintain new
products. It’s another thing to make money doing it. How can organizations obtain
a sizable payoff from their investment in information systems? How can
management ensure that information systems contribute to corporate value?
Senior management can be expected to ask these questions: How can we
evaluate our information systems investments as we do other investments? Are we
receiving the return on investment from our systems that we should? Do our
competitors get more? Far too many firms still cannot answer these questions.
Their executives are likely to have trouble determining how much they actually
spend on technology or how to measure the returns on their technology
investments. Most companies lack a clear-cut decision-making process for deciding
which technology investments to pursue and for managing those investments.
The strategic business challenge: What complementary assets are needed to
use information technology effectively? Despite heavy information technology
investments, many organizations are not realizing significant business value from
their systems, because they lack—or fail to appreciate—the complementary assets
required to make their technology assets work. The power of computer hardware
and software has grown much more rapidly than the ability of organizations to
apply and use this technology. To benefit fully from information technology, realize
genuine productivity, and become competitive and effective, many organizations
actually need to be redesigned. They will have to make fundamental changes in
employee and management behavior, develop new business models, retire obsolete
work rules, and eliminate the inefficiencies of outmoded business processes and
organizational structures. New technology alone will not produce meaningful
business benefits.
The globalization challenge: How can firms understand the business and
system requirements of a global economic environment? The rapid growth in
international trade and the emergence of a global economy call for information
systems that can support both producing and selling goods in many different
countries. In the past, each regional office of a multinational corporation focused on
solving its own unique information problems. Given language, cultural, and
political differences among countries, this focus frequently resulted in chaos and
the failure of central management controls. To develop integrated, multinational,
information systems, businesses must develop global hardware, software, and
communications standards; create cross-cultural accounting and reporting
structures; and design transnational business processes.
The information technology infrastructure challenge: How can
organizations develop an information technology infrastructure that can support
their goals when business conditions and technologies are changing so rapidly?
Many companies are saddled with expensive and unwieldy information technology
platforms that cannot adapt to innovation and change. Their information systems
are so complex and brittle that they act as constraints on business strategy and
execution. Meeting new business and technology challenges may require
redesigning the organization and building a new information technology (IT)
infrastructure.
Creating the IT infrastructure for a digital firm is an especially formidable task.
Most companies are crippled by fragmented and incompatible computer hardware,
software, telecommunications networks, and information systems that prevent
information from flowing freely between different parts of the organization. Although
Internet standards are solving some of these connectivity problems, creating data
and computing platforms that span the enterprise—and, increasingly, link the
enterprise to external business partners—is rarely as seamless as promised. Many
organizations are still struggling to integrate their islands of information and
technology.
Dependence on technological experts: Technological expertise is a
precondition for development and migration of new and complicated technology in
the institution, but the dependence on such expertise also represents a problem to
management. Managers cannot themselves have the necessary insight in every
technical question, and the use of internal and external advisers is crucial.
Technological experts tend to agree on putting ambitions high, but at the same time
they almost notoriously tend to disagree on specific choices of hardware, software
and methodology. On several occasions reluctance from management and the rest
of the organisation has saved it from wasting money and manpower, but this may of
course also prevent important decisions from being taken and slow down necessary
changes. Diverging opinions in the organisation require clear decisions by top
management, and once a decision has been taken, it must be followed by
information and the necessary resources.
Organisation: A decentralised organisation of IT experts is also an advantage
for their participation. However, questions of technological infrastructure (not only
hardware such as computers and networks) cannot be successfully solved without
a close co-operation between experts throughout the whole organisation. In many
cases the organisation as such will be better off with solutions that for some
projects may be considered suboptimal. Hence, IT requires strong co-ordination,
and this is of course a larger challenge in a decentralized organization than in a
centralised one. Another argument for strong co-ordination is the dependence on
specialists, who constitute a scarce resource.
Choice of technology: It may be risky to choose the most recent version of
technology if this is not well tried. Technology and especially software should be
purchased and not developed within the institution if convenient systems are
available. Open systems that communicate with each other and on which it is easy
to get support in the market might be preferred to more specialised systems even if
the latter are regarded as better. There is a tendency in most technological
environments to develop solutions in-house, which is natural since self-developed
software will more easily fulfil the specifications, and development is more
interesting than shopping. However, in addition to being expensive (when working
hours are taken into account), self-developed software is vulnerable since it might
be dependent on support from one or a few persons.
External advice: The issue of using external consultants is closely linked to
the choice between buying and developing. Technicians are often reluctant to ask
for external support for the same reason as they prefer to develop solutions
themselves. On the other hand there are many examples of too extensive use of
external consultants in many institutions which lack technological expertise. A
mixture is often optimal: one should use external experts when they obviously have
more experience than internal staff (or when internal staff are overbooked), but this
use requires a certain level of experience within the institution, to avoid being
dependent on the consultants and to implement the systems and ensure follow-up
of results. To management, external experts will often represent a useful "second
opinion" in questions where their own staff disagrees.
Human resources: Principles and plans for management and use of IT may be
good, but we will attain little if we do not have good human resources, even if
external consultants are used in an optimal way. The labour market for IT
specialists has varied over the years.
Ethics and security: The responsibility and control challenge: How can
organizations ensure that their information systems are used in an ethically and
socially responsible manner? How can we design information systems that people
can control and understand? Although information systems have provided
enormous benefits and efficiencies, they have also created new ethical and social
problems and challenges. Chapter 5 is devoted entirely to ethical and social issues
raised by information systems, such as threats to individual privacy and
intellectual property rights, computer-related health problems, computer crimes,
and elimination of jobs. A major management challenge is to make informed
decisions that are sensitive to the negative consequences of information systems as
well as to the positive ones.
Managers face an ongoing struggle to maintain security and control. Today,
the threat of unauthorized penetration or disruption of information systems has
never been greater. Information systems are so essential to business, government,
and daily life that organizations must take special steps to ensure their security,
accuracy, and reliability. A firm invites disaster if it uses systems that can be
disrupted or accessed by outsiders, that do not work as intended, or that do not
deliver information in a form that people can correctly use. Information systems
must be designed so that they are secure, function as intended, and so that
humans can control the process. Managers will need to ask: Can we apply high-
quality assurance standards to our information systems, as well as to our products
and services? Can we build systems with tight security that are still easy to use?
Can we design information systems that respect people’s rights of privacy while still
pursuing our organization’s goals? Should information systems monitor employees?
What do we do when an information system designed to increase efficiency and
productivity eliminates people’s jobs?
20.4 REVISION POINTS
• Attributes for successful IT investment
• Phases of investment control process
• IT investment evaluating techniques
• Managerial issues of information system
20.5 INTEXT QUESTIONS
1. What are the attributes for successful IT investment?
2. List out the overall risk factors and return factors of IT investment.
3. What are the factors considered when arriving at a final resource allocation?
4. Write a short note on cost benefit analysis.
5. Write brief note on value acceleration.
6. Discuss the managerial issues in information system.
20.6 SUMMARY
The critical attributes for successful IT investment are senior management
attention, overall mission focus, and a comprehensive portfolio approach to
IT investment.
This selection phase combines rigorous technical evaluations of project
proposals with executive management business knowledge, direction, and
priorities.
During the control phase, senior management regularly monitors the
progress of ongoing IT projects against projected cost, schedule, performance
and delivered benefits.
Evaluation is conducted after a system has been implemented, and is an
assessment of the project's success or failure.
The traditional technique of cost benefit analysis is very useful to measure
the quantifiable benefits of the IT application
Although information technology is advancing at a blinding pace, there is
nothing easy or mechanical about building and using information systems
20.7 TERMINAL EXERCISE
1. The __________ phase creates a portfolio of IT project investments designed to
improve overall organizational performance
2. IT proposals should be screened for the level of review as well as
__________and ________________
3. Taking Action to Correct Deficiencies is done in ___________ phase
4. _______________ is conducted after a system has been implemented.
5. ______________ focuses on improvement in business performance
20.8 SUPPLEMENTARY MATERIALS
1. https://www.igi-global.com/dictionary/it-investment-evaluation/15859
2. https://clintonwhitehouse2.archives.gov/OMB/inforeg/infotech.html
20.9 ASSIGNMENTS
1. Elaborately explain the phases of an IT investment proposal.
20.10 SUGGESTED READING/REFERENCE
1. Alexis Leon and Mathew Leon, Fundamentals to information Technology,
Leon Tech World, New Delhi, 2008.
2. Ramesh Behi, Information Technology for Management, Tata McGraw Hill,
New Delhi, 2012.
20.11 LEARNING ACTIVITIES
1. Identify a manager from an IT company. Have discussion about the
managerial issues and difficulties facing in his profession.
20.12 KEYWORDS
IT investment
Cost benefit analysis
Value linking
Value acceleration
Value restructuring

LESSON - 21
INTRODUCTION TO BUSINESS CONVERGENCE
21.1 INTRODUCTION
Convergence is a term being used more and more across different scenarios
but the general term is used to describe the fusing, or coming together, of separate
entities to form a unified whole. Whether that is the converging of technologies and
cross platform experiences (e.g. digital convergence of TV, mobile device, wearable
devices, virtual reality, etc) or the converging of the business as a whole, the
principle remains the same. I like to settle on this definition for Business
Convergence: Convergence is the act of working together to create a unified whole
to achieve a set purpose.
Within a Converged business, every department and individual working within
the organisation works harmoniously to fulfill the same goal(s). Additionally, the
focus of the converged business is always value. Value comes in many forms:
economic and financial, sustainability, ecological, social contribution, etc. A
Converged business is therefore highly efficient and competitively focused as it
single-mindedly pursues its strategic aims to fulfill its vision and purpose. Another
one of the core tenets of Convergence is transparency. Transparency of purpose and
strategy is paramount to the clarification of value; Convergence allows us to clearly
answer, “what does value mean to our business?”
With Business Convergence, and the clarity and communication of purpose
and value permeating throughout the business, the entire business understands
the course corrections that become necessary to allow it to flex and adapt to market
demands and opportunities.
21.2 OBJECTIVES
• To introduce the concept of convergence and business convergence
• To understand the concept of business convergence with information
technology
• To identify the profile of a business intelligence consultant.
21.3 CONTENTS
21.3.1 Business Converging with IT
21.3.2 Business–IT Consulting Convergence
21.3.3 Definition of Business and (BIT) IT Consulting
21.3.4 Profile of an Ideal BIT Consultant
21.3.1 BUSINESS CONVERGING WITH IT
Information technology is considered as a means through which the business
achieves its ends. No doubt, ends are always important but to reach ends, the
means, in our case information technology, is more exciting. In the ever-changing
world of business, this very tool not only helps reach ends but sometimes also
opens up new opportunities and possibilities. Whether it was resurrection of Apple
as a highly valuable company in the world or the emergence of Google and
Facebook— all resulted from business–IT convergence. Even the old-generation
industries (I am not talking about telecom or banking which have also got major
boost due to convergence) such as airlines, automobile and railroads and many
others have found new ends thanks to convergence. New ends could be new
product offerings (GPS-based navigation system and on-board computer) for
automobile manufacturers and new ways of doing business like ticketing, dynamic
pricing and route planning for airlines industry.
Many a time IT prompts opportunities and ideas for action because it is easier
to visualise the end results of any business decisions using IT. And, IT can be used
to implement any and almost every business decision. This is the precise reason
that management consulting and IT has to converge into one seamless whole. Let
us consider a situation for understanding the role of IT tool in conducting business.
Suppose you want to reach a mountain cliff, and you are discussing endlessly with
your friends how to accomplish this task. You talk about various alternative
pathways going to the mountain cliff, avoiding gorges on the way, safety measures
against wild animals and deadly insects, and protection from prickly thorns. And
then suddenly you are offered a high-altitude helicopter that you can afford, and
the skilful pilot promises to drop you safely at the mountain top. You are also
prepared for a bit of adventure and for climbing down the rope from the helicopter.
Would you not choose this tool to reach the top quickly and be ahead of others?
IT is that helicopter in business for running up to the top.
But make no mistake. Enjoying the beauty of the pathways while climbing up the
hill may still be a viable option, so other tools are equally important. Moreover, even
when the helicopter is available, the choice among the power and perfection of the
given tools will revert back to what suits the business strategy and objective.
We would examine the perspective that business and IT are no longer two
wheels of the same vehicle, each operating at a different pace, but are riveted to
each other. In fact, they are considered so close that they can be said to be
amalgamated. I am using some terms from the material sciences to explain this
perspective about business and IT because these terms are not only powerful but
have metaphoric power to convey an understanding of the virtual, abstract world.
The gradual emergence of a new order of the day is taking place, one which is
neither clear and sunny like IT nor dreamy and contemplative like a moonlit night
as in strategy. It is an overlap of both, a dusk or a dawn, a mixture of the two.
21.3.2 BUSINESS – IT CONSULTING CONVERGENCE
The decade of the 1990s gave rise to a new consulting theme in the form of IT
consulting that has now been transformed into a major consulting business.
Statistics tell the story of this transformation: while strategy consulting constitutes
8–10% of the management consultancy market, IT consulting alone has a lion's
share of 25–30%, and that does not include the services performed by IT firms. The
IT consulting share pertains to selecting the right information technology to
implement the organisation's strategic initiatives.
IT in itself became complex with issues emerging around outsourcing. Many
strategic consulting firms have added an IT consulting practice as follow-on work
with varying degrees of success. Likewise, cashing in on the exponential growth of ERP-
led business transformation work, several IT system integrators inched up the
consulting value curve through ERP-enabled business process re-engineering and
e-business strategy, which also meant reworking the client’s business models.
The boundaries of the enterprise are becoming porous due to the enterprise's
collaborations with suppliers, distributors and even the next stage of customers. With
these collaborations serving as a tool to gain and maintain competitive advantage,
the dependence on IT for strategy execution, exploration and exploitation is
becoming a norm. Vendors are managing inventory; logistics are outsourced; and
customer relationship, call centre management and business processes are
offshored to partner companies.
An essential characteristic required to survive in the post-industrial, modern
order is the flexibility to gather means of production, service and customer
engagement. While IT can facilitate efficiency in process, structure and
collaboration across internal and external entities, the organisation has to be in a
mode of contemplated action constantly. The conventional bureaucratic
organisations will not fare well in the volatile conditions of post modernity, as their
static structures limit the possibilities to adapt to new creative strategies. They
have to try multiple methods for addressing management and organisational
issues, where the role of IT as solution provider comes in very effectively.
Needless to say, such a multipronged approach would need a melding
of business and IT disciplines that only a new-generation BIT consultant can provide. All
this leads to the conclusion that no business consulting can ever happen without
IT, and IT consulting has to have a business component that goes beyond mere
requirements solicitation and the development or implementation of IT systems
and solutions as asked for. Consultants need to build the ability to understand and
contribute to overall business problem-solving and not just to implementing an IT solution.
21.3.3 DEFINITION OF BUSINESS AND (BIT) IT CONSULTING
BIT consulting, in short, is the business consulting that uses the prowess of
IT. Frequently, consulting professionals as well as management students ask this
question: Do I have to be an expert in IT to be a successful BIT consultant?
First and foremost, a BIT consultant is first a business consultant and then an
IT consultant. Any consultant is meant to address the client’s problem, and a BIT
consultant has to deal with a business problem. The first step on the path of
consultancy for a BIT consultant is to pick up deep knowledge of the business
domain, for example, the issues being faced by the industry and the way it is
changing. They should have as much knowledge of IT to understand its potential
and power to solve business problems. The IT knowledge requirement of a business
consultant is similar to that of an expert driver who knows which vehicle to choose
for driving in a hilly terrain, a desert or a crowded city. The driver knows how the
vehicle functions, but he has no idea of repairing major faults that lead to the
vehicle breakdown. Similarly, the business consultant is expected to have
knowledge of how various business IT tools function. But for problems in these
tools, he/she has to fall back on IT experts.
We do not want to give the impression that the BIT consultant is a new avatar
who would replace everything that has been used so far. It is just a new
perspective that will increasingly be used in the consulting world till it becomes a
new lingo practiced by most readers of this book.
Various themes of BIT consulting can be grouped as shown below:
Business and IT (BIT) theme
• IT strategy for business
• IT planning, IT application portfolio planning
• E-business strategy, including E-commerce and M-commerce strategy
• COTS-enabled business process design (CBPD)
• Cost management: integrated supply chain
• Customer-centric: CRM process redesign
• IT-initiated organisation change
Focused IT Management Theme
• IT architecture
• IT service management
• IT governance
• IT process quality improvement consulting
• IT knowledge management
• Business continuity and disaster recovery consulting
• Planning, budgeting and monitoring IT programmes
• IT security, data security and prevention of fraud and intrusion
You will notice that all the above-listed BIT consulting themes (the list is not
exhaustive but fairly representative) require a consultant to have a very high
component of business understanding. A successful BIT consultant need not have
extensive software development experience such as programming or coding. He has
to focus on business problems and on what technology can do, rather than on how
the technology works or how to make it work.
The following figure shows the convergence of business and IT consulting in the
past few decades. In the future the two will be like a woven cloth where one yarn acts
as the warp thread and the other as the weft thread. Focusing on only one (either
business or IT) will not be enough. While pure-play business consulting may yield
precision of business diagnosis, the BIT consulting approach brings results closer to
realisation.
The following figure shows a simple framework for understanding the importance
of the interdependence of these two consulting disciplines.
Based on our discussion so far, we can draw the conclusion that BIT
consulting is an integrated approach for solving a business problem using a
combination of strategy, process, technology and people. A typical BIT consultant
may start the first level of his engagement by interacting with the CEO and his/her
direct reports to address issues of a strategic nature. Some of these issues could be
defining new market segments, product entry strategy, mergers and acquisitions,
organisation redesign and defining growth strategies and objectives. Addressing all
these issues would also require technology intervention.
The second level of BIT consulting engagement happens with the operational
managers to redesign company processes and organisation change initiatives. Once
the processes are designed and agreed upon, the next phase would begin to
develop an IT strategy and solution road map for implementing them. This will also
need IT processes and infrastructure to be designed and provisioned for, in line
with the future growth strategies of the business. BIT consultants usually have a
clear industry focus and are technology agnostic. They would build their focus on
specific functional areas such as marketing, human resources, banking, insurance
and financial services, public sector and government, and healthcare.
21.3.4 PROFILE OF AN IDEAL BIT CONSULTANT
Some of the personality traits and key skills required in BIT consultants are as
follows:
Objectivity
BIT consultants must be objective and impartial while making their
recommendations. They must uphold the highest standards of professional ethics to
be seen as role models. The quality of their output —be it analysis of facts,
diagnostic presentation, communication or report presentation— should be of a very
high order.
Team Work
Consulting is team work where every member should bring a specialised
ability and avoid transgressing areas that are beyond his/her range of expertise.
Simple Language
BIT consultants should avoid using complex terminologies so that their
analysis and recommendations are easy to understand and implement by the
clients. They must practice writing using active language, avoiding passive
expressions.
Keeping Aloof from Client’s Organisation Politics
BIT consultants are typically expected to address the client’s issues/problems,
many of which would be impacting the client staff and management power
equation. Despite their knowledge of different aspects of the client organisation, the
consultants should stay clear of the organisation’s politics.
Well Groomed Look
BIT consultants must have a pleasant appearance and should be groomed for
business-like behaviour. They must avoid overdressing or under-dressing.
Sometimes they need to dress according to the occasion or as per the dressing
norm of the client’s organisation.
Continuous Learning
To enhance their continuous learning, BIT consultants should join technical or
professional management bodies such as management consulting institutes and
management associations. They must also participate as guest lecturers in
management institutes or public seminars to be seen and heard. It will also help
them gain confidence and experience on diverse subjects and thus sharpen their
thinking. They should also publish articles and research reports and share them
with their clients for the latter’s education. They should read a lot and should have
a good retentive memory. A BIT consultant must build a repertoire of articles, books
and websites for use in future assignments.
Creating Niche for the self or the client
BIT consultants must strive to carve out a unique personality professionally.
They should be able to recognise business trends and devise consulting services to
help clients follow these trends. They must strive to establish best practices and
expand their sphere of activity.
Full of enthusiasm
Finally, a BIT consultant must bring enthusiasm to his/her engagement with
the client.
21.4 REVISION POINTS
• Business Convergence
• Definition
• Business convergence and IT
• Ideal business convergence IT (BIT) consultant
21.5 INTEXT QUESTIONS
1. Write short note on convergence.
2. What do you mean by business convergence with IT?
3. List down the various BIT consulting themes.
21.6 SUMMARY
Convergence is the general term used to describe the fusing, or coming
together, of separate entities to form a unified whole.
Information technology is considered as a means through which the
business achieves its ends.
A new consulting theme in the form of IT consulting has now been
transformed into a major consulting business.
A successful BIT consultant need not have extensive software development
experience such as programming or coding
BIT consultants must be objective, interested in continuous learning, and
cooperative in team work.
21.7 TERMINAL EXERCISE
1. ________________ is the term used to describe the fusing, or coming together,
of separate entities to form a unified whole
2. _______________________ plays an important role in the ever changing world.
3. _______________ is an essential characteristic required to survive in the post-
industrial era.
4. BIT stands for ________________
5. _________________ and ___________________ are the important personality traits
a BIT consultant should possess.
21.8 SUPPLEMENTARY MATERIALS
1. https://www.linkedin.com/pulse/business-convergence-agile-pursuit-
purpose-value-strategic-kent
2. https://www.strategyblocks.com/blog/business-convergence-strategy/
21.9 ASSIGNMENTS
1. Enumerate the personality traits and key skills required in BIT consultants.
21.10 SUGGESTED READING/REFERENCE
1. Sanjiva Shankar Dubey, Management and IT Consultancy, McGraw Hill,
New Delhi, 2012.
2. https://foundry4.com/industry-convergence-going-beyond-business-
boundaries
21.11 LEARNING ACTIVITIES
1. Browse the net and identify the companies involved in convergence. Learn
what is happening in that industry.
21.12 KEYWORDS
Business Convergence
Information Technology
BIT – Business Convergence IT

LESSON - 22
BIT CONSULTING PROCESS
22.1 INTRODUCTION
BIT consulting is an integrated approach for solving a business problem using
a combination of strategy, process, technology and people. A typical BIT consultant
may start his engagement by interacting with the CEO and his/her direct reports
to address issues of a strategic nature. Some of these issues could be defining new
market segments, product entry strategy, mergers and acquisitions, organization
redesign and defining growth strategies and objectives. Addressing all these issues
would also require technology intervention.
22.2 OBJECTIVES
• To understand the process of BIT consulting
• To recognize the importance of preparing and entering into contract
22.3 CONTENTS
22.3.1 BIT consulting: An Integrated Stepped Approach
22.3.2 Proposal Development
22.3.3 Concluding and Entering Into the Contract
22.3.4 Executing the Consulting Engagement - Steps
22.3.1 BIT CONSULTING: AN INTEGRATED STEPPED APPROACH
Based on our discussion so far, we can draw the conclusion that BIT
consulting is an integrated approach for solving a business problem using a
combination of strategy, process, technology and people. A typical BIT consultant
may start the first level of his engagement by interacting with the CEO and his/her
direct reports to address issues of a strategic nature. Some of these issues could be
defining new market segments, product entry strategy, mergers and acquisitions,
organization redesign and defining growth strategies and objectives. Addressing all
these issues would also require technology intervention.
The second level of BIT consulting engagement happens with the operational
managers to redesign company processes and organisation change initiatives. Once
the processes are designed and agreed upon, the next phase would begin to
develop an IT strategy and solution road map for implementing them. This will also
need IT processes and infrastructure to be designed and provisioned for, in line
with the future growth strategies of the business. BIT consultants usually have a
clear industry focus and are technology agnostic. They would build their focus on
specific functional areas such as marketing, human resources, banking, insurance
and financial services, public sector and government, and healthcare.
Consulting Process: From Start to End
A typical consulting process consists of the following major sub-processes:
1. Prospecting, qualifying and preparing: This sub-process involves seeking out
clients, marketing consulting services, preparing a list of prospects,
qualifying them and preparing for the first meeting.
2. Understanding and proposing: Understanding the client needs and
developing a suitable consulting proposal.
3. Concluding and entering into contract: In this sub-process, the client
agreement on the consulting proposal is sought and a contract is entered
into.
4. Executing the engagement: This is the sub-process of fulfilling the obligation
of executing the consulting engagement as enumerated in the original
proposal.
5. Recommending and implementing: In this sub-process, the consulting
recommendation or implementation work plan is presented to the client
management and their agreement is sought. After the management’s
approval, supervision for the implementation of the consulting
recommendation is undertaken. In this step, the consulting report is
converted into a document that contains the planning steps for the actual
implementation of the consulting recommendation.
6. Closing and collecting: In this sub-process, the consulting contract is closed,
the client feedback on the consulting engagement is sought, and the
remaining payment is collected from the client. The client feedback is
analysed, and the post-engagement debrief is conducted to gain new
learning. Finally, opportunities for the next or similar engagements with the
client are identified.
7. Managing the follow-on engagement and continued relationship: This is an
ongoing process through which continued engagement and relationship with
the client are ensured, in order to seek the same consulting engagement or
new engagements in the future.
8. Harvesting intellectual assets, managing knowledge: This process covers the
collection, codification and reuse of intellectual products, tools, templates
and research outputs for other engagements. This process is important, as it
adds to the distinction of a consulting firm, thus helping it emerge as a
strong brand. It also enhances the firm's operational efficiency. Knowledge
management helps in the skill building of new entrants and acts as a
valuable source for the experienced ones.
9. Managing consulting practice profitability and growth: This process covers
the business management aspect of the consulting practice; profitability,
team building, brand building and managing growth are the areas this
process takes care of.
10. Building and maintaining consulting skills: All consulting firms are
respected for the cutting-edge skills of their consultants. Building and
maintaining the skills of the consultants in newer business areas and newer
technologies therefore becomes an important sub-process.
A schematic representation and the interrelationship of all the sub-processes of
the consulting process is shown in the figure.
22.3.2 PROPOSAL DEVELOPMENT


All consulting proposals for a client must be written proposals. And, unless the
client is very familiar, a proposal should not be sent by email. A well-drafted
proposal makes a consultant look professional, and it also helps in the following
aspects:
• Defines the scope, objectives and deliverables clearly
• Makes engagement management easier.
• Gives the consultant protection against any dispute including legal one.
All proposals must be accompanied with a suitably drafted covering letter. A
template for such a letter is given.
Template for covering Letter
Client Name
Title
Company Name Address 1
Address 2
Dear <the client>
This is with reference to our meeting on <meeting date> regarding the consulting
assignment <engagement name>. We are happy to submit a detailed proposal after
our fact-finding study, covering the following:
Objective, scope, deliverables and proposed method
Engagement plan, milestones and the composition of the consulting team
Schedule of consulting fee and expenses
Support required from your side
Terms and conditions of our consultancy work
We believe this proposal is adequate to cover our discussion. However, please do
not hesitate to revert with your observations.
Please feel free to call me.
Yours sincerely,
<your name>
The content of a consulting proposal must have headings such as
• Background of the consulting study
• Objectives and scope of the consulting study
• Scope should cover location as well as functional scope
• Issues and problems to be addressed during the consulting study
• Methodology of the study
• Project plan
• Team composition of both the consulting team and the team from the client
side. The latter is presented as a requirement from the consultants to make
the consulting exercise easier and faster to undertake.
• Governance structure: How the consulting exercise progress will be
monitored and how disputes or any issues arising during the consulting
phase will be addressed.
• Responsibility of the consultant and the client
• Exclusion
• Consulting fee, payment terms and conditions if any
Both Word and PowerPoint proposals are accepted. The final fee can be
conveyed separately in a covering letter in some cases.
22.3.3 CONCLUDING AND ENTERING INTO THE CONTRACT
At this stage the client’s consent on the consulting proposal is sought and the
contract is entered into. Making the client agree to start the consulting engagement
is the most difficult of all the sub-processes of the consulting process. Called
‘closing the engagement’ in the consultant’s vocabulary, this step is normally driven
by the principals, partners and directors of the consulting firm.
Before arriving at this step, the consulting team would have spent several days
on the fact-finding study and discussed with the client the likely approaches to
tackle the issues for which the consulting was proposed. While doing this, the
consulting firm may have exposed its intellectual prowess. This is a
critical moment for the consultant because the client managers may be tempted to
feel that they now know enough to do the remainder of the work on their own. This
is usually a false proposition in most cases, as the execution of the consulting
engagement will lead to detailed recommendations and action plans that are just
presented as bulleted points in the fact finding study. Many a time a totally new
insight is obtained during the consultation execution. In addition, a lot of work goes
in detailing out the recommendations during the consulting engagement which
requires deep experience of consultants. It is like I may know the path but it will
not take me to the destination, I have to have the abilities to walk as well! I had
been in the same situation with some of the clients where such apprehension did
arise. My take in all such cases was that by copying a recipe book you do not get a
good recipe made! You need skills to actually realise what may be written in the
proposal document that comes through deep experience of the consulting
processes. This insight is worth noting by our readers who may be on the other side
of the consulting table as recipient or buyer of consulting services.
If the consulting proposal is not up to the mark, or if the consultant has not
been able to demonstrate confidence in doing the task, then rejection of the
proposal by the client is totally justified. However, if the client managers think that
they have learnt enough during the proposal process to do things on their own and
would be able to save the consultant’s fee, then their assumption is too simplistic.
Many organisations that have adopted this approach later realised that a consulting
assignment is the unique process by which the organisation goes through the
learning. While it may lead to 100 pages of PowerPoint charts, the entire
exercise of producing them is most useful for the organisation.
Costing and Pricing the Consulting Engagement
This step may also be the time when the value of the engagement vs. the cost
of the engagement may be questioned. Typically, the engagement cost is based on
the amount of time spent by different consultants in its implementation. The client
may start calculating the cost based on the labour rate of consultants and the time
spent by them. The smartness at this stage is to price the engagement on the value
it is likely to deliver rather than the man-hour rate. A consulting engagement is
performed by a team consisting of full-time members and part-time members like
the partner, principal and subject matter experts. A blended rate is usually arrived
at to price the engagement. The table below shows a typical method of calculating
the blended rate.

Typical consulting rate calculator (labour based)

Role                                     Rate/day   % time spent (example)   Cost
Associate Consultant                       100               90                90
Senior Consultant                          200               50               100
Principal or partner                       300               30                90
Blended cost                                                                  280
Overhead (cost of sales and
logistics expenses)                                          30%
Blended cost with overhead                                                    364
With profit margin (35%)                                                      491
Blended rate/week (using 5 days/week)                                        2455

Note: These rates are representative numbers.
For example, the Rs. 100 cost of an associate consultant is arrived at by
calculating 1.5 times the salary cost, to cover non-work hours, annual leave,
office space overheads and other benefits paid to the employee.
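The same calculation can be written as a few lines of Python; the sketch below simply reproduces the representative figures from the table above (the daily rates, time shares, 30% overhead and 35% margin).

# Reproduces the blended-rate calculation shown in the table above.
team = [
    # (role, rate per day, fraction of engagement time spent)
    ("Associate Consultant", 100, 0.90),
    ("Senior Consultant",    200, 0.50),
    ("Principal or partner", 300, 0.30),
]
overhead = 0.30   # cost of sales and logistics expenses
margin = 0.35     # profit margin

blended_cost = sum(rate * share for _, rate, share in team)   # 280
with_overhead = blended_cost * (1 + overhead)                 # 364
daily_rate = with_overhead * (1 + margin)                     # 491.4 (table rounds to 491)
weekly_rate = round(daily_rate) * 5                           # 2455, using 5 days/week

print(f"Blended cost/day: {blended_cost:.0f}  with overhead: {with_overhead:.0f}")
print(f"Billable rate/day: {daily_rate:.0f}  rate/week: {weekly_rate}")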

Value Based Pricing


This is a new approach for costing and pricing a consulting engagement.
Depending upon the likely estimated saving or gain (say, profit) arising out of the
consulting engagement, the consultant is paid a percentage of it as the consulting
fee. Typically, this fee will be based on the first-year saving or the saving after two
years of implementation. Until then, the consulting firm takes a minimum margin
or expense that can be offset against the future payment.
There could also be a risk-and-reward-based payment, which means the
consulting firm is given an additional fee if the results exceed the estimated returns,
or it loses a part of its fee if the results fall below a certain level.
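As a purely hypothetical sketch of how such a risk-and-reward arrangement might be computed, the Python snippet below uses an invented savings estimate, fee shares, floor and penalty; the text does not prescribe any of these figures.

# Hypothetical value-based fee with a risk-and-reward band.
estimated_saving = 10_000_000   # projected first-year saving (assumed)
base_share = 0.10               # consultant's agreed share of realised savings
bonus_share = 0.05              # extra share on savings above the estimate
floor = 0.70                    # below 70% of the estimate, a penalty applies
penalty = 0.25                  # fraction of the fee forfeited in that case

def consulting_fee(actual_saving: float) -> float:
    fee = base_share * min(actual_saving, estimated_saving)
    if actual_saving > estimated_saving:            # reward for exceeding target
        fee += bonus_share * (actual_saving - estimated_saving)
    elif actual_saving < floor * estimated_saving:  # risk borne by the consultant
        fee *= 1 - penalty
    return fee

for outcome in (5_000_000, 10_000_000, 14_000_000):
    print(f"Saving {outcome:,} -> fee Rs. {consulting_fee(outcome):,.0f}")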
Entering Into Contract
Contracting is the process of formal drafting of the agreement for the
engagement proposal. Here the consultant and the client agree to list what is
expected of each other. Both make a clear agreement to work together. During
the contracting process, the consultant must ensure adequate and absolute
support from the client in terms of resources, executive time for the fact-finding
study and workshop, access to data and a confidentiality clause. The end-result of
the contracting process is a detailed document covering the consulting project
objective, final deliverables, work plan with completion dates and milestones, the
composition of both the client and consultant teams, and the mechanism of
engagement governance.
22.3.4 EXECUTING THE CONSULTING ENGAGEMENT
After engagement planning, engagement team formation and ratification of
engagement objectives, the engagement enters the execution stage. BIT consulting
execution consists of the following phases:
320

1. Definition phase: identifying the scope and defining the boundary of the
problem
2. Data gathering phase: Understanding the current situation, gathering data
and brainstorming for alternate ideas.
3. Diagnosis phase
• Defining objectives and performance measures
• Assessing the organisation
• Assessing the external environment
• Developing and testing hypothesis
• Arriving at the confirmed diagnosis
4. Solution and recommendation phase
• Generating alternative solutions
• Conducting cost benefit analysis for each solution
• Selecting the optimal and feasible solution
• Seeking management approval
5. Implementation planning and implementation management phase
• Planning the stages for implementation
• Implementing the solution
6. Evaluation of results
In this phase, the BIT consultant evaluates the results with respect to the
objectives set in phase III(a)
22.4 REVISION POINTS
• Consulting process
• Proposal development
• Entering into the contract
• Executing the consulting engagement.
22.5 INTEXT QUESTIONS
1. Discuss the steps in BIT consulting process.
2. List down the importance of a proposal.
3. Prepare a template of a proposal covering letter.
4. Write a short note on value based pricing in the context of BIT consulting.
22.6 SUMMARY
BIT consulting engagement happens with the operational managers to
redesign company processes and organisation change initiatives
Prospecting, qualifying and preparing
Understanding and proposing
Concluding and entering into contract
Executing the engagement
Recommending and implementing
Closing and collecting
Managing the follow-on engagement and continued relationship
22.7 TERMINAL EXERCISE
1. ______________ involves seeking out clients, marketing consulting services,
preparing a list of prospects
2. ______________ prepares the BIT proposal.
3. ______________ is the process of formal drafting of the agreement for the
engagement proposal.
22.8 SUPPLEMENTARY MATERIALS
1. https://www.networkworld.com/article/2318651/the-convergence-of-
business-process-management-and-business-service-management.html
2. https://ubiquity.acm.org/article.cfm?id=1276167
22.9 ASSIGNMENTS
1. Specify the steps in executing the consulting engagement.
22.10 SUGGESTED READING/REFERENCE
1. Sanjiva Shankar Dubey, Management and IT Consultancy, McGraw Hill,
New Delhi, 2012.
2. https://foundry4.com/industry-convergence-going-beyond-business-
boundaries
22.11 LEARNING ACTIVITIES
1. Browse and find the structure of proposal and how contract are made.
22.12 KEYWORDS
Prospecting
Intellectual asset
Contracting

LESSON - 23

EXECUTING THE CONSULTING ENGAGEMENT


23.1 INTRODUCTION
A business leader’s job is to successfully move the organization forward and
improve business performance. This requires a continuous process of reflection,
analysis, planning and execution. There is no better way to learn/practice these
skills than “by doing”. It doesn’t matter how good your strategies and tactics are, if
you don’t execute them well it won’t really matter. Much has been written about
execution, and many tools and frameworks exist. We shall discuss how executing
the consulting engagement can be done effectively.
23.2 OBJECTIVES
• To study the steps in executing the consulting engagement
• To gain in-depth knowledge of each phase
• To gain insights into the consultant-client relationship
23.3 CONTENTS
23.3.1 Executing the Consulting Engagement
23.3.2 Definition Phase
23.3.3 Diagnosis Phase
23.3.4 Solution and Recommendation Phase
23.3.5 Implementation planning and implementation management phase
23.3.6 Evaluation of Results
23.3.7 Recommending and Implementing
23.3.8 Implementation Planning
23.3.9 Closing and Collecting
23.3.10 Consultant-Client Relationship
23.3.1 EXECUTING THE CONSULTING ENGAGEMENT
After engagement planning, engagement team formation and ratification of
engagement objectives, the engagement enters the execution stage. BIT consulting
execution consists of the following phases:
1. Definition phase: identifying the scope and defining the boundary of the
problem
• Data gathering phase: Understanding the current situation, gathering
data and brainstorming for alternate ideas.
2. Diagnosis phase
• Defining objectives and performance measures
• Assessing the organisation
• Assessing the external environment
• Developing and testing hypotheses
• Arriving at the confirmed diagnosis
3. Solution and recommendation phase
• Generating alternative solutions
• Conducting cost benefit analysis for each solution
• Selecting the optimal and feasible solution
• Seeking management approval
4. Implementation planning and implementation management phase
• Planning the stages for implementation
• Implementing the solution
5. Evaluation of results
In this phase, the BIT consultant evaluates the results with respect to the
objectives set in the diagnosis phase (defining objectives and performance measures).
23.3.2 DEFINITION PHASE
In this phase of a BIT consulting engagement, the problem statement is
defined properly, and the scope of the problem is controlled by defining the problem
boundary. The problem boundary or scope could be based on the number of
offices, the number of plants to be visited, the product line and the location. Defining
the problem in detail is the first step of engagement execution. As much detail as
possible is collected about the problem, and the problem is broken into its
component elements to solve it faster and more accurately.
Data Gathering Phase
This comprises three steps:
1. Understanding the current situation
2. Data gathering and
3. Generating ideas
Understanding the current situation
The consultant must establish a good understanding of the current state of the
client's business, especially around the problem area defined in the scope of the
engagement. At this stage, key individuals and their backgrounds are also explored
to gain an overall understanding of the organisation and related issues.
Data gathering
Once the problem is defined in sufficient detail, data is gathered for
each area mentioned in the scope document. Relevant documented data could be
available in the organisation about the following aspects:
• Machine capacity
• Work layout
• Work processes
• Organisation chart, describing roles and responsibilities
• Balance sheet, profit and loss account statement
• Flow charts or process chart
The organisation also has soft data collected during executive interviews,
employee or customer surveys, and observations that a consultant makes by
examining the organisation's issues.
For business problem solving, personal interviews with a cross section of key
executives and stakeholders are a widely used and recommended method to gather
data, collect information and seek opinions, which may not be easily available
otherwise. For a larger coverage of the client population, a method called focused
group sessions or workshops is used to gather data and seek better inputs
and suggestions.
To conduct a face-to-face interview of a senior executive or run a focused
group workshop with senior departmental representatives, the consultant must do
proper homework, such as developing a list of questions to be asked or preparing a
presentation outlining the conduct of the focused group. This must be done well
before fixing the interview or the focused group workshop schedule. The interview
questions or the focused group workshop outline should be discussed with the
senior consultant and feedback obtained. If the person to be interviewed is a senior
executive, or the group workshop involves several senior executives or managers,
then even the principal or partner would like to review the consultant's preparation.
Generating ideas
Consulting is all about solving problems by exploring fresh, cost-effective,
workable ideas to define a problem or to arrive at a solution. There are several
techniques that can be used for the generation of new ideas. One such technique is
brainstorming, which can be conducted by several methods.
23.3.3 DIAGNOSIS PHASE
Just collecting the minutest details and data about every activity
concerning the client's problem may not lead to a solution. After collecting
sufficient data, the consultant needs to work on the problem diagnosis. Typically,
the problem diagnosis process is carried out through the following steps:
Defining objectives and performance measures
In this step the consultant examines the formal and informal data gathered so
far to determine objectives, opportunities for improvement or barriers to successful
performance. The main aim here is to find the measures that can be used to track
performance improvement when the consultant's recommendations are
implemented.
Assessing the organisation
The internal assessment of the organisation is done to identify factors that
create or destroy the value of the products and services of the firm. This step is
performed to find how well the organisation is equipped to face business
challenges and what glaring problems are being faced by it. In order to assess the
current level of the organisation, consultants typically conduct a SWOT analysis and
assess the internal alignment of the firm.
Assessing the external environment
Any company exists in a business ecosystem. While conducting the diagnosis
of the problem, apart from conducting the internal assessment of the organisation,
the entire business ecosystem should also be studied. The external assessment or
the business ecosystem study is conducted to understand those factors that help
the firm create value for its customers and stakeholders or act as limits for it. These
external influencing factors are determined using models such as PEST and Porter's
competitive forces analysis.
Developing and testing hypotheses
Management consulting is all about getting faster and useful resolution of the
client's problems. A client expects a management consultant to help it identify the
problem and then provide the right solution. All consultants attempt to solve a
problem by forming some possible hypotheses about the likely cause of the
problem and then trying to prove or disprove them. This hypothesis-based approach of
BIT consultants is the result of their deep understanding of business situations and
the likely causes of business problems. Their deep understanding stems from years
of domain experience and having solved similar business problems in several client
situations. Once the requisite data is collected to support the consultant's
hypothesis, he/she would offer a solution and convince the client to go for it.
The essential steps of hypothesis development and testing are listed below; a small illustrative sketch follows the list:
• Develop alternative competing descriptive hypotheses about the cause of the
problem.
• Test the descriptive hypotheses with the available data to check whether the
data points towards the problem.
• Envision a solution (prescriptive) hypothesis for solving the problem.
• Go back to the descriptive hypotheses if new data points towards any
anomaly.
• Build suitable rationale to convince the client about the solution.
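As a minimal illustration of the testing loop above, the sketch below (in Python) expresses each competing descriptive hypothesis as a check against the gathered data and carries forward only the supported ones. All metric names, values and hypotheses here are hypothetical assumptions, not drawn from any specific engagement.

# A minimal sketch of hypothesis testing against gathered data.
# Metric names, thresholds and hypotheses are illustrative assumptions only.
data = {
    "warehouse_utilisation_pct": 62,     # e.g. from capacity and work layout studies
    "order_entry_error_rate_pct": 7.8,   # e.g. from process charts and soft data
}

# Competing descriptive hypotheses about why deliveries are late.
hypotheses = {
    "Capacity shortage: warehouse is over-utilised":
        lambda d: d["warehouse_utilisation_pct"] > 90,
    "Process defect: order entry errors cause rework":
        lambda d: d["order_entry_error_rate_pct"] > 5,
}

# Keep only the hypotheses the data supports; if later data contradicts one,
# go back and revise it before framing the prescriptive (solution) hypothesis.
supported = [name for name, test in hypotheses.items() if test(data)]
for name in supported:
    print("Supported:", name)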
Arriving at the confirmed diagnosis
Finally, when all hypotheses are tested and evaluated, the BIT consultant has
to summarise the findings. This is the step in which the consultant has to finally
convert the fact finding and all the data analysis into conclusions. The BIT
consultant will recommend the solution or changes to address the problem.
23.3.4 SOLUTION AND RECOMMENDATION PHASE
All business problems will have to be ultimately addressed by finding solutions
for them. In the solution and recommendation phase, the BIT consultant proposes
a solution and accompanying recommendations. While doing so, the consultant not
only has to carefully consider various solutions but also has to match them with
the client's capacity to use and adopt them. So he carefully weighs and ranks all
solutions with proper justification so as to convince the client why a particular
solution is necessary and suitable for solving the business problem. A good BIT
consultant is one who solves the business problem with the least technology
intervention for long-lasting results.
In the first step, the BIT consultant summarises the advantages and
disadvantages of every solution. The second step of the phase is to look at the
advantages of all solutions and arrive at a comparative summary for each one of
them. Another important aspect concerning the available solutions that needs
consideration in this phase is the "ease of implementation" of each solution. A small
illustrative sketch of such a comparative ranking follows.
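The comparison described above often amounts to a weighted scoring of alternatives. The Python sketch below is purely illustrative: the criteria, weights, solution names and scores are assumptions, and a real engagement would derive them from the cost-benefit analysis and client discussions.

# A minimal, hypothetical weighted-scoring sketch for ranking solutions.
# Scores are on a 1-10 scale; for "cost", a higher score means a cheaper option.
criteria_weights = {"benefit": 0.5, "cost": 0.2, "ease_of_implementation": 0.3}

solutions = {
    "Customise existing ERP": {"benefit": 7, "cost": 8, "ease_of_implementation": 8},
    "Build a bespoke system": {"benefit": 9, "cost": 3, "ease_of_implementation": 4},
    "Outsource the process":  {"benefit": 6, "cost": 7, "ease_of_implementation": 9},
}

def weighted_score(scores):
    # Sum of each criterion score multiplied by its weight.
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Present the alternatives to management in ranked order.
for name, scores in sorted(solutions.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.1f}")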
Once the ranked solutions are presented with evaluation criteria, the
management of the client organisation has to take a decision. The consultant must
help the client management with decision making and then revise the alternative
recommendations based on their feedback. The revision of recommendations is
required as the client’s senior management may have views about their
implementation and cultural suitability.
23.3.5 IMPLEMENTATION PLANNING AND IMPLEMENTATION MANAGEMENT PHASE
Any solution, howsoever sophisticated or good it may be, will not lead to
desired results if it is not implemented properly. The first step of the
implementation phase is formulating a good implementation plan that clearly
defines tasks and responsibilities. The plan should have milestones and also
governance mechanism so that the progress of the plan could be measured and
monitored. Tools such as a steering committee, a program management office and
war rooms can be used to ensure that the implementation is done with speed,
quality and within budget. The solution implementation largely implies allocating
and deploying resources. Resources would include money, men and methods that
the organisation must acquire and deploy as and when necessary as per the
implementation plan. Most managers select some great solutions but then fail to
implement them successfully because of improper planning. Planning is very
important for solution implementation so that the organisation knows what
resources are to be deployed when.
23.3.6 EVALUATION OF RESULTS
The final act of the BIT consultant is to help the client evaluate the results for
which the entire exercise was started. The results that follow the solution
implementation must be evaluated against what was proposed to be achieved through
the solution. The actual results may not be exactly what was envisaged. All
projects need some fine tuning to address the reasons that made achievement of
the proposed results impossible. These reasons could be that the solution does not meet
all the parameters; the business needs may have changed; the requirements
visualised may have changed; or key people may have changed roles. In all these
cases, the BIT consultant's job is to take the necessary next steps to overcome the
apparent shortcomings of the original solution. Normally, the results evaluation is
done some time after the implementation of the solution. A small illustrative sketch
of such an evaluation follows.
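As a minimal illustration, the evaluation can be reduced to comparing post-implementation measures against the targets set in the diagnosis phase. The metrics, targets and actual values below are hypothetical placeholders, not data from any real engagement.

# A minimal sketch of results evaluation against diagnosis-phase targets.
# Metric names and values are illustrative assumptions only.
targets = {"order_cycle_days": 5.0, "error_rate_pct": 2.0, "cost_per_order": 40.0}
actuals = {"order_cycle_days": 6.2, "error_rate_pct": 1.8, "cost_per_order": 43.5}

for metric, target in targets.items():
    actual = actuals[metric]
    # For these metrics, lower is better, so a positive variance is a shortfall.
    variance = actual - target
    status = "on target" if variance <= 0 else "needs fine tuning"
    print(f"{metric}: target={target}, actual={actual}, variance={variance:+.1f} ({status})")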
23.3.7 RECOMMENDING AND IMPLEMENTING
The end result of the consulting engagement would be the recommended
actions and their implementation. This sub-process covers detailing out the
recommendations and also, if the client desires, continues till these are implemented.
Roughly around 50-60% of consulting recommendations may not include
implementation, especially if they pertain to areas like organisation policies,
structure and HR processes. However, in recent times more and more
consultants are participating in the implementation of consulting recommendations
as desired by the clients. This way the client is able to ensure that the onus of
achieving objectives lies with the same consulting firm, but with a different team
from the consulting firm having implementation skills. In this regard the
implementation of IT projects is a good example where the same consulting firm
takes the implementation responsibility as well.
23.3.8 IMPLEMENTATION PLANNING
The sub-process of the consulting project implementation starts with planning
for implementation. This is the step when the consultant and the client jointly
agree on the recommendations and the way they would like to implement them. The
implementation of some recommendations (actions) can happen without the
consultant's supervision, some with occasional consultant inputs or checkpoints,
and some can only be done through the full involvement of the original consultant
team. This is an exercise to ensure that recommendations are agreed upon and
commensurate action plans are given commitment by the client management. It is
important that the client and the consultant agree on a process that articulates a
monitoring structure to maintain the constancy of purpose. Typically, this is done
in a planning session involving key client team members, key project members and
key stakeholders. Any implementation includes a fair amount of client education
and collection of opinions and feedback from all those involved. By doing proper
implementation planning, the consultant ensures that proper resources are
provided for and that the client's participation and support have been enlisted. This is a
useful step to garner support, participation and commitment from the client
organisation to proceed further. Many a time, due to a change in the leadership within
the client organisation during the consulting engagement (execution or
implementation), the process may be disrupted. But having a suitable governance
mechanism in place reduces the disruption consequent to the termination of the
consulting contract or to a change in the client management.
Implementation is the ultimate phase of any consulting assignment, when the
real benefits start accruing to the client. It may take long, it may be difficult, and it
may require more changes, but it is the most enjoyable phase, worth doing and living
up to. Whatever was originally planned may not happen, as new realities within the
organisation or outside the business may surface. It helps the consultant learn whether
there were planning errors or execution challenges, both very important to gain
maturity as a consultant and to build a relationship with the client.
23.3.9 CLOSING AND COLLECTING
The closing of a consulting assignment means that, after the fulfilment of all
obligations by the consultant under the contract, the client gives his final sign-off for
closing the consulting contract. At this stage the final and balance payment for the
assignment is also released.
At this stage the consultant also conducts a feedback session with the client
management/sponsors. Many consulting firms conduct this as a formal exercise
by sending a partner/principal not connected with the project for the feedback
session, to allow the client to be open with his feedback. This helps in evaluating the
consultant's performance, which is normally done by the engagement manager, while
the evaluation of the engagement manager's performance is done by the concerned
principal or partner.
In this sub-process the consulting firm documents the following:
• Lessons learnt
• The engagement outcome or results produced which can be used for
soliciting similar engagements from clients facing similar issues or problems.
• Generating intellectual capital by sanitizing the work products/
recommendations. By sanitizing, we mean dropping the client's confidential
details from the reports and presentations and making them generic, so that
the client neither objects to nor is affected by their circulation to other clients.
The work products could be any templates, PowerPoint charts, Excel sheets
and macros or analytical tools that may have been used during the course of
the engagement and qualify for the next engagement.
The net result of this sub-process for the consultant is a clear and
unambiguous feedback on the work performed and an assessment of the value that
the consultant has provided to the client. A sample client feedback sheet is shown
below. This contract closing meeting also leads to opening up a discussion
for the next engagement.
End of Engagement Feedback Form (Sample)
Name of the Client: _______________________
Name of the Engagement: ___________________________
Start and End date: _______________________
Name of the Engagement Manager: ___________________________
Please provide your feedback by assigning a number on a scale of 1 to 10 (high) for
each of the following questions, and also write any qualitative feedback.
1. Your overall feedback about the engagement ____________________
2. Name the areas of improvement ___________________
3. Provide any specific feedback on any consultant in the team
____________________________________________________________
4. Was the consultant able to address your problems and issues? yes/no
5. Is there any specific feedback about the consultant's skills, behaviour or
attitude? Please share __________________________________________________
Signature with Designation
There could be an unfortunate situation when the contract may have to be
terminated. This could happen due to various reasons. The client's priorities might
have changed. Sometimes the change (transfer or leaving) of the sponsor may leave
the consultant's assignment with no support from the client's organisation to move
it forward.
Alternatively, there could be a situation when the work progress shown by the
consultant falls short of the client's expectations and, despite corrective
efforts by the consultant, the client may like to discontinue the engagement. In all
such cases, the consultant must collect the due payment as per the contract terms
and also gather the feedback that would have been taken had the assignment been
completed properly.
23.3.10 CONSULTANT-CLIENT RELATIONSHIP
The consultant-client relationship is very complex and needs a deep
understanding. The BIT consultant has to play multiple roles. As a cleaner of the
system, he has to show the client the areas of weakness, and yet do it in a
manner that makes the client trust his word. However, sometimes the BIT
consultant needs to champion the cause and enthuse or push the client
organisation to act on his advice by presenting all possible scenarios. The
consultant-client relationship is built on the basics of trust, mutual respect and the
consultant's sincere effort aimed at solving the client's problem.
The BIT consultant always deals with organisational issues involving human
systems that are complex and interconnected with strategy and the business
environment. The BIT consultant's long-term relationship with the client facilitates
his regular presence at the client's premises, thus enabling him to know the client's
organisation from close quarters. This helps the consultant formulate fruitful
recommendations for the client, who expects the consultant to come up with
something that is different from, or challenges, its own perception about future
learning and development.
Factors That Adversely Impact the Consultant-Client Relationship
Some of the factors that adversely impact the consultant-client relationship
are listed below:
• Inability to manage expectations on either side
• Failure or lack of communication between the consultant and client teams
• The consultant's inability to focus on details
• Lack of understanding of the client's problems; it could also be due to the
client's inability to convey the real problem to the consultant
• Lack of support from the client side
• Insistence by the client on price, method, or resources
Factors that are critical for the success of the consultant-client relationship:
• Competent consultants
• Focus on client results versus consultant deliverables
• Clear and well-communicated expectations and outcomes
• Visible executive support
• Adaptation to client readiness for change
• Upfront investment in learning the client's environment
• Success defined in terms of incremental successes
• Real partnership with the consultants
• Inclusion of the consultants in the implementation phase
Both the consultant and the client must have the following qualities for
smooth functioning:
• Extraversion: positive and enthusiastic
• Agreeableness
• Conscientiousness
• Openness to experience
• Emotional stability
23.4 REVISION POINTS
• Executing the consulting engagement – steps
• Recommending and implementing
• Closing and collection
• Consultant-Client Relationship
23.5 INTEXT QUESTIONS
1. What are the methods of data gathering in an organization?
2. Explain the essential steps in hypothesis development and testing.
3. Write a short note on closing and collecting.
4. What are the important elements of a client feedback form?
23.6 SUMMARY
• Defining the problem in detail is the first step of engagement execution. As
much detail as possible is collected about the problem, and the problem is
broken into its component elements to solve it faster and more accurately.
• After collecting sufficient data, the consultant needs to work on the problem
diagnosis.
• A good BIT consultant is one who solves the business problem with the least
technology intervention for long-lasting results.
• The important step of the implementation phase is formulating a good
implementation plan that clearly defines tasks and responsibilities.
23.7 TERMINAL EXERCISE
1. Defining objectives and performance measures are done in ___________
phase.
2. The ________________ of the organisation is done to identify factors that
create or destroy value of the products and services of the firm.
3. _______________ hypothesis with the available data to check whether the data
points towards the problem
23.8 SUPPLEMENTARY MATERIALS
1. https://foundry4.com/industry-convergence-going-beyond-business-
boundaries
2. https://www.networkworld.com/article/2318651/the-convergence-of-
business-process-management-and-business-service-management.html
23.9 ASSIGNMENTS
1. Explain the steps in executing the consulting engagement process.
23.10 SUGGESTED READING/REFERENCE
1. Sanjiva Shankar Dubey, Management and IT Consultancy, McGraw Hill,
New Delhi, 2012.
2. https://www.linkedin.com/pulse/business-convergence-agile-pursuit-
purpose-value-strategic-kent
23.11 LEARNING ACTIVITIES
1. Refer to various resources and prepare a client feedback form.
23.12 KEYWORDS
Generating ideas
Internal assessment.
Implementation
Closing and collecting.
Client relationship.
LESSON - 24
BIT CONSULTING THEMES
24.1 INTRODUCTION
BIT consulting has become popular in almost all corporates across the
globe. Here we shall list out the common streams where BIT consulting has scope
and the sort of work it can do in an organization.
24.2 OBJECTIVES
• To develop knowledge about the major consulting themes among
organization.
24.3 CONTENTS
24.3.1 Major Consulting Themes
24.3.2 General Management Stream
24.3.3 Business and IT stream
24.3.1 MAJOR CONSULTING THEMES
In this section, we discuss various consulting approaches, which have been
grouped into three major streams.
General management stream
This Consulting stream includes the following themes:
• Strategy consulting
• Restructuring and turnarounds
• Mergers and acquisitions
• Industrial sector consulting: Lean, Six Sigma, etc.
• Human resources: Organisation restructuring, compensation planning, etc.
Business and IT stream
This consulting stream uses various combinations of IT capabilities to address
business problems. It includes the following major consulting themes:
• IT strategy for business
• IT planning, IT application portfolio planning
• E-business strategy, including e-commerce and M-commerce strategy
• COTS enabled business process design
• Cost management: Integrated supply chain
• Customer centric: CRM process redesign
• IT initiated organisational change
IT management stream
This consulting stream focuses on IT investment and the optimal management of IT
resources. This stream includes the following:
• IT architecture
• IT Service Management
• IT process management, simplification and optimisation
• IT governance
• IT process quality improvement consulting
• IT knowledge management
• Business continuity and disaster recovery consulting
• Planning, budgeting and monitoring IT programs
• IT security, data security and prevention of fraud and intrusion
24.3.2 GENERAL MANAGEMENT STREAM
Strategy consulting
This type of consulting primarily focuses on the following areas:
• Creation, implementation assistance and management of the corporate
strategic plan
• Portfolio analysis, competitive analysis and profit improvement studies
• Marketing and sales strategy: New product introduction, marketing channel
strategy, customer satisfaction survey and customer changing need analysis
• Examining possibilities of mergers, acquisitions and collaborations
• Defining growth initiatives as well as successful exit from any business
• Operating strategy consulting: Operations review, review of manufacturing
strategies, outsourcing and logistics studies for Optimisation, distribution
and warehousing planning
• Organisation change strategies to help overcome any impediments in the
way of organisational change during a strategy implementation. These will
include restructuring of the organisation chart, roles and responsibilities, and
span of control.
• Sharing best practices in any of these areas by sharing research work with
the client
Restructuring and turnarounds
Companies which are making continuous losses and facing mounting
debts need the services of these consultants. Restructuring consulting is a financial
diagnostic and correction exercise. Turnaround consulting goes beyond the cash
flow concern and focuses on operational effectiveness, re-evaluation of the management
team, improvement of business and technology processes and, finally, revisiting the
firm's strategy.
Mergers and acquisitions
Consultants working under this theme specialise in the processes and activities
related to the merger of one company with another. The entity that is formed after
the merger of two companies may take a new name. Alternatively, one company is
subsumed into or acquired by the other; usually the larger company acquires
the smaller company.
The merger process involves a number of activities at several levels, such as
financial, legal and operational; these also have a bearing on people. The transition of
two companies into one is a crucial time when the consultant's expertise is needed
to prevent any risk of disruption of operations, employee exodus, loss of
market share, or alienation or apprehension among customers. Finally, the
consultant helps in integration across the merged organisation to bring it to the
steady state of a normal organisation. The final phase of a merger brings about
operational unification, a new identity and synergy between the merged or acquired
entities.
Industrial Sector Consulting
This consulting theme is industry sector-specific, wherein the consulting firm
provides advisory services for new product or process development, new project
formulation, planning for implementation and assessment of possible risks during
project implementation. Such consultants also address improving operational
practices and approaches that increase efficiency, customer service and financial
performance. This consulting theme also covers design and engineering services,
operational processes and technology adoption or acquisition for sectors of the
consulting firm's specialisation. Many tools and techniques like Lean and Six Sigma
are used in engagements relating to this sector-specific consulting theme.
Human Resources Management (HRM) Consulting
Employees in the knowledge economy are the most precious resources who
can make a difference to a firm's overall performance. Human resources
management (HRM) consulting brings deep expertise to deal with issues in several
areas such as employee engagement, skills enhancement, talent management and
retention. HRM consulting covers process design and implementation for
employee recruitment, career progression and skill management. It also addresses
issues of workforce management, scheduling and deployment. Leading client firms
give high importance to building and installing employee self-service processes
through the use of supporting IT. This initiative is designed and facilitated by HRM
consultants. Finally, HRM consultants help align the client firm's goals and
objectives with employee processes covering areas such as compensation, reward,
motivation management and many others, keeping the overall cost of human
resources within manageable limits.
24.3.3 BUSINESS AND IT STREAM
IT strategy consulting
The consulting team helps the firm in the following business-related aspects:
• Executing business strategy initiatives
• Taking a strategic view of IT in business strategy design
• Enhancing and redesigning the firm's business model
• Designing a strategic direction on the ownership and management of IT
infrastructure investment
• Balancing IT investment opportunities across business units
IT planning, IT application portfolio planning consulting
Typical deliverables of IT planning consulting are:
• Application portfolio
• Data dictionary and databases
• Enterprise process map
• Package sourcing vs. in-house development
• Master project plan
• Financial plan, budget and expenditure plan
• Personnel skills required, training plan for end users and IT staff
• Change management plan for process changes
• Technical architecture of hardware, communication network, database and
application interfaces
• Migration and cutover plan for new systems
• Integrated project calendar
• Policies for project management, system development or package selection
• Security plan
• Costs, benefits, risks and resource requirements
• Projections of possible future requirements and a comparison of IS
performance
E-Business strategy consulting
Some of the deliverables of e-business strategy consulting are listed below:
• Prioritised list of e-business initiatives
• Strategies for web presence
• E-CRM, e-Commerce application portfolio
• E-business model recommendations
• Recommendations for people-related and process changes
• IT infrastructure assessment
• Recommendations for web infrastructure including network redesign, IT
system performance capability needed
• Assessment of security policies, identification of security vulnerabilities, and
recommendations to eliminate them related to IT systems
COTS-Enabled business process design (CEBPD) consulting
CEBPD consultants are required to perform a variety of tasks, as listed below:
• Selecting the right and suitable ERP package that meets the organisation's
needs; the package may not necessarily be a popular one
• Assessing the fit-gap of business processes with respect to the selected ERP
• Re-engineering the organisation's business processes and customisation of
the ERP package to suit the business processes.
• Assessing the IT infrastructure needed for the ERP implementation.
• Ascertaining and realising the business benefits before and after the ERP
implementation
• Laying out the project plan, implementation steps, and plans for managing
people-related changes
• Prescribing suitable communication plan for the project life cycle
• Setting up a suitable governance mechanism, such as a steering committee,
a program management office and a change control board.
Cost management: Integrated supply chain consulting
Integrated supply chain consultant deliverables include the following:
• Sourcing policies, process and partners
• Logistics and movement recommendations
• Optimal storage and retrieval strategies
• Managing the information flow
• Selection of SCM tools and technology for optimisation
• Integrated supply chain from the point of origin to the final consumption
• Freight management
• Defining cost-effective, appropriate customer service
IT initiated organisational change
Change management consultants help the client in creating awareness of
the need for change, build enthusiasm among the client team, share expertise in
managing change through processes, tools and templates, past experiences, skills and
education, and implement change once the change plan is made and agreed upon.
Change management consulting includes the following deliverables:
• Survey and assessment of the change issues and identification of the people
to be impacted by the change
• Training and skills building to make people feel at ease while working in the
new scenario
• Removing apprehensions and fears by devising suitable communication
means.
338

• Soliciting participation of the change sponsors and coaching them as new
issues come up
• Making the change foundational so that there is no going back.
24.4 REVISION POINTS
• General management Stream
• Business and IT Stream
24.5 INTEXT QUESTIONS
1. Write short note on strategy consulting
2. What are the consultants’ key functions in mergers and acquisition?
3. What are the functions carried out in HRM consulting?
4. What are the tasks performed by CEBPD consultant?
24.6 SUMMARY
• Strategy consulting primarily focuses on the creation, implementation
assistance and management of the corporate strategic plan.
• Companies which are making continuous losses and facing mounting debts
need the services of restructuring and turnaround consultants.
• A merger brings about operational unification, a new identity and synergy
between the merged or acquired entities.
• Integrated supply chain consultant deliverables include sourcing policies,
processes and partners, and logistics and movement recommendations.
• Change management consultants deliver a survey and assessment of the
change issues and identify the people to be impacted by the change.
24.7 TERMINAL EXERCISE
1. CEBPD stands for _____________________________
2. The Change management consultants help the client in creating awareness
of the need for ________________
3. Strategies for web presence is an outcome of _____________ strategy
4. Balancing IT investment opportunities across business units is an outcome
of _____________ strategy
5. _________________ focuses on operational effectiveness and re-evaluation of
management.
24.8 SUPPLEMENTARY MATERIALS
1. https://www.strategyblocks.com/blog/business-convergence-strategy/
2. https://www.linkedin.com/pulse/business-convergence-agile-pursuit-
purpose-value-strategic-kent
24.9 ASSIGNMENTS
1. Write an essay about the various consulting themes.
24.10 SUGGESTED READING/REFERENCE
1. Sanjiva Shankar Dubey, Management and IT Consultancy, McGraw Hill,
New Delhi, 2012.
2. https://ubiquity.acm.org/article.cfm?id=1276167
24.11 LEARNING ACTIVITIES
1. Refer to various sources and identify the various streams of BIT consulting.
What changes have they brought to organizations?
24.12 KEYWORDS
Strategy
Restructuring
Turnarounds
Merger
Acquisition
Supply chain
Organisational change.
178E1250/179E1250/347E1250/348E1250/349E1250
ANNAMALAI UNIVERSITY PRESS 2021– 2022