Annamalai University
179E1250
347E1250
348E1250
349E1250
I – VI
ANNAMALAI UNIVERSITY
DIRECTORATE OF DISTANCE EDUCATION
M.B.A. (E - Business)
M.B.A. International Business
M.B.A. Human Resource Management
M.B.A. Marketing Management
M.B.A. Financial Management
Second Semester
Copyright Reserved
(For Private Circulation Only)
Dr. N.Ramgopal
Dean
Faculty of Arts
Annamalai University
Internals
Externals
Dr.J.John Adaikalam
Associate Professor
Dept. of Business Administration
Annamalai University
the capability to store data and produce results on the basis of detailed step-by-step instructions given to it. The terms hardware and software are almost always used in connection with computers.
Hardware
The hardware is the machinery itself. It is made up of the physical parts or devices of the computer system, such as the electronic Integrated Circuits (ICs), magnetic storage media and mechanical devices like input and output devices. All these hardware components are linked together to form an effective functional unit. The hardware used in computers has evolved from the vacuum tubes of the first generation to the Ultra Large Scale Integrated (ULSI) circuits of the present generation.
Software
The computer hardware is not capable of doing anything on its own. It has to be given explicit instructions to perform a specific task. The computer program is what controls the processing activities of the computer; the computer thus functions according to the instructions written in the program. Software mainly consists of these computer programs, together with the procedures and documentation used in the operation of a computer system. Software is a collection of programs which utilize and enhance the capability of the hardware.
Elements of a computer
The computer is a very effective and efficient machine which performs in a few minutes activities that would otherwise take several days if performed manually, and with far less doubt about accuracy and finish. The computer may be faster and more accurate, but it cannot compete with the human brain. However, there are some similarities between humans and computers which make the computer easier to understand.
Human vs. Computer
• Human beings receive information through the senses: ears, eyes, nose etc. Computers receive information through input devices such as the keyboard, scanner, touch screen and mouse.
• We remember things. Computers likewise store information.
The computer has storage devices like floppy disks (now obsolete), hard disks and compact disks to store and retrieve information. However, the computer does not understand emotions, does not understand meaning beyond words, and cannot read between the lines as humans can. We learn many things unknowingly and certain things knowingly; we call this upbringing. But computers can learn only knowingly: we learn many things on our own, whereas a computer has to be taught to do everything.
The basic parts of computer system are:
• Input Unit
• The Central Processing Unit (Storage Unit, Control Unit, Arithmetic Logic
Unit)
• Output Unit
Various input devices, such as the keyboard, provide input to the computer; whenever a key is pressed, the character is automatically translated into a binary code and then transmitted to either memory or the processor. The information is stored in memory for further use.
Thus the functions of the input unit are:
• Accept information (data) and programs.
• Convert the data into a form which the computer can accept.
• Provide this converted data to the computer for further processing.
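The conversion step above can be illustrated with a short Python sketch. The 8-bit ASCII code shown here is an assumption for illustration; real keyboards send hardware scan codes which the operating system maps to character codes.

```python
# Sketch of how a pressed key becomes a binary code the computer can accept.
# ASCII encoding is assumed here; actual keyboards send hardware scan codes
# which the operating system then maps to character codes.

def key_to_binary(char):
    """Return the 8-bit binary string for a single character."""
    code = ord(char)            # character -> numeric ASCII code
    return format(code, '08b')  # numeric code -> 8-bit binary string

# The letter 'A' has ASCII code 65, which is 01000001 in binary.
print(key_to_binary('A'))  # 01000001
```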
The Central Processing Unit:
This is the brain of any computer system. The central processing unit or CPU is made up of three parts:
• The control unit
• The arithmetic logic unit
• The primary storage unit
The Control Unit
The Control Unit controls the operations of the entire computer system. It gets the instructions from the programs stored in the primary storage unit, interprets these instructions and subsequently directs the other units to execute them. Thus it manages and coordinates the entire computer system.
The Arithmetic Logic Unit
The Arithmetic Logic Unit (ALU) actually executes the instructions and
performs all the calculations and decisions. The data is held in the primary storage
unit and transferred to the ALU whenever needed. Data can be moved from the
primary storage to the arithmetic logic unit a number of times before the entire
processing is complete. After the completion, the results are sent to the output
storage section and the output devices.
CPU Components
Memory Address Register (MAR) - Specifies address for next read or write.
Memory Buffer Register (MBR) – Contains data to be written into or receives
data read from memory.
Program Counter (PC) -Stores the address of the next instruction to be
executed.
General Purpose Registers(R1, R2 etc.) -Used for storing information at the
time of execution by the user.
Instruction Register(IR) – Stores instruction before decoding.
Instruction Decoder(ID) – Decodes the instruction before execution.
Arithmetic & Logic Unit(ALU)- It does all the arithmetic and logical
computations.
5
Control Unit(CU)- Generates control signals to control every action inside the
computer.
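The way these registers cooperate can be sketched as a toy fetch-decode-execute loop in Python. The three-field (op, dest, src) instruction format below is invented purely for illustration.

```python
# Toy CPU sketch: the PC selects the next instruction, the IR holds it,
# the decoder dispatches it, and the ALU performs the arithmetic.
# The (op, dest, src) instruction format is invented for illustration.

memory = [
    ("LOAD", "R1", 5),     # R1 <- 5
    ("LOAD", "R2", 7),     # R2 <- 7
    ("ADD",  "R1", "R2"),  # R1 <- R1 + R2 (done by the ALU)
    ("HALT", None, None),
]
registers = {"R1": 0, "R2": 0}
pc = 0                     # Program Counter

while True:
    ir = memory[pc]        # fetch: instruction moves into the IR
    pc += 1                # PC now points at the next instruction
    op, dest, src = ir     # decode
    if op == "LOAD":
        registers[dest] = src
    elif op == "ADD":      # execute: the ALU adds the two registers
        registers[dest] += registers[src]
    elif op == "HALT":
        break

print(registers)  # {'R1': 12, 'R2': 7}
```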
The Primary Storage Unit
This is also called the Main Memory. Before actual processing starts, the data and instructions fed to the computer through the input units are stored in this primary storage unit. Similarly, the data which is to be output from the computer system is temporarily stored in primary memory, and it is also the area where intermediate results of calculations are held. The main memory also holds the computer programs during execution. Thus the primary storage unit:
• Stores data and programs during actual processing
• Stores temporary results of intermediate processing
• Stores results of execution temporarily
The main function of the memory unit is to store data and programs. The programs must be stored in memory during execution; inside the system, memory plays a vital role in the execution of a set of instructions. Memory can be further classified into:
Primary Memory: The data or set of instructions are stored in primary storage before processing, and the data is transferred to the ALU where further processing is done. Primary memory is expensive and is also known as Main Memory.
Secondary Memory: The data or set of instructions are stored permanently, and the user can access them whenever required in future. Secondary memory is cheaper than primary memory.
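The difference can be felt in code: a value held in a variable lives in primary memory and vanishes when the program ends, while a value written to a file on disk (secondary memory) survives for future use. A minimal Python sketch, using a temporary file as the secondary device:

```python
import os
import tempfile

# Primary memory: a plain variable, lost when the process exits.
result = 42

# Secondary memory: write the value to a file on disk so it persists.
path = os.path.join(tempfile.gettempdir(), "saved_result.txt")
with open(path, "w") as f:
    f.write(str(result))

# Later (even in another run of the program), the value can be read back.
with open(path) as f:
    restored = int(f.read())

print(restored)  # 42
```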
The Output Unit
The output devices give the results of the processing and computations to the outside world. The output units accept the results produced by the computer, convert them into a human-readable form and supply them to the users. The more common output devices are printers, plotters, display screens, magnetic tape drives etc.
The output unit provides the processed results of the operations performed; devices like printers and monitors present the desired output.
The efficiency of a computer depends on how quickly it executes tasks, and its performance depends on a few factors:
As programs are written in a high-level language, the compiler translates the high-level language into machine language, so the quality of this translation strongly affects performance. The speed of the computer also depends on the hardware design and the machine instruction set.
Therefore, for optimum results it is important to design the compiler, the hardware and the machine instruction set in a coordinated way.
The hardware comprises a processor and memory, usually connected by a bus. The execution time of a program depends on the whole computer system, while processor time depends on the hardware; cache memory is a part of the processor.
The flow of program instructions and data between processor and memory:
The program and data are read from the input device and stored in main memory.
Instructions are fetched one by one over the bus from memory into the processor, and a copy is placed in cache memory for future use whenever required.
The processor and a small cache memory are fabricated on a single integrated circuit chip, which makes the processing speed very high.
If the movement of instructions between main memory and the processor is minimized, the program executes faster; this is what the cache memory achieves.
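The effect of the cache can be sketched as a fast lookup layer in front of a slower main memory; the contents and sizes below are invented for illustration:

```python
# Toy cache sketch: check the fast cache first, fall back to main memory
# on a miss, and keep a copy so the next access is a hit.

main_memory = {addr: addr * 10 for addr in range(100)}  # pretend RAM
cache = {}
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:       # hit: no slow trip to main memory
        hits += 1
        return cache[addr]
    misses += 1             # miss: fetch from main memory over the bus
    value = main_memory[addr]
    cache[addr] = value     # copy kept in the cache for future use
    return value

# Re-reading the same addresses means only the first pass misses.
for _ in range(3):
    for addr in (1, 2, 3):
        read(addr)

print(hits, misses)  # 6 3
```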
1.3.2 CHARACTERISTICS OF COMPUTER
Much of the world runs on computers, and computers have profoundly changed human life, mostly for the better. The characteristics of computers are:
Speed
A computer is a very fast device. It can carry out instructions at a very high speed, obediently, uncritically and without exhibiting any emotions. It can perform in a few seconds the amount of work that a human being could do in an entire year, provided he worked day and night and did nothing else.
Some calculations that would otherwise take hours or days to complete can be finished in a few seconds using a computer. The clock speed of a computer is measured in megahertz (MHz), millions of clock cycles per second; instruction throughput is measured separately in MIPS, millions of instructions per second.
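The arithmetic connecting clock speed and execution time can be shown with a short calculation; all the figures below are invented for illustration:

```python
# Rough arithmetic: how long a workload takes at a given clock speed.
# All figures here are invented for illustration.

clock_hz = 100 * 10**6      # a 100 MHz clock: 100 million cycles per second
cycles_per_instruction = 4  # assume each instruction needs 4 clock cycles

instructions_per_second = clock_hz / cycles_per_instruction  # 25 million
workload = 50 * 10**6       # a workload of 50 million instructions

seconds = workload / instructions_per_second
print(seconds)  # 2.0
```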
Accuracy
The accuracy of a computer is consistently high, and the degree of accuracy of a particular computer depends on its instructions and the type of processor. For a particular computer, every calculation is performed with the same accuracy; for example, the computer accurately gives the result of dividing any number up to 10 decimal places.
Versatility
Versatility is one of the most wonderful things about the computer. The multi-processing features of a computer make it quite versatile in nature. One moment it is preparing the results of an examination, the next it is busy preparing electricity bills, and in between it may be helping an office secretary trace an important letter in seconds.
It can perform different types of tasks with the same ease. All that is required to change its talent is to slip a new program into it. In brief, a computer is capable of performing almost any task, provided the task can be reduced to a series of logical steps.
Reliability
Computers provide very high speed accompanied by an equally high level of reliability; they never make mistakes of their own accord.
Power of Remembering
A computer can store and recall any amount of information because of its secondary storage capability. Every piece of information can be retained as long as the user desires and recalled almost instantaneously. Even after several years, the information recalled will be as accurate as on the day it was fed to the computer.
No I.Q
A computer can only perform tasks that a human being can; the difference is that it performs these tasks with remarkable speed and accuracy.
It has no intelligence of its own: its I.Q. is zero, at least to date. It can only do what it is programmed to do, so only the user can determine what tasks a computer will perform. Computers have no sense of meaning, cannot perceive, and can only make simple, mechanical decisions about the data they receive.
Common Data Used
One data item can be involved in several different procedures, or be accessed, updated and inspected by a number of different users. As time goes on, more and more facilities are being added to computers, but in practical life many tasks are still limited to these basic operations.
Diligence
Being a machine, the computer does not suffer from the human trait of tiredness, nor does it lose concentration even after working continuously for a long time.
This characteristic is especially useful for jobs where the same tasks are done again and again. It can perform long and complex calculations with the same speed and accuracy from start to finish.
Storage
Computers have many storage devices which can store a tremendous amount of data; data storage is an essential function of the computer. Secondary storage devices like hard disks can store a large amount of data permanently.
1.3.3 ADVANTAGES AND LIMITATIONS OF A COMPUTER
I. Advantages
a. Speed up Work Efficiency
This is by far the biggest advantage of using computers. They have replaced manpower in carrying out tedious and repetitive work. Work that can take days to complete manually can be done in a few minutes using a computer. This is made possible by the fact that data, instructions and information move very fast through the electronic circuits of computers, which process billions of instructions within a second.
b. Large and Reliable Storage Capacity
Computers can store huge volumes of data. To put this into perspective,
physical files that can fill a whole room can be stored in one computer once they
are digitized. Better yet, access to the stored information is super-fast. It takes
micro-seconds for data to be transferred from storage to memory in a computer.
The same cannot be said for the retrieval of physical files.
With a computer, you can store videos, games, applications, documents etc. that you can access whenever required. Moreover, storage can be backed up quickly and efficiently.
c. Connection with Internet
The Internet is probably the most outstanding invention in history. Computers
allow you to connect to the Internet and access this global repository of knowledge.
With the Internet, you can communicate faster with people across the globe.
You can send email, hold voice and video calls or use IM services. The Internet
also allows for instant sharing of files. You can also connect with friends and family
on social networks and even learn a new language online. The Internet is a great
educational resource where you can find information on virtually anything.
One of the biggest breakthroughs on the Internet is probably e-commerce. We can shop from the convenience of our home and have the items delivered to our doorstep.
d. Consistency
You always get the same result for the same process when using a computer.
For example if you created a document on one computer, you can open it on
another without making any special adjustments. This consistency makes it
possible to save and edit a document from different computers in different parts of
the world. Collaboration is therefore easier.
Whatever job you need done, you can always rest assured that the computer
will get it just right. There will be no variations in results achieved from the same
process. This makes computers ideal for doing tedious and repetitive work.
II. Limitations
a. Health Risk
Improper and prolonged use of a computer might lead to disorders or injuries of
the elbows, wrist, neck, back, and eyes. As a computer user you can avoid these
injuries by working in a workplace that is well designed, using a good sitting
position and taking proper work breaks. Technology load and computer addiction
are the major behavioral health risks. Addiction comes when you are obsessed with
a computer.
Technology overload comes when you are overloaded with computers and mobile phones. Both technology overload and computer addiction are avoidable if the habits are noticed and followed up.
b. Violation of Privacy
When using the Internet on your computer, you run the risk of leaking your
private information. This is especially so if you happen to download malicious
software into your computer. Trojans and Malware can infiltrate your system and
give identity thieves access to your personal information.
Of particular interest to identity thieves are your bank and credit card details. Make sure to install reliable antivirus software to keep malware and Trojans at bay, and avoid clicking on suspicious-looking links when using the Internet.
c. Impact on Environment
The manufacturing of computers and the disposal of computer waste are harmful to the environment. When computer junk is discarded in open grounds, it releases harmful chemicals like lead and mercury into the environment; both are toxic and can cause serious illness. Disposed computers can also cause fires.
d. Data Security
This is one of the most controversial aspects of computers today. The safety
and integrity of data is key for any business. However, data stored in a computer
can be lost or compromised in a number of ways.
There are instances where the computer could crash wiping out all data that
had been stored therein. Hackers could also gain access into your computer and
compromise the integrity of your data. This is why you should always have a
backup. Moreover, you should put measures in place to keep your data safe from
hackers.
1.3.4 GENERATION OF COMPUTERS
The computer has evolved from a large, simple calculating machine to a smaller but much more powerful machine. The evolution of the computer to its current state is described in terms of generations of computers. Each generation is designed around a new technological development, resulting in better, cheaper and smaller computers that are more powerful, faster and more efficient than their predecessors. Currently, there are five generations of computers. In the following subsections, we discuss the generations of computers in terms of:
• The technology used by them (hardware and software),
• Computing characteristics (speed, i.e., number of instructions executed per
second),
• Physical appearance, and
• Their applications.
First Generation (1940 to 1956): Using Vacuum Tubes
Hardware Technology: The first generation of computers used vacuum tubes for circuitry and magnetic drums for memory. Input to the computer was through punched cards and paper tapes, and output was displayed as printouts.
Software Technology: The instructions were written in machine language.
Machine language uses 0s and 1s for coding of the instructions. The first
generation computers could solve one problem at a time.
Computing Characteristics: The computation time was in milliseconds.
Physical Appearance: These computers were enormous in size and required a
large room for installation.
Application: They were used for scientific applications as they were the fastest
computing device of their time.
Examples: UNIVersal Automatic Computer (UNIVAC), Electronic Numerical Integrator and Calculator (ENIAC), and Electronic Discrete Variable Automatic Computer (EDVAC).
The first generation computers used a large number of vacuum tubes and thus
generated a lot of heat. They consumed a great deal of electricity and were
expensive to operate. The machines were prone to frequent malfunctioning and
required constant maintenance. Since first generation computers used machine
language, they were difficult to program.
ENIAC's first task was to perform a series of complex calculations that helped determine the feasibility of the hydrogen bomb, although the machine was built for general-purpose use. It weighed 30 tons, occupied 15,000 sq. ft., used 18,000 vacuum tubes, consumed 140 kW of power and could perform about 5,000 additions per second.
lap while working (hence the name). Laptops are costlier than desktop machines.
Netbook: These are smaller notebooks optimized for low weight and low cost, designed for accessing web-based applications. Since the earliest netbooks appeared in late 2007, they have gained significant popularity. Netbooks deliver the performance needed for popular activities like streaming videos or music, emailing, web surfing or instant messaging. The word netbook was coined as a blend of Internet and notebook.
Tablet Computer: It has the features of the notebook computer, but accepts input from a stylus or pen instead of a keyboard or mouse. It is a portable computer; tablet computers are a newer kind of PC.
Handheld Computer or Personal Digital Assistant (PDA): This is a small computer that can be held in the palm of the hand. Instead of a keyboard, a PDA uses a pen or stylus for input. PDAs have no disk drive, limited memory and less processing power, but can be connected to the Internet via a wireless connection. Casio and Apple are among the manufacturers of PDAs. Over the last few years, PDAs have merged with mobile phones to create smartphones.
Minicomputer systems
Minicomputers are digital computers, generally used in multi-user systems. They have higher processing speed and storage capacity than microcomputers and can support 4 to 200 users simultaneously. Users access the minicomputer through their PCs or terminals. They are used for real-time applications in industries, research centres, etc. The PDP-11 and the IBM 8000 series are some widely used minicomputers.
Mainframe computer systems
Mainframe computers are multi-user, multi-programming and high
performance computers. They operate at a very high speed, have very large storage
capacity and can handle the workload of many users. Mainframe computers are
large and powerful systems generally used in centralized databases. The user
accesses the mainframe computer via a terminal that may be a dumb terminal, an
intelligent terminal or a PC. A dumb terminal cannot store data or do processing of its own; it has input and output devices only. An intelligent terminal has input and output devices and can do processing, but cannot store data of its own. Both the dumb and the intelligent terminal use the processing power and the storage facility
of the mainframe computer. Mainframe computers are used in organizations like
banks or companies, where many people require frequent access to the same data.
Some examples of mainframes are CDC 6600 and IBM ES000 series.
Super computer systems
Supercomputers are the fastest and most expensive machines, with far higher processing speed than other computers. The speed of a supercomputer is generally measured in FLOPS (Floating-point Operations Per Second); the fastest supercomputers can perform trillions of calculations per second.
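FLOPS is simply floating-point operations divided by elapsed time. A crude Python estimate of the idea follows; interpreter overhead dominates, so the figure measured is far below any hardware peak, let alone a supercomputer's.

```python
import time

# Crude FLOPS estimate: time a known number of floating-point additions.
# Interpreted Python is slow, so this measures far below hardware peak.

n = 1_000_000
x = 0.0
start = time.perf_counter()
for _ in range(n):
    x += 1.5                # one floating-point addition per iteration
elapsed = time.perf_counter() - start

flops = n / elapsed
print(f"~{flops:,.0f} FLOPS")
```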
means both models understand the same instruction set, as each processor understands a fixed set of instructions; hence their architecture is the same. Due to the placement of the various hardware components, one model (the laptop) is slim and the other is bulky; hence their organization is different.
1.4 REVISION POINTS
• Computer meaning
• Input and output
• Central Processing Unit
• Generation of Computers
• Types of Computers
1.5 INTEXT QUESTIONS
1. What do you mean by a computer?
2. List out the characteristics of computers.
3. Explain the organization of computer.
4. Write a short note on CPU.
5. List out the developments in 5th gen computers.
1.6 SUMMARY
A computer is a fast and accurate data processing system that accepts data, performs various operations on the data, and has the capability to store data and results.
Input devices are the devices which are used to feed programs and data to
the computer.
The Arithmetic Logic Unit (ALU) actually executes the instructions and performs all the calculations and decisions.
The output devices give the results of the process and computations to the
outside world.
The goal of fifth generation computing is to develop computers that are
capable of learning and self-organization.
Supercomputers are used for weather forecasting, climate research (global
warming), molecular research, biological research, nuclear research and
aircraft design.
1.7 TERMINAL EXERCISE
1. The CPU has _________, ___________ and _____________.
2. The first generation computer used _____________ in it.
3. VLSI means _____________________________
4. ___________________ is an example for micro computer.
5. A super computer is used in _____________ purpose.
LESSON - 2
INPUT OUTPUT AND STORAGE DEVICES
2.1 INTRODUCTION
The computer is a machine that has many peripheral devices attached to it. A peripheral device is a hardware device that helps the computer receive data, store programs, data and results, and produce output whenever the user requests it. A peripheral can be either internal or external, though we more commonly think of external devices when talking about peripherals.
Most commonly when we talk about peripheral devices we are talking about
input and output devices. But storage is also an important device of the computer.
There are a wide variety of input and output devices, memory devices that we
commonly use, with each specialising in a certain task.
2.2 OBJECTIVES
• To understand the meaning of input, output and storage devices.
• To realize the varieties of input, output and storage devices used in various
computers
• To know the advantages and disadvantages of various input, output and
storage devices.
2.3 CONTENTS
2.3.1 Input Devices
2.3.2 Output Devices
2.3.3 Storage Devices
2.3.1 INPUT DEVICES
An input device is a device that sends data to a computer, allowing you to interact with it, control it and add new information. For instance, a computer without an input device could still operate, but its settings could not be changed, bugs could not be fixed, and the user could not interact with it.
In terms of computing, an input device is any hardware equipment used to
send data to computers thereby facilitating us to interact with it.
It has the capability of converting the raw data into the format or language
which is computer-readable and finally delivers the translated data to the Central
Processing Unit (CPU) for further processing.
Input devices are categorized by their mode of input: keyboard devices, point-and-draw devices, and speech recognition devices.
Let us discuss some of the input devices that are commonly used in today’s
world.
Keyboard
A computer keyboard is an input device used to enter characters and
functions into the computer system by pressing buttons, or keys. It is the primary
device used to enter text. A keyboard typically contains keys for individual letters,
numbers and special characters, as well as keys for specific functions. A keyboard
is connected to a computer system using a cable or a wireless connection.
Most keyboards have a very similar layout. The individual keys for letters,
numbers and special characters are collectively called the character keys. The
layout of these keys is derived from the original layout of keys on a typewriter. The
most widely used layout in the English language is called QWERTY, named after
the sequence of the first six letters from the top left.
Mouse
The mouse is a small, movable device that lets you control a range of things on
a computer. Most types of mouse have two buttons, and some will have a wheel in
between the buttons. Most types of mouse connect to the computer with a cable,
and use the computer's power to work.
Some types of mouse are wireless. That means they do not permanently
connect to a computer with a cable. These types of mouse either need batteries to
run or require a recharging cable.
The mouse usually sits on the desk to the left or right of the keyboard. If you
write with your left hand, you should have the mouse to the left of the keyboard. If
you write with your right hand, have the mouse to the right of the keyboard.
A mouse is used to point at objects you see on the screen. By pointing at an
object, you tell the computer that you want to do something with that object.
The mouse pointer, or cursor, represents the mouse on the computer screen.
When you move the mouse across the top of a table, the cursor moves on the
computer screen in the same direction.
The cursor usually looks like an arrow, but it can change shape depending on
what it's pointing at. It's good to note that it's the very tip of the arrow that is the
sensitive part when clicking something on the screen.
Light pen
Light Pen (similar to the pen) is a pointing device which is used to select a
displayed menu item or draw pictures on the monitor screen. It consists of a
photocell and an optical system placed in a small tube. When its tip is moved over
the monitor screen, and pen button is pressed, its photocell sensing element
detects the screen location and sends the corresponding signals to the CPU.
Uses
• Light pens can be used to input coordinate positions, given the necessary arrangements.
• With a suitable background color or intensity, a light pen can be used as a locator.
• It is used as a standard pick device in many graphics systems.
Graphic Tablet
The digitizer is an operator input device consisting of a large, smooth board (similar in appearance to a mechanical drawing board) and an electronic tracking device, which can be moved over the surface to follow existing lines. The tracking device contains a switch for the user to record the desired x and y coordinate positions. The coordinates can be entered into computer memory or stored on an off-line storage medium such as magnetic tape.
Advantages
• Drawings can easily be changed.
• It provides the capability of interactive graphics.
Disadvantages
• Costly
• Suitable only for applications which require high-resolution graphics.
Microphone
A microphone is an input device that was developed by Emile Berliner in 1877.
It is used to convert sound waves into electric waves or input the audio into
computers. It captures audio by converting sound waves into an electrical signal,
which may be a digital or analog signal. This process can be implemented by a
computer or other digital audio devices.
Use of a microphone on the computer
• It is used for voice recording.
• It offers users the option of voice recognition.
• It allows users to record the sound of musical instruments.
• It enables online chatting.
• It allows VoIP (Voice over Internet Protocol) calls.
• It is also used for computer gaming.
• Furthermore, it can record voice for singing, podcasts and dictation.
Magnetic Ink Character Reader
A magnetic ink character recognition line (MICR) is a line of characters on a
check printed with a unique ink that allows the characters to be read by a reader-
sorter machine. Introduction of the MICR reader-sorter process allowed check
processing to be automated while making it more difficult to counterfeit checks.
Optical Character Reader
Optical Character Recognition, or OCR, is a technology that enables you to
convert different types of documents, such as scanned paper documents, PDF files
or images captured by a digital camera into editable and searchable data.
It is a computer peripheral device enabling letters, numbers, or other characters, usually printed on paper, to be optically scanned and input to a storage device such as magnetic tape. The device uses the process of optical character recognition.
Bar Code Reader
A barcode scanner, also called a point-of-sale (POS) scanner or a price
scanner, is a device used to capture and read information contained in a barcode.
The scanner consists of a light source, a lens and a light sensor that translates
optical impulses into electrical ones. They also contain decoder circuitry analyzing
the barcode’s image data provided by the sensor and sending that content to a
computer.
A barcode scanner works by directing a beam of light across the barcode and
measuring the amount of light that is reflected back. The dark bars on the barcode
will reflect less light than the white spaces between them. The scanner then
converts the light energy into electrical energy, which is then converted into data by
the decoder and forwarded to a computer.
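Once the decoder has recovered the digits, a check digit guards against misreads. In the widely used EAN-13 scheme, for example, the first twelve digits are weighted alternately 1 and 3, and the thirteenth digit brings the weighted sum to a multiple of 10; a sketch:

```python
# EAN-13 check digit: the first 12 digits are weighted 1, 3, 1, 3, ...
# and the 13th (check) digit brings the total to a multiple of 10,
# letting the scanner detect most misreads.

def ean13_check_digit(first12):
    """Compute the check digit for a 12-digit EAN-13 string."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

# The full barcode 4006381333931 ends in its check digit, 1.
print(ean13_check_digit("400638133393"))  # 1
```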
The most common kinds of scanner used to read one-dimensional barcodes are pen wands, slot scanners, Charge-Coupled Device (CCD) scanners, image scanners and laser scanners.
A pen wand scanner contains no moving parts and is known for its durability and low cost. The pen must stay in direct contact with the barcode, be held at a specific angle and be moved across the barcode at a certain speed.
A slot scanner remains stationary and the item with the barcode is pulled
through the slot by hand. Slot scanners are usually used to scan barcodes on
identification cards.
A CCD scanner has a better read-range and doesn’t involve contact with the
barcode. This makes it ideal for use in retail sales. Typically, a CCD scanner is used
as a “gun” type interface and has to be held no more than one inch from the
barcode. Every time a barcode is scanned, several readings are taken to reduce the possibility of errors. A disadvantage of the CCD scanner is that it can’t read a barcode that is wider than its input screen.
An image scanner, also known as a camera reader, uses a small video camera
to capture an image of the barcode and then it uses sophisticated digital image
processing techniques to decode the barcode. An image scanner can read a barcode
from about 3 to 9 inches away and usually costs less than a laser scanner.
A laser scanner can either be handheld or stationary and doesn’t need to be
close to the barcode in order to read it effectively. The scanner uses a system of
mirrors and lenses that allow it to read barcodes regardless of their position and it
can easily read up to 24 inches away from the barcode. A laser scanner may
perform up to 500 scans per second, to reduce the possibility of errors. Specialized
long-range scanners are capable of reading a barcode up to 30 feet away.
2D barcode
A 2D (two-dimensional) barcode is a graphical image that stores information
both horizontally, as is the case of one-dimensional barcodes, and vertically. As a
result of their construction, 2D barcodes can store up to 7,089 characters,
significantly greater storage than is possible with the 20-character capacity of a
one-dimensional barcode.
Optical Mark Reader
An optical scanner is a computer input device that uses a light beam to scan codes, text, or graphic images directly into a computer or computer system. Barcode scanners are used widely at point-of-sale terminals in retail stores. A
handheld scanner or bar-code pen is moved across the code, or the code itself is
moved by hand across a scanner built into a checkout counter or other surface,
and the computer stores or immediately processes the data in the bar code. After
identifying the product through its bar code, the computer determines its price and
feeds that information into the cash register. Optical scanners are also used
in fax machines and to input graphic material directly into personal computers.
Touch Panels
A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the screen. A touch screen registers input when a finger or other object comes in contact with the screen.
When the wave signals are interrupted by some contact with the screen, that location is recorded. Touch screens have long been used in military applications.
Voice Systems (Voice Recognition)
Voice Recognition is one of the newest, most complex input techniques used to
interact with the computer. The user inputs data by speaking into a microphone.
The simplest form of voice recognition is a one-word command spoken by one
person. Each command is isolated with pauses between the words.
Voice Recognition is used in some graphics workstations as input devices to
accept voice commands. The voice-system input can be used to initiate graphics
operations or to enter data. These systems operate by matching an input against a
predefined dictionary of words and phrases.
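Matching an input against a predefined dictionary of commands can be sketched with Python's standard difflib module. The command set below is invented for illustration, and a real system would match acoustic features rather than spelled-out words:

```python
import difflib

# Predefined dictionary of one-word commands the system understands.
COMMANDS = ["open", "close", "save", "print", "delete"]

def match_command(spoken, cutoff=0.6):
    """Return the closest known command to the (possibly imperfectly
    recognised) spoken word, or None if nothing is close enough."""
    hits = difflib.get_close_matches(spoken.lower(), COMMANDS, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(match_command("Opne"))   # a slightly garbled "open" still matches
print(match_command("xyzzy"))  # nothing close enough -> None
```

The cutoff parameter plays the role of the recogniser's confidence threshold: too low and wrong commands fire, too high and valid commands are rejected.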
Advantages
• More efficient, hands-free input.
• Easy to use.
• Unauthorized speakers can be identified.
Disadvantages
• Very limited vocabulary.
• Voices of different operators can't always be distinguished.
Cons (CRT monitors)
• Big, bulky, occupies a lot of space.
• Power-hungry.
• If you are sharp, you will spot some flickering on a CRT monitor due to the
refresh rate.
• Leave the same image on the screen for too long, and the electrons will burn
it permanently onto the fluorescent screen.
Cons (LCD monitors)
• Dead pixels can happen when the liquid crystals are manufactured badly. These dead pixels will not respond to any signals and stay “stuck” on one color, which is very annoying even when there is only one on the screen.
• Due to the use of polarizing filters, the direction of the light is limited and we
will have problems with the viewing angle. In the older LCD monitors, you
can only see what is on the screen when sitting right in front of it.
LED Monitor
This is kind of confusing, but yes, LED monitors are essentially LCD monitors, with the exception that LED monitors no longer use a fluorescent backlight but LED lights.
Twisted Nematic (TN) – Most affordable, high refresh rate, but bad color
reproduction and viewing angle.
In-plane Switching (IPS) – Slightly more expensive, but good viewing angles
and colors. Slightly lower refresh rate.
Multi-domain Vertical Alignment (MVA) – A cross between TN and IPS.
Patterned Vertical Alignment (PVA) – Also a cross between TN and IPS.
Projector
This is not really a “conventional monitor”, but you have probably seen one of
these around – Projectors are commonly used in schools, offices, and even in movie
theaters.
Remember how LCD monitors work? By controlling and bending light with liquid crystals to display the image. Modern LCD projectors employ much the same idea, but instead of using “power saving” LED lights, projectors use a powerful light source such as a halogen lamp to project the image onto a solid surface.
Pros
• Able to project images onto a wide area, turns an empty wall into a useful
screen.
• Used to be expensive, but getting affordable these days.
• Some projectors are portable these days, good for presentations and movie nights.
Cons
• No good in bright areas.
• Image quality and colors are definitely not as good as CRTs and LCDs…
Unless you are willing to spend thousands on a “movie grade” projector.
• The bright light source usually heats up quickly.
Viewing Angle
• CRT: Very wide-angle, definitely better than LCDs.
• LCD: Depends. IPS panels have decently good viewing angles.
• Projector: Very wide-angle… it’s a projection.
Printers
A printer is the most important output device, used to print data on paper.
Types of Printers: There are many types of printers, classified on various criteria as shown in the figure:
Impact Printers
The printers that print the characters by striking against the ribbon and onto
the papers are known as Impact Printers.
These Printers are of two types:
• Character Printers
• Line Printers
Non-Impact Printers
The printers that print the characters without striking against the ribbon and
onto the papers are called Non-Impact Printers. These printers print a complete
page at a time, therefore, also known as Page Printers.
Page Printers are of two types:
• Laser Printers
• Inkjet Printers
Drum Printers
These are line printers, which print one line at a time. A drum printer consists of a solid cylindrical drum with characters embossed on it in the form of vertical bands that wrap around the drum in circular form. Each band consists of a set of characters. Each line on the drum holds 132 characters; because there are 96 lines, the total is (132 × 96) = 12,672 characters. The drum also works with a number of hammers.
Chain Printers
These are also line printers, used to print one line at a time. The chain consists of links, and each link contains one character. The printer can follow any character-set style, i.e., 48, 64 or 96 characters, and it also contains a number of hammers.
Advantages
• If the chain or band is damaged, it can be changed easily.
• It allows printing on different forms.
• Different scripts can be printed using this printer.
Disadvantages
• It cannot print charts and graphs.
• It cannot print characters of arbitrary shapes.
• A chain printer is an impact printer; the hammer strikes make it noisy.
Non-Impact Printers
Inkjet Printers
These printers use a special ink called electrostatic ink. The printer head has a special nozzle assembly that drops ink onto the paper; a head can contain up to 64 nozzles. The ink drops are deflected by an electrostatic plate fixed outside the nozzle, and the deflected ink settles on the paper.
Advantages
• These produce higher-quality output than dot matrix printers.
• High-quality output can be produced using the 64-nozzle print head.
• Inkjet can print characters in a variety of shapes.
• Inkjet can print special characters.
• The printer can print graphs and charts.
Disadvantages
• Inkjet Printers are slower than dot matrix printers.
• The cost of inkjet is more than a dot matrix printer.
Laser Printers
These are non-impact page printers. They use laser light to produce the dots needed to form the characters to be printed on a page, hence the name laser printers.
Step 1: The bits of data sent by processing unit act as triggers to turn the
laser beam on & off.
Step 2: The output device has a drum which is cleaned & given a positive electric charge. To print a page, the modulated laser beam scans back & forth across the surface of the drum; the charge is changed only on those parts of the drum surface exposed to the beam, creating a difference in electric charge between the exposed & unexposed areas of the drum surface.
Step 3: The laser exposed parts of the drum attract an ink powder known as
toner.
Step 4: The attracted ink powder is transferred to paper.
Step 5: The ink particles are permanently fixed to the paper by using either
heat or pressure technique.
Step 6: The drum rotates back to the cleaner where a rubber blade cleans off
the excess ink & prepares the drum to print the next page.
Plotters
Plotters are a special type of output device, suitable for applications that require large, precise drawings.
Advantages
• It can produce high-quality output on large sheets.
• It is used to provide the high precision drawing.
• It can produce graphics of various sizes.
• The speed of producing output is high.
Drum Plotter
It consists of a drum; the paper on which the design is made is kept on the drum, which can rotate in both directions. The plotter comprises one or more pens and penholders mounted perpendicular to the drum surface. The pens, kept in the holders, can move from left to right as well as right to left. The graph-plotting program controls the movement of the pen and drum.
Flatbed Plotter
A flatbed plotter is used to draw complex designs, graphs and charts. It can be kept on a table. The plotter consists of one or more pens and a pen-holding mechanism; each pen can hold ink of a different color, and the different colors help to produce multicolor designs. The pens can draw characters of various sizes. The plotting area is also variable, from A4 up to 21'*52'.
Plotters are used to draw cars, ships and airplanes, in shoe and dress designing, and in road and highway design.
Graphics Software
The main types of graphics software are:
• Painting programs
• Packages used for business purposes
• Packages used for medical systems
• CAD packages
Advantages
The advantages of cache memory are as follows -
• Cache memory is faster than main memory.
• It consumes less access time as compared to main memory.
• It stores the program that can be executed within a short period of time.
• It stores data for temporary use.
Disadvantages
The disadvantages of cache memory are as follows –
• Cache memory has limited capacity.
• It is very expensive.
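The same trade-off of speed against limited capacity applies to software caches. A small sketch using Python's functools.lru_cache shows how a cache with limited capacity avoids repeating an expensive lookup, and how a full cache must evict old entries:

```python
from functools import lru_cache

calls = {"count": 0}

@lru_cache(maxsize=2)  # limited capacity, like any cache
def slow_lookup(key):
    """Stand-in for an expensive fetch from 'main memory'."""
    calls["count"] += 1
    return key * 10

slow_lookup(1)   # miss: computed
slow_lookup(1)   # hit: served from the cache, no recomputation
slow_lookup(2)   # miss
slow_lookup(3)   # miss: capacity is 2, so key 1 is evicted
slow_lookup(1)   # miss again after eviction
print(calls["count"])  # 4 expensive computations for 5 requests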
Primary Memory (Main Memory)
Primary memory holds only those data and instructions on which the
computer is currently working. It has a limited capacity and data is lost when
power is switched off. It is generally made up of semiconductor device. These
memories are not as fast as registers. The data and instruction required to be
processed resides in the main memory. It is divided into two subcategories RAM
and ROM.
Characteristics of Main Memory
• These are semiconductor memories.
• Usually volatile memory.
• Data is lost in case power is switched off.
• It is the working memory of the computer.
• Faster than secondary memories.
• A computer cannot run without the primary memory.
Read Only Memory
ROM stands for Read Only Memory. The memory from which we can only read
but cannot write on it. This type of memory is non-volatile. The information is
stored permanently in such memories during manufacture. A ROM stores such
instructions that are required to start a computer. This operation is referred to
as bootstrap. ROM chips are not only used in the computer but also in other
electronic items like washing machine and microwave oven.
Various types of ROMs and their characteristics.
MROM (Masked ROM)
The very first ROMs were hard-wired devices that contained a pre-programmed
set of data or instructions. These kind of ROMs are known as masked ROMs, which
are inexpensive.
34
often used with computers. RAM is small, both in terms of its physical size and in
the amount of data it can hold.
RAM is of two types −
• Static RAM (SRAM)
• Dynamic RAM (DRAM)
Static RAM (SRAM)
The word static indicates that the memory retains its contents as long as
power is being supplied. However, data is lost when the power gets down due to
volatile nature. SRAM chips use a matrix of 6-transistors and no capacitors.
Transistors do not require power to prevent leakage, so SRAM need not be refreshed
on a regular basis.
There is extra space in the matrix, hence SRAM uses more chips than DRAM
for the same amount of storage space, making the manufacturing costs higher.
SRAM is thus used as cache memory and has very fast access.
Characteristic of Static RAM
• Long life
• No need to refresh
• Faster
• Used as cache memory
• Large size
• Expensive
• High power consumption
Dynamic RAM (DRAM)
DRAM, unlike SRAM, must be continually refreshed in order to maintain the
data. This is done by placing the memory on a refresh circuit that rewrites the data
several hundred times per second. DRAM is used for most system memory as it is
cheap and small. All DRAMs are made up of memory cells, which are composed of
one capacitor and one transistor.
Characteristics of Dynamic RAM
• Short data lifetime
• Needs to be refreshed continuously
• Slower as compared to SRAM
• Used as RAM
• Smaller in size
• Less expensive
• Less power consumption
36
Secondary Memory
This type of memory is also known as external memory or non-volatile. It is
slower than the main memory. These are used for storing data/information
permanently. CPU directly does not access these memories, instead they are
accessed via input-output routines. The contents of secondary memories are first
transferred to the main memory, and then the CPU can access it. For example,
disk, CD-ROM, DVD, etc.
Characteristics of Secondary Memory
• These are magnetic and optical memories.
• It is known as the backup memory.
• It is a non-volatile memory.
• Data is permanently stored even if power is switched off.
• It is used for storage of data in a computer.
• Computer may run without the secondary memory.
• Slower than primary memories.
Types of Storage
External Hard Drive
These are hard drives similar to the type that is installed within a desktop
computer or laptop computer. The difference being that they can be plugged in to
the computer or removed and kept separate from the main computer. They typically
come in two sizes:
Desktop External Hard drive: Uses a 3.5 inch hard drive similar to that used
in desktop computers.
Portable External Hard drive: Uses a 2.5 inch hard drive similar to that used
in laptops.
Desktop External Hard Drives are generally cheaper than Portable External
Hard Drives for the same storage space. Desktop External Hard Drives are usually
faster and more robust.
Capacity: 160GB to 3TB (approx 3000GB)
Connection: Most common connections to the computer are through a USB
2.0 or USB3.0 connection. May also be available in a SATA or eSATA connector.
Advantages
• Very good option for local backups of large amounts of data.
• The cheapest storage option in terms of dollars per GB. Very reliable when
handled with care.
Disadvantages
• Can be very delicate. May be damaged if dropped or through electrical surge.
37
LESSON - 3
INTRODUCTION TO SOFTWARE
3.1 INTRODUCTION
When we speak about a computer, we always remember out the physical
components of the computer and we do not give much importance to the virtual or
invisible components of the computer. Thanks to the influence created by mobile,
noe a days we think about the software of the system. Without a software a
computer is of no use and it is considered as an e-waste. Many of the computers
are discarded because of it software problem and not because of is hardware
issues, which can be rectified.
Software are the set of instructions that tell a computer what to do.
Software comprises the entire set of programs, procedures, and routines associated
with the operation of a computer system. The term was coined to differentiate these
instructions from hardware—i.e., the physical components of a computer system. A
set of instructions that directs a computer’s hardware to perform a task is called a
program, or software program.
3.2 OBJECTIVES
• To enrich the concept of software and its importance
• To throw insights about the types of software.
• To introduce operating system and enhance the knowledge about various
types of operating system
3.3 CONTENTS
3.3.1 Software
3.3.2Types of Software
3.3.3 Operating System
3.3.4 UNIX
3.3.5 LINUX
3.3.6 Windows OS
3.3.7 Freeware
3.3.1 SOFTWARE
It is the set of instructions that tell a computer what to do.
Software comprises the entire set of programs, procedures, and routines associated
with the operation of a computer system. The term was coined to differentiate these
instructions from hardware—i.e., the physical components of a computer system. A
set of instructions that directs a computer’s hardware to perform a task is called a
program, or software program.
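In this sense, even a few lines of Python constitute software: a stored set of instructions that directs the hardware to perform a task (the numbers here are just an illustrative calculation):

```python
# A minimal program: instructions the computer executes step by step.
radius = 5                      # store data in memory
area = 3.14159 * radius ** 2    # instruct the processor to compute
print(f"Area: {area:.2f}")      # instruct an output device to display
```

Everything from this three-line script up to an operating system spanning millions of lines is software in exactly the same sense.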
Software is typically stored on an external long-term memory device, such as a
hard drive or magnetic diskette. When the program is in use, the computer reads it
from the storage device and temporarily places the instructions in random access
42
memory (RAM). The process of storing and then performing the instructions is
called “running,” or “executing,” a program. By contrast, software programs and
procedures that are permanently stored in a computer’s memory using a read-only
(ROM) technology are called firmware, or “hard software.”
The two main types of software are system software and application software.
System software controls a computer’s internal functioning, chiefly through
an operating system, and also controls such peripherals as monitors, printers, and
storage devices. Application software, by contrast, directs the computer to execute
commands given by the user and may be said to include any program that
processes data for a user. Application software thus includes word processors,
spreadsheets, database management, inventory and payroll programs, and many
other “applications.” A third software category is that of network software, which
coordinates communication between the computers linked in a network.
Software is the collection of data, programs, procedures, routines and
instructions that tell a computer or electronic device how to run, work and execute
specific tasks. This is in contrast to hardware, which is the physical system and
components that perform the work.
Software can take the form of one line of code or, like Microsoft’s Windows operating system, span millions of lines.
Software also works with other software to form a cohesive system. Your smartphone is a collection of thousands of software components designed to work together.
Code languages and styles vary in size and scope. The software that runs a modern microwave is very different from the code that runs an Apple Mac.
3.3.2 TYPES OF SOFTWARE
Different types of software include:
• Application Software
• System Software
• Firmware
• Programming Software
• Driver Software
• Freeware
• Shareware
• Open Source Software
• Closed Source Software
• Utility Software
Application Software
Application software is a software program or group of programs designed for
end-users. There are many types of application software.
Types of Application Software and Examples
• Word Processing Software: Google Docs, Microsoft Word, WordPad and
Notepad
• Database Software: MySQL, Microsoft SQL Server, Microsoft Access, Oracle,
IBM DB2 and FoxPro
• Spreadsheet Software: Google Sheets, Apple Numbers and Microsoft Excel
• Multimedia Software: Media Player, Winamp, QuickTime and VLC Media
Player
• Presentation Software: Google Slides, Microsoft PowerPoint, Keynote, Prezi
• Enterprise Software: customer relationship management (CRM) software (HubSpot, Microsoft Dynamics 365), project management tools (Jira, Monday), marketing automation tools (Marketo, HubSpot), enterprise resource planning (ERP) software (Sage, Oracle, Microsoft Dynamics), treasury management system (TMS) software (SAP S/4HANA Finance, Oracle Treasury), business intelligence (BI) software (SAP Business Intelligence, MicroStrategy, Microsoft Power BI)
• Information Worker Software: Documentation tools, resource management
tools
• Communication Software: Zoom, Google Meet, Skype
• Educational Software: Dictionaries – Encarta, Britannica; Mathematical:
MATLAB; Others: Google Earth, NASA World Wind
• Simulation Software: Flight and scientific simulators
• Content Access Software: Accessing content through media players, web
browsers
• Application Suites: Apache OpenOffice, Microsoft Office 365, Apple’s iWork, LibreOffice, G Suite, Oracle E-Business Suite
• Software for Engineering and Product Development: IDE or Integrated
Development Environments
• Email Software: Microsoft Outlook, Gmail, Apple Mail
Benefits of Application Software
Applications are the lifeblood of our digital devices.
Mobile app developers create solutions to let businesses sell and market
themselves online. Financial applications run the stock market. The banking
system uses applications to transfer money and log transactions.
If a business needs a digital solution it usually comes in the form of an app.
System Software
System software provides a platform for other software and includes the
programs managing the computer itself, such as the computer’s operating system,
file management utilities and disk operating system (or DOS). The system’s files
consist of libraries of functions, system services, drivers for printers and other
hardware, system preferences and other configuration files. The programs in
system software encompass assemblers, compilers, file management tools, system
utilities and debuggers.
While application software is non-essential and won’t shut down your device
by being uninstalled, system software is essential and creates a platform that apps
sit inside.
Examples of System Software
System software runs things in the background and operating systems are an
example of system software.
For desktop computers, laptops and tablets:
• Microsoft Windows
• MacOS (for Apple devices)
• GNU/Linux
For smart phones:
• Apple’s iOS
• Google’s Android
• Windows Phone OS
Other examples include game engines, computational science software,
industrial automation software and software as a service application.
Other than operating systems, some people also classify programming software
and driver software as types of system software.
Benefits of System Software
Open-source operating systems let businesses create their own OS.
Firmware
Firmware is software that’s stored on a computer’s motherboard or chipset. Its job is to get the device working from the moment it is switched on. When you switch on your laptop, the Basic Input Output System (BIOS) wakes everything up. It checks the drive for errors, then checks whether an operating system is present. If so, it turns control over to the likes of Windows 10.
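That hand-off can be sketched as a sequence of checks. The function name and messages below are invented for illustration; real firmware is low-level machine code, not Python:

```python
# Hypothetical sketch of the BIOS hand-off: run a power-on self-test,
# then locate an operating system and transfer control to it.
def boot(drive_ok, os_present):
    if not drive_ok:
        return "halt: drive error"
    if not os_present:
        return "halt: no operating system found"
    return "control handed to operating system"

print(boot(drive_ok=True, os_present=True))
# control handed to operating system
```

The key point is the ordering: the firmware verifies the hardware first, and only then looks for software to hand control to.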
Programming Software
Software is developed using programming software.
Driver Software
In the past, a new peripheral like a printer required the correct driver, and when the driver CD went missing it took forever to find the right driver software online.
Thankfully Windows and other operating systems install and manage drivers
behind the scenes. The result is an optimised and working machine.
Examples of Driver Software
All hardware devices require drivers. For example:
• Graphic cards
• Network cards
• Mouse and keyboard
When you insert a USB flash drive into your computer, the OS recognizes it as
a new device. The driver then gets installed automatically to make it functional.
Benefits of Driver Software
Drivers are part of the system software category. Without them, nothing would
work.
Hardware manufacturers are usually responsible for creating driver software. However, Linux and Chromebook devices often get overlooked because of their small market share.
Thankfully the coding community comes to the rescue.
Someone writes the code to make the device work correctly on their system.
They then share the driver online for others to download and use.
Freeware
Freeware sounds like free software or open-source software but there’s a
difference.
Freeware software does not expose or share its source code. Yet the software
owner does not charge others to use it.
Freeware licences vary as to what the software can be used for and who can
share it.
Some developers only allow their freeware for private or personal use.
Businesses need a paid license or get written permission. An example of this
is GPT-3 – and only approved developers and marketers can get access to the
program.
Always read the small print and be wary of the copyright of freeware licenses.
Examples of Freeware
Freeware software examples cover a wide base of useful applications from
audio to virtual machines.
Benefits of Freeware
You pay nothing for fully developed software. You can uninstall it if you don’t
like the features. There are no companies ‘forcing’ you to upgrade.
Freeware also helps the online community to share and grow. Developers can
showcase their talents while businesses can avail of some excellent apps.
Shareware
Like freeware, shareware is free to use and share with others, but only for a
short time.
It acts as an evaluation. You can try some or all of the features before
committing to a purchase.
Examples of Shareware
WinZip is one of the most established shareware apps.
It started in 1991 when compression software wasn’t included in Windows.
Nearly thirty years later, it still sees high download volumes. The free trial is time-
limited but all versions include encryption.
Benefits of Shareware
Shareware lets you try the software for free before purchasing a full licence.
Some give a limited feature set or are time-locked. “Try before you buy” is a
great way to check if the software is right for your business’s needs.
Open Source Software
Open source means you can explore the actual code that the app was written
in.
Strict software licences restrict what another developer is able to do with the
code. However, the ethos behind open-source is to encourage development.
Open source means evolving the code to make it better for everyone.
Examples of Open Source Software
The Linux OS is the perfect example of open-source software.
Developers can download the source code and edit it as they see fit. New
flavours of Linux help target a certain need as a result.
Benefits of Open Source Software
Github.com is the top destination for coders to save and share their code.
Repositories are often open source and developers can find the right solution to
their issues easily. They can clone whole projects or download elements for free.
Closed Source Software
Most applications are closed source in that they do not expose the original
code.
Licenses are stringent. No unauthorized copying or cracking is allowed. The
app can be commercial or private but it requires payment of some kind to use.
Examples of Closed Source Software
Any app that hides or encrypts its source code is considered closed-source.
For example, Skype allows video conferencing. It’s owned by Microsoft and
although free to use, the corporation charges high-volume users a fee.
• Multitasking/Time Sharing OS
• Multiprocessing OS
• Real Time OS
• Distributed OS
• Network OS
• Mobile OS
Batch Operating System
Some computer processes are very lengthy and time-consuming. To speed up processing, jobs with similar needs are batched together and run as a group. The user of a batch operating system never directly interacts with the computer. In this type of OS, every user prepares his or her job on an offline device like a punch card and submits it to the computer operator.
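The idea of batching jobs with similar needs and running each batch as a group can be sketched in Python (the job names and resource types here are invented for illustration):

```python
from itertools import groupby

# Jobs submitted by different users, each tagged with a resource need.
jobs = [("payroll", "print"), ("report", "print"),
        ("backup", "tape"), ("archive", "tape")]

# Batch jobs with similar needs together and run each group in turn.
for need, batch in groupby(sorted(jobs, key=lambda j: j[1]),
                           key=lambda j: j[1]):
    names = [name for name, _ in batch]
    print(f"running {need} batch: {names}")
```

Grouping this way means a shared device (printer, tape drive) is set up once per batch rather than once per job, which is exactly the efficiency a batch OS aims for.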
Multi-Tasking/Time-sharing Operating systems
A time-sharing operating system enables people located at different terminals (shells) to use a single computer system at the same time. Sharing the processor (CPU) time among multiple users is termed time sharing.
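Time sharing can be illustrated with a toy round-robin scheduler that gives each user's job a fixed slice (quantum) of CPU time in turn. The quantum and job lengths below are invented for illustration:

```python
from collections import deque

def round_robin(jobs, quantum=2):
    """jobs: dict mapping job name -> remaining time units.
    Returns the order in which (job, units_run) slices execute."""
    queue = deque(jobs.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)       # run for at most one quantum
        timeline.append((name, run))
        if remaining - run > 0:             # unfinished jobs rejoin the queue
            queue.append((name, remaining - run))
    return timeline

print(round_robin({"alice": 3, "bob": 2, "carol": 4}))
# [('alice', 2), ('bob', 2), ('carol', 2), ('alice', 1), ('carol', 2)]
```

Because each user's job gets the CPU every few quanta, all users at their terminals experience the machine as if it were theirs alone.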
Real Time OS
In a real-time operating system, the time interval required to process and respond to inputs is very small. Military software systems and space software systems are examples of real-time OS.
Distributed Operating System
Distributed systems use many processors located in different machines to
provide very fast computation to its users.
Network Operating System
Network Operating System runs on a server. It provides the capability to serve
to manage data, user, groups, security, application, and other networking
functions.
Mobile OS
Mobile operating systems are those OS especially designed to power smartphones, tablets, and wearable devices. The most famous mobile operating systems are Android and iOS, but others include BlackBerry OS, webOS, and watchOS.
Functions of Operating System
Some typical operating system functions may include managing memory, files,
processes, I/O system & devices, security, etc.
Introduction to Kernel
Features of Kernel
• Low-level scheduling of processes
• Inter-process communication
• Process synchronization
• Context switching
Types of Kernels
There are many types of kernels, but the two most popular are:
1. Monolithic
A monolithic kernel is a single code or block of the program. It provides all the
required services offered by the operating system. It is a simplistic design which
creates a distinct communication layer between the hardware and software.
2. Microkernels
A microkernel manages all system resources. In this type of kernel, services are implemented in different address spaces: user services are stored in user address space, and kernel services in kernel address space. This helps to reduce the size of both the kernel and the operating system.
3.3.4 UNIX
UNIX is an operating system used in both workstations and servers. It has been important for the development of the internet and
final layer is the user. This means the whole operating system is visible to the user from the shell itself.
Kernel
Amongst the four layers, the kernel is the most powerful. The kernel mainly contains utilities along with the master control program. The kernel has the power to start or stop a program and to handle the file system. It also decides which program is selected when two processes try to access the same resource at the same time. Because the kernel has special access to the OS, this leads to the division of space between user space and kernel space.
The kernel structure is designed to support the primary UNIX requirements, which are divided into two categories listed below.
• Process management.
• File management.
Process Management
The resource allocation in CPU, memory, and services are few things which
will be handled under process management.
File Management
File management deals with managing all the data in files needed by the
process while communicating with devices and regulating data transmission.
The main operations performed by the kernel are:
• Ensures that user-given programs run on time.
• Plays a role in memory allocation.
• Manages the swapping between memory and disk.
• Transports data between peripherals.
• Responds to service requests from processes.
That is why the kernel is called the heart of the UNIX system. The kernel itself can be defined as a small program that contains enough data structure to pass arguments to a system call, receive its results, and return them to the calling process.
Hardware
Hardware can be defined as the system components that can be seen and touched, such as keyboards and monitors. In OS architecture, hardware also includes speakers, clocks, and other devices.
Shell
The shell can easily be defined as the software program which acts as a communication bridge between the kernel and the user. When the user gives commands, the shell reads them, interprets them, and then sends a request to the kernel to execute the program. When the program has been executed, the shell again sends a request to display the output to the user on screen. The shell can also be called a command interpreter. As noted above, the shell calls the kernel through almost 100 built-in calls.
Various tasks that the shell asks the kernel to do are:
• File opening.
• File writing.
• Executing programs.
• Obtaining detailed information about the program.
• Termination of the process.
• Getting information about time and date.
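The tasks above map directly onto system calls. A minimal sketch, assuming a Unix-like system with Python available (the file name demo.txt is made up for illustration):

```python
import datetime
import os
import subprocess

# File opening and writing become open/write calls into the kernel.
with open("demo.txt", "w") as handle:
    handle.write("hello from the shell\n")

# Executing a program: the kernel loads and runs echo on our behalf.
result = subprocess.run(["echo", "done"], capture_output=True, text=True)
print(result.stdout.strip())

# Obtaining detailed information about the current process.
print("process id:", os.getpid())

# Getting information about time and date.
print("now:", datetime.datetime.now())

# Process termination happens on exit; here we simply clean up the file.
os.remove("demo.txt")
```

Each line of this script ends up as a request that the kernel services, which is exactly the shell-to-kernel relationship described above.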
Unix Files and Directories
These include user-written and shell programs as well as the libraries of UNIX.
Directories
Directories in Unix have a name and a path and contain files and folders. The rules for files and folders are the same. They are stored in an inverted (upside-down) hierarchical tree structure. The main operations on directories are as follows:
• Displays home directories.
• Copies files to other directories.
• Renaming directories.
• Deleting directories.
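The workflow above can be sketched with Python's standard library; the directory names "reports" and "archive" are made up for illustration:

```python
import shutil
from pathlib import Path

# Display the home directory.
print("home directory:", Path.home())

# Create a working directory with one file in it.
src = Path("reports")
src.mkdir(exist_ok=True)
(src / "note.txt").write_text("quarterly figures\n")

# Copy files to another directory.
dst = Path("archive")
shutil.copytree(src, dst, dirs_exist_ok=True)

# Rename a directory.
renamed = dst.rename("archive_old")

# Delete directories.
shutil.rmtree(src)
shutil.rmtree(renamed)
```

Note that the same calls work on files and folders alike, matching the rule above that both follow the same conventions.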
Files
These are the files that contain data, text and program instructions. The main
workflow of files is
• Store user information like an image drawn or some content written.
• Mostly located under a directory.
• A file does not store the data of other files.
3.3.5 LINUX
Linux is an operating system or a kernel. It is distributed under an open
source license. Its functionality is quite similar to that of UNIX.
Linux is an operating system or a kernel which germinated as an idea in the
mind of young and bright Linus Torvalds when he was a computer science student.
He used to work on the UNIX OS (proprietary software) and thought that it needed
improvements. However, when his suggestions were rejected by the designers of
UNIX, he thought of launching an OS which will be receptive to changes,
modifications suggested by its users.
As time passed by, he collaborated with other programmers in places like MIT
and applications for Linux started to appear. So around 1991, a working Linux
operating system with some applications was officially launched, and this was the
start of one of the most loved and open-source OS options available today. The
earlier versions of Linux were not so user friendly as they were in use by computer
programmers and Linus Torvalds never had it in mind to commercialize his
product.
This definitely curbed Linux's popularity as the commercially oriented operating system Windows became famous. Nonetheless, the open-source aspect of the
Linux operating system made it more robust.
The main advantage of Linux was that programmers were able to use the
Linux Kernel to design their own custom operating systems. With time, a new range
of user-friendly OS's stormed the computer world. Now, Linux is one of the most
popular and widely used kernels, and it is the backbone of popular operating
systems like Debian, Knoppix, Ubuntu, and Fedora. Nevertheless, the list does not
end here as there are thousands of OS's based on Linux which offer a variety of
functions to the users. The Linux kernel is normally used in combination with the GNU project started by Richard Stallman. All modern distributions of Linux are actually distributions of GNU/Linux.
Properties of Linux
Linux Pros
A lot of the advantages of Linux are a consequence of Linux' origins, deeply
rooted in UNIX, except for the first advantage, of course:
1. Linux is free:
If you want to spend absolutely nothing, you don't even have to pay the price
of a CD. Linux can be downloaded in its entirety from the Internet completely for
free. No registration fees, no costs per user, free updates, and freely available
source code in case you want to change the behavior of your system.
2. Most of all, Linux is free as in free speech:
The license commonly used is the GNU General Public License (GPL). The license says
that anybody who may want to do so, has the right to change Linux and eventually
to redistribute a changed version, on the one condition that the code is still
available after redistribution.
3. Linux is portable to any hardware platform:
A vendor who wants to sell a new type of computer and who doesn't know
what kind of OS his new machine will run (say the CPU in your car or washing
machine), can take a Linux kernel and make it work on his hardware.
4. Linux is secure and versatile:
The security model used in Linux is based on the UNIX idea of security, which
is known to be robust and of proven quality. But Linux is not only fit for use as a
fort against enemy attacks from the Internet: it will adapt equally to other
situations, utilizing the same high standards for security. Your development
machine or control station will be as secure as your firewall.
5. Linux is scalable:
From a Palmtop with 2 MB of memory to a petabyte storage cluster with
hundreds of nodes: add or remove the appropriate packages and Linux fits all. You
don't need a supercomputer anymore, because you can use Linux to do big things
using the building blocks provided with the system. If you want to do little things,
such as making an operating system for an embedded processor or just recycling
your old 486, Linux will do that as well.
6. The Linux OS and most Linux applications have very short debug-times:
Because Linux has been developed and tested by thousands of people, both
errors and people to fix them are usually found rather quickly. It sometimes
happens that there are only a couple of hours between discovery and fixing of a
bug.
Linux Cons
• There are far too many different distributions:
"Quot capites, tot rationes", as the Romans already said: the more people,
the more opinions. At first glance, the amount of Linux distributions can be
frightening, or ridiculous, depending on your point of view. But it also
means that everyone will find what he or she needs. You don't need to be an
expert to find a suitable release.
When asked, generally every Linux user will say that the best distribution is
the specific version he is using. So which one should you choose? Don't
worry too much about that: all releases contain more or less the same set of
basic packages. On top of the basics, special third party software is added
making, for example, TurboLinux more suitable for the small and medium
enterprise, RedHat for servers and SuSE for workstations. However, the
differences are likely to be very superficial. The best strategy is to test a
couple of distributions; unfortunately not everybody has the time for this.
Luckily, there is plenty of advice on the subject of choosing your Linux. A
quick search on Google, using the keywords "choosing your distribution"
brings up tens of links to good advice. The Installation HOWTO also
discusses choosing your distribution.
• Linux is not very user friendly and confusing for beginners:
It must be said that Linux, at least the core system, is less user-friendly to
use than MS Windows and certainly more difficult than MacOS, but... In
light of its popularity, considerable effort has been made to make Linux even
easier to use, especially for new users.
• Is an Open Source product trustworthy?
How can something that is free also be reliable? Linux users have the choice
whether to use Linux or not, which gives them an enormous advantage
compared to users of proprietary software, who don't have that kind of
freedom. After long periods of testing, most Linux users come to the
conclusion that Linux is not only as good, but in many cases better and
faster than the traditional solutions. If Linux were not trustworthy, it would
have been long gone, never knowing the popularity it has now, with millions
of users. Now users can influence their systems and share their remarks
with the community, so the system gets better and better every day.
3.3.6 WINDOWS OS
The oldest of all Microsoft’s operating systems is MS-DOS (Microsoft Disk
Operating System). MS-DOS is a text-based operating system. Users have to type
commands rather than use the friendlier graphical user interfaces (GUIs) available
today. Despite its very basic appearance, MS-DOS is a very powerful operating
system. There are many advanced applications and games available for MS-DOS. A
version of MS-DOS underpins Windows. Many advanced administration tasks in
Windows can only be performed using MS-DOS.
Microsoft Windows is one of the most popular Graphical User Interface (GUI).
Multiple applications can execute simultaneously in Windows, and this is known
as “Multitasking”.
Windows Operating System uses both Keyboard and mouse as input devices.
Mouse is used to interact with Windows by clicking its icons. Keyboard is used to
enter alphabets, numerals and special characters.
Some of the functions of Windows Operating System are:
• Access applications (programs) on the computer (word processing, games,
spread sheets, calculators and so on).
• Load any new program on the computer.
• Manage hardware such as printers, scanners, mouse, digital cameras etc.,
• File management activities (For example creating, modifying, saving, deleting
files and folders).
• Change computer settings such as colour scheme, screen savers of your
monitor, etc.
Windows versions through the years
1985: Windows 1.0
The history of Microsoft Windows dates back to 1985, when Microsoft released
Microsoft Windows Version 1.01. Microsoft’s aim was to provide a friendly user-
interface known as a GUI (graphical user interface) which allowed for easier
navigation of the system features. Windows 1.01 never really caught on. The
release was a shaky start for the tech giant. Users found the software unstable.
(The amazing thing about Windows 1.01 is that it fitted on a single floppy disk).
However, the point-and-click interface made it easier for new users to operate a
computer. Windows 1.0 offered many of the common components found in today's
graphical user interface, such as scroll bars and "OK" buttons.
2015: Windows 10
Microsoft announced Windows 10 in September 2014, skipping Windows 9, and launched it in July 2015. Version 10 includes the Start menu, which was absent
from Windows 8. A responsive design feature called Continuum adapts the interface
depending on whether the user works with a touch screen or a keyboard and
mouse for input. New features like an onscreen back button simplified touch input.
Microsoft designed the OS to have a consistent interface across devices including
PCs, laptops and tablets.
3.3.7 FREEWARE
Freeware (not to be confused with free software) is programming that is offered
at no cost and is a common class of small applications available for downloading
and use in most operating systems. Because it may be copyrighted, you may or
may not be able to reuse it in programming you are developing. The least restrictive
"no-cost" programs are un-copyrighted programs that are in the public domain.
When reusing public domain software in your own programs, it's good to know the
history of the program so that you can be sure it really is in the public domain.
Freeware means there are no paid licenses required to use the application, no
fees or donations necessary, no restrictions on how many times you can download
or open the program, and no expiration date.
However, it can still be restrictive in some ways. Free software, on the other
hand, is completely and totally void of restrictions and allows the user to do
absolutely whatever they want with the program.
Freeware vs. Free Software
Freeware is cost-free software, while free software is restriction-free software. In other words, freeware is software under copyright but available at no cost; free software is software with no limitations or constraints on its use and modification, but it might not be free in the sense that there is no price attached to it.
Free software can be modified and changed at the will of the user. This means
that the user can make changes to the core elements of the program, re-write
whatever they want, overwrite things, completely repurpose the program, fork it
into new software, etc.
For free software to truly be free requires the developer to release the program
without restrictions, which is normally accomplished by giving away the source
code. This type of software is often called open-source software, or free and open-
source software (FOSS).
Free software is also 100% legally redistributable and can be used to make a
profit. This is true even if the user didn’t spend anything for the free software or if
they make more money from the free software than what they paid for it. The idea
here is that the data is totally and completely available for whatever the user wants.
The following are considered the required freedoms that a user must be
granted for the software to be considered free software (Freedoms 1-3 require
access to the source code):
Freedom 0: You're able to run the program for any purpose.
Freedom 1: You can study how the program works, and change it to make it
do whatever you want.
Freedom 2: You're given the ability to share and make copies of the software
so that you can help others.
Freedom 3: You can improve on the program, and release your improvements
(and modified versions) to the public so that everyone benefits.
Some examples of free software include GIMP, LibreOffice, and Apache HTTP
Server.
A freeware application may or may not have its source code freely available.
The program itself costs nothing and is completely usable without charge, but that
doesn’t mean that the program is editable and can be transformed to create
something new, or inspected to learn more about the inner-workings.
Freeware might also be restrictive. For example, one program might be free
only for private use and stop working if it’s found to be used for commercial
purposes, or maybe the software is restricted in functionality because there’s a paid
edition available that includes more advanced features.
Unlike the rights given to free software users, freeware users’ freedoms are
granted by the developer; some developers might give more or less access to the
program than others. They also might restrict the program from being used in
particular environments, lock down the source code, etc.
CCleaner, Skype, and AOMEI Backupper are examples of freeware.
Why Developers Release Freeware
Freeware often exists to advertise a developer's commercial software. This is
usually done by giving out a version with similar but limited features. For example,
this edition might have advertisements or some features might be locked down until
a license is provided.
Some programs might be available at no cost because the
installer file advertises other paid-for programs that the user might click on to
generate revenue for the developer.
Other freeware programs might not be profit-seeking but instead, are provided
to the public for free for educational purposes.
3.4 REVISION POINTS
• System software
• Application software
• Operating system
• UNIX, LINUX
• Free ware
3.5 INTEXT QUESTIONS
1. Explain the various types of software.
2. Define OS.
3. What are the characteristics of UNIX?
4. List down the advantages of LINUX.
5. Write a short note on freeware.
3.6 SUMMARY
Application software is a software program or group of programs designed
for end-users.
System software provides a platform for other software and includes the
programs managing the computer itself.
Freeware software does not expose or share its source code. Yet the software
owner does not charge others to use it.
An Operating System (OS) is system software that acts as an interface
between computer hardware components and the user
UNIX is designed to be accessed by multiple users at a time in a multitasking, time-sharing configuration.
Programmers were able to use the Linux Kernel to design their own custom
operating systems
3.7 TERMINAL EXERCISE
1. ________________ is an interface between user and the computer.
2. GUI means ________________
3. Set of instruction to run a computer is ______________
4. _____________ is a free operating system
5. _____________ has the power to start or stop a program and even handle the
file system
LESSON - 4
PROGRAMMING LANGUAGES
4.1 INTRODUCTION
Computer programming languages allow us to give instructions to a computer
in a language the computer understands. Just as many human-based languages
exist, there are arrays of computer programming languages that programmers can
use to communicate with a computer. The portion of the language that a computer
can understand is called a “binary.” Translating programming language into binary
is known as “compiling.” Each language, from C Language to Python, has its own
distinct features, though many times there are commonalities between
programming languages.
These languages allow computers to quickly and efficiently process large and
complex swaths of information. For example, if a person is given a list of
randomized numbers ranging from one to ten thousand and is asked to place them
in ascending order, chances are that it will take a sizable amount of time and
include some errors.
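A computer finishes that same task in a fraction of a second and without mistakes. A short illustration in Python (the list size of 100 is arbitrary):

```python
import random

# Draw 100 distinct random numbers between 1 and 10,000...
numbers = random.sample(range(1, 10001), k=100)

# ...and place them in ascending order with a single built-in call.
ordered = sorted(numbers)

print("smallest five:", ordered[:5])
```

The sorting itself is instantaneous at this scale, which is exactly the speed advantage the paragraph above describes.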
4.2 OBJECTIVES
• To understand the purpose of computer language
• To understand the evolution of programming language
• To know the programming languages that are commonly used
4.3 CONTENTS
4.3.1 Computer Language
4.3.2 Evolution of Programming Languages
4.3.3 Most Commonly Used Programming Language
4.3.4 Compiler
4.3.5 Interpreter
4.3.1 COMPUTER LANGUAGE
A computer language is a method of communication with a computer. Types of
computer languages include:
• Construction language, all forms of communication by which a human can
specify an executable problem solution to a computer
o Command language, a language used to control the tasks of the
computer itself, such as starting other programs
o Configuration language, a language used to write configuration files
o Programming language, a formal language designed to communicate
instructions to a machine, particularly a computer
• Markup language, a grammar for annotating a document in a way that is
syntactically distinguishable from the text, such as HTML
i. Machine Language
Advantages
• Python is easy to read, easy to understand, and easy to write.
• It integrates with other programming languages like C, C++, and Java.
• Python executes code line-by-line, so it is easy for the programmer to find
the error that occurred in the code.
• Python is platform-independent means you can write code once and run it
anywhere.
Disadvantages
• Python is not suitable for developing mobile applications and games.
• Python works with the interpreter. That's why it is slower than other
programming languages like C and C++.
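The readability claimed above shows even in a few lines; this sketch runs unchanged on Windows, Linux, or macOS under any standard Python interpreter:

```python
def average(values):
    """Return the arithmetic mean of a sequence of numbers."""
    return sum(values) / len(values)

# Easy to read and write: the code states its intent directly.
print(average([70, 80, 90]))  # → 80.0
```

Because Python executes line by line, a mistake such as dividing by an empty list is reported with the exact line number where it occurred.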
Java
Java is a simple, secure, platform-independent, reliable, architecture-neutral
high-level programming language developed by Sun Microsystems in 1995. Now,
Java is owned by Oracle. It is mainly used to develop bank, retail, information
technology, android, big data, research community, web, and desktop applications.
Advantages
• Java is easy to write, compile, learn, and debug as compared to other
programming languages.
• It provides an ability to run the same program on different platforms.
• It is a highly secured programming language because in java, there is no
concept of explicit pointers.
• It is capable of performing multiple tasks at the same time.
Disadvantages
• Java consumes more memory and is slower than other programming
languages like C or C++.
• It does not provide a backup facility.
C
C is a popular, simple, and flexible general-purpose computer programming
language. Dennis M. Ritchie developed it in 1972 at AT&T Bell Labs. It combines features of both a low-level and a high-level programming language. It is
used to design applications like Text Editors, Compilers, Network devices, and
many more.
Advantages
• C language is easy to learn.
• It is fast, efficient, portable, easy to extend, powerful, and flexible
programming language.
• It is used to perform complex calculations and operations, as in tools such as MATLAB.
• It provides dynamic memory allocation to allocate memory at the run time.
Disadvantages
In the C programming language, it is very difficult to find errors.
C++
C++ is one of the thousands of programming languages that we use to develop
software. The C++ programming language was developed by Bjarne Stroustrup around 1980. It
is similar to the C programming language but also includes some additional
features such as exception handling, object-oriented programming, type checking,
etc.
Advantages
• C++ is a simple and portable structured programming language.
• It supports OOPs features such as Abstraction, Inheritance, Encapsulation.
• It provides high-level abstraction, is useful for low-level programming, and is efficient for general-purpose use.
• C++ is more compatible with the C language.
Disadvantages
• The C++ programming language is not as secure as other programming languages like Java or Python.
• C++ does not support garbage collection.
• It is difficult to debug large as well as complex web applications.
C#
Disadvantages
• C# is less flexible because it is completely based on Microsoft .Net
framework.
• In C#, it is difficult to write, understand, debug, and maintain multithreaded
applications.
JavaScript
JavaScript is a type of scripting language that is used on both the client side and the server side. It was developed in the 1990s for the Netscape Navigator web
browser. It allows programmers to implement complex features to make web pages
alive. It helps programmers to create dynamic websites, servers, mobile
applications, animated graphics, games, and more.
Advantage
• JavaScript helps us to add behavior and interactivity on the web page.
• It can be used to decrease the loading time from the server.
• It has the ability to create attractive, dynamic websites, and rich interfaces.
• JavaScript is a simple, versatile, and lightweight programming language.
• JavaScript and its syntax are easy to understand.
Disadvantage
• JavaScript is completely based on the browser.
• It does not support multiple inheritance.
• It is less secure compared to other programming languages.
R
Currently, R programming is one of the popular programming languages that
is used in data analytics, scientific research, machine learning algorithms, and
statistical computing. It was developed in 1993 by Ross Ihaka and Robert Gentleman.
It helps marketers and data scientists to easily analyze, present, and visualize data.
Advantages
• R programming provides extensive support for Data Wrangling.
• It provides an easy-to-use interface.
• It runs on any platform like Windows, Linux, and Mac.
• It is an open-source and platform-independent programming language.
Disadvantages
• R programming does not support 3D graphics.
• It is slower than other programming languages.
PHP
PHP stands for Hypertext Preprocessor. It is an open-source, powerful server-
side scripting language mainly used to create static as well as dynamic websites. It
was developed by Rasmus Lerdorf in 1994. Inside PHP, we can also write HTML, CSS, and JavaScript code. PHP files are saved with the .php extension.
Advantages
• PHP is a more secure and easy-to-use programming language.
• It supports powerful online libraries.
• It can be run on a variety of operating systems such as Windows, Linux, and
Mac.
• It provides excellent compatibility with cloud services.
Disadvantages
• PHP is not capable of handling a large number of applications and not
suitable for large applications.
• It is quite difficult to maintain.
Go
Go or Golang is an open-source programming language. It is used to build
simple, reliable, and efficient software. It was developed by Robert Griesemer, Rob Pike, and Ken Thompson in 2007.
Advantages
• Go language is easy-to-learn and use.
• It comes with the in-built testing tools.
• Go is a fast programming language.
Disadvantages
• The Go language does not support generics.
• It does not support exception-style error handling.
• It lacks mature frameworks.
Ruby
Ruby is an open-source, general-purpose, and pure object-oriented programming language first released in 1995. It is used in front-end and back-end web
development. It is mainly designed to write CGI (Common Gateway Interface)
scripts.
Advantages
• Ruby supports various GUI (Graphical User Interface) tools like GTK and
OpenGL.
• It is used to develop both internet as well as intranet applications.
• The code written in Ruby is small and contains fewer lines.
Disadvantages
• Ruby is slower than other programming languages.
• It is very difficult for programmers to debug the code written in Ruby.
4.3.4 COMPILER
A compiler is a software program that is responsible for changing initial
programmed code into a more basic machine language closer to the “bare metal” of
the hardware, and more readable by the computer itself. A high-level source
code that is written by a developer in a high-level programming language gets
translated into a lower-level object code by the compiler, to make the result
“digestible” to the processor.
Formally, the output of the compilation is called object code or sometimes an
object module. The object code is machine code that the processor can execute one
instruction at a time.
Compilers are needed because of the way that a traditional processor executes
object code. The processor uses logic gates to route signals on a circuit board,
manipulating binary high and low signals to work the computer’s arithmetic logic
unit. But that’s not how a human programmer builds the code: unlike this basic,
binary machine language, the initial high-level code consists of variables,
commands, functions, calls, methods and other assorted fixtures represented in a
mixture of arithmetic and lexical syntax. All of that needs to be put into a form that
the computer can understand in order to execute the program.
A compiler executes four major steps:
Scanning: The scanner reads one character at a time from the source code
and keeps track of which character is present in which line.
Lexical Analysis: The compiler converts the sequence of characters that
appear in the source code into a series of strings of characters (known as tokens),
which are associated by a specific rule by a program called a lexical analyzer. A
symbol table is used by the lexical analyzer to store the words in the source code
that correspond to the token generated.
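Lexical analysis can be sketched with a toy tokenizer. This is an illustrative example only, not production compiler code; the token categories and the sample statement are assumptions chosen for the demonstration:

```python
import re

# Token kinds for a tiny expression language (illustrative assumptions).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),          # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"), # identifiers, recorded in the symbol table
    ("OP",     r"[+\-*/=]"),     # operators
    ("SKIP",   r"\s+"),          # whitespace, discarded
]
PATTERN = re.compile("|".join(f"(?P<{k}>{p})" for k, p in TOKEN_SPEC))

def tokenize(source):
    """Turn a character stream into (kind, text) tokens plus a symbol table."""
    tokens, symbol_table = [], set()
    for match in PATTERN.finditer(source):
        kind, text = match.lastgroup, match.group()
        if kind == "SKIP":
            continue
        if kind == "IDENT":
            symbol_table.add(text)   # lexical analyzer records source words
        tokens.append((kind, text))
    return tokens, symbol_table

tokens, symbols = tokenize("total = price + 42")
print(tokens)
print(symbols)
```

The scanner role described earlier corresponds to `finditer` walking the characters, while the symbol table here simply collects every identifier encountered.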
Syntactic Analysis: In this step, syntax analysis is performed, which involves
preprocessing to determine whether the tokens created during lexical analysis are
in proper order as per their usage. The correct order of a set of keywords, which can
yield a desired result, is called syntax. The compiler has to check the source code to
ensure syntactic accuracy.
Semantic Analysis: This step consists of several intermediate steps. First, the
structure of tokens is checked, along with their order with respect to the grammar
in a given language. The meaning of the token structure is interpreted by
the parser and analyzer to finally generate an intermediate code, called object code.
The object code includes instructions that represent the processor action for a
corresponding token when encountered in the program. Finally, the entire code is
Difference between Interpreter and Compiler
• An interpreter translates just one statement of the program at a time into machine code, whereas a compiler scans the entire program and translates the whole of it into machine code at once.
• An interpreter takes very little time to analyze the source code; however, the overall time to execute the process is much slower. A compiler takes a lot of time to analyze the source code; however, the overall time taken to execute the process is much faster.
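Python itself can illustrate both behaviours: its built-in compile() translates a whole source string in one pass, while feeding statements to exec one at a time mimics an interpreter. A small sketch (the two-line program is made up for the demo):

```python
source = "x = 2\ny = x * 10\n"

# Compiler-style: translate the whole program at once, then run it.
code_object = compile(source, "<demo>", "exec")
compiled_ns = {}
exec(code_object, compiled_ns)

# Interpreter-style: translate and run one statement at a time.
interpreted_ns = {}
for statement in source.splitlines():
    exec(statement, interpreted_ns)

print(compiled_ns["y"], interpreted_ns["y"])  # both yield 20
```

Both routes reach the same result; the difference, as the comparison above states, is whether translation happens once up front or statement by statement.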
LESSON - 5
SYSTEM DEVELOPMENT PROCESS
5.1 INTRODUCTION
Systems development is the process of defining, designing, testing and
implementing a new software application or program. It can include the internal
development of customized systems, the creation of database systems or the
acquisition of third party developed software.
Organizations aim to produce high quality systems that meet or exceed
customer expectations, based on customer requirements, by delivering systems
which move through each clearly defined phase, within scheduled time-frames and
cost estimates. They also adhere to important phases that are essential for our
developers such as planning, analysis, design and implementation. We shall
discuss some of the models used for software development process in this lesson.
5.2 OBJECTIVES
• To understand the common steps in software development
• To study various models used for software development
5.3 CONTENTS
5.3.1 Software Development Life Cycle
5.3.2 Waterfall Model
5.3.3 Iterative Model
5.3.4 Spiral Model
5.3.5 V-Model
5.3.6 Big Bang Model
5.3.1 SOFTWARE DEVELOPMENT LIFE CYCLE
Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
• SDLC is the acronym of Software Development Life Cycle.
• It is also called as Software Development Process.
• SDLC is a framework defining tasks performed at each step in the software
development process.
• ISO/IEC 12207 is an international standard for software life-cycle
processes. It aims to be the standard that defines all the tasks required for
developing and maintaining software.
What is SDLC?
SDLC is a process followed for a software project, within a software
organization. It consists of a detailed plan describing how to develop, maintain,
replace and alter or enhance specific software. The life cycle defines a methodology
for improving the quality of software and the overall development process.
The following figure is a graphical representation of the various stages of a typical
SDLC.
All these phases are cascaded to each other in which progress is seen as
flowing steadily downwards (like a waterfall) through the phases. The next phase is
started only after the defined set of goals are achieved for previous phase and it is
signed off, so the name "Waterfall Model". In this model, phases do not overlap.
Waterfall Model - Application
Every software developed is different and requires a suitable SDLC approach to
be followed based on the internal and external factors. Some situations where the
use of Waterfall model is most appropriate are −
• Requirements are very well documented, clear and fixed.
• Product definition is stable.
• Technology is understood and is not dynamic.
• There are no ambiguous requirements.
• Ample resources with required expertise are available to support the
product.
• The project is short.
Waterfall Model - Advantages
The advantages of waterfall development are that it allows for
departmentalization and control. A schedule can be set with deadlines for each
stage of development and a product can proceed through the development process
model phases one by one.
Development moves from concept, through design, implementation, testing,
installation, troubleshooting, and ends up at operation and maintenance. Each
phase of development proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows −
• Simple and easy to understand and use
• Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well
understood.
• Clearly defined stages.
• Well understood milestones.
• Easy to arrange tasks.
• Process and results are well documented.
V-Model - Design
Under the V-Model, the corresponding testing phase for each development phase
is planned in parallel. So, there are Verification phases on one side of the ‘V’ and
Validation phases on the other side. The Coding Phase joins the two sides of the
V-Model.
The following illustration depicts the different phases in a V-Model of the SDLC.
Architectural Design
Architectural specifications are understood and designed in this phase.
Usually more than one technical approach is proposed and based on the technical
and financial feasibility the final decision is taken. The system design is broken
down further into modules taking up different functionality. This is also referred to
as High Level Design (HLD).
The data transfer and communication between the internal modules and with
the outside world (other systems) is clearly understood and defined in this stage.
With this information, integration tests can be designed and documented during
this stage.
Module Design
In this phase, the detailed internal design for all the system modules is
specified, referred to as Low Level Design (LLD). It is important that the design is
compatible with the other modules in the system architecture and the other
external systems. Unit tests are an essential part of any development process
and help eliminate the maximum number of faults and errors at a very early stage.
These unit tests can be designed at this stage based on the internal module designs.
Coding Phase
The actual coding of the system modules designed in the design phase is taken
up in the Coding phase. The best suitable programming language is decided based
on the system and architectural requirements.
The coding is performed based on the coding guidelines and standards. The
code goes through numerous code reviews and is optimized for best performance
before the final build is checked into the repository.
Validation Phases
The different Validation Phases in a V-Model are explained in detail below.
Unit Testing
Unit tests designed in the module design phase are executed on the code
during this validation phase. Unit testing is the testing at code level and helps
eliminate bugs at an early stage, though all defects cannot be uncovered by unit
testing.
Integration Testing
Integration testing is associated with the architectural design phase.
Integration tests are performed to test the coexistence and communication of the
internal modules within the system.
System Testing
System testing is directly associated with the system design phase. System
tests check the entire system functionality and the communication of the system
under development with external systems. Most of the software and hardware
compatibility issues can be uncovered during this system test execution.
Acceptance Testing
Acceptance testing is associated with the business requirement analysis phase
and involves testing the product in user environment. Acceptance tests uncover the
compatibility issues with the other systems available in the user environment. It
also discovers the non-functional issues such as load and performance defects in
the actual user environment.
V-Model - Application
The V-Model is applied in much the same way as the waterfall model, as both
models are sequential. Requirements have to be very clear before the project
starts, because it is usually expensive to go back and make changes. This model is
used in the medical development field, as it is a strictly disciplined domain.
The following pointers list some of the scenarios most suited to the V-Model.
• Requirements are well defined, clearly documented and fixed.
• Product definition is stable.
• Technology is not dynamic and is well understood by the project team.
• There are no ambiguous or undefined requirements.
• The project is short.
V-Model - Pros and Cons
The advantage of the V-Model method is that it is very easy to understand and
apply. The simplicity of this model also makes it easier to manage. The
disadvantage is that the model is not flexible to changes and just in case there is a
requirement change, which is very common in today’s dynamic world, it becomes
very expensive to make the change.
The advantages of the V-Model method are as follows −
• This is a highly-disciplined model and Phases are completed one at a time.
• Works well for smaller projects where requirements are very well
understood.
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
The disadvantages of the V-Model method are as follows −
• High risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for the projects where requirements are at a moderate to high
risk of changing.
LESSON - 6
SDLC MODELS
6.1 INTRODUCTION
Various software development life cycle models have been defined and designed
to be followed during the software development process. These models are also
referred to as "Software Development Process Models". Each process model follows
a series of steps unique to its type to ensure success in the process of software
development.
The following are some of the popular SDLC models followed in the industry;
they are discussed in detail in this lesson.
• Agile Model
• RAD Model
• Prototyping Model
6.2 OBJECTIVES
• To understand the steps and features of various models
• To study the advantages and disadvantages of each model
6.3 CONTENTS
6.3.1 Agile Model
6.3.2 RAD Model
6.3.3 Prototyping Model
6.3.1 SDLC - AGILE MODEL
Agile SDLC model is a combination of iterative and incremental process models
with focus on process adaptability and customer satisfaction by rapid delivery of
working software product. Agile Methods break the product into small incremental
builds. These builds are provided in iterations. Each iteration typically lasts from
about one to three weeks. Every iteration involves cross functional teams working
simultaneously on various areas like −
• Planning
• Requirements Analysis
• Design
• Coding
• Unit Testing and
• Acceptance Testing.
At the end of the iteration, a working product is displayed to the customer and
important stakeholders.
What is Agile?
The Agile model holds that every project needs to be handled differently and
that existing methods need to be tailored to best suit the project requirements.
In Agile, tasks are divided into time boxes (small time frames) to deliver specific
features for a release.
An iterative approach is taken, and a working software build is delivered after
each iteration. Each build is incremental in terms of features; the final build holds
all the features required by the customer.
Here is a graphical illustration of the Agile Model −
The Agile thought process started early in software development and became
popular over time due to its flexibility and adaptability.
The most popular Agile methods include Rational Unified Process (1994),
Scrum (1995), Crystal Clear, Extreme Programming (1996), Adaptive Software
Development, Feature Driven Development, and Dynamic Systems Development
Method.
prototyping very effectively to give the exact look and feel even before the actual
software is developed.
Software that involves a great deal of data processing, where most of the
functionality is internal with very little user interface, does not usually benefit
from prototyping. Prototype development could be an extra overhead in such
projects and may need a lot of extra effort.
Software Prototyping - Pros and Cons
Software prototyping suits specific cases, and the decision to use it should be
taken very carefully so that the effort spent in building the prototype adds
considerable value to the final software developed. The model's pros and cons are
discussed below.
The advantages of the Prototyping Model are as follows −
• Increased user involvement in the product even before its implementation.
• Since a working model of the system is displayed, the users get a better
understanding of the system being developed.
• Reduces time and cost as the defects can be detected much earlier.
• Quicker user feedback is available leading to better solutions.
• Missing functionality can be identified easily.
• Confusing or difficult functions can be identified.
The Disadvantages of the Prototyping Model are as follows −
• Risk of insufficient requirement analysis owing to too much dependency on
the prototype.
• Users may get confused in the prototypes and actual systems.
• Practically, this methodology may increase the complexity of the system as
scope of the system may expand beyond original plans.
• Developers may try to reuse the existing prototypes to build the actual
system, even when it is not technically feasible.
• The effort invested in building prototypes may be too much if it is not
monitored properly.
6.4 REVISION POINTS
• Steps in Agile model
• Steps in RAD
• Steps in Prototyping
6.5 INTEXT QUESTIONS
1. Explain the manifesto principles of Agile model.
2. Compare Agile Vs Traditional models of SDLC.
3. Write a short note on Rapid Application Development.
4. Explain the phases of RAD model.
5. Explain the software prototyping types.
6.6 SUMMARY
Agile SDLC model is a combination of iterative and incremental process
models with focus on process adaptability and customer satisfaction by rapid
delivery of working software product.
Agile is based on adaptive software development methods.
In the RAD model, the functional modules are developed in parallel as
prototypes and are integrated.
Prototyping is used to allow users to evaluate developer proposals and try
them out before implementation.
6.7 TERMINAL EXERCISE
1. _______________ is the backbone of this Agile methodology
2. _______________ follow iterative and incremental model
3. _____________ is a working model
4. ________________ can have horizontal or vertical dimensions.
5. Predictive methods entirely depend on the ___________________________ done
in the beginning of cycle.
6.8 SUPPLEMENTARY MATERIALS
1. https://www.javatpoint.com/software-engineering-sdlc-models
2. https://www.innovativearchitects.com/KnowledgeCenter/basic-IT-
systems/8-SDLC-models.aspx
6.9 ASSIGNMENTS
1. Discuss the basic requirements, steps and types of prototyping model of
software development.
6.10 SUGGESTED READING/REFERENCE
1. Ritendra Goel & Kakkar D.N., Computer Application in Management, New
Age Publishing, New Delhi, 2013.
2. https://www.tutorialspoint.com/sdlc/sdlc_quick_guide.htm
6.11 LEARNING ACTIVITIES
1. Form a group with your interested friends and discuss about the features,
advantages and disadvantages of various SDLC models.
6.12 KEYWORDS
Adaptive software development methods
Requirement analysis and planning
Predictive methods
Throwaway prototyping
Evolutionary prototyping
Extreme prototyping
LESSON - 7
SYSTEM, INPUT AND OUTPUT DESIGN
7.1 INTRODUCTION
System design is the process that bridges the gap between problem domain
and the existing system in a manageable way. Information systems in business are
file and database oriented. Data are accumulated into files that are processed or
maintained by the system. In an information system, input is the raw data that is
processed to produce output. During the input design, the developers must
consider the input devices such as PC, MICR, OMR, etc.
Outputs from computer systems are required primarily to communicate the
results of processing to users. Without quality output, the entire system may
appear to be so unnecessary that users will avoid using it, possibly causing it to
fail. The term output applies to any information produced by an information
system. Let us discuss the design of system, form, input and output in detail in this
chapter.
7.2 OBJECTIVES
• To understand the concept of system and its importance.
• To get insights in the system design concept and process
• To infuse the knowledge of input and form design.
• To learn about output design and its importance
• To introduce database design.
7.3 CONTENTS
7.3.1 System Design
7.3.2 File Design
7.3.3 Input Design
7.3.4 Output Design
7.3.5 Form Design
7.3.6 Database Design
7.3.1 SYSTEM DESIGN
System design is the phase that bridges the gap between problem domain and
the existing system in a manageable way. This phase focuses on the solution
domain, i.e. “how to implement?”
It is the phase where the SRS document is converted into a format that can be
implemented and decides how the system will operate.
In this phase, the complex activity of system development is divided into
several smaller sub-activities, which coordinate with each other to achieve the main
objective of system development.
[Figure: E-R diagram symbols for Entity, Weak Entity, Relationship, Identifying
Relationship, Attribute, Key Attribute, Multivalued Attribute, Composite Attribute,
Derived Attribute, and Total Participation of E2 in R.]
Three types of relationships can exist between two sets of data: one-to-one, one-to-
many, and many-to-many.
File Organization
It describes how records are stored within a file.
There are four file organization methods −
• Serial − Records are stored in chronological order (in order as they are input
or occur). Examples − Recording of telephone charges, ATM transactions,
Telephone queues.
• Sequential − Records are stored in order based on a key field which
contains a value that uniquely identifies a record. Examples − Phone
directories.
• Direct (relative) − Each record is stored based on a physical address or
location on the device. Address is calculated from the value stored in the
record’s key field. Randomizing routine or hashing algorithm does the
conversion.
• Indexed − Records can be processed both sequentially and non-sequentially
using indexes.
File Access
One can access a file using either Sequential Access or Random Access. File
access methods allow computer programs to read or write records in a file.
Sequential Access
Every record on the file is processed starting with the first record until End of
File (EOF) is reached. It is efficient when a large number of the records on the file
need to be accessed at any given time. Data stored on a tape (sequential access)
can be accessed only sequentially.
Direct (Random) Access
Records are located by knowing their physical locations or addresses on the
device rather than their positions relative to other records. Data stored on a CD
device (direct access) can be accessed either sequentially or randomly.
Types of Files used in an Organization System
Following are the types of files used in an organization system −
• Master file − It contains the current information for a system. For example,
customer file, student file, telephone directory.
• Table file − It is a type of master file that changes infrequently and is stored
in a tabular format. For example, storing ZIP codes.
• Transaction file − It contains the day-to-day information generated from
business activities. It is used to update or process the master file. For
example, addresses of the employees.
• Temporary file − It is created and used whenever needed by a system.
• Mirror file − An exact duplicate of another file. Mirror files help minimize
the risk of downtime when the original becomes unusable. They must be
modified each time the original file is changed.
• Log files − They contain copies of master and transaction records in order
to chronicle any changes made to the master file. Log files facilitate
auditing and provide a mechanism for recovery in case of system failure.
• Archive files − Backup files that contain historical versions of other files.
Documentation Control
Documentation is a process of recording the information for any reference or
operational purpose. It helps users, managers, and IT staff who require it. It is
important that the prepared documents are updated on a regular basis so that the
progress of the system can be traced easily.
After the implementation of system if the system is working improperly, then
documentation helps the administrator to understand the flow of data in the
system to correct the flaws and get the system working.
Programmers or systems analysts usually create program and system
documentation. Systems analysts usually are responsible for preparing
documentation to help users learn the system. In large companies, a technical
support team that includes technical writers might assist in the preparation of user
documentation and training materials.
Advantages
• It can reduce system downtime, cut costs, and speed up maintenance tasks.
• It provides the clear description of formal flow of present system and helps
to understand the type of input data and how the output can be produced.
• It provides effective and efficient way of communication between technical
and nontechnical users about system.
• It facilitates the training of new users so that they can easily understand
the flow of the system.
• It helps users solve problems such as troubleshooting and helps managers
take better final decisions for the organization's system.
• It provides better control over the internal and external working of the system.
Types of Documentations
When it comes to system design, there are four main types of documentation −
• Program documentation
• System documentation
• Operations documentation
• User documentation
Program Documentation
• It describes inputs, outputs, and processing logic for all the program
modules.
• The program documentation process starts in the system analysis phase
and continues during implementation.
• This documentation guides programmers, who construct modules that are
well supported by internal and external comments and descriptions that
can be understood and maintained easily.
Operations Documentation
Operations documentation contains all the information needed for processing
and distributing online and printed output. Operations documentation should be
clear, concise, and available online if possible.
It includes the following information −
• Program, systems analyst, programmer, and system identification.
• Scheduling information for printed output, such as report, execution
frequency, and deadlines.
• Input files, their source, output files, and their destinations.
• E-mail and report distribution lists.
• Special forms required, including online forms.
• Error and informational messages to operators and restart procedures.
• Special instructions, such as security requirements.
User Documentation
It includes instructions and information to the users who will interact with the
system. For example, user manuals, help guides, and tutorials. User
documentation is valuable in training users and for reference purpose. It must be
clear, understandable, and readily accessible to users at all levels.
The users, system owners, analysts, and programmers, all put combined efforts to
develop a user’s guide.
A user documentation should include −
• A system overview that clearly describes all major system features,
capabilities, and limitations.
• Description of source document content, preparation, processing, and,
samples.
• Overview of menu and data entry screen options, contents, and processing
instructions.
• Examples of reports that are produced regularly or available at the user’s
request, including samples.
• Security and audit trail information.
This item, called the record key, key attribute, or simply key, is already part of
the record and is not additional data added to it. It is used purely for the purpose
of identification.
Common examples of record keys are the part number in an inventory record,
the chart number in a patient medical record, the student number in a university
record, or the serial number of a manufactured product. Each of these record keys
has various other uses in the organization.
Entity
An entity is any person, place, thing, or event of interest to the organisation
and about which data are captured, stored, or processed. Patients and tests are
entities of interest in hospitals, while banking entities include customers and
cheques.
File and Database
File: A file is a collection of related records. Each record in a file is included
because it pertains to the same entity. A file of cheques, for example, consists only
of cheques. Inventory records and invoice do not belong in a cheque file, since they
pertain to different entities.
Database: A database is an integrated collection of files. Records for different
entities are typically stored in a database (whereas files store records for a single
entity). In a university database, records for students, courses, and faculty are
interrelated in the same database.
File Organization: Records are stored in files using a file organization. The
file organization determines how the records are stored, located, and retrieved.
The different types of file organization are explained below.
Sequential Organization
Sequential organisation is the simplest way to store and retrieve records in a
file. In a sequential file, records are stored one after the other without concern for
the actual value of the data in the records. The first record stored is placed at the
beginning of the file. The second is stored right after the first (there are no unused
positions), the third after the second, and so on. This order never changes in
sequential file organisation, unlike the other organisations discussed later.
Sequential organization simply means storing and sorting in physical,
contiguous blocks within files on tape or disk. Records are also in sequence within
each block. To access a record, previous records within the block are scanned.
Thus sequential record design is best suited for “get next” activities, reading one
record after another without a search delay.
In a sequential organization, records can be added only at the end of the file. It
is not possible to insert a record in the middle of the file, without rewriting the file.
In a data base system, however, a record may be inserted anywhere in the file,
which would automatically resequence the records following the inserted record.
Another approach is to add all new records at the end of the file and later sort the
file on a key (name, number, etc.). Obviously, in a 60,000-record file it is less time-
consuming to insert the few records directly than to sort the entire file.
In a sequential file update, transaction records are in the same sequence as in
the master file. Records from both files are matched, one record at a time, resulting
in an updated master file. For example, the system changes the customer’s city of
residence as specified in the transaction file (on floppy disk) and corrects it in the
master file. A “C” in the record number specifies “replace”; an “A,” “add”; and a “D,”
“delete.”
In a personal computer with two disk drives, the master file is loaded on a
diskette into drive A, while the transaction file is loaded on another diskette into
drive B. Updating the master file transfers data from drive B to A, controlled by the
software in memory.
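The sequential master-file update described above can be sketched in Python. This is an illustrative sketch, not the book's exact procedure: the record layouts are assumptions, and a dictionary is used to condense the record matching for brevity; the transaction codes "C" (replace), "A" (add) and "D" (delete) follow the text.

```python
def update_master(master, transactions):
    """master: list of (key, data); transactions: list of (key, code, data)."""
    result = dict(master)                     # keyed working copy of the master file
    for key, code, data in transactions:
        if code == "A":
            result[key] = data                # "A": add a new record
        elif code == "C" and key in result:
            result[key] = data                # "C": replace the matching record
        elif code == "D":
            result.pop(key, None)             # "D": delete the matching record
    return sorted(result.items())             # rewritten master, in key sequence

# Hypothetical sample data for illustration
old = [(101, "Delhi"), (102, "Chennai"), (103, "Mumbai")]
txn = [(102, "C", "Kolkata"), (104, "A", "Pune"), (103, "D", "")]
print(update_master(old, txn))
# [(101, 'Delhi'), (102, 'Kolkata'), (104, 'Pune')]
```

A true tape-era update would merge the two sorted files record by record; the dictionary stands in for that matching step.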
Reading data in Sequential Organization
To read a sequential file, the system always starts at the beginning of the file
and reads its way up to the record, one record at a time. For example, if a
particular record happens to be the tenth one in a file, the system starts at the first
record and reads ahead one record at a time until the tenth is reached. It cannot go
directly to the tenth record in a sequential file without starting from the beginning.
In fact, the system does not know it is the tenth record. Whatever the nature of
the system being designed, reading always starts from the very beginning of the file.
Sequential Organization (Searching Record)
Records are accessed in the order of their appearance in the file. To find the
location of cheque 1258 in a sequential file, we call the cheque number 1258 the
search key. The program controls all the processing steps that follow. The first
record is read and its cheque number compared with the search key: 1240 (let it be the first)
versus 1258. Since the cheque number and search key do not match, the process is
repeated. The cheque number for the next record is 1244, and it also does not
match the search key. The process of reading and comparing records continues
until the cheque number and the search key match. Once the key number matches
the record is accessed. If the file does not contain a cheque numbered 1258, the
reading and comparing process continues until the end of the file is reached.
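The read-and-compare loop above can be sketched in Python. The record layout (a `cheque_no` field) and the cheque numbers other than 1240, 1244 and 1258 are assumptions for illustration.

```python
def sequential_search(records, search_key):
    """Read records one at a time from the start until the key matches."""
    for position, record in enumerate(records, start=1):
        if record["cheque_no"] == search_key:
            return position, record           # match found: stop reading
    return None                               # end of file reached, no match

# Hypothetical cheque file in input order
cheque_file = [{"cheque_no": n} for n in (1240, 1244, 1250, 1258, 1260)]
print(sequential_search(cheque_file, 1258))   # the fourth record matches
```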
Direct-Access Organisation
In contrast to sequential organisation, processing a direct-access file does not
require the system to start at the first record in the file. Direct-access files are
keyed files. They associate a record with a specific key value and a particular
storage location. All records are stored by key at addresses rather than by position;
if the program knows the record key, it can determine the location address of a
record and retrieve it independently of every other record in the file.
In direct-access file organization, records are placed randomly throughout
the file. Records need not be in sequence because they are updated directly and
rewritten back in the same location. New records are added at the end of the file or
inserted in specific locations based on software commands.
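The key-to-address idea can be shown in a minimal sketch, assuming a hypothetical seven-slot file and a simple modulo randomizing (hashing) routine; the text does not specify one, and collision handling is omitted for brevity.

```python
FILE_SIZE = 7                                 # number of storage slots (assumed)

def address_of(key):
    """Randomizing (hashing) routine: record key value -> storage address."""
    return key % FILE_SIZE

storage = [None] * FILE_SIZE                  # the direct-access file

def store(record):
    storage[address_of(record["key"])] = record   # write at the computed address

def fetch(key):
    return storage[address_of(key)]               # read directly, no scanning

store({"key": 1258, "payee": "Supplier A"})
print(fetch(1258))                            # retrieved without reading other records
```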
think of it as a file of separate, full or partially full blocks, each in sequential
order. The adjacent blocks are not in ascending order.
Like sequential organization, keyed sequential organization stores data in
physically contiguous blocks. The difference is in the use of indexes to locate
records. To understand this method, we need to distinguish among three areas in
disk storage: prime area, overflow area and index area. The prime area contains file
records stored by key or ID numbers. All records are initially stored in the prime
area. The overflow area contains records added to the file that cannot be placed in
logical sequence in the prime area. The index area is more like a data dictionary. It
contains keys of records and their locations on the disk. A pointer associated with
each key is an address that tells the system where to find a record.
In an airline reservation file, the index area might contain pointers to the
Chicago and Delhi flights. The Chicago flight points to the Chicago flight
information stored in the prime area. The Delhi flight points to the Delhi flight
information in the prime area. Lack of space to store the Brisbane flight in
sequential order makes it necessary to load it in the overflow area. The overflow
pointer places it logically in sequential order in the prime area. The same
arrangement applies to the other flights.
Indexed-sequential organization reduces the magnitude of the sequential
search and provides quick access for sequential and direct processing. The primary
drawback is the extra storage space required for the index. It also takes longer to
search the index for data access or retrieval.
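The index, prime and overflow areas can be sketched as follows, reusing the airline example from the text; the data layout and the pointer representation (tuples and list indexes) are assumptions for illustration.

```python
# Prime area: records in key sequence. Overflow area: out-of-sequence additions.
# Index area: key -> (area, slot) pointers, as in the airline example.
prime = [("CHI", "Chicago flight"), ("DEL", "Delhi flight")]
overflow = []
index = {"CHI": ("prime", 0), "DEL": ("prime", 1)}

def add_record(key, data):
    overflow.append((key, data))              # no room in the prime sequence
    index[key] = ("overflow", len(overflow) - 1)   # pointer keeps logical order

def find(key):
    area, slot = index[key]                   # one index probe...
    return (prime if area == "prime" else overflow)[slot][1]   # ...one direct read

add_record("BNE", "Brisbane flight")          # loaded into the overflow area
print(find("BNE"))
```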
Chaining
File organization requires that relationships be established among data items.
It must show how characters form fields, fields form files, and files relate to one
another. Establishing relationships is done through chaining or the use of pointers.
The example of the airline reservation file showed how pointers link one record
to another. A part number retrieves a record; a better way is to chain the records
by linking a pointer to each. The pointer gives the address of the next part of the
same class. The search method applies similarly to other parts in the file.
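A small sketch of chaining, with pointers represented as list indexes; the part records and class names below are hypothetical.

```python
# Each record carries a pointer (here, a list index) to the next record of the
# same class, so retrieving one record leads to the rest of the chain.
parts = [
    {"part_no": "P1", "cls": "bolt", "next": 2},
    {"part_no": "P2", "cls": "nut",  "next": None},
    {"part_no": "P3", "cls": "bolt", "next": 3},
    {"part_no": "P4", "cls": "bolt", "next": None},
]

def chain_from(addr):
    found = []
    while addr is not None:                   # follow pointers to the chain's end
        found.append(parts[addr]["part_no"])
        addr = parts[addr]["next"]
    return found

print(chain_from(0))                          # every bolt reachable from the first
```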
Inverted File
The other type of data structure commonly used in database management
systems is an inverted file. This approach uses an index to store information about
the location of records having particular attributes. In a fully inverted file, there is
one index for each type of data item in the data set. Each record in the index
contains the storage address of each record in the file that meets the attribute.
Some data items in a database will probably never be used to retrieve data.
Therefore, no index will be built for those data items. If not all attributes are
indexed, the database is only partially inverted, which is the more common data
structure.
Like the indexed-sequential storage method, the inverted list organization
maintains an index. The two methods differ, however, in the index level and record
storage. The indexed-sequential method has a multi-level index for a given key,
whereas the inverted list method has a single index for each key type. In an
inverted list, records are not necessarily stored in a particular sequence. They are
placed in the data storage area, but indexes are updated for the record keys and
locations.
Data for our flight reservation system has a separate Index area and a data
location area. The index area may contain flight number and a pointer to the record
present in the data location area. The data location area may have record numbers
along with all the details of the flight such as the flight number, flight description,
and flight departure time. These are all defined as keys, and a separate index is
maintained for each. In the data location area, flight information is in no particular
sequence. Assume that a passenger needs information about the Delhi flight. The
agent requests the record with the flight description “Delhi flight”. The Data Base
Management System (DBMS) then reads the single-level index sequentially until it
finds the key value for the Delhi flight. This value may have two records associated
with it. The DBMS essentially tells the agent the departing time of the flight.
Looking at inverted-list organization differently, suppose the passenger requests
information on a Delhi flight that departs at 8:15. The DBMS first searches the
flight description index for the value "Delhi flight." It finds both records. Next it
searches the flight departure index for these values. It finds that one of them
departs at 10:10, but the other departs at 8:15. The latter record in the data
location area is displayed for follow-up.
It can be seen that inverted lists are best for applications that request specific
data on multiple keys. They are ideal for static files because additions and deletions
cause expensive pointer updating.
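The two-index lookup in the flight example can be sketched as an inverted list in Python. The record numbers and departure times follow the narrative above; the field names are assumptions.

```python
# Data location area: records in no particular sequence, identified by record number.
flights = {
    1: {"description": "Delhi flight", "departure": "10:10"},
    2: {"description": "Delhi flight", "departure": "8:15"},
    3: {"description": "Chicago flight", "departure": "9:00"},
}

# One single-level index per key type, as in an inverted list.
by_description, by_departure = {}, {}
for rec_no, rec in flights.items():
    by_description.setdefault(rec["description"], set()).add(rec_no)
    by_departure.setdefault(rec["departure"], set()).add(rec_no)

def query(description, departure):
    """Intersect the two indexes: record numbers matching both attribute values."""
    return by_description.get(description, set()) & by_departure.get(departure, set())

print(query("Delhi flight", "8:15"))          # only the 8:15 Delhi record remains
```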
7.3.3 INPUT DESIGN
Introduction
Input Specification describes the manner in which data enter the systems for
processing. Input design features can ensure reliability of system and produce
results from accurate data. The input design also determines whether the user can
interact efficiently with the system.
In an information system, input is the raw data that is processed to produce
output. During the input design, the developers must consider the input devices
such as PC, MICR, OMR, etc.
Therefore, the quality of system input determines the quality of system output.
Well-designed input forms and screens have the following properties −
• It should serve specific purpose effectively such as storing, recording, and
retrieving the information.
• It ensures proper completion with accuracy.
• It should be easy to fill and straightforward.
• It should focus on user’s attention, consistency, and simplicity.
• All these objectives are achieved using knowledge of basic design
principles regarding −
o What inputs are needed for the system?
o How do end users respond to different elements of forms and screens?
Objectives for Input Design
The objectives of input design are −
• To design data entry and input procedures
• To reduce input volume
• To design source documents for data capture or devise other data capture
methods
• To design input data records, data entry screens, user interface screens, etc.
• To use validation checks and develop effective input controls.
Controlling Amount of Input: Data preparation and data entry operations
depend on people. Because labour costs are high, the cost of preparing and
entering data is high, so reducing data requirements can lower costs. The computer
may sit idle while data are being prepared and input for processing. By reducing
input requirements, the analyst can speed the entire process from data capture to
processing.
Avoiding Delay: Avoiding processing delays resulting from data preparation or
data entry operations should be one of the objectives of the analyst in designing
input.
Avoiding Errors in Data: The rate at which errors occur depends on the
quantity of data; the smaller the amount of data, the fewer the opportunities for
errors. The analyst can reduce the number of errors by reducing the volume of data
that must be entered for each transaction.
Avoiding Extra Steps: When the volume of transactions can't be reduced, the
analyst must be sure the process is as efficient as possible. Input designs that
cause extra steps should be avoided.
Keeping the Process Simple: There should not be so many controls on errors
that people have difficulty using the system. The system should be comfortable to
use while still providing error control methods.
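The error-avoidance objectives above are usually realized as validation checks applied at data entry. The sketch below is a hypothetical field-level validator; the field names and rules are invented for illustration, not taken from the text.

```python
# Hypothetical sketch of input validation checks for a data-entry record.
# Field names ("name", "age", "email") and rules are invented examples.
def validate_record(record):
    """Return a list of error messages; an empty list means the record is valid."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is required")
    age = record.get("age")
    if not (isinstance(age, int) and 0 < age < 120):
        errors.append("age must be a whole number between 1 and 119")
    if "@" not in record.get("email", ""):
        errors.append("email must contain '@'")
    return errors

print(validate_record({"name": "Asha", "age": 34, "email": "asha@example.com"}))  # []
```

Rejecting a bad field at the point of entry, with a specific message, is cheaper than detecting the error downstream during processing.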
Data Input Methods
It is important to design appropriate data input methods to prevent errors
while entering data. These methods depend on whether the data is entered by
customers in forms manually and later entered by data entry operators, or data is
directly entered by users on the PCs.
A system should prevent users from making mistakes by −
• Clear form design, leaving enough space for writing legibly.
costs are on the rise. This means that programming and software enhancements
should be kept simple and easy to update.
Accuracy and integrity: - The accuracy of a database ensures that data
quality and content remain constant. Integrity controls detect data inaccuracies
where they occur.
Recovery from failure: - With multi-user access to a database, the system
must recover quickly after it is down with no loss of transactions. This objective
also helps maintain data accuracy and integrity.
Privacy and security: - For data to remain private, security measures must
be taken to prevent unauthorized access. Database security means that data are
protected from various forms of destruction; users must be positively identified and
their actions monitored.
Performance: - This objective emphasizes response time to inquiries suitable
to the use of the data. How satisfactory the response time is depends on the nature
of the user-data base dialogue. For example, inquiries regarding airline seat
availability should be handled in a few seconds. On the other extreme, inquiries
regarding the total sale of a product over the past two weeks may be handled
satisfactorily in 50 seconds.
In a data base environment, the DBMS is the software that provides the
interface between the data file on disk and the program that requests processing.
The DBMS stores and manages data. The procedure is as follows:
1. The user requests a sales report through the application program. The
application program uses a data manipulation language (DML) to tell the
DBMS what is required.
2. The DBMS refers to the data model, which describes the view in a language
called the data definition language (DDL). The DBMS uses DDL to determine
how data must be structured to produce the user’s view.
3. The DBMS requests the input/output control system (IOCS) to retrieve the
information from physical storage as specified by the application program.
The output is the sales report.
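The DDL/DML division of labour described above can be sketched using Python's built-in sqlite3 module as a stand-in DBMS. The table name and figures are invented for illustration; the point is only that the DDL statement describes structure while the DML statements state what is required, leaving the physical retrieval to the DBMS.

```python
import sqlite3

# Illustrative sketch: DDL defines structure, DML manipulates data,
# and the DBMS mediates between the program and physical storage.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: describe how the data are structured.
cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# DML: insert data, then request the sales report; the DBMS
# decides how to satisfy the request.
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("North", 1200.0), ("South", 800.0), ("North", 300.0)])
cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region")
totals = cur.fetchall()
print(totals)  # [('North', 1500.0), ('South', 800.0)]
conn.close()
```

Notice that the program never says where or how the rows are stored; that independence of program from physical data layout is exactly the benefit listed in the summary that follows.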
To summarize,
1. DML manipulates data; it specifies what is required.
2. DDL describes how data are structured.
3. DBMS manages data according to DML requests and DDL descriptions.
4. The DBMS performs several important functions:
• Storing, retrieving, and updating data.
• Creating program and data independence. Either one can be altered
independently of the other.
• Enforcing procedures for data integrity. Data are immune from deliberate
alteration because the programmer has no direct method of altering physical
databases.
• Reducing data redundancy. Data are stored and maintained only once.
• Providing security facilities for defining users and enforcing authorization.
Access is limited to authorized users by passwords or similar schemes.
• Reducing physical storage requirements by separating the logical and
physical aspects of the database.
7.4 REVISION POINTS
• System design
• E-R model
• Documentation
• File components
• Input and output design
• Form and database design
7.5 INTEXT QUESTIONS
1. Explain the steps in system design.
2. Explain the terms and symbol used in E-R model.
3. Explain the different types of documentation for a system.
4. List the components of a file.
5. Write short notes on: Input design, Output design and Database design.
7.6 SUMMARY
System design is the phase that bridges the gap between problem domain
and the existing system in a manageable way.
Operations documentation contains all the information needed for
processing and distributing online and printed output.
System documentation serves as the technical specifications for the IS and
how the objectives of the IS are accomplished.
File organization determines how the records will be stored, located and
retrieved easily.
The input design also determines whether the user can interact efficiently
with the system.
During output design, developers identify the type of outputs needed, and
consider the necessary output controls and prototype report layouts.
LESSON - 8
INTEGRATION OF APPLICATIONS AND TEXT PROCESSING
8.1 INTRODUCTION
Application integration is the process of enabling individual applications to
work with one another. Application integration helps bridge the gap between
existing on-premises systems and fast-evolving cloud-based enterprise applications.
Microsoft Office is a suite of desktop productivity applications that is designed
specifically to be used for office or business use. It is a proprietary product of
Microsoft Corporation and was first released in 1990. Microsoft Office is available in
35 different languages and is supported by Windows, Mac and most Linux variants.
It mainly consists of Word, Excel, PowerPoint, Access, OneNote, Outlook and
Publisher applications.
Microsoft has dominated the business world for a long time. It is on every
computer, in every office, no matter if that is a library or a Fortune 500 company.
We use Microsoft Word to create the newsletters for our businesses and collate our
data with Microsoft Excel. As familiar as we are with these programs, you should
know that they’re always being improved in order to help us do more.
8.2 OBJECTIVES
• To know and understand the concept of application integration and its uses.
• To get knowledge of Ms-Office
• To improve the understanding of Microsoft Office, Powerpoint, excel and
Access.
• To learn the various operation in Ms-Office software.
8.3 CONTENTS
8.3.1 Application Integration
8.3.2 Text Processing Software
8.3.3 Ms-Office
8.3.4 Ms-Word
8.3.5 Ms-PowerPoint
8.3.6 Ms-Excel
8.3.7 Ms-Access
8.3.1 INTEGRATION OF APPLICATIONS
Application integration is the process of enabling individual applications—each
designed for its own specific purpose—to work with one another. By merging and
optimizing data and workflows between multiple software applications,
organizations can achieve integrations that modernize their infrastructures and
support agile business operations.
Application integration helps bridge the gap between existing on-premises
systems and fast-evolving cloud-based enterprise applications. Through seamlessly
interconnected processes and data exchanges, application integration allows
Uses of Integration
As more and more organizations concentrate on deploying agile integration
strategies, modernizing legacy systems is a primary focus. Industry-specific
examples include the following:
• Banking: By integrating customer accounts, loan applications services,
and other back-end systems with their mobile app, a bank can provide
services via a new digital channel and appeal to new customers.
• Manufacturing: Factories use hundreds or even thousands of devices to
monitor all aspects of the production line. By connecting the devices to
other systems (e.g., parts inventories, scheduling applications, systems
that control the manufacturing environment), manufacturers can uncover
insights that help them identify production problems and better balance
quality, cost, and throughput.
• Healthcare: By integrating a hospital patient’s record with an electronic
health record (EHR) system, anyone who treats the patient has access to
the patient’s history, treatments, and records from the primary care
physician and specialists, insurance providers, and more. As the patient
moves through different areas of the hospital, the relevant caregivers can
easily access the information they need to treat the patient most effectively.
Organizations in any industry can leverage mission-critical systems through
integration:
• ERP systems: Enterprise resource planning (ERP) systems serve as a hub
for all business activities in the organization. By integrating ERP with
supporting applications and services, organizations can streamline and
automate mission-critical business processes, such as payment
processing, supply chain functions, sales lead tracking, and more.
• CRM platforms: When combined with other tools and services, customer
relationship management (CRM) platforms can maximize productivity and
efficiency by automating a number of sales, marketing, customer support,
and product development functions.
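As a toy illustration of the idea behind these integrations, the sketch below merges customer data held in two separate systems (a CRM and an ERP) into one consolidated view. The system names, customer IDs, and fields are all invented; real integrations would work over APIs, message queues, or middleware rather than in-memory dictionaries.

```python
# Toy sketch of application integration: producing one consolidated
# customer view from data held in two separate systems.
# All names, IDs, and fields below are invented for illustration.
crm = {"C001": {"name": "Ravi", "segment": "Retail"}}
erp = {"C001": {"outstanding_balance": 2500.0},
       "C002": {"outstanding_balance": 0.0}}

def consolidated_view(customer_id):
    """Merge whatever each system knows about one customer."""
    view = {"id": customer_id}
    view.update(crm.get(customer_id, {}))
    view.update(erp.get(customer_id, {}))
    return view

print(consolidated_view("C001"))
```

However the data flows in practice, the end result is the same: each application keeps doing its specialized job while users see one coherent picture.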
8.3.2 TEXT PROCESSING SOFTWARE
In computing, the term text processing refers to the theory and practice of
automating the creation or manipulation of electronic text. Text usually refers to all
the alphanumeric characters specified on the keyboard of the person engaging the
practice, but in general text means the abstraction layer immediately above the
standard character encoding of the target text. The term processing refers to
automated (or mechanized) processing, as opposed to the same manipulation done
manually.
Text processing involves computer commands which invoke content, content
changes, and cursor movement, for example to
• search and replace
• format
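These operations can be sketched briefly in Python; the sample text and formatting rule below are invented for illustration.

```python
import re

# Sketch of two classic text-processing operations:
# search-and-replace, and simple formatting of matching lines.
text = "chapter one\nSystems are designed.\nchapter two\nSystems are built."

# Search and replace: change every occurrence of 'Systems' to 'Programs'.
replaced = re.sub(r"\bSystems\b", "Programs", text)

# Format: turn lines that start with 'chapter' into upper-case headings.
formatted = re.sub(r"(?m)^chapter .+$", lambda m: m.group(0).upper(), replaced)
print(formatted)
```

The same two primitives, applied mechanically across a whole document, are what distinguish automated text processing from manual editing.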
desktop productivity to the web, streamlining the way you work and making it
easier to share, access and analyze information so you get better results. Office
2000 offers a multitude of new features. Of particular importance for this release
are the features that affect the entire suite. These Office-wide, or shared features
hold the key to the new realm of functionality enabled by Office. Office offers a new
Web-productivity work style that integrates core productivity tools with the Web to
streamline the process of sharing information and working with others. It makes it
easier to use an organization's intranet to access vital business information and
provides innovative analysis tools that help users make better, timelier business
decisions. Office delivers new levels of resiliency and intelligence, enabling users
and organizations to get up and running quickly, stay working and achieve great
results with fewer resources.
The latest features for Microsoft Office 365 not only maintain the comfortable
framework that we are used to, but they also bring with them incredible new
features that we never knew we needed. These ten new features of Microsoft Office
will raise your productivity, creativity, and efficiency in the workplace and at
home.
Features of M.S. Office
Simultaneous Collaboration
This feature allows you to collaborate in real time with your colleagues and
staff and is available through Microsoft Word and PowerPoint Presentations.
Simultaneous Collaboration seamlessly shows what each team member is doing on
the document. It is important to note that while PowerPoint is compatible with
Simultaneous Collaboration, real-time typing is not yet available in PowerPoint
presentations.
Simple Sharing
Sharing documents should be easy. Microsoft Office new feature offers
straightforward document sharing with a simple share button. This button is in the
ribbon in Microsoft Word, Excel and PowerPoint. Now your team members can not
only quickly share documents, spreadsheets, and presentations but it also allows
you to access and change permissions. It is essential to note that co-authored files
must be shared through SharePoint or OneDrive.
Share Large Files As A URL
Large files are no longer a stalling point for businesses. Now if you need to
share a large file you can quickly do so with OneDrive, by sending it as a private
URL. This saves you a significant amount of time, as you no longer have to wait for
the file to upload. Additionally, the recipient no longer has to wait for the file to
download. Any large files that are attached to an email will automatically convert to
a link that the recipient can then open or download.
Helpful Versioning
No one is perfect. Sometimes you delete a section of a document, or
accidentally change the formatting, which is where versioning comes into play. You
can hastily revert the document to a previous version by clicking on File > History
to view and select from a list of all prior versions. Additionally, it is beneficial to
view various versions when you are tracking changes in drafts. While this isn’t a
new feature, it is newly improved. Microsoft knows how important this feature is, so
they have worked to ensure that this feature functions more efficiently each time.
Smart Lookup
The Smart Lookup feature provides you the ability to look up a word right from
inside your Microsoft word document, saving you the time of opening up a web page
or a dictionary. In the time that it would take to open a webpage, you would already
be back to your writing. To use this feature, you only need to highlight the word,
right-click, and select “Smart Lookup.”
Outlook Groups
Outlook has a new feature called Groups, which offers users a quick and
efficient way to work as a team without pre-created distribution lists. You can now
create a group of your colleagues or friends, giving this new group its own shared
inbox, calendar, file repository, and OneNote notes. This feature is incredibly
beneficial for task management and file sharing when working in teams.
New Charts in Microsoft
How many meetings have you sat through while staring at the same bar
graphs, with the same dull colors? Microsoft now offers six new chart types:
Treemap, Waterfall, Pareto, Histogram, Box and Whisker, and Sunburst. Each new
chart comes with a new layout and new possibilities. For example, Treemap
provides a hierarchical view of data, while Waterfall gives you a running total of
items as they are added and subtracted. Microsoft Word, Excel, and PowerPoint are
all compatible with these new charts.
One-Click Forecasting
Data is only important if it is used. One-click Forecasting, a new feature for
Excel, helps ensure that you use all the data that you are already collecting. This
feature allows users to view quick predictions based on selected portions of the
Excel Spreadsheet, which are made possible through Microsoft’s Exponential
Smoothing algorithm. This algorithm provides explicit short-term forecasts based
on the collective data within the spreadsheet.
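Excel's own forecasting algorithm is a more elaborate variant of exponential smoothing (the ETS family); the sketch below shows only the core single-exponential-smoothing idea, s(t) = alpha*x(t) + (1-alpha)*s(t-1), with invented sales figures.

```python
# Minimal single-exponential-smoothing sketch. Excel's one-click
# forecasting uses a more elaborate ETS algorithm; this illustrates
# only the underlying idea: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
def exponential_smoothing(series, alpha=0.5):
    smoothed = [series[0]]  # seed with the first observation
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

sales = [100, 120, 110, 130]
print(exponential_smoothing(sales))  # [100, 110.0, 110.0, 120.0]
```

A smaller alpha weights history more heavily and damps noise; a larger alpha tracks recent movements more closely. That trade-off is what the algorithm tunes when producing a short-term forecast.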
Skype Integration
Skype is now integrated into Word and Excel, which means that while team
members are working on a shared document, they have all of Skype’s capabilities at
their fingertips. Now you can call, text, or screen share, right from either the Word
Document or the Excel Spreadsheet. This capability provides team members the
ability to effectively communicate, without having to leave their work, which saves
you crucial time. Every time you have to stop working to save the spreadsheet,
share the document, and plan a meeting, you are losing valuable time, and putting
a dent in your workflow. Skype integration allows you to continue working without
interruption.
Cross-Device Compatibility
When collaborating, it’s essential that you have strong cross-device support.
Microsoft has worked hard to ensure a smooth transition across platforms and apps
by integrating them evenly with one another. This means that no matter which
platform and app you are using, you can be sure they are all on the same page.
A Common User Interface
While learning one application of the suite you get to learn the operational
basics of the other applications, while maintaining some uniqueness in the
applications. Consistency in MS-Office applications is in the form of:
• Toolbars
• Menus
• Dialog Boxes
• Customizable features and operational features are similar too.
Quick Access to Other Applications
MS-Office provides the Microsoft Office Shortcut Bar, which is used for
the following:
• Create a new file based on templates and wizards
• Opening existing files and automatically launching the related
applications
• Add tasks, make appointments, record tasks and add contacts and
journal entries.
• Create a new Outlook Message.
• Switch between and launch Microsoft Office Applications.
Sharing Data across Applications
Microsoft Office provides several means of sharing data between applications:
• Copying – copies the data from the source application to the target
applications using the clipboard.
• Linking – links the data from the source document into the target document;
the data remain stored with the source document.
• Embedding – embeds a copy of the data from the source document into the
target document; the copy is saved with the target document.
• Microsoft Office extends the data sharing beyond application integration
by providing workgroup integration with the Microsoft Outlook. Users can
mail documents, spreadsheets, presentations and data files from within
the source applications.
Providing a Common Language:
Providing a common language has been a more challenging goal for
Microsoft Office. It provides a common macro programming language for all the
applications – Visual Basic for Applications (VBA).
Advanced presentation features
While PowerPoint is still one of the most popular and commonly-used
presentation solutions available, there are plenty of others who view it as dated—
particularly with more tech-savvy options like Prezi available. In order to stay
relevant, Microsoft has announced plans to incorporate more advanced
presentation features in Office. These include things like enhanced Morph and
Zoom capabilities to help you create a more sophisticated and dynamic
presentation. Those features are already included in Microsoft 365 subscriptions,
but are not available to people who are currently operating with Office.
Improved inking features
Those who use Microsoft Surface devices are probably already big fans of the
digital pen that allows them to draw, note, and doodle directly onto their device’s
screen. Office 2019 will introduce all-new inking capabilities across all apps, such
as pressure sensitivity, tilt effects that adjust the ink’s thickness depending on the
angle of the pen, and even a roaming pencil case, which allows users to store and
organize their favorite pens, pencils, and highlighters and take them across
their different devices.
Easier Email Management
Microsoft has teased several new features to take some of the hassle and
headaches out of email management. According to Microsoft, these include things
like:
• Updated contact cards
• Microsoft Office Groups
• @mentions
• Focused inbox
• Travel package cards
Microsoft is hopeful that these additions will help users manage their email far
more efficiently and effectively.
Power Map in Excel: Turning data into a map
Power Map is part of the powerful and interactive data visualization features in
Excel, which are enhanced with Power BI, the solution for analyzing, visualizing
and sharing data insights. You can turn rows of data into a 3D interactive map with
Power Map, which includes the ability to filter data using three different filters: List,
Range, or Advanced.
8.3.4 MS-WORD
Ms-Word not only supports word processing features but also DTP features.
Some of the important features of Ms-Word are listed below:
1. Using word you can create the document and edit them later, as and when
required, by adding more text, modifying the existing text, deleting/moving
some part of it.
2. Changing the size of the margins can reformat complete document or part of
text.
3. Font size and type of fonts can also be changed. Page numbers and Header
and Footer can be included.
4. Spelling can be checked and correction can be made automatically in the
entire document. Word count and other statistics can be generated.
5. Text can be formatted in columnar style as we see in the newspaper. Text
boxes can be made.
6. Tables can be made and included in the text.
7. Word also allows the user to mix the graphical pictures with the text.
Graphical pictures can either be created in word itself or can be imported
from outside like from Clip Art Gallery.
8. Word also has the facility of macros. Macros can be either attached to some
function/special keys or to a tool bar or to a menu.
9. It also provides online help of any option.
The different elements and categories available in an MS Word document include:
• Home
This has options like font colour, font size, font style, alignment, bullets,
line spacing, etc. All the basic elements one may need to edit a
document are available under the Home option.
• Insert
Tables, shapes, images, charts, graphs, header, footer, page number, etc.
can all be entered in the document. They are included in the “Insert”
category.
• Design
The template or the design in which you want your document to be created
can be selected under the Design tab. Choosing an appropriate tab will
enhance the appearance of your document.
• Page Layout
Under the Page Layout tab comes options like margins, orientation,
columns, lines, indentation, spacing, etc.
• References
This tab is the most useful for those who are creating a thesis or writing
books or lengthy documents. Options like citation, footnote, table of
contents, caption, bibliography, etc. can be found under this tab.
• Review
Spell check, grammar, Thesaurus, word count, language, translation,
comments, etc. can all be tracked under the review tab. This acts as an
advantage for those who get their documents reviewed on MS Word.
Apart from all the above-mentioned features, the page can be set in different
views and layouts, which can be added and optimised using the View tab on the
Word document. Margins and scales are also available for the benefit of the users.
Uses of MS Word
Given below are the different fields in which MS Word is used and simplifies
the work of an individual:
• In Education: It is considered one of the simplest tools which can be
used by both teachers and students. Creating notes is easier using MS Word
as they can be made more interactive by adding shapes and images. It is
also convenient to make assignments on MS Word and submit them
online.
• In the Workplace: Submitting letters and bills, and creating reports, letterheads
and sample documents, can all easily be done using MS Word.
• Creating & Updating a Resume: It is one of the best tools to create your resume,
and it is easy to edit and make changes in it as per your experience.
• For Authors: Since separate options are available for bibliography, table of
contents, etc., it is the best tool which can be used by authors for writing
books and adjusting them as per the layout and alignment of your choice.
8.3.5 MS-POWERPOINT
A PowerPoint presentation is a presentation created using Microsoft
PowerPoint software. The presentation is a collection of individual slides that
contain information on a topic. PowerPoint presentations are commonly used in
business meetings and for training and educational purposes. Microsoft PowerPoint
is a software product used to perform computer based presentations. There are
various circumstances in which a presentation is made: teaching a class,
introducing a product to sell, explaining an organizational structure, etc. The
preparation and the actual delivery of each are quite different. PowerPoint typically
comes with a set of preloaded themes for you to choose from. These can range from
simple color changes to complete format layouts with accompanying font text.
Themes can be applied through the whole presentation or a single slide. Using the
page setup allows you to optimize the presentation for the display size; for instance,
you should use a larger screen ratio when displaying on a projector compared to a
computer screen.
Features of MS PowerPoint
Microsoft first rolled out MS PowerPoint in 1987. PowerPoint software features
and formatting options include a wizard that walks you through the presentation
creation process, and design templates: prepackaged background designs and font
styles that will be applied to all slides in a presentation.
presentation, slide progression can be manual, using the computer mouse or
keyboard to progress to the next slide, or slides can be set up to progress after a
specified length of time. Slide introductions and transitions can be added to the
slides.
With each version, new features in PowerPoint become available that help
make creating presentations easier. Microsoft PowerPoint is one of the most popular
programs for making presentations. With an intuitive graphical interface, in-built
editing tools, and more, it transformed presenting information for students,
businesses, and everything in between. Nevertheless, the program keeps evolving.
With PowerPoint 2019 comes a plethora of new features, helping even the greenest
of users make visually engaging presentations.
There are several new features in PowerPoint that were missing in previous
releases. This just goes to show that Microsoft does listen to what their users want.
Read on to learn about the new features in PowerPoint!
1. Insert Vectors
Tired of fuzzy images? Now your PowerPoint slides can boast the clarity and
sharpness of scalable vector graphic (SVG) pictures. You can edit vectors, such as
changing their color and size. PowerPoint 2019 can also handle SVG images with
filters on them. Insert your vector image in the SVG format, like you normally do
with other pictures.
A Format menu will appear in the ribbon. Here you can find different options
to play around with. Transform your vector into line art, use an eyedropper to select
a color from your slide, or convert it to an Office Shape. This allows the vector to
be disassembled and arranged as you please!
2. Conduct Slide Shows with Digital Pens
Presenting your slides is now easier than before! Use a compatible digital pen
like a wireless remote for a comparatively hands-free presenting experience. Of
course, you will need to update to the latest Windows 10 version. You also need a
digital pen (e.g. the Surface Pen) and a computer that supports Bluetooth. First,
enable Bluetooth on both devices. Next, pair your computer and pen through
Settings > Devices > Bluetooth & Other Devices.
Once paired, adjust the settings for the pen’s shortcut button. Navigate to
Settings > Devices > Pen & Windows Ink. Here you can see settings for choosing
how many clicks of the shortcut button launch which action. Check the box for the
option “Allow apps to override the shortcut button behavior.”
Now you are all set! When you start your presentation, activate the slide show. Tap
the button once to go forward, then briefly hold it down to go backward.
3. Morph Transition
Want smoother animations? PowerPoint 2019 brings you the Morph effect for
sleeker transitions. Two slides should have a minimum of one object in common
with each other. Make a copy of the first slide you want to Morph, then on the copy
modify the object.
Navigate to the Transitions tab, choose Morph. Play around with the Effect
Options until the Morph effect works to your preference.
5. Introducing: The Text Highlighter
PowerPoint 2019 has a Text Highlighter tool now – just like MS Word. A small
feature, but popular enough to include in this release. Go to the Home tab and
select the highlight option. Choose your preferred color, then glide the cursor over
the text you want to accentuate.
6. Use 3D Models
One of the many new features in PowerPoint for graphics includes inserting 3D
models. PowerPoint 2019 makes it simple to put a 3D model in your presentation.
Users can also rotate the 3D model 360 degrees for maximum impact. Head over to
the Insert tab > Illustrations section > 3D Models to locate and insert your file.
8. Export in 4K Resolution
Prefer a video for a smooth presenting experience? Now you can choose 4K
resolution! Simply go to File > Export > Create a Video, and select the Ultra HD (4K)
option.
8.3.6 MS-EXCEL
• PivotTable - flips and sums data in seconds and allows you to perform data
analysis and generate reports like periodic financial statements, statistical
reports, etc. You can also analyse complex data relationships graphically.
• Shortcut Menus - commands that are appropriate to the task that you are
doing appear by clicking the right mouse button.
Let us now discuss the essential features of MS Excel in detail:
Header and Footer
MS Excel allows users to insert a header and footer into their spreadsheet
document files. A header is the top margin of each page in an Excel worksheet,
while a footer is the bottom margin of each page in an Excel worksheet. These are
valuable components for Excel sheets as they appear on every page of the
document. Users can enter any text or numbers to include in the header and
footer of their Excel document, for example: the title of the document, user/author
name, page numbers, etc.
Apart from this, the main advantage of the header and footer in Excel is that this
feature allows users to insert a watermark into their Excel documents.
Shortcut Keys
The use of shortcut keys in Excel is one of the main features of this powerful
spreadsheet program. MS Excel has an extensive range of shortcut keys that help
users reduce their working time. The keyboard shortcuts are essential alternatives
to using a mouse or a touch screen to perform most Excel commands instantly.
Since Excel has a pretty long list of shortcut keys, we discuss a few essential
Excel shortcut keys below:
Additionally, we can press the shortcut keys Shift + F11 on the keyboard to
insert a new worksheet instantly.
Besides, we can also delete any desired worksheet with ease. First, we must
click on the desired worksheet name and then select the 'Delete' option from
the right-click menu options.
To use this feature, we need to navigate to Home > Find and
Select. Additionally, we can also use the following shortcuts for quick access to
individual features:
Users can apply the 'Sorting and Filtering' feature on one or more columns.
Built-in Formulae
Excel has a wide range of built-in formulae that allow users to perform
different operations on the data in worksheets. Using functions and formulae to
manipulate numbers and get desired results is one of the most powerful
features of MS Excel. It contains more than 450 functions and formulae, enabling
users to perform basic to complex operations efficiently.
To access the formulae in Excel, we are required to navigate to the 'Formulas' tab.
The basic formulae include SUM, AVERAGE, MINIMUM, MULTIPLY, etc. Let us
understand this feature with the following example:
Suppose we have specific numerical values in cell A1 and cell A2, and we want
to add these two values and get the result in cell A3. Thus, we apply the sum
function in cell A3, i.e., =SUM(A1,A2). This way, Excel displays the sum of values
from cell A1 and A2 in cell A3.
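The logic of this formula can also be expressed outside Excel. The following Python sketch is a hypothetical illustration, with a plain dictionary standing in for worksheet cells; it mirrors what =SUM(A1,A2) computes:

```python
# A minimal sketch of what =SUM(A1,A2) does, using a plain dictionary
# to stand in for worksheet cells (hypothetical illustration).
cells = {"A1": 25, "A2": 17}

def excel_sum(sheet, *refs):
    """Add up the values stored at the given cell references."""
    return sum(sheet[r] for r in refs)

# Equivalent of entering =SUM(A1,A2) in cell A3.
cells["A3"] = excel_sum(cells, "A1", "A2")
print(cells["A3"])  # 42
```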
This means that cell A5 is dependent on the above cells to display the
corresponding results (value). Therefore, if we copy the results from cell A5 and
paste it on another cell using the 'Paste' option, Excel will only paste the
corresponding formula but not the value.
If we want to paste as values only, we must copy cell A5 and then use the
'Paste Special' feature in a cell where we want to paste values. In our case, we paste
the values in cell C5. When we select 'Paste Special' or press its shortcut
keys, such as Alt + E + S, we get the following window.
We have many options here to paste the contents as desired. However, since
we want to paste only the values, we must select the 'Values' option and then click
on the 'OK' button on cell C5.
Thus, Excel only pastes the values from the cell A5, i.e.:
Similarly, we can use other options as per our requirements.
Pivot Tables
Pivot Tables help summarize vast amounts of data from a database organized so
that the first row contains headings and the others contain values or
categories. Besides, there should be no blank rows in the selected range of data.
This feature is beneficial to analyze and compare data easily.
To insert Pivot Tables in Excel, we must first select the range of cells or a
table and then navigate to Insert > Tables > Recommended PivotTables.
Once the user applies Pivot Tables in Excel, Excel then creates the PivotTables
in a new worksheet and displays the different fields, allowing users to rearrange the
data as desired.
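The grouping-and-summarizing idea behind a Pivot Table can be sketched in a few lines of Python. The records and field names below are invented for illustration; the function mimics a one-field PivotTable with a Sum aggregation:

```python
from collections import defaultdict

# Sample sales records; field names are invented for illustration.
records = [
    {"Region": "North", "Product": "Pens",  "Sales": 100},
    {"Region": "North", "Product": "Books", "Sales": 200},
    {"Region": "South", "Product": "Pens",  "Sales": 150},
    {"Region": "South", "Product": "Books", "Sales": 250},
]

def pivot_sum(rows, index, values):
    """Group rows by the `index` field and total the `values` field,
    like a one-field PivotTable with a Sum aggregation."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[index]] += row[values]
    return dict(totals)

print(pivot_sum(records, "Region", "Sales"))  # {'North': 300, 'South': 400}
```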
Conditional Formatting
Conditional Formatting in Excel is another helpful feature that allows users to
change the formatting of a cell based on the contents or range of the cells. This
particular feature is mainly beneficial for focusing on essential aspects of specific
desired values in spreadsheets. For example, conditional formatting features enable
users to fill in different colors to highlight the essential aspects of data in
spreadsheets.
Users can also apply basic font and cell formatting, such as font style, size, and
other font attributes.
To highlight the contents, users get various rules and styles and can even
create their custom rules as per needs.
Charts and Graphics
Excel allows users to create different types of charts based on the data in
sheets. Users can also use different built-in shapes and images if desired. In
addition to this, Excel also enables users to use mixed charts, meaning that we can
use/ combine two styles of charts in the same worksheet.
For example, we can use the line chart and the column chart on the same
range of data. This feature is mainly beneficial where users need to highlight two
different types of information or a range of values that changes significantly.
To insert charts and other graphic objects, we are required to navigate to
the Insert tab.
Auto-Fill Data
Although it is a minor feature, it is very useful for regular users. Using the
Auto-fill feature, users can fill data in series, for example, values from 1 to
10 or even more, weekdays, month names, dates, etc.
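The series-filling behaviour can be sketched in Python using only the standard library; the start date below is an arbitrary example:

```python
import calendar
from datetime import date, timedelta

# Values 1 to 10, as Auto-fill would extend a numeric series.
numbers = list(range(1, 11))

# Weekday names, as Auto-fill extends "Monday" across cells.
weekdays = list(calendar.day_name)

# A run of consecutive dates starting from an arbitrary day.
start = date(2023, 1, 1)
dates = [start + timedelta(days=i) for i in range(5)]

print(numbers)
print(weekdays[:3])
print(dates[0], "...", dates[-1])
```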
8.3.7 MS-ACCESS
Regular Microsoft Office users are not as familiar with Microsoft Access as they
are with Microsoft Word, Excel or PowerPoint.
Since Microsoft Access is a relational database application included in the
Microsoft Office Suite, which allows users to enter, manage, and run reports on
a larger scale, it is most suitable for those who need to quickly organize a
large amount of data.
It is layered somewhere between Excel, which is ideal for individuals with
small data storage needs, and the SQL Server databases required by larger teams
and corporations.
With the help of Microsoft Access, users can effectively manage important
information by storing it conveniently for future reference, reporting, and
analysis. As the name suggests, users will have access to organized information
in their database with minimal effort.
Microsoft Access Features
• Ideal for individual users and smaller teams
• Easier than a client-server database to understand and use
• Import and export to other Microsoft Office and other applications
• Ready templates for regular users to create and publish data
• Allows building and publishing Web databases effortlessly
The Import and Export Link group displays icons for all the data formats that
Microsoft Access can import or export data to. On clicking More, users can see
more formats that Microsoft Access can work with. For further convenience, the
import/export wizard helps users with the task and saves the details of that
operation as a specification.
Ready templates for regular users to create and publish data
Microsoft Access helps users create and manage databases even if they have
minimal experience in the field. This is made possible with the help of several
Microsoft Access templates, which have everything ready for use. On opening a
specific template file, the user finds a new database with tables, forms,
macros, reports, and other necessary fields already created, saving time and
effort.
The templates gallery conveniently comprises both desktop and web-based
templates for the user to choose from. For creating an Access database for
personal use, the best option is a desktop template. For creating databases for
publishing on a SharePoint server, it is recommended that the user choose the
web-based templates.
For example, the Desktop Customer Service Template from Microsoft Access 2013
helps users create a customer service database to manage multiple issues,
tracking assignments, priority, status, and resolution with ease.
The Microsoft Access Templates for Employee and Vendor Project Marketing help
users create an extensive marketing project database to track time-sensitive
deliverables, employee roles, and their vendors.
Allows building and publishing Web databases effortlessly
Users of Microsoft Access can either design their own database or create a
database using a readily available template as per their requirement.
Those who are tech-savvy and familiar with Web Databases would ideally
design their own database by creating a blank database on which they would create
the tables that their database would need on Access.
Those who need help or are not aware of what tables their project would
require can make use of the templates available for them. Microsoft Access
templates have a huge compilation for some commonly used databases that users
would require.
Even new users can create a database using a template by following these steps:
1. Open Access and open the Backstage view by clicking on FILE.
2. Find the required template among the templates shown there. If the required
template is not found, users can search Microsoft Office online for additional
templates.
3. Tap or click the template that is suitable for the purpose, and ensure that
the selected template is specific to either a desktop database or Web
publishing.
4. Enter a file name and select a location to store the created database.
5. Tap or click the Create button to create the database.
It is as easy as that.
A user-friendly feature, 'Tell Me', for assistance
The new user-friendly feature ‘Tell Me’ being introduced in Microsoft Access
2016 works like an assistant helping users complete the task quickly.
The feature is available as a text box on the ribbon in Microsoft Access 2016
that says Tell me what you want to do with a bulb beside it.
Users can enter words and phrases in the text field related to what they want
to do next and quickly get to features they want to use or actions they want to
perform. It also provides help related to what is being searched for.
For example, when the word 'filter' is entered, all the filter-related options
in the application will crop up. Users do not have to hunt a feature down
through a maze of menus with the Tell Me bar available.
Moreover, unlike the help assistants of the past, this feature doesn't just
tell the user how to perform a specific function; it offers a simple way to
actually do it.
Allows developers to create custom solutions using VBA code
Visual Basic for Applications (VBA) is a programming language that can be
used with Microsoft Access. Developers using Access can create custom solutions
for their database using VBA code, an effective programming language consisting
of commands for specific programs.
The instructions cause actions to take place automatically when the program is
executed. This powerful feature allows developers to extend basic custom
end-user solutions into professional solutions by using advanced automation,
data validation, error trapping, and multi-user support in their databases.
Hide/Show option for the Ribbon
The Microsoft Access window consists of a variety of components that help users
work more efficiently. The important components are the Navigation Pane, the
Access work area, the ribbon, shortcut menus, and the Quick Access Toolbar.
Some of these components are common to other Office apps, whereas others are
unique to Microsoft Access.
The ribbon contains five tabs used to accomplish various tasks on the
computer related to organizing and managing the contents of the open window in
Microsoft Access.
It is located near the top of the window below the title bar and provides easy,
central access to the tasks performed while creating a database.
The ribbon available in Microsoft Access consists of tabs, groups, and
commands. Each tab contains a collection of groups, and each group contains
related functions. It can be further customized to suit the user requirement.
At times the ribbon, which initially displays several main tabs, can be a bit
irritating. Now users have the option to hide the ribbon when not required to have a
clutter-free screen.
The ribbon can be minimized in Access by double-clicking a tab, and this
setting is retained for future sessions. Users no longer have to worry about
the ribbon infringing on their work area.
Report View Eliminates Extra Reports
With Microsoft Access, users can choose four different ways to view reports:
• Report view
• Print Preview
• Layout view
• Design view.
The report view shows the report on the screen as users would prefer to see it.
A very useful feature of Microsoft Access, the new Report View allows users to
perform ad hoc filters on a report similar to the way they can filter forms.
Users have a choice to pick which fields they desire to have on their reports by
choosing their preference from more than one table or query.
They can filter by specific column values, by words that begin with or contain
certain letters, or by a data range.
The resulting reports show exactly what the viewer wants to see, with
summaries automatically recalculated. Moreover, this requires no special
programming skills on behalf of the user.
Further, developers can add grouping levels, set the order of their records,
and sort records in either ascending or descending order. Finally, users will
see the report exactly as they want it, with all the extra unwanted fields
eliminated.
With little effort, the readability of the reports will be enhanced, and they
become more viewer-friendly.
By setting the Hide Duplicates property to Yes, viewers can get rid of unwanted
duplicates in reports when needed.
Output Reports in PDF format
Today more and more users are sharing databases through electronic image
formats, popularly known as fixed formats, such as PDF by Adobe Systems and XPS
by Microsoft.
Access allows users to create reports in electronic image files through the
EXTERNAL DATA tab on the ribbon. These reports can be viewed even by users
who do not have Access installed on their system since the PDF file can be opened
on Adobe Reader.
With the reports and database shared in PDF format, the applications of Access
have been enhanced significantly. Users love this output type.
LESSON - 9
DATA PROCESSING
9.1 INTRODUCTION
Data in its raw form is not useful to any organization. Data processing is the
method of collecting raw data and translating it into usable information. It is
usually performed in a step-by-step process by a team of data scientists and data
engineers in an organization. The raw data is collected, filtered, sorted, processed,
analyzed, stored, and then presented in a readable format. There are many types
and methods of data processing. Let us discuss these topics in this chapter.
9.2 OBJECTIVES
• To understand the concept of data processing
• To get an insight in the application of data processing
• To analyse the data processing cycle
• To recognize the types of data processing.
9.3 CONTENTS
9.3.1 Data Processing
9.3.2 Data Processing Cycle
9.3.3 Types of Data Processing
9.3.4 Data Processing Methods
9.3.1 DATA PROCESSING
Data processing means manipulation of data by a computer. It includes the
conversion of raw data to machine-readable form, flow of data through
the CPU and memory to output devices, and formatting or transformation of
output. Any use of computers to perform defined operations on data can be
included under data processing. In the commercial world, data processing refers to
the processing of data required to run organizations and businesses.
Data in its raw form is not useful to any organization. Data processing is the
method of collecting raw data and translating it into usable information. It is
usually performed in a step-by-step process by a team of data scientists and data
engineers in an organization. The raw data is collected, filtered, sorted, processed,
analyzed, stored, and then presented in a readable format.
Data processing is crucial for organizations to create better business strategies
and increase their competitive edge. By converting the data into a readable format
like graphs, charts, and documents, employees throughout the organization can
understand and use the data.
Processing of data is becoming a popular topic because of the various new
laws and uses associated with the data. Big companies and MNCs are collecting
data by various means; this comprises personal information, customer data,
health information, contact information, location data, etc. Due to the collection of this
data, there is an increasing concern over how it is collected and how it will be used.
subsequent findings are valid and usable. Raw data can include monetary figures,
website cookies, profit/loss statements of a company, user behavior, etc.
2. Data preparation
Once the data is collected, it enters the data preparation stage. Data
preparation, often referred to as “pre-processing,” is the stage at which raw
data is cleaned up and organized for the following stage of data processing. During
preparation, raw data is diligently checked for any errors. The purpose of this step
is to eliminate bad data (redundant, incomplete, or incorrect data) and begin to
create high-quality data for the best business intelligence.
Data preparation or data cleaning is the process of sorting and filtering the
raw data to remove unnecessary and inaccurate data. Raw data is checked for
errors, duplication, miscalculations or missing data, and transformed into a
suitable form for further analysis and processing. This is done to ensure that only
the highest quality data is fed into the processing unit.
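The cleaning steps described above can be sketched in Python. The records and field names below are invented for illustration; the function removes duplicate and incomplete records and converts text amounts to numbers:

```python
# A minimal data-preparation sketch: remove duplicates, drop records
# with missing fields, and normalise types. Field names are invented.
raw = [
    {"id": 1, "amount": "100"},
    {"id": 1, "amount": "100"},   # duplicate record
    {"id": 2, "amount": None},    # missing value
    {"id": 3, "amount": "250"},
]

def prepare(rows):
    """Filter out duplicate and incomplete records and
    convert amounts from text to numbers."""
    seen, clean = set(), []
    for row in rows:
        key = row["id"]
        if key in seen or row["amount"] is None:
            continue  # skip duplicates and incomplete records
        seen.add(key)
        clean.append({"id": key, "amount": int(row["amount"])})
    return clean

print(prepare(raw))  # [{'id': 1, 'amount': 100}, {'id': 3, 'amount': 250}]
```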
3. Data input
The clean data is then entered into its destination, and translated into a
language that it can understand. Data input is the first stage in which raw data
begins to take the form of usable information. The raw data is converted into
machine readable form and fed into the processing unit. This can be in the form of
data entry through a keyboard, scanner or any other input source.
4. Processing
During this stage, the data inputted to the computer in the previous stage is
actually processed for interpretation. The raw data is subjected to various
data processing methods using machine learning and artificial intelligence
algorithms to generate a desirable output. The process itself may vary slightly
depending on the source of the data being processed (data lakes, social
networks, online databases, connected devices, etc.) and the intended use of
the output (examining advertising patterns, medical diagnosis from connected
devices, determining customer needs, etc.).
5. Data output/interpretation
The output/interpretation stage is the stage at which data is finally usable to
non-data scientists. It is translated, readable, and often in the form of graphs,
videos, images, plain text, etc. Members of the company or institution can now
begin to self-serve the data for their own data analytics projects. This output can be
stored and further processed in the next data processing cycle.
6. Data storage
The final stage of data processing is storage. After all of the data is processed,
it is then stored for future use. While some information may be put to use
immediately, much of it will serve a purpose later on. Plus, properly stored data is a
necessity for compliance with data protection legislation like General Data
Protection Regulation. When data is properly stored, it can be quickly and easily
accessed by members of the organization when needed.
The data and metadata are stored for further use. This allows for quick access
and retrieval of information whenever needed, and also allows the output to be
used directly as input in the next data processing cycle.
9.3.3 TYPES OF DATA PROCESSING
There are different types of data processing based on the source of data and
the steps taken by the processing unit to generate an output. There is no
one-size-fits-all method that can be used for processing raw data.
There are a number of methods and techniques that can be adopted for processing
data, depending upon the requirements, time availability, and the software and
hardware capabilities of the technology being used. The main types of data
processing are discussed below.
Batch Processing
This is one of the most widely used types of data processing, also known as
serial/sequential, stacked/queued, or offline processing. The fundamental
principle of this type of processing is that the jobs of different users are
processed in the order received. Once the stacking of jobs is complete, they
are sent for processing while maintaining the same order. Processing a large
volume of data together helps reduce the processing cost, making data
processing economical. Batch processing is a method where the information to be
organized is sorted into groups to allow for efficient and sequential
processing.
Batch processing can be defined as the concurrent, simultaneous, or sequential
execution of an activity. Simultaneous batch processing occurs when the cases
are executed by the same resource at the same time. Sequential batch processing
occurs when the cases are executed by the same resource one immediately after
another.
Concurrent batch processing occurs when the cases are executed by the same
resources but partially overlapping in time. Batch processing is used mostly in
financial applications or in places where additional levels of security are
required. In this kind of processing, the computational time is relatively low,
because the output is extracted by applying a function to the whole data set at
once. It is able to complete work with very little human intervention.
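Sequential batch processing can be sketched as follows; the job names and batch size are invented for illustration. Jobs are queued in the order received and processed one batch at a time:

```python
from collections import deque

def process_in_batches(jobs, batch_size):
    """Drain the job queue in fixed-size batches, preserving the
    order in which the jobs were received."""
    queue, results = deque(jobs), []
    while queue:
        # Take up to `batch_size` jobs off the front of the queue.
        batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
        # Apply one function to the whole batch at once.
        results.append([job.upper() for job in batch])
    return results

print(process_in_batches(["pay-a", "pay-b", "pay-c", "pay-d", "pay-e"], 2))
# [['PAY-A', 'PAY-B'], ['PAY-C', 'PAY-D'], ['PAY-E']]
```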
Real-time Processing
As the name suggests, this method is used for carrying out processing in real
time. It is required where the results must be displayed immediately or in the
lowest time possible. The data fed to the software is used almost
instantaneously for processing purposes. This type of data processing requires
an internet connection, and the data is stored/used online. No lag is expected
or acceptable in this type, and the receiving and processing of a transaction
are carried out simultaneously. This method is costlier than batch processing,
as the hardware and software capabilities are better. Examples include banking
systems,
ticket booking for flights, trains, and movies, rental agencies, etc. This
technique can respond almost immediately to various signals to acquire and
process information. It involves high maintenance and upfront costs,
attributable to very advanced technology and computing power. The time saved is
greatest in this case, as the output is seen in real time, for example in
banking transactions.
Real-time processing helps reduce the time lag between occurrence and
processing to almost nil. Huge chunks of data are poured into organizations'
systems, so storing and processing them in a real-time environment changes the
scenario.
Most organizations want to have real-time insights into the data so as to
understand the environment within or outside their organization fully. This is
where the need for a system arises that would be able to handle real-time data
processing and analytics. This type of processing provides results as and when it
happens. The most common method is to take the data directly from its source,
which may also be referred to as stream, and draw conclusions without actually
transferring or downloading it. Another major technique in real-time processing is
Data virtualization techniques where meaningful information is pulled for the needs
of data processing while the data remains in its source form.
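The stream idea can be sketched in Python: each value is consumed as it arrives and a conclusion (here, a running average) is produced immediately, without first storing the whole data set. The feed values are invented for illustration:

```python
def running_average(stream):
    """Yield the up-to-date average after every incoming value,
    the way a real-time system reacts to each transaction."""
    total, count = 0.0, 0
    for value in stream:
        total += value
        count += 1
        yield total / count  # result is available immediately

# Simulated feed of transaction amounts arriving one by one.
feed = [100, 200, 300]
print(list(running_average(feed)))  # [100.0, 150.0, 200.0]
```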
Online Processing
This processing method is a part of the automatic processing method. It is at
times known as direct or random-access processing. Under this method, a job
received by the system is processed at the time it is received. It can be
considered, and is often mixed up with, real-time processing. This system
features random and rapid input of transactions and user-defined/demanded
direct access to databases/content when needed. It is a method that utilizes
internet connections and equipment directly attached to a computer. This allows
the data to be stored in one place and used at an altogether different place.
Cloud computing can be considered an example that uses this type of processing.
It is used mainly for information recording and research.
In the parlance of today's database systems, “online” signifies “interactive,
within the bounds of patience.” Online processing is the opposite of batch
processing. Online processing can be built out of a number of relatively simple
operators, much as traditional query-processing engines are built. Online
analytical operations typically involve major fractions of large databases. It
may therefore be surprising that today's online analytical systems provide
interactive performance. The secret to their success is precomputation.
In most Online Analytical Processing systems, the answer to each point and
click is computed long before the user even starts the application. In fact, many
Online processing systems do that computation relatively inefficiently, but since the
processing is done in advance, the end-user does not see the performance problem.
This type of processing is used when data is to be processed continuously, and it is
fed into the system automatically.
Distributed Processing
This method is commonly utilized by remote workstations connected to one big
central workstation or server. ATMs are good examples of this data processing
method. All the end machines run fixed software located at a particular place
and make use of exactly the same information and sets of instructions.
Multiprocessing
This is perhaps the most widely used type of data processing. It is used almost
everywhere and forms the basis of all computing devices relying on processors.
Multiprocessing makes use of more than one CPU. The task or sets of operations
are divided between the available CPUs simultaneously, thus increasing
efficiency and throughput. The jobs to be performed are broken down and sent to
different CPUs working in parallel within the mainframe. The result and benefit
of this type of processing is a reduction in the time required and an increase
in output. Moreover, since the CPUs work independently of one another, the
failure of one CPU does not halt the complete process; the other CPUs continue
to work. Examples include the processing of data and instructions in computers,
laptops, mobile phones, etc.
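The division of work between CPUs can be sketched with Python's standard multiprocessing module; the squaring task below is an invented example. Each worker processes part of the input in parallel, and the results are combined in order:

```python
# A minimal sketch of dividing work between CPUs with Python's
# multiprocessing module: each worker squares part of the input.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=2) as pool:  # two worker processes in parallel
        results = pool.map(square, range(1, 6))
    print(results)  # [1, 4, 9, 16, 25]
```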
Time sharing
Time-based use of the CPU is the core of this data processing type. A single
CPU is used by multiple users. All users share the same CPU, but the time
allocated to each user might differ. The processing takes place at different
intervals for different users as per the allocated time. Since multiple users
can use the system, it is also referred to as a multi-access system. This is
done by providing each user a terminal linked to the main CPU, and the
available CPU time is divided between all the users as scheduled.
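The round-robin allocation of CPU time slices can be sketched as follows; the user names and time requirements are invented for illustration:

```python
from collections import deque

def time_share(jobs, slice_units=2):
    """`jobs` maps user -> units of CPU time needed. Returns the order
    in which users receive the CPU, one fixed time slice at a time."""
    queue = deque(jobs.items())
    schedule = []
    while queue:
        user, remaining = queue.popleft()
        schedule.append(user)               # user gets one time slice
        remaining -= slice_units
        if remaining > 0:
            queue.append((user, remaining))  # back of the queue
    return schedule

print(time_share({"alice": 4, "bob": 2, "carol": 6}))
# ['alice', 'bob', 'carol', 'alice', 'carol', 'carol']
```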
Commercial Data Processing
Commercial data processing means applying standard relational databases, and it
includes the use of batch processing. It involves providing huge amounts of
data as input and creating a large volume of output while using fewer
computational operations. It basically combines commerce and computers to make
data useful for a business. The data processed through this system is usually
standardized and therefore has a much lower chance of errors.
Much manual work is automated through the use of computers to make it easy and
error-proof. Computers are used in business to take raw data and process it
into a form of information that is useful to the business. Accounting programs
are prototypical examples of data processing applications. Information Systems
(IS) is the field that studies such organizational computer systems.
Scientific Data Processing
Unlike commercial data processing, scientific data processing involves a large
use of computational operations but lower volumes of inputs as well as outputs.
The computational operations include arithmetical and comparison operations. In
this type of processing, any chances of errors are not acceptable as it would lead to
wrongful decision making. Hence the process of validating, sorting, and
standardizing the data is done very carefully, and a wide variety of scientific
methods are used to ensure no wrong relationships and conclusions are reached.
This takes longer than commercial data processing. Common examples of
scientific data processing include processing, managing, and distributing
science data products and facilitating the scientific analysis of algorithms,
calibration data, and data products, as well as maintaining all software and
calibration data under strict configuration control.
Type: Uses
Batch Processing: Data is collected and processed in batches. Used for large
amounts of data. Eg: payroll system
Real-time Processing: Data is processed within seconds when the input is given.
Used for small amounts of data. Eg: withdrawing money from an ATM
Online Processing: Data is automatically fed into the CPU as soon as it becomes
available. Used for continuous processing of data. Eg: barcode scanning
Multiprocessing: Data is broken down into frames and processed using two or
more CPUs within a single computer system. Also known as parallel processing.
Eg: weather forecasting
Time-sharing: Allocates computer resources and data in time slots to several
users simultaneously.
9.3.4 DATA PROCESSING METHODS
There are three main data processing methods - manual, mechanical and
electronic.
Manual Data Processing
Data is processed manually without using any machine or tool to get the
required results. In manual data processing, all the calculations and logical
operations are performed manually on the data. Similarly, data is transferred
manually from one place to another. This method of data processing is very
slow, and errors may also occur in the output. Data is processed manually in
many small business firms as well as in government offices and institutions. In an
educational institute, for example, marks sheets, fee receipts, and other financial
calculations (or transactions) are performed by hand. The entire process of data
collection, filtering, sorting, calculation and other logical operations are all done
with human intervention without the use of any other electronic device or
automation software. It is a low-cost method requiring few or no tools, but it
produces many errors, incurs high labour costs, and consumes a lot of time.
This method is avoided as far as possible because it has a very high
probability of error and is labour-intensive and very time-consuming. This type
of data processing belongs to the very primitive stage, when technology was
either not available or not affordable. With the advancement of technology, the
dependency on manual methods has drastically decreased. Manual processing is
also expensive and requires a large workforce, depending on the volume of data
to be processed. An example is the sale of commodities in a shop.
Mechanical Data Processing
Data is processed mechanically through the use of devices and machines.
These can include simple devices such as calculators, typewriters, printing press,
etc. Simple data processing operations can be achieved with this method. This
method of data processing is faster and more accurate than manual data
processing. These are faster than the manual mode but still form the early stages of
data processing. With invention and evolution of more complex machines with
better computing power this type of processing also started fading away.
Examination boards and printing press use mechanical data processing devices
frequently. Any device which facilitates data processing can be considered under
this category. The output from this method is still very limited. It has much lesser
errors than manual data processing, but the increase of data has made this method
more complex and difficult.
Electronic Data Processing
Data is processed with modern technologies using data processing software
and programs. A set of instructions is given to the software to process the data and
yield output. Electronic data processing is also known as EDP, a frequently used
term for automatic information processing. It uses computers to collect,
manipulate, record, classify, and summarize data. EDP can be described as the
processing of data using electronic means such as computers, calculators,
servers, and other similar electronic data processing equipment. A
computer is the best example of an EDP system. Use of a data processing
system ensures accurate and rapid data processing. This method is the most
expensive but provides the fastest processing speeds with the highest reliability and
accuracy of output.
Examples of Data Processing
Data processing occurs in our daily lives whether we may be aware of it or not.
Here are some real-life examples of data processing:
• A stock trading software that converts millions of stock data into a simple
graph
• An e-commerce company uses the search history of customers to
recommend similar products
• A digital marketing company uses demographic data of people to strategize
location-specific campaigns
• A self-driving car uses real-time data from sensors to detect if there are
pedestrians and other cars on the road
The data processing cycle consists of a series of steps where raw data (input)
is fed into a process (CPU) to produce actionable insights (output).
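This cycle can be sketched in a few lines of Python (the sample data and function names below are hypothetical, chosen only to illustrate the input, process and output stages):

```python
# Minimal sketch of the data processing cycle: raw input -> processing -> output.
def collect_input():
    # Input stage: raw data as it might arrive from forms or sensors.
    return ["  42 ", "17", " 8"]

def process(raw):
    # Processing stage: clean the raw strings and compute summary figures.
    values = [int(item.strip()) for item in raw]
    return {"count": len(values),
            "total": sum(values),
            "average": sum(values) / len(values)}

def output(result):
    # Output stage: present the actionable insight in readable form.
    return (f"{result['count']} records, total {result['total']}, "
            f"average {result['average']:.1f}")

print(output(process(collect_input())))
```

The three functions mirror the three boxes of the cycle diagram; in a real system each stage would of course be far more elaborate.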
In manual data processing, all the calculations and logical operations are
performed manually on the data. Similarly, data is transferred manually
from one place to another.
9.7 TERMINAL EXERCISE
1. __________________ often referred to as “pre-processing” is the stage at which
raw data is cleaned up
2. ___________________ can be defined as concurrent, simultaneous, or
sequential execution of an activity.
3. ____________________ is required where the results are displayed immediately
or in the lowest time possible.
4. Example for distributed processing is ______________
5. A set of instructions is given to the software to process the data and yield
output is called _________________
9.8 SUPPLEMENTARY MATERIALS
1. https://www.jigsawacademy.com/blogs/data-science/types-of-data-
processing/
2. https://www.talend.com/resources/what-is-data-processing/
9.9 ASSIGNMENTS
1. Explain the various types of data processing.
9.10 SUGGESTED READING/REFERENCE
1. https://www.simplilearn.com/what-is-data-processing-article
2. Ramesh Behi, Information Technology for Management, Tata McGraw Hill,
New Delhi, 2012.
9.11 LEARNING ACTIVITIES
1. Identify the type of data processing that you encounter in various
departments of banks, colleges, schools and government offices. Discuss
that type with your friends and teachers.
9.12 KEYWORDS
Batch processing
Real-time processing
Online processing
Distributed processing.
Time sharing
Multiprocessing
Image file.
LESSON - 10
HIERARCHY OF DATA
10.1 INTRODUCTION
Data is the most important input of an organization. It has to be processed to
obtain information. The importance of data processing is now realised, as it is
necessary to keep every bit of collected data in order. This makes the data easy to
use, store and work with. Moreover, digitized data is easy to access anywhere via
email, cloud services and other data storage apps and devices.
10.2 OBJECTIVES
• To understand the hierarchy of data
• To understand data processing systems and their types
• To realize the importance of data processing systems
10.3 CONTENTS
10.3.1 Hierarchy of Data
10.3.2 Data Processing Systems
10.3.3 Types of Data Processing Systems
10.3.4 Management of Data Processing System in Business Organization
10.3.5 Importance of Data Processing
10.3.1 HIERARCHY OF DATA
Data are the principal resources of an organization. Data stored in computer
systems form a hierarchy extending from a single bit to a database, the major
record-keeping entity of a firm. Each higher rung of this hierarchy is organized from
the components below it.
Data are logically organized into:
• Bits (characters)
• Fields
• Records
• Files
• Databases
Bit (Character) - a bit is the smallest unit of data representation (value of a bit
may be a 0 or 1). Eight bits make a byte which can represent a character or a
special symbol in a character code.
Field - a field consists of a grouping of characters. A data field represents an
attribute (a characteristic or quality) of some entity (object, person, place, or event).
Record - a record represents a collection of attributes that describe a real-
world entity. A record consists of fields, with each field describing an attribute of
the entity.
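The hierarchy can be sketched in Python (the employee data and names below are invented purely for illustration):

```python
# Illustrative sketch of the data hierarchy, bottom to top.
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    # A record: a collection of attributes describing one real-world entity.
    name: str          # each attribute is a field...
    department: str
    salary: int        # ...and each textual field is a grouping of characters

# A file: a collection of related records.
employee_file = [
    EmployeeRecord("Asha", "Sales", 42000),
    EmployeeRecord("Ravi", "Accounts", 39000),
]

# A database: an integrated collection of logically related files.
database = {"employees": employee_file}

# At the bottom of the hierarchy, one ASCII character occupies one byte (8 bits).
bits_in_name = len("Asha".encode("ascii")) * 8
```

Reading the sketch top-down retraces the list above: bits make characters, characters make fields, fields make records, records make files, and files make the database.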
By service type
Transaction processing systems
A transaction process system and transaction processing are often contrasted
with a batch process system and batch processing, where many requests are all
executed at one time. The former requires the interaction of a user, whereas batch
processing does not require user involvement. In batch processing the results of
each transaction are not immediately available. Additionally, there is a delay while
the many requests are being organized, stored and eventually executed. In
transaction processing there is no delay and the results of each transaction are
immediately available. During the delay time for batch processing, errors can occur.
Although errors can occur in transaction processing, they are infrequent and
tolerated, but do not warrant shutting down the entire system.
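A minimal sketch of this contrast (the functions below are hypothetical stand-ins, not a real transaction-processing monitor; "processing" a request is represented by doubling a number):

```python
# Batch mode: requests are queued and all executed together later.
batch_queue = []

def batch_submit(request):
    # The request is only stored; its result is NOT yet available.
    batch_queue.append(request)

def batch_run():
    # All queued requests are executed at one time, after a delay.
    results = [req * 2 for req in batch_queue]
    batch_queue.clear()
    return results

# Transaction mode: each request is executed immediately.
def transaction(request):
    return request * 2   # result available at once, no delay

batch_submit(1); batch_submit(2); batch_submit(3)
delayed = batch_run()        # results appear only once the whole batch runs
immediate = transaction(4)   # result returned with no delay
```

The key behavioural difference the text describes is visible here: `batch_submit` returns nothing useful to the caller, while `transaction` hands back its result immediately.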
To achieve performance, reliability and consistency, data must be readily
accessible in a data warehouse, backup procedures must be in place, and a
recovery process must be in place to deal with system failure, human error,
computer viruses, software failures or natural disasters.
Information storage and retrieval systems
An information storage and retrieval system (ISRS) is a network with a built-in
user interface that facilitates the creation, searching, and modification of stored
data. An ISRS is typically a peer-to-peer (P2P) network operated and maintained
by private individuals or independent organizations, but accessible to the general
public. Some, but not all, ISRSs can be accessed from the Internet. (The largest
ISRS in the world is the Internet itself.)
Characteristics of an ISRS include lack of centralization, graceful degradation
in the event of hardware failure, and the ability to rapidly adapt to changing
demands and resources. The lack of centralization helps to ensure that
catastrophic data loss does not occur because of hardware or program failure, or
because of the activities of malicious hackers. Graceful degradation is provided by
redundancy of data and programming among multiple computers. The physical and
electronic diversity of an ISRS, along with the existence of multiple operating
platforms, enhances robustness, flexibility, and adaptability. (These characteristics
can also result in a certain amount of chaos.) In addition to these features, some
ISRSs offer anonymity, at least in theory, to contributors and users of the
information.
A significant difference between an ISRS and a database management system
(DBMS) is the fact that an ISRS is intended for general public use, while a DBMS
is likely to be proprietary, with access privileges restricted to authorized entities. In
addition, an ISRS, having no centralized management, is less well-organized than a
DBMS.
Command and control systems
A command, control, and communication (C3) system is an information
system employed within a military organization. It is a general phrase that
retransmitted when the required route is available. This is called store and forward
network.
10.3.4 MANAGEMENT OF DATA PROCESSING SYSTEM IN BUSINESS ORGANIZATION
Data processing systems comprise the interaction of people, processes, and
equipment to generate usable information from raw data. Thus, data processing
system management involves the administration of the people and equipment
aspects of the system including all the processes outlined as follows:
• Data conversion—changing the data into the required format that can be
processed.
• Data cleanup—removing irregularities in the data before processing.
• Organizing—categorizing the data into sets/groups.
• Analysis—discovering and generating valuable information from the data.
• Reporting—presenting the information.
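The five processes listed above can be chained into a small, hypothetical Python pipeline (the survey rows and function names are invented for illustration):

```python
# Hypothetical raw survey rows: inconsistent spacing, casing, and a blank entry.
raw = ["  ravi,SALES ", "asha,sales", "", "meena,ACCOUNTS"]

def convert(rows):
    # Data conversion: change the data into a processable format.
    return [r.strip().split(",") for r in rows if r.strip()]

def clean(rows):
    # Data cleanup: remove irregularities (stray spaces, mixed case).
    return [[name.strip().title(), dept.strip().title()] for name, dept in rows]

def organise(rows):
    # Organizing: categorize the data into sets/groups.
    groups = {}
    for name, dept in rows:
        groups.setdefault(dept, []).append(name)
    return groups

def analyse(groups):
    # Analysis: generate valuable information (headcount per department).
    return {dept: len(names) for dept, names in groups.items()}

def report(summary):
    # Reporting: present the information.
    return "; ".join(f"{dept}: {count}" for dept, count in sorted(summary.items()))

print(report(analyse(organise(clean(convert(raw))))))
```

Each function corresponds to one bullet above, so the management concern of the section (allocating people and equipment to each stage) maps onto a concrete stage boundary.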
Management of the different aspects of the data processing system involves
planning where resources necessary in the system are allocated to the different
functions. Management also includes the process of organizing all the functions to
ensure seamless operation of the system. Control is an oversight role of
management and ensures that the data processing system works as expected and
delivers the output required. Control also ensures that any issues affecting the
system are realized and addressed.
A data processing system is a combination of processes that, for any given
amount of inputted data, produces a corresponding amount of outputted data. This
processing can be manual, automatic, or electronic, though most modern data
processing is done through the use of computers. As such, the management of data
usually entails highly specific and specialized knowledge concerning the programs
that collect and organize data sets. This is especially true in the age of "Big Data," a
term that refers to the modern introduction of data sets that are so large and
complex that traditional methods of storing and analyzing data are rendered
obsolete. In these cases, the data is usually organized into individualized
components that are more easily decipherable by data analysts, specialists who are
employed by businesses for the purpose of data management.
Importance of data processing in business
The importance of data processing includes increased productivity and profits,
better decisions, and more accurate and reliable results. Further advantages are
cost reduction, ease of storage, distribution and report making, followed by better
analysis and presentation. The need to process data is now widely realized
and reflected in every field of work. Whether the work is done in a business setting
or for educational research purposes, data management systems are used by every
business. Data processing is a multidimensional process which is involved in
almost every field of human life.
Generally speaking, the term “data processing” is used where you have to
collect innumerable data files from different sources and arrange them in a
way that is practically beneficial for the purpose for which you gathered the
material. It is a task of synchronizing data collected from different sources and
converting it to an organized form. This makes it easy to understand and to
retrieve specific information at any time.
There are various data processing methods, including manual data
processing, mechanical data processing and electronic data processing. Data
processing is one of the most important daily tasks, especially when dealing with
big data and performing data mining. All the fields where huge volumes of data
accumulate, such as education, banking and transportation, now realise the
importance of data processing. With the emergence of fields like data science, data
analysis and big data, the need to process data, and to understand the importance
of processing it, is crucial.
Nowadays more and more data is collected for academic and scientific research,
private and personal use, institutional use and commercial use. This collected
information needs to be stored, sorted, filtered, analysed and presented, and may
even require data transfer, for it to be of any use. The process can be simple or
complex depending on the scale at which the collection is done and the complexity
of the results required. The time consumed in obtaining the desired result depends
on the operations which need to be performed on the collected data and on the
nature of the output file required. This problem becomes starker when dealing
with very large volumes of data.
The need for processing becomes more and more critical in such cases, and
data mining and data management come into play, without which optimal results
cannot be obtained. Each stage, from collection to presentation, has a direct effect
on the output and usefulness of the processed data. Sharing a dataset with a third
party must be done carefully and as per a written agreement and service
agreement. This prevents data theft, misuse and loss of data.
Processed data is easy to arrange by type and information, and it saves a
lot of space. This also helps in making sure that all staff and workers can learn and
understand it easily and implement it in their work, which can otherwise take
up more time and end up providing a decreased output. This can harm the
interests of the business or organization.
Most businesses and fields require data for providing a good quality of service.
Collecting data and understanding its implications are very important aspects of
managing and ensuring statistical authenticity. It is particularly essential for
services concerned with financial technologies, because transactions and payment
details need to be properly stored for easy access by customers as well as company
officials when needed. Processing is not limited to computers and can be done
manually as well.
The invention of computer technology was one of the most important events of
all time. With the improvement of computer technology and its ease of use, it has
become a popular technology in the hands of many. Data processing has also
become popular, with computer systems making it easier to handle; for
specialised requirements, companies look for a data scientist. Data processing is a
field that has numerous applications in most fields like business, education,
healthcare, research and more. Its importance is increasing with advances in
areas like data science, machine learning, artificial intelligence, data quality and
data security. When discussing data processing, it is good to know about data
collection, business intelligence, data analysis, data sources and quality
information.
All the departments which treat data processing as their most important
daily task have much more to do in different areas of their field of work. There are
some priorities which they need to address at any cost, and then there are some
pending works to complete as well. Studies show that it is a general practice to
waste a lot of time doing manual data processing. The modern world calls for
automated data processing and the increased use of structured data. This is done
by using different reliable applications, so that the remaining departments can
have the manpower required to complete daily tasks.
10.3.5 IMPORTANCE OF DATA PROCESSING
Data is being collected by almost everyone, either knowingly or unknowingly.
Collection of data is the first step, but processing of data is another vital activity.
Companies, institutes and various groups all over the world are engaged in the
work of data processing. While talking about the importance of data processing, it
is equally important to be aware of the related aspects, starting from the methods
of data collection, data processing, the data processing cycle, the information
processing cycle, methods of data processing, types of data processing, and data
presentation and analysis, through to data management best practices. Some of
the many purposes that call for efficient data processing, and the important ones,
are mentioned below:
Yields better results and increases productivity
A company or organisation which possesses data, or has access to the required
data, is undoubtedly at an advantage. Data is not just numbers and tables but an
advantage which an entity can have over its competitors. Data can be processed in
different forms to obtain the required information; without data it would be
impossible to take a good decision. A decision taken after the data is analysed
gives confidence, as the statistics and required details are available to the group.
Having access to structured data which can be used to obtain meaningful
information offers any company a competitive advantage. Real-time processing of
data is the foundation and backbone of many companies, such as those in the
banking and record-keeping sectors.
Having analysed the data as per requirements, you will gain insight into the
areas in which improvements need to be made. Data visualisation and data
mapping are particularly important, as key areas can then be prioritized and
addressed accordingly. To boost productivity in terms of sales, better
understanding of a topic, or further processing of data, the required area can be
selected. Similarly, you can identify the areas which require minimal intervention,
so that extra workforce (if any) can be assigned to other work. Potential areas with
maximum benefits can be identified, and investment can be made in those key
areas to improve efficiency and profit.
Report making is simplified:
In almost all activities, data is used heavily for the purpose of collecting
certain values and making reports. Preparing a report used to take a long time
when manual data processing was used, but the whole process of report making
has now been simplified and speeded up. Once you have processed and placed all
data according to a certain framework, you can gather the needed data with a few
clicks. In many cases preparing a report has become a matter of minutes. Your
report looks more organised and authentic, and contains relevant information
obtained after logical processing of data. Data presentation and analysis is also
simplified and becomes more effective.
Speed, accuracy and reliability
It is important to make sure that the collection of facts and figures is done
quite speedily and without errors. When data is collected and filtered through
computers, there is no or a negligible chance of error, and it is almost guaranteed
that the further processes will be done with the maximum possible accuracy. If the
input data is accurate, then the output is accurate. Processing can be done at
greater speed and with higher accuracy when the right combination of software is
used. Another advantage of data processing is the edge it provides when working
in a competitive environment, where it is not uncommon for competitors to have
access to the same data. Data and information of better quality are more reliable.
Predictive modeling, data cleaning, data validation and batch processing are
necessary for accurate data.
Storage and distribution is easy when data is processed:
When we have piles of data, we need a huge place to store it, and there is a
high chance of missing information and confusion. When the data is processed
through computers, you do not need an extra room to store all those hard files and
papers. All of your data is processed and labelled through a completely
computerized setup, so you do not get confused at any stage. It is easier to retrieve
information from processed data than from unprocessed data. Having your data
stored in digital form rather than in hard copies is another aspect which highlights
the importance of data processing.
Cost Reduction
Data, once collected, acts as an asset for any group, and having it stored
provides easy access when required. This eliminates the need to collect data again
and again. Moreover, it is very easy and convenient to make copies of the stored
data when it is stored in digital form. Sending or transferring the data is also much
easier, which eases the use of this data for research purposes. This directly helps
in cost reduction. The cost or loss which a company might incur because of lack of
information is also drastically reduced, as processed data enables it to take wise
and informed decisions, again saving huge costs.
Digitization has made processes quite cost-effective. That is why students are
using computers and laptops to prepare their assignments; some even use online
essay writing services and keep all their records in digital form instead of printed
ones. Teachers are also keeping records on computers instead of carrying piles of
papers with them to check every day.
Safe and secure
Having processed data in digital form fulfils another very essential requirement:
keeping information secure. Since the value of data has increased over time,
incidents of data theft are not unheard of. Once the data is processed, it is both
easier and essential to keep it secure. This can be done by using various paid and
free software packages, which prevent any unauthorized access to the data and
thus keep your data safe. You can also encrypt your data if the need is felt.
The importance of data processing is now realised, as it is necessary to keep
every bit of collected data in order. This makes it easy to use, store and work with.
Moreover, digitized data is easy to access anywhere via email, cloud services and
other data storage apps and devices.
Other benefits and merits of Data Processing are:
1. Data processing makes it easier to validate actions, changes and
transactions, and reduces dependence on computational power for
collecting them on demand from a basic form. Transaction processing
systems are highly dependent on real-time data processing.
2. Insurance claims can be easily handled and settled with properly processed
data, which saves time for the police authorities as well. Keeping and
managing health records and creating electronic health records is now
possible due to batch processing and powerful, reliable data warehouses.
3. Data processing can also include image processing, making it easier to
present any data to users in a readable format that they like.
4. Invoices can be easily generated for services which have been used and
make the customer experience better.
5. Data processing in the form of word processing can help in making
documents which are readable and likeable by readers and be made even
more engaging.
Another benefit of data processing is that governments can use the
processed data to save time during surveys, and to provide services and
amenities to places based on geographical and economic information. With safe
and reliable data, reports can be easily made, and troubleshooting processes can
be made easier and less time-consuming.
From sorting of data to aggregation of similar types of data, data processing
can help a lot in keeping everything organized and ensuring a smooth workflow
that satisfies both the users of the data and its managers. With current trends, it
is a very promising field and can be expected to engage more people and grow into
a strong industry.
10.4 REVISION POINTS
• Hierarchy of data
• Types of data processing system
• Data processing in Business
10.5 INTEXT QUESTIONS
1. Define: bit, byte, data, record, file, and database
2. Write a short note on data analysis.
3. Explain the information storage and retrieval system
4. Explain the importance of process control system
5. Enumerate the importance of data processing in business.
10.6 SUMMARY
Database is an integrated collection of logically related records or files
Scientific data processing usually involves a great deal of computation
Data processing system management involves the administration of the
people and equipment aspects of the system including all the processes.
Data processing yields better results & increases productivity
Storage and distribution is easy when data is processed
10.7 TERMINAL EXERCISE
1. _____________ consists of grouping of characters.
2. _______________ is a combination of machines, people, and processes that
for a set of inputs produces a defined set of outputs
3. _____________ ensures that supplied data is clean, correct and useful.
4. In ____________ system many requests are all executed at one time
5. _____________ is categorizing the data into sets/groups.
LESSON - 11
APPLICATION PORTFOLIO DEVELOPMENT
11.1 INTRODUCTION
Application portfolio refers to an organization’s collection of software
applications and software-based services, which it uses to attain its goals or
objectives. Managing these resources is often referred to as application portfolio
management (APM).
11.2 OBJECTIVES
• To understand the concept of application portfolio
• To become familiar with the programme development cycle
11.3 CONTENTS
11.3.1 Application Portfolio Development
11.3.2 Programme Development Cycle
11.3.1 APPLICATION PORTFOLIO DEVELOPMENT
Application portfolio development and tools are increasingly important as CIOs
and IT managers find new ways to keep their older infrastructure in line while
still building further software and applications. Although CIOs and IT leaders
need to keep on top of development, they might need help, or a solution that will
monitor continuously. It can be somewhat of an uphill battle to stay online and
keep operations in line.
In a perfect world, applications would update themselves and everything would
fall into line, or IT departments would have an unlimited budget and an unlimited
number of employees with all the best skills. Unfortunately, that isn’t really
possible. The reality is that budgets are what they are, and businesses can’t
continually hire new people while keeping the older ones as well.
Application portfolio development solutions help to improve how data and
applications are managed throughout development – without going over budget or
slowing down development.
When developing your application portfolio, there are many things that you
need to keep track of, and with so many fires burning, it can be difficult to do so.
With application portfolio development, specialized software can be used to
assess the health and effectiveness of your applications. Development tools will
uncover which apps are working toward your business goals, which ones need to
be repaired in order to work, and which ones are just draining your budget and
need to be retired. They serve to look deep into the apps and see what they are
capable of doing, and what they cannot do. When it comes to application
modernization, the development solutions can analyze which apps will be
problematic and how to effectively bring them up to date: either address individual
apps and change them, or develop a new solution.
Problem Definition
• The first step in the process of program development is the thorough
understanding and identification of the problem for which the program or
software is to be developed.
• In this step the problem has to be defined formally.
• All the factors like input/output, processing requirements, memory
requirements, error handling and interfacing with other programs have to be
taken into consideration at this stage.
Program Design
• The next stage is the program design. The software developer makes use of
tools like algorithms and flowcharts to develop the design of the program.
o Algorithm
o Flowchart
Coding
• Once the design process is complete, the actual computer program is
written, i.e. the instructions are written in a computer language.
• Coding is generally a very small part of the entire program development
process, and in reality a less time-consuming activity.
• In this process all the syntax errors, i.e. errors related to spelling, missing
commas, undefined labels etc., are eliminated.
• For effective coding, some of the guidelines applied are:
o Use of meaningful names and labels of variables,
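The guideline of meaningful variable names can be illustrated with a small, hypothetical example (the salary formula is invented; both functions behave identically):

```python
# The same computation coded twice, showing why meaningful names matter.
def f(a, b, c):
    # Cryptic names: the reader cannot tell what is being computed.
    return a + (b * c)

def gross_salary(basic_pay, overtime_hours, overtime_rate):
    # Meaningful names make the intent of the program self-documenting.
    return basic_pay + (overtime_hours * overtime_rate)

# Identical behaviour, very different readability and maintainability.
assert f(30000, 10, 200) == gross_salary(30000, 10, 200) == 32000
```

A maintainer reading `gross_salary` can verify the formula at a glance; with `f` they would have to trace every call site to recover the meaning.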
LESSON - 12
FLOW CHART AND EVALUATION
12.1 INTRODUCTION
A flowchart is a type of diagram that represents a workflow or process. A
flowchart can also be defined as a diagrammatic representation of an algorithm, a
step-by-step approach to solving a task.
The flowchart shows the steps as boxes of various kinds, and their order by
connecting the boxes with arrows. This diagrammatic representation illustrates a
solution model to a given problem. Flowcharts are used in analyzing, designing,
documenting or managing a process or program in various fields.
12.2 OBJECTIVES
• To understand the purpose of flow chart
• To recognize the symbols used in flowchart
• To appreciate the factors to evaluate software
12.3 CONTENTS
12.3.1 Flowchart
12.3.2 Symbols for Creating a Flowchart
12.3.3 Intermediate & Advanced Flowchart Symbols
12.3.4 Input & Output Symbols
12.3.6 Merging & Connecting Symbols
12.3.7 Evaluation of software Package
12.3.1 FLOWCHART
A flowchart is a diagram that depicts a process, system or computer algorithm.
They are widely used in multiple fields to document, study, plan, improve and
communicate often complex processes in clear, easy-to-understand diagrams.
Flowcharts, sometimes spelled as flow charts, use rectangles, ovals, diamonds and
potentially numerous other shapes to define the type of step, along with connecting
arrows to define flow and sequence. They can range from simple, hand-drawn
charts to comprehensive computer-drawn diagrams depicting multiple steps and
routes. If we consider all the various forms of flowcharts, they are one of the most
common diagrams on the planet, used by both technical and non-technical people
in numerous fields. Flowcharts are sometimes called by more specialized names
such as Process Flowchart, Process Map, Functional Flowchart, Business Process
Mapping, Business Process Modeling and Notation (BPMN), or Process Flow
Diagram (PFD).
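To connect flowcharts to programs: a hypothetical flowchart for finding the larger of two numbers (described in the comments below, symbol by symbol) translates directly into code:

```python
# Each flowchart symbol becomes a statement or structure in the program.
def larger(a, b):
    # Oval (terminator): Start
    # Parallelogram (input): read a, b -- here, the function parameters
    if a > b:                 # Diamond (decision): is a > b?
        result = a            # Rectangle (process): result <- a
    else:
        result = b            # Rectangle (process): result <- b
    return result             # Parallelogram (output): display result
    # Oval (terminator): End

print(larger(12, 7))   # 12
```

The arrows of the flowchart correspond to the order of execution, and the two branches leaving the decision diamond correspond to the `if`/`else` paths.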
The oval, or terminator, is used to represent the start and end of a process.
2. The Rectangle
The rectangle represents a step in the flowcharting process.
Single and multiple document icons show that there are additional points of
reference involved in your flowchart. These might be used to indicate items like
“create an invoice” or “review testing paperwork.”
Data Symbols
Data symbols clarify where the data your flowchart references is being stored.
12.3.4 INPUT & OUTPUT SYMBOLS
Input and output symbols show where and how data is coming in and out
throughout your process.
Merging and connector symbols make it easier to connect flowcharts that span
multiple pages.
Additional Useful Flowchart Symbols
Below are a few additional symbols that demonstrate flowcharting competence
when put to good use.
In performing the evaluation, you may want to consider how different user
classes affect the importance of the criteria. For example, for Usability-
Understandability, a small set of well-defined, accurate, task-oriented user
documentation may be comprehensive for Users but inadequate for Developers.
Assessments specific to user classes allow the requirements of these specific user
classes to be factored in and so, for example, show that a project rates highly for
Users but poorly for Developers, or vice versa.
Scoring can also be affected by the nature of the software itself, e.g. for
Learnability one could envisage an application that has been well designed, offers
context-sensitive help etc. and consequently is so easy to use that tutorials aren’t
needed. Portability can apply to both the software and its development
infrastructure, e.g. the open source software OGSA-DAI can be built, compiled and
tested on Unix, Windows or Linux (and so is highly portable for Users and User-
Developers). However, its Ruby test framework cannot yet run on Windows, so
running integration tests would involve the manual setup of OGSA-DAI servers (so
this is far less portable for Developers and, especially, Members).
The assessment criteria are grouped as follows.
For each sub-criterion, the note asks: to what extent is/does the software…

Usability
Understandability: Easily understood?
Documentation: Comprehensive, appropriate, well-structured user documentation?
Buildability: Straightforward to build on a supported system?
Installability: Straightforward to install on a supported system?
Learnability: Easy to learn how to use its functions?

Sustainability and maintainability
Identity: Project/software identity is clear and unique?
Copyright: Easy to see who owns the project/software?
Licencing: Adoption of appropriate licence?
Governance: Easy to understand how the project is run and the development of the software managed?
Community: Evidence of current/future community?
Accessibility: Evidence of current/future ability to download?
Testability: Easy to test correctness of source code?
Portability: Usable on multiple platforms?
Supportability: Evidence of current/future developer support?
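As a hypothetical sketch (the ratings and weights below are invented, not from the guide), per-user-class assessments against criteria like these can be combined with a simple weighted sum, making the Users-versus-Developers contrast described above explicit:

```python
# 0-5 ratings per sub-criterion, assessed separately for each user class.
scores = {
    "Users":      {"Understandability": 5, "Documentation": 4, "Portability": 5},
    "Developers": {"Understandability": 3, "Documentation": 2, "Portability": 1},
}

# Invented weights reflecting the relative importance of each sub-criterion.
weights = {"Understandability": 0.4, "Documentation": 0.3, "Portability": 0.3}

def weighted_score(ratings):
    # Combine the ratings for one user class into a single figure out of 5.
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

for user_class, ratings in scores.items():
    print(user_class, round(weighted_score(ratings), 2))
```

With these numbers the same software rates highly for Users and poorly for Developers, which is exactly the kind of split the evaluation guidance warns about.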
LESSON - 13
NETWORKING - INTRODUCTION
13.1 INTRODUCTION
Networking refers to connecting computers electronically for the purpose
of sharing information. Resources such as files, applications, printers and software
are commonly shared over a network. The advantages of networking can be seen
clearly in terms of security, efficiency, manageability and cost effectiveness, as it
allows collaboration between users over a wide area. Basically, a network
consists of hardware components such as computers, hubs, switches, routers and
other devices which form the network infrastructure. These are the devices that
play an important role in transferring data from one place to another using
different technologies such as radio waves and wires. Let us discuss some
concepts of networking in this chapter.
13.2 OBJECTIVES
• To understand the basics of networking
• To learn about the components of networking
• To gain knowledge on benefits of networking
13.3 CONTENTS
13.3.1 Networking and its Components
13.3.2 Benefits of Networks
13.3.1 NETWORKING AND ITS COMPONENTS
A computer network consists of two or more computing devices that are
connected in order to share the components of a network, its resources and the
information you store there. The most basic computer network (which consists of
just two connected computers) can expand and become more usable when
additional computers join and add their resources to those being shared.
The first computer, yours, is commonly referred to as your local computer. It is
more likely to be used as a location where you do work, a workstation, than as a
storage or controlling location, a server. As more and more computers are
connected to a network and share their resources, the network becomes a more
powerful tool, because employees using a network with more information and more
capability are able to accomplish more through those added computers or
additional resources.
The real power of networking computers becomes apparent if you envision
your own network growing and then connecting it with other distinct networks,
enabling communication and resource sharing across both networks. That is, one
network can be connected to another network and become a more powerful tool
because of the greater resources. For example, you could connect the network you
and your classmates develop for this course to similarly constructed networks from
other introductory networking classes if you wanted them to share your information
and networked resources. Those classes could be within your own school, or they
could be anywhere in the world. Wherever that newly joined network is, the
communication and resource sharing activities in that new network could then be
shared with anyone connected to your network. All you have to do is join that new
network’s community or allow its members to join yours.
In addition, a company’s cost of doing business can be reduced as a result of
sharing data and resources. Instead of having individual copies of the data at
several locations around the company, and needing to keep all of them similarly
updated, a company using a network can have just one shared copy of that data
and share it, needing to keep only that one set of data updated. Furthermore,
sharing networked resources (like printers) means that more people can use a
particular resource and a wider variety of resources (like different printers) can be
used by each network user. Any time a company can do more with less, or buy fewer items to do the same job, its total costs are reduced and its profits increase.
Components of network
Networks are made up of various devices—computers, switches, routers—
connected together by cables or wireless signals. Understanding the basics of how
networks are put together is an important step in building a network in a
community or neighborhood.
The basic components are as follows:
• Clients and servers—how services such as e-mail and web pages connect
using networks.
• IP addresses—how devices on a network can be found.
• Network hubs, switches and cables—the hardware building blocks of any
network.
• Routers and firewalls—how to organize and control the flow of traffic on a
network.
Clients and Servers
An important relationship on networks is that of the server and the client. A
server is a computer that holds content and services such as a website, a media
file, or a chat application. A good example of a server is the computer that holds the
website for Google’s search page: http://www.google.com. The server holds that
page, and sends it out when requested.
A client is a different computer, such as your laptop or cell phone, that
requests to view, download, or use the content. The client can connect over a
network to exchange information. For instance, when you request Google’s search
page with your web browser, your computer is the client.
In the example below, two computers are connected together with an Ethernet
cable. These computers are able to see each other and communicate over the cable.
The client computer asks for a website from the server computer. The website is
delivered from the server, and displayed on the client’s web browser.
Most requests and content delivery on networks are similar to, or are based
on, a client to server relationship. On a network, the server can be located almost
anywhere, and if the client has the address, it can access the content on the server.
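The request/response exchange described above can be sketched with Python's standard socket library. This is a toy illustration, not a real web server: the content, the port number (8765) and the function names are invented for the example, and both ends run on one machine.

```python
# A minimal client/server sketch: the "server" holds a small piece of
# content and the "client" connects over TCP to request and receive it.
import socket
import threading
import time

PAGE = b"<html><body>Hello from the server</body></html>"

def serve_once(port):
    # The server listens, accepts one client, and sends the page it holds.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            conn.recv(1024)      # read the client's request
            conn.sendall(PAGE)   # deliver the content

def request_page(port):
    # Retry briefly in case the server thread has not started listening yet.
    for _ in range(100):
        try:
            cli = socket.create_connection(("127.0.0.1", port), timeout=5)
            break
        except ConnectionRefusedError:
            time.sleep(0.05)
    with cli:
        cli.sendall(b"GET /\r\n")
        reply = b""
        while True:             # read until the server closes the connection
            chunk = cli.recv(4096)
            if not chunk:
                break
            reply += chunk
    return reply

server = threading.Thread(target=serve_once, args=(8765,))
server.start()
reply = request_page(8765)
server.join()
print(reply.decode())  # the client displays what the server delivered
```

The same pattern, with HTTP layered on top, is what happens when a browser requests Google's search page.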
IP Addresses
In order to send and direct data across a network, computers need to be able to identify destinations and origins. This identification is an IP (Internet Protocol) address. An IP address is a set of four numbers between 0 and 255, separated by dots. An example of an IP address is 173.194.43.7.
An IP address is similar to a street address. Parts of the address describe where in the world the building is located, another part narrows it down to a state or city, then the area within that state or city, then the location on the street.
Below we can see 192.168.1 Street. On it are three houses:
The complete addresses for each of these houses are: 192.168.1.20, 192.168.1.21, and 192.168.1.22.
There are different classifications, or types, of IP addresses. A network can be public, or it can be private. Public IP addresses are accessible anywhere on the Internet. Private IP addresses are not, and most are typically hidden behind a device with a public IP address.
Here we can see an example: a street with two buildings with public IP addresses, representing computers with addresses that are visible to the entire Internet. These buildings might be anywhere in the world, but their addresses are complete, so we know exactly where they are and can send messages to them.
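Python's standard ipaddress module can classify the addresses used in the street analogy; a short sketch:

```python
# Classifying the example addresses as private (on the street, behind the
# postal building) or public (visible to the entire Internet).
import ipaddress

house = ipaddress.ip_address("192.168.1.20")   # a house on 192.168.1 Street
public = ipaddress.ip_address("173.194.43.7")  # the earlier public example

print(house.is_private)    # True:  hidden behind a public-facing device
print(public.is_private)   # False: reachable from anywhere on the Internet

# All three houses share the same street (network):
street = ipaddress.ip_network("192.168.1.0/24")
print(ipaddress.ip_address("192.168.1.21") in street)  # True
```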
To see an example of how public and private IP addresses are commonly used,
let’s take another look at 192.168.1 Street. We have a new building on the street.
That building has a public IP address, and a private IP address. There is also a
fence that blocks the rest of the Internet from seeing and passing messages to
addresses on the street.
The postal building controls messages that travel between the Internet and the
street, keeping track of messages that leave the street, and directs return messages
to the right house. On the street, it has the address 192.168.1.1, and on the
Internet it has the address 74.10.10.50.
Network Hubs and Switches
Traditionally, computers are connected to each other using cables, creating a network. The cable used most often is Ethernet, which consists of four pairs of wires inside of a plastic jacket. It is physically similar to phone cables, but can transport much more data.
But cables and computers alone do not make a good network, so one early solution was to use a network hub. The Ethernet cables from the computers connect to the device in a way similar to the hub of a bike wheel, where all of the spokes come together in the center.
An example of how a hub works is shown below. Computer A wants to send a
message to computer B. It sends the message through the Ethernet cable to the
hub, then the hub repeats the message to all of the connected computers.
A network using a hub can slow down if many computers are sending messages, since they may try to send messages at the same time and confuse the hub. To help with this problem, networks began to use another device called a switch. Instead of repeating all messages that come in, a switch only sends the message to the intended destination. This eliminates the unnecessary repetition of the hub.
Using a switch, computer A sends a message to computer B; the other computers do not see the message. Those computers can send other messages at the same time without interfering.
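The contrast between a hub and a switch can be mimicked in a few lines of Python. The Hub and Switch classes below are invented purely for illustration; real devices forward Ethernet frames in hardware, not Python lists.

```python
# Toy simulation: a hub repeats every message to all connected computers,
# while a switch forwards only to the intended destination.
class Hub:
    def __init__(self):
        self.computers = {}          # name -> inbox (a list of messages)

    def plug_in(self, name, inbox):
        self.computers[name] = inbox

    def send(self, sender, dest, message):
        # A hub ignores the destination and repeats to everyone else.
        for name, inbox in self.computers.items():
            if name != sender:
                inbox.append((sender, message))

class Switch(Hub):
    def send(self, sender, dest, message):
        # A switch delivers only to the intended computer.
        self.computers[dest].append((sender, message))

hub, switch = Hub(), Switch()
hub_inboxes = {n: [] for n in "ABC"}
sw_inboxes = {n: [] for n in "ABC"}
for n in "ABC":
    hub.plug_in(n, hub_inboxes[n])
    switch.plug_in(n, sw_inboxes[n])

hub.send("A", "B", "hello")     # B and C both receive the message
switch.send("A", "B", "hello")  # only B receives the message
print(hub_inboxes["C"])   # [('A', 'hello')]  - unnecessary repetition
print(sw_inboxes["C"])    # []                - C never sees it
```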
In this case, the postal service building is routing messages between the rest of the Internet, using its public address, and the street with private addresses.
Definitions
DHCP (Dynamic Host Configuration Protocol): It assigns IP addresses to client devices, such as desktop computers, laptops, and phones, when they are plugged into Ethernet or connect to wireless networks.
Ethernet: A type of networking protocol; it defines the types of cables and connections that are used to wire computers, switches, and routers together. Most often Ethernet cabling is Category 5 or 6, made up of twisted pair wiring similar to phone cables.
Hub: A network device that repeats the traffic it receives to all connected devices.
Peripherals
Additional components that attach to a computer, called peripherals, like
printers, scanners, and speakers, are connected to a computer to expand its use.
When there are multiple users and computers, it soon becomes apparent that the
peripheral devices are seldom fully utilized. Money can be saved if some of these
peripherals are shared, instead of having to purchase a separate set for each
computer. Networking enables the sharing of peripherals.
The ability to share printers was very often enough of a cost savings for
companies to invest in implementing and supporting a simple network. The
company could then also realize additional cost savings as it shared additional
peripheral devices, such as faxes, modems, scanners, plotters, and virtually any
other device that connects to computers. Sharing peripherals often ends up
producing significant cost savings and more than justifies the expense of adding a
network.
Storage
Large amounts of storage capacity, usually in fast, very powerful computers, can be set up to act as storage locations for data. Access can be controlled by the person storing the data, and data can be stored centrally so that it is accessible to any user who needs it.
Applications
Cost and space savings are achieved when computer users can centrally store
their software applications. Applications, such as those used for preparing taxes,
creating text documents, or playing computer games, have grown in complexity and
size and often take up considerable local storage. Installing an application once on
a network and then sharing it cuts down on the storage space required when
multiple users need the same application.
Assisting Collaboration
Having digital information, and the ability to share it instantly with others over networks, allows multiple people to work on the same process collectively.
Early computer users found that once they created something and sent it
out for review, the comments returned often led to important adjustments that
would improve the original product. Such collaboration assisted the widespread use
of computers because it provided a tangible benefit that businesses could associate
with the increased costs of installing computers in the first place.
Many software makers took this early form of collaboration into consideration
and added that feature to the capabilities of their software. The newest versions of
the applications included in Microsoft’s Office suite (such as Word, Access, Excel,
and PowerPoint) allow multiple users to access and make changes to the same
document at the same time. That way, all users can work together on the original
document, and changes made by any collaborating member are immediately posted
within the document.
• Terms definition
• Benefits of networking
13.5 INTEXT QUESTIONS
1. Define a computer network.
2. Itemize the components of networking.
3. Write a short note on IP addresses.
4. What do you mean by a hub?
5. What are the resources shared by networking?
13.6 SUMMARY
• A computer network consists of two or more computing devices that are connected in order to share the components of a network, its resources and the information you store there.
• Networks are made up of various devices (computers, switches, routers) connected together by cables or wireless signals.
• An IP address is a set of four numbers between 0 and 255, separated by dots, and is used to send and direct data across a network.
• The initial reason for developing most computer networks was to assist users with sharing their increased output.
• The ability to share resources was another reason networks were created, and it is still one of the main purposes for using networks.
13.7 TERMINAL EXERCISE
1. ______________ is used to find a device in a network.
2. __________________ are used to organize and control the flow of traffic on a network.
3. DHCP means _____________________.
4. _______________ defines the types of cables and connections that are used to wire computers, switches, and routers together.
5. Additional components that attach to a computer are called _______________.
13.8 SUPPLEMENTARY MATERIALS
1. https://commotionwireless.net/docs/cck/networking/learn-networking-
basics/
2. https://www3.nd.edu/~cpoellab/teaching/cse40814_fall14/networks.pdf
13.9 ASSIGNMENTS
1. Write an essay about the benefits of networking in a bank.
LESSON - 14
CLASSIFICATION OF NETWORKS
14.1 INTRODUCTION
Networking refers to connecting computers electronically for the purpose of sharing information. Resources such as files, applications, printers and software are commonly shared over a network. The advantages of networking can be seen clearly in terms of security, efficiency, manageability and cost effectiveness, as it allows collaboration among a wide range of users. In this chapter we shall discuss the classification of networks and their features.
14.2 OBJECTIVES
• To study about the classification of networks
• To understand the merits and demerits of each network.
14.3 CONTENTS
14.3.1 Classification Based on their Geography
14.3.2 Wireless LAN
14.3.3 Bluetooth
14.3.4 Internet
14.3.5 Intranet
14.3.6 Extranet
14.3.1 CLASSIFICATION BASED ON THEIR GEOGRAPHY
Networks are frequently classified according to the geographical boundaries
the network spans. Two basic geographical designations for networks—local area
network (LAN) and wide area network (WAN)—are the most common. A third
designation, metropolitan area network (MAN), is also used, although its use has
become clouded (because it might not be a clear-cut classification anymore) as
networks continue connecting to the Internet.
LAN (Local Area Network)
• Local Area Network is a group of computers connected to each other in a
small area such as building, office.
• LAN is used for connecting two or more personal computers through a
communication medium such as twisted pair, coaxial cable, etc.
• This network usually has the highest speed components and fastest
communications equipment.
• It is less costly as it is built with inexpensive hardware such as hubs,
network adapters, and ethernet cables.
• The data is transferred at an extremely fast rate in a Local Area Network.
• Local Area Network provides higher security.
• Get updated files: Software companies work on the live server. Therefore,
the programmers get the updated files within seconds.
• Exchange messages: In a WAN, messages are transmitted fast. Web applications like Facebook, WhatsApp and Skype allow you to communicate with friends.
• Sharing of software and resources: In a WAN, we can share software and other resources such as hard drives and RAM.
• Global business: We can do business over the internet globally.
• High bandwidth: Leased lines give a company high bandwidth. High bandwidth increases the data transfer rate, which in turn increases the productivity of the company.
Disadvantages of Wide Area Network
The following are the disadvantages of the Wide Area Network:
• Security issues: A WAN has more security issues than a LAN or MAN, because many different technologies are combined together, which creates security problems.
• Needs firewall and antivirus software: Data transferred over the internet can be intercepted or altered by hackers, so a firewall is needed. Viruses can also be injected into the system, so antivirus software is needed for protection.
• High setup cost: The installation cost of a WAN is high, as it involves purchasing routers and switches.
• Troubleshooting problems: It covers a large area so fixing the problem is
difficult.
14.3.2 WIRELESS LAN
Wireless LAN stands for Wireless Local Area Network. It is also called LAWN
(Local Area Wireless Network). WLAN is one in which a mobile user can connect to a
Local Area Network (LAN) through a wireless connection.
The IEEE 802.11 group of standards defines the technologies for wireless
LANs. For path sharing, 802.11 standard uses the Ethernet protocol and CSMA/CA
(carrier sense multiple access with collision avoidance). It also uses an encryption
method i.e. wired equivalent privacy algorithm.
Wireless LANs provide high speed data communication in small areas such as
building or an office. WLANs allow users to move around in a confined area while
they are still connected to the network.
In some instance wireless LAN technology is used to save costs and avoid
laying cable, while in other cases, it is the only option for providing high-speed
internet access to the public. Whatever the reason, wireless solutions are popping
up everywhere.
Advantages of WLANs
• Flexibility: Within radio coverage, nodes can communicate without further
restriction. Radio waves can penetrate walls, senders and receivers can be
placed anywhere (also non-visible, e.g., within devices, in walls etc.).
• Planning: Only wireless ad-hoc networks allow for communication without previous planning; any wired network needs wiring plans.
• Design: Wireless networks allow for the design of independent, small devices
which can for example be put into a pocket. Cables not only restrict users
but also designers of small notepads, PDAs, etc.
• Robustness: Wireless networks can handle disasters, e.g., earthquakes,
flood etc. whereas, networks requiring a wired infrastructure will usually
break down completely in disasters.
• Cost: The cost of installing and maintaining a wireless LAN is on average
lower than the cost of installing and maintaining a traditional wired LAN, for
two reasons. First, after providing wireless access to the wireless network via
an access point for the first user, adding additional users to a network will
not increase the cost. And second, wireless LAN eliminates the direct costs of
cabling and the labor associated with installing and repairing it.
• Ease of Use: Wireless LAN is easy to use and the users need very little new
information to take advantage of WLANs.
Disadvantages of WLANs
• Quality of service: The quality of a wireless LAN is typically lower than that of wired networks. The main reasons are lower bandwidth due to limitations in radio transmission, higher error rates due to interference, and higher delay/delay variation due to extensive error correction and detection mechanisms.
• Proprietary solutions: Due to slow standardization procedures, many companies have come up with proprietary solutions offering standard functionality plus many enhanced features. Most components today adhere to the basic standards IEEE 802.11a or 802.11b.
• Restrictions: Several govt. and non-govt. institutions world-wide regulate
the operation and restrict frequencies to minimize interference.
• Global operation: Wireless LAN products are sold in all countries so,
national and international frequency regulations have to be considered.
• Low power: Devices communicating via a wireless LAN are typically power-constrained, especially wireless devices running on battery power. The WLAN design should take this into account and implement special power saving modes and power management functions.
• License free operation: LAN operators don't want to apply for a special
license to be able to use the product. The equipment must operate in a
license free band, such as the 2.4 GHz ISM band.
• Robust transmission technology: Wireless LANs use radio transmission, so many other electrical devices can interfere with them (such as vacuum cleaners, train engines, hair dryers, etc.). Wireless LAN transceivers cannot be adjusted for perfect transmission in a standard office or production environment.
Fundamentals of WLANs
HiperLAN
• HiperLAN stands for High performance LAN. While all of the previous
technologies have been designed specifically for an adhoc environment,
HiperLAN is derived from traditional LAN environments and can support
multimedia data and asynchronous data effectively at high rates (23.5
Mbps).
• A LAN extension via access points can be implemented using standard
features of the HiperLAN/1 specification. However, HiperLAN does not
necessarily require any type of access point infrastructure for its operation.
• HiperLAN was started in 1992, and standards were published in 1995. It employs the 5.15 GHz and 17.1 GHz frequency bands and has a data rate of 23.5 Mbps with coverage of 50 m and mobility < 10 m/s.
• It supports a packet-oriented structure, which can be used for networks
with or without a central control (BS-MS and ad-hoc). It supports 25 audio
connections at 32kbps with a maximum latency of 10 ms, one video
connection of 2 Mbps with 100 ms latency, and a data rate of 13.4 Mbps.
• HiperLAN/1 is specifically designed to support adhoc computing for
multimedia systems, where there is no requirement to deploy a centralized
infrastructure. It effectively supports MPEG or other state of the art real time
digital audio and video standards.
• The HiperLAN/1 MAC is compatible with the standard MAC service
interface, enabling support for existing applications to remain unchanged.
• HiperLAN 2 has been specifically developed to have a wired infrastructure,
providing short-range wireless access to wired networks such as IP and
ATM.
The main differences between HiperLAN types 1 and 2 are as follows:
• Type 1 has a distributed MAC with QoS provisions, whereas type 2 has a
centralized schedule MAC.
• Type 1 is based on Gaussian minimum shift keying (GMSK), whereas type 2
is based on OFDM.
• HiperLAN/2 automatically performs handoff to the nearest access point. The
access point is basically a radio BS that covers an area of about 30 to 150
meters, depending on the environment. MANETs can also be created easily.
Home RF
• The goal of Home RF is to integrate all of these devices into a single network suitable for all applications, to remove all wires, and to utilize RF links in the network.
• This includes sharing PC, printer, fileserver, phone, internet connection, and
so on, enabling multiplayer gaming using different PCs and consoles inside
the home, and providing complete control on all devices from a single mobile
controller.
• With Home RF, a cordless phone can connect to PSTN but also connect
through a PC for enhanced services. Home RF makes an assumption that
simultaneous support for both voice and data is needed.
Advantages of Home RF
• In Home RF all devices can share the same connection, for voice or data at
the same time.
• Home RF provides the foundation for a broad range of interoperable
consumer devices for wireless digital communication between PCs and
consumer electronic devices anywhere in and around the home.
• The working group includes Compaq Computer Corp., Ericsson Enterprise Networks, IBM, Intel Corp., Motorola Corp. and others.
• A specification for wireless communication in the home called the shared
wireless access protocol (SWAP) has been developed.
IEEE 802.11 Standards
IEEE 802.11 is a set of standards for the wireless area network (WLAN), which
was implemented in 1997 and was used in the industrial, scientific, and medical
(ISM) band. IEEE 802.11 was quickly implemented throughout a wide region, but
under its standards the network occasionally receives interference from devices
such as cordless phones and microwave ovens. The aim of IEEE 802.11 is to
provide wireless network connection for fixed, portable, and moving stations within
ten to hundreds of meters with one medium access control (MAC) and several
physical layer specifications. The standard was later extended by amendments such as IEEE 802.11a, 802.11b, 802.11g, and 802.11n; their most significant differences lie in the specification of the PHY layer.
14.3.3 BLUETOOTH
Bluetooth is one of the major wireless technologies developed to achieve WPAN
(wireless personal area network). It is used to connect devices of different functions
such as telephones, computers (laptop or desktop), notebooks, cameras, printers,
and so on.
Architecture of Bluetooth
• Bluetooth devices can interact with other Bluetooth devices in several ways
in the figure. In the simplest scheme, one of the devices acts as the master
and (up to) seven other slaves.
• A network with a master and one or more slaves associated with it is known
as a piconet. A single channel (and bandwidth) is shared among all devices
in the piconet.
• Each of the active slaves has an assigned 3-bit active member address. Many other slaves can remain synchronized to the master while remaining inactive; these are referred to as parked nodes.
• The master regulates channel access for all active nodes and parked nodes. If two piconets are close to each other, they have overlapping coverage areas.
• This scenario, in which nodes of two piconets intermingle, is called a
scatternet. Slaves in one piconet can participate in another piconet as either
a master or slave through time division multiplexing.
• In a scatternet, the two (or more) piconets are not synchronized in either
time or frequency. Each of the piconets operates in its own frequency
hopping channel, and any devices in multiple piconets participate at the
appropriate time via time division multiplexing.
• The Bluetooth baseband technology supports two link types: the synchronous connection oriented (SCO) type, used primarily for voice, and the asynchronous connectionless (ACL) type, used essentially for packet data.
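The piconet rules above (one master, at most seven active slaves each holding a 3-bit member address, with further slaves parked) can be modelled in a short Python sketch. The class and attribute names are invented for illustration only.

```python
# Toy model of a Bluetooth piconet: one master, at most seven active
# slaves with 3-bit active member addresses, extra slaves parked.
class Piconet:
    MAX_ACTIVE = 7   # a 3-bit address allows slaves 1-7 (0 is broadcast)

    def __init__(self, master):
        self.master = master
        self.active = {}   # slave name -> 3-bit active member address
        self.parked = []   # synchronized but inactive slaves (parked nodes)

    def join(self, slave):
        if len(self.active) < self.MAX_ACTIVE:
            self.active[slave] = len(self.active) + 1  # next free address
        else:
            self.parked.append(slave)  # stays synchronized, but parked

net = Piconet("master")
for i in range(9):                 # nine devices try to join
    net.join(f"slave{i}")
print(len(net.active))  # 7
print(net.parked)       # ['slave7', 'slave8']
```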
14.3.4 INTERNET
Internet is called the network of networks. It is a global communication system
that links together thousands of individual networks. In other words, internet is a
collection of interlinked computer networks, connected by copper wires, fiber-optic
cables, wireless connections, etc. As a result, a computer can virtually connect to
other computers in any network. These connections allow users to interchange
messages, to communicate in real time (getting instant messages and responses), to
share data and programs and to access limitless information.
Basics of Internet Architecture
Internet architecture is a meta-network, which refers to a congregation of thousands of distinct networks interacting with a common protocol. In simple terms, it is referred to as an internetwork that is connected using protocols. The protocol used is TCP/IP, which can connect any two networks that differ in hardware, software and design.
Process
TCP/IP provides end to end transmission, i.e., each and every node on one
network has the ability to communicate with any other node on the network.
Layers of Internet Architecture
Internet architecture consists of three layers: the Internet Protocol (IP), the Transmission Control Protocol (TCP), and the application layer built on top of them.
A broader definition comes from the organization that Web inventor Tim Berners-
Lee helped found, the World Wide Web Consortium (W3C): The World Wide Web is
the universe of network-accessible information, an embodiment of human
knowledge.
In simple terms, The World Wide Web is a way of exchanging information
between computers on the Internet, tying them together into a vast collection of
interactive multimedia resources.
HTTP
HTTP stands for Hypertext Transfer Protocol. This is the protocol being used to
transfer hypertext documents that makes the World Wide Web possible.
A standard web address such as Yahoo.com is called a URL, and here the prefix http indicates its protocol.
URL
URL stands for Uniform Resource Locator, and is used to specify addresses on
the World Wide Web. A URL is the fundamental network identification for any
resource connected to the web (e.g., hypertext pages, images, and sound files).
A URL will have the following format −
protocol://hostname/other_information
The protocol specifies how information is transferred from a link. The protocol
used for web resources is HyperText Transfer Protocol (HTTP). Other protocols
compatible with most web browsers include FTP, telnet, newsgroups, and Gopher.
The protocol is followed by a colon, two slashes, and then the domain name.
The domain name is the computer on which the resource is located.
Links to particular files or subdirectories may be further specified after the
domain name. The directory names are separated by single forward slashes.
Website
Website is a collection of various pages written in HTML markup language.
There are millions of websites available on the web. Each page available on the
website is called a web page and first page of any website is called home page for
that site.
Web Server
Every Website sits on a computer known as a Web server. This server is always
connected to the internet. Every Web server that is connected to the Internet is given a unique address made up of a series of four numbers between 0 and 255, separated by periods. For example, 68.178.157.132 or 68.122.35.127.
When you register a Web address, also known as a domain name, such as tutorialspoint.com, you have to specify the IP address of the Web server that will host the site.
Web Browser
Web browsers are software installed on your PC. To access the Web you need a web browser, such as Netscape Navigator, Microsoft Internet Explorer or Mozilla Firefox.
You use a Web browser whenever you navigate through pages of information on the Web; this is commonly known as browsing or surfing.
SMTP Server
SMTP stands for Simple Mail Transfer Protocol. An SMTP server takes care of delivering emails from one server to another. When you send an email to an email address, it is delivered to its recipient by an SMTP server.
ISP
ISP stands for Internet Service Provider. ISPs are the companies that provide you with an internet connection.
You will buy space on a Web Server from any Internet Service Provider. This
space will be used to host your Website.
HTML
HTML stands for Hyper Text Markup Language. This is the language in which
we write web pages for any Website. Even the page you are reading right now is
written in HTML.
This is a subset of Standard Generalized Mark-Up Language (SGML) for
electronic publishing, the specific standard used for the World Wide Web.
Hyperlink
A hyperlink or simply a link is a selectable element in an electronic document
that serves as an access point to other electronic resources. Typically, you click the
hyperlink to access the linked resource. Familiar hyperlinks include buttons, icons,
image maps, and clickable text links.
DNS
DNS stands for Domain Name System. When someone types in your domain
name, www.example.com, your browser will ask the Domain Name System to find
the IP that hosts your site. When you register your domain name, your IP address
should be put in a DNS along with your domain name. Without doing it your
domain name will not be functioning properly.
W3C
W3C stands for World Wide Web Consortium which is an international
consortium of companies involved with the Internet and the Web.
The W3C was founded in 1994 by Tim Berners-Lee, the original architect of
the World Wide Web. The organization's purpose is to develop open standards so
that the Web evolves in a single direction rather than being splintered among
competing factions. The W3C is the chief standards body for HTTP and HTML.
14.3.5 INTRANET
Intranets are private networks used by organizations to distribute
communications exclusively to their workforce, and they’ve been used for decades
by enterprises for internal communications.
Intranets are run, created, and updated by a dedicated intranet or digital workplace
team. These teams use a variety of cross-functional skills to run the intranet.
The Spark Trajectory Intranet and Digital Workplace skills matrix shows how
intranet and digital workplace teams have a variety of skill sets that stem from
technology and IT management, content and communication, user experience
design, and social and collaboration management.
All these skill sets lend themselves to creating a tool that incorporates
communication, database management, and design. The Sparks skills matrix also
highlights how the department and its responsibilities are not as fixed or defined as
HR or IT. Because the intranet team is responsible for both creating, uploading to,
and managing the intranet, it can be difficult to uncover where exactly new features
and tools need to be added.
The intranet is not the digital workplace, whose goal is to break down
communication barriers and foster efficiency, innovation, and growth. The digital
workplace is not a one-size-fits-all solution but rather a best-in-class set of
platforms and tools that make work happen seamlessly. A successful digital
workplace uses intelligent workflows to make everything work on-demand and with
less friction.
Internet Vs Intranet
The difference between the internet and the intranet is simple: the internet is a
public network that is not owned by any entity, while an intranet is privately owned
and not accessible to just anyone who can get online.
Advantages of an intranet
Despite the fact that new technology is emerging to advance the field of
communications, for many companies, there are still key advantages to having an
intranet made and managed by a dedicated in-house team. Here are three strategic
benefits of having a company intranet.
Easy storage of files and information
Every organization has hundreds, if not thousands, of differently formatted
files floating among email threads, Google Drive, or hard drives on laptops or
desktops. Having a company intranet makes it easy to store and access all your
files in one central location. Any communication that happens on an intranet is
also saved for as long as the intranet is up. This makes it easier for individuals to
search for past posts from their company intranet.
Easy ways to communicate among employees
Intranets usually feature user profiles similar to your LinkedIn profile. They
contain a photo, job title and description, and contact information. Any employee
with access to the intranet can discover new colleagues and message them through
the intranet. This facilitates increased collaboration and helps establish a workforce
network.
LESSON - 15
NETWORKING TOPOLOGIES
15.1 INTRODUCTION
Networking is the core of any organization today for data and information
sharing. We have discussed the basics of networking in earlier chapters. There are
many methods to connect a computer or peripheral to a network. In this chapter we
shall discuss the topologies that are used to incorporate a computer into a
network.
15.2 OBJECTIVES
• To understand the various topologies in networking.
• To study the advantages and disadvantages of various topologies.
15.3 CONTENTS
15.3.1 Definition
15.3.2 Bus Topology
15.3.3 Ring Topology
15.3.4 Star Topology
15.3.5 Tree Topology
15.3.6 Mesh Topology
15.3.7 Hybrid Topology
15.3.1 DEFINITION
Topology defines the structure of the network, i.e., how all the components are
interconnected to each other. There are two types of topology: physical and logical
topology.
Physical topology is the geometric representation of all the nodes in a network.
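A physical topology can be written down as a list of links (edges) between nodes. The sketch below builds the link lists for a star, a ring, and a bus; the node names are arbitrary examples.

```python
# Sketch: representing topologies as edge lists. Node names are arbitrary.
def star_edges(hub, nodes):
    """Star: every node links only to the central hub."""
    return [(hub, n) for n in nodes]

def ring_edges(nodes):
    """Ring: each node links to the next, and the last closes the loop."""
    return [(nodes[i], nodes[(i + 1) % len(nodes)])
            for i in range(len(nodes))]

def bus_edges(backbone, nodes):
    """Bus: every node taps the single backbone cable."""
    return [(n, backbone) for n in nodes]

pcs = ["PC1", "PC2", "PC3", "PC4"]
print(len(star_edges("HUB", pcs)))  # 4 - one link per node
print(ring_edges(pcs)[-1])          # ('PC4', 'PC1') - the loop closes
```

Counting the links this way also shows why some topologies need more cabling than others: a star or bus needs one link per node, while a full mesh needs one link per pair of nodes.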
• The bus topology is mainly used in 802.3 (ethernet) and 802.4 standard
networks.
• The configuration of a bus topology is quite simple compared to other
topologies.
• The backbone cable is considered as a "single lane" through which the
message is broadcast to all the stations.
• The most common access method of the bus topologies is CSMA (Carrier
Sense Multiple Access).
CSMA: It is a media access control method used to control the data flow so that
data integrity is maintained, i.e., the packets do not get lost. There are two
alternative ways of handling the problems that occur when two nodes send
messages simultaneously.
• CSMA CD: CSMA CD (Collision Detection) is an access method used to
detect the collision. Once the collision is detected, the sender will stop
transmitting the data. Therefore, it works on "recovery after the collision".
• CSMA CA: CSMA CA (Collision Avoidance) is an access method used to
avoid the collision by checking whether the transmission media is busy or
not. If busy, then the sender waits until the media becomes idle. This
technique effectively reduces the possibility of the collision. It does not work
on "recovery after the collision".
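The CSMA CA rule of listening before sending can be sketched as a tiny simulation. The "ticks" here are simulated time steps, not real network timing, so this is only a toy model of the carrier-sense idea.

```python
# Toy sketch of CSMA CA: before transmitting, a station senses the medium
# and defers while it is busy. Ticks are simulated time steps.
def csma_ca_send(medium_busy_ticks: int) -> int:
    """Return the tick at which transmission starts: the sender waits
    out every busy tick instead of risking a collision."""
    tick = 0
    while tick < medium_busy_ticks:   # carrier sense: medium still busy
        tick += 1                     # wait one tick and listen again
    return tick                       # medium idle: transmit now

print(csma_ca_send(0))  # 0 - medium idle, send immediately
print(csma_ca_send(3))  # 3 - deferred for three busy ticks
```

Because the sender never transmits while the medium is sensed busy, collisions are avoided up front rather than recovered from afterwards, which is exactly the contrast with CSMA CD drawn above.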
• It has no terminated ends, i.e., each node is connected to another node
with no termination point.
• The data in a ring topology flows in a clockwise direction.
• The most common access method of the ring topology is token passing.
o Token passing: It is a network access method in which token is
passed from one node to another node.
o Token: It is a frame that circulates around the network.
Working of Token passing
• A token moves around the network, and it is passed from computer to
computer until it reaches the destination.
• The sender modifies the token by putting the address along with the data.
• The data is passed from one device to another until the destination
address matches. Once the token is received by the destination device, it
sends an acknowledgment to the sender.
• In a ring topology, a token is used as a carrier.
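The steps above can be sketched as a toy simulation: the token visits stations in order around the ring until the destination address matches. The station names are hypothetical.

```python
# Toy sketch of token passing in a ring: the token circulates from
# station to station until the destination address matches.
def deliver(ring, source, destination):
    """Pass the token clockwise from source; return the stations visited."""
    i = ring.index(source)
    visited = [source]
    while visited[-1] != destination:
        i = (i + 1) % len(ring)     # token moves to the next station
        visited.append(ring[i])     # each station checks the address
    return visited

ring = ["A", "B", "C", "D"]
print(deliver(ring, "B", "D"))  # ['B', 'C', 'D'] - two hops clockwise
print(deliver(ring, "D", "A"))  # ['D', 'A'] - the ring wraps around
```

The length of the visited list also illustrates the delay disadvantage noted below: the more stations the token must pass through, the longer delivery takes.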
Advantages of Ring topology
• Network Management: Faulty devices can be removed from the network
without bringing the network down.
• Product availability: Many hardware and software tools for network
operation and monitoring are available.
• Cost: Twisted pair cabling is inexpensive and easily available. Therefore, the
installation cost is very low.
• Reliable: It is a more reliable network because the communication system is
not dependent on the single host computer.
Disadvantages of Ring topology
• Difficult troubleshooting: It requires specialized test equipment to
determine the cable faults. If any fault occurs in the cable, then it would
disrupt the communication for all the nodes.
• Failure: The breakdown in one station leads to the failure of the overall
network.
• Reconfiguration difficult: Adding new devices to the network would slow
down the network.
• Delay: Communication delay is directly proportional to the number of
nodes. Adding new devices increases the communication delay.
• Tree topology combines the characteristics of bus topology and star topology.
• A tree topology is a type of structure in which all the computers are
connected with each other in hierarchical fashion.
• The top-most node in tree topology is known as the root node, and all other
nodes are the descendants of the root node.
• Only one path exists between any two nodes for data transmission. Thus, it
forms a parent-child hierarchy.
Advantages of Tree topology
• Support for broadband transmission: Tree topology is mainly used to
provide broadband transmission, i.e., signals are sent over long distances
without being attenuated.
• Easily expandable: We can add new devices to the existing network.
Therefore, we can say that tree topology is easily expandable.
• Easily manageable: In tree topology, the whole network is divided into
segments known as star networks which can be easily managed and
maintained.
• Error detection: Error detection and error correction are very easy in a tree
topology.
• Limited failure: The breakdown in one station does not affect the entire
network.
• Point-to-point wiring: It has point-to-point wiring for individual segments.
Disadvantages of Tree topology
• Difficult troubleshooting: If any fault occurs in the node, then it becomes
difficult to troubleshoot the problem.
• High cost: Devices required for broadband transmission are very costly.
• Failure: A tree topology mainly relies on the main bus cable; a failure in the
main bus cable will damage the overall network.
• Reconfiguration difficult: If new devices are added, then it becomes
difficult to reconfigure.
15.3.6 MESH TOPOLOGY
• Scalable: Size of the network can be easily expanded by adding new devices
without affecting the functionality of the existing network.
• Flexible: This topology is very flexible as it can be designed according to the
requirements of the organization.
• Effective: Hybrid topology is very effective as it can be designed in such a
way that the strength of the network is maximized and weakness of the
network is minimized.
Disadvantages of Hybrid topology
• Complex design: The major drawback of the Hybrid topology is the design of
the Hybrid network. It is very difficult to design the architecture of the
Hybrid network.
• Costly Hub: The Hubs used in the Hybrid topology are very expensive as
these hubs are different from usual Hubs used in other topologies.
• Costly infrastructure: The infrastructure cost is very high as a hybrid
network requires a lot of cabling, network devices, etc.
15.4 REVISION POINTS
• Definitions of various topologies
• Advantages and disadvantages of various topologies
15.5 INTEXT QUESTIONS
1. Define topology.
2. List down the features of bus topology.
3. Write down the advantages of tree topology.
4. Record the features of mesh topology.
5. What are the advantages of hybrid topology?
15.6 SUMMARY
Topology defines the structure of the network of how all the components are
interconnected to each other
The bus topology is designed in such a way that all the stations are
connected through a single cable known as a backbone cable
The most common access method of the ring topology is token passing
Star topology is an arrangement of the network in which every node is
connected to the central hub, switch or a central computer.
Mesh topology is an arrangement of the network in which computers are
interconnected with each other through various redundant connections.
15.7 TERMINAL EXERCISE
1. The backbone cable is considered as a ___________ through which the
message is broadcast to all the stations.
2. CSMA stands for ________________________
3. The data flow is unidirectional in ________________
LESSON - 16
ADVANCED NETWORKING AND VIRUS
16.1 INTRODUCTION
Networking is the core of any organization today for data and information
sharing, and we have discussed the basics of networking in earlier chapters. Today's
businesses are run using sophisticated, high-speed and complex networks. Here we
shall discuss advanced modes of networking and their uses in business.
Since networks are exposed to many users, there are many threats associated
with them. One form of threat is the virus. Let us have a look at it in this
chapter.
16.2 OBJECTIVES
• To understand the advanced methods of networking
• To study the pros and cons of advanced networking methods.
• To get knowledge about virus and its impact.
16.3 CONTENTS
16.3.1 Virtual Private Networking
16.3.2 Peer to Peer Networking
16.3.3 Client Server Networking
16.3.4 Virus
16.3.5 Anti-Virus
16.3.1 VIRTUAL PRIVATE NETWORK
A virtual private network, or VPN, is an encrypted connection over the Internet
from a device to a network. The encrypted connection helps ensure that sensitive
data is safely transmitted.
A VPN creates a safe and encrypted connection over a less secure network,
such as the internet, and is a way to extend a private network using a public one.
As the name suggests, it is a virtual "private network", i.e., a user can be part of a
local network while sitting at a remote location. It makes use of tunneling
protocols to establish a secure connection.
Surfing the web or transacting on an unsecured Wi-Fi network means you
could be exposing your private information and browsing habits. That’s why a
virtual private network, better known as a VPN, should be a must for anyone
concerned about their online security and privacy.
The encryption and anonymity that a VPN provides helps protect your online
activities: sending emails, shopping online, or paying bills. VPNs also help keep
your web browsing anonymous.
VPNs essentially create a data tunnel between your local network and an exit
node in another location, which could be thousands of miles away, making it seem
as if you’re in another place. This benefit allows online freedom, or the ability to
access your favorite apps and websites while on the go.
A VPN can hide a lot of information that can put your privacy at risk. Here are five
of them.
1. Your browsing history
2. Your IP address and location
3. Your location for streaming
4. Your devices
5. Your web activity — to maintain internet freedom
Features of VPN
1. A VPN ensures security by providing an encrypted tunnel between the client
and the VPN server.
2. A VPN can be used to bypass many blocked sites.
3. A VPN facilitates anonymous browsing by hiding your IP address.
4. Search engine optimization (SEO) can be refined by analyzing data from VPN
providers, which provide country-wise statistics on browsing for a particular
product. This method of SEO is used widely by many internet marketing
managers to form new strategies.
How to choose a VPN
A smart way to stay secure when using public Wi-Fi is to use a VPN solution.
But what’s the best way to choose a virtual private network? Here are some
questions to ask when you’re choosing a VPN provider.
1. Do they respect your privacy? The point of using a VPN is to protect your
privacy, so it’s crucial that your VPN provider respects your privacy, too.
They should have a no-log policy, which means that they never track or log
your online activities.
2. Do they run the most current protocol? OpenVPN provides stronger
security than other protocols, such as PPTP. OpenVPN is an open-source
software that supports all the major operating systems.
3. Do they set data limits? Depending on your internet usage, bandwidth
may be a large deciding factor for you. Make sure their services match your
needs by checking to see if you’ll get full, unmetered bandwidth without data
limits.
4. Where are the servers located? Decide which server locations are
important to you. If you want to appear as if you’re accessing the Web from a
certain locale, make sure there’s a server in that country.
5. Will you be able to set up VPN access on multiple devices? If you are like
the average consumer, you typically use between three and five devices.
Ideally, you’d be able to use the VPN on all of them at the same time.
6. How much will it cost? If price is important to you, then you may think
that a free VPN is the best option.
16.3.2 PEER TO PEER NETWORK
The peer to peer computing architecture contains nodes that are equal
participants in data sharing. All the tasks are equally divided between all the nodes.
The nodes interact with each other as required and share resources.
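The idea of equal participants sharing resources directly can be sketched as a toy model: each peer both offers its own files and requests files from the peers it knows, with no central server involved. The peer names and file names are invented for the example.

```python
# Toy sketch of peer-to-peer sharing: every node is an equal participant
# that both serves its own resources and requests resources from others.
class Peer:
    def __init__(self, name, files):
        self.name = name
        self.files = dict(files)   # resources this peer shares
        self.peers = []            # other nodes it knows about

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def fetch(self, filename):
        """Look locally first, then ask each known peer in turn."""
        if filename in self.files:
            return self.files[filename]
        for peer in self.peers:
            if filename in peer.files:
                return peer.files[filename]
        return None  # no peer has it - there is no central server to ask

a = Peer("A", {"notes.txt": "topology notes"})
b = Peer("B", {"plan.doc": "IT plan"})
a.connect(b)
print(a.fetch("plan.doc"))  # IT plan - served directly by peer B
```

The `None` case makes the backup and security disadvantages below concrete: with data scattered across independent peers, there is no single place to back up or to enforce access control.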
Disadvantages of Peer to Peer Computing
Some disadvantages of peer to peer computing are as follows −
It is difficult to back up the data as it is stored in different computer systems
and there is no central server.
It is difficult to provide overall security in the peer to peer network as each
system is independent and contains its own data.
16.3.3 CLIENT-SERVER NETWORKS
Client-server networks are computer networks that use a dedicated computer
(server) to store data, manage/provide resources and control user access.
The server acts as a central point on the network to which the other
computers connect.
A computer that connects to the server is called a client.
A client-server network is usually preferred over a peer-to-peer network,
which doesn't have a central server to manage the network.
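The client-server arrangement can be sketched with Python's standard socket library: one dedicated server process accepts connections on a known address, and a client connects to it to request a service. Here the "service" is simply echoing data back, and everything runs on the local machine for illustration.

```python
# Minimal sketch of a client-server exchange on the local machine:
# a server thread accepts one connection and echoes back what it receives.
import socket
import threading

def run_echo_server(server_sock: socket.socket) -> None:
    conn, _addr = server_sock.accept()   # wait for one client to connect
    with conn:
        data = conn.recv(1024)           # receive the client's request
        conn.sendall(data)               # serve it: echo the data back

# Server: the central point - bind to any free port on loopback.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server_sock,)).start()

# Client: connects to the server and sends a request.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello server")
    reply = client.recv(1024)

server_sock.close()
print(reply.decode())  # hello server
```

Note how the roles are asymmetric: the server binds, listens and serves, while the client only connects and requests, which is the defining contrast with the peer-to-peer model above.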
Network server functions
A client-server network may have more than one server, each dedicated to
handling a specific function.
Functions may include:
• Data storage
• Handling security
• Hosting shared applications
• Managing an internet connection
• Scheduling and running backups
• Email services
• Print jobs
• Domain name services
• Storing usernames and passwords to control access
• Assigning levels of access to resources
• Monitoring network traffic
Benefits of a client-server network
• Generally more secure than peer-to-peer networks
• One client computer crashing does not affect the other computers
• Easier to recover files as backups can be controlled centrally by the network
administrator
• Files and resources are easier to share and control from server
• Improved levels of security as files are centralised
• It’s easier to administrate the whole network using a server
• Faster performance as each computer is only fulfilling one role
• Security is potentially cheaper and easier when done centrally
• Individual users do not have to worry about backups or security
• Larger networks can be created
Drawbacks of a client-server network
• Servers can be expensive to buy and maintain
• A network technician will often be required
• Trickier to set up with specialist knowledge needed
• Overall setup cost is more expensive than a peer-to-peer network
• Server failure will probably disrupt all computers on the network
16.3.4 VIRUS
A computer virus is a type of malicious code or program written to alter the
way a computer operates and is designed to spread from one computer to another.
A virus operates by inserting or attaching itself to a legitimate program or document
that supports macros in order to execute its code. In the process, a virus has the
potential to cause unexpected or damaging effects, such as harming the system
software by corrupting or destroying data.
Once a virus has successfully attached to a program, file, or document, the
virus will lie dormant until circumstances cause the computer or device to execute
its code. In order for a virus to infect your computer, you have to run the infected
program, which in turn causes the virus code to be executed.
A virus can remain dormant on your computer, without showing major signs
or symptoms. However, once the virus infects your computer, the virus can infect
LESSON -17
IT STRATEGIC ALIGNMENT
17.1 INTRODUCTION
Information technology is the use of computers to store and process facts and
figures into a useful, organized, form. “Data” is the raw material: numbers and
facts. “Information” is the raw material organized in a useful way. Numbers are
data. A telephone book full of numbers is information. To emphasize the role of
communications some people use the acronym ICT which stands for Information
and Communication Technology.
The IT sector is so well evolved that it directly or indirectly influences the
working of various other sectors and industries. The IT sector acts as a supporting
figure for various sectors such as healthcare, aviation, education, the
manufacturing sector, the telecommunications sector, various government
initiatives and departments, etc.
The IT industry is not limited to software development alone; it can also be
applied in libraries, hospitals, banks, shops, prisons, hotels, airports, train
stations and many other places through database management systems, or
through custom-made software as seen fit.
17.2 OBJECTIVES
• To understand the concepts of Information technology
• To recognize the meaning of IT strategy
• To get insights of IT strategy and business alignment
• To study the various models for IT strategies.
17.3 CONTENTS
17.3.1 Information Technology Features
17.3.2 IT Strategy
17.3.3 IT-Business Alignment Model
17.3.4 IT and Porter’s Five Forces
17.3.5 Value Chain Model
17.3.6 Strategic Resources and Capabilities
17.3.1 INFORMATION TECHNOLOGY: FEATURES
i. The Importance of Information Storage & Retrieval Systems in an Organization
Information is a critical business resource and like any other critical resource
must be properly managed. Constantly evolving technology, however, is changing
the way that even very small businesses manage vital business information. An
information or records management system -- most often electronic -- designed to
capture, process, store and retrieve information is the glue that holds a business
together.
offerings. Instead, these organizations may fold IT strategies into the overall
business strategy to create a single unified document.
sellers have been able to renegotiate the prices of raw materials because of
increased volume of production.
Role of Managers in IT-Enabled Strategy
Firstly, the managers need to understand industry characteristics and impact
of IT on them. They should be able to predict the role of IT in their industry and the
way it might affect industry characteristics in future. IT might change each force
separately and also the combined effect of all these five forces. Even the boundaries
of business models in the industry might change. Thought leaders in industries will
be able to see the foreseeable changes and become ready for that. They even might
use IT to change the characteristics so that their competitors are forced to follow
the same.
Secondly, managers think of changes in the business models through
collaboration with organizations within as well as outside the industry. Unless
managers foresee these changes in business boundaries, they would not be able to
take full advantage of technology. In many of the recent benchmarking exercises in
IT industry, competitors have collaborated with each other through online forums
to share best practices with each other and gain customers' confidence.
Thirdly, managers should manage the change that will be necessitated
because of technology adoption. They should devise a plan that will prioritize
technology investment in different departments/functions, develop business case
for investment and prepare a roadmap for implementing new technology. Business
managers should work with IT managers to decide architecture, integration of
applications, and choice of right technology so that business alignment is achieved
while maintaining competitive advantage.
Finally, managers should use IT to create a learning organization. A
technology-enabled learning organization should use technology to capture learning
from business transactions, create a knowledge repository, and then share these
best practices throughout the organization.
17.3.5 VALUE CHAIN MODEL
A value chain is a business model that describes the full range of activities
needed to create a product or service. For companies that produce goods, a value
chain comprises the steps that involve bringing a product from conception to
distribution, and everything in between—such as procuring raw materials,
manufacturing functions, and marketing activities.
A company conducts a value-chain analysis by evaluating the detailed
procedures involved in each step of its business. The purpose of a value-chain
analysis is to increase production efficiency so that a company can deliver
maximum value for the least possible cost.
Because of ever-increasing competition for unbeatable prices, exceptional
products, and customer loyalty, companies must continually examine the value
they create in order to retain their competitive advantage. A value chain can help a
company to discern areas of its business that are inefficient, then implement
strategies that will optimize its procedures for maximum efficiency and profitability.
In addition to ensuring that production mechanics are seamless and efficient,
it's critical that businesses keep customers feeling confident and secure enough to
remain loyal.
The overarching goal of a value chain is to deliver the most value for the least
cost in order to create a competitive advantage.
"Competitive advantage cannot be understood by looking at a firm as a whole.
It stems from the many discrete activities a firm performs in designing, producing,
marketing, delivering, and supporting its product" (Michael E. Porter).
In other words, it's important to maximize value at each specific point in a firm's
processes.
Elements in Porter's Value Chain
Rather than looking at departments or accounting cost types, Porter's Value
Chain focuses on systems, and how inputs are changed into the outputs purchased
by consumers. Using this viewpoint, Porter described a chain of activities common
to all businesses, and he divided them into primary and support activities, as
shown below.
Primary Activities
Primary activities relate directly to the physical creation, sale, maintenance
and support of a product or service. They consist of the following:
Inbound logistics – These are all the processes related to receiving, storing,
and distributing inputs internally. Your supplier relationships are a key factor in
creating value here.
Operations – These are the transformation activities that change inputs into
outputs that are sold to customers. Here, your operational systems create value.
working capital will place great strain on the business finances. Such a strategy
needs to be very carefully managed from a finance point-of-view.
Physical resources: This category covers a wide range of operational resources
concerned with the physical capability to deliver a strategy. It includes production
facilities, marketing facilities and information technology facilities.
Intangible resources: patents, know-how, relationships, etc.
However, these must meet the following criteria to be of strategic importance
(i.e., to be core competences): value, rarity, non-substitutability and inimitability.
Resource-based theory explains strategic resources and capabilities best. It
contends that the possession of strategic resources provides an organization with a
golden opportunity to develop competitive advantages over its rivals ("Resource-
Based Theory: The Basics") (Barney, 1991).
These competitive advantages in turn can help the organization enjoy strong profits,
especially over time.
According to resource-based theory, organizations that own “strategic
resources” have important competitive advantages over organizations that do not.
Some resources, such as cash and trucks, are not considered to be strategic
resources because an organization’s competitors can readily acquire them. Instead,
a resource is strategic to the extent that it is valuable, rare, difficult to imitate, and
non-substitutable.
Resource-based theory can be confusing because the term "resources" is used
in many different ways in everyday language. It is important to
distinguish strategic resources from other resources. To most individuals, cash is
an important resource. Tangible goods such as one’s car and home are also vital
resources. When analyzing organizations, however, common resources such as
cash and vehicles are not considered to be strategic resources. Resources such as
cash and vehicles are valuable, of course, but an organization’s competitors can
readily acquire them. Thus an organization cannot hope to create an enduring
competitive advantage around common resources.
A strategic resource is an asset that is valuable, rare, difficult to imitate,
and non-substitutable.
Valuable resources aid in improving the organization’s effectiveness and
efficiency while neutralizing the opportunities and threats of competitors.
Difficult-to-imitate resources often involve legally protected intellectual
property such as trademarks, patents, or copyrights. Other difficult-to-imitate
resources, such as brand names, usually need time to develop fully.
Rare resources are those held by few or no other competitors.
LESSON - 18
INFORMATION TECHNOLOGY PLANNING
18.1 INTRODUCTION
Information technology planning is a discipline within the information
technology and information systems domain and is concerned with making the
planning process for information technology investments and decision-making
quicker, more flexible, and more thoroughly aligned.
18.2 OBJECTIVES
• To understand the concept of IT planning
• To recognize the steps in IT planning.
18.3 CONTENTS
18.3.1 IT planning
18.3.2 Establish Leadership And Support.
18.3.3 Assess Your Resources.
18.3.4 Define Your Needs.
18.3.5 Explore Solutions.
18.3.6 Write the Plan.
18.3.7 Get Funding and Implement the Plan
18.3.1 IT PLANNING
IT planning guides the use of resources for IT systems and services used
throughout the organisation.
IT planning has three components: IT governance, IT leadership development,
and IT strategic planning.
IT governance defines the processes, components, structures, and participants
for making decisions regarding the use of IT.
IT leadership development defines who will lead and drive IT strategies to a
successful conclusion. It also prepares and develops the current and next
generation of IT leaders across the organization.
Information Technology planning is a process that takes time and resources in
order to understand what is appropriate for the organization. Program directors and
their management staff may use this resource to further their understanding of
what is involved in technology planning.
• Establish leadership and support
• Assess your resources
• Define your needs
• Explore solutions
• Write the plan
• Get funding
• Implement the plan
Effective technology planning is an involved process. It takes a commitment of
time and resources from senior managers and other staff. In order to make good
decisions, an organization also needs to understand key aspects of technology.
But through technology planning, organizations can make significant gains.
Sound technology management leads to greater productivity, increased staff
morale, and improved service to clients through having machines that work,
networks that give access to information, and applications that are appropriate for
an organization's mission.
Information can transform organizations by giving them the tools to
understand the environment they're working in, to measure the effectiveness of
their actions, and to counter opposing information from other groups and policy
makers. Technology is uniquely positioned to harness the power of information.
Technology planning is a process
1. Establish leadership and support.
2. Assess your resources.
3. Define your needs.
4. Explore solutions.
5. Write the plan.
6. Get funding.
7. Implement the plan.
18.3.2 ESTABLISH LEADERSHIP AND SUPPORT
Setting up an Information Technology team and ensuring management and staff
buy-in will allow you to get started with the whole organization behind you.
A tech plan isn't written in a day. The process behind the writing is the most
important part, and the process is all about how staff work together to find the best
solutions.
Information Technology Team: It is crucial that the Information Technology
plan be a product of the whole organization, not just one staff person's brainchild.
Nonprofit Information Technology experts all recommend that you set up an
Information Technology team to lead your Information Technology planning
process, if you do not have a team already. An IT team should be made up of a wide
range of staff members. It is very important to have your executive director or
another person in management involved. Your team might be composed of a board
member, the executive director, a project manager, an administrative assistant, an
accountant and a development director, as well as your system administrator, if
you have one. Set up a regular meeting schedule to review progress on the plan.
Make sure to distribute responsibilities and set clear expectations so that each
person is involved in the process.
Lead Person: It is crucial to have one person who is designated to lead the
Information Technology team and coordinate the whole process. That person need
not be someone who is already in a management position, but should be someone
with leadership capabilities and relative comfort with Information Technology.
Management Support: It is next to impossible to do an Information Technology
plan and carry it out without active support from management. Management is the
key to financial support and funding for the plan. It also makes a huge difference if
you can convince your management to stand up and talk to staff about the plan.
One strategy for convincing management is to describe the current costs of not
doing a plan. Let them know how many hours of staff time are wasted, and how
much money is lost trying to make the current system work. If your organization
requires a major Information Technology overhaul, management will appreciate a
plan which is broken into implementation phases, so that they are not faced with
funding the entire initiative in one budget year. Even if management is reluctant,
they should be consulted and informed at every major step.
18.3.3 ASSESS YOUR RESOURCES
The first step in planning is to assess your existing Information Technology.
What do you have in place? How well is it working?
The key is to spend some time asking yourself what is working, and what
needs improvement. What Information Technology do you have in place in your
organization? What Information Technology skills does your staff have? Who does
your organization rely on for Information Technology support?
One part of assessment is taking a basic inventory of the computers and
software in your organization. A hardware inventory worksheet can give you a sense
of the overall capacity and range of workstations in your organization. A software
inventory worksheet can give you an overview of the software resources and how
they are distributed on different computers.
By taking this step, you can help avoid buying redundant technologies or
incompatible technologies, and you can help assess whether any of your current
Information Technology is obsolete.
In the hardware inventory worksheet, you will want to write down the following
items for each computer:
• User
• Brand
• Model
• Serial Number
• Monitor type
• Processor type and speed
• RAM
• Hard disk capacity
• Available hard disk space
• Operating system
• Modem or network card (if any)
• Ports available (USB, FireWire, SCSI, etc.)
• Floppy, CD, or DVD drive (Be specific: indicate the type of floppy drive or
whether you have a CD, CD-R, CD-RW, DVD, DVD-R, DVD+RW, DVD-RW,
or DVD-RAM drive)
• Any additional equipment attached to the computer
• Other equipment such as network printers, switches, firewalls, modems, etc.
In the software inventory worksheet, you will want to mark down major
software packages that you use, along with their version numbers.
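The two inventory worksheets described above can be captured in a simple record structure. The sketch below is illustrative, not prescribed by the text: the field names, sample machines, and thresholds are assumptions, but it shows how an inventory makes redundant software and low-capacity machines easy to spot.

```python
from dataclasses import dataclass, field

# Illustrative hardware inventory record; field names loosely follow the
# worksheet items listed above (the exact names here are an assumption).
@dataclass
class HardwareRecord:
    user: str
    brand: str
    model: str
    serial_number: str
    ram_gb: int
    disk_capacity_gb: int
    disk_free_gb: int
    operating_system: str
    software: list = field(default_factory=list)  # installed packages with versions

inventory = [
    HardwareRecord("A. Kumar", "Dell", "OptiPlex", "SN-001", 8, 500, 120,
                   "Windows 10", ["Office 2019", "Tally 9"]),
    HardwareRecord("B. Devi", "HP", "ProDesk", "SN-002", 4, 250, 30,
                   "Windows 10", ["Office 2019"]),
]

def software_counts(records):
    """How software is distributed across computers (software worksheet)."""
    counts = {}
    for rec in records:
        for pkg in rec.software:
            counts[pkg] = counts.get(pkg, 0) + 1
    return counts

# Assessment question from the text: which machines may be nearing obsolescence?
low_space = [r.user for r in inventory if r.disk_free_gb < 50]

print(software_counts(inventory))
print(low_space)
```

Even a spreadsheet serves the same purpose; the point is that a structured inventory lets the IT team answer capacity and redundancy questions before buying anything new.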
There's more to an assessment than listing your hardware and software. For
example, you need to document your network set-up, access policies, and
protocols; document your services, including centralized databases, email, and
groupware; and document your management practices, from staffing to written
policies.
The most important part of assessment is to ask yourself some questions
about how well your systems are currently working.
18.3.4 DEFINE YOUR NEEDS
Why do you need Information Technology? What will new Information
Technology help you do that you can't do already? Defining your needs will enable
you to choose the most efficient solutions.
The trick to defining your needs is to describe what you want to do with
Information Technology, not what you think you need to buy. Consider the
problems you might run into in your organization--new policies to institute,
procedures you need to follow to find new funding, and new staff members to work
into your organization's structure. Then consider all the potential tools, including
Information Technology tools that you might use to solve these problems.
Start by thinking more abstractly, then begin to discuss how Information
Technology might help you solve your problems and help your organization better
fulfill its mission. What might your staff members be able to accomplish with a new
intranet? What new capability will make a critical difference to productivity?
Put together a good Information Technology team, one that represents all the
major program and administrative areas of the organization--including a decision
maker who is involved in strategic planning--and technical staff. Remember that a
team full of people who have technical skills is not necessarily the best equipped to
think of Information Technology in terms of your organization's mission. It also
helps if the Information Technology team gathers input from staff about their
needs. You can get staff input through a survey, or through individual interviews.
The more you can connect your Information Technology needs to your larger
mission as an organization, the better your plan will be. Its recommendations will
be more useful and meaningful, as well as more convincing to potential funders.
As you define your needs, develop a sense of what your priorities are. What is
mission-critical for the next month, and what can wait half a year? For instance, a
nonprofit might decide that backing up all data takes first priority, while developing
a website for funders can wait a few months.
Also look to other organizations in your sector to learn about best practices in
Information Technology. While you don't want to follow other organizations blindly,
keeping abreast of changes in your field is essential to being able to take advantage
of Information Technology in a timely manner.
18.3.5 EXPLORE SOLUTIONS
The next step is to research existing Information Technology options and
decide on ones that meet your needs at a minimum cost.
Once you have assessed your resources and defined your needs, the next step
is to make a concrete plan for how to meet those needs. This phase of Information
Technology planning requires the most technical knowledge.
If you have not already been working with a consultant, you may want to hire
one at this point. Make sure any consultants you hire know what your budget
range is. Tell them what support resources you will have available so they do not
recommend a system that requires extensive maintenance if you do not have the
staff time or expertise for it.
Deciding on concrete solutions that fit within your budget can be the most
difficult part of Information Technology planning. It's important to make sure that
all the solutions you pick are compatible. For instance, if you want a new database,
a new back-up system, and a new network, you will have to make sure that the
database can be shared across the type of network you are getting, and the back-up
system can copy the database when it is open, if necessary. Information Technology
is interdependent and there are dozens of options with different price tags for each
Information Technology decision, so negotiating your priorities can get very tricky.
The important thing is to go back to your original vision of how Information
Technology can help you accomplish your mission. What are the key new functions
you want Information Technology to fill? Consider price, of course, but don't get
locked into an inexpensive Information Technology that won't grow with you and
won't work with future technologies.
Before you decide on a solution or defer to a consultant, make sure you have a
solid understanding of the different options. Start with background information and
further resources to answer overarching Information Technology questions:
• What type of network do you need?
Assign responsibilities. Make clear which staff member will carry out which
task.
Establish a timeline. Set milestones and target dates for different phases of
your plan.
Evaluate your success. Evaluation should be built into any planning process,
and Information Technology planning is no exception. Decide beforehand what
indicators of success you will look for. Build evaluation checkpoints into your
timeline.
Update your Information Technology plan. An Information Technology plan
should be a living, breathing document. As new needs and priorities come up,
modify the plan accordingly! If one Information Technology project does not help
you as you hoped, you are free to go back to the plan to rethink and rewrite.
18.4 REVISION POINTS
• Features of Information technology
• Steps in information technology planning process.
18.5 INTEXT QUESTIONS
1. What are the three components of IT planning?
2. What do you mean by assessing the resources?
3. How to define your IT needs?
4. Write a short note on organizational profile.
5. List out the key elements for successful implementation of IT planning.
18.6 SUMMARY
• IT planning guides the use of resources for IT systems and services used
throughout the organisation.
• Information Technology planning is a process that takes time and resources
in order to understand what is appropriate for the organization.
• The IT planning team might be composed of a board member, the executive
director, a project manager, an administrative assistant, an accountant and
a development director, as well as your system administrator.
• One part of assessment is taking a basic inventory of the computers and
software in your organization.
• The trick to defining your needs is to describe what you want to do with
Information Technology.
• The budget should include estimated costs for all aspects of the projects you
have listed.
LESSON - 19
MANAGING IS DEPARTMENT
19.1 INTRODUCTION
Firms organize their Information Services function in very different ways,
reflecting the nature of their business, their general structure and business
strategy, their history, and the way they wish to provide information services to the
business units. Utmost care should be taken to manage the vulnerabilities of the
information system. If something goes wrong, the entire organisation will have
to face the consequences or adverse effects. In this lesson, let us discuss how the
information system department is managed in a corporate setting.
19.2 OBJECTIVES
• To know about the people in-charge of information system
• To understand the major operations in information system
• To be familiar about the vulnerabilities in information management
• To create awareness of various controls for information system
19.3 CONTENTS
19.3.1 Managing Information System
19.3.2 Managing Information Systems Operations
19.3.3 Threats to Security, Privacy, and Confidentiality in IS Operations
19.3.4 Risk Assessment in Safeguarding Information Systems
19.3.4.1 General Controls
19.3.4.2 Application Controls
19.3.5 Auditing Information Systems
19.3.1 MANAGING INFORMATION SYSTEM
The Information Services (IS) department is the unit responsible for providing
or coordinating the delivery of computer-based information services in an
organization. These services include:
1. Developing and maintaining organizational information systems.
2. Facilitating the acquisition and adaptation of software and hardware.
3. Coordinating the delivery of many of these services, rather than providing all
of them itself.
Firms organize their Information Services function in very different ways,
reflecting the nature of their business, their general structure and business
strategy, their history, and the way they wish to provide information services to the
business units. Most IS departments remain centralized, although a more
contemporary structure has replaced the traditional functional structure within
many centralized IS units. This newer structure is far better suited to servicing a
firm's business units with specialized consulting and end-user oriented services.
Administrative Controls
Administrative controls aim to ensure that the entire control framework is
instituted, continually supported by management, and enforced with proper
procedures, including audits.
Administrative controls include:
1. Published controls policy
2. Formal procedures
3. Screening of personnel
4. Continuing supervision of personnel
5. Separation of duties
Systems Development and Maintenance Controls
Internal IS auditors should be involved through the entire systems
development process. They should:
1. Participate in major milestones and sign off on the appropriate deliverables.
They need to ensure that the system is secure, and also auditable.
2. Participate in the post-implementation review that follows the system being
placed in operation.
3. Check that the appropriate system documentation is developed and
maintained
4. During systems maintenance, ensure that only authorized changes are made
to the system and that the appropriate version of the system goes into
operation
Operations Controls
Operations controls are the policies, procedures, and technology established to
ensure that data centers are operated in a reliable fashion. Included among these
controls are:
1. Controls over access to the data center
2. Control over operations personnel
3. Control over maintenance of computer equipment
4. Control over archival storage
Physical Protection of Data Centers
Operations controls in data centers must be supplemented by a set of controls
that will protect these centers from the elements and from environmental attacks.
Some of these controls include:
1. Environmental controls (air conditioning, humidification etc.) as required by
the equipment.
2. Emergency power sources must be available. A battery-based
uninterruptible power supply (UPS) should be installed to provide
continuous operation in case of total or partial power failure.
3. The more sensitive the data, the greater the requirement for shielding
electromagnetic radiation so that it cannot be detected outside the data center.
Hardware Controls
A computer's central processor contains circuitry for detection and, in some
cases, correction of certain processing errors. Some of these include:
1. Parity check in which each byte of data in storage contains an additional bit,
called a parity bit, which helps detect an erroneous change in the value of a
single bit during processing.
2. Processor hardware usually has at least two states:
• Privileged state - in which any operation can be performed. A user cannot
enter privileged state, as it is reserved for system software.
• User state - in which only some operations can be done.
3. Fault-tolerant computer systems - these systems continue to operate after
some of their processing components fail. Fault-tolerant computer systems
are built with redundant components; they generally include several
processors in a multiprocessing configuration. If one of the processors fails,
the other(s) can provide degraded, yet effective, service.
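The parity check described in item 1 above can be sketched in a few lines. This toy example uses even parity (a convention chosen here for illustration) to show how an erroneous change to a single bit is detected:

```python
def parity_bit(byte: int) -> int:
    """Even-parity bit: 1 if the byte has an odd number of 1-bits, else 0."""
    return bin(byte).count("1") % 2

def check(byte: int, stored_parity: int) -> bool:
    """Recompute the parity and compare it with the stored parity bit."""
    return parity_bit(byte) == stored_parity

data = 0b10110010              # four 1-bits -> even parity, parity bit = 0
p = parity_bit(data)
assert check(data, p)          # the intact byte passes the check

corrupted = data ^ 0b00000100  # flip a single bit during "processing"
assert not check(corrupted, p) # the single-bit error is detected
```

Note that parity detects a single flipped bit but cannot say which bit flipped, and two simultaneous flips cancel out; that is why error-correcting schemes add more redundancy than one bit.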
Identification, Authentication, and Firewalls
Controlling Access to Corporate Computer Systems
In today's computing environment, users as well as interlopers may attempt to
access a computer system from virtually anywhere. We need to ensure that only
authorized accesses take place.
Characteristics of identification and authentication
1. A user first identifies themselves to the system, typically with a name or an
account number.
2. The system then looks up the authentication information stored for the
identified user so that it can be checked.
3. The system requests the user to provide a password or another means by
which they can be authenticated.
A variety of security features are implemented to increase the effectiveness of
passwords. The features include:
1. Regular and frequent password changes
2. Use of a combination of letters and digits
3. Prevention of the use of a common word, easily associated with the user
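The identify-then-authenticate sequence above can be sketched with a salted password hash. The account number, password, and helper names here are invented for illustration; the hashing approach (PBKDF2 with a per-user salt) is one common way to store passwords so that the stored value cannot simply be read back:

```python
import hashlib
import hmac
import os

def make_entry(password: str):
    """Store a random salt and a slow salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Step 1: the user identifies themselves with a name or account number.
users = {"acct-1001": make_entry("Tr0ub4dor&3")}

def authenticate(account: str, password: str) -> bool:
    # Step 2: look up the stored authentication information for that identity.
    entry = users.get(account)
    if entry is None:
        return False
    salt, digest = entry
    # Step 3: check the supplied password against the stored digest.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

assert authenticate("acct-1001", "Tr0ub4dor&3")
assert not authenticate("acct-1001", "wrong-password")
```

The constant-time comparison (`hmac.compare_digest`) and the iteration count both exist to slow down guessing attacks, which complements the password-quality rules listed in the text.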
Biometric security features are also implemented. These systems rely on using
the personal characteristics. Features include:
1. Voice verification
2. Fingerprints
3. Hand geometry
4. Signature dynamics
5. Keystroke analysis
6. Retina scanning
7. Face recognition
8. Genetic pattern analysis
A firewall is a hardware and software facility that prevents unauthorized access to
a firm's intranet from the public Internet, while still allowing the firm's users to
access the Internet. The purpose of a firewall is to ensure that only authorized
traffic passes through.
Encryption: Controlling Access to Information
A different way to prohibit access to information is to keep it in a form that is
not intelligible to an unauthorized user. Encryption is the transformation of data
into a form that is unreadable to anyone without an appropriate decryption key.
Encryption is gaining particular importance as electronic commerce over
telecommunications networks is gaining momentum.
Encryption renders access to encoded data useless to an interloper who has
managed to gain access to the system by masquerading as a legitimate user, or to
an industrial spy who can employ a rather simple receiver to pick up data sent over
a satellite telecommunications link. Thus, the technique is important not only in
the protection of the system boundary but also in the communications and
database controls.
The two most important encryption techniques are:
1. Private-key Data Encryption Standard (DES)
2. Public-key encryption
Encryption is scrambling data, or any text in general, into a cipher that can be
decoded only if one has the appropriate key (i.e., bit pattern). It renders the encoded
data useless to an interloper. The major disadvantage of the DES is that keys must
be distributed in a secure manner. Since the keys must be changed frequently, this
represents significant exposure. Also, a prior relationship between the sender and
the receiver is necessary in order for them to share the same private key.
In a public-key system, two keys are needed to ensure secure transmission: one is
the encoding key and the other is the decoding key. Because the secret decoding
key cannot be derived from the encoding key, the encoding key can be made
public; such systems therefore do not require secure distribution of keys between
the parties prior to their communication. The drawback of public-key encryption
and decryption is that they are more time-consuming than private-key systems,
and can significantly degrade the performance of transaction processing systems.
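The private-key versus public-key distinction can be made concrete with two toy sketches. The XOR cipher stands in for any symmetric scheme such as DES (it is not DES itself), and the public-key half uses the classic small-prime textbook RSA numbers (p=61, q=53); real systems use standard libraries and far larger keys:

```python
# Private-key (symmetric) idea: the SAME shared secret both encrypts and
# decrypts, which is exactly why the key must be distributed securely.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"
ciphertext = xor_cipher(b"PAYROLL TOTALS", shared_key)
assert xor_cipher(ciphertext, shared_key) == b"PAYROLL TOTALS"

# Public-key idea (textbook RSA, p=61, q=53, so n=3233): the encoding
# key (e, n) can be published; only the secret exponent d decodes.
n, e, d = 61 * 53, 17, 2753
m = 65                    # a message block represented as a number < n
c = pow(m, e, n)          # anyone may encrypt with the public key
assert pow(c, d, n) == m  # only the holder of d recovers the message
```

The sketch also shows the performance point made above: the modular exponentiation in the public-key half is far more work per block than the XOR in the symmetric half.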
Controls of Last Resort: Disaster Recovery Planning
Two controls of last resort should be available:
1. Adequate insurance for the residual risk
2. A disaster recovery plan
who work for the organization itself. In addition to performing financial audits to
determine the financial health of various corporate units, internal auditors
perform operational audits to evaluate the effectiveness and efficiency of IS
operations.
A trend has developed toward strengthening internal auditing as a means
of management control. An independent audit department exists in most of the
country's large businesses. Such a department now often includes a group that
performs information systems audits as well.
Information systems have to be auditable by design. This means that every
transaction can be traced to the total figures it affects, and each total figure can be
traced back to the transactions which gave rise to it. In other words, an audit trail
must exist, making it possible to establish where each transaction originated and
how it was processed. Transaction logs provide a basic audit trail.
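An auditable-by-design system, where every total can be decomposed back into the transactions that gave rise to it, can be sketched as a transaction log keyed to the account each entry affects. The field names and sample entries below are invented for illustration:

```python
# Each posted transaction is logged with an id, its point of origin, the
# account total it affects, and the amount. Nothing updates a total directly;
# totals are always derived from the log, so the audit trail is built in.
log = [
    {"txn": "T-101", "origin": "branch-A", "account": "sales", "amount": 500.0},
    {"txn": "T-102", "origin": "branch-B", "account": "sales", "amount": 300.0},
    {"txn": "T-103", "origin": "branch-A", "account": "rent",  "amount": 120.0},
]

def account_total(account: str) -> float:
    """Forward direction: from transactions to the total they affect."""
    return sum(t["amount"] for t in log if t["account"] == account)

def trace(account: str):
    """Audit trail: which transactions produced this total, and from where?"""
    return [(t["txn"], t["origin"], t["amount"])
            for t in log if t["account"] == account]

assert account_total("sales") == 800.0
assert trace("sales") == [("T-101", "branch-A", 500.0),
                          ("T-102", "branch-B", 300.0)]
```

This is the property the auditors exploit in compliance testing: they can pick sample transactions and trace them forward to totals, or start from a total and trace back to its origins.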
How is an Information Systems Audit Conducted?
IS auditors primarily concentrate on evaluating information system controls,
on the assumption that if a system has adequate controls that are consistently
applied, then the information produced by it is also reliable. They perform both
scheduled and unscheduled audits.
Characteristics of compliance auditing include:
1. Auditors study the information system and its documentation, inputs and
outputs, and interview the key users and IS personnel. They study both
the general and application controls in detail.
2. Auditors select a sample of the transactions processed by the system and
trace their processing from the original documents on to the totals they
affect.
3. Auditors replicate the processing done by the system, and if the results they
obtain are in compliance with those produced by the system, they gain some
confidence in the controls the system is supposed to have.
Characteristics of substantive test auditing include:
1. Substantive testing is used to independently validate the totals contained in
the financial records.
2. The extent of testing depends on the results of compliance testing. If controls
are found operative, then a limited substantive testing will be sufficient. In
areas where controls were inadequate, extensive validation of financial totals
is necessary.
19.4 REVISION POINTS
• Managing information system overview
• Threats to information system
• Protection using anti-virus
LESSON - 20
EVALUATING IT INVESTMENT
20.1 INTRODUCTION
Evaluating an IT investment is the weighing-up process of rationally assessing
the value of any in-house IT assets, and of any acquisition of software or
hardware, that is expected to improve the business value of an organization's
information systems.
20.2 OBJECTIVES
• To get in-depth knowledge about IT investment
• To study the phases of investment control process
• To discuss various investment criteria variable
• To study the techniques for evaluating IT proposal
20.3 CONTENTS
20.3.1 IT Investment – An overview
20.3.2 Organizational Attributes for Successful IT Investments
20.3.3 Phases of the Investment Control Process
20.3.3.1 Selection
20.3.3.2 Control
20.3.3.3 Evaluation
20.3.4 Techniques for Evaluating IT Investment Proposals
20.3.5 Managerial Issues of Information Systems
20.3.1 IT INVESTMENT – AN OVERVIEW
Introduction
The IT investment process of an organisation should match the culture and
organizational structure of the organisation. The overriding objective is that senior
managers be able to systemically maximize the benefits of IT investments through
use of the IT investment process.
comparisons of costs, benefits, risks, and returns across project proposals. The four
step selection process is:
Step 1 -- screen IT project proposals;
Step 2 -- analyze risks, benefits, and costs;
Step 3 -- prioritize projects based on risk and return; and
Step 4 -- determine the right mix of projects and make the final cut.
feasibility. If the answer to any of these questions is no, a project should not receive
consideration and should be returned to the originating unit. Projects that meet
these criteria should continue to Step 2 where more rigorous analysis is performed.
Step 2: Analyze Project Risks, Benefits, and Costs
At this point, the proposals should be reduced to those with the highest
potential to support the organisation's critical mission and/or operations.
A detailed evaluation of each proposal's supporting analyses should be
conducted and summarized so that senior management can begin examining
tradeoffs among competing proposals that are to occur in the next step. At this
stage, a technical review team should evaluate the soundness of the project's
benefit-cost and risk analyses. In particular, the review team should examine how
the project is expected to improve program or operational performance and the
performance measures that will be used to monitor expected versus actual results.
Step 3: Prioritize Projects Based on Risk and Return
During this phase, IT projects are rigorously compared against one another to
create a prioritized list of all investments under consideration.
After completing analysis, the organisation should develop a ranked listing of
information technology projects. This listing should use expected risks and benefits
to identify candidate projects with the greatest chances of effectively and efficiently
supporting key mission objectives within given budget constraints.
One approach to devising a ranked listing of projects is to use a scoring
mechanism that provides a range of values associated with project strengths and
weaknesses for risk and return issues. Table 1, below, shows an example of how
individual risk and return factors might be scored. This example is a hybrid table
drawn from multiple best practices organizations. Higher scores are given to
projects that meet or exceed positive aspects of the decision criteria. Additionally, in
this example, weights have been attached to criteria to reflect their relative
importance in the decision process. In order to ensure consistency, each of the
decision criteria should have operational definitions based on quantitative or
qualitative measures.
Table 1: Example of Decision Criteria and Scoring Process Used to Rank IT Projects
(Column layout not recoverable: the table lists overall return factors alongside the
weights attached to each; the weights sum to 100%.)
A scoring and ranking process such as the one depicted in Table 1 may be
used more than once and in more than just this step to "winnow" the number of
projects that will be considered by an executive decision-making body down to the
best possible choice.
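A weighted scoring mechanism of the kind Table 1 describes can be sketched as follows. The criteria, weights, and project scores are invented for illustration; the only constraints taken from the text are that weights reflect relative importance and sum to 100%, and that higher scores favour a project:

```python
# Illustrative decision criteria with weights summing to 1.0 (= 100%).
weights = {"mission_impact": 0.4, "cost_savings": 0.3, "risk": 0.3}

# Raw scores per project on each criterion (0-10, higher is better; for
# "risk" a higher score means LOWER risk, so risky projects score low).
projects = {
    "new database":   {"mission_impact": 9, "cost_savings": 6, "risk": 7},
    "funder website": {"mission_impact": 5, "cost_savings": 3, "risk": 9},
    "legacy rewrite": {"mission_impact": 8, "cost_savings": 8, "risk": 2},
}

def weighted_score(scores: dict) -> float:
    """Sum of each criterion score multiplied by its weight."""
    return sum(weights[c] * s for c, s in scores.items())

# The ranked listing used to winnow projects for the decision-making body.
ranked = sorted(projects, key=lambda p: weighted_score(projects[p]),
                reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(projects[name]):.2f}")
```

Running the sketch ranks "new database" first; "legacy rewrite" scores well on return but is dragged down by its risk score, which is exactly the high-return/high-risk middle group the text says deserves the closest scrutiny.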
An outcome of such a ranking process might produce three groups of projects:
• Likely winners -- One group, typically small, is a set of projects with high
returns and low risk that are likely "winners."
• Likely drop-outs -- At the opposite end of the spectrum, a group of high
risk, low return projects usually develops that would have little chance of
making the final cut.
• Projects that warrant a closer look -- In the middle is usually the largest
group. These projects have either a high return/high risk or a low
return/low risk profile. Analytical and decision-making energy should be
focused on prioritizing these projects in the middle group, where decisions
will be more difficult to make.
At the end of this step, senior managers should have a prioritized list of IT projects
and proposals with supporting documentation and analysis.
Step 4: Determine the Right Mix of Projects and Make the Final Cut
During this phase, an executive level decision making body determines which
projects will be funded based on the analyses completed in the previous steps.
Determining the right mix of projects to fund is ultimately a management
decision that considers the technical soundness of projects, their contribution to
mission needs, performance improvement priorities, and overall funding levels that
will be allocated to information technology.
Senior management should consider the following balancing factors when
arriving at a final resource allocation and project mix.
• Strategic improvements vs. maintenance of current operations
Efforts to modernize programs and improve their mission performance may
require significant investments in new information systems. Agencies also
have operational systems on which they depend to operate their programs
as currently structured. These older systems may need to be
maintained. A balance should be struck between continuing to invest in
older systems and modernizing or replacing them. It may be helpful to track
over time the percentage of funding spent on strategic/development vs.
maintenance/operations projects.
• New projects vs. ongoing projects
The senior managers who choose the final mix of projects to be funded must
periodically re-examine projects that have already been approved to ensure
that they should still be supported. There may be concerns about a project's
implementation, such as greater-than-expected delays, cost overruns, or
failures to provide promised benefits. If new projects are more consistent
with an organisation's strategic initiatives, offer greater benefits for
equivalent cost, or present fewer risks, the old projects may need to be
canceled.
• High vs. low risk
If a portfolio is managed only to minimize risk, senior management may
unnecessarily constrain an organisation's ability to achieve results. High
risk, high return projects can significantly enhance the value to the public of
an organisation's IT spending, provided the organisation has the capability
and carefully manages the risks. Most organizations, however, can only
handle a limited number of such projects. As a result, senior management
must consciously help balance the amount of risk in the portfolio against
the organisation's capabilities and ability to manage risk.
• Impact of one project on another
Now that federal agencies are trying to integrate their systems, every new
project proposal is likely to affect, or be affected by, other project proposals,
ongoing projects, or current systems. Senior management must recognize
the context in which the new project will be placed and make decisions
accordingly. For example, one best practice company has established as a
risk the number of dependencies between a new project and other
projects/systems.
(e) Innovation:
Information technology may help innovate the business activity by creating
new or alternative functions, products and services, and open up new niche
markets, offering a competitive edge to the enterprise. In such a case, the ROI is
perhaps less important, and the value of being first, or the risk of not being there
or of having to face failure, becomes more important because in such cases, the
question is less of cost
and more of survival.
Such applications are strategic, and quantification of benefits from such
applications is difficult. Equally difficult is quantifying the costs of changes in
business processes that may be necessitated by the innovation.
20.3.5 THE MANAGERIAL ISSUES OF INFORMATION SYSTEMS
Although information technology is advancing at a blinding pace, there is
nothing easy or mechanical about building and using information systems. There
are major challenges confronting managers:
The information systems investment challenge: How can organizations
obtain business value from their information systems? Earlier in this chapter we
described the importance of information systems as investments that produce value
for the firm. We showed that not all companies realize good returns from
information systems investments. It is obvious that one of the greatest challenges
facing managers today is ensuring that their companies do indeed obtain
meaningful returns on the money they spend on information systems. It’s one thing
to use information technology to design, produce, deliver, and maintain new
products. It’s another thing to make money doing it. How can organizations obtain
a sizable payoff from their investment in information systems? How can
management ensure that information systems contribute to corporate value?
Senior management can be expected to ask these questions: How can we
evaluate our information systems investments as we do other investments? Are we
receiving the return on investment from our systems that we should? Do our
competitors get more? Far too many firms still cannot answer these questions.
Their executives are likely to have trouble determining how much they actually
spend on technology or how to measure the returns on their technology
investments. Most companies lack a clear-cut decision-making process for deciding
which technology investments to pursue and for managing those investments.
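Senior management's questions about returns can be made concrete by appraising an IT project with the same discounted cash-flow arithmetic used for any other capital investment. The sketch below is a minimal illustration, not a method prescribed by the text: the cash-flow figures, the 10% hurdle rate and the five-year horizon are all assumptions.

```python
# Hypothetical appraisal of an IT investment using standard
# capital-budgeting measures (all figures are illustrative).

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def simple_roi(cashflows):
    """Undiscounted return on investment over the whole horizon."""
    outlay = -cashflows[0]
    return (sum(cashflows[1:]) - outlay) / outlay

def payback_years(cashflows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None  # never pays back within the horizon

# Year 0 outlay of 500, then five years of net benefits (assumed figures).
flows = [-500.0, 120.0, 150.0, 180.0, 180.0, 160.0]

print(round(npv(0.10, flows), 1))   # discounted value at the hurdle rate
print(round(simple_roi(flows), 2))  # undiscounted ROI over the horizon
print(payback_years(flows))         # years until outlay is recovered
```

Running the same arithmetic on competing proposals gives management a comparable basis for the question "are we receiving the return we should?", though, as the text notes, strategic applications may justify investment even when such figures are unimpressive.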
The strategic business challenge: What complementary assets are needed to
use information technology effectively? Despite heavy information technology
investments, many organizations are not realizing significant business value from
their systems, because they lack—or fail to appreciate—the complementary assets
required to make their technology assets work. The power of computer hardware
and software has grown much more rapidly than the ability of organizations to
apply and use this technology. To benefit fully from information technology, realize
genuine productivity, and become competitive and effective, many organizations
actually need to be redesigned. They will have to make fundamental changes in
employee and management behavior, develop new business models, retire obsolete
work rules, and eliminate the inefficiencies of outmoded business processes and
organizational structures. New technology alone will not produce meaningful
business benefits.
The globalization challenge: How can firms understand the business and
system requirements of a global economic environment? The rapid growth in
international trade and the emergence of a global economy call for information
systems that can support both producing and selling goods in many different
countries. In the past, each regional office of a multinational corporation focused on
solving its own unique information problems. Given language, cultural, and
political differences among countries, this focus frequently resulted in chaos and
the failure of central management controls. To develop integrated, multinational,
information systems, businesses must develop global hardware, software, and
communications standards; create cross-cultural accounting and reporting
structures; and design transnational business processes.
The information technology infrastructure challenge: How can
organizations develop an information technology infrastructure that can support
their goals when business conditions and technologies are changing so rapidly?
Many companies are saddled with expensive and unwieldy information technology
platforms that cannot adapt to innovation and change. Their information systems
are so complex and brittle that they act as constraints on business strategy and
execution. Meeting new business and technology challenges may require
redesigning the organization and building a new information technology (IT)
infrastructure.
Creating the IT infrastructure for a digital firm is an especially formidable task.
Most companies are crippled by fragmented and incompatible computer hardware,
software, telecommunications networks, and information systems that prevent
information from flowing freely between different parts of the organization. Although
Internet standards are solving some of these connectivity problems, creating data
and computing platforms that span the enterprise—and, increasingly, link the
enterprise to external business partners—is rarely as seamless as promised. Many
organizations are still struggling to integrate their islands of information and
technology.
Dependence on technological experts: Technological expertise is a
precondition for development and migration of new and complicated technology in
the institution, but the dependence on such expertise also represents a problem to
management. Managers cannot themselves have the necessary insight in every
technical question, and the use of internal and external advisers is crucial.
Technological experts tend to agree on putting ambitions high, but at the same time
they almost notoriously tend to disagree on specific choices of hardware, software
and methodology. On several occasions reluctance from management and the rest
of the organisation has saved it from wasting money and manpower, but this may of
course also prevent important decisions from being taken and slow down necessary
changes. Diverging opinions in the organisation require clear decisions by top
management, and when the decision has been taken, it must be followed by
information and necessary resources.
Organisation: A decentralised organisation of IT experts is also an advantage
for their participation. However, questions of technological infrastructure (not only
hardware such as computers and networks) cannot be successfully solved without
a close co-operation between experts throughout the whole organisation. In many
cases the organisation as such will be better off with solutions that for some
projects may be considered suboptimal. Hence, IT requires strong co-ordination,
and this is of course a larger challenge in a decentralized organization than in a
centralised one. Another argument for strong co-ordination is the dependence on
specialists which constitutes a scarce resource.
Choice of technology: It may be risky to choose the most recent version of
technology if this is not well tried. Technology and especially software should be
purchased and not developed within the institution if convenient systems are
available. Open systems that communicate with each other and on which it is easy
to get support in the market might be preferred to more specialised systems even if
the latter are regarded as better. There is a tendency in most technological
environments to develop solutions in-house, which is natural since self-developed
software will more easily fulfil the specifications, and development is more
interesting than shopping. However, in addition to being expensive (when working
hours are taken into account), self-developed software is vulnerable since it might
be dependent on support from one or a few persons.
External advice: The issue of using external consultants is closely linked to
the choice between buying and developing. Technicians are often reluctant to ask
for external support for the same reason as they prefer to develop solutions
themselves. On the other hand there are many examples of too extensive use of
external consultants in many institutions which lack technological expertise. A
mixture is often optimal: one should use external experts when they obviously have
more experience than internal staff (or when internal staff are overbooked), but this
use requires a certain level of experience within the institution, to avoid being
dependent on the consultants and to implement the systems and ensure follow-up
of results. To management, external experts will often represent a useful "second
opinion" in questions where their own staff disagrees.
Human resources: Principles and plans for management and use of IT may be
good, but we will attain little if we do not have good human resources, even if
external consultants are used in an optimal way. The labour market for IT
specialists has varied over the years.
Ethics and security: The responsibility and control challenge: How can
organizations ensure that their information systems are used in an ethically and
socially responsible manner? How can we design information systems that people
can control and understand? Although information systems have provided
enormous benefits and efficiencies, they have also created new ethical and social
problems and challenges. Chapter 5 is devoted entirely to ethical and social issues
raised by information systems, such as threats to individual privacy and
intellectual property rights, computer-related health problems, computer crimes,
and elimination of jobs. A major management challenge is to make informed
decisions that are sensitive to the negative consequences of information systems as
well as to the positive ones.
Managers face an ongoing struggle to maintain security and control. Today,
the threat of unauthorized penetration or disruption of information systems has
never been greater. Information systems are so essential to business, government,
and daily life that organizations must take special steps to ensure their security,
accuracy, and reliability. A firm invites disaster if it uses systems that can be
disrupted or accessed by outsiders, that do not work as intended, or that do not
deliver information in a form that people can correctly use. Information systems
must be designed so that they are secure, function as intended, and so that
humans can control the process. Managers will need to ask: Can we apply high-
quality assurance standards to our information systems, as well as to our products
and services? Can we build systems with tight security that are still easy to use?
Can we design information systems that respect people’s rights of privacy while still
pursuing our organization’s goals? Should information systems monitor employees?
What do we do when an information system designed to increase efficiency and
productivity eliminates people’s jobs?
20.4 REVISION POINTS
• Attributes for successful IT investment
• Phases of investment control process
• IT investment evaluating techniques
• Managerial issues of information system
20.5 INTEXT QUESTIONS
1. What are the attributes for successful IT investment?
2. List out the overall risk factors and return factors of IT investment.
3. What are the factors considered when arriving at a final resource allocation?
4. Write a short note on cost benefit analysis.
5. Write a brief note on value acceleration.
6. Discuss the managerial issues in information system.
20.6 SUMMARY
• The critical attributes for successful IT investment are senior management
attention, overall mission focus, and a comprehensive portfolio approach to
IT investment.
• The selection phase combines rigorous technical evaluations of project
proposals with executive management business knowledge, direction, and
priorities.
• During the control phase, senior management regularly monitors the
progress of ongoing IT projects against projected cost, schedule, performance
and delivered benefits.
LESSON - 21
INTRODUCTION TO BUSINESS CONVERGENCE
21.1 INTRODUCTION
Convergence is a term being used more and more across different scenarios,
but in general it is used to describe the fusing, or coming together, of separate
entities to form a unified whole. Whether that is the converging of technologies and
cross-platform experiences (e.g. digital convergence of TV, mobile devices, wearable
devices, virtual reality, etc.) or the converging of the business as a whole, the
principle remains the same. I like to settle on this definition for Business
Convergence: Convergence is the act of working together to create a unified whole
to achieve a set purpose.
Within a Converged business, every department and individual working within
the organisation works harmoniously to fulfill the same goal(s). Additionally, the
focus of the converged business is always value. Value comes in many forms:
economic and financial, sustainability, ecological, social contribution, etc. A
Converged business is therefore highly efficient and competitively focused as it
single-mindedly pursues its strategic aims to fulfill its vision and purpose. Another
one of the core tenets of Convergence is transparency. Transparency of purpose and
strategy is paramount to the clarification of value; Convergence allows us to clearly
answer, “what does value mean to our business?”
With Business Convergence, and the clarity and communication of purpose
and value permeating throughout the business, the entire business understands
the course corrections that become necessary to allow it to flex and adapt to market
demands and opportunities.
21.2 OBJECTIVES
• To introduce the concept of convergence and business convergence
• To understand the concept of business convergence with information
technology
• To identify the profile of an ideal BIT consultant.
21.3 CONTENTS
21.3.1 Business Converging with IT
21.3.2 Business – IT Consulting Convergence
21.3.3 Definition of Business and IT (BIT) Consulting
21.3.4 Profile of an Ideal BIT Consultant
21.3.1 BUSINESS CONVERGING WITH IT
Information technology is considered as a means through which the business
achieves its ends. No doubt, ends are always important but to reach ends, the
means, in our case information technology, is more exciting. In the ever-changing
world of business, this very tool not only helps reach ends but sometimes also
opens up new opportunities and possibilities. Whether it was the resurrection of
Apple as one of the most valuable companies in the world or the emergence of Google and
vehicle functions, but he has no idea of repairing major faults that lead to the
vehicle breakdown. Similarly, the business consultant is expected to have
knowledge of how various business IT tools function. But for problems in these
tools, he/she has to fall back on IT experts.
We do not want to give the impression that the BIT consultant is a new avatar
who would replace everything that has been used so far. It is just a new
perspective that will increasingly be used in the consulting world till it becomes a
new lingo practiced by most readers of this book.
Various themes of BIT consulting can be grouped as shown below:
Business and IT (BIT) theme
• IT strategy for business
• IT planning, IT application portfolio planning
• E-business strategy, including E-commerce and M-commerce strategy
• COTS-enabled business process design (CBPD)
• Cost management: integrated supply chain
• Customer-centric: CRM process redesign
• IT-initiated organisation change
Focused IT Management Theme
• IT architecture
• IT service management
• IT governance
• IT process quality improvement consulting
• IT knowledge management
• Business continuity and disaster recovery consulting
• Planning, budgeting and monitoring IT programmes
• IT security, data security and prevention of fraud and intrusion
You will notice that all the above listed BIT consulting themes (the list is not
exhaustive but fairly representative) require a consultant to have a very high
component of business understanding. A successful BIT consultant need not have
extensive software development experience such as programming or coding. He has
to focus on business problems and what technology can do, rather than on how the
technology works or how to make it work.
The following figure shows the convergence of business and IT consulting over the
past few decades. In the future they will be like a woven cloth, where one yarn acts
as the warp thread and the other as the weft thread. Focusing on only one (either
business or IT) will not be enough. While pure-play business consulting may yield a
precise business diagnostic, the BIT consulting approach brings results closer to
realisation.
The following figure shows a simple framework for understanding the importance
of the interdependence of these two consulting disciplines.
Based on our discussion so far, we can draw the conclusion that BIT
consulting is an integrated approach for solving a business problem using a
combination of strategy, process, technology and people. A typical BIT consultant
may start the first level of his engagement by interacting with the CEO and his/her
direct reports to address issues of a strategic nature. Some of these issues could be
defining new market segments, product entry strategy, mergers and acquisitions,
organization redesign and defining growth strategies and objectives. Addressing all
these issues would also require technology intervention.
The second level of BIT consulting engagement happens with the operational
managers to redesign company processes and organisation change initiatives. Once
the processes are designed and agreed upon, the next phase would begin to
develop an IT strategy and solution road map for implementing them. This will also
need IT processes and infrastructure to be designed and provisioned for, in line
with the future growth strategies of the business. BIT consultants, usually with a
clear industry focus, are technology agnostic. They would build their focus on
specific functional areas such as marketing, human resources, banking, insurance
and financial services, public sector and government, and healthcare.
21.3.4 PROFILE OF AN IDEAL BIT CONSULTANT
Some of the personality traits and key skills required in BIT consultants are as
follows:
Objectivity
BIT consultants must be objective and impartial while making their
recommendations. They must uphold the highest standards of professional ethics to
be seen as a role model. The quality of their output —be it analysis of facts,
diagnostic presentation, communication and report presentation— should be of very
high order.
Team Work
Consulting is team work where every member should bring a specialised
ability and avoid transgressing areas that are beyond his/her range of expertise.
Simple Language
BIT consultants should avoid using complex terminologies so that their
analysis and recommendations are easy to understand and implement by the
clients. They must practice writing using active language, avoiding passive
expressions.
Keeping Aloof from Client’s Organisation Politics
BIT consultants are typically expected to address the client’s issues/problems,
many of which would be impacting the client staff and management power
equation. Despite their knowledge of different aspects of the client organisation, the
consultants should stay clear of the organisation’s politics.
Well Groomed Look
BIT consultants must have a pleasant appearance and should be groomed for
business-like behaviour. They must avoid overdressing or under-dressing.
Sometimes they need to dress according to the occasion or as per the dressing
norm of the client’s organisation.
Continuous Learning
To enhance their continuous learning, BIT consultants should join technical or
professional management bodies such as management consulting institutes and
management associations. They must also participate as guest lecturers in
management institutes or public seminars to be seen and heard. It will also help
them gain confidence and experience on diverse subjects and thus sharpen their
thinking. They should also publish articles and research reports and share them
with their clients for the latter’s education. They should read a lot and should have
good retention. A BIT consultant must build a repertoire of articles, books
and websites for use in future assignments.
Creating a Niche for the Self or the Client
BIT consultants must strive to carve out a unique personality professionally.
They should be able to recognise business trends and devise consulting services to
help clients follow these trends. They must strive to establish best practices and
expand their sphere of activity.
Full of enthusiasm
Finally, a BIT consultant must bring enthusiasm to his/her engagement with
the client.
21.4 REVISION POINTS
• Business Convergence
• Definition
• Business convergence and IT
• Ideal business convergence IT (BIT) consultant
21.5 INTEXT QUESTIONS
1. Write short note on convergence.
2. What do you mean by business convergence with IT?
3. List down the various BIT consulting themes.
21.6 SUMMARY
• Convergence is the general term used to describe the fusing, or coming
together, of separate entities to form a unified whole.
• Information technology is considered as a means through which the
business achieves its ends.
• The new consulting theme of IT consulting has now grown into a major
consulting business.
• A successful BIT consultant need not have extensive software development
experience such as programming or coding.
• BIT consultants must be objective, interested in continuous learning and
cooperative in team work.
21.7 TERMINAL EXERCISE
1. ________________ is the term used to describe the fusing, or coming together,
of separate entities to form a unified whole
2. _______________________ plays an important role in the ever changing world.
3. _______________ is an essential characteristic required to survive in the post-
industrial era.
4. BIT stands for ________________
5. _________________ and ___________________ are the important personality traits
a BIT consultant should possess.
21.8 SUPPLEMENTARY MATERIALS
1. https://www.linkedin.com/pulse/business-convergence-agile-pursuit-
purpose-value-strategic-kent
2. https://www.strategyblocks.com/blog/business-convergence-strategy/
21.9 ASSIGNMENTS
1. Enumerate the personality traits and key skills required in BIT consultants.
21.10 SUGGESTED READING/REFERENCE
1. Sanjiva Shankar Dubey, Management and IT Consultancy, McGraw Hill,
New Delhi, 2012.
2. https://foundry4.com/industry-convergence-going-beyond-business-
boundaries
21.11 LEARNING ACTIVITIES
1. Browse the net and identify the companies involved in convergence. Learn
what is happening in that industry.
21.12 KEYWORDS
Business Convergence
Information Technology
BIT – Business and IT
LESSON - 22
BIT CONSULTING PROCESS
22.1 INTRODUCTION
BIT consulting is an integrated approach for solving a business problem using
a combination of strategy, process, technology and people. A typical BIT consultant
may start his engagement by interacting with the CEO and his/her direct reports
to address issues of a strategic nature. Some of these issues could be defining new
market segments, product entry strategy, mergers and acquisitions, organization
redesign and defining growth strategies and objectives. Addressing all these issues
would also require technology intervention.
22.2 OBJECTIVES
• To understand the process of BIT consulting
• To recognize the importance of preparing and entering into contract
22.3 CONTENTS
22.3.1 BIT Consulting: An Integrated Stepped Approach
22.3.2 Proposal Development
22.3.3 Concluding and Entering Into the Contract
22.3.4 Executing the Consulting Engagement - Steps
22.3.1 BIT CONSULTING: AN INTEGRATED STEPPED APPROACH
Based on our discussion so far, we can draw the conclusion that BIT
consulting is an integrated approach for solving a business problem using a
combination of strategy, process, technology and people. A typical BIT consultant
may start the first level of his engagement by interacting with the CEO and his/her
direct reports to address issues of a strategic nature. Some of these issues could be
defining new market segments, product entry strategy, mergers and acquisitions,
organization redesign and defining growth strategies and objectives. Addressing all
these issues would also require technology intervention.
The second level of BIT consulting engagement happens with the operational
managers to redesign company processes and organisation change initiatives. Once
the processes are designed and agreed upon, the next phase would begin to
develop an IT strategy and solution road map for implementing them. This will also
need IT processes and infrastructure to be designed and provisioned for, in line
with the future growth strategies of the business. BIT consultants, usually with a
clear industry focus, are technology agnostic. They would build their focus on
specific functional areas such as marketing, human resources, banking, insurance
and financial services, public sector and government, and healthcare.
Consulting Process: From Start to End
A typical consulting process consists of the following major sub-processes:
1. Prospecting, qualifying and preparing: This sub-process involves seeking out
clients, marketing consulting services, preparing a list of prospects,
qualifying them and preparing for the first meeting.
feel that they now know enough to do the remainder of the work on their own. This
is usually a false proposition in most cases, as the execution of the consulting
engagement will lead to detailed recommendations and action plans that are only
presented as bulleted points in the fact-finding study. Many a time a totally new
insight is obtained during the consulting execution. In addition, a lot of work goes
into detailing out the recommendations during the consulting engagement, which
requires deep experience of consultants. It is like this: I may know the path, but that
alone will not take me to the destination; I have to have the ability to walk as well! I had
been in the same situation with some clients where such apprehension did
arise. My take in all such cases was that by copying a recipe book you do not get a
good dish made! You need the skills to actually realise what may be written in the
proposal document, skills that come through deep experience of the consulting
process. This insight is worth noting by our readers who may be on the other side
of the consulting table as recipients or buyers of consulting services.
If the consultant's proposal is not up to the mark, or if the consultant has not
been able to demonstrate the confidence of doing the task, then rejecting the
proposal by the client is totally justified. However, if the client managers think that
they have learnt enough during the proposal process to do things on their own and
would be able to save the consultant’s fee, then their assumption is too simplistic.
Many organisations who have adopted this approach later realised that the
consulting assignment is the unique process by which the organisation goes through
the learning. While it may lead to 100 pages of PowerPoint charts, the entire
exercise of producing them is most useful for the organisation.
Costing and Pricing the Consulting Engagement
This step may also be the time when the value of the engagement vs. the cost
of the engagement may be questioned. Typically, the engagement cost is based on
the amount of time spent by different consultants in its implementation. The client
may start calculating the cost based on the labour rate of consultants and the time
spent by them. The smart approach at this stage is to price the engagement on the
value it is likely to deliver rather than on the man-hour rate. A consulting engagement is
performed by a team consisting of full-time members and part-time members like
the partner, principal and subject matter experts. A blended rate is usually arrived
at to price the engagement. Table 4.1 shows a typical method of calculating the
blended rate.
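Table 4.1 is not reproduced here, but the arithmetic behind a blended rate is simple: weight each role's rate by the hours that role contributes, then divide total fees by total hours. The sketch below is only an illustration; the roles, rates and hours are hypothetical assumptions, not the figures from Table 4.1.

```python
# Hypothetical blended-rate calculation for a mixed consulting team
# of full-time and part-time members (partner, principal, experts).

team = [
    # (role, rate per hour, hours on the engagement) -- assumed values
    ("Partner",               250.0,  40),
    ("Principal",             180.0,  80),
    ("Subject matter expert", 150.0, 120),
    ("Consultant",            100.0, 400),
]

total_fees  = sum(rate * hours for _, rate, hours in team)
total_hours = sum(hours for _, _, hours in team)
blended_rate = total_fees / total_hours  # one rate for all billed hours

print(round(blended_rate, 2))
```

Quoting the engagement at this single rate times total hours reproduces the same fee as itemised billing while keeping the price simple for the client; value-based pricing, as the text recommends, would then move the final figure toward the value delivered rather than the hours consumed.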
1. Definition phase: Identifying the scope and defining the boundary of the
problem
2. Data gathering phase: Understanding the current situation, gathering data
and brainstorming for alternate ideas.
3. Diagnosis phase
• Defining objectives and performance measures
• Assessing the organisation
• Assessing the external environment
• Developing and testing hypothesis
• Arriving at the confirmed diagnosis
4. Solution and recommendation phase
• Generating alternative solutions
• Conducting cost benefit analysis for each solution
• Selecting the optimal and feasible solution
• Seeking management approval
5. Implementation planning and implementation management phase
• Planning the stages for implementation
• Implementing the solution
6. Evaluation of results
In this phase, the BIT consultant evaluates the results with respect to the
objectives set in phase 3(a).
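The cost-benefit and selection steps in phase 4 can be sketched as a simple ranking: estimate each alternative's cost and benefit, screen out infeasible options, and choose the feasible option with the highest net benefit. This is only an illustrative sketch; the solution names and figures below are hypothetical.

```python
# Hypothetical phase-4 comparison of alternative solutions.
# Costs, benefits and feasibility flags are illustrative estimates.

solutions = [
    # (name, estimated cost, estimated benefit, feasible?)
    ("Custom-built system",   900.0, 1500.0, False),  # fails a resource constraint
    ("COTS package",          400.0,  900.0, True),
    ("Process redesign only", 150.0,  350.0, True),
]

def net_benefit(solution):
    """Benefit minus cost for one (name, cost, benefit, feasible) tuple."""
    _, cost, benefit, _ = solution
    return benefit - cost

# Keep only feasible options, then pick the one with the best net benefit.
feasible = [s for s in solutions if s[3]]
best = max(feasible, key=net_benefit)

print(best[0])            # the optimal and feasible solution
print(net_benefit(best))  # its estimated net benefit
```

The chosen alternative and its net-benefit figure would then go to management for approval, as the phase list describes; in practice the benefit estimates would themselves come from the discounted cash-flow analysis discussed earlier in this unit.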
22.4 REVISION POINTS
• Consulting process
• Proposal development
• Entering into the contract
• Executing the consulting engagement.
22.5 INTEXT QUESTIONS
1. Discuss the steps in BIT consulting process.
2. List down the importance of a proposal.
3. Prepare a template of a proposal covering letter.
4. Write a short note on value based pricing in the context of BIT consulting.
22.6 SUMMARY
• BIT consulting engagement happens with the operational managers to
redesign company processes and organisation change initiatives.
• The consulting process begins with prospecting, qualifying and preparing.
LESSON - 23
all the parameters; the business needs may have changed; the requirements
visualised may have changed; or key people may have changed roles. In all these
cases, the BIT consultant’s job is to take necessary next steps to overcome the
apparent shortcomings of the original solution. Normally, the results evaluation is
done some time after the implementation of the solution.
23.3.7 RECOMMENDING AND IMPLEMENTING
The end-result of the consulting engagement would be the recommended
actions and their implementation. This sub-process covers detailing out the
recommendations and also, if the client desires, continues till these are implemented.
Roughly 50-60% of consulting recommendations may not include
implementation, especially if they pertain to areas like organisation policies,
structure and HR processes. However, in recent times more and more
consultants are participating in the implementation of consulting recommendations
as desired by the clients. This way the client is able to ensure that the onus of
achieving objectives lies with the same consulting firm, but with a different team
from the consulting firm having implementation skills. In this regard the
implementation of IT projects is a good example where the same consulting firm
takes the implementation responsibility as well.
23.3.8 IMPLEMENTATION PLANNING
The sub-process of the consulting project implementation starts with planning
for implementation. This is the step when the consultant and the client directly
agree on the recommendations and the way they would like to implement them. The
implementation of some recommendations (actions) can happen without the
consultant supervision, some with occasional consultant’s inputs or checkpoints
and some can only be done through full involvement of the original consultant
team. This is an exercise to ensure that recommendations are agreed upon and that
commensurate action plans receive commitment from the client management. It is
important that the client and the consultant agree on a process that articulates a
monitoring structure to maintain the constancy of purpose. Typically, this is done
in a planning session involving key client team members, key project members and
key stakeholders. Any implementation includes a fair amount of client education,
collection of opinions and feedback with all those involved. By doing proper
implementation planning, the consultant ensures that proper resources are
provided for and the client's participation and support have been enlisted. This is a
useful step to garner support, participation, and commitment from the client
organisation to proceed further. Many a time, due to a change in the leadership
within the client organisation during the consulting engagement (execution or
implementation), the process may be disrupted. But having a suitable governance
mechanism in place reduces the chances of the consulting contract being terminated
consequent to a change in the client management.
Implementation is the ultimate phase of any consulting assignment, when the
real benefits start accruing to the client. It may take long, it may be difficult, and it
may require more changes, but it is the most enjoyable phase, worth doing and living
up to. Whatever was originally planned may not happen, as new realities within the
organisation or outside the business may surface. It helps the consultant learn whether
there were planning errors or execution challenges, both very important to gain
maturity as a consultant and to build a relationship with the client.
23.3.9 CLOSING AND COLLECTING
The closing of a consulting assignment means that, after the consultant fulfils
all obligations under the contract, the client gives his final signoff for
closing the consulting contract. At this stage the final and balance payment for the
assignment is also released.
At this stage the consultant also includes a feedback session with the client
management/ sponsors. Many Consulting firms conduct this as a formal exercise
by sending a partner/principal not connected with the project for the feedback
session, to allow the client to be open with his feedback. This helps in evaluating the
consultant's performance, which is normally done by the engagement manager, and
the evaluation of the engagement manager’s performance is done by the concerned
principal or partner.
In this sub-process the consulting firm documents the following:
• Lessons learnt
• The engagement outcome or results produced, which can be used for
soliciting similar engagements from clients facing similar issues or problems.
• Generating intellectual capital by sanitizing the work products/
recommendations. By sanitizing, we mean dropping the client's confidential
details from the reports and presentations and making them generic, so that
the client neither objects to nor is affected by their circulation to other
clients. The work products could be templates, PowerPoint charts, Excel
sheets, and macros or analytical tools that may have been used during the
course of the engagement and qualify for reuse in the next engagement.
The net result of this sub-process for the consultant is to get a clear and
unambiguous feedback on the work performed and an assessment of the value that
the consultant has provided to the client. Exhibit 4.2 shows a typical client
feedback sheet. This contract closing meeting also leads to opening up a discussion
for the next engagement.
Exhibit 4.2: Client Feedback Sheet
_______________________ ___________________________
_______________________ ___________________________
1 3 6 10
1. Your overall feedback about the engagement ____________________
2. Name the areas of improvement ___________________
3. Provide any specific feedback on any consultant in the team
____________________________________________________________
4. Was the consultant able to address your problems and issues? Yes/No
5. Is there any specific feedback about the consultant's skills, behaviour or
attitude?
In such cases of premature termination, the consultant must collect the payment
due as per the contract terms and also gather the feedback that would have been
taken had the assignment been completed properly.
23.3.10 CONSULTANT CLIENT RELATIONSHIP
The consultant-client relationship is very complex and needs deep
understanding. The BIT consultant has to play multiple roles. As a cleaner of
the system, he has to show the client its areas of weakness, yet do it in a
manner that makes the client trust his word. Sometimes the BIT consultant also
needs to champion a cause and enthuse, or even push, the client organisation to
act on his advice by presenting all possible scenarios. The consultant-client
relationship is built on the basis of trust, mutual respect and the consultant's
sincere effort aimed at solving the client's problem.
The BIT consultant always deals with organisational issues involving human
systems that are complex and interconnected with strategy and the business
environment. The BIT consultant's long-term relationship with the client
facilitates his regular presence at the client's premises, thus enabling him to
know the client's organisation from close quarters. This helps the consultant
formulate fruitful recommendations for the client, who expects the consultant to
come up with something that is different from, or challenges, the client's own
perception about future learning and development.
Factors That Adversely Impact the Consultant-Client Relationship
Some of the factors that adversely impact the consultant-client relationship
are listed below:
• Inability to manage expectations on either side
• Failure or lack of communication between the consultant and client teams
• The consultant's inability to focus on details
• Lack of understanding of the client's problems; it could also be due to the
client's inability to convey the real problem to the consultant
• Lack of support from the client side
• Insistence by the client on price, method, or resources
Factors that are critical for the success of the consultant-client relationship:
• Competent consultants
• Focus on client results versus consultant deliverables
• Clear and well-communicated expectations and outcomes
• Visible executive support
• Adaptation to client readiness for change
• Upfront investment in learning the client's environment
• Success defined in incremental terms
LESSON - 24
BIT CONSULTING THEMES
24.1 INTRODUCTION
BIT consulting has become popular in corporates across the
globe. Here we shall list the common streams where BIT consulting has a place
and the sort of work it can do in an organisation.
24.2 OBJECTIVES
• To develop knowledge about the major consulting themes among
organization.
24.3 CONTENTS
24.3.1 Major Consulting Themes
24.3.2 General Management Stream
24.3.3 Business and IT stream
24.3.1 MAJOR CONSULTING THEMES
In this section, we discuss various consulting approaches. Consulting
approaches have been grouped into three major streams.
General management stream
This Consulting stream includes the following themes:
• Strategy consulting
• Restructuring and turnarounds
• Mergers and acquisitions
• Industrial sector consulting: Lean, Six Sigma, etc.
• Human resources: Organisation restructuring, compensation planning, etc.
Business and IT stream
This consulting stream uses various combinations of IT capabilities to address
business problems. It includes the following major consulting themes:
• IT strategy for business
• IT planning, IT application portfolio planning
• E-business strategy, including e-commerce and M-commerce strategy
• COTS enabled business process design
• Cost management: Integrated supply chain
• Customer centric: CRM process redesign
• IT initiated organisational change
IT management stream
This consulting stream focuses on IT investment and the optimal management of IT
resources. This stream includes the following:
• IT architecture
• IT Service Management
• IT process management, simplification and Optimisation
• IT governance
• IT process quality improvement consulting
• IT knowledge management
• Business continuity and disaster recovery consulting
• Planning, budgeting and monitoring IT programs
• IT security, data security and prevention of fraud and intrusion
24.3.2 GENERAL MANAGEMENT STREAM
Strategy consulting
This type of consulting primarily focuses on the following areas:
• Creation, implementation assistance and management of Corporate strategic
plan
• Portfolio analysis, competitive analysis and profit improvement studies
• Marketing and sales strategy: new product introduction, marketing channel
strategy, customer satisfaction surveys and analysis of changing customer needs
• Examining possibilities of mergers, acquisitions and collaborations
• Defining growth initiatives as well as successful exit from any business
• Operating strategy consulting: operations review, review of manufacturing
strategies, outsourcing and logistics studies for optimisation, distribution
and warehousing planning
• Organisation change strategies to help overcome any impediments in the
way of organisational change during a strategy implementation. These will
include restructuring of the organisation chart, role and responsibilities and
span of control.
• Sharing best practices in any of these areas by sharing research work with
the client
Restructuring and turnarounds
Companies which are making continuous losses and facing mounting debts need
the services of these consultants. Restructuring consulting is a financial
diagnostic and correction exercise. Turnaround consulting goes beyond cash flow
concerns and focuses on operational effectiveness, re-evaluation of the
management team, improvement of business and technology processes and, finally,
revisiting the firm's strategy.
Mergers and acquisitions
Consultants working under this theme specialise in the processes and activities
related to the merger of one company with another. The entity that is formed
after the merger of two companies may take a new name. In an acquisition, one
company is subsumed into or acquired by the other; usually the larger company
acquires the smaller company.
The merger process involves a number of activities at several levels, such as
financial, legal and operational; these also have a bearing on people. The
transition of two companies into one is a crucial time when the consultant's
expertise is needed to prevent risks such as disruption of operations, employee
exodus, loss of market share, or alienation of and apprehension among customers.
Finally, the consultant helps in integration across the merged organisation to
bring it to the steady state of a normal organisation. The final phase of a
merger brings about operational unification, a new identity and synergy between
the merged or acquired entities.
Industrial Sector Consulting
This consulting theme is industry sector-specific, wherein the consulting firm
provides advisory services for new product or process development, new project
formulation, planning for implementation and assessment of possible risks during
project implementation. It also addresses improving operational practices and
approaches that increase efficiency, customer service and financial performance.
This consulting theme also covers design and engineering services, operational
processes and technology adoption or acquisition for the sectors of the
consulting firm's specialisation. Many tools and techniques, such as Lean and
Six Sigma, are used in engagements relating to this sector-specific consulting theme.
Human Resources Management (HRM) Consulting
Employees in the knowledge economy are the most precious resources who
can make a difference to a firm's overall performance. Human resources
management (HRM) consulting brings deep expertise to deal with issues in several
areas such as employee engagement, skills enhancement, talent management and
retention. HRM consulting covers process design and implementation for
employee recruitment, career progression and skill management. It also addresses
issues of workforce management, scheduling and deployment. Leading client firms
give high importance to building and installing employee self-service processes
through the use of supporting IT; this initiative is designed and facilitated by
HRM consultants. Finally, HRM consultants help align the client firm's goals and
objectives with employee processes covering areas such as compensation, reward,
motivation management and many others, keeping the overall cost of human
resources within manageable limits.
24.3.3 BUSINESS AND IT STREAM
IT strategy consulting
The consulting team helps the firm in the following business-related aspects:
• Executing business strategy initiatives
• Taking a strategy view of IT in business strategy design
• Enhancing and redesigning the firm's business model
• Soliciting participation of the change sponsors and coaching them as new
issues come up
• Making the change foundational so that there is no going back.
24.4 REVISION POINTS
• General management Stream
• Business and IT Stream
24.5 INTEXT QUESTIONS
1. Write a short note on strategy consulting.
2. What are the consultants' key functions in mergers and acquisitions?
3. What are the functions carried out in HRM consulting?
4. What are the tasks performed by CEBPD consultant?
24.6 SUMMARY
Strategy consulting primarily focuses on the creation, implementation
assistance and management of the corporate strategic plan.
Companies which are making continuous losses and facing mounting debts
need the services of restructuring and turnaround consultants.
A merger brings about operational unification, a new identity and synergy
between the merged or acquired entities.
Integrated supply chain consultant deliverables include sourcing policies,
processes and partners, and logistics and movement recommendations.
Change management consultants deliver a survey and assessment of the
change issues and identify the people to be impacted by the change.
24.7 TERMINAL EXERCISE
1. CEBPD stands for _____________________________
2. The Change management consultants help the client in creating awareness
of the need for ________________
3. Strategies for web presence are an outcome of _____________ strategy
4. Balancing IT investment opportunities across business units is an outcome
of _____________ strategy
5. _________________ focus on operational effectiveness and re-evaluation of
management.
24.8 SUPPLEMENTARY MATERIALS
1. https://www.strategyblocks.com/blog/business-convergence-strategy/
2. https://www.linkedin.com/pulse/business-convergence-agile-pursuit-
purpose-value-strategic-kent
24.9 ASSIGNMENTS
1. Write an essay about the various consulting themes.
178E1250/179E1250/347E1250/348E1250/349E1250
ANNAMALAI UNIVERSITY PRESS 2021– 2022