
UNIT 3 INTELLIGENCE INFORMATION SYSTEMS
Structure
3.0 Introduction
3.1 Objectives
3.2 Knowledge Management in Organisation
    3.2.1 First and Second Generation Knowledge Management
    3.2.2 Knowledge
    3.2.3 Approach for Successful Implementation of Knowledge Management
3.3 Creating, Developing and Sharing Knowledge
    3.3.1 Knowledge Creation and Sharing
    3.3.2 Capturing Knowledge
    3.3.3 Knowledge Transfer and Organisation
    3.3.4 Drivers of Knowledge Management
    3.3.5 Knowledge Representation
3.4 Artificial Intelligence in Business
3.5 Business Analytics
3.6 Business Intelligence
3.7 Role of Business Intelligence
    3.7.1 Marketing
    3.7.2 Sales and Orders
    3.7.3 Human Resource
    3.7.4 Finance and Accounts
3.8 Business Intelligence Tools
3.9 Business Intelligence Reports
3.10 Summary
3.11 Solutions/Answers
3.12 Further Readings/References

3.0 INTRODUCTION
In today’s fast-changing global markets, success is no longer tied to the traditional inputs of labour, capital or land. The new critical resource is inside the heads of employees: knowledge. What a company knows, and how it leverages that knowledge for use across the organisation, is the concern of knowledge management. The individual technologies are not in themselves knowledge management solutions. Instead, when brought to market they are typically embedded in a smaller number of solutions packages, each of which is designed to be adaptable to solve a range of business problems. Examples are portals, collaboration software, and distance learning software. In this unit we aim to impart knowledge about knowledge management and about how to use different tools and technologies to achieve the objectives of an organisation.

Business Analytics and Business Intelligence are concerned with the process of collecting and analysing domain-specific data stored in data warehouses to derive valuable insights about customers and emerging markets, and to identify opportunities as well as key drivers of business growth. They are increasingly being seen as the key differentiator that provides a competitive edge to companies across industries. The tools used in Business Analytics are varied. They range from simple slice-and-dice tools to statistical methods such as logit regression, discriminant analysis and multivariate analysis, to more sophisticated tools such as neural networks and optimisation. The application domains are equally varied. How companies such as banks, insurance companies and airlines have all been beneficiaries of Business Intelligence is also something we are going to discuss in this unit.

3.1 OBJECTIVES
After going through this unit, you will be able to:
• understand the knowledge management system;
• understand Artificial Intelligence and its use for business;
• understand the use of business intelligence systems for Marketing, Human Resource, Finance, etc.;
• design a business intelligence system for your organisation;
• understand the use of business intelligence tools; and
• understand the use of business intelligence reports.

3.2 KNOWLEDGE MANAGEMENT IN ORGANISATION
Knowledge management (KM) is the management of knowledge within
organisations. A widely accepted ‘working definition’ of knowledge management
applied in worldwide organisations is “Knowledge Management caters to the critical
issues of organisational adaptation, survival, and competence in the face of
increasingly discontinuous environmental change.... Essentially, it embodies
organisational processes that seek synergistic combination of data and information
processing capacity of information technologies, and the creative and innovative
capacity of human beings.”

This definition not only gives an indication of what Knowledge Management is, but of
how its advocates often treat the English language. In simpler terms, Knowledge
Management seeks to make the best use of the knowledge that is available to an
organisation, creating new knowledge in the process.

It is helpful to make a clear distinction between knowledge on the one hand, and
information and data on the other.

Information can be considered as a message. It typically has a sender and a receiver. Information is the sort of stuff that can, at least potentially, be saved onto a computer. Data is a type of information that is structured, but has not been interpreted.

Knowledge might be described as information that has a use or purpose. Whereas information can be placed onto a computer, knowledge exists in the heads of people. Knowledge is information to which intent has been attached.

3.2.1 First and Second Generation Knowledge Management


By the early nineties, it was clear that there were two distinct branches of Knowledge
Management.

First Generation Knowledge Management involves the capture of information and experience so that it is easily accessible in a corporate environment. An alternate term
is “knowledge capture”. Managing this capture allows the system to grow into a
powerful information asset.

This first branch had its roots firmly in the use of technology. In this view Knowledge
Management is an issue of information storage and retrieval. It uses ideas derived from systems analysis and management theory. This approach led to a boom in
consultancies and in the development of so-called knowledge technologies. Typically
first-generation Knowledge Management involved developing sophisticated data
analysis and retrieval systems with little thought as to how the information they
contained would be developed or used. This led to organisations investing heavily in
technological fixes that had either little impact or a negative impact on the way in
which knowledge was used.

A typical scenario might have seen an organisation install a sophisticated intranet in order to categorise and disseminate information, only to find that the extra work
involved in setting up the metadata meant that few within the organisation actually
used the intranet. This occasionally led to management mandating the use of the
intranet, resulting in resentment amongst staff, and undermining their trust in the
organisation. Thus first generation solutions are often counterproductive.

Management theory functions as a branch of Economics, and to a large extent it adopts econometric standards. When it became apparent that it would be useful to be able to manage knowledge, it was natural for managers to attempt to apply their preferred econometric methods to the cause. But econometrics is about commodities and cash flow. It was therefore found necessary to treat knowledge as if it were a commodity.

This, of course, was a surprisingly difficult thing to do, essentially because knowledge
is not a commodity but a process. But a suitable epistemology was found, in the form
of that developed by Michael Polanyi. Polanyi’s epistemology objectified the
cognitive component of knowledge – learning and doing – by labelling it tacit
knowledge and for the most part removing it from the public view. Learning and doing
became a ‘black box’ that was not really subject to management; the best that could be
done was to make tacit knowledge explicit.

Its failure to provide any theoretical understanding of how organisations learn new
things and how they act on this information meant that first generation Knowledge
Management was incapable of managing knowledge creation.

Second Generation Knowledge Management: Faced with the theoretical and practical failure of first generation techniques to live up to their promise, theorists began to look more closely at the ways in which knowledge is created and shared.

Along with this realisation came a change in metaphor. Organisations came to be seen
as capable of learning, and so a link grew between learning theory and management.

At the same time hierarchical models of organisational structure were replaced by more organic models, which see effective organisations as capable of structural change in response to their environment.

The advent of complexity theory and chaos theory provided more metaphors that
enable managers to replace models of organisations as integrated systems with models
of organisations as complex interdependent entities that are capable of responding to
their environment.

Second generation Knowledge Management gives priority to the way in which people
construct and use knowledge. It derives its ideas from complex systems, often making
use of organic metaphors to describe knowledge growth. It is closely related to
organisational learning. It recognises that learning and doing are more important to
organisational success than dissemination and imitation.

3.2.2 Knowledge
Knowledge is the awareness and understanding of facts, truths or information gained
in the form of experience or learning. Knowledge is an appreciation of the possession
of interconnected details which, in isolation, are of lesser value.

Knowledge is a term with many meanings depending on context, but is (as a rule)
closely related to such concepts as meaning, information, instruction, communication,
representation, learning and mental stimulus.

Knowledge is distinct from simple information. Both knowledge and information consist of true statements, but knowledge is information that has a purpose or use.
Philosophers would describe this as information associated with intentionality. The
study of knowledge is called epistemology.

A common definition of knowledge is that it consists of justified true belief. This definition derives from Plato’s Theaetetus. These are considered necessary, but not sufficient, conditions for some statement to count as knowledge.

What constitutes knowledge certainty and truth are controversial issues. These issues
are debated by philosophers, social scientists, and historians. Ludwig Wittgenstein
wrote “On Certainty” — aphorisms on these concepts — exploring relationships
between knowledge and certainty. A thread of his concern has become an entire field,
the philosophy of action.

Distinguishing knowing that from knowing how


Suppose that Fred says to you: “The fastest swimming stroke is the front crawl. One
performs the front crawl by oscillating the legs at the hip, and moving the arms in an
approximately circular motion”. Here, Fred has propositional knowledge of swimming
and how to perform the front crawl.

However, if Fred acquired this propositional knowledge from an encyclopedia, he will not have acquired the skill of swimming: he has some propositional knowledge, but
does not have any procedural knowledge or “know-how”. In general, one can
demonstrate know-how by performing the task in question, but it is harder to
demonstrate propositional knowledge.

Inferential vs. Factual Knowledge


Knowledge may be factual or inferential. Factual knowledge is based on direct
observation. It is still not free of uncertainty, as errors of observation or interpretation
may occur, and any sense can be deceived by illusions.

Inferential knowledge is based on reasoning from facts or from other inferential knowledge such as a theory. Such knowledge may or may not be verifiable by
observation or testing. For example, all knowledge of the atom is inferential
knowledge. The distinction between factual knowledge and inferential knowledge has
been explored by the discipline of general semantics.

3.2.3 Approach for Successful Implementation of Knowledge Management


Although KM is an enterprise-wide goal, many companies find success if they
kickoff an initiative in one department and then extend the practices throughout other
parts of the organisation. Here, we will outline those practices that help ensure a
successful KM initiative within the IT help desk or customer contact center. Often
KM practices relating to service and support can be defined as knowledge-powered
problem resolution — using a knowledge base, knowledge sharing, collaboration and
knowledge reuse to efficiently solve customer questions.
A successful knowledge management initiative within a help desk or call center can
reduce agent training time and speed new employee ramp up. Knowledge-powered
problem resolution enables agents to become more confident and competent sooner
than they otherwise would without a KM practice. By having access to a knowledge
base, new help desk and customer service agents can get answers to common
questions without having to constantly ask other more experienced agents. Customers
and end-users benefit from faster problem resolution, and experienced agents can
focus on solving more challenging problems.

Customers and end-users also benefit when they have direct access to a knowledge
base to solve their own issues without ever contacting an agent. A growing number of
people now prefer self-service to live interaction, at least for certain problem types.
For some people, self-service fits perfectly into their lifestyle. They are in a hurry and
they need a specific piece of information and that’s all they want. Say, for example, in
a corporate environment, an employee needs to know if there is a Windows 2000
driver for a USB Zip drive. She doesn’t want to wait in a queue. She doesn’t want to
talk to an agent. She just wants to know if there is a driver available and where to find
it. In this case, self-service can be superior to agent-assisted service.

Knowledge Management is an evolving discipline that can be affected by new technologies and best practices, but there are some things that we do know for sure. There is a systematic approach to successfully implementing knowledge management: if you analyse what you are trying to accomplish, map out a strategy, garner support from the organisation and have a way to measure it, then you are much more likely to be successful. The 11 points outlined below will serve as a primer to help understand what it takes to have a successful Knowledge Management initiative.

Point 1: Knowledge Management is a discipline


A lot of people think knowledge management is a technology or software solution but
it is much more than that; knowledge management is a discipline. Obviously, you have
to have a good piece of software or a good system to capture knowledge – but that’s
not the whole equation. Underestimating what it takes just to capture the knowledge
correctly is a big risk, as is underestimating the integration task into your already
complex environment.

There are some providers of pre-packaged knowledge out there, but our experience is
that while they can be useful to the help desk they are not relevant to customer service
centers which have business-specific content needs. In either case, you must ensure
you have the adequate resources to create and maintain the content you promise.
Creating content is not a one-time project. Also, over time the content must be updated
and supplemented as new products or services are supported as shown in Figure 1.
Empowering agents to add new content as resolutions are discovered is the key to
maintaining a robust system.

Figure 1: Keeping KM updated

Point 2: One champion is not enough

To be successful, your project must have several champions within the organisation.
These are individuals that believe in the project, enthusiastically advocate it and have
the clout to “make things happen.” Projects that lack a champion generally don’t get
off the ground. Those with only one champion are also at serious risk.

Losing your champion can spell disaster for your project. This is a real problem for
knowledge management projects, due to their continuous duration. If the project
champion transfers, retires or leaves the company, the project often loses its
momentum and the project may falter as someone else takes it over.

What we like to see when we work with clients is a dual-sponsorship: one at the
operational level and one at the executive level. So if an operations manager decides
the company really needs knowledge management, that manager should find
somebody on the executive staff who will agree to support the vision. By having that
dual track of vision the project is more likely to succeed.

Point 3: Cultural change isn’t automatic


Buy-in is needed at all levels, and this may require cultural change. The people that
are going to use the tools have to be part of the design unless you plan on strong-arming them (and that doesn’t work very well). Don’t make this management decision
in a vacuum. Include some people from the various groups that would directly or
indirectly use the system.

Sometimes there is a fear that knowledge management will be used to replace people.
If your staff thinks that is what you are trying to do, then you really need to address
that head-on. If that is not your intention, you should convince your team that current
head count reduction is not the goal. Therefore, you need to look for and plan the
motivation for each party. After all, you are asking people to shift from a system
where being a tower of knowledge is rewarded, to a system where they share their
expertise with everybody on the team.

Each party will have a unique motivation to embrace knowledge management. For
example, in a technical support environment, a frontline tech will have a different
motivational schema than a 3rd level technician. The frontline tech is not going to
have to ask the 2nd line tech as many questions, and can resolve more problems faster.
The 2nd level tech is not going to get as many of the common questions. Level 3
researchers won’t have to start at ground zero when handed a problem by level 2,
because they know that all the intermediate steps have been covered. So as you look across the organisation everybody has a different interest and you have to protect all of them.

Failing to see how knowledge management is going to fit into the rest of the
organisation is a mistake. You must invest the time and energy to understand the
culture, identify motivations and ensure change happens where needed.

Point 4: Create a change management plan


If your employees are not already sharing information, you will need a change
management plan because you are asking people to do their jobs differently. The
change management plan specifies how you will gain acceptance of knowledge
management within the organisation. Let’s say you are a call center manager and you
measure your employees’ performance by call handle time and number of cases
closed. Now you are going to be asking them to use a knowledge base on every call or
email interaction – thus asking them to change the way they perform their job on a
daily basis. Also, if you don’t make changes to their performance reviews and
compensation, there may be friction because you’re asking them to do one thing but
you are judging them by another set of rules. As part of the overall change
management plan you need to update job descriptions, feedback sessions and
performance reviews to reflect the new workflow. Neglecting to make these changes
may foster acceptance issues with your team members.

Point 5: Stay strategic


Knowledge management is a strategic endeavour, not just a project. I prefer to call it a
strategic initiative as opposed to a project because a project implies a finite timeline.
With KM you are never really done; you initiate it and you build it and then it is
online and you maintain it.

In our practice we look for our clients to have a strategic goal for the project rather
than a tactical goal. If you are looking to shorten handle time that’s a tactical
motivation and you’re not as likely to be willing to go through the steps that a
successful enterprise rollout would take. But if it is a strategic initiative, especially
something that is top-down motivated (for instance improving customer service or
improving employee satisfaction) then there is a better value statement involved and
you are not relying on changing one metric. So you might see improvement in
individual metrics like handle time and resolution rates but their value is limited
compared to the return from becoming a collaborative knowledge sharing
organisation.

To get going, decide what goals you are trying to accomplish and why. Then try to
identify a solution and methodology that will help you attain those goals in your
environment. Sometimes people within an organisation may say that a KM initiative is nice-to-have, and an economic downturn might slow the process down or defer it when resources are scarce. But I think it’s
counterproductive to consider KM a nice-to-have because the rewards are equally
beneficial during both a downturn and the inevitable upturn. If you wait until the
upturn then you will be forced to play catch up as your call volumes increase and your
email volume doubles; that’s not the time to introduce a knowledge-powered system
or build a knowledge base. It’s not necessary to hire more employees if you have
resources that are not 100% utilized or if you encourage your agents to contribute
knowledge during their daily workflow.

Point 6: Pick a topic, go in-depth, and keep it current


We advise that you pick one area that needs improvement or has limited resources,
and then build a robust knowledge base for that subject matter. Use that experience to
learn about implementing knowledge in your organisation; do one call center or one

product group and learn from there. It is much better to be comprehensive for a narrow topic than to fail to get enough depth. Sometimes an enterprise initiative is needed right
away, and it can be done successfully, but it can involve a larger resource commitment
to do a full-scale project all at once. Remember, the depth of your knowledge base
truly depends upon your customers’ needs.

Today’s systems should enable agents to contribute new knowledge during their
natural workflow. This is critical to ensure that solutions that are not currently in the
system can be quickly added once the resolution has been determined. It’s also
important to remember that regular and timely maintenance of the knowledge base is
the key to success. You should also consider appointing resources to maintain the
knowledge. Be sure to build in a mechanism that identifies gaps in content
(information sought but not found), and a process for filling those gaps. If people
repeatedly fail to find what they are looking for they will stop using the system.
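
A minimal sketch of such a gap-identifying mechanism is given below. The search function, knowledge base and queries are hypothetical, not part of any particular KM product: every query that returns nothing is logged, so the content team can fill the most frequent gaps first.

# A minimal sketch: record "information sought but not found" so that
# gaps in the knowledge base can be reviewed and filled.
from collections import Counter

failed_queries = Counter()  # query text -> number of times nothing was found

def search_kb(query, knowledge_base):
    hits = [doc for doc in knowledge_base if query.lower() in doc.lower()]
    if not hits:
        failed_queries[query] += 1  # log the content gap for review
    return hits

kb = ["How to reset a password", "Printer driver installation steps"]
search_kb("VPN setup", kb)            # no hit, so the gap is logged
print(failed_queries.most_common(5))  # the most frequent gaps to fill first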

Point 7: Don’t get hung up on the limitations


Certain types of knowledge are very well suited to being quickly harvested into a knowledge base. Company processes or technical procedures are well suited for
knowledge management. By populating a knowledge base with this type of
information and making it available to employees and customers, an organisation can
shorten or even avoid many calls. Organisations can also use a KM system to access
existing unstructured sources of information that may already exist on a corporate
network, intranet or within an existing call center or help desk system. It’s important
to note that experienced agents can certainly benefit from access to both structured
knowledge and unstructured information because they’re more likely to be able to
pinpoint a solution within an unstructured document. However, level 1 agents or
end-users accessing the knowledge base through self-service, may not find these
sources of unstructured information helpful because they don’t have the expertise to
decipher the information quickly.

In addition to sources of knowledge, the specific type of information is also important to consider. The craftsmanship or expertise that a true expert has is much more difficult to capture. A master craftsman has a huge body of knowledge. S/he tends to “chunk” their knowledge and can’t tell you the steps they use when they make a decision in their field; they just do it. Much like tying your shoe: you do it every day, but when you have to explain it, it’s tough because you have internalized the process. I
think that is where the breakdown is for harvesting expertise. We think a KM initiative
could be somewhat limited because of the nature of complex knowledge, and
thankfully we will always need human expertise. However, when it comes to a
successful initiative, it’s important to first determine what knowledge can be easily
added to the system and then provide agents or a knowledge manager with the tools to
add this step-by-step complex information to the system, ensuring that even difficult
questions can be answered accurately.

Point 8: Set expectations or risk extinction


A big pitfall is the failure of knowledge management proponents to help executive management set appropriate expectations. Customers, employees and management
alike must know what they are going to get out of knowledge management, what it
will take to get those results and how success will be measured. Measurement is where
most organisations fail because they are doing things that were not measured before.
So a year from now, you’ve built this thing, it’s up and running and everyone loves it
and your boss says “where’s my return?” If you don’t have a measurement system in
place then you will have a hard time answering his or her question – especially for the
new metrics that didn’t exist before. You probably measured handle time, abandon
rates, and similar operational metrics. But you may not be measuring call avoidance or
knowledge usage, which ultimately affects the ability to measure resolution rates.

In addition to setting management expectations you have to set customer and end-user expectations. For example, if you are going to provide customers with Web self-service for one specific product then you must include the known problems that they
are going to encounter in the knowledge base. In that situation you are better off to set
their expectations that the knowledge base covers only that product and no other.
Customers pose the same extinction risk that your employees do. If they visit the site a
few times and they can’t find an accurate or appropriate answer they will probably not
return again.

Point 9: Integrate KM into existing systems


Typically, organisations that are implementing knowledge management already have
an established data center, so they are not only building a knowledge base – they must
also integrate it into their existing environment – their call tracking system, IVR
system, email, remote diagnostics and other support systems.
When selecting a KM system, consider systems that have open architectures and
proven integrations into existing call center and help desk tools to ensure a successful
implementation. Also, processes will be affected, requiring change to reporting and
measurement systems as well. Integrate reporting capabilities where possible to best
understand how the combined systems are affecting the effectiveness of your support
operations.

Point 10: Educate your self-service users


You’ve created your KM plan, determined the critical knowledge to include, initiated
a plan to garner cultural acceptance, trained your agents and pinpointed key sources of
knowledge – finally you need to educate your self-service users on how to find and
access support information online to ensure a satisfying experience.

There are many ways to “push” your self-service capabilities out to your end-user
audience. Traditional marketing techniques should be employed to promote this
valuable service, such as email, online newsletters or direct mail. Encourage users to
visit your online support site by making it easy to find and access the knowledge base.
Be sure to include the site URL and directions for obtaining a login, if needed, in your
marketing communications.

Another method is to encourage your agents to end support calls by informing the user
of the support site. “Thanks for calling today, I’m glad that I could help you solve
your problem. By the way, we now have a Web self-service site if you’d like to search
the knowledge base. You can find it at www.ABC-Support.com and you can obtain a
login by clicking the request login button on that page.”

Finally, make sure your Internet or intranet site includes an easy-to-find link to your
Web self-service site. A twist on the old saying, “If they can’t find it, they won’t
come.” So make it easy to find, easy to access and easy to use.

Point 11: Become a knowledge-enabled organisation


We think it is inevitable that knowledge management will have a high adoption rate in
the next few years. Over time to remain competitive it will be essential to be
“knowledge-enabled.” Just a few years ago email was not a common method for
seeking customer service; now customers demand the ability to contact you through
channels other than the phone. Going forward, as customers deal with companies that
are knowledge-enabled and can quickly and efficiently answer their questions, they
are going to expect a greater level of service in all of their support interactions.

The bottom line can be summarized with a quote from Gartner, Inc. – “Those
enterprises that include KM processes as part of their customer relationship
management initiatives have a higher probability of success than those that don’t”.

3.3 CREATING, DEVELOPING AND SHARING KNOWLEDGE
Knowledge flows comprise the set of processes, events and activities through which
data, information, knowledge and meta-knowledge are transformed from one state to
another. To simplify the analysis of knowledge flows, the framework described here is
based primarily on the Knowledge Model. The model organizes knowledge flows into
four primary activity areas: knowledge creation, retention, transfer and utilisation
(Figure 2).
Figure 2: Knowledge model (creation, retention, transfer and utilisation)

Knowledge Creation: This comprises activities associated with the entry of new
knowledge into the system, and includes knowledge development, discovery and
capture.

Knowledge Retention: This includes all activities that preserve knowledge and allow
it to remain in the system once introduced. It also includes those activities that
maintain the viability of knowledge within the system.

Knowledge Transfer: This refers to activities associated with the flow of knowledge
from one party to another. This includes communication, translation, conversion,
filtering and rendering.

Knowledge Utilisation: This includes the activities and events connected with the
application of knowledge to business processes.

Let us look at the basic processes of knowledge creation and sharing within organisations, what types of technologies can be applied to knowledge management, and assess their actual or potential contribution.

3.3.1 Knowledge Creation and Sharing

A set of systematic and disciplined actions that an organisation can take to obtain the
greatest value from the knowledge available is given the name Knowledge
management. “Knowledge” in this context includes both the experience and
understanding of the people in the organisation and the information artifacts, such as
documents and reports, available within the organisation and in the world outside.
Effective knowledge management typically requires an appropriate combination of
organisational, social, and managerial initiatives along with, in many cases,
deployment of appropriate technology.

To structure the discussion of processes involved in knowledge creation and sharing
and technologies involved, it is helpful to classify the technologies by reference to the
notions of tacit and explicit knowledge.

• Tacit knowledge is what the knower knows, which is derived from experience
and embodies beliefs and values. Tacit knowledge is actionable knowledge, and
therefore the most valuable. Furthermore, tacit knowledge is the most important
basis for the generation of new knowledge; however, the key to knowledge
creation is the mobilisation and conversion of tacit knowledge.

• Explicit knowledge is represented by some artifact, such as a document or a video, which has typically been created with the goal of communicating with
another person.

Both forms of knowledge are important for organisational effectiveness.

Now, let us look at the processes by which knowledge is transformed between its tacit and explicit forms, as shown in Figure 3. Organisational learning takes place as individuals participate in these processes, since by doing so their knowledge is shared, articulated, and made available to others. Creation of new knowledge takes place through the processes of combination and internalisation. As shown in Figure 3, the processes by which knowledge is transformed within and between forms usable by people are:

• Socialisation (tacit to tacit): Socialisation includes the shared formation and communication of tacit knowledge between people, e.g., in meetings.
Knowledge sharing is often done without ever producing explicit knowledge
and, to be most effective, should take place between people who have a
common culture and can work together effectively. Thus, tacit knowledge
sharing is connected to ideas of communities and collaboration. A typical
activity in which tacit knowledge sharing can take place is a team meeting
during which experiences are described and discussed, often informal, in
which information technology (IT) plays a minimal role. However, an
increasing proportion of meetings and other interpersonal interactions use on-
line tools known as groupware. These tools are used either to supplement
conventional meetings, or in some cases to replace them. To what extent can
these tools facilitate formulation and transfer of tacit knowledge?

Tacit-to-Tacit: Socialisation (team meeting and discussion)
Tacit-to-Explicit: Externalisation (team meeting and answering questions)
Explicit-to-Tacit: Internalisation (learning from reports)
Explicit-to-Explicit: Combination (e-mail/reports)

Figure 3: Conversion of knowledge from tacit to explicit form and vice versa; processes and techniques

Groupware: Groupware is a fairly broad category of application software that helps individuals to work together in groups or teams. Groupware can to some
extent support all four of the facets of knowledge transformation. To examine
the role of groupware in socialization we focus on two important aspects: shared
experiences and trust.

Shared experiences are an important basis for the formation and sharing of tacit
knowledge. Groupware provides a synthetic environment, often called a virtual
space, within which participants can share certain kinds of experience; for
example, they can conduct meetings, listen to presentations, have discussions,
and share documents relevant to some task. Indeed, if a geographically dispersed
team never meets face to face, the importance of shared experiences in virtual
spaces is proportionally enhanced. An example of current groupware is Lotus
Notes, which facilitates the sharing of documents and discussions and allows
various applications for sharing information and conducting asynchronous
discussions to be built. Groupware might be thought to mainly facilitate the
combination process, i.e., sharing of explicit knowledge. However, the selection
and discussion of explicit knowledge to some degree constitutes a shared
experience.

A richer kind of shared experience can be provided by applications that support real-time on-line meetings, a more recent category of groupware. On-line
meetings can include video and text-based conferencing, as well as synchronous
communication and chat. Text-based chat is believed to be capable of
supporting a group of people in knowledge sharing in a conversational mode.
Commercial products of this type include Lotus Sametime and Microsoft
NetMeeting. These products integrate both instant messaging and on-line
meeting capabilities. Instant messaging is found to have properties between
those of the personal meeting and the telephone: it is less intrusive than
interrupting a person with a question but more effective than the telephone in
broadcasting a query to a group and leaving it to be answered later.

Some of the limitations of groupware for tacit knowledge formation and sharing
have been highlighted by recent work on the closely related issue of the degree
of trust established among the participants. It was found that videoconferencing
(at high resolution—not Internet video) was almost as good as face-to-face
meetings, whereas audio conferencing was less effective and text chat least so.
These results suggest that a new generation of videoconferencing might be
helpful in the socialization process, at least in so far as it facilitates the building
of trust. But even current groupware products have features that are found to be
helpful in this regard. In particular, access control, which is a feature of most
commercial products, enables access to the discussions to be restricted to the
team members if appropriate, which has been shown to encourage frankness and
build trust.

• Externalisation (tacit to explicit): By its nature, tacit knowledge is difficult to convert into explicit knowledge. Through conceptualization, elicitation, and
ultimately articulation, typically in collaboration with others, some proportion of
a person's tacit knowledge may be captured in explicit form. Typical activities in
which the conversion takes place are in dialog among team members, in
responding to questions, or through the elicitation of stories.

The conversion of tacit to explicit knowledge (externalization) involves forming a shared mental model, then articulating it through dialog. Collaboration systems
and other groupware (for example, specialized brainstorming applications) can
support this kind of interaction to some extent.

On-line discussion databases are another potential tool to capture tacit
knowledge and to apply it to immediate problems. It needs to be noted that team
members may share knowledge in groupware applications. To be most effective
for externalization, the discussion should be such as to allow the formulation
and sharing of metaphors and analogies, which probably requires a fairly
informal and even freewheeling style. This style is more likely to be found in
chat and other real-time interactions within teams.

Newsgroups and similar forums are open to all, unlike typical team discussions,
and share some of the same characteristics in that questions can be posed and
answered, but differ in that the participants are typically strangers. Nevertheless,
it is found that many people who participate in newsgroups are willing to offer
advice and assistance, presumably driven by a mixture of motivations including
altruism, a wish to be seen as an expert, and the gratitude and positive feedback
contributed by the people they have helped.

3.3.2 Capturing Knowledge

Once tacit knowledge has been conceptualized and articulated, thus converting it to
explicit knowledge, capturing it in a persistent form as a report, an e-mail, a
presentation, or a Web page makes it available to the rest of the organisation.
Technology already contributes to knowledge capture through the ubiquitous use of
word processing, which generates electronic documents that are easy to share via the
Web, e-mail, or a document management system. Capturing explicit knowledge in this
way makes it available to a wider audience, and “improving knowledge capture” is a
goal of many knowledge management projects.

• Combination (explicit to explicit): Explicit knowledge can be shared in meetings, via documents, e-mails, etc., or through education and training. The use of technology to manage and search collections of explicit knowledge is well established. However, there is a further opportunity to foster knowledge creation, namely to enrich the collected information in some way, such as by reconfiguring it, so that it is more usable. An example is to use text classification to assign documents automatically to a subject schema. A typical activity here might be to put a document into a shared database. There can be little doubt that the phase of knowledge transformation best supported by IT is combination, because it deals with explicit knowledge. We can distinguish the challenges of knowledge management from those of information management by bearing in mind that in knowledge management the conversion of explicit knowledge from and to tacit knowledge is always involved.
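
As a minimal sketch of the text classification idea mentioned above (assuming the scikit-learn library; the documents and subject categories are invented for this example):

# A minimal sketch: automatically assign documents to a subject schema
# using TF-IDF features and a Naive Bayes classifier (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training documents: (text, subject category).
train_docs = [
    ("reset your password from the login screen", "Accounts"),
    ("change the password policy for new users", "Accounts"),
    ("install the driver before connecting the printer", "Hardware"),
    ("the printer driver must match the operating system", "Hardware"),
]
texts = [text for text, _ in train_docs]
labels = [category for _, category in train_docs]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

# A new, uncategorised document is filed under the predicted subject.
print(model.predict(["how do I change my password"])[0])  # 'Accounts'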

• Internalisation (explicit to tacit): In order to act on information, individuals have to understand and internalize it, which involves creating their own tacit
knowledge. By reading documents, they can to some extent re-experience what others
previously learned. By reading documents from many sources, they have the
opportunity to create new knowledge by combining their existing tacit knowledge with
the knowledge of others. However, this process is becoming more challenging because
individuals have to deal with ever-larger amounts of information. A typical activity
would be to read and study documents from a number of different databases.

These processes do not occur in isolation, but work together in different combinations
in typical business situations. For example, knowledge creation results from
interaction of persons and tacit and explicit knowledge. Through interaction with
others, tacit knowledge is externalized and shared. Although individuals, such as employees, experience each of these processes, from a knowledge management and therefore an organisational perspective the greatest value occurs
from their combination since, as already noted, new knowledge is thereby created,
disseminated, and internalized by other employees who can therefore act on it and thus

form new experiences and tacit knowledge that can in turn be shared with others and so on. Since all the processes of Figure 3 are important, it seems likely that knowledge
management solutions should support all of them, although we must recognise that the
balance between them in a particular organisation will depend on the knowledge
management strategy used.

Table 1 shows some examples of technologies that may be applied to facilitate the
knowledge conversion processes of Figure 3. The individual technologies are not in
themselves knowledge management solutions. Instead, when brought to market they
are typically embedded in a smaller number of solutions packages, each of which is
designed to be adaptable to solve a range of business problems. Examples are portals,
collaboration software, and distance learning software. Each of these can and does
include several different technologies.

Table 1: Examples of technologies that can support or enhance the transformation of knowledge

Tacit to Tacit: E-meetings; Synchronous collaboration (chat)
Tacit to Explicit: Answering questions; Annotation
Explicit to Tacit: Visualization; Browsable video/audio of presentations
Explicit to Explicit: Text search; Document categorization

It is found that the strongest contribution to current solutions is made by technologies that deal largely with explicit knowledge, such as search and classification.

Contributions to the formation and communication of tacit knowledge, and support for
making it explicit, are currently weaker, although some encouraging developments are
highlighted, such as the use of text-based chat, expertise location, and unrestricted
bulletin boards.

Knowledge capture stages


Knowledge may be accessed, or captured, at three stages: before, during, or after
knowledge-related activities. For example, individuals undertaking a new project for
an organisation might access KM resources to learn best practices and lessons learnt
for similar projects undertaken previously, access the KM network again during the
project implementation to seek advice on issues encountered, and access the system
afterwards for advice on after-project actions and review activities. Similarly,
knowledge may be captured and recorded into the system before the project
implementation, for example, as the project team learns information and lessons
during the initial project analysis. Similarly, lessons learnt during the project operation
may be entered into the KM system, and after-action reviews may lead to further
insights and lessons being recorded in the KM system for future access.

3.3.3 Knowledge Transfer and Organisation


In the organisational development area of organisational learning, a practical problem is that of knowledge transfer: how to get some packet of knowledge that exists in one part of the organisation into another (or all other) parts of the organisation. It is more than just a communications problem. If it were merely that, then a memo, an e-mail or a meeting would accomplish the knowledge transfer.

Challenges
What complicates knowledge transfer? There are many factors, including:

• geography
• language
• areas of expertise
• internal conflicts (e.g., professional territoriality)
• generational differences
• union-management relations
• incentives
• the use of visual representations to transfer knowledge (Knowledge
visualization)

Process
• identifying the key knowledge holders within the organisation
• motivating them to share
• designing a sharing mechanism to facilitate the transfer
• executing the transfer plan
• measuring to ensure the transfer
• applying the knowledge transferred

3.3.4 Drivers of Knowledge Management


There are a number of ‘drivers’, or motivations, leading organisations to undertake a Knowledge Management program. Perhaps first among these is to gain the competitive advantage that comes with improved or faster learning and new knowledge creation. KM programs may lead to greater innovation, better customer experiences, consistency in best practices and knowledge access across a global organisation, as well as many other benefits, and KM programs may be driven with these goals in mind.

Considerations driving a knowledge management program might include:

• making available increased knowledge content in the development and provision of products and services;
• achieving shorter new product development cycles;
• facilitating and managing organisational innovation;
• leveraging the expertise of people across the organisation;
• benefiting from ‘network effects’ as the number of productive connections between employees in the organisation increases and the quality of information shared increases;
• managing the proliferation of data and information in complex business environments and allowing employees to rapidly access useful and relevant knowledge resources and best practice guidelines;
• facilitating organisational learning;
• managing intellectual capital and intellectual assets in the workforce (such as the expertise and know-how possessed by key individuals) as individuals retire in larger numbers than they have in a long time and new workers are hired.

Knowledge Management enablers


Historically, there have been a number of technologies enabling or facilitating KM practices in the organisation, including expert systems, knowledge bases, software help desk tools, document management systems and other IT systems supporting organisational knowledge flows.

The advent of the internet brought with it further enabling technologies, including E-learning, web conferencing, collaborative software, content management systems,
corporate ‘Yellow pages’ directories, email lists, Wikis, Blogs, and other technologies.
Each enabling technology can expand the level of inquiry available to an employee,
while providing a platform to achieve specific goals or actions. The practice of KM
will continue to evolve with the growth of collaboration applications available by IT
and through the Internet. Since its adoption by the mainstream population and business

community, the Internet has led to an increase in creative collaboration, learning and research, e-commerce, and instant information.

There are also a variety of organisational enablers for KM programs, including Communities of Practice; before-, after- and during-action reviews; peer assists; information taxonomies; coaching and mentoring; and so on.

3.3.5 Knowledge Representation


Knowledge representation is a central problem in arranging knowledge. It is needed
for library classification and processing concepts in an information system.

It is also one of the central difficulties in the field of artificial intelligence. The problem consists of how
to store and manipulate knowledge in an information system in a formal way so that it
may be used by mechanisms to accomplish a given task. Examples of applications are
expert systems, machine translation systems, computer-aided maintenance systems
and information retrieval systems (including database front-ends).

Some people think it would be best to represent knowledge in the same way that it is
represented in the human mind, which is the only known working intelligence so far,
or to represent knowledge in the form of human language. Unfortunately, we don’t
know how knowledge is represented in the human mind, or how to manipulate human
languages in the same way as the human mind.

For this reason, various artificial languages and notations have been proposed for
representing knowledge. They are typically based on logic and mathematics, and have
easily parsed grammars to ease machine processing.

The recent fashion in knowledge representation languages is to use XML as the low-level syntax. This tends to make the output of these KR languages easy for machines
to parse, at the expense of human readability.

First-order predicate calculus is commonly used as a mathematical basis for these systems, to avoid excessive complexity. However, even simple systems based on this logic can be used to represent data which is well beyond the processing capability of current computer systems.
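
As a brief illustration (the predicates and the constant are invented for this example, not drawn from the unit), a rule, a fact and a derivable conclusion might be written in first-order predicate calculus as:

\forall x\,(\mathit{Manager}(x) \rightarrow \mathit{Employee}(x)), \quad \mathit{Manager}(\mathit{priya}) \ \vdash\ \mathit{Employee}(\mathit{priya})

A reasoning mechanism can derive the right-hand statement from the two on the left purely by symbol manipulation, which is what allows such knowledge to be used by mechanisms to accomplish a given task.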

Examples of notations:
• DATR is an example for representing lexical knowledge
• RDF is a simple notation for representing relationships between objects

Examples of artificial languages intended for knowledge representation include:


• CycL
• Loom
• OWL
• KM

Techniques of knowledge representation


Semantic networks may be used to represent knowledge. Each node represents a
concept and the arcs are used to define relations between the concepts.
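
As a minimal illustration of this idea, the following Python sketch stores a semantic network as a list of labelled arcs; the concepts and relations are invented for the example.

# A minimal sketch of a semantic network: concept nodes connected by
# labelled arcs. The concepts and relations here are illustrative only.
arcs = [
    ("canary", "is-a", "bird"),
    ("bird", "has-part", "wings"),
    ("canary", "colour", "yellow"),
]

def related(node, relation):
    # Follow every arc with the given label leaving the given node.
    return [o for s, r, o in arcs if s == node and r == relation]

print(related("canary", "is-a"))    # ['bird']
print(related("bird", "has-part"))  # ['wings']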

From the earliest times, the knowledge frame, or just frame, has been used. A frame consists of slots which contain values; for instance, the frame for house might contain a colour slot, a number-of-floors slot, etc.

Frames can behave something like object-oriented programming languages, with
inheritance of features described by the “is-a” link. However, there has been no small
amount of inconsistency in the usage of the “is-a” link: Ronald J. Brachman wrote a
paper titled “What IS-A is and isn’t”, wherein 29 different semantics were found in
projects whose knowledge representation schemes involved an “is-a” link. Other links
include the “has-part” link.
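
To make the frame idea concrete, here is a minimal Python sketch of frames with slots and "is-a" inheritance, using the house example above; it assumes just one of the many possible semantics of "is-a", namely that slot lookup falls back to the parent frame.

# A minimal sketch of a frame system. A frame holds slots; the "is-a"
# link lets a frame inherit slot values from its parent frame.
class Frame:
    def __init__(self, name, isa=None, **slots):
        self.name = name
        self.isa = isa        # parent frame reached via the "is-a" link
        self.slots = slots    # slot name -> value

    def get(self, slot):
        # Look the slot up locally, then follow the is-a chain upwards.
        if slot in self.slots:
            return self.slots[slot]
        if self.isa is not None:
            return self.isa.get(slot)
        raise KeyError(f"{self.name} has no slot {slot!r}")

building = Frame("building", floors=1, material="brick")
house = Frame("house", isa=building, colour="white", floors=2)

print(house.get("colour"))    # 'white'  (local slot)
print(house.get("floors"))    # 2        (local value overrides the parent)
print(house.get("material"))  # 'brick'  (inherited through "is-a")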

Frame structures are well-suited for the representation of schematic knowledge and
stereotypical cognitive patterns. The elements of such schematic patterns are weighted
unequally, attributing higher weights to the more typical elements of a schema. A
pattern is activated by certain expectations: if a person sees a big bird, he or she will classify it as a sea eagle rather than a golden eagle, given that his or her “sea schema” is currently activated.

Frame representations are more object-centred than semantic networks: all the facts and properties of a concept are located in one place, so there is no need for costly search processes in the database.

Frames suffer from the frame problem of knowledge linking.

A script is a type of frame that describes what happens temporally; the usual example
given is that of describing going to a restaurant. The steps include waiting to be seated,
receiving a menu, ordering, etc.

Check Your Progress 1

1) State whether True or False:


a) Knowledge Management seeks to make the best
use of the knowledge that is available to an organisation,
creating new knowledge in the process. True False
b) Explicit knowledge is represented by some artifact, such
as a document or a video, which has typically been created
with the goal of communicating with another person. True False
c) Explicit knowledge is the most important basis for the
generation of new knowledge. True False
d) Socialization Process is involved for transfer of Tacit to
Tacit information. True False
e) Internalization Process is involved for conversion/
transfer of Tacit to Explicit information. True False
f) Externalization Process is involved for conversion/
transfer of Explicit to Tacit information. True False
g) Combination Process is involved for conversion/
transfer of Explicit to Explicit information. True False

2) Answer the Following Question:


a) What are the factors which make knowledge management implementation
difficult in an organisation?
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

3.4 ARTIFICIAL INTELLIGENCE IN BUSINESS


Intelligence is the capability to solve perceptual problems. By the term “perceptual”,
we mean individual, special, random, fuzzy, sensory, and/or emotional. Solving such problems requires accumulation, induction and inference of experiences to form new knowledge.

Artificial intelligence (abbreviated AI) is defined as intelligence exhibited by an artificial entity. Such an entity is generally computer-controlled; therefore artificial intelligence in this context is pre-programmed. Humans use intuition and viewpoints to make judgments and choices instead of using precise rules or procedures. However, almost none of these human capabilities can be reproduced programmatically. In conclusion, we can say that no matter how powerful a computer might be, if it works only upon a given set of rules/programs, it is not regarded as having real intelligence.

Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to
answer diagnostic and consumer questions, handwriting, speech, and facial
recognition. As such, it has become a scientific discipline, focused on providing
solutions to real life problems. AI systems are now in routine use in economics,
medicine, engineering and the military, as well as being built into many common
home computer software applications, traditional strategy games like computer chess
and other video games.

Schools of thought
AI is divided roughly into two schools of thought: Conventional AI and
Computational Intelligence (CI).

Conventional AI mostly involves methods now classified as machine learning,
characterized by formalism and statistical analysis. This is also known as symbolic AI,
logical AI, neat AI and Good Old-Fashioned Artificial Intelligence (GOFAI).
Methods include:

• Expert systems: apply reasoning capabilities to reach a conclusion. An expert
  system can process large amounts of known information and provide
  conclusions based on them.
• Case based reasoning.
• Bayesian networks.
• Behaviour based AI: a modular method of building AI systems by hand.

Computational Intelligence involves iterative development or learning (e.g.,
parameter tuning in connectionist systems). Learning is based on empirical data
and is associated with non-symbolic AI, scruffy AI and soft computing. Methods
mainly include:

• Neural networks: systems with very strong pattern recognition capabilities.

• Fuzzy systems: techniques for reasoning under uncertainty; they have been
  widely used in modern industrial and consumer product control systems.

• Evolutionary computation: applies biologically inspired concepts such as
  populations, mutation and survival of the fittest to generate increasingly better
  solutions to the problem. These methods divide most notably into evolutionary
  algorithms (e.g., genetic algorithms) and swarm intelligence (e.g., ant
  algorithms).

Hybrid intelligent systems attempt to combine these two groups. For example, expert
inference rules can be generated through a neural network, or production rules can be
derived from statistical learning, as in ACT-R.

A promising new approach called intelligence amplification tries to achieve artificial
intelligence in an evolutionary development process as a side-effect of amplifying
human intelligence through technology.

History of commercial AI applications


It was not until the late 1970s that the first commercial AI-based system, XCON (an
Expert System), was developed. At that time, practical, commercial applications of
AI were still rare. In the early 1980s, Fuzzy Logic techniques were implemented on
Japanese subway trains and in a production application by a Danish cement
manufacturer. Commercial AI products were only returning a few million dollars in
revenue at this time.

Companies began to use Expert Systems, and AI groups were formed in many large
companies, in the mid-1980s. Expert Systems started to show limits on the number of
rules they could work with, and 1986 sales of AI-based hardware and software were
$425 million (WFMO, 2001). Likewise, interest developed in using Neural Nets in
business applications. By the end of the 1980s, Expert Systems were increasingly used
in industry, and other AI techniques were being implemented, often unnoticed but
with beneficial effect (WFMO, 2001). AI revenues reached $1 billion (MIT, Timeline
of AI, 2001).

In the early 1990s, AI applications such as automatic scheduling software, software to
manage information for individuals, automatic mortgage underwriting systems, and
automatic investment decision makers came into use. In the mid-1990s, AI software to
improve the prediction of daily revenues and staffing requirements for a business,
credit fraud detection systems, and support systems were developed and used. It was
not until the late 1990s that applications such as data mining tools, e-mail filters, and
web crawlers were developed and generally accepted.

Artificial intelligence methods for business use


We will discuss the AI methods used in business, namely Expert Systems, Artificial
Neural Networks (ANN) and Evolutionary Algorithms (EA), and then move on to
Hybrid Systems (AI methods used to complement these, or in combination with
them): Fuzzy Logic and Data Mining.

Expert System: One accepted definition of an Expert System is "a computer program
with the expertise embodied in it, based on interviews of the expert in that domain by
a knowledge engineer". During "knowledge acquisition" it is not only the
"knowledge" of experts that is cloned and built into these systems, but also their
intuition and the way that they reason, so that the best options can be selected under
any given set of circumstances.

An Expert System can be developed by: Expert System Shell software that has been
specifically designed to enable quick development, AI languages, such as LISP and
Prolog or through the conventional languages, such as Fortran, C++, Java, etc.
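
As a rough illustration of what such a shell provides, the following sketch implements
a tiny forward-chaining rule interpreter in Python. The rules and facts are invented for
the example and do not come from any commercial shell.

# A minimal forward-chaining rule interpreter, in the spirit of an
# Expert System shell. Rules and facts are invented for illustration.
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "see_doctor"),
]

def forward_chain(facts, rules):
    # Repeatedly fire every rule whose conditions are all satisfied,
    # until no new conclusion can be added.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest_pain"}, rules))
# The result contains both inferred conclusions.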

While the Expert System concept may sound futuristic, one of the first commercial
Expert Systems, called Mycin, was already in use in 1974. Mycin, which was created
by Edward H. Shortliffe at Stanford University, is one of the most famous Expert
Systems. Mycin was designed as a medical diagnosis tool: given information
concerning a patient's symptoms and test results, Mycin attempted to identify the
cause of the patient's infection and suggested treatments. It was observed by some
users that Mycin produced better analyses than medical students or practising doctors,
provided its limitations were observed. Another example of an Expert System is
Dendral, a computerized chemist. According to the Massachusetts Institute of
Technology, the success of Dendral helped to convince computer science researchers
that systems using heuristics were capable of mimicking the way human experts solve
problems.

As regards potential applications for an Expert System, these have been developed
for a variety of reasons, including: archiving rare skills, preserving the knowledge of
retiring personnel, and aggregating all of the available knowledge in a specific domain
from several experts (when no single expert has complete knowledge of that domain).
Perhaps an expert's knowledge is needed more frequently than the expert can handle,
or in places to which the expert cannot travel. An Expert System can also train new
employees or eliminate large amounts of the monotonous work humans do, thereby
saving the expert's time for situations requiring his or her expertise. In our opinion,
the only limit on the possible applications of stored knowledge in an Expert System is
what the mind can imagine.

We may conclude that the Expert System is an AI application that takes decisions
based on knowledge and inference (the ability to draw conclusions from that
knowledge), as defined by experts in a certain domain, in order to solve problems in
that domain. The Expert System normally falls under the definition of Weak AI, and
is one of the AI techniques that has been easiest for companies to embrace.
Commercial Expert Systems were developed during the 1970s, and continue to be
used by companies. One advantage of an Expert System is that it can explain the logic
behind a particular decision, why particular questions were asked, and/or why an
alternative was eliminated. That is not the case with other AI methods.

Artificial Neural Network: Sometimes the following distinction is made between the
terms “Neural Network” and “Artificial Neural Network”. “Neural network” indicates
networks that are hardware based and “Artificial Neural Network” normally refers to
those which are software-based. In the following paragraphs, “Artificial Neural
Network” is sometimes referred to as “Neural Network” or “Neural Computing”.
Neural Networks are an approach inspired by the architecture of the human brain.
The human brain contains a neural network comprising over 10 billion neurons; each
neuron builds hundreds or even thousands of connections with other neurons.

"Neural computing is defined as the study of networks of adaptable nodes which,
through a process of learning from task examples, store experiential knowledge and
make it available for use." As a Neural Network (NN) is designed, rather than
programmed, the system learns to recognize patterns. Learning is achieved through
repeated minor modifications to selected neuron weights (a weight represents the
importance attached to a neuron's input). A NN typically starts out with randomized
weights for all its neurons. This means that it does not "know" anything and must be
trained. Once a NN has been trained correctly, it should be able to find the desired
output for a given input; however, it cannot be guaranteed that a NN will always
produce the correct output pattern. A NN learns by either a supervised or an
unsupervised learning process.

i) The Supervised Learning Process: A supervised learning process has a target
pattern (desired output). While learning different input patterns, the weight values
are changed dynamically until their values are balanced, so that each input leads to
the desired output. There are two supervised learning algorithms: the forward and
the back-propagation learning algorithms. (A minimal sketch of this process is
given after item (ii) below.)

ii) An unsupervised Neural Network has no target outputs. During the learning
process, the neural cells organise themselves in groups, according to input
pattern. The incoming data is not only received by a single neural cell, but also
influences other cells in its neighbourhood. The goal is to group neural cells
with similar functions close together. Self-organisation Learning Algorithms
tend to discover patterns and relationships in that data.
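
As a minimal sketch of the supervised process described in (i), the fragment below
trains a single neuron with the classic perceptron weight-update rule. The training
data (the logical AND function), the learning rate and the number of epochs are
illustrative choices.

# Supervised learning on a single neuron (perceptron update rule).
# Weights start randomized and are repeatedly nudged so that each
# input pattern produces the desired (target) output.
import random

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND

w = [random.uniform(-1, 1) for _ in range(2)]   # randomized initial weights
b = random.uniform(-1, 1)
rate = 0.1                                      # illustrative learning rate

for epoch in range(100):
    for x, target in data:
        out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        err = target - out
        # Minor modification of the selected weights, proportional to error:
        w = [wi + rate * err * xi for wi, xi in zip(w, x)]
        b += rate * err

for x, target in data:
    out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", out, "(target:", target, ")")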

Artificial Neural Network Techniques: There are many kinds of Artificial Neural
Networks; no one knows exactly how many. This unit only examines the most
common ones: (i) Perceptron, (ii) Multi-Layer Perceptron, (iii) Backpropagation Net,
(iv) Hopfield Net, and (v) Kohonen Feature Map.

ANN as a method of Forecasting: Forecasting is essential to business, and NN does
this job better than traditional forecasting methods. The advantages of ANN over
traditional statistical forecasting methods are that an ANN does not have to fulfil any
statistical assumptions and can handle the non-linearities that are common in business
data. Further advantages are that an ANN is easy to learn and use, and normally
requires less data preparation.

We can conclude that ANN is inspired by the architecture of the human brain, and
learns to recognise patterns through repeated minor modifications to selected neuron
weights. There are many kinds of ANN techniques that are good at solving problems
involving patterns, pattern mapping, pattern completion, and pattern classification.

ANN pattern recognition capability makes it useful to forecast time series in business.
A Neural Network can easily recognise patterns that have too many variables for
humans to see. They have several advantages over conventional statistical models:
they handle noisy data better, do not have to fulfil any statistical assumptions, and are
generally better at handling large amounts of data with many variables.

A commonly cited problem with Neural Networks is that it is very difficult to
understand their internal reasoning process; however, this is not entirely accurate. It is
possible to get an idea of the elasticity of the variables an ANN has learned: by
changing one variable at a time and observing the changes in the output pattern, at
least some information regarding the importance of the different variables becomes
visible. Neural Networks can be very flexible systems for problem solving.

An Evolutionary Algorithm is "an algorithm that maintains a population of structures
(usually randomly generated initially) that evolves according to rules of selection,
recombination, mutation and survival, referred to as genetic operators". A shared
"environment" determines the fitness or performance of each individual in the
population. The fittest individuals are more likely to be selected for reproduction
(retention or duplication), while recombination and mutation modify those
individuals, yielding potentially superior ones.

Branches of Evolutionary Algorithms: There are currently four main paradigms in
EA research: Genetic Algorithm (GA), Genetic Programming (GP), Evolutionary
Programming, and Evolution Strategy.

• Genetic Algorithm (GA) is inspired by Darwin's theory of evolution: the
  solution to a problem solved by a genetic algorithm is evolved. The algorithm
  starts with a set of solutions (represented by chromosomes) called a population.
  Solutions from one population are taken and used to form a new population, in
  the hope that the new population will be better than the old one. Solutions are
  selected to form new solutions (offspring) according to their fitness: the more
  suitable they are, the more chances they have to reproduce. This is repeated
  until some condition (for example, a number of generations, or sufficient
  improvement of the best solution) is satisfied. (A minimal sketch of this loop is
  given after this list.)
• Genetic programming (GP) is a programming technique that extends the
Genetic Algorithm to the domain of whole computer programs. In GP,
populations of programs are genetically bred to solve problems.

• Evolutionary Programming and Evolution Strategy: Evolutionary Programming
  uses mutations to evolve populations. It is a stochastic optimisation strategy
  similar to the Genetic Algorithm, but it places emphasis on the behavioural
  linkage between parents and their offspring, rather than seeking to emulate
  specific genetic operators as observed in nature. Evolutionary Programming is
  very similar to Evolution Strategies, although the two approaches developed
  independently.
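
The following is a minimal Python sketch of the Genetic Algorithm loop described in
the first bullet above. The chromosome length, population size, mutation rate and the
toy fitness function (counting 1-bits) are all illustrative choices.

# A minimal Genetic Algorithm: evolve bit-strings toward all 1s.
# All sizes, rates and the toy fitness function are illustrative.
import random

LENGTH, POP, GENS = 20, 30, 50

def fitness(chrom):
    return sum(chrom)              # number of 1-bits

def select(pop):
    # Tournament selection: fitter individuals reproduce more often.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for gen in range(GENS):
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = select(pop), select(pop)
        cut = random.randint(1, LENGTH - 1)
        child = p1[:cut] + p2[cut:]                                # recombination
        child = [bit ^ (random.random() < 0.01) for bit in child]  # mutation
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "out of", LENGTH)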

Advantages and Disadvantages: Examples of problems where EAs have been quite
successful are timetabling, the Job-Shop Scheduling Problem (JSSP), finding the
most beneficial locations for offices, and typical Operational Research (OR) problems
with many constraints.

GA has proven to be well suited to the optimisation of specific non-linear
multivariable systems. GA is used in a variety of applications including scheduling,
resource allocation, training ANNs, and selecting rules for fuzzy systems. "GAs
should be used when there is no other known problem-solving strategy and the
problem domain is NP-complete. That is where GAs come into play, heuristically
finding solutions where all else fails." It is generally agreed that EAs are especially ill
suited for problems where efficient ways of solving them are already known.

We may conclude that the EA tries to mimic the process of biological evolution,
complete with natural selection and survival of the fittest. The four main paradigms
are Genetic Algorithm (GA), Genetic Programming (GP), Evolutionary Programming,
and Evolution Strategy. EA is a useful method of optimisation when other techniques
are not possible. EAs seem to offer an economic combination of simplicity and
flexibility, and may be the better method for finding quick solutions than the more
expensive and time-consuming (but higher quality) OR methods. However, a hybrid
system between OR and EA should be able to perform quite well.

It is expected that if a backward Evolutionary Algorithm were used on an accepted
OR solution, the human eye could then easily rearrange the initial string in a more
effective way. If the EA were then to run the string through the normal forward
process, the end result could be better than using the EA on an unrefined starting
string.

Hybrid System: More people have recently begun to consider combining the
approaches into hybrid ones. A Hybrid System is a system that uses more than one
problem-solving technique in order to solve a problem. There is a huge amount of
interest in Hybrid Systems, for example neural-fuzzy, neural-genetic, and fuzzy-
genetic hybrid systems. Researchers believe they can capture the best of the methods
involved, and outperform the solitary methods.

“Fuzzy Logic and Fuzzy Expert System” and “Data Mining” are deliberately placed
under the heading of Hybrid System. Fuzzy Logic is a method that is combined with
other AI techniques (Hybrid System) to represent knowledge and reality in a better
way. Data Mining does not have to be a Hybrid System, but usually is, for example,
IBM’s DB2 (Data Mining tool), which contains techniques (IBM, 2001) such as
Statistics, ANN, GA, and Model quality graphics, etc. Let us now take a closer look at
the methods.

Fuzzy Logic and Fuzzy Expert Systems: Fuzzy Logic resembles human reasoning,
but handles estimated information and vagueness in a better way. The answers to real-
world problems are rarely black or white, true or false, or start or stop. By using Fuzzy
Logic, knowledge can be expressed in a more natural way (fuzzy logic instead of
Boolean "crisp" logic).

i) Fuzzy Logic is a departure from classical two-valued sets and logic. It uses
"soft" linguistic system variables (e.g., large, hot, tall) and a continuous range
of truth values in the interval [0, 1], rather than strict binary (true or false)
decisions and assignments.

Fuzzy Logic is ideal for controlling non-linear systems and for modeling
complex systems where an inexact model exists, or in systems where ambiguity
or vagueness is common. Many commercial products available today use Fuzzy
Logic, such as washing machines and high-speed trains.

ii) Fuzzy Expert Systems: Fuzzy Logic is often combined with Expert Systems in
the so-called Fuzzy Expert Systems, which are the most common use of Fuzzy
Logic. These systems are also called "Fuzzy Systems" and use Fuzzy Logic
instead of Boolean (crisp) logic. Fuzzy Expert Systems are used in several
wide-ranging fields, including linear and nonlinear control, pattern recognition,
financial systems, operations research and data analysis. (A small sketch of
fuzzy truth values is given below.)
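
A minimal sketch of the [0, 1] truth values described in (i): a membership function
assigns a degree of "hotness" to a temperature, and a single fuzzy rule fires to that
degree. The linguistic term, the breakpoints and the rule are invented for illustration.

# Fuzzy logic sketch: truth values range over [0, 1] instead of {0, 1}.
# The membership breakpoints and the control rule are invented.

def hot(temp_c):
    # Degree to which a temperature is "hot" (ramp from 20 to 35 degrees).
    return min(1.0, max(0.0, (temp_c - 20) / 15))

def fan_speed(temp_c):
    # Rule: IF temperature is hot THEN fan speed is high.
    # The rule fires to the degree that its condition is true.
    return hot(temp_c) * 100          # percent of maximum speed

for t in (18, 25, 30, 40):
    print(t, "degrees -> hot =", round(hot(t), 2),
          "-> fan =", round(fan_speed(t)), "%")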

We may conclude that a Hybrid system uses more than one technique, such as neural-
fuzzy, neural-genetic, Fuzzy Expert System, Data Mining (most often), etc., to solve a
problem. Fuzzy logic is incorporated into computer systems so that they represent
reality better by using “non-crisp” knowledge. Often Fuzzy Logic is combined with
Expert Systems, the so-called Fuzzy Expert System or more simply, “Fuzzy System.”

Data Mining software most often uses various techniques, including Neural Networks,
statistical and visualization techniques, etc., to turn what are often mountains of data
into useful information. Data Mining does not always contain AI techniques. It is
quite possible that Data Mining will become a very useful tool for companies in the
competition for market share.

More examples of commercial use of Artificial Intelligence:

Expert Systems For Equipment Failure Diagnosis:


Many expert systems for diagnosis have been developed in maintenance management.
An example of an expert system for diagnosing ventilators is described below. Figure 4
shows the object (an air fan) to be diagnosed and its possible areas and causes of failure.
[Figure 4 is a labelled diagram of an air fan, showing vibration sensing points 1 to 4
placed along the fan, the transmission (coupling) and the motor, together with the
following table of failure portions and causes.]

Failure portion   Possible causes
Rotor wing        Unbalance, looseness of bolt, etc.
Rotor axis        Bending, resonance, etc.
Coupling          Misalignment, looseness of bolt, etc.
Motor rotor       Unbalance, bending of axis, etc.
Motor stator      Unbalance of winding, etc.
Cooling fan       Unbalance, etc.
Bearing           Bad lubrication, crack, wear, etc.
Base frame        Resonance, lack of stiffness, etc.
Bolt              Looseness, breaking, etc.
Base              Lack of stiffness, etc.

Figure 4: Failure analysis of an air fan

Rule 1 : If the velocities at measuring points 3 and 4 are abnormal, then the fan failure is expected.
Rule 2 : If the velocities at measuring points 2 and 3 are abnormal, then the transmission failure is expected.

Figure 5: Production rules based on Experts’ experience

[Figure 6 is a flow diagram of the inference process: the diagnosis starts from the
measured velocity and acceleration values and leads to one of the conclusions fan
failure, motor failure, coupling failure or bearing failure.]

Figure 6: Inference process

The acceleration and velocity of vibration at the sensing points 1, 2, 3, and 4 are
measured. The causalities between these measured values and failures are obtained by
using expert knowledge. This knowledge is expressed in a matrix and is transformed
into the production rules shown in Figure 5. The precise diagnosis is carried out based
on spectral analysis of the vibration data. The levels of the fundamental and higher
components of the data are calculated. The relationship between the level values and
failures is obtained by using expert knowledge and is represented by frames. Using
this knowledge source, the inference process proceeds as shown in Figure 6.
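
A rough sketch of how the production rules of Figure 5 might be encoded follows; the
numeric threshold that defines an "abnormal" velocity is invented, since the unit does
not specify one.

# Sketch of the air-fan diagnosis rules of Figure 5.
# The "abnormal" velocity threshold is an invented value.
ABNORMAL_VELOCITY = 4.5   # illustrative threshold

def diagnose(velocity):
    # velocity: dict mapping sensing point (1-4) to the measured value.
    abnormal = {p for p, v in velocity.items() if v > ABNORMAL_VELOCITY}
    findings = []
    if {3, 4} <= abnormal:
        findings.append("fan failure expected")            # Rule 1
    if {2, 3} <= abnormal:
        findings.append("transmission failure expected")   # Rule 2
    return findings or ["no failure indicated"]

print(diagnose({1: 1.2, 2: 2.0, 3: 5.1, 4: 6.3}))  # -> fan failure expected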

Predictive Maintenance in Place of Preventive Maintenance


With the help of AI, the importance of predictive maintenance has increased, although
time-based preventive maintenance has long been used as the basic method. Predictive
maintenance is based on the condition of the equipment: for example, we replace a
bearing when its vibration exceeds a certain limit, rather than replacing the bearing at
a fixed interval. With predictive maintenance, the extra time spent on breakdown
maintenance can be avoided. Moreover, when the deterioration is slow, predictive
maintenance provides substantial time and cost advantages over preventive
maintenance performed at a set time interval. Thus, as demonstrated, the advantages
of predictive maintenance include:

• replacement period prolongation
• safety improvement
• accident prevention
• reliability improvement

Diagnostic techniques based on machine condition are used to detect the degradation
of any equipment. In Japan, these techniques have been known since the 1960s,
particularly in the steel manufacturing industry. Here are some examples.

Machinery and equipment

• Fluid machines
• Electric rotation machines
• Mills
• Stationary electric machines
• Motors

• Blowers
• Pumps
• Towers
• Drums

Sensing place

• Bearing portions
• Tanks
• Shafts
• Pipes

Condition-based diagnosis techniques have also been used to identify the mode of
failure: abnormal vibration, cracks found by nondestructive examination (ultrasonic or
X-ray), corrosion, or degradation of insulation. A very popular technique is to detect
abnormal vibration in the bearing portions and the shaft of rotating machinery. The
level of vibration in the machine axis is measured using an acceleration pick-up at
regular intervals, thus revealing any tendency for vibration to increase. Many rotating
machines are maintained using this method. The precondition for a condition-based
strategy is to make the deteriorating conditions more transparent and predictable. This
is where Artificial Intelligence can be used to bring competitive advantages.
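
A minimal sketch of this condition-based idea, assuming periodic vibration readings
and an invented alarm limit: fit a simple trend to the recent readings and flag the
bearing before the limit is reached.

# Predictive maintenance sketch: flag a bearing for replacement when
# the vibration trend is about to cross a limit. Readings and the
# limit are invented figures.
readings = [1.1, 1.2, 1.2, 1.4, 1.6, 1.9, 2.3]   # periodic vibration levels
LIMIT = 3.0

recent = readings[-4:]
slope = (recent[-1] - recent[0]) / (len(recent) - 1)  # rise per interval

if slope > 0:
    intervals_left = (LIMIT - readings[-1]) / slope
    if intervals_left < 5:
        print(f"Schedule bearing replacement within "
              f"{intervals_left:.1f} measurement intervals")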

Customer Relationship Management “Behaviour Analysis”


Behaviour analysis, the prediction of a customer's possible behaviour and the
corresponding planning of business activity, can definitely boost business and
profitability in activities such as retail business at a departmental store, credit card
issuance and collection of dues, insurance policy coverage, and mortgages. Such
analysis is successfully being carried out using AI techniques. The benefit of these
new systems is that they reduce the amount of time necessary to approve a loan, by
using the computer to take decisions based on the values of the variables. Without
human influence in the decision-making process, it becomes a very clean decision,
without emotion or preconceived ideas. Whether to apply for or extend a loan is often
a critical decision for a company or an individual. With this new fast approval, are
companies not making it too easy to obtain loans? Perhaps the time needed before AI
came on the scene gave the borrower time to think it through carefully. However, the
methods exist and are in use at this moment to make such decisions. Evidently AI has
penetrated the business of credit card issuers, collectors, insurance and mortgages.

Customer Relationship Management “Support & Marketing”


The Office Assistant in Microsoft's Office packages uses AI and has a broad installed
base today, covering more than 90 percent of the Windows and Macintosh market; at
the very least, this proves that support software containing AI has already penetrated
the market. Advisory Expert Systems have been on the market for a long time.
At HP, the interactive advice system CAST/BW provides quick, accurate hardware
sizing, network configuration, and usage recommendations. The system turns expert
knowledge from SAP, HP internal competency centres, the HP Enterprise Server
Group, and existing SAP Business Warehouse implementations into an easy-to-use
advisory tool; the Expert System functions in the same way as working directly with
HP. Such advisory tools, together with the use of robots and marketing agents,
support the assumption that support systems based on AI have already entered the
business market and are frequently used.

Company Control
There are several AI-based programs that control what employees do on the Internet,
and what they send and receive in their e-mail at work. It is also believed that while
preventing access to inappropriate web sites may be acceptable, checking employees'
e-mail is going one step too far. Unless a reasonable limit is set, we will have a "Big
Brother" society. Furthermore, with all of the electronic information that companies
receive today, it is expected that intelligent agents will be used more and more often to
process information in automated and customized ways to ease information overload.

Production Management
AI software that uses Genetic Algorithms to "breed" factory schedules generates far
better schedules than humans can produce. Case studies have shown that Data Mining
with ANN helps solve some of the processing and interpretation problems of
companies, and has even played a key role in discovering oil fields.

Finance Management
Some believe that computers with Neural Networks are better at selecting stocks than
people are. However, finding information regarding Neural Networks’ success in the
field of finance is difficult, most likely because successful systems are being treated as
company secrets. The discussion of AI in the financial context has generally indicated
that AI techniques are somewhat useful to most financial applications. AI techniques
should catch on in coming years given the growing complexity of the markets, which
will require more computing power and analysis to deal with information overload. It
seems that many systems are best used as assistants to an existing team of experts
rather than on their own.

We may conclude that AI has gained a foothold in the world of business, and that
foothold is getting larger as time goes by. One question which comes to mind is why
it has taken so long for these methods to become visible in business applications.
There appear to be four possible answers to that question. First, it seems that the
development of processing power has been a catalyst that made it possible for
AI-based systems to gain a foothold in the business world; it is only lately that
affordable computers with sufficient processing power have become available to
companies. Second, AI often competes with business methods that have been quite
successful and in use for very long periods of time. So, why risk changing a working
concept, companies may think. There are also interpretation difficulties in some AI
systems: for instance, old tried-and-true statistical methods win over ANN simply
because people are unwilling to use a system where they do not see the effect of each
variable (i.e., the "black box" problem). This can be a high threshold to overcome.
Third, many AI applications involve large investments of money, and failure can also
be very costly; this makes companies circumspect regarding investment decisions.
Finally, the fourth reason is simply that new technologies always seem to have a
threshold for acceptance. Furthermore, many critics believe that AI has not fulfilled its
promise, yet they do not discard it as a method. It is a fact that companies are using AI
and earning money as a result.

3.5 BUSINESS ANALYTICS


Business analytics is a term used for sophisticated forms of business data analysis.
Analytics closely resembles statistical analysis and data mining, but tends to be based
on physics modeling involving extensive computation.

Example: A common application of business analytics is portfolio analysis. Let us
take the case of a bank or lending agency which has a collection of accounts: some
from wealthy people, some from middle-class people, and some from poor people.
The question is how to evaluate the whole portfolio.

The bank can make money by lending to wealthy people, but there are only few
wealthy people. The bank can make more money by also lending to middle class
people. The bank can make even more money by lending to poor people.
Note that poorer people are usually at greater risk of default. Note too that some poor
people are excellent borrowers, and that a few poor people may eventually become
rich and will reward the bank for its loyalty.

The bank wants to maximize its income, while minimizing its risk, which makes the
portfolio hard to understand.

The analytics solution may combine time series analysis with many other issues in
order to make decisions on when to lend money to these different borrower segments,
or decisions on the interest rate charged to members of a portfolio segment to cover
any losses among members in that segment.
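
A crude sketch of this trade-off, assuming that each segment can be summarized by an
interest rate, a default probability and an amount lent (all figures invented, and the full
principal is assumed lost on default):

# Portfolio sketch: expected income per borrower segment.
# All rates, default probabilities and exposures are invented.
segments = {
    # name: (interest rate, default probability, amount lent)
    "wealthy": (0.06, 0.01, 1_000_000),
    "middle":  (0.09, 0.04, 3_000_000),
    "poor":    (0.15, 0.12, 1_500_000),
}

for name, (rate, p_default, amount) in segments.items():
    # Interest earned on survivors minus principal lost on defaulters:
    expected = amount * ((1 - p_default) * rate - p_default)
    print(f"{name:8s} expected income: {expected:12,.0f}")

With these invented figures, the poor segment carries the highest nominal rate but the
thinnest expected margin, which is exactly why the portfolio is hard to evaluate by
inspection.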

Business analytics as Change Manager: The best hedge against an uncertain future
is figuring out how to avoid being surprised when the unexpected happens. Better
yet, business executives need to be able to quickly take advantage of changing
conditions with new products and services. To accomplish these somewhat elusive
goals, companies must constantly improve their ability to identify, classify, and
intelligently analyse all available information.

A company's enterprise-information assets, particularly customer data, can be vast,
but all too often they're squirreled away in application silos. The marketing
department has customer demographics; the accounting department oversees purchase
histories, payment frequency, and contract terms; and the customer-service department
maintains problem reports.

Web sites are adding to the mounds of customer data that companies have to deal
with. Web managers can monitor click stream log files to identify how customers
navigate a site, where they came from, how long they were there, what they
purchased, and where they headed afterward.

The goal of high-end business analytics is to turn these individually useful but often
marginalized data resources into something that lets business managers immediately
grasp the dynamic state of their business. This includes the current and projected
status of their customers by group and individual needs. Ideally, analytics lets
companies combine demographic and behavioural data with sales information to
determine how best to leverage the customer relationship.

A company’s ultimate goal is to precisely target new and existing goods to those
individuals and groups based on the profiles gleaned from the analytic process.
Corporate decision makers need to be increasingly attuned to business opportunities
that arise whenever a customer, business, or industry factor changes. Exploiting
change is the role of business analytics.

Most companies have Web-log data that's sparse and discrete, and a wealth of
transaction data, in some cases going back 20 years or more, that's rich and
continuous. The nirvana here is to integrate these data sources in a meaningful way so
that a company can tell what its customers are doing now and have done in the past.
Business analysts can take that data, do a little trend analysis, and decide how best to
pitch new products to customers.

There are costs associated with integrating all this data. The investment in gathering
the data and aggregating it in meaningful ways must yield a quantifiable business
benefit. You can gather all kinds of information, but if you don’t have context, if you
don’t advance a business hypothesis and generate a good strategy, it’s pretty much
worthless information.

Reviewing historical and current data over time can optimally yield enough trend
information to feed statistical models that let trained users predict events and trends.
However, there’s a reticence on the part of decision makers to trust “black box”
business models used by consultants and in some software tools without understanding
the parameters being measured.

Most of the data is extremely diverse and often in a constant state of flux, so there’s
usually no single, specific analytic technique appropriate to your data at a particular
stage of its evolution. The upshot is that predictive models must be appropriate to the
task, highly customized to specific business conditions, and targeted to address
specific areas of interest or answer particular questions.

Rather than just having high-end modeling at one end of the spectrum and static
reports at the other, what's needed is analytics and analytic applications that watch for
change and initiate actions at both an individual and a group level. Analytics are most
useful when the application proactively lets the right people know when relevant
business factors change.

One way of spotting trends is to be able to measure just the part of the business that’s
changing. The future level of a lake can be predicted based on how much water is
going in and how much is going out. The same analogy applies to business customers.
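
The lake analogy can be written as a one-line balance and applied directly to a
customer base; the figures below are invented.

# Flow-balance sketch: project a customer base the way one would a
# lake level, from what flows in and what flows out. Figures invented.
customers = 10_000
new_per_month, churn_rate = 400, 0.03   # inflow and outflow

for month in range(1, 7):
    customers = customers + new_per_month - customers * churn_rate
    print(f"month {month}: {customers:,.0f} customers")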

It has been observed that half of companies perform daily data warehouse updates,
40% have weekly or monthly updates, and 10% have real-time or near-real-time
updates. It has also been observed that many of the companies performing weekly or
monthly updates are apt to shift en masse to performing daily or continuous updates as
a result of evolving market and competitive conditions.

The need to act upon information is a key driver of high-end analytic applications.
Folding business intelligence back into the business decision-making process,
operational systems, or human interaction is the primary way to make sure that a
company can respond appropriately to changes in customer and market conditions. To
bring about this organisational dynamic, the analytic results must be available to all of
the people within a company. Traditionally, a lot of information gleaned from a
company's business-intelligence tools went to upper management, but it didn't
percolate quickly down into the trenches where it could be acted upon by the rank and
file. However, the percolated information needs to be customized for the company, as
a company may not want to send all information to every employee.

Business analytics is moving beyond data warehousing, which a limited number of
experts usually use, to include other components, such as publish-and-subscribe
technology to distribute market intelligence to the various employees who need it. If
certain events happen, the affected parties are notified in a timely fashion. This
represents a kind of opting-in capability for specific kinds of information that helps
mid- and low-level decision makers get the data they need to take action more quickly.

Improved search and text-mining techniques are aiding the quest for timely
information. Predictive modeling applies in this scenario as well. This form of
business modeling can help present information based on particular users' past
interests and help predict what a manager might want or need to know in the future.

It's all well and good to have a group of statisticians sitting in their ivory cubicles, and
it's quite true that companies still need those people to do the data mining today. But if
business intelligence is to be more widely used across the enterprise, people must be
able to act upon it in a timely fashion and fold the information back into the business
process. Critical information about the state of the business must be distributed
quickly, efficiently, and appropriately to those people and departments that can affect
the company's adaptability.

These are goals that IT departments have avidly pursued but have hitherto never been
able to grasp fully. Fortunately, today’s advanced analytics tools point to a time when
compiling data, monitoring near-real-time business events, and synthesizing that data
via data-mining and other advanced techniques will let companies respond almost
immediately to perceived or predicted changes in market conditions. Early versions of
these tools already let companies make business forecasts, optimize resources on the
fly, and suggest appropriate actions with unprecedented speed, agility, and accuracy.

Companies should look twice at implementing traditional business-intelligence
solutions and look more toward solutions that deliver analytics at the point of a
business process.

A classic example of this is seen in inventory reordering systems based on supply-and-
demand forecasts. Market data is fed back into the system to determine where, when,
and how much inventory should be reordered. This type of analysis results directly in
a modification of the business processes. The trick is to incorporate this intelligence
into both tactical and strategic decision-making, with managers making real-time
decisions.

The latest challenge in business analytics is to capture external data from sources that
companies haven’t really considered before. If there’s a cliché in the making here, it’s
that data abounds, but knowledge acquisition takes a lot more work. Many alternate
sources of data are available via the Web. The number of customer-data sources
continues to expand dramatically. The key will be to determine which of these data
points are more relevant and to figure out organisational processes that will permit
appropriate data to be fed to the analytics engine so a company or department can
respond to it in real time and feed it into ongoing projects, sales efforts, and marketing
campaigns.

Neural networks are flexible models that can be applied to predictive analysis and
pattern-recognition problems. You want to control for different factors and see what's
working for you and what's giving you the most bang for your buck. Well-targeted
analytics will provide yield indicators and trends that the company can exploit to its
advantage.

Another trend that we're seeing is a kind of cross-disciplinary awareness. Companies
that have statisticians with different bases of experience, or analysts who are able to
make analogies more easily than most of us, have recognised that there are large data
issues in fields such as genomics. Both genetic researchers and companies with
terabyte-level customer-relationship management systems share some common data
management issues. Each group could learn from the other and share common
analytical approaches.

Another way business-intelligence tools are evolving is in interactive analytics, in
which users are able to slice and dice data and also carry out what-if scenarios. Instead
of driving the enterprise by looking in the rearview mirror, you're looking forward to
what might happen and can strategize on how to reach that outcome. Interactive
analytics is an area where many conventional business-analytical tools fall short.
Organisations want a single view of the customer, particularly given current economic
trends. This requires a move away from point solutions to more integrated systems.
It's a giant problem to access all of the disparate data sources scattered around the
organisation, and the reason it's a giant problem is that many companies wouldn't
even know what to do with the data once it's in one place.

This was characteristic of the naiveté of early data warehouse projects in the 1990s,
many of which failed. "We're seeing much more sophistication around the way high-
end business analytics are approached today. We don't really just want to bring a lot
of data together; we actually want to work backward from the questions we're trying
to answer."

There's also the issue of corporate management developing a level of trust in
advanced analytical tools. Being comfortable with driving their business based on
associations that aren't easily visible to the human eye doesn't come easily to many
CEOs and business managers. As well, many VPs and marketing managers approach
marketing as more of an art than a science and are somewhat resistant to analytic
technologies.

The acceptance problem is twofold. The tools and techniques are still complex and
difficult to use; companies require the guys with the lab coats to make these tools
hum. Moreover, the analytic results that derive from them are often barely auditable,
especially when employing things such as neural networks. The sheer sophistication
of the tools makes it difficult for business managers to understand how the software
came to a particular conclusion. As a result, decision makers often feel uncomfortable
implementing results from analyses that they cannot audit to figure out what the
assumptions are and how the results were derived.

Trust will emerge when people take a couple of these recommendations, implement
them, and see a positive impact on the bottom line. The ultimate outcome of high-end
analytics will be systems that can process diverse business data, draw conclusions, and
alert managers to proposed actions and outcomes.

Hopefully, the impact on business will be companies that are more agile and better
informed about all the conditions both within and outside their corporate boundaries.

Available Business Analytics: Such suites are being offered by several vendors.
Vendors claim that these suites identify trends, perform comparisons and highlight
opportunities in various business functions like supply chain management, even when
large amounts of data are involved. These suites combine technology with human
effort and help decision-makers in business areas such as sourcing, inventory
management, manufacturing, quality, sales and logistics. Business Analytics solutions
leverage investments made in enterprise applications, web technologies, data
warehouses and information obtained from external sources to locate patterns among
transactional, demographic and behavioural data.

Vendors claim that with wafer-thin margins, managing costs is an ongoing challenge.
Business analytics solutions being offered can help managers in sales, marketing,
customer support, supply chain planning and financials understand and respond to key
issues, such as:

• Correctly analysing barriers to market entry, which vary widely with each
  product,

• Responding to competition within a well defined supply tier structure,

• Dealing with the high threat of product substitutes,

• Continually driving product innovation,

• Managing product lifecycles to maximize returns.

Business analytics solutions available have capabilities for:

Executive Information Systems (EIS): Executive dashboards with drilldown analysis
capabilities that support decision-making at an executive level.

Online Analytical Processing (OLAP): OLAP tools are mainly used by analysts.
They apply relatively simple techniques such as deduction, induction, and pattern
recognition to data in order to derive new information and insights.

Standard reports are designed and built centrally and then published for general use.

There are three types of standard reports:

• Static reports or canned reports: Fixed-format reports that can be generated on
  demand.

• Parameterized reports: Fixed layout reports that allow users to specify which
data are to be included, such as date ranges and geographic regions.

• Interactive reports: These reports give users the flexibility to manipulate the
structure, layout and content of a generic report via buttons, drop-down menus
and other interactive devices.

Ad-hoc reports: generated by users as a “one-off” exercise. The only limitations are
the capabilities of the reporting tool and the available data.
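
A minimal sketch of a parameterized report of the kind listed above: the layout is
fixed, while the date range and region are supplied by the user. The sales records are
invented.

# Parameterized report sketch: fixed layout, user-chosen parameters.
from datetime import date

sales = [                         # invented sales records
    (date(2024, 1, 5), "North", 1200),
    (date(2024, 1, 9), "South", 800),
    (date(2024, 2, 2), "North", 1500),
]

def sales_report(start, end, region):
    rows = [(d, amt) for d, r, amt in sales
            if start <= d <= end and r == region]
    print(f"Sales report: {region}, {start} to {end}")
    for d, amt in rows:
        print(f"  {d}  {amt:8,}")
    print(f"  Total: {sum(a for _, a in rows):,}")

sales_report(date(2024, 1, 1), date(2024, 1, 31), "North")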

Advanced Analytics: Advanced statistical and analytical processing such as
correlations, regressions, sensitivity analysis and hypothesis testing. (A small example
is given after the list below.)

• Empower everyone: Provide each person with relevant, complete information
  tailored to their role.

• Drive more effective actions: Guide users toward more intelligent actions and
  customer interactions.

• Do it in real time: Use real-time intelligence to drive better business outcomes
  and operational results every second of every day.
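
As a small example of the statistical processing mentioned under Advanced Analytics
above, the following plain-Python sketch computes a Pearson correlation and a
least-squares trend line on invented advertising and sales figures.

# Advanced analytics sketch: correlation and a least-squares trend line.
# The advertising spend and sales figures are invented.
spend = [10, 12, 15, 17, 20, 24]
sales = [40, 44, 52, 55, 63, 70]

n = len(spend)
mx, my = sum(spend) / n, sum(sales) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(spend, sales))
sxx = sum((x - mx) ** 2 for x in spend)
syy = sum((y - my) ** 2 for y in sales)

r = sxy / (sxx * syy) ** 0.5     # Pearson correlation coefficient
slope = sxy / sxx                # regression: sales = intercept + slope*spend
intercept = my - slope * mx
print(f"r = {r:.3f}; sales = {intercept:.1f} + {slope:.2f} * spend")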

Comprehensive Pre-built Analytic Applications


The commercial Business Analytics suites available offer a comprehensive set of
industry-specific analytic applications to optimize performance for sales, service,
marketing, contact center, finance, supplier/supply chain, HR/workforce, and
executive management. These pre-built analytic applications integrate and transform
data from a range of enterprise sources (including Siebel, Oracle, PeopleSoft, SAP,
and others) into actionable intelligence for each business function and user role. A
few pre-built applications are described below:

• Sales Analytics
• Service and Contact Center Analytics
• Marketing Analytics
• Supply Chain Analytics
• Financial Analytics
• Workforce Analytics
• Real-Time Decisions Solutions

Stages of a Business Analytics Initiative

Figure 7 depicts the various stages in a data warehouse and business analytics
initiative. While data analytics comprise the service layer for the applications, the
other stages are equally important. Analytical services have varying applicability
across the high-tech value chain.

Examples of Pre-built Analytic Applications: As claimed by vendors, the Business
Analytics platform provides the full range of enterprise business intelligence
functionality: interactive dashboards; full ad hoc queries; proactive intelligence and
alerts; advanced reporting; and predictive analytics, all delivered on one common,
modern Web architecture. A business intelligence (BI) tool enables the enterprise to
meet its BI requirements, such as common, shared metadata; heterogeneous data
source access; and server-centric Web architectures. Business Analytics suites make
available a robust BI platform which is mature and proven in these areas, and provide
the next level of enterprise BI functions.

[Figure 7 is a flow diagram: internal and external data sources feed a staging area
through bulk and real-time transfer, with error handling; staged data is loaded into
quantitative and relational data storage for analysis; data mining, qualitative and
advanced analysis operate on that storage; and the results reach the user through
reporting and presentation clients such as a portal, a mining client and a browser.]

Figure 7: Business analytics – staged representation

Supply Chain Analytics
It enables more effective management of the complexities of the organisation's supply
chain. A typical Supply Chain Analytics provides several dashboards and several
pre-built reports that deliver comprehensive insight across sales, logistics,
procurement, manufacturing, and quality assurance departments. This helps to:

• Better manage customer commitments while optimising inventory and supplier
  spend,

• Gain up-to-the-minute insight into inventory, sourcing, and supplier performance.

The following examples of pre-built applications are from Siebel Business Analytics:

Supplier Sourcing Analytics

• Gain detailed visibility into direct and indirect spend,


• View product delivery schedules and payments,
• Identify opportunities to consolidate spend and reduce costs.

Supplier Performance Analytics

• Determine who are the best and worst performing suppliers,


• Monitor price, delivery, and product quality performance,
• Manage supply to minimize business disruption risks.

Inventory Analytics

• Gain visibility into inventory activities to minimize unnecessary expenditures,


• Optimize inventory levels to conserve working capital,
• Ensure customer satisfaction through better product availability.

Enterprise Sales Analytics

Enterprise Sales Analytics provides several key performance indicators and large
number of reports delivered in several customizable dashboards. A typical Enterprise
Sales Analytics enables sales managers and front-line representatives to dramatically
improve sales effectiveness by:

• Providing real-time, actionable insight into every sales opportunity at the point
of customer contact,
• Closing business faster and increasing overall sales revenue,
• Confidently providing more accurate and up-to-date sales forecasts,
• Quickly pinpointing problems and opportunities to close more business.

Sales Analytics

• Monitor status and take actions to ensure quota achievement,
• Maximize revenue through better cross-selling and up-selling,
• Shorten sales cycles and increase win rates.

Sales Revenue Analytics

Track sales orders, invoicing, and revenue. Increase customer value and follow-on
sales potential. Proactively manage order pipeline and focus resources to maximize
sales revenue.

Sales Revenue and Fulfilment Analytics

• Ensure faster customer order fulfilment and revenue recognition,


• Increase customer satisfaction and manage expectations,
• Improve product delivery cycle times,
• Achieve more effective backlog management and sales revenue attainment.

Sales Revenue and Pipeline Analytics

• Accelerate lead to cash cycle through visibility across entire sales process,
• Achieve comprehensive view of customer orders and invoices,
• Maximize sales throughput.

Financial Analytics

This enables understanding and managing the key drivers of shareholder value and
profitability. A typical Financial Analytics helps front-line managers improve
financial performance through complete, up-to-the-minute information on their
department’s expenses and revenue contribution. Users will benefit from:

• Up-to-the-minute information enabling financial managers to take actions that
  improve cash flow,
• Lower costs and increased profitability,
• More accurate, timely, and transparent financial reporting.

Financial Analytics features a large number of best-practice-based key performance
indicators and several reports.

Payables Analysis

• Assess cash management effectiveness,


• Ensure that strategic suppliers receive timely payments,
• Monitor operational effectiveness of the payables department in ensuring lowest
transaction costs.

Receivables Analysis

• Effective working capital management by monitoring DSOs and cash cycles,


• Identify past due accounts and manage collections,
• Manage and control receivables risk.

General Ledger Analysis

• Perform faster end-of-period closing,


• Manage financial performance across locations, customers, products, and
territories,
• Receive real-time alerts on material events that may impact financial condition.

Profitability Analysis

• Identify your most profitable customers, products, and channels,


• Understand profitability drivers across regions, divisions, and profit centres,
• Gain visibility into cost drivers and take actions to improve profitability.

Marketing Analytics

This enables maximum results from marketing investments. A typical analytic
provides the entire marketing organisation with a complete, up-to-the-minute picture
of customer preferences, buying behavior, and profitability. Marketing Analytics helps
to:
• Develop closer, more valuable customer and prospect relationships,
• Improve marketing effectiveness,
• Maximize the return on marketing investment.

Marketing Planning Analytics

• Achieve better campaign response rates,


• Profile customers for more effective event-based promotions,
• Allocate resources more effectively by identifying what drives campaign results.

Campaign Performance Analytics

• Track and measure campaign effectiveness in real time,


• Understand factors that drive campaign results and lead conversion rates,
• Compare individual campaign results to target metrics.

Customer Insight Analytics

• Understand product affinity for targeted promotions,


• Profile customers' buying behaviour for more effective promotions,
• Gain better insight into segmentation characteristics,
• Understand which offers have the greatest impact on customer behaviour.

3.6 BUSINESS INTELLIGENCE


The term business intelligence (BI) typically refers to a set of business processes for
collecting and analyzing business information. This includes the technology used in
these processes, and the information obtained from these processes.

BI business processes

Organisations typically gather information in order to assess the business environment,
covering fields such as marketing research, industry or market research, and
competitor analysis. Competitive organisations accumulate business intelligence in
order to gain sustainable competitive advantage, and may regard such intelligence as a
valuable core competence in some instances.

Generally, BI-collectors glean their primary information from internal business
sources. Such sources help decision-makers understand how well they have
performed. Secondary sources of information include customer needs, customer
decision-making processes, the competition and competitive pressures, conditions in
relevant industries, and general economic, technological, and cultural trends. Industrial
espionage may also provide business intelligence by using covert techniques; a gray
area exists between "normal" business intelligence and industrial espionage.

Each business intelligence system has a specific goal, which derives from an
organisational goal or from a vision statement. Both short-term goals (such as the
quarterly numbers reported to the share market) and long-term goals (such as
shareholder value, target industry share/size, etc.) exist.

BI technology

Some observers regard BI as the process of enhancing data into information and then
into knowledge. Persons involved in business intelligence processes may use
application software and other technologies to gather, store, analyze, and provide
access to data, and present that data in a simple, useful manner. The software aids in
business performance management, and aims to help people make "better" business
decisions by making accurate, current, and relevant information available to them
when they need it.

Some people use the term “BI” interchangeably with “briefing books” or with
“executive information systems”, and the information that they contain. In this sense,
one can regard a business intelligence system as a decision-support system (DSS).

BI software types
People working in business intelligence have developed tools that ease the work,
especially when the intelligence task involves gathering and analyzing large quantities
of unstructured data. Each vendor typically defines Business Intelligence in its own
way, and markets tools to do BI the way that it sees it.

Business intelligence includes tools in various categories, including the following:


• AQL - Associative Query Logic
• Scorecarding
• Business Performance Management and Performance Measurement
• Business Planning
• Business Process Re-engineering
• Competitive Analysis
• Customer Relationship Management (CRM) and Marketing
• Data mining (DM), Data Farming, and Data warehouses
• Decision Support Systems (DSS) and Forecasting
• Document warehouses and Document Management
• Enterprise Management systems
• Executive Information Systems (EIS)
• Finance and Budgeting
• Human Resources
• Knowledge Management
• Mapping, Information visualization, and Dashboarding
• Management Information Systems (MIS)
• Geographic Information Systems (GIS)
• Online Analytical Processing (OLAP) and multidimensional analysis;
sometimes simply called Analytics (based on the so-called hypercube or cube)
• Statistics and Technical Data Analysis
• Supply Chain Management/Demand Chain Management
• Systems intelligence
• Trend Analysis
• User/End-user Query and Reporting
• Web Personalization and Web Mining
• Text mining.

History
Prior to the start of the Information Age in the late 20th century, businesses sometimes
struggled to collect data from non-automated sources. Businesses then lacked the
computing resources to properly analyze the data, and often made business decisions
primarily on the basis of intuition.

As businesses started automating more and more systems, more and more data became
available. However, collection remained a challenge due to a lack of infrastructure for
data exchange or to incompatibilities between systems. Analysis of the data that was
gathered and reports on the data sometimes took months to generate. Such reports
allowed informed long-term strategic decision-making. However, short-term tactical
decision-making continued to rely on intuition.

In modern businesses, increasing standards, automation, and technologies have led to
vast amounts of data becoming available. Data warehouse technologies have set up
repositories to store this data. Improved Extract, Transform, Load (ETL) and, more
recently, Enterprise Application Integration tools have increased the speed of
collecting data. OLAP reporting technologies have allowed faster generation of new
reports which analyze the data. Business intelligence has now become the art of
sifting through large amounts of data, extracting pertinent information, and turning
that information into knowledge upon which actions can be taken.

Business intelligence software incorporates the ability to mine data, analyze, and
report. Some modern BI software allows users to cross-analyze and perform deep data
research rapidly for better analysis of sales or performance at an individual,
department, or company level. In modern applications of business intelligence
software, managers are able to quickly compile reports from data for forecasting,
analysis, and business decision making.

Indicators
BI often uses Key performance indicators (KPIs) to assess the present state of business
and to prescribe a course of action. More and more organisations have started to make
more data available more promptly. In the past, data only became available after a
month or two, which did not help managers adjust activities in time to hit share
market targets. Recently, banks have tried to make data available at shorter intervals
and have reduced delays.

The KPI methodology was further expanded with the Chief Performance Officer
methodology which incorporated KPIs and root cause analysis into a single
methodology.

KPI example
For example, for businesses which have a higher operational/credit risk loading (such
as credit cards and "wealth management"), a large multi-national bank makes
KPI-related data available weekly, and sometimes offers a daily analysis of numbers.
This means data usually becomes available within 24 hours, necessitating automation
and the use of IT systems.

Designing and implementing a business intelligence programme


When implementing a BI programme one might like to pose a number of questions
and take a number of resultant decisions, such as:

• Goal Alignment queries: The first step determines the short and medium-term
purposes of the programme. What strategic goal(s) of the organisation will the
programme address? What organisational mission/vision does it relate to? A
crafted hypothesis needs to detail how this initiative will eventually improve
results / performance (i.e., a strategy map).

• Baseline queries: Current information-gathering competency needs assessing.
Does the organisation have the capability of monitoring important sources of
information? What data does the organisation collect, and how does it store that
data? What are the statistical parameters of this data, e.g., how much random
variation does it contain? Does the organisation measure this?

• Cost and risk queries: The financial consequences of a new BI initiative
should be estimated. It is necessary to assess the cost of the present operations
and the increase in costs associated with the BI initiative. What is the risk that
the initiative will fail? This risk assessment should be converted into a financial
metric and included in the planning.
• Customer and Stakeholder queries: Determine who will benefit from the
initiative and who will pay. Who has a stake in the current procedure? What
kinds of customers/stakeholders will benefit directly from this initiative? Who
will benefit indirectly? What are the quantitative / qualitative benefits? Is the
specified initiative the best way to increase satisfaction for all kinds of
customers, or is there a better way? How will customers’ benefits be monitored?
What about employees, shareholders, and distribution channel members?

• Metrics-related queries: These information requirements must be
operationalized into clearly defined metrics. One must decide what metrics to
use for each piece of information being gathered. Are these the best metrics?
How do we know that? How many metrics need to be tracked? If this is a large
number (it usually is), what kind of system can be used to track them? Are the
metrics standardized, so they can be benchmarked against performance in other
organisations? What are the industry standard metrics available?

• Measurement Methodology-related queries: One should establish a
methodology or a procedure to determine the best (or acceptable) way of
measuring the required metrics. What methods will be used, and how frequently
will the organisation collect data? Do industry standards exist for this? Is this the
best way to do the measurements? How do we know that?

• Results-related queries: Someone should monitor the BI programme to ensure
that objectives are being met. Adjustments in the programme may be necessary.
The programme should be tested for accuracy, reliability, and validity. How can
one demonstrate that the BI initiative (rather than other factors) contributed to a
change in results? How much of the change was probably random?

3.7 ROLE OF BUSINESS INTELLIGENCE


The information economy puts a premium on high-quality, actionable information —
exactly what Business Intelligence (BI) tools like data warehousing, data mining, and
OLAP can provide to the business. A close look at the different organisational
functions suggests that BI can play a crucial role in almost every function. It can give
new and often surprising insights into customer behavior, thereby helping businesses
meet their customers' ever-changing needs and desires. On the supply side, BI can
help businesses identify their best vendors and determine what separates them from
the not-so-good ones. It can give businesses a better understanding of inventory and its
movement, and also help improve production and storefront operations through better
category management. Through a host of analyses and reports, BI can also improve
internal organisational support functions like finance and human resource management
of any business.

Business Intelligence is applicable to all types of businesses; however, the magnitude
of gains may vary. Here we will discuss in detail how BI can improve the key
functional areas and thereby the overall productivity of the business.

3.7.1 Marketing
Smart businesses, in their efforts to meet the competition, have reoriented their
business around the customer by improving Customer Relationship Management. In
the mad rush to acquire new customers, they have realized it is equally important to
retain the existing ones. Increased interaction and sophisticated analysis techniques
have given businesses unprecedented access to the mind of the customer; and they are
using this to develop one-to-one relation with the customer, design marketing and
promotion campaigns, optimize storefront layout, and manage e-commerce operations.
For improving Customer Relationship Management (CRM), the CRM strategy needs
to include:

• Operational CRM: Automating interaction with the customers and sales force,
• Analytical CRM: Sophisticated analysis of the customer data generated by
operational CRM and other sources like sales order transactions, web site
transactions, and third-party data providers.

A typical business organisation has a huge customer base, and customers' needs often
vary widely. Without the means to analyze voluminous customer data, a CRM
strategy is bound to fail; therefore, Analytical CRM forms the core of the
customer relationship strategy of a business.

Marketing and sales functions are the primary beneficiaries of Analytical CRM and
the main touch points through which the insights gained about the customer are
absorbed into the organisation.

Analytical CRM uses key business intelligence tools like data warehousing, data
mining, and OLAP to present a unified view of the customer. Following are some of
the uses of Analytical CRM:

• Customer Classification: Customer classification is a vital ingredient in a
business organisation's marketing strategy. It can offer insights into how
different segments respond to shifts in demographics, fashions and trends. For
example, it can help classify customers into the following segments:

a) Customers who respond to new promotions
b) Customers who respond to new product launches
c) Customers who respond to discounts
d) Customers who show a tendency to purchase specific products.

• Campaign / Marketing Promotion Effectiveness Analysis: Once a campaign is
launched, its effectiveness can be studied across different media and in terms of
costs and benefits; this greatly helps in understanding what goes into a
successful marketing campaign. Campaign/promotion effectiveness analysis
can answer questions such as:

a) Which media channels have been most successful in the past for various
campaigns?
b) Which geographic locations responded well to a particular campaign?
c) What were the relative costs and benefits of this campaign?
d) Which customer segments responded to the campaign?

• Customer Lifetime Value: Not all customers are equally profitable. At the same
time customers who are not very profitable today may have the potential of
being profitable in future. Hence, it is absolutely essential to identify customers
with high lifetime value; the idea is to establish long-term relations with these
customers.
The basic methodology used to calculate customer lifetime value is to deduct the
cost of servicing a customer from the expected future revenue generated by the
customer, add to this the net value of new customers referred by this customer,
and discount the result for the duration of the relationship. Though this sounds
easy, there are a number of subjective variables like the overall duration of the
customer's relationship with the business, the gap between intermediate cash
flows, and the discount rate. It is suggested that data mining tools be used to
develop customized models for calculating customer lifetime value (a minimal
calculation sketch appears after this list).

1) Customer Loyalty Analysis: It is more economical to retain an existing
customer than to acquire a new one. To develop effective customer
retention programs it is vital to analyze the reasons for customer attrition.
Business Intelligence helps in understanding customer attrition with
respect to the various factors influencing a customer, and at times one can
drill down to the individual transactions which might have resulted in the
change of loyalty.

2) Cross Selling: Businesses use the vast amount of customer information
available with them to cross-sell other products at the time of purchase.
This effort is largely based on the tastes of a particular customer, which
can be analyzed using BI tools based on previous purchases. Businesses
can also 'up-sell' - sell more profitable products - to the customer at the
time of contact.

3) Product Pricing: Pricing is one of the most crucial marketing decisions
taken by businesses. Often an increase in the price of a product can result
in lower sales and customer adoption of replacement products. Using data
warehousing and data mining, businesses can develop sophisticated price
models for different products, which can establish price-sales
relationships for the product and show how changes in prices affect the
sales of other products.

4) Target Marketing: Businesses can optimize the overall marketing and
promotion effort by targeting campaigns to specific customers or groups
of customers. Target marketing can be based on a very simple analysis of
the buying habits of the customer or the customer group; but increasingly,
data mining tools are being used to define specific customer segments that
are likely to respond to particular types of campaigns.
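
To make the discounting arithmetic of customer lifetime value concrete, here is a minimal sketch in Python; the cash flows, referral value, and 10% discount rate are illustrative assumptions, not figures from any real model, and a production system would estimate these inputs with the data mining models described above.

```python
# Minimal customer lifetime value (CLV) sketch. All figures are illustrative.

def customer_lifetime_value(cash_flows, referral_value=0.0, discount_rate=0.10):
    """Discount each period's (revenue - servicing cost) back to today,
    then add the net value of new customers referred by this customer."""
    clv = 0.0
    for t, (revenue, cost) in enumerate(cash_flows, start=1):
        clv += (revenue - cost) / (1 + discount_rate) ** t
    return clv + referral_value

# Example: a three-period relationship with a modest referral value.
flows = [(1200.0, 300.0), (1500.0, 350.0), (1400.0, 400.0)]
print(round(customer_lifetime_value(flows, referral_value=500.0), 2))
```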

3.7.2 Sales and Orders


The success of a business in the future would depend on how effectively it manages
multiple delivery channels like the Internet, interactive TV, catalogs, etc. A single
customer is likely to interact with the retailer along multiple channels over a period of
time. This calls for an integrated strategy to serve the customer well, which requires
smooth flow of information across channels. To ensure smooth flow of information
customer data needs to be collected from different channels in one data warehouse.
Customer relationship strategy can then be built around this customer-centric data
warehouse. We have already discussed how Analytical CRM can provide analyses
over the centralized data warehouse. In this section we will explore how data
warehousing and data mining can improve the effectiveness of a channel.

• E-Business Analysis: The Internet has emerged as a powerful alternative
channel for established sales methods. Increasing competition from businesses
operating purely over the Internet has forced the businesses who had not
adopted this route to quickly adopt this channel. Their success would largely
depend on how they use the Net to complement their existing channels. Web
logs and information forms filled in over the web are very rich sources of data
that can provide insightful information about customers' browsing behavior,
purchasing patterns, likes and dislikes, etc. The two main types of analysis done
on web site data are described below, with a minimal log-analysis sketch after
this list:

1) Web Log Analysis: This involves analyzing the basic traffic information
over the e-commerce web site. This analysis is primarily required to
optimize the operations over the Internet. It typically includes following
analyses:

• Site Navigation: An analysis of the typical route followed by the
user while navigating the web site. It also includes an analysis of the
most popular pages in the web site. This can significantly help in
site optimization by making it more user-friendly.

• Referrer Analysis: An analysis of the sites that are most prolific
in directing traffic to the company's web site.

• Error Analysis: An analysis of the errors encountered by the user
while navigating the web site. This can help in solving the errors
and making the browsing experience more pleasurable.

• Keyword Analysis: An analysis of the most popular keywords used
by various users in Internet search engines to reach the concerned
business e-commerce web site.

2) Web Housing: This involves integration of web log data with data from
other sources like the purchase order transactions, third party data vendors
etc. Once the data is collected in a single customer centric data warehouse,
often referred to as Web house, all the applications already described
under CRM can be implemented. Often a business wants to design
specific campaigns for users who purchase from the e-commerce web site.
In this case, segmentation and profiling can be done specifically for the
e-customers to understand their needs and browsing behavior. It can also be
used to personalize the content of the e-commerce web site for these users.

3) Channel Profitability: Data warehousing can help analyze channel
profitability, and whether it makes sense for the business to continue
building up expertise in the channel. The decision of continuing with a
channel would also include a number of subjective factors like the outlook
of key enabling technologies for that channel.

4) Product Channel Affinity: Some product categories sell particularly well
on certain channels. Data warehousing can help identify hidden product-
channel affinities and help the business design better promotion and
marketing campaigns.
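
To make item (1) above concrete, here is a minimal log-analysis sketch, assuming logs in the common Apache "combined" format; the file name access.log is a hypothetical placeholder, and real web log analysis tools handle far more cases than this single regular expression does.

```python
# Minimal web log analysis: popular pages, top referrers, frequent errors.
import re
from collections import Counter

# Matches the common Apache "combined" log format (an assumption).
LOG_PATTERN = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)"'
)

page_hits, referrers, errors = Counter(), Counter(), Counter()

with open("access.log") as log:                  # hypothetical log file
    for line in log:
        m = LOG_PATTERN.match(line)
        if not m:
            continue                             # skip malformed lines
        page_hits[m.group("path")] += 1          # site navigation analysis
        if m.group("referrer") not in ("", "-"):
            referrers[m.group("referrer")] += 1  # referrer analysis
        if m.group("status")[0] in "45":         # 4xx/5xx responses
            errors[(m.group("path"), m.group("status"))] += 1

print("Most popular pages:", page_hits.most_common(5))
print("Top referrers:", referrers.most_common(5))
print("Frequent errors:", errors.most_common(5))
```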

3.7.3 Human Resource


Data warehousing can significantly help in aligning the HR strategy to the overall
business strategy. It can present an integrated view of the workforce and help in
designing retention schemes, improve productivity, and curtail costs. Some BI
applications in HR are:

• Human Resource Reports / Analytics: Reports and analysis can be generated
to support an integrated view of the workforce. Various analyses include staff
movement and performance, workforce attrition by business, workforce
performance by business, compensation and attrition, and other customized
analyses and reports. The HR data can be integrated with benchmark figures for
the industry, and various reports can be generated to measure performance
vis-à-vis industry benchmarks.

• Manpower Allocation: This includes allocating manpower based on the
demand projections. According to the seasonal variation in demand, temporary
manpower can be hired to maintain service levels. Demand levels also vary
within a single working day, and this information can be used to allocate
resources accordingly.

• HR Portal: Employers need to maintain accurate employee data, which can be
viewed by the employees for information relating to compensation, benefits,
retirement facilities, etc. Payroll data can be integrated with data from other
human resource management applications in the HR data warehouse. This data
can then be circulated within the organisation through the HR portal.

• Training and Succession Planning: Accurate data about the skill sets of the
workforce can be maintained in the data warehouse. This can be used to design
training programs and for effective succession planning.

3.7.4 Finance and Accounts


The role of financial reporting has undergone a paradigm shift during the last decade.
It is no longer restricted to just financial statements required by the law; increasingly it
is being used to help in strategic decision making. Also, many organisations have
embraced a free information architecture, whereby financial information is openly
available for internal use. Many of the analyses described so far use financial data.
Many companies, across industries, have integrated financial data into their
enterprise-wide data warehouse or established a separate Financial Data Warehouse
(FDW). Following
are some of the uses of BI in finance:

• Budgetary Analysis: Data warehousing facilitates analysis of budgeted versus
actual expenditure for various cost heads like promotion campaigns, energy
costs, salary, etc. (a minimal sketch of such a variance analysis appears after
this list). OLAP tools can provide a drill-down facility whereby the reasons for
cost overruns can be analyzed in more detail. It can also be used to allocate
budgets for the coming financial period.

• Fixed Asset Return Analysis: This is used to analyse financial viability of the
fixed assets owned or leased by the company. It would typically involve
measures like profitability per sq. foot of the space, total lease cost vs.
profitability, etc.

• Financial Ratio Analysis: Various financial ratios like debt-equity, liquidity
ratios, etc. can be analyzed over a period of time. The ability to drill down and
join inter-related reports and analyses, provided by all major OLAP tool vendors,
can make ratio analysis much more insightful.

• Profitability Analysis: This includes profitability of individual businesses,
departments within the business, product categories, brands, and individual
SKUs. A major component of profitability analysis is the costs incurred by
departments and the cost of acquiring, storing and allocating shelf space to
particular product categories, brands, or SKUs. Profitability analysis has a
near-universal appeal and would be required by other groups within the
business organisation as well.
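
As a concrete illustration of the budgetary analysis mentioned above, here is a minimal budget-versus-actual variance sketch using pandas; the cost heads and figures are invented, and a real implementation would read from the financial data warehouse rather than an in-memory table.

```python
# Minimal budget-vs-actual variance analysis; all figures are illustrative.
import pandas as pd

data = pd.DataFrame({
    "cost_head": ["Promotion", "Promotion", "Energy", "Energy", "Salary", "Salary"],
    "quarter":   ["Q1", "Q2", "Q1", "Q2", "Q1", "Q2"],
    "budget":    [100000, 120000, 40000, 42000, 300000, 305000],
    "actual":    [110000, 118000, 45000, 47000, 298000, 310000],
})

# Aggregate to the cost-head level, then compute the overrun.
summary = data.groupby("cost_head")[["budget", "actual"]].sum()
summary["variance"] = summary["actual"] - summary["budget"]
summary["variance_pct"] = 100 * summary["variance"] / summary["budget"]

# Cost heads with the largest overruns are candidates for OLAP drill-down.
print(summary.sort_values("variance_pct", ascending=False))
```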

3.8 BUSINESS INTELLIGENCE TOOLS


Business intelligence tools are a type of application software designed to support
business intelligence (BI) processes. Specifically, they are generally tools that aid in
the analysis and presentation of data. While some business intelligence tools include
ETL functionality, ETL tools are generally not considered business intelligence tools.

Types of business intelligence tools


• OLAP (including HOLAP, ROLAP and MOLAP)
• Reporting software (also called Pixel perfect reporting software)
• Data mining
• Business performance management (BPM)

Open Source Business Intelligence Products


• Pentaho: enterprise-class reporting, analysis, dashboard, data mining and
workflow capabilities.
• OpenI: simple web application that does out-of-box OLAP reporting.
• Greenplum Inc.: first open source powered database server that can scale to
support multi-terabyte data warehousing demands.
• YALE (Yet Another Learning Environment): free open source software for
Business Intelligence, Knowledge Discovery, Data Mining, Machine Learning,
etc.
• BEE Project: BI suite of tools ideal for mid-size companies that have 50 GB or
less of data. It has ETL and uses ROLAP and is under the GPL license.
• MarvelIT: open source Business Intelligence solution based on the Apache
Jetspeed Enterprise Portal and the popular OpenReports reporting application.
• SpagoBI: complete suite for the development of Business Intelligence that
covers data and metadata organisation, static reporting and dimensional
analysis, hidden information discovering by means of data mining techniques,
the building of a structured and dynamic control suite with dashboard
components.
• DecisionStudio-Professional: Comprehensive GPL desktop BI platform built
on best-of-breed open source projects including MySQL, R Environment,
DBDesigner, iReport, Python, etc. It is an advanced graphical desktop data
mining, reporting, modeling, and analysis environment providing comprehensive
capabilities to each role in the analytics value chain.

Commercial Products used as business intelligence tools:


• ACE*COMM
• Actuate
• Alphablox
• Analysis Center Library
• Applix
• Business Objects
• Cognos
• Cyberscience
• DataHabitat
• Decision Technology
• Information Builders
• Hyperion Solutions Corporation
• KCI Computing
• MaxQ Technologies
• Metrinomics - Metrivox
• Microsoft Analysis Services
• MicroStrategy
• MIS DecisionWare
• OutlookSoft
• Panorama
• Pentaho
• ProClarity

• Oracle Corporation
• QlikView
• Siebel Systems
• SAP Business Information Warehouse
• SAS Institute
• Saksoft
• Synola Ltd
• Stratws.

Some of these business intelligence tools are discussed in detail below:

On Line Analytical Processing (OLAP)


OLAP is an acronym for On Line Analytical Processing. It is an approach to quickly
provide the answer to analytical queries that are multi-dimensional in nature. It is part
of the broader category business intelligence, which also includes ETL (Extract,
Transform, Load), relational reporting and data mining. The typical applications of
OLAP are in business reporting for sales, marketing, management reporting, business
performance management (BPM), budgeting and forecasting, financial reporting and
similar areas. The term OLAP was created as a slight modification of the traditional
database term OLTP (On Line Transaction Processing).

Databases configured for OLAP employ a multidimensional data model, allowing for
complex analytical and ad-hoc queries with a rapid execution time. Nigel Pendse has
suggested that an alternative and perhaps more descriptive term to describe the
concept of OLAP is Fast Analysis of Shared Multidimensional Information (FASMI).
They borrow aspects of navigational databases and hierarchical databases that are
speedier than their relational kin.

OLAP Functionality
OLAP takes a snapshot of a set of source data and restructures it into an OLAP cube.
The queries can then be run against this. It has been claimed that for complex queries
OLAP can produce an answer in around 0.1% of the time for the same query on OLTP
relational data.

The cube is created from a star schema or snowflake schema of tables. At the centre is
the fact table which lists the core facts which make up the query. Numerous dimension
tables are linked to the fact tables. These tables indicate how the aggregations of
relational data can be analyzed. The number of possible aggregations is determined by
every possible manner in which the original data can be hierarchically linked. For
example a set of customers can be grouped by city, by district or by country; so with
50 cities, 8 districts and two countries there are three hierarchical levels with 60
members. These customers can be considered in relation to products; if there are 250
products with 20 categories, three families and three departments then there are 276
product members. With just these two dimensions there are 16,560 (276 * 60) possible
aggregations. As the data considered increases the number of aggregations can quickly
total tens of millions or more.
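
As a quick check of this arithmetic, the following sketch reproduces the member and aggregation counts from the example above; the level names are taken from the text, and the computation is simply the product of the summed hierarchy members per dimension.

```python
# Reproduce the aggregation-count example from the text.
customer_levels = {"city": 50, "district": 8, "country": 2}
product_levels = {"product": 250, "category": 20, "family": 3, "department": 3}

customer_members = sum(customer_levels.values())   # 60 hierarchy members
product_members = sum(product_levels.values())     # 276 hierarchy members

# Every pairing of a customer member with a product member is a
# possible aggregation in the cube.
print(customer_members * product_members)          # 16560
```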

The calculation of the aggregations and the base data combined make up an OLAP
cube, which can potentially contain all the answers to every query which can be
answered from the data. Due to the potentially large number of aggregations to be
calculated, often only a predetermined number are fully calculated while the remainder
are solved on demand.

Types of OLAP
There are three types of OLAP.

Multidimensional OLAP
MOLAP is the ‘classic’ form of OLAP and is sometimes referred to as just OLAP.
MOLAP uses database structures that are generally optimal for attributes such as time
period, location, product or account code. The way that each dimension will be
aggregated is defined in advance by one or more hierarchies.

Relational OLAP
ROLAP works directly with relational databases. The base data and the dimension
tables are stored as relational tables, and new tables are created to hold the aggregated
information.

Hybrid OLAP
There is no clear agreement across the industry as to what constitutes "Hybrid OLAP",
except that a database will divide data between relational and specialized storage. For
example, for some vendors, a HOLAP database will use relational tables to hold the
larger quantities of detailed data, and use specialized storage for at least some aspects
of the smaller quantities of more-aggregate or less-detailed data.

Comparison
Each type has certain benefits, although there is disagreement about the specifics of
the benefits between providers. MOLAP is better on smaller sets of data: it is faster to
calculate the aggregations and return answers, and it needs less storage space.
ROLAP is considered more scalable. However, large volume pre-processing is
difficult to implement efficiently so it is frequently skipped. ROLAP query
performance can therefore suffer.

HOLAP is between the two in all areas, but it can pre-process quickly and scale well.
All types, though, are prone to database explosion. Database explosion is a
phenomenon in which vast amounts of storage space are used by OLAP databases
when certain frequently-met conditions occur: a high number of dimensions,
pre-calculated results and sparse multidimensional data. The difficulty in implementing
OLAP comes in forming the queries, choosing the base data and developing the
schema, as a result of which most modern OLAP products come with huge libraries of
pre-configured queries. Another problem is base data quality - it must be
complete and consistent.

Other types
The following acronyms are also used sometimes, although they are not as widespread
as the ones above:

• WOLAP - Web-based OLAP
• DOLAP - Desktop OLAP
• RTOLAP - Real-Time OLAP.

APIs and query languages


Unlike relational databases - which had SQL as the standard query language and
widespread APIs such as ODBC, JDBC and OLEDB - there was no such unification in the
OLAP world. The first real standard API was OLEDB for OLAP specification from
Microsoft which appeared in 1997 and introduced the MDX query language. Several
OLAP vendors - both server and client - adopted it. In 2001 Microsoft and Hyperion
announced the XML for Analysis specification, which was endorsed by most of the

OLAP vendors. Since this also used MDX as a query language, MDX became the
de facto standard in the OLAP world.

Open Source OLAP

• Palo - An Open Source MOLAP Server
• Mondrian - An Open Source ROLAP Server
• JPalo - Open Source Development Tools for Palo.

Data mining
Data Mining, also known as Knowledge Discovery in Databases (KDD), is the
process of automatically searching large volumes of data for patterns. Data Mining
can be defined as "the nontrivial extraction of implicit, previously unknown, and
potentially useful information from data" and as "the science of extracting useful
information from large data sets or databases". Although it is usually used in
relation to the analysis of data, data mining, like artificial intelligence, is an umbrella
term and is used with varied meanings in a wide range of contexts. It is usually
associated with a business or other organisation's need to identify trends. Data mining
involves the process of analyzing data to show patterns or relationships, sorting
through large amounts of data, and picking out pieces of relevant information or
patterns that occur, e.g., picking out statistical information from some data.

A simple example of data mining is its use in a retail sales department. If a store tracks
the purchases of a customer and notices that a customer buys a lot of silk shirts, the
data mining system will make a correlation between that customer and silk shirts. The
sales department will look at that information and may begin direct mail marketing of
silk shirts to that customer, or it may alternatively attempt to get the customer to buy a
wider range of products. In this case, the data mining system used by the retail store
discovered new information about the customer that was previously unknown to the
company. Another widely used (though hypothetical) example is that of a very large
North American chain of supermarkets. Through intensive analysis of the transactions
and the goods bought over a period of time, analysts found that beers and diapers were
often bought together. Though explaining this interrelation might be difficult, taking
advantage of it, on the other hand, should not be hard (e.g., placing the high-profit
diapers next to the high-profit beers). This technique is often referred to as Market
Basket Analysis.
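
A minimal sketch of the co-occurrence counting behind such market basket analysis follows; the transactions are invented for illustration, and production systems use dedicated association-rule algorithms (such as Apriori) rather than this brute-force pair count.

```python
# Brute-force market basket sketch: support and confidence for item pairs.
from collections import Counter
from itertools import combinations

transactions = [                       # illustrative baskets
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"milk", "diapers"},
    {"beer", "chips"},
    {"beer", "diapers", "milk"},
]

item_counts, pair_counts = Counter(), Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

n = len(transactions)
for (a, b), together in pair_counts.most_common(3):
    support = together / n                    # fraction of baskets with both
    confidence = together / item_counts[a]    # P(b bought | a bought)
    print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```
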
In statistical analyses in which there is no underlying theoretical model, data mining
is often approximated via stepwise regression methods, wherein the space of 2^k
possible models relating a single outcome variable to k potential explanatory
variables is smartly searched. With the advent of parallel computing, it became
possible (when k is sufficiently small) to examine all 2^k models; this procedure
is called all-subsets or exhaustive regression (a minimal sketch follows). Some of the
first applications of exhaustive regression involved the study of plant data.
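
A minimal sketch of such all-subsets (exhaustive) regression is shown below, using synthetic data as an assumption; it enumerates every subset of the k candidate variables, fits ordinary least squares with numpy, and scores each model with BIC so that larger subsets are not automatically favoured.

```python
# All-subsets (exhaustive) regression on synthetic data.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
X = rng.normal(size=(n, k))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

best = None
for r in range(1, k + 1):
    for subset in itertools.combinations(range(k), r):   # all 2^k - 1 non-empty subsets
        A = np.column_stack([np.ones(n), X[:, subset]])  # intercept + chosen vars
        coef, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
        rss = residuals[0] if residuals.size else np.sum((A @ coef - y) ** 2)
        bic = n * np.log(rss / n) + A.shape[1] * np.log(n)  # penalize model size
        if best is None or bic < best[0]:
            best = (bic, subset)

print("Best subset of explanatory variables:", best[1])   # expected: (0, 2)
```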

Data dredging
Used in the technical context of data warehousing and analysis, the term
“data mining” is neutral. However, it sometimes has a more pejorative usage that
implies imposing patterns (and particularly causal relationships) on data where none
exist. This imposition of irrelevant, misleading or trivial attribute correlation is more
properly criticized as “data dredging” in statistical literature. Another term for this
misuse of statistics is data fishing.

Used in this latter sense, data dredging implies scanning the data for any relationships,
and then, when one is found, coming up with an interesting explanation. (This is also
referred to as "overfitting the model".) The problem is that large data sets invariably
happen to have some exciting relationships peculiar to that data. Therefore, any
conclusions reached are likely to be highly suspect. In spite of this, some exploratory
data work is always required in any applied statistical analysis to get a feel for the
data, so sometimes the line between good statistical practice and data dredging is less
than clear.

One common approach to evaluating the fitness of a model generated via data mining
techniques is called cross validation. Cross validation is a technique that produces an
estimate of generalization error based on resampling. In simple terms, the general idea
behind cross validation is that dividing the data into two or more separate data subsets
allows one subset to be used to evaluate the generalizability of the model learned
from the other data subset(s). A data subset used to build a model is called a training
set; the evaluation data subset is called the test set. Common cross validation
techniques include the holdout method, k-fold cross validation, and the leave-one-out
method (a minimal k-fold sketch follows).
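
As an illustration of k-fold cross validation, here is a minimal sketch in plain numpy; the "model" is a trivial mean predictor standing in for a real data mining model, and the data are synthetic.

```python
# Minimal k-fold cross validation with a stand-in mean-predictor model.
import numpy as np

def k_fold_error(X, y, k=5, seed=0):
    """Average squared test error across k folds."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]                                   # held-out test set
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        prediction = y[train].mean()                      # "fit" on training data only
        errors.append(np.mean((y[test] - prediction) ** 2))
    return float(np.mean(errors))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                   # features (unused by the toy model)
y = X[:, 0] + rng.normal(scale=0.3, size=100)
print(k_fold_error(X, y, k=5))
```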

Another pitfall of using data mining is that it may lead to discovering correlations that
may not exist. “There have always been a considerable number of people who busy
themselves examining the last thousand numbers which have appeared on a roulette
wheel, in search of some repeating pattern. Sadly enough, they have usually found it.”
However, when properly done, determining correlations in investment analysis has
proven to be very profitable for statistical arbitrage operations (such as pairs trading
strategies), and furthermore correlation analysis has shown to be very useful in risk
management. Indeed, finding correlations in the financial markets, when done
properly, is not the same as finding false patterns in roulette wheels.

Most data mining efforts are focused on developing highly detailed models of some
large data set. Other researchers have described an alternate method that involves
finding the minimal differences between elements in a data set, with the goal of
developing simpler models that represent relevant data.

Notable uses of data mining


• Data mining has been cited as the method by which the U.S. Army unit Able
Danger supposedly had identified the 9/11 attack leader, Mohamed Atta, and
three other 9/11 hijackers as possible members of an al Qaeda cell operating in
the U.S. more than a year before the attack.

Software (commercial and open source)

• R: a statistical environment and programming language that fits well with
machine learning and data mining.
• Microsoft Analysis Services: Microsoft SQL Server 2005 contains a full suite
of data mining algorithms and tools integrated with the database, OLAP,
Reporting, ETL pipeline, and the development environment.
• Weka: open source data mining software written in Java.
• Neural network software.
• Java Data Mining.

Business performance management


Business performance management (BPM) is a set of processes that help organisations
optimize business performance. BPM is seen as the next generation of business
intelligence (BI). BPM is focused on business processes such as planning and
forecasting. It helps businesses make efficient use of their business units and of their
financial, human, and material resources.

BPM involves consolidation of data from various sources, querying and analysis of
the data, and putting the results into practice.

BPM enhances processes by creating better feedback loops. Continuous and real-time
reviews help to identify and eliminate problems before they grow. BPM's forecasting
abilities help the company take corrective action in time to meet earnings projections.
Forecasting is characterized by a high degree of predictability, which is put to good
use to answer what-if scenarios. BPM is useful in risk analysis, in predicting the
outcomes of merger and acquisition scenarios, and in coming up with a plan to
overcome potential problems.

BPM provides key performance indicators (KPI) that help companies monitor
efficiency of projects and employees against operational targets.

Metrics / Key Performance Indicators


BPM often uses key performance indicators (KPIs) to assess the present state of
business and to prescribe a course of action. More and more organisations have started
to make data available more promptly. In the past, data only became available after a
month or two, which did not help managers adjust activities in time to hit targets.
Recently, banks have tried to make data available at shorter intervals and have reduced
delays. For example, for businesses which have a higher operational/credit risk loading
(such as credit cards), a large multi-national bank makes KPI-related data available
weekly, and sometimes offers a daily analysis of numbers. This means data usually
becomes available within 24 hours, necessitating automation and the use of IT systems.

Most of the time, BPM simply means the use of several financial/non-financial
metrics/key performance indicators to assess the present state of business and to
prescribe a course of action.

Some of the areas in which top management could gain knowledge from BPM:

1) Customer-related numbers:
a) New customers acquired
b) Status of existing customers
2) Attrition of customers (including breakup by reason for attrition)
3) Turnover generated by segments of the customers - these could be demographic
filters
4) Outstanding balances held by segments of customers and terms of payment -
these could be demographic filters
5) Collection of bad debts within customer relationships
6) Demographic analysis of individuals (potential customers) applying to become
customers, and the levels of approval, rejections and pending numbers
7) Delinquency analysis of customers behind on payments.

This is more an inclusive list than an exclusive one. The above more or less describes
what a bank would do, but it could also apply to a telephone company or a similar
service sector company.
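
To illustrate two of the customer-related numbers above, here is a minimal sketch against a hypothetical customer table; the column names and records are assumptions, and a bank would compute these from its warehouse at the intervals discussed earlier.

```python
# Two illustrative customer KPIs from a hypothetical customer table.
import pandas as pd

customers = pd.DataFrame({
    "customer_id":  [1, 2, 3, 4, 5, 6],
    "acquired":     pd.to_datetime(["2024-01-05", "2024-02-11", "2024-02-20",
                                    "2023-11-02", "2023-07-15", "2024-03-01"]),
    "churned":      [False, False, True, True, False, False],
    "churn_reason": [None, None, "pricing", "service", None, None],
})

# KPI: new customers acquired per month.
print(customers.groupby(customers["acquired"].dt.to_period("M")).size())

# KPI: attrition of customers, broken up by reason.
print(customers[customers["churned"]].groupby("churn_reason").size())
print("Attrition rate:", customers["churned"].mean())
```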

What is important?

1) KPI-related data which is consistent and correct.
2) Timely availability of KPI-related data.
3) Information presented in a format which aids decision making.
4) Ability to discern patterns or trends from organised information.

BPM integrates the company’s processes with CRM or ERP. Companies become able
to gauge customer satisfaction, control customer trends and influence shareholder
value.

BI Vendors: Some of the BI vendors are shown below:


• Business Objects
• Cognos
• EFM Software
• Hyperion Solutions Corporation
• OutlookSoft
• Prophix Software Inc.
• Saksoft
• SAP AG
• SAS Institute
• Systems Union.

3.9 BUSINESS INTELLIGENCE REPORTS


Reporting is a fundamental business requirement as every transaction-based enterprise
application, every database, and each process that workers perform on a day-to-day
basis needs reporting in some fashion. Reporting allows the users to access, format,
and securely deliver data as meaningful information. The power of Business Objects
reporting provides the user with the information they need, when they need it — inside
and outside the organisation. Reporting and analysis have become pervasive in
business, and they must be seen as a core requirement and be held to the same
standards as all core technologies.

The reporting and analysis market is mature. Companies have a wide variety of
technology options, from a plethora of BI vendors to platform and application
vendors. The participation by so many vendors reflects two issues:

• Companies want a single BI reporting and analysis solution: Every company
and government organisation wants a single, standard reporting and analysis
solution for the entire organisation, as they need to drive down IT support costs
and simultaneously increase the likelihood of a single version of critical data.

• Vendors want to be selected as the BI reporting and analysis standard: Simply
put, the vendor that offers the most comprehensive BI reporting and analysis
solution is in a better position to be selected as the reporting standard. To that
extent, there has been a significant amount of activity by the vendors during the
past years to deliver a broader BI reporting and analysis platform.

Requirements of the Reporting solution are:


Powerful Authoring: Business Objects reporting requires a powerful, flexible, and
open environment for report creation. You need to connect to any data, and then build
highly-formatted reports using intuitive, flexible design tools. In addition, you require
the ability to integrate end-user report viewing, printing, exporting, and creation
capabilities into applications using a comprehensive set of software developer kits and
powerful report processing services.

Flexible Sharing: Once a report is created, you need to publish it on the web, portals,
printers, email, and applications. In enterprise reporting, Business Objects need to
provide a BI platform for secure, highly scalable information delivery to handle large
numbers of end users around the world. For embedded reporting, the platform needs to
provide open application integration and flexible deployment. The products are
required to integrate tightly with existing infrastructure to meet even the most demanding

enterprise and embedded reporting requirements. Reports may be required to be
integrated into Java, .NET, and COM applications and deployed on Windows, UNIX,
and Linux.
and Linux.

Reporting — the Way Users Work: For end users, reports need to include built-in
interactivity, creating a clean and efficient process where one report will satisfy the
needs of many different individuals. Users need to not only view reports in web
portals, but they also need to explore the information by moving easily from static
consumption to insightful interaction. What’s more, users may require embedding and
securely sharing live reports or parts of reports inside Microsoft Office Word,
PowerPoint, and Excel documents.

Proven Technology: The vendor needs to adopt proven technology which has evolved
over time to become the de facto accepted standard for reporting. It should have
compatibility with leading enterprise application software like SAP, IBM, Microsoft,
Oracle/PeopleSoft, Borland, and BEA.

Types of Reporting and Analysis Solutions: There are three types of solutions
available, supplied by a range of vendors.

At its core, reporting technology is designed for information distribution. Whether it
is ad-hoc or predefined, the reporting paradigm has to address information delivery,
publishing and distribution. That is why reporting products generally have menu items
that predefine the query, format the resulting data and then publish that report to the
people it was designed for.

When describing BI technologies, reporting is often lumped together with analysis to
categorize a technology as “reporting and analysis.” This makes some sense because
analysis is what users ultimately do with a report once it is received. If the user knows
ahead of time what data is needed and considers the job complete once the report has
been published, then reporting technology is the right tool for the job. Every
information consumer in the organisation, from executives to production workers, will
use these reporting systems.

However, if the user’s job is to make decisions, this “delivery” model may only be
adequate in certain circumstances and be significantly inadequate in others. The most
popular feature in any reporting system is the “Export to Excel” button simply because
the decision maker’s job with the data is not done when the information is delivered -
the job has just begun.

Using the Repository for More Productive Report Development


Many report designers keep a set of reports that contain pieces of reports that they
would use over and over. In many cases, these reports would contain corporate logos,
commonly used objects (like a company’s address), and even some typical formula
logic. Report designers copy and paste these objects between these “warehouse
reports” and the newly created reports on a regular basis. This technique saves the
designer some time, as they do not have to recreate these objects for each new report.
But what happens when business logic changes and needs to be updated? This
happens quite often and is a critical piece of the reporting puzzle. The only
tried-and-true method of the past does not solve the problem of how to update
business logic or formulas in reports that have already been created. The report
designer has to remember which reports may have been affected, open each individual
report, and update them by hand-coding the changes. This process is cumbersome and
time-consuming, but mission critical. One common location in which to store and
update report objects can dramatically improve this situation.

To improve the productivity of the report design process, a centrally managed
location is required so that report designers can access, update, reuse and share report
objects. To extend this concept, report objects can also be shared and reused among
multiple report designers for even greater efficiency.

The Repository is a central database that contains common report objects that can be
shared and reused. The types of report objects that can be shared via the repository
are:

• Text Objects: Reusable text, such as company addresses or confidentiality
statements
• Images: Pictures or logos
• Custom Functions: Business logic that can be reused by passing in new fields as
variables
• SQL Commands: Encapsulated database commands that produce tables to
report from. These SQL statements can also be parameter driven.

The Repository is independent from all other databases connected to any report. It is a
standalone database that manages logic, formatting, and objects. But most importantly,
by its very nature as a separate library (or database), it is referenced by the report
when the report is opened; as a result, repository objects can be updated automatically
for the report designer, so that the added overhead of copying new logic and objects is
eliminated.

To add an object to the Repository, it's as simple as dragging and dropping the object
into a folder in the Repository Explorer. The Repository Explorer represents the
repository database as a tree structure made up of folders and objects. The structure of
the repository folders is up to the person working with the repository. Within the
Repository Explorer, the report designer has to have access to the repository objects
that can be placed on the report design surface. Text objects and image objects are
visible from the Repository Explorer. To use Repository Objects in your reports,
simply drag-and-drop these objects on to the report as required.

SQL commands and their properties are visible in the Repository Explorer. You do
not drag-and-drop these objects on the report as they are used in the Database Expert.
You can use SQL Commands like database tables when you're designing your report.
You can save your SQL statements in the repository for later use with the SQL
Command in Repository feature.

Trends
Quality reports are critical. It is not an unusual scene to find that executives arrive
at meetings armed with spreadsheets, only to find that each has different values for the
same metrics. The rest of the meeting is then spent arguing over whose numbers are
correct, rather than in actual decision making based on consistent numbers. This
malaise is largely an offshoot of data fragmentation, the fracture of a single version of
operational truth into multiple data sources, often not correlated with each other. The
most significant part of providing quality reports is the choice and architecture of your
reporting solution.

Quality aside, another important facet of reporting is delivery. Users want reports to be
personalized, organised, timely, easily accessible and in their preferred format. For
example, in today’s business environment it is desirable to be able to e-mail reports to
colleagues, create spreadsheets from report results and access reports securely over the
internet.

One aspect of reporting that is frequently overlooked is its integration with other IT
infrastructure elements. We have seen some customer departments so completely
focused on “solving the reporting problem” — evaluating checklists of features from
various vendors, reading analyst reports on each vendor — that they fail to coordinate
this effort with the bigger company-wide picture of how that reporting piece fits into
other infrastructure (such as the central security repository or caching strategy). These
customers then find themselves in the position of having to stitch together their data
warehouse, reporting, security and portal solutions and own the problem of identifying
the correct vendor to call.

Check Your Progress 2

1) State whether True or False


(a) Expert System can be defined as “A computer program
with the expertise embodied in it, based on the expertise
of the knowledge worker”. True False
(b) Artificial intelligence is defined as intelligence exhibited
by an artificial entity like a computer. True False
(c) Distinction between the terms "Neural Network" and
"Artificial Neural Network" can be "Neural network"
indicates networks that are software based and
"Artificial Neural Network" normally refers to
those which are hardware-based. True False
(d) Evolutionary Algorithms is “an algorithm that
maintains a population of structures (usually randomly
generated initially) that evolves according to rules of
selection, recombination, mutation and survival referred to
as genetic operators. True False
(e) Business analytics solutions available must have at least three (3)
components, i.e., executive dashboards with drilldown analysis
capabilities, OLAP tools and report generation facilities. True False
(f) On Line Analytical Processing is an approach to quickly providing
the answer to analytical queries that are multi-dimensional in nature. True False
(g) Data Mining can be defined as the science of extracting useful
information from large data sets or databases. True False
(h) Business performance management is the management technique
which helps organisations to optimize business performance. True False
(i) The types of report objects that can be shared via the repository
are Text Objects, Images, Custom Functions and SQL Commands. True False

2) Answer the Following Questions:


(a) For Business Intelligence programme implementation, what questions
must be asked / answered to ensure that BI goals are achieved?
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………
(b) Name the minimum essential requirements for Reporting Solution to be
selected.
……………………………………………………………………………
……………………………………………………………………………
……………………………………………………………………………

3.10 SUMMARY
In this unit we have discussed the much-talked-about topics of knowledge
management, artificial intelligence and business intelligence. These topics have been
discussed in detail to prepare students for real professional life, and a list of the
vendors who offer commercial packages has been given so that you can go through
more detailed information about these packages on the vendors' websites, which will
further improve your knowledge of this software.

Artificial intelligence and business intelligence are still at a developing stage and are
becoming more complex, particularly the area of neural networks, which has many
variants; therefore, the discussion has been confined more to uses than to development
aspects, as the latter do not fit within this course.

3.11 SOLUTIONS / ANSWERS

Check Your Progress 1

1) True or False
(a) True, (b) True, (c) False, (d) True, (e) False,
(f) False, (g) True.

2) Solutions/Answers
(a) The factors which make knowledge management implementation difficult
in an organisation are:

• Geographically different locations for offices / units.
• language
• areas of expertise
• internal conflicts (e.g., professional territoriality)
• generational differences
• union-management relations
• incentives
• the use of visual representations to transfer knowledge (Knowledge
visualization)

Check Your Progress 2

1) True or False
(a) False, (b) True, (c) False, (d) True, (e) True,
(f) True, (g) True, (h) False, (i) True.

2) Solutions/Answers
(a) For Business Intelligence programme implementation, the questions that
must be asked / answered to ensure that BI goals are achieved

• Goal Alignment queries
• Baseline queries
• Cost and risk queries
• Customer and Stakeholder queries
• Metrics-related queries
• Measurement Methodology-related queries
• Results-related queries.

(b) The minimum essential requirements for the Reporting Solution to be
selected are:
• Powerful Authoring
• Flexible Sharing
• Reporting—the Way Users Work
• Proven Technology.
128
3.12 FURTHER READINGS/REFERENCES

1 K.C. Laudon and J.P. Laudon, Management Information Systems: Managing
the Digital Firm (8th Edition), Prentice Hall, New Delhi.

2 P. Hildreth and C. Kimble, Knowledge Networks: Innovation through
Communities of Practice, Idea Group.

3 J. O'Brian, Management Information Systems: Managing Information
Technology in the Networked Enterprise (3rd Ed), Irwin, 1996.

4 Robert Schultheis & Mary Sumner, Management Information Systems - The
Manager's View, Tata McGraw Hill, New Delhi.

5 Sadagopan S., Management Information Systems, Prentice Hall of India, New
Delhi.

6 Basandra S.K., Management Information Systems, Wheeler Publishing, New
Delhi.

7 Alter S., Information Systems: A Management Perspective, 3/e, Addison
Wesley, New Delhi.

8 http://www-users.cs.york.ac.uk/~kimble/teaching/mis/mis_links.html

9 http://www.scs.leeds.ac.uk/ukais/Newsletters/vol3no4.html#Definition

10 http://www.techbooksforfree.com/

11 http://www.acsac.org/2002/tutorials.html