Unit 3
3.0 INTRODUCTION
In today’s fast-changing global markets, success is no longer tied to the traditional
inputs of labour, capital or land. The new critical resource is inside the heads of
employees: knowledge. What a company knows, and how it leverages that knowledge
across the organisation, is the concern of knowledge management. The individual
technologies are not in themselves knowledge management solutions. Instead, when
brought to market they are typically embedded in a smaller number of solution
packages, each of which is designed to be adaptable to solve a range of business
problems. Examples are portals, collaboration software, and distance learning
software. In this unit we aim to impart knowledge about knowledge management and
how to use different tools and technologies to achieve the objectives of an
organisation.
Business Analytics and Business Intelligence are concerned with the process of
collecting and analysing domain-specific data stored in data warehouses to derive
valuable insights about customers and emerging markets, and to identify opportunities
as well as key drivers to business growth. It is increasingly being seen as the key
differentiator that provides a competitive edge to companies across industries. The
tools used in Business Analytics are varied. They range from simple slice-and-dice
tools to statistical methods such as logit regression, discriminant analysis and
multivariate analysis, to more sophisticated tools such as neural networks and
optimisation. The application domains are equally varied. How companies such as
banks, insurance companies and airlines have all been beneficiaries of Business
Intelligence is what we are also going to discuss in this unit.
3.1 OBJECTIVES
After going through this unit, you will be able to:
• understand the knowledge management system;
• understand Artificial Intelligence and its use for business;
• understand the use of business intelligence systems for Marketing, Human
Resources, Finance, etc.;
• design a business intelligence system for your organisation;
• understand the use of business intelligence tools; and
• understand the use of business intelligence reports.
This definition not only gives an indication of what Knowledge Management is, but of
how its advocates often treat the English language. In simpler terms, Knowledge
Management seeks to make the best use of the knowledge that is available to an
organisation, creating new knowledge in the process.
It is helpful to make a clear distinction between knowledge on the one hand, and
information and data on the other.
This first branch had its roots firmly in the use of technology. In this view Knowledge
Management is an issue of information storage and retrieval. It uses ideas derived
from systems analysis and management theory. This approach led to a boom in
consultancies and in the development of so-called knowledge technologies. Typically
first-generation Knowledge Management involved developing sophisticated data
analysis and retrieval systems with little thought as to how the information they
contained would be developed or used. This led to organisations investing heavily in
technological fixes that had either little impact or a negative impact on the way in
which knowledge was used.
This, of course, was a surprisingly difficult thing to do, essentially because knowledge
is not a commodity but a process. But a suitable epistemology was found, in the form
of that developed by Michael Polanyi. Polanyi’s epistemology objectified the
cognitive component of knowledge – learning and doing – by labelling it tacit
knowledge and for the most part removing it from the public view. Learning and doing
became a ‘black box’ that was not really subject to management; the best that could be
done was to make tacit knowledge explicit.
Its failure to provide any theoretical understanding of how organisations learn new
things and how they act on this information meant that first generation Knowledge
Management was incapable of managing knowledge creation.
Along with this realisation came a change in metaphor. Organisations came to be seen
as capable of learning, and so a link grew between learning theory and management.
The advent of complexity theory and chaos theory provided more metaphors that
enabled managers to replace models of organisations as integrated systems with models
of organisations as complex interdependent entities that are capable of responding to
their environment.
Second generation Knowledge Management gives priority to the way in which people
construct and use knowledge. It derives its ideas from complex systems, often making
use of organic metaphors to describe knowledge growth. It is closely related to
organisational learning. It recognises that learning and doing are more important to
organisational success than dissemination and imitation.
3.2.2 Knowledge
Knowledge is the awareness and understanding of facts, truths or information gained
in the form of experience or learning. Knowledge is an appreciation of the possession
of interconnected details which, in isolation, are of lesser value.
Knowledge is a term with many meanings depending on context, but is (as a rule)
closely related to such concepts as meaning, information, instruction, communication,
representation, learning and mental stimulus.
What constitutes knowledge, certainty and truth are controversial issues. These issues
are debated by philosophers, social scientists, and historians. Ludwig Wittgenstein
wrote “On Certainty” — aphorisms on these concepts — exploring relationships
between knowledge and certainty. A thread of his concern has become an entire field,
the philosophy of action.
Customers and end-users also benefit when they have direct access to a knowledge
base to solve their own issues without ever contacting an agent. A growing number of
people now prefer self-service to live interaction, at least for certain problem types.
For some people, self-service fits perfectly into their lifestyle. They are in a hurry and
they need a specific piece of information and that’s all they want. Say, for example, in
a corporate environment, an employee needs to know if there is a Windows 2000
driver for a USB Zip drive. She doesn’t want to wait in a queue. She doesn’t want to
talk to an agent. She just wants to know if there is a driver available and where to find
it. In this case, self-service can be superior to agent-assisted service.
There are some providers of pre-packaged knowledge out there, but our experience is
that while they can be useful to the help desk they are not relevant to customer service
centers which have business-specific content needs. In either case, you must ensure
you have the adequate resources to create and maintain the content you promise.
Creating content is not a one-time project. Also, over time the content must be updated
and supplemented as new products or services are supported as shown in Figure 1.
Empowering agents to add new content as resolutions are discovered is the key to
maintaining a robust system.
To be successful, your project must have several champions within the organisation.
These are individuals that believe in the project, enthusiastically advocate it and have
the clout to “make things happen.” Projects that lack a champion generally don’t get
off the ground. Those with only one champion are also at serious risk.
Losing your champion can spell disaster for your project. This is a real problem for
knowledge management projects, due to their continuous duration. If the project
champion transfers, retires or leaves the company, the project often loses its
momentum and the project may falter as someone else takes it over.
What we like to see when we work with clients is a dual-sponsorship: one at the
operational level and one at the executive level. So if an operations manager decides
the company really needs knowledge management, that manager should find
somebody on the executive staff who will agree to support the vision. By having that
dual track of vision the project is more likely to succeed.
Sometimes there is a fear that knowledge management will be used to replace people.
If your staff thinks that is what you are trying to do, then you really need to address
that head-on. If that is not your intention, you should convince your team that current
head count reduction is not the goal. Therefore, you need to look for and plan the
motivation for each party. After all, you are asking people to shift from a system
where being a tower of knowledge is rewarded, to a system where they share their
expertise with everybody on the team.
Each party will have a unique motivation to embrace knowledge management. For
example, in a technical support environment, a frontline tech will have a different
motivational schema than a 3rd level technician. The frontline tech is not going to
have to ask the 2nd line tech as many questions, and can resolve more problems faster.
The 2nd level tech is not going to get as many of the common questions. Level 3
researchers won’t have to start at ground zero when handed a problem by level 2,
because they know that all the intermediate steps have been covered. So as you look
across the organisation everybody has a different interest and you have to protect all of
them.
Failing to see how knowledge management is going to fit into the rest of the
organisation is a mistake. You must invest the time and energy to understand the
culture, identify motivations and ensure change happens where needed.
In our practice we look for our clients to have a strategic goal for the project rather
than a tactical goal. If you are looking to shorten handle time that’s a tactical
motivation and you’re not as likely to be willing to go through the steps that a
successful enterprise rollout would take. But if it is a strategic initiative, especially
something that is top-down motivated (for instance improving customer service or
improving employee satisfaction) then there is a better value statement involved and
you are not relying on changing one metric. So you might see improvement in
individual metrics like handle time and resolution rates but their value is limited
compared to the return from becoming a collaborative knowledge sharing
organisation.
To get going, decide what goals you are trying to accomplish and why. Then try to
identify a solution and methodology that will help you attain those goals in your
environment. Sometimes people within an organisation may say that a KM initiative is
a nice-to-have, and an economic downturn might slow the process down or defer it on
the grounds that resources are scarce. But I think it is counterproductive to consider
KM a nice-to-have, because the rewards are equally beneficial during both a downturn
and the inevitable upturn. If you wait until the
upturn then you will be forced to play catch up as your call volumes increase and your
email volume doubles; that’s not the time to introduce a knowledge-powered system
or build a knowledge base. It’s not necessary to hire more employees if you have
resources that are not 100% utilized or if you encourage your agents to contribute
knowledge during their daily workflow.
A good approach is to start with a single product group and learn from there. It is much
better to be comprehensive for a narrow topic than fail to get enough depth. Sometimes
an enterprise initiative is needed right
away, and it can be done successfully, but it can involve a larger resource commitment
to do a full-scale project all at once. Remember, the depth of your knowledge base
truly depends upon your customers’ needs.
Today’s systems should enable agents to contribute new knowledge during their
natural workflow. This is critical to ensure that solutions that are not currently in the
system can be quickly added once the resolution has been determined. It’s also
important to remember that regular and timely maintenance of the knowledge base is
the key to success. You should also consider appointing resources to maintain the
knowledge. Be sure to build in a mechanism that identifies gaps in content
(information sought but not found), and a process for filling those gaps. If people
repeatedly fail to find what they are looking for they will stop using the system.
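To make the idea concrete, the following small Python sketch shows one way such a gap-identifying mechanism could work; the class and function names (KnowledgeBase, search, content_gaps) and the sample articles are purely illustrative assumptions, not taken from any particular product.

```python
# Minimal sketch of a content-gap tracker for a knowledge base.
# Names and sample data are illustrative, not from any real product.
from collections import Counter

class KnowledgeBase:
    def __init__(self, articles):
        # articles: mapping of article title -> body text
        self.articles = articles
        self.failed_queries = Counter()

    def search(self, query):
        """Return matching articles; record the query if nothing is found."""
        q = query.lower()
        hits = [title for title, body in self.articles.items()
                if q in title.lower() or q in body.lower()]
        if not hits:
            self.failed_queries[q] += 1   # information sought but not found
        return hits

    def content_gaps(self, min_count=3):
        """Queries that repeatedly failed: candidates for new articles."""
        return [(q, n) for q, n in self.failed_queries.most_common() if n >= min_count]

kb = KnowledgeBase({"USB Zip drive on Windows 2000": "Download the driver from the support site."})
kb.search("Windows 2000")                 # found in an article title
for _ in range(4):
    kb.search("linux zip driver")         # not found, logged as a gap
print(kb.content_gaps())                  # [('linux zip driver', 4)]
```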
In addition to setting management expectations you have to set customer and end-user
expectations. For example, if you are going to provide customers with Web self-
service for one specific product then you must include the known problems that they
are going to encounter in the knowledge base. In that situation you are better off to set
their expectations that the knowledge base covers only that product and no other.
Customers pose the same risk of abandoning the system that your employees do. If they visit the site a
few times and they can’t find an accurate or appropriate answer they will probably not
return again.
There are many ways to “push” your self-service capabilities out to your end-user
audience. Traditional marketing techniques should be employed to promote this
valuable service, such as email, online newsletters or direct mail. Encourage users to
visit your online support site by making it easy to find and access the knowledge base.
Be sure to include the site URL and directions for obtaining a login, if needed, in your
marketing communications.
Another method is to encourage your agents to end support calls by informing the user
of the support site. “Thanks for calling today, I’m glad that I could help you solve
your problem. By the way, we now have a Web self-service site if you’d like to search
the knowledge base. You can find it at www.ABC-Support.com and you can obtain a
login by clicking the request login button on that page.”
Finally, make sure your Internet or intranet site includes an easy-to-find link to your
Web self-service site. A twist on the old saying, “If they can’t find it, they won’t
come.” So make it easy to find, easy to access and easy to use.
The bottom line can be summarized with a quote from Gartner, Inc. – “Those
enterprises that include KM processes as part of their customer relationship
management initiatives have a higher probability of success than those that don’t”.
3.3 CREATING, DEVELOPING AND SHARING KNOWLEDGE
Knowledge flows comprise the set of processes, events and activities through which
data, information, knowledge and meta-knowledge are transformed from one state to
another. To simplify the analysis of knowledge flows, the framework described here is
based primarily on the Knowledge Model. The model organizes knowledge flows into
four primary activity areas: knowledge creation, retention, transfer and utilisation
(Figure 2).
Figure 2: Knowledge model (knowledge creation, retention, transfer and utilisation)
Knowledge Creation: This comprises activities associated with the entry of new
knowledge into the system, and includes knowledge development, discovery and
capture.
Knowledge Retention: This includes all activities that preserve knowledge and allow
it to remain in the system once introduced. It also includes those activities that
maintain the viability of knowledge within the system.
Knowledge Transfer: This refers to activities associated with the flow of knowledge
from one party to another. This includes communication, translation, conversion,
filtering and rendering.
Knowledge Utilisation: This includes the activities and events connected with the
application of knowledge to business processes.
Let us look at the basic processes of knowledge creation and sharing within
organisations, the types of technologies that can be applied to knowledge management,
and their actual or potential contribution.
A set of systematic and disciplined actions that an organisation can take to obtain the
greatest value from the knowledge available is given the name Knowledge
management. “Knowledge” in this context includes both the experience and
understanding of the people in the organisation and the information artifacts, such as
documents and reports, available within the organisation and in the world outside.
Effective knowledge management typically requires an appropriate combination of
organisational, social, and managerial initiatives along with, in many cases,
deployment of appropriate technology.
To structure the discussion of processes involved in knowledge creation and sharing
and technologies involved, it is helpful to classify the technologies by reference to the
notions of tacit and explicit knowledge.
• Tacit knowledge is what the knower knows, which is derived from experience
and embodies beliefs and values. Tacit knowledge is actionable knowledge, and
therefore the most valuable. Furthermore, tacit knowledge is the most important
basis for the generation of new knowledge; however, the key to knowledge
creation is the mobilisation and conversion of tacit knowledge.
Tacit-to-Tacit: Socialisation
Tacit-to-Explicit: Externalisation
Explicit-to-Tacit: Internalisation (e.g., learning from reports)
Explicit-to-Explicit: Combination (e.g., e-mail/reports)
Figure 3: Conversion of Knowledge from Tacit-to-Explicit form and vice-versa, Processes &
Techniques
Groupware: Groupware is a fairly broad category of application software that
helps individuals to work together in groups or teams. Groupware can to some
extent support all four of the facets of knowledge transformation. To examine
extent support all four of the facets of knowledge transformation. To examine
the role of groupware in socialization we focus on two important aspects: shared
experiences and trust.
Shared experiences are an important basis for the formation and sharing of tacit
knowledge. Groupware provides a synthetic environment, often called a virtual
space, within which participants can share certain kinds of experience; for
example, they can conduct meetings, listen to presentations, have discussions,
and share documents relevant to some task. Indeed, if a geographically dispersed
team never meets face to face, the importance of shared experiences in virtual
spaces is proportionally enhanced. An example of current groupware is Lotus
Notes, which facilitates the sharing of documents and discussions and allows
various applications for sharing information and conducting asynchronous
discussions to be built. Groupware might be thought to mainly facilitate the
combination process, i.e., sharing of explicit knowledge. However, the selection
and discussion of explicit knowledge to some degree constitutes a shared
experience.
Some of the limitations of groupware for tacit knowledge formation and sharing
have been highlighted by recent work on the closely related issue of the degree
of trust established among the participants. It was found that videoconferencing
(at high resolution—not Internet video) was almost as good as face-to-face
meetings, whereas audio conferencing was less effective and text chat least so.
These results suggest that a new generation of videoconferencing might be
helpful in the socialization process, at least in so far as it facilitates the building
of trust. But even current groupware products have features that are found to be
helpful in this regard. In particular, access control, which is a feature of most
commercial products, enables access to the discussions to be restricted to the
team members if appropriate, which has been shown to encourage frankness and
build trust.
On-line discussion databases are another potential tool to capture tacit
knowledge and to apply it to immediate problems. It needs to be noted that team
members may share knowledge in groupware applications. To be most effective
for externalization, the discussion should be such as to allow the formulation
and sharing of metaphors and analogies, which probably requires a fairly
informal and even freewheeling style. This style is more likely to be found in
chat and other real-time interactions within teams.
Newsgroups and similar forums are open to all, unlike typical team discussions,
and share some of the same characteristics in that questions can be posed and
answered, but differ in that the participants are typically strangers. Nevertheless,
it is found that many people who participate in newsgroups are willing to offer
advice and assistance, presumably driven by a mixture of motivations including
altruism, a wish to be seen as an expert, and the gratitude and positive feedback
contributed by the people they have helped.
Once tacit knowledge has been conceptualized and articulated, thus converting it to
explicit knowledge, capturing it in a persistent form as a report, an e-mail, a
presentation, or a Web page makes it available to the rest of the organisation.
Technology already contributes to knowledge capture through the ubiquitous use of
word processing, which generates electronic documents that are easy to share via the
Web, e-mail, or a document management system. Capturing explicit knowledge in this
way makes it available to a wider audience, and “improving knowledge capture” is a
goal of many knowledge management projects.
These processes do not occur in isolation, but work together in different combinations
in typical business situations. For example, knowledge creation results from
interaction of persons and tacit and explicit knowledge. Through interaction with
others, tacit knowledge is externalized and shared. Although individuals, such as
employees, for example, experience each of these processes from a knowledge
management and therefore an organisational perspective, the greatest value occurs
from their combination since, as already noted, new knowledge is thereby created,
disseminated, and internalized by other employees who can therefore act on it and thus
form new experiences and tacit knowledge that can in turn be shared with others and
so on. Since all the processes of Figure 3 are important, it seems likely that knowledge
management solutions should support all of them, although we must recognise that the
balance between them in a particular organisation will depend on the knowledge
management strategy used.
Table 1 shows some examples of technologies that may be applied to facilitate the
knowledge conversion processes of Figure 3. The individual technologies are not in
themselves knowledge management solutions. Instead, when brought to market they
are typically embedded in a smaller number of solutions packages, each of which is
designed to be adaptable to solve a range of business problems. Examples are portals,
collaboration software, and distance learning software. Each of these can and does
include several different technologies.
Table 1: Examples of technologies that can support or enhance the transformation of knowledge
Contributions to the formation and communication of tacit knowledge, and support for
making it explicit, are currently weaker, although some encouraging developments are
highlighted, such as the use of text-based chat, expertise location, and unrestricted
bulletin boards.
Challenges
What complicates knowledge transfer? There are many factors, including:
• geography
• language
• areas of expertise
• internal conflicts (e.g., professional territoriality)
• generational differences
• union-management relations
• incentives
• the use of visual representations to transfer knowledge (Knowledge
visualization)
Process
• identifying the key knowledge holders within the organisation
• motivating them to share
• designing a sharing mechanism to facilitate the transfer
• executing the transfer plan
• measuring to ensure the transfer
• applying the knowledge transferred
The advent of the internet brought with it further enabling technologies, including E-
learning, web conferencing, collaborative software, Content management systems,
corporate ‘Yellow pages’ directories, email lists, Wikis, Blogs, and other technologies.
Each enabling technology can expand the level of inquiry available to an employee,
while providing a platform to achieve specific goals or actions. The practice of KM
will continue to evolve with the growth of collaboration applications available by IT
and through the Internet. Since its adoption by the mainstream population and business
community, the Internet has led to an increase in creative collaboration, learning and
research, e-commerce, and instant information.
Knowledge representation is a long-standing problem in the field of artificial
intelligence: how to store and manipulate knowledge in an information system in a
formal way so that it may be used by mechanisms to accomplish a given task. Examples of applications are
expert systems, machine translation systems, computer-aided maintenance systems
and information retrieval systems (including database front-ends).
Some people think it would be best to represent knowledge in the same way that it is
represented in the human mind, which is the only known working intelligence so far,
or to represent knowledge in the form of human language. Unfortunately, we don’t
know how knowledge is represented in the human mind, or how to manipulate human
languages in the same way as the human mind.
For this reason, various artificial languages and notations have been proposed for
representing knowledge. They are typically based on logic and mathematics, and have
easily parsed grammars to ease machine processing.
The recent fashion in knowledge representation languages is to use XML as the low-
level syntax. This tends to make the output of these KR languages easy for machines
to parse, at the expense of human readability.
Examples of notations:
• DATR is an example of a language for representing lexical knowledge
• RDF is a simple notation for representing relationships between objects
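To illustrate the idea behind such notations, the following Python sketch represents relationships between objects as subject-predicate-object triples, which is the core idea of RDF; the plain tuples and the objects_of helper are illustrative simplifications, not actual RDF syntax or any library's API.

```python
# Sketch: relationships between objects stored as subject-predicate-object
# triples. A real application would use an RDF library and proper URIs.
triples = {
    ("house42", "is-a", "house"),
    ("house42", "has-color", "white"),
    ("house42", "number-of-floors", "2"),
    ("house", "is-a", "building"),
}

def objects_of(subject, predicate, kb):
    """All objects related to `subject` by `predicate`."""
    return {o for s, p, o in kb if s == subject and p == predicate}

print(objects_of("house42", "has-color", triples))   # {'white'}
```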
From the earliest days of AI, the knowledge frame, or simply frame, has been used. A frame
consists of slots which contain values; for instance, the frame for house might contain
a color slot, number of floors slot, etc.
Frames can behave something like object-oriented programming languages, with
inheritance of features described by the “is-a” link. However, there has been no small
amount of inconsistency in the usage of the “is-a” link: Ronald J. Brachman wrote a
paper titled “What IS-A is and isn’t”, wherein 29 different semantics were found in
projects whose knowledge representation schemes involved an “is-a” link. Other links
include the “has-part” link.
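A minimal Python sketch of frame-style representation is given below: frames are slot/value mappings, and slot lookup follows the "is-a" link to inherit values from parent frames. The frame names and slots are illustrative assumptions, not drawn from any particular system.

```python
# Sketch of frame-style knowledge representation: frames are slot/value
# mappings, and slot lookup follows the "is-a" link to inherit defaults.
frames = {
    "building": {"has-walls": True},
    "house":    {"is-a": "building", "number-of-floors": 2, "color": "unknown"},
    "house42":  {"is-a": "house", "color": "white"},
}

def get_slot(frame_name, slot):
    """Return the slot value, inheriting from parent frames via 'is-a'."""
    frame = frames.get(frame_name)
    while frame is not None:
        if slot in frame:
            return frame[slot]
        frame = frames.get(frame.get("is-a"))
    return None

print(get_slot("house42", "color"))             # 'white' (local slot)
print(get_slot("house42", "number-of-floors"))  # 2 (inherited from 'house')
print(get_slot("house42", "has-walls"))         # True (inherited from 'building')
```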
Frame structures are well-suited for the representation of schematic knowledge and
stereotypical cognitive patterns. The elements of such schematic patterns are weighted
unequally, attributing higher weights to the more typical elements of a schema. A
pattern is activated by certain expectations: if a person sees a big bird, he or she will
classify it as a sea eagle rather than a golden eagle, provided his or her “sea scheme” is
currently activated.
Frame representations are more object-centred than semantic networks: all the facts
and properties of a concept are located in one place, so there is no need for costly
search processes in the database.
A script is a type of frame that describes what happens temporally; the usual example
given is that of describing going to a restaurant. The steps include waiting to be seated,
receiving a menu, ordering, etc.
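As a rough illustration, a script can be sketched as a frame whose slots hold a stereotyped sequence of scenes; the restaurant script below, with its roles and scene order, is an illustrative toy example.

```python
# Sketch: a script is a frame whose slots capture a stereotyped temporal sequence.
restaurant_script = {
    "is-a": "script",
    "roles": ["customer", "waiter", "cook"],
    "scenes": ["enter", "wait to be seated", "receive menu",
               "order", "eat", "pay", "leave"],
}

def next_scene(script, current):
    """What is expected to happen after the current scene."""
    scenes = script["scenes"]
    i = scenes.index(current)
    return scenes[i + 1] if i + 1 < len(scenes) else None

print(next_scene(restaurant_script, "receive menu"))  # 'order'
```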
problems requires accumulation, induction and inference of experiences to form new
knowledge.
Schools of thought
AI is divided roughly into two schools of thought: Conventional AI and
Computational Intelligence (CI).
• Fuzzy systems: techniques for reasoning under uncertainty, which have been widely
used in modern industrial and consumer product control systems.
With hybrid intelligent systems attempts are made to combine these two groups.
Expert inference rules can be generated through neural network or production rules
from statistical learning such as in ACT-R.
A promising new approach called intelligence amplification tries to achieve artificial
intelligence in an evolutionary development process as a side-effect of amplifying
human intelligence through technology.
The Expert Systems that companies now use, and the AI groups in many
large companies, took shape in the mid-1980s. Expert Systems started to show limits
on the number of rules they could work with, and 1986 sales of AI-based hardware and
software were $425 million (WFMO, 2001). Likewise, interest in using Neural Nets in
business applications developed. By the end of the 1980s, Expert Systems were
increasingly used in industry, and other AI techniques were being implemented, often
unnoticed but with beneficial effect (WFMO, 2001). AI revenues reached $1 billion
(MIT, Timeline of AI, 2001).
An Expert System can be developed using Expert System shell software that has been
specifically designed to enable quick development; AI languages such as LISP and
Prolog; or conventional languages such as Fortran, C++ and Java.
While the Expert System concept may sound futuristic, one of the first commercial
Expert Systems, called Mycin, was already in business use in 1974. Mycin, which
was created by Edward H. Shortliffe at Stanford University, is one of the most famous
Expert Systems. Mycin was designed as a medical diagnosis tool: given information
concerning a patient's symptoms and test results, Mycin attempted to identify the
cause of the patient's infection and suggested treatments. It was observed by some
users that Mycin produced better analysis than medical students or practising doctors,
provided its limitations were observed. Another example of an Expert System is
Dendral, a computerized chemist. According to the Massachusetts Institute of
Technology, the success of Dendral helped to convince computer science researchers
that systems using heuristics were capable of mimicking the way human experts solve
problems.
We may conclude that an Expert System is an AI application that takes decisions
based on knowledge and inference (the ability to act on that knowledge), as defined
by experts in a certain domain, in order to solve problems in that domain. The Expert
System normally falls under the definition of Weak AI, and is one of the AI
techniques that has been easiest for companies to embrace. Commercial Expert
Systems were developed during the 1970s, and continue to be used by companies. One
advantage of an Expert System is that it can explain the logic behind a particular
decision, why particular questions were asked, and/or why an alternative was
eliminated. That is not the case with other AI methods.
Artificial Neural Network: Sometimes the following distinction is made between the
terms “Neural Network” and “Artificial Neural Network”. “Neural network” indicates
networks that are hardware based and “Artificial Neural Network” normally refers to
those which are software-based. In the following paragraphs, “Artificial Neural
Network” is sometimes referred to as “Neural Network” or “Neural Computing”.
Neural Networks are an approach inspired by the architecture of the human brain. The
human brain contains a neural network comprising over 10 billion neurons; each neuron
builds hundreds or even thousands of connections with other neurons.
ii) An unsupervised Neural Network has no target outputs. During the learning
process, the neural cells organise themselves in groups, according to input
pattern. The incoming data is not only received by a single neural cell, but also
influences other cells in its neighbourhood. The goal is to group neural cells
with similar functions close together. Self-organisation Learning Algorithms
tend to discover patterns and relationships in that data.
Artificial Neural Network Techniques: There are many kinds of Artificial Neural
Networks; no one knows exactly how many. We examine only the most common ones here:
(i) Perceptron, (ii) Multi-Layer Perceptron, (iii) Backpropagation Net,
(iv) Hopfield Net, and (v) Kohonen Feature Map.
We can conclude that ANN is inspired by the architecture of the human brain, and
learns to recognise patterns through repeated minor modifications to selected neuron
weights. There are many kinds of ANN techniques that are good at solving problems
involving patterns, pattern mapping, pattern completion, and pattern classification.
ANN pattern recognition capability makes it useful to forecast time series in business.
A Neural Network can easily recognise patterns that have too many variables for
humans to see. They have several advantages over conventional statistical models:
they handle noisy data better, do not have to fulfil any statistical assumptions, and are
generally better at handling large amounts of data with many variables.
A commonly cited problem with Neural Networks is that it is very difficult to understand
their internal reasoning process; however, this is not entirely accurate. It is possible to
get an idea of the learned ANN variables' elasticity: by changing one variable at a time
and looking at the changes in the output pattern, at least some information regarding the
importance of the different variables becomes visible. Neural Networks can be very
flexible systems for problem solving.
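The following Python sketch (using numpy) illustrates both points: a tiny one-layer network is trained on made-up data, and each input variable is then nudged one at a time to gauge its effect on the output, along the lines described above. The data, learning rate and network size are illustrative assumptions, not a recipe for a production system.

```python
# Sketch: a tiny one-layer (logistic) network trained on toy data, then probed
# by varying one input at a time to gauge each variable's "elasticity".
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                           # 3 input variables
y = (2.0 * X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)   # variable 2 is irrelevant

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):                                    # simple gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))              # sigmoid output
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

def predict(inputs):
    return 1.0 / (1.0 + np.exp(-(inputs @ w + b)))

# One-variable-at-a-time sensitivity: nudge each input and watch the output.
base = predict(X).mean()
for i in range(3):
    X_shift = X.copy()
    X_shift[:, i] += 1.0
    print(f"variable {i}: mean output change {predict(X_shift).mean() - base:+.3f}")
```

With data like this, the first variable shows a large positive effect, the second a smaller negative one, and the third almost none, which is exactly the kind of rough insight into the "black box" that the one-variable-at-a-time probe provides.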
• Evolutionary Programming and Evolution Strategy: Evolutionary Programming
uses mutations to evolve populations. It is a stochastic optimisation strategy
similar to the Genetic Algorithm, but it places emphasis on the behavioural
linkage between parents and their offspring, rather than seeking to emulate
specific genetic operators as observed in nature. Evolutionary Programming is
very similar to Evolution Strategies, although the two approaches developed
independently.
We may conclude that the EA tries to mimic the process of biological evolution,
complete with natural selection and survival of the fittest. The four main paradigms
are Genetic Algorithm (GA), Genetic Programming (GP), Evolutionary Programming,
and Evolution Strategy. EA is a useful method of optimisation when other techniques
are not possible. EAs seem to offer an economic combination of simplicity and
flexibility, and may be a better method for finding quick solutions than the more
expensive and time-consuming (but higher-quality) OR methods. However, a hybrid
system combining OR and EA should be able to perform quite well.
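A minimal Python sketch of the evolutionary idea, mutation of a population followed by survival of the fittest, is shown below; the fitness function and all parameters are purely illustrative.

```python
# Minimal sketch of an evolutionary algorithm: a population of candidate
# solutions is mutated and selected over generations.
import random

def fitness(x):
    return -(x - 3.0) ** 2          # toy objective with its maximum at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(100):
    # mutate: each parent produces an offspring with a small random change
    offspring = [x + random.gauss(0, 0.5) for x in population]
    # survival of the fittest: keep the best half of parents + offspring
    population = sorted(population + offspring, key=fitness, reverse=True)[:20]

print(round(max(population, key=fitness), 2))   # close to 3.0
```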
Hybrid System: More people have recently begun to consider combining the
approaches into hybrid ones. A Hybrid System is a system that uses more than one
problem-solving technique in order to solve a problem. There is a huge amount of
interest in Hybrid Systems, for example: neural-fuzzy, neural-genetic, and fuzzy-
genetic hybrid systems. Researchers believe they can capture the best of the methods
involved, and outperform the solitary methods.
“Fuzzy Logic and Fuzzy Expert System” and “Data Mining” are deliberately placed
under the heading of Hybrid System. Fuzzy Logic is a method that is combined with
other AI techniques (Hybrid System) to represent knowledge and reality in a better
way. Data Mining does not have to be a Hybrid System, but usually is, for example,
IBM’s DB2 (Data Mining tool), which contains techniques (IBM, 2001) such as
Statistics, ANN, GA, and Model quality graphics, etc. Let us now take a closer look at
the methods.
Fuzzy Logic and Fuzzy Expert Systems: Fuzzy Logic resembles human reasoning,
but handles estimated information and vagueness in a better way. The answers to real-
world problems are rarely black or white, true or false, or start or stop. By using Fuzzy
Logic, knowledge can be expressed in a more natural way (fuzzy logic instead of
Boolean “crisp” logic).
i) Fuzzy Logic is a departure from classical two-valued sets and logic. It uses
“soft” linguistic system variables (e.g., large, hot, tall) and a continuous range
of truth values in the interval [0, 1], rather than strict binary (true or false)
decisions and assignments.
Fuzzy Logic is ideal for controlling non-linear systems and for modeling
complex systems where an inexact model exists, or in systems where ambiguity
or vagueness is common. Many commercial products available today use Fuzzy
Logic, such as washing machines and high-speed trains.
ii) Fuzzy Expert Systems: Often Fuzzy Logic is combined with Expert Systems,
such as the so-called Fuzzy Expert Systems which are the most common use of
Fuzzy logic. These systems are also called “Fuzzy Systems” and use Fuzzy
Logic instead of Boolean (crisp) logic. Fuzzy Expert Systems are used in several
wide-ranging fields, including linear and nonlinear control, pattern recognition,
financial systems, operations research, data analysis, etc.
We may conclude that a Hybrid system uses more than one technique, such as neural-
fuzzy, neural-genetic, Fuzzy Expert System, Data Mining (most often), etc., to solve a
problem. Fuzzy logic is incorporated into computer systems so that they represent
reality better by using “non-crisp” knowledge. Often Fuzzy Logic is combined with
Expert Systems, the so-called Fuzzy Expert System or more simply, “Fuzzy System.”
Data Mining software most often uses various techniques, including Neural Networks,
statistical and visualization techniques, etc. to turn what are often mountains of data
into useful information. Data Mining does not always contain AI techniques. It is
quite possible that Data Mining will become a very useful tool for companies in the
competition for market share.
Rule 1 : If the velocities at measuring points 3 and 4 are abnormal, then the fan failure is expected.
Rule 2 : If the velocities at measuring points 2 and 3 are abnormal, then the transmission failure is expected.
The acceleration and velocity of vibration at the sensing points 1, 2, 3, and 4 are
measured. The causalities between these measured values and failures are obtained by
using expert knowledge. This knowledge is expressed in a matrix and is transformed
into production rules, as shown in Figure 5. The precise diagnosis is carried out based on the
spectral analysis of the vibration data. The levels of the fundamental and higher
components of the data are calculated. The relationship between the level values and
failures is obtained by using expert knowledge. This knowledge is represented by
frames. By using this knowledge source, the inference process proceeds as shown in
Figure 6.
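To make the production-rule idea concrete, the sketch below encodes Rule 1 and Rule 2 directly in Python; the threshold used to decide that a velocity is "abnormal" and the sample measurements are illustrative assumptions.

```python
# Sketch: the two production rules above expressed directly in code.
ABNORMAL_THRESHOLD = 5.0   # illustrative limit for an "abnormal" velocity

def abnormal(velocity):
    return velocity > ABNORMAL_THRESHOLD

def diagnose(velocities):
    """velocities: dict of measuring point -> measured vibration velocity."""
    findings = []
    if abnormal(velocities[3]) and abnormal(velocities[4]):
        findings.append("fan failure expected")          # Rule 1
    if abnormal(velocities[2]) and abnormal(velocities[3]):
        findings.append("transmission failure expected") # Rule 2
    return findings or ["no failure expected"]

print(diagnose({1: 1.2, 2: 2.0, 3: 7.5, 4: 8.1}))  # ['fan failure expected']
```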
• Fluid machines
• Electric rotation machines
• Mills
• Stationary electric machines
• Motors
• Blowers
• Pumps
• Towers
• Drums
Sensing place
• Bearing portions
• Tanks
• Shafts
• Pipes
Condition-based diagnosis techniques have also been used to identify the mode of
failure in abnormal vibration, crack by nondestructive examination (ultrasonic or
X-ray), corrosion, or degradation of insulation. A very popular technique is to detect
the abnormal vibration in the bearing portions and the shaft of rotating machinery. The
level of vibration in the machine axis is measured by using the acceleration pick-up at
regular intervals, thus obtaining the tendency of increasing vibration. Many rotating
machines are maintained by using this method. The precondition for a condition-based
strategy is to make the deteriorating conditions more transparent and predictable. This
is where Artificial Intelligence can be used to bring competitive advantages.
Company Control
There are several AI-based programs that control what employees do on the Internet,
and what they send and receive in their e-mail at work. It is also believed that while
preventing access to inappropriate web sites could be acceptable, checking employees’
e-mail is going one step too far. Unless a reasonable limit is set, we will have a “Big
Brother” society. Furthermore, with all of the electronic information that companies
receive today, it is expected that intelligent agents will be used more and more often to
process information in automated and customized ways to ease information overload.
Production Management
AI software that uses Genetic Algorithms to ‘breed’ factory schedules generates far
better schedules than humans can produce. Case studies have shown that Data Mining
with ANN helps solve some of the processing and interpretation problems for
companies and has even played a key role in discovering oil fields.
Finance Management
Some believe that computers with Neural Networks are better at selecting stocks than
people are. However, finding information regarding Neural Networks’ success in the
field of finance is difficult, most likely because successful systems are being treated as
company secrets. The discussion of AI in the financial context has generally indicated
that AI techniques are somewhat useful to most financial applications. AI techniques
should catch on in coming years given the growing complexity of the markets, which
will require more computing power and analysis to deal with information overload. It
seems that many systems are best used as assistants to an existing team of experts
rather than on their own.
We may conclude that AI has gained a foothold in the world of business. That
foothold, moreover, is getting larger and larger as time goes by. One question which
comes to mind is why it has taken so long for these methods to become visible in
business applications. There appear to be four possible answers to that question. First,
it seems that the development of processing power has been a catalyst that made it
possible for AI-based systems to gain a foothold in the business world. Furthermore, it
is only lately that affordable computers with sufficient processing power have become
available to companies. Second, AI often competes with business methods that have
been quite successful and in use for very long periods of time. So, why risk changing a
working concept, companies may think. There are also some interpretation difficulties
in some AI systems: for instance, old, tried-and-true statistical methods win over ANN
simply because people are unwilling to use a system where they do not see the effect
of each variable (i.e., the Black Box). This can be a high threshold to overcome. Third,
many AI applications involve large investments of money and failure can also be very
costly; this makes companies circumspect regarding investment decisions. Finally,
the fourth reason is simply that new technologies always seem to have a threshold for
acceptance. Furthermore, many critics believe that AI has not fulfilled its promise. Yet
they do not discard it as a method. It is a fact that companies are using AI and earning
money as a result.
The bank can make money by lending to wealthy people, but there are only few
wealthy people. The bank can make more money by also lending to middle class
people. The bank can make even more money by lending to poor people.
Note that poorer people are usually at greater risk of default. Note too, that some poor
people are excellent borrowers. Note too, that a few poor people may eventually
become rich, and will reward the bank for loyalty.
The bank wants to maximize its income, while minimizing its risk, which makes the
portfolio hard to understand.
The analytics solution may combine time series analysis with many other issues in
order to make decisions on when to lend money to these different borrower segments,
or decisions on the interest rate charged to members of a portfolio segment to cover
any losses among members in that segment.
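A simple Python sketch of this trade-off is given below: for each borrower segment, the expected profit per loan combines the interest earned when the loan is repaid with the principal lost on default. All of the figures (segment sizes, default probabilities, interest rates and loan amount) are made up for illustration.

```python
# Sketch: expected return per borrower segment, illustrating the
# income-versus-risk trade-off described above. All figures are made up.
segments = {
    # segment: (number of borrowers, default probability, interest rate)
    "wealthy": (1_000, 0.01, 0.06),
    "middle":  (10_000, 0.05, 0.09),
    "poor":    (30_000, 0.15, 0.14),
}

loan_amount = 10_000
for name, (n, p_default, rate) in segments.items():
    # expected profit per loan: interest earned if repaid, principal lost if not
    expected = (1 - p_default) * loan_amount * rate - p_default * loan_amount
    print(f"{name:8s}: {n:6d} borrowers, expected profit per loan {expected:8.0f}, "
          f"portfolio total {n * expected:12.0f}")
```

With these made-up numbers, lending to the poorest segment loses money at the quoted rate, which is exactly the kind of insight an analytics solution would use when deciding whether to lend to a segment or what interest rate to charge it.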
Business analytics as Change Manager: The best hedge against an uncertain future
is figuring out how to avoid being surprised when the unexpected happens. Better
yet, business executives need to be able to quickly take advantage of changing
conditions with new products and services. To accomplish these somewhat elusive
goals, companies must constantly improve their ability to identify, classify, and
intelligently analyse all available information.
Web sites are adding to the mounds of customer data that companies have to deal
with. Web managers can monitor click stream log files to identify how customers
navigate a site, where they came from, how long they were there, what they
purchased, and where they headed afterward.
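As a rough illustration, the Python sketch below summarises a tiny click-stream log to show, for each visitor, where they came from, how long they stayed and which pages they viewed. The log format and field names are illustrative assumptions about what a click-stream log might contain.

```python
# Sketch: summarising a click-stream log per visitor. The log format is made up.
from collections import defaultdict
from datetime import datetime

log = [
    # (visitor id, timestamp, referrer, page)
    ("v1", "2024-01-10 10:00:05", "google.com", "/home"),
    ("v1", "2024-01-10 10:02:40", "google.com", "/product/42"),
    ("v1", "2024-01-10 10:05:12", "google.com", "/checkout"),
    ("v2", "2024-01-10 11:15:00", "newsletter", "/home"),
]

sessions = defaultdict(list)
for visitor, ts, referrer, page in log:
    sessions[visitor].append((datetime.fromisoformat(ts), referrer, page))

for visitor, events in sessions.items():
    events.sort()
    duration = (events[-1][0] - events[0][0]).total_seconds()
    pages = [page for _, _, page in events]
    print(f"{visitor}: came from {events[0][1]}, stayed {duration:.0f}s, viewed {pages}")
```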
The goal of high-end business analytics is to turn these individually useful but often
marginalized data resources into something that lets business managers immediately
grasp the dynamic state of their business. This includes the current and projected
status of their customers by group and individual needs. Ideally, analytics lets
companies combine demographic and behavioural data with sales information to
determine how best to leverage the customer relationship.
A company’s ultimate goal is to precisely target new and existing goods to those
individuals and groups based on the profiles gleaned from the analytic process.
Corporate decision makers need to be increasingly attuned to business opportunities
that arise whenever a customer, business, or industry factor changes. Exploiting
change is the role of business analytics.
Most companies have Web-log data that’s sparse and discrete and a wealth of
transaction data, in some cases going back 20 years or more, that’s rich and
continuous. The nirvana here is to integrate these data sources in a meaningful way so
a company can tell what its customers are doing now and have done in the past.
Business analysts can take that data, do a little trend analysis, and decide how best to
pitch new products to customers.
There are costs associated with integrating all this data. The investment in gathering
the data and aggregating it in meaningful ways must yield a quantifiable business
benefit. You can gather all kinds of information, but if you don’t have context, if you
don’t advance a business hypothesis and generate a good strategy, it’s pretty much
worthless information.
Reviewing historical and current data over time can optimally yield enough trend
information to feed statistical models that let trained users predict events and trends.
However, there’s a reticence on the part of decision makers to trust “black box”
business models used by consultants and in some software tools without understanding
the parameters being measured.
Most of the data is extremely diverse and often in a constant state of flux, so there’s
usually no single, specific analytic technique appropriate to your data at a particular
stage of its evolution. The upshot is that predictive models must be appropriate to the
task, highly customized to specific business conditions, and targeted to address
specific areas of interest or answer particular questions.
Rather than just having high-end modeling at one end of the spectrum and static
reports at the other, what's needed is analytics and analytic applications that watch for
change and initiate actions at both an individual and a group level. Analytics are most
useful when the application proactively lets the right people know when relevant
business factors change.
One way of spotting trends is to be able to measure just the part of the business that’s
changing. The future level of a lake can be predicted based on how much water is
going in and how much is going out. The same analogy applies to business customers.
It has been observed that half of companies perform daily data warehouse updates,
40% have weekly or monthly updates, and 10% have real-time or near-real-time
updates. It has also been observed that many of the companies performing weekly or
monthly updates are apt to shift en masse to performing daily or continuous updates as
a result of evolving market and competitive conditions.
The need to act upon information is a key driver of high-end analytic applications.
Folding business intelligence back into the business decision-making process,
operational systems, or human interaction is the primary way to make sure that a
company can respond appropriately to changes in customer and market conditions. To
bring about this organisational dynamic, the analytic results must be available to all of
the people within a company. Traditionally, a lot of information gleaned from a
company's business-intelligence tools went to upper management, but it didn't
percolate quickly down into the trenches where it could be acted upon by the rank and
file. However, the percolated information needs to be customized for the company,
as a company may not want to send all information to every employee.
Improved search and text-mining techniques are aiding the quest for timely
information. Predictive modeling applies in this scenario as well. This form of
business modeling can help present information based on particular users' past
interests and help predict what a manager might want or need to know in the future.
It’s all well and good to have a group of statisticians sitting in their ivory cubicles, and
it's quite true that companies still need those people to do the data mining today. But if
business intelligence is to be more widely used across the enterprise, people must be
able to act upon it in a timely fashion and fold the information back into the business
process. Critical information about the state of the business must be distributed
quickly, efficiently, and appropriately to those people and departments that can affect
the company’s adaptability.
These are goals that IT departments have avidly pursued but have hitherto never been
able to grasp fully. Fortunately, today’s advanced analytics tools point to a time when
compiling data, monitoring near-real-time business events, and synthesizing that data
via data-mining and other advanced techniques will let companies respond almost
immediately to perceived or predicted changes in market conditions. Early versions of
these tools already let companies make business forecasts, optimize resources on the
fly, and suggest appropriate actions with unprecedented speed, agility, and accuracy.
The latest challenge in business analytics is to capture external data from sources that
companies haven’t really considered before. If there’s a cliché in the making here, it’s
that data abounds, but knowledge acquisition takes a lot more work. Many alternate
sources of data are available via the Web. The number of customer-data sources
continues to expand dramatically. The key will be to determine which of these data
points are more relevant and to figure out organisational processes that will permit
appropriate data to be fed to the analytics engine so a company or department can
respond to it in real time and feed it into ongoing projects, sales efforts, and marketing
campaigns.
Neural networks are flexible models that can be applied to predictive analysis and
pattern-recognition problems. You want to control for different factors and see what's
working for you and what's giving you the most bang for your buck. Well-targeted
analytics will provide yield indicators and trends that the company can exploit to its
advantage.
This was characteristic of the naiveté of early data warehouse projects in the 1990s,
many of which failed. “We’re seeing much more sophistication around the way high-
end business analytics are approached today. We don’t really just want to bring a lot
of data together; we actually want to work backward from the questions we’re trying
to answer.”
The acceptance problem is twofold. The tools and techniques are still complex and
difficult to use. Companies require the guys with the lab coats to make these tools
hum. The analytic results that derive from them are often barely auditable, especially
when employing things such as neural networks. The sheer sophistication of the tools
makes it difficult for business managers to understand how the software came to a
particular conclusion. As a result, decision makers often feel uncomfortable
implementing results from analysis that they cannot audit to figure out what the
assumptions are and how the results were derived.
Trust will emerge when people take a couple of these recommendations, implement
them, and see a positive impact on the bottom line. The ultimate outcome of high-end
analytics will be systems that can process diverse business data, draw conclusions, and
alert managers to proposed actions and outcomes.
Hopefully, the impact on business will be companies that are more agile and better
informed about all the conditions both within and outside their corporate boundaries.
Available Business Analytics: Such suites are being offered by several vendors.
Vendors claim that these suites identify trends, perform comparisons and highlight
opportunities in various business functions like supply chain management, even when
large amounts of data are involved. These suites combine technology with human effort
and help decision-makers in business areas such as sourcing, inventory management,
manufacturing, quality, sales and logistics. Business Analytics solutions leverage
investments made in enterprise applications, web technologies, data warehouses and
information obtained from external sources to locate patterns among transactional,
demographic and behavioural data.
Vendors claim that with wafer-thin margins, managing costs is an ongoing challenge.
Business analytics solutions being offered can help managers in sales, marketing,
customer support, supply chain planning and financials understand and respond to key
issues, such as:
Executive Information Systems (EIS): Executive dashboards with drilldown analysis
capabilities that support decision-making at an executive level.
Online Analytical Processing (OLAP): OLAP tools are mainly used by analysts.
They apply relatively simple techniques such as deduction, induction, and pattern
recognition to data in order to derive new information and insights.
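For example, a simple slice-and-dice operation can be sketched with a pivot table, as below (using the pandas library); the sales data is made up for illustration.

```python
# Sketch: "slice and dice" with a pivot table, the kind of simple OLAP-style
# analysis described above. The sales data is made up for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "product": ["A", "B", "A", "A", "B"],
    "quarter": ["Q1", "Q1", "Q1", "Q2", "Q2"],
    "revenue": [120, 80, 150, 170, 90],
})

# Slice: look at one quarter; dice: cross-tabulate region by product.
q1 = sales[sales["quarter"] == "Q1"]
cube = pd.pivot_table(q1, values="revenue", index="region",
                      columns="product", aggfunc="sum", fill_value=0)
print(cube)
# product    A   B
# North    120  80
# South    150   0
```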
Standard reports are designed and built centrally and then published for general use.
• Parameterized reports: Fixed layout reports that allow users to specify which
data are to be included, such as date ranges and geographic regions.
• Interactive reports: These reports give users the flexibility to manipulate the
structure, layout and content of a generic report via buttons, drop-down menus
and other interactive devices.
Ad-hoc reports: generated by users as a “one-off” exercise. The only limitations are
the capabilities of the reporting tool and the available data.
• Drive more effective actions: Guide users toward more intelligent actions and
customer interactions.
• Sales Analytics
• Service and Contact Center Analytics
• Marketing Analytics
• Supply Chain Analytics
• Financial Analytics
• Workforce Analytics
• Real-Time Decisions Solutions
Stages of a Business Analytics Initiative
Figure 7 depicts the various stages in a data warehouse and business analytics
initiative. While data analytics comprise the service layer for the applications, the
other stages are equally important. Analytical services have varying applicability
across the high tech value chain.
Figure 7: Stages of a data warehouse and business analytics initiative: internal and
external data sources feed a data staging area (via bulk or real-time transfer, with
error handling), which loads relational storage for data analysis; results reach users
through reporting and presentation tools, mining clients and portals accessed from a
browser.
Supply Chain Analytics
It enables more effective management of the complexities of the organisation’s supply
chain. A typical Supply Chain Analytics provides several dashboards, and several
pre-built reports that deliver comprehensive insight across sales, logistics,
procurement, manufacturing, and quality assurance departments. This helps to:
The given examples of pre-built applications are from Siebel Business Analytics:
Inventory Analytics
Enterprise Sales Analytics provides several key performance indicators and large
number of reports delivered in several customizable dashboards. A typical Enterprise
Sales Analytics enables sales managers and front-line representatives to dramatically
improve sales effectiveness by:
• Providing real-time, actionable insight into every sales opportunity at the point
of customer contact,
• Closing business faster and increasing overall sales revenue,
• Confidently providing more accurate and up-to-date sales forecasts,
• Quickly pinpointing problems and opportunities to close more business,
• Monitoring status and taking action to ensure quota achievement,
• Maximizing revenue through better cross-selling and up-selling, and
• Shortening sales cycles and increasing win rates.
Track sales orders, invoicing, and revenue. Increase customer value and follow-on
sales potential. Proactively manage order pipeline and focus resources to maximize
sales revenue.
Sales Revenue and Fulfilment Analytics
• Accelerate the lead-to-cash cycle through visibility across the entire sales process,
• Achieve a comprehensive view of customer orders and invoices,
• Maximize sales throughput.
Financial Analytics
This enables understanding and managing the key drivers of shareholder value and
profitability. A typical Financial Analytics helps front-line managers improve
financial performance through complete, up-to-the-minute information on their
department’s expenses and revenue contribution. Users will benefit from:
• Payables Analysis
• Receivables Analysis
• Profitability Analysis
Marketing Analytics
BI business processes
Each business intelligence system has a specific goal, which derives from an
organisational goal or from a vision statement. Both short-term goals (such as the
quarterly numbers reported to the share market) and long-term goals (such as shareholder value,
target industry share / size, etc.) exist.
BI technology
Some observers regard BI as the process of enhancing data into information and then
into knowledge. Persons involved in business intelligence processes may use
application software and other technologies to gather, store, analyze, and provide
access to data, and present that data in a simple, useful manner. The software aids in
Business performance management, and aims to help people make "better" business
decisions by making accurate, current, and relevant information available to them
when they need it.
Some people use the term “BI” interchangeably with “briefing books” or with
“executive information systems”, and the information that they contain. In this sense,
one can regard a business intelligence system as a decision-support system (DSS).
BI software types
People working in business intelligence have developed tools that ease the work,
especially when the intelligence task involves gathering and analyzing large quantities
of unstructured data. Each vendor typically defines Business Intelligence in its own
way, and markets tools to do BI the way that it sees it.
History
Prior to the start of the Information Age in the late 20th century, businesses sometimes
struggled to collect data from non-automated sources. Businesses then lacked the
computing resources to properly analyze the data, and often made business decisions
primarily on the basis of intuition.
As businesses started automating more and more systems, more and more data became
available. However, collection remained a challenge due to a lack of infrastructure for
data exchange or to incompatibilities between systems. Analysis of the data that was
gathered and reports on the data sometimes took months to generate. Such reports
allowed informed long-term strategic decision-making. However, short-term tactical
decision-making continued to rely on intuition.
Business intelligence software incorporates the ability to mine data, analyze, and
report. Some modern BI software allows users to cross-analyze and perform deep data
research rapidly for better analysis of sales or performance on an individual,
department, or company level. In modern applications of business intelligence
software, managers are able to quickly compile reports from data for forecasting,
analysis, and business decision making.
Indicators
BI often uses Key Performance Indicators (KPIs) to assess the present state of business
and to prescribe a course of action. More and more organisations have started to make
more data available more promptly. In the past, data only became available after a
month or two, which did not help managers to adjust activities in time to hit share
market targets. Recently, banks have tried to make data available at shorter intervals
and have reduced delays.
The KPI methodology was further expanded with the Chief Performance Officer
methodology which incorporated KPIs and root cause analysis into a single
methodology.
KPI example
For example, for businesses which have higher operational/credit risk loading (for
example, credit cards and "wealth management"), a large multi-national bank makes
KPI-related data available weekly, and sometimes offers a daily analysis of numbers.
This means data usually becomes available within 24 hours, necessitating automation
and the use of IT systems.
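By way of illustration only, the sketch below computes a single risk-related KPI (the share of accounts behind on payments) from a daily extract; this is the kind of figure such a bank could refresh within 24 hours. The field names are hypothetical.

def delinquency_rate(accounts):
    """KPI: percentage of accounts that are behind on payments."""
    overdue = sum(1 for a in accounts if a["days_past_due"] > 0)
    return 100.0 * overdue / len(accounts) if accounts else 0.0

daily_extract = [
    {"account": "A1", "days_past_due": 0},
    {"account": "A2", "days_past_due": 35},
    {"account": "A3", "days_past_due": 0},
]
print(f"Delinquency rate: {delinquency_rate(daily_extract):.1f}%")   # 33.3%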
• Goal Alignment queries: The first step determines the short and medium-term
purposes of the programme. What strategic goal(s) of the organisation will the
programme address? What organisational mission/vision does it relate to? A
crafted hypothesis needs to detail how this initiative will eventually improve
results / performance (i.e., a strategy map).
• Cost and risk queries: The financial consequences of a new BI initiative
should be estimated. It is necessary to assess the cost of the present operations
and the increase in costs associated with the BI initiative. What is the risk that
the initiative will fail? This risk assessment should be converted into a financial
metric and included in the planning.
• Customer and Stakeholder queries: Determine who will benefit from the
initiative and who will pay. Who has a stake in the current procedure? What
kinds of customers/stakeholders will benefit directly from this initiative? Who
will benefit indirectly? What are the quantitative / qualitative benefits? Is the
specified initiative the best way to increase satisfaction for all kinds of
customers, or is there a better way? How will customers’ benefits be monitored?
What about employees, shareholders, and distribution channel members?
3.7.1 Marketing
Smart businesses, in their efforts to meet the competition, have reoriented their
business around the customer by improving Customer Relationship Management. In
the mad rush to acquire new customers, they have realized it is equally important to
retain the existing ones. Increased interaction and sophisticated analysis techniques
have given businesses unprecedented access to the mind of the customer, and they are
using this to develop one-to-one relations with customers, design marketing and
promotion campaigns, optimize storefront layout, and manage e-commerce operations.
For improving Customer Relationship Management (CRM), the CRM strategy needs
to include:
• Operational CRM: Automating interaction with the customers and sales force,
• Analytical CRM: Sophisticated analysis of the customer data generated by
operational CRM and other sources like Sales Orders transactions, web site
transactions, and third-party data providers.
A typical business organisation has a huge customer base, and customers’ needs often
vary widely. Without the means to analyze voluminous customer data, a CRM
strategy is bound to fail; therefore, Analytical CRM forms the core of the
customer relationship strategy of a business.
Marketing and sales functions are the primary beneficiaries of Analytical CRM and
the main touch points from where the insights gained about the customer are absorbed
into the organisation.
Analytical CRM uses key business intelligence tools like data warehousing, data
mining, and OLAP to present a unified view of the customer. Following are some of
the uses of Analytical CRM:
a) Which media channels have been most successful in the past for various
campaigns?
b) Which geographic locations responded well to a particular campaign?
c) What were the relative costs and benefits of this campaign?
d) Which customer segments responded to the campaign?
• Customer Lifetime Value: Not all customers are equally profitable. At the same
time customers who are not very profitable today may have the potential of
being profitable in future. Hence, it is absolutely essential to identify customers
with high lifetime value; the idea is to establish long-term relations with these
customers.
The basic methodology used to calculate customer lifetime value is to deduct the
cost of servicing a customer from the expected future revenue generated by the
customer, add to this the net value of new customers referred by this customer,
and discount the result for the duration of the relationship. Though this sounds
easy, there are a number of subjective variables like overall duration of the
customer's relation with the business, gap between intermediate cash flows, and
discount rate. It is suggested that data mining tools should be used to develop
customized models for calculating customer lifetime value.
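A minimal sketch of the calculation just described, assuming yearly cash flows, a fixed discount rate and a single figure for referred business; in practice the subjective variables mentioned above would be estimated with data mining models.

def customer_lifetime_value(yearly_revenue, yearly_cost, referral_value,
                            years, discount_rate):
    """Discounted net value of a customer over the expected relationship."""
    clv = 0.0
    for t in range(1, years + 1):
        clv += (yearly_revenue - yearly_cost) / (1 + discount_rate) ** t
    return clv + referral_value

# A customer worth 1,200 a year, costing 300 to serve, expected to stay 5 years
# and to refer business worth 500, discounted at 10% per year:
print(round(customer_lifetime_value(1200, 300, 500, 5, 0.10), 2))   # about 3911.71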
1) Web Log Analysis: This involves analyzing the basic traffic information
over the e-commerce web site. This analysis is primarily required to
optimize the operations over the Internet (a minimal sketch is given after
this list). It typically includes the following analyses:
2) Web Housing: This involves integration of web log data with data from
other sources like the purchase order transactions, third party data vendors
etc. Once the data is collected in a single customer centric data warehouse,
often referred to as Web house, all the applications already described
under CRM can be implemented. Often a business wants to design
specific campaigns for users who purchase from the e-commerce web site.
In this case, segmentation and profiling can be done specifically for the e-
customers to understand their needs and browsing behavior. It can also be
used to personalize the content of the e-commerce web site for these users.
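The sketch referred to under point 1 above is given here: it counts page views per URL from web server log lines in the common combined format. In a Web house, these counts would then be joined with purchase orders and third-party data.

from collections import Counter

def page_views(log_lines):
    """Count requests per URL from common-format web server log lines."""
    views = Counter()
    for line in log_lines:
        parts = line.split('"')
        if len(parts) > 1:                       # '... "GET /page HTTP/1.1" ...'
            request = parts[1].split()
            if len(request) >= 2:
                views[request[1]] += 1
    return views

sample = [
    '1.2.3.4 - - [01/Jan/2024] "GET /products/silk-shirts HTTP/1.1" 200 512',
    '5.6.7.8 - - [01/Jan/2024] "GET /products/silk-shirts HTTP/1.1" 200 512',
    '1.2.3.4 - - [01/Jan/2024] "GET /checkout HTTP/1.1" 200 128',
]
print(page_views(sample).most_common(1))   # [('/products/silk-shirts', 2)]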
• Manpower Allocation: This includes allocating manpower based on the
demand projections. According to the seasonal variation in demand, temporary
manpower can be hired to maintain service levels. Demand levels also vary
within a single working day, and this information can be used to allocate resources
accordingly.
• Training and Succession Planning: Accurate data about the skill sets of the
workforce can be maintained in the data warehouse. This can be used to design
training programs and for effective succession planning.
• Fixed Asset Return Analysis: This is used to analyse financial viability of the
fixed assets owned or leased by the company. It would typically involve
measures like profitability per sq. foot of the space, total lease cost vs.
profitability, etc.
While some tools include ETL functionality, ETL tools are generally not considered business
intelligence tools.
• Oracle Corporation
• QlikView
• Siebel Systems
• SAP Business Information Warehouse
• SAS Institute
• Saksoft
• Synola Ltd
• Stratws
Some of these business intelligence tools are discussed in more detail below:
Databases configured for OLAP employ a multidimensional data model, allowing for
complex analytical and ad-hoc queries with a rapid execution time. Nigel Pendse has
suggested that an alternative and perhaps more descriptive term to describe the
concept of OLAP is Fast Analysis of Shared Multidimensional Information (FASMI).
They borrow aspects of navigational databases and hierarchical databases that are
speedier than their relational kin.
OLAP Functionality
OLAP takes a snapshot of a set of source data and restructures it into an OLAP cube.
The queries can then be run against this. It has been claimed that for complex queries
OLAP can produce an answer in around 0.1% of the time for the same query on OLTP
relational data.
The cube is created from a star schema or snowflake schema of tables. At the centre is
the fact table which lists the core facts which make up the query. Numerous dimension
tables are linked to the fact tables. These tables indicate how the aggregations of
relational data can be analyzed. The number of possible aggregations is determined by
every possible manner in which the original data can be hierarchically linked. For
example a set of customers can be grouped by city, by district or by country; so with
50 cities, 8 districts and two countries there are three hierarchical levels with 60
members. These customers can be considered in relation to products; if there are 250
products with 20 categories, three families and three departments then there are 276
product members. With just these two dimensions there are 16,560 (276 * 60) possible
aggregations. As the data considered increases the number of aggregations can quickly
total tens of millions or more.
The calculation of the aggregations and the base data combined make up an OLAP
cube, which can potentially contain all the answers to every query which can be
answered from the data. Due to the potentially large number of aggregations to be
calculated, often only a predetermined number are fully calculated while the remainder
are solved on demand.
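To illustrate the member arithmetic above and the idea of pre-computing aggregations, the following sketch (which assumes the pandas library is available) verifies the 16,560 figure and aggregates a tiny fact table along one level of the customer and product hierarchies.

import pandas as pd

# Dimension-member arithmetic from the example in the text:
customer_members = 50 + 8 + 2           # cities + districts + countries = 60
product_members = 250 + 20 + 3 + 3      # products + categories + families + departments = 276
print(customer_members * product_members)    # 16560 possible aggregations

# A tiny fact table aggregated along one level of each hierarchy:
facts = pd.DataFrame({
    "country":  ["IN", "IN", "US"],
    "city":     ["Delhi", "Mumbai", "Boston"],
    "category": ["Shirts", "Shirts", "Shoes"],
    "sales":    [100, 250, 400],
})
print(facts.groupby(["country", "category"])["sales"].sum())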
Types of OLAP
There are three types of OLAP.
Multidimensional OLAP
MOLAP is the ‘classic’ form of OLAP and is sometimes referred to as just OLAP.
MOLAP uses database structures that are generally optimized for attributes such as time
period, location, product or account code. The way that each dimension will be
aggregated is defined in advance by one or more hierarchies.
Relational OLAP
ROLAP works directly with relational databases. The base data and the dimension
tables are stored as relational tables and new tables are created to hold the aggregated
information.
Hybrid OLAP
There is no clear agreement across the industry as to what constitutes "Hybrid OLAP",
except that a database will divide data between relational and specialized storage. For
example, for some vendors, a HOLAP database will use relational tables to hold the
larger quantities of detailed data, and use specialized storage for at least some aspects
of the smaller quantities of more-aggregate or less-detailed data.
Comparison
Each type has certain benefits, although there is disagreement about the specifics of
the benefits between providers. MOLAP is better on smaller sets of data: it is faster to
calculate the aggregations and return answers, and it needs less storage space.
ROLAP is considered more scalable. However, large volume pre-processing is
difficult to implement efficiently so it is frequently skipped. ROLAP query
performance can therefore suffer.
HOLAP is between the two in all areas, but it can pre-process quickly and scale well.
All types, though, are prone to database explosion. Database explosion is a
phenomenon in which a vast amount of storage space is used by OLAP databases
when certain, but frequently occurring, conditions are met: a high number of dimensions, pre-
calculated results and sparse multidimensional data. The difficulty in implementing
OLAP comes in forming the queries, choosing the base data and developing the
schema, as a result of which most modern OLAP products come with huge libraries of
pre-configured queries. Another problem is in the base data quality - it must be
complete and consistent.
Other types
The following acronyms are also used sometimes, although they are not as widespread
as the ones above:
OLAP vendors. Since this also used MDX as a query language, MDX became the de
facto standard in the OLAP world.
Data mining
Data Mining, also known as Knowledge Discovery in Databases (KDD), is the
process of automatically searching large volumes of data for patterns.
Data mining can be defined as “the nontrivial extraction of implicit, previously
unknown, and potentially useful information from data” and as “the science of extracting
useful information from large data sets or databases.” Although it is usually used in
relation to analysis of data, data mining, like artificial intelligence, is an umbrella term
and is used with varied meanings in a wide range of contexts. It is usually associated
with a business or other organisation’s need to identify trends. Data mining involves
the process of analyzing data to show patterns or relationships, and sorting through
large amounts of data to pick out pieces of relevant information or patterns that
occur, e.g., picking out statistical information from some data.
A simple example of data mining is its use in a retail sales department. If a store tracks
the purchases of a customer and notices that a customer buys a lot of silk shirts, the
data mining system will make a correlation between that customer and silk shirts. The
sales department will look at that information and may begin direct mail marketing of
silk shirts to that customer, or it may alternatively attempt to get the customer to buy a
wider range of products. In this case, the data mining system used by the retail store
discovered new information about the customer that was previously unknown to the
company. Another widely used (though hypothetical) example is that of a very large
North American chain of supermarkets. Through intensive analysis of the transactions
and the goods bought over a period of time, analysts found that beers and diapers were
often bought together. Though explaining this interrelation might be difficult, taking
advantage of it, on the other hand, should not be hard (e.g., placing the high-profit
diapers next to the high-profit beers). This technique is often referred to as Market
Basket Analysis.
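A toy version of such Market Basket Analysis is sketched below: it simply counts how often pairs of items appear in the same basket. Production systems use association-rule algorithms such as Apriori, but the underlying counting idea is the same; the baskets shown are invented.

from collections import Counter
from itertools import combinations

baskets = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"milk", "bread"},
    {"beer", "diapers", "milk"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequently co-purchased pair:
print(pair_counts.most_common(1))   # [(('beer', 'diapers'), 3)]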
In statistical analyses in which there is no underlying theoretical model, data mining
is often approximated via stepwise regression methods, wherein the space of 2^k
possible relationships between a single outcome variable and k potential explanatory
variables is searched intelligently. With the advent of parallel computing, it became
possible, for sufficiently small k, to examine all 2^k models. This procedure
is called all-subsets or exhaustive regression. Some of the first applications of
exhaustive regression involved the study of plant data.
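A minimal sketch of all-subsets (exhaustive) regression, assuming numpy is available: every one of the 2^k candidate models is fitted and the one with the best penalized fit is kept. With k = 3 there are only 8 subsets, which is why the approach becomes impractical as k grows; the data here are synthetic.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                        # k = 3 candidate predictors
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.1, size=100)

n, k = X.shape
best = (np.inf, None)
for r in range(k + 1):                               # all 2**3 = 8 subsets, including the empty one
    for subset in combinations(range(k), r):
        design = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        rss = float(np.sum((y - design @ coef) ** 2))
        aic = n * np.log(rss / n) + 2 * design.shape[1]   # penalize extra predictors
        if aic < best[0]:
            best = (aic, subset)

print("best subset of predictors:", best[1])         # contains the true predictors 0 and 2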
Data dredging
Used in the technical context of data warehousing and analysis, the term
“data mining” is neutral. However, it sometimes has a more pejorative usage that
implies imposing patterns (and particularly causal relationships) on data where none
exist. This imposition of irrelevant, misleading or trivial attribute correlation is more
properly criticized as “data dredging” in statistical literature. Another term for this
misuse of statistics is data fishing.
Used in this latter sense, data dredging implies scanning the data for any relationships,
and then, when one is found, coming up with an interesting explanation. (This is also
referred to as “overfitting the model”.) The problem is that large data sets invariably
happen to have some exciting relationships peculiar to that data. Therefore, any
conclusions reached are likely to be highly suspect. In spite of this, some exploratory
data work is always required in any applied statistical analysis to get a feel for the
data, so sometimes the line between good statistical practice and data dredging is less
than clear.
One common approach to evaluating the fitness of a model generated via data mining
techniques is called cross validation. Cross validation is a technique that produces an
estimate of generalization error based on resampling. In simple terms, the general idea
behind cross validation is that dividing the data into two or more separate data subsets
allows one subset to be used to evaluate the generalizability of the model learned
from the other data subset(s). A data subset used to build a model is called a training
set; the evaluation data subset is called the test set. Common cross validation
techniques include the holdout method, k-fold cross validation, and the leave-one-out
method.
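A minimal sketch of k-fold cross validation in plain Python, assuming a generic pair of fit and error functions; the estimate of generalization error is the average test error across the folds. The tiny data set and the mean-only "model" are illustrative only.

def k_fold_indices(n, k):
    """Split range(n) into k roughly equal, disjoint test folds."""
    fold_size, remainder = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        stop = start + fold_size + (1 if i < remainder else 0)
        folds.append(list(range(start, stop)))
        start = stop
    return folds

def cross_validate(data, k, fit, error):
    """Average test-set error across k folds (the generalization estimate)."""
    scores = []
    for test_idx in k_fold_indices(len(data), k):
        test_set = set(test_idx)
        test = [data[i] for i in test_idx]
        train = [d for i, d in enumerate(data) if i not in test_set]
        model = fit(train)                  # train on k-1 folds
        scores.append(error(model, test))   # evaluate on the held-out fold
    return sum(scores) / k

# Example with a trivial "model" that predicts the training mean:
data = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9), (5, 5.1), (6, 6.0)]
fit = lambda train: sum(y for _, y in train) / len(train)
error = lambda m, test: sum((y - m) ** 2 for _, y in test) / len(test)
print(cross_validate(data, 3, fit, error))   # average squared error across the 3 folds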
Another pitfall of using data mining is that it may lead to discovering correlations that
may not exist. “There have always been a considerable number of people who busy
themselves examining the last thousand numbers which have appeared on a roulette
wheel, in search of some repeating pattern. Sadly enough, they have usually found it.”
However, when properly done, determining correlations in investment analysis has
proven to be very profitable for statistical arbitrage operations (such as pairs trading
strategies), and furthermore correlation analysis has been shown to be very useful in risk
management. Indeed, finding correlations in the financial markets, when done
properly, is not the same as finding false patterns in roulette wheels.
Most data mining efforts are focused on developing highly detailed models of some
large data set. Other researchers have described an alternate method that involves
finding the minimal differences between elements in a data set, with the goal of
developing simpler models that represent relevant data.
Business Performance Management (BPM)
BPM involves consolidation of data from various sources, querying, and analysis of
the data, and putting the results into practice.
BPM enhances processes by creating better feedback loops. Continuous and real-time
reviews help to identify and eliminate problems before they grow. BPM's forecasting
abilities help the company take corrective action in time to meet earnings projections.
Forecasting is characterized by a high degree of predictability, which is put to good
use to answer what-if scenarios. BPM is useful in risk analysis and predicting
outcomes of merger and acquisition scenarios and coming up with a plan to overcome
potential problems.
BPM provides key performance indicators (KPI) that help companies monitor
efficiency of projects and employees against operational targets.
Some of the areas in which top management could gain knowledge from BPM
analysis are:
1) Customer-related numbers: new customers acquired and the status of existing
customers.
2) Attrition of customers (including breakup by reason for attrition)
3) Turnover generated by segments of the customers - these could be demographic
filters.
4) Outstanding balances held by segments of customers and terms of payment -
these could be demographic filters.
5) Collection of bad debts within customer relationships.
6) Demographic analysis of individuals (potential customers) applying to become
customers, and the levels of approval, rejections and pending numbers.
7) Delinquency analysis of customers behind on payments.
This is more an inclusive list than an exclusive one. The above more or less describes
what a bank would do, but could also refer to a telephone company or similar service
sector company.
What is important?
BPM integrates the company’s processes with CRM or ERP. Companies become able
to gauge customer satisfaction, control customer trends and influence shareholder
value.
The reporting and analysis market is mature. Companies have a wide variety of
technology options, from a plethora of BI vendors to platform and application
vendors. The participation by so many vendors reflects two issues:
Flexible Sharing: Once a report is created, you need to publish it on the web, portals,
printers, email, and applications. In enterprise reporting, the vendor needs to
provide a BI platform for secure, highly scalable information delivery to handle large
numbers of end users around the world. For embedded reporting, the vendor needs to provide
open application integration and flexible deployment. The products are required to
integrate tightly with existing infrastructure to meet even the most demanding
enterprise and embedded reporting requirements. Reports may be required to be
integrated into Java, .NET, and COM applications and deployed on Windows, UNIX,
and Linux.
Reporting — the Way Users Work: For end users, reports need to include built-in
interactivity, creating a clean and efficient process where one report will satisfy the
needs of many different individuals. Users need to not only view reports in web
portals, but they also need to explore the information by moving easily from static
consumption to insightful interaction. What’s more, users may require embedding and
securely sharing live reports or parts of reports inside Microsoft Office Word,
PowerPoint, and Excel documents.
Proven Technology: The vendor needs to adopt proven technology which has evolved
over time to become the de facto accepted standard for reporting. It should have
compatibility with leading enterprise application software like SAP, IBM, Microsoft,
Oracle/PeopleSoft, Borland, and BEA.
Types Of Reporting and Analysis Solutions: There are three types of solutions
available as shown below. Some of the vendors who supply solutions have also been
indicated.
However, if the user’s job is to make decisions, this “delivery” model may only be
adequate in certain circumstances and be significantly inadequate in others. The most
popular feature in any reporting system is the “Export to Excel” button simply because
the decision maker’s job with the data is not done when the information is delivered -
the job has just begun.
To improve the productivity of the report design process, a
centrally managed location is required so that report designers can access, update,
reuse and share report objects. To extend this concept, report objects can also be
shared and reused among multiple report designers for even greater efficiency.
The Repository is a central database that contains common report objects that can be
shared and reused. The types of report objects that can be shared via the repository
are:
The Repository is independent of all other databases connected to any report. It is a
standalone database that manages logic, formatting, and objects. But most importantly,
by its very nature as a separate library (or database), it is referenced by the report
when the report is opened and, as a result, the repository objects in the report can be
updated automatically for the report designer, so that the added overhead of copying
new logic and objects is eliminated.
To add an object to the Repository, it's as simple as dragging and dropping the object
into a folder in the Repository Explorer. The Repository Explorer represents the
repository database as a tree structure made up of folders and objects. The structure of
the repository folders is up to the person working with the repository. Within the
Repository Explorer, the report designer has to have access to the repository objects
that can be placed on the report design surface. Text objects and image objects are
visible from the Repository Explorer. To use Repository Objects in your reports,
simply drag-and-drop these objects on to the report as required.
SQL commands and their properties are visible in the Repository Explorer. You do
not drag-and-drop these objects on the report as they are used in the Database Expert.
You can use SQL Commands like database tables when you're designing your report.
You can save your SQL statements in the repository for later use with the SQL
Command in Repository feature.
Trends
Quality reports are critical. It is not unusual to find that executives arrive
at meetings armed with spreadsheets, only to find that each has different values for the
same metrics. The rest of the meeting is then spent arguing over whose numbers are
correct, rather than in actual decision-making based on consistent numbers. This
malaise is largely an offshoot of data fragmentation, the fracture of a single version of
operational truth into multiple data sources, often not correlated with each other. The
most significant part of providing quality reports is the choice and architecture of your
reporting solution.
Quality aside, another important facet of reporting is delivery. Users want reports to be
personalized, organised, timely, easily accessible and in their preferred format. For
example, in today’s business environment it is desirable to be able to e-mail reports to
colleagues, create spreadsheets from report results and access reports securely over the
internet.
One aspect of reporting that is frequently overlooked is its integration with other IT
infrastructure elements. We have seen some customer departments so completely
focused on “solving the reporting problem” (evaluating checklists of features from
various vendors and reading analyst reports on each vendor) that they fail to coordinate
this effort with the bigger company-wide picture of how that reporting piece fits into
other infrastructure (such as the central security repository or caching strategy). These
customers then find themselves in the position of having to stitch together their data
warehouse, reporting, security and portal solutions and own the problem of identifying
the correct vendor to call.
3.10 SUMMARY
In this unit we have discussed the most recent and talked-about topics of knowledge
management, artificial intelligence and business intelligence. These topics have
been discussed in detail to prepare students for professional practice, and a
list of the vendors who offer commercial packages has been given so that you
can gather more detailed information about these packages from the vendors'
websites, which will further improve your knowledge of this software.
Artificial intelligence and business intelligence are still developing areas and
are becoming more complex, particularly the area of neural networks, which has many
options; therefore, the discussion has been confined more to uses than to
development aspects, as those do not fit in this course.
1) True or False
(a) True, (b) True, (c) False, (d) True, (e) False,
(f) False, (g) True.
2) Solutions/Answers
(a) The factors which make knowledge management implementation difficult
in an organisation are:
1) True or False
(a) False, (b) True, (c) False, (d) True, (e) True,
(f) True, (g) True, (h) False.
2) Solutions/Answers
(a) For Business Intelligence programme implementation, the questions that
must be asked / answered to ensure that BI goals are achieved