Unit 4
We can find the probability of an uncertain event using the formula: P(A) = (number of favourable outcomes) / (total number of possible outcomes), where 0 ≤ P(A) ≤ 1.
Sample space: The set of all possible outcomes of an experiment is called the sample space.
Random variables: Random variables are used to represent events and objects in the real world as quantities whose values depend on chance.
Conditional probability: The probability of an event A given that another event B has already occurred, written P(A|B) = P(A ∧ B) / P(B).
Example: In a class, 70% of the students like English and 40% of the students like both English and Mathematics. What percentage of students who like English also like Mathematics?
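The example above can be worked out directly from the conditional probability formula; a minimal sketch using the values from the text:

```python
# P(E)       = 0.70 -> probability a student likes English
# P(E and M) = 0.40 -> probability a student likes English and Mathematics
p_english = 0.70
p_english_and_math = 0.40

# Conditional probability: P(M | E) = P(E and M) / P(E)
p_math_given_english = p_english_and_math / p_english

print(f"P(Math | English) = {p_math_given_english:.2%}")  # about 57.14%
```

So roughly 57% of the students who like English also like Mathematics.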
________________________________________________________________
Uncertainty:
Until now, we have represented knowledge using first-order logic and propositional logic with certainty, meaning we were sure about the predicates. With this kind of representation we might write A→B, meaning that if A is true then B is true. But consider a situation where we are not sure whether A is true or not; then we cannot express this statement. This situation is called uncertainty.
Causes of uncertainty:
The following are some leading causes of uncertainty in the real world.
Experimental Errors:
In scientific research and experimentation, errors can occur at various
stages, such as data collection, measurement, and analysis. These errors
can introduce uncertainty in the results and conclusions drawn from the
experiments.
Equipment Fault:
In many AI systems, machines and sensors are used to collect data and
make decisions. However, these machines can be subject to faults,
malfunctions, or inaccuracies, leading to uncertainty in the outcomes
generated by AI systems.
Temperature Variation:
Many real-world applications of AI, such as weather prediction,
environmental monitoring, and energy management, are sensitive to
temperature variations. However, temperature measurements can be
subject to uncertainty due to factors such as sensor accuracy, calibration
errors, and environmental fluctuations.
Climate Change:
Climate change is a global phenomenon that introduces uncertainty in
various aspects of our lives. For example, predicting the impacts of
climate change on agriculture, water resources, and infrastructure requires
dealing with uncertain data and models.
Probabilistic reasoning:
In the real world there are many scenarios where the certainty of something is not confirmed, such as "It will rain today," "the behavior of someone in a given situation," or "the outcome of a match between two teams or two players." These are probable sentences: we can assume they may happen, but we are not sure about them, so here we use probabilistic reasoning.
In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:
o Bayes' rule
o Bayesian Statistics
Bayes' Rule:
Bayes' rule is a fundamental theorem in probability theory that allows
updating probabilities based on new evidence. It provides a principled
way to combine prior knowledge with new data to update the
probabilities of different outcomes. Bayes' rule has been widely used in
AI for classification, prediction, and decision-making tasks where
uncertainty needs to be addressed.
P(A|B) = P(B|A) × P(A) / P(B)
Where:
o P(A|B) is the posterior probability of A given the evidence B,
o P(B|A) is the likelihood of the evidence B given A,
o P(A) is the prior probability of A, and
o P(B) is the marginal probability of the evidence B.
Bayesian Statistics:
Bayesian statistics is a branch of statistics that uses probabilistic reasoning to
analyze and interpret data. It provides a framework for making statistical
inferences and estimating probabilities based on data and prior knowledge.
Bayesian statistics has been applied in various fields, such as medical
research, environmental modeling, and social sciences, to deal with uncertainty
and make informed decisions.
Example:
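A hypothetical worked example of Bayes' rule (the numbers below are illustrative, not from the text): suppose a disease affects 1% of a population, a test detects it 90% of the time, and the test gives a false positive 5% of the time. What is the probability a person who tests positive actually has the disease?

```python
p_disease = 0.01            # prior: P(D)
p_pos_given_disease = 0.90  # likelihood: P(+ | D)
p_pos_given_healthy = 0.05  # false-positive rate: P(+ | not D)

# Total probability of a positive test: P(+)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule: P(D | +) = P(+ | D) * P(D) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # about 0.154
```

Even with a fairly accurate test, the posterior is only about 15%, because the prior probability of the disease is so low.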
________________________________________________________________
In its most general form, a decision network represents information about the
agent’s current state, its possible actions, the state that will result from the
agent’s action, and the utility of that state. It therefore provides a substrate for
implementing utility-based agents of the type first introduced in Section 2.4.5.
Figure 16.6 shows a decision network for the airport siting problem. It
illustrates the three types of nodes used:
Chance nodes (ovals) represent random variables, just as they do in Bayesian
networks. The agent could be uncertain about the construction cost, the level of air traffic, and the potential for litigation, as well as the Deaths, Noise, and total Cost variables, each of which also depends on the site chosen. Each chance node has
associated with it a conditional distribution that is indexed by the state of the
parent nodes. In decision networks, the parent nodes can include decision nodes
as well as chance nodes. Note that each of the current-state chance nodes could
be part of a large Bayesian network for assessing construction costs, air traffic
levels, or litigation potentials.
Decision nodes (rectangles) represent points where the decision maker has a
choice of
actions. In this case, the AirportSite action can take on a different value for each
site under consideration. The choice influences the cost, safety, and noise that
will result. In this chapter, we assume that we are dealing with a single decision
node. Chapter 17 deals with cases in which more than one decision must be
made.
Utility nodes (diamonds) represent the agent’s utility function. The utility node
has as parents all variables describing the outcome that directly affect utility.
Associated with the utility node is a description of the agent’s utility as a
function of the parent attributes. The description could be just a tabulation of the
function, or it might be a parameterized additive or linear function of the
attribute values.
A simplified form is also used in many cases. The notation remains identical,
but the chance nodes describing the outcome state are omitted. Instead, the
utility node is connected directly to the current-state nodes and the decision
node. In this case, rather than representing a utility function on outcome states,
the utility node represents the expected utility associated with each action, as
defined in Equation (16.1) on page 611; that is, the node is associated with an
action-utility function (also known as a Q-function in reinforcement learning, as
described in Chapter 21). Figure 16.7 shows the action-utility representation of
the airport siting problem.
Notice that, because the Noise, Deaths, and Cost chance nodes in Figure 16.6
refer to future states, they can never have their values set as evidence variables.
Thus, the simplified version that omits these nodes can be used whenever the
more general form can be used. Although the simplified form contains fewer
nodes, the omission of an explicit description of the outcome of the siting
decision means that it is less flexible with respect to changes in circumstances.
For example, in Figure 16.6, a change in aircraft noise levels can be reflected by
a change in the conditional probability table associated with the Noise node,
whereas a change in the weight accorded to noise pollution in the utility
function can be reflected by
a change in the utility table. In the action-utility diagram, Figure 16.7, on the
other hand, all such changes have to be reflected by changes to the action-utility
table. Essentially, the action-utility formulation is a compiled version of the
original formulation.
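The evaluation a decision network performs can be sketched in a few lines: for each value of the decision node, weight the utility of each outcome state by its probability given that action, then choose the action with the highest expected utility. The sites, outcomes, and numbers below are hypothetical stand-ins, not values from the airport-siting figures:

```python
# P(outcome | site) for a single chance variable (e.g. resulting noise level)
outcome_probs = {
    "SiteA": {"low_noise": 0.7, "high_noise": 0.3},
    "SiteB": {"low_noise": 0.4, "high_noise": 0.6},
}
# Utility of each outcome state (illustrative values)
utility = {"low_noise": 100, "high_noise": 20}

def expected_utility(site):
    """EU(site) = sum over outcomes of P(outcome | site) * U(outcome)."""
    return sum(p * utility[o] for o, p in outcome_probs[site].items())

best_site = max(outcome_probs, key=expected_utility)
print(best_site, expected_utility(best_site))  # SiteA 76.0
```

The action-utility (simplified) form of the network corresponds to precomputing `expected_utility` for each action and storing the results directly, which is why it is less flexible when the underlying probabilities or utilities change.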
________________________________________________________________
EXPERT SYSTEM
In artificial intelligence (AI), an expert system is a computer system
emulating the decision-making ability of a human expert.
Expert systems are designed to solve complex problems by reasoning
through bodies of knowledge, represented mainly as if–then rules rather
than through conventional procedural code.
The knowledge base. This is where the information the expert system
draws upon is stored. Human experts provide facts about the expert
system's particular domain or subject area, and these facts are
organized in the knowledge base. The knowledge base often contains
a knowledge acquisition module that enables the system to gather
knowledge from external sources and store it in the knowledge base.
Or
The knowledge base represents facts and rules. It consists of knowledge
in a particular domain as well as rules to solve a problem, procedures
and intrinsic data relevant to the domain.
The user interface. This is the part of the expert system that end users
interact with to get an answer to their question or problem.
Or
This module makes it possible for a non-expert user to interact with the
expert system and find a solution to the problem.
The Inference Engine generally uses two strategies for acquiring knowledge from
the Knowledge Base, namely –
Forward Chaining
Backward Chaining
Forward Chaining –
Forward chaining is a strategy used by the expert system to answer the
question "What will happen next?" It is mostly used for tasks such as
deriving a conclusion, result, or effect. Example – predicting share
market movement.
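Forward chaining can be sketched in a few lines: start from the known facts and repeatedly fire any rule whose premises are all satisfied, adding its conclusion, until no new facts can be derived. The rules and facts below are hypothetical, chosen only to illustrate the mechanism:

```python
# Each rule is (set of premises, conclusion)
rules = [
    ({"rain", "outside"}, "wet"),  # if rain and outside then wet
    ({"wet"}, "catch_cold_risk"),  # if wet then risk of catching a cold
]
facts = {"rain", "outside"}

# Fire rules until a full pass adds no new fact
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains "wet" and "catch_cold_risk"
```

Note the data-driven direction: the system moves from facts toward conclusions, which is why forward chaining suits prediction tasks.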
Backward Chaining –
Backward chaining is a strategy used by the expert system to answer the
question "Why did this happen?" It is mostly used to find the root cause
or reason behind something, given what has already happened. Example –
diagnosis of stomach pain, blood cancer or dengue, etc.
Backward chaining reads and processes a set of facts to reach a
logical conclusion about why something happened. An example
of backward chaining would be examining a set of symptoms to reach
a medical diagnosis.
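A minimal backward-chaining sketch, mirroring the diagnosis example: start from the goal (the suspected diagnosis) and work backwards, asking which rule could conclude it and whether that rule's premises are known facts or provable sub-goals. The rules and symptoms here are hypothetical illustrations, not real medical criteria:

```python
# Each rule is (set of premises, conclusion)
rules = [
    ({"fever", "rash"}, "dengue_suspected"),
    ({"mosquito_bite"}, "rash"),
]
facts = {"fever", "mosquito_bite"}

def prove(goal):
    """True if goal is a known fact or derivable via some rule's premises."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(prove(p) for p in premises)
               for premises, conclusion in rules)

print(prove("dengue_suspected"))  # True: "rash" follows from "mosquito_bite"
```

Here the direction is goal-driven: the system only explores rules relevant to the question being asked, which is why backward chaining suits diagnostic tasks.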