unit 4 AI

Probabilistic reasoning is crucial in AI for managing uncertainty arising from unreliable data, experimental errors, and environmental variability. It employs concepts like prior and posterior probabilities, conditional probability, and full joint distribution to make informed decisions in unpredictable scenarios. Decision networks further enhance this by integrating chance, decision, and utility nodes to analyze complex decision-making situations.

Probabilistic reasoning is essential in Artificial Intelligence (AI) for handling situations where knowledge is incomplete, uncertain, or based on unreliable data.

Uncertainty

In logic-based systems, knowledge is usually expressed with certainty. For example, using propositional or
first-order logic, a statement like A→B means "if A is true, then B is true." However, in many real-world
scenarios, we can't be sure whether A is true. This lack of certainty is known as uncertainty.

Causes of Uncertainty:

 Unreliable Sources: Information may come from sources that are incomplete or incorrect.
 Experimental Errors: Data collected from experiments might include mistakes.
 Equipment Faults: Instruments used to gather data might malfunction.
 Environmental Variability: Natural factors like temperature, weather, and climate can introduce
variability that causes uncertainty.

Why Use Probabilistic Reasoning in AI?

Probabilistic reasoning combines probability theory with logical reasoning to handle uncertain knowledge.
This method is crucial in cases like:

 Unpredictable Outcomes: Situations where the future state is unknown, such as weather predictions.
 Large Number of Variables: When there are too many possibilities to handle using simple logical
approaches.
 Unknown Errors: When errors or uncertainty arise due to factors not considered in the model.

Probability

Probability is a numerical measure that represents the likelihood of an event happening. The probability value
always falls between 0 and 1, where:

 P(A)= 1: The event A is certain to occur.


 P(A)= 0: The event A is certain not to occur.

The relationship between an event happening and not happening is given by:

P(A)+P(¬A)=1

Where P(¬A) is the probability of event A not occurring.
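The complement rule can be checked with a short sketch; the value of P(A) below is an illustrative assumption, and exact fractions are used to avoid floating-point noise.

```python
from fractions import Fraction

# Complement rule: P(A) + P(¬A) = 1.
# The value of P(A) here is an illustrative assumption.
p_A = Fraction(3, 10)
p_not_A = 1 - p_A   # probability of event A not occurring

print(p_not_A)            # 7/10
assert p_A + p_not_A == 1
```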

Key Components of Probabilistic Reasoning

1. Event

An event is an outcome or a specific occurrence that we are trying to measure. For example, in the statement "It
will rain today," the event is the occurrence of rain.

2. Sample Space
The sample space is the set of all possible outcomes of an experiment or a scenario. For example, in a coin toss,
the sample space is {heads, tails}.

3. Random Variables

A random variable is a variable that takes different values based on the outcomes of a random process. It
represents uncertain events in the real world. Random variables can be:

 Discrete: Take a countable number of values (e.g., outcomes of a dice roll).


 Continuous: Take any value within a range (e.g., temperature).

4. Prior Probability

Prior probability refers to the probability of an event before considering new evidence. It reflects the initial
belief or knowledge about the event.

For example, if we know that 30% of the population has a certain disease, the prior probability of a randomly
selected individual having the disease is 0.30.

5. Posterior Probability

Posterior probability is the updated probability of an event after taking into account new evidence. This is
where Bayes' Theorem comes into play (we will explore this more when discussing Bayesian reasoning). The
posterior probability combines the prior probability and the new data to give a refined estimate.
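As a sketch of this update: the prior of 0.30 is taken from the disease example above, while the test's sensitivity and false-positive rate are illustrative assumptions.

```python
# Posterior probability via Bayes' theorem:
#   P(D | +) = P(+ | D) * P(D) / P(+)
p_disease = 0.30          # prior P(D), from the example above
p_pos_given_d = 0.90      # assumed test sensitivity P(+ | D)
p_pos_given_not_d = 0.10  # assumed false-positive rate P(+ | ¬D)

# Marginal probability of a positive test (total probability rule)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)

posterior = p_pos_given_d * p_disease / p_pos
print(round(posterior, 4))   # 0.7941
```

Note how the evidence (a positive test) raises the probability from the prior 0.30 to roughly 0.79.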

6. Conditional Probability

Conditional probability is the probability of one event occurring given that another event has already
occurred. It is expressed as P(A∣B), meaning "the probability of event A occurring, given that event B has
already occurred." The formula is:

P(A|B) = P(A ∩ B) / P(B), where P(B) > 0

Here P(A ∩ B) is the joint probability of both events occurring together.

Example

Suppose 70% of students like English and 40% of students like both English and mathematics. Then the probability that a student likes mathematics given that they like English is P(Math | English) = 0.40 / 0.70 ≈ 0.57.

In summary:

 Conditional Probability helps us find the probability of one event given that another event has already
occurred (57% of English-liking students also like mathematics).
 Joint Probability tells us the probability of two events happening together (40% of all students like
both subjects).
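The figures in the summary can be checked directly; the marginal P(English) = 0.70 is inferred from them (0.40 / 0.57 ≈ 0.70) rather than stated explicitly.

```python
# Conditional probability: P(Math | English) = P(Math and English) / P(English)
p_both = 0.40       # joint: likes both subjects
p_english = 0.70    # marginal: likes English (inferred from the figures above)

p_math_given_english = p_both / p_english
print(round(p_math_given_english, 2))   # 0.57
```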
Prior Probability P(A)

 Definition: This is the probability of an event happening before we have any additional information or evidence
about other events.
 Context: Think of this as what you initially believe about an event based on past experiences or general
knowledge, without considering any other factors.
 Example:
o Suppose you're trying to figure out how likely it is that a person enjoys jogging. Based on what you know
(perhaps from experience or general surveys), you estimate that around 40% of people in your
neighborhood enjoy jogging. This estimate—40%—is your prior probability for the event "person enjoys
jogging," denoted as P(A).
o You make this estimate without taking into account any other factors, like whether they own running
shoes or whether they exercise regularly.

Marginal Probability P(B)

Definition: This is the total probability of an event occurring, taking into account all the
possible reasons that could lead to that event.

 Context: It reflects the overall chance of something happening without focusing on any specific condition or
cause.
 Example:
o Now, think about the event “person drinks coffee regularly.” You estimate that 60% of people drink
coffee regularly. This is your marginal probability for the event “drinks coffee,” denoted as P(B).
o This 60% includes everyone who drinks coffee, regardless of whether they jog, have a busy lifestyle, or
work long hours. It’s just the overall likelihood of drinking coffee, considering all possible factors
combined.

Putting It Together in Daily Life

Let's say you're interested in how these two events—liking jogging (A) and drinking coffee (B)—are related.
Your prior probability of someone enjoying jogging, P(A), is based on what you know about jogging itself,
without any knowledge about their coffee habits. Similarly, the marginal probability of drinking coffee,
P(B), is just the overall chance of people drinking coffee, without considering jogging.

These probabilities are independent of each other initially, and they help set the foundation for further analysis.
If you later learn some connection between these activities, you can update your probabilities accordingly.
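Under the initial independence assumption described above, the joint probability of the two events is simply the product of the section's two figures:

```python
# If jogging (A) and coffee (B) are independent:
#   P(A and B) = P(A) * P(B)
p_jog = 0.40      # prior P(A) from the jogging example
p_coffee = 0.60   # marginal P(B) from the coffee example

p_both = p_jog * p_coffee
print(round(p_both, 2))   # 0.24
```

If you later learn the events are correlated, this product no longer holds and the joint probability must be estimated from conditional probabilities instead.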
Inference Using Full Joint Distribution in AI

Concept

Inference using a full joint distribution is a probabilistic reasoning method in AI. A joint distribution
represents the probabilities of all possible combinations of values for different random variables in a system. In
this method, we use these probabilities to infer the likelihood of specific outcomes or events.

Breaking it Down:

 Full Joint Distribution: A table or matrix that lists every possible combination of random variables and
their associated probabilities. This provides a complete view of how variables interact.
 Inference: Inference is the process of calculating the probability of an event or variable based on known
information. Using the full joint distribution, we can answer questions like, "What is the probability of
rain given that the sprinkler is on?"

Why Use It?

AI systems often deal with uncertainty. This helps in decision-making, diagnosis, and prediction.

Example (Simplified):

Consider a scenario where we want to predict if the grass is wet based on two factors:

1. Rain (R) – Whether it’s raining (True/False)


2. Sprinkler (S) – Whether the sprinkler is on (True/False)

The joint probability distribution for these two factors looks like this:

Rain (R)   Sprinkler (S)   Probability
True       True            0.10
True       False           0.20
False      True            0.25
False      False           0.45

 Each row shows a different combination of Rain and Sprinkler, and the numbers are the probabilities
of those combinations.
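A query like "What is the probability of rain given that the sprinkler is on?" can be answered from this table by marginalizing and then conditioning; a minimal sketch:

```python
# Full joint distribution over Rain (R) and Sprinkler (S), from the table above.
joint = {
    (True,  True):  0.10,
    (True,  False): 0.20,
    (False, True):  0.25,
    (False, False): 0.45,
}

# Marginalization: P(S = True) = sum over all rows where Sprinkler is True.
p_sprinkler = sum(p for (r, s), p in joint.items() if s)

# Conditioning: P(R = True | S = True) = P(R=True, S=True) / P(S=True)
p_rain_given_sprinkler = joint[(True, True)] / p_sprinkler

print(round(p_sprinkler, 2))             # 0.35
print(round(p_rain_given_sprinkler, 3))  # 0.286
```

Any query over these variables reduces to summing the relevant table entries, which is why the full joint distribution gives exact answers.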
Benefits of Full Joint Distribution:

1. Complete Picture: It provides the most detailed view of all possible outcomes.
2. Accurate Inferences: You can make exact inferences about specific events by adding relevant
probabilities.
3. Handles Uncertainty: This approach allows AI systems to deal with uncertain environments where
multiple factors interact.

Challenges:

Storage and Computation: Handling a large number of variables requires significant memory and processing
power. A full joint table over n binary variables has 2^n entries, so it grows exponentially with the
number of variables.

How Inference Helps AI:

 AI systems often face situations where they need to make decisions under uncertainty. By using the full
joint distribution, the AI can infer the likelihood of certain events based on observed factors, allowing it
to make informed decisions.
 For example, in medical diagnosis, AI can infer the probability of a disease given symptoms, helping
doctors make better treatment decisions.

Summary:

Inference using full joint distribution is a powerful probabilistic reasoning tool in AI. It provides an exact way
to calculate the probability of events based on all possible outcomes. Though it offers complete information, it
can become computationally intensive as the number of variables grows. This approach is key in fields like
decision-making, diagnosis, and prediction in AI.
Bayes Rule
Bayesian Network
Construction
Exact and Approximate inference
Temporal Model
Hidden Markov Model
MDP-Formulation
Utility Theory

Extra content
Multi Attribute utility functions

What is a Decision Network?


Decision networks are graphical models used to represent and solve decision-making problems. They
extend Bayesian networks by incorporating decision and utility nodes, allowing for a comprehensive analysis
of decision scenarios.
Components of Decision Networks
A decision network consists of three types of nodes:
 Chance Nodes: Represent random variables and their possible values, capturing the uncertainty in the
decision-making process.

 Decision Nodes: Represent the choices available to the decision-maker.


 Utility Nodes: Represent the utility or value of the outcomes, helping to evaluate and compare
different decision paths.

Example of a Decision Network


Consider a simple medical diagnosis scenario where a doctor needs to decide whether to order a test for a
patient based on the likelihood of a disease and the cost of the test. The decision network for this scenario
might include:
 Chance Nodes: Disease presence (Yes/No), Test result (Positive/Negative)
 Decision Node: Order test (Yes/No)
 Utility Node: Overall patient health outcome and cost
The doctor can use the decision network to evaluate the expected utility of ordering the test versus not
ordering it, taking into account the probabilities of disease presence and test results, and the utility values
associated with different outcomes.
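The doctor's comparison can be sketched as an expected-utility calculation. All probabilities, utility values, and the test cost below are illustrative assumptions, and the test is assumed perfect (disease is always caught if the test is ordered) to keep the sketch short.

```python
# Expected-utility sketch for the medical-test decision network above.
p_disease = 0.10            # chance node: assumed P(disease present)

# Utility node: assumed outcome values (higher is better).
utility = {
    ("treated", True): 80,      # disease present, caught and treated
    ("untreated", True): 20,    # disease present, missed
    ("healthy", False): 100,    # no disease, no treatment needed
}
test_cost = 5                   # assumed cost of ordering the test

# Decision "order test": with a perfect test, disease is always caught.
eu_test = (p_disease * utility[("treated", True)]
           + (1 - p_disease) * utility[("healthy", False)] - test_cost)

# Decision "no test": disease, if present, goes untreated.
eu_no_test = (p_disease * utility[("untreated", True)]
              + (1 - p_disease) * utility[("healthy", False)])

print(round(eu_test, 1), round(eu_no_test, 1))
best = "order test" if eu_test > eu_no_test else "no test"
print(best)
```

The decision-network algorithm picks the decision-node value with the highest expected utility; here, under these assumed numbers, ordering the test wins narrowly.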
Value Iteration
Policy iteration
POMDPs
