
UNIT 4

Probabilistic reasoning uses probability and related terms, so before
understanding probabilistic reasoning, let's understand some common terms:

Probability: Probability can be defined as the chance that an uncertain event
will occur. It is the numerical measure of the likelihood that an event will
occur. The value of a probability always lies between 0 and 1, the two
extremes representing impossibility and certainty.

1. 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
2. P(A) = 0 indicates total uncertainty in an event A.
3. P(A) = 1 indicates total certainty in an event A.

We can find the probability of an uncertain event by using the formulas below:

o P(¬A) = probability of event A not happening.
o P(¬A) + P(A) = 1.
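As a quick numerical illustration of the complement rule (the 0.3 value below is an assumed probability, not from the text):

```python
# Complement rule: P(¬A) = 1 - P(A).
p_rain = 0.3                  # assumed probability that it rains today
p_no_rain = 1 - p_rain        # probability that it does not rain
print(round(p_no_rain, 2))    # 0.7
```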

Event: Each possible outcome of a variable is called an event.

Sample space: The collection of all possible events is called sample space.

Random variables: Random variables are used to represent the events and
objects in the real world.

Prior probability: The prior probability of an event is the probability
computed before observing new information.

Posterior Probability: The probability that is calculated after all evidence or
information has been taken into account. It is a combination of prior
probability and new information.

Conditional probability:

Conditional probability is the probability of an event occurring given that
another event has already happened. Suppose we want to calculate the
probability of event A when event B has already occurred, "the probability of
A under the conditions of B". It can be written as:

P(A|B) = P(A⋀B) / P(B)

where P(A⋀B) = joint probability of A and B, and P(B) = marginal probability
of B.

If instead the probability of B given A is needed, it is given as:

P(B|A) = P(A⋀B) / P(A)

This can be explained using a Venn diagram: once event B has occurred, the
sample space is reduced to set B, and we can calculate the probability of
event A given B by dividing the probability P(A⋀B) by P(B).

Example: In a class, 70% of the students like English and 40% of the students
like both English and Mathematics. What percentage of the students who like
English also like Mathematics?

Solution: Let A be the event that a student likes Mathematics and B be the
event that a student likes English. Then

P(A|B) = P(A⋀B) / P(B) = 0.40 / 0.70 ≈ 0.57

Hence, about 57% of the students who like English also like Mathematics.
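The worked example above can be sketched in Python; the 0.70 and 0.40 figures are the percentages from the classroom example:

```python
# Conditional probability: P(A|B) = P(A ∧ B) / P(B).

def conditional_probability(p_joint, p_b):
    """Probability of A given B, from the joint P(A ∧ B) and marginal P(B)."""
    if p_b == 0:
        raise ValueError("conditioning event must have non-zero probability")
    return p_joint / p_b

p_english = 0.70           # P(student likes English)
p_eng_and_math = 0.40      # P(student likes English and Mathematics)

p_math_given_english = conditional_probability(p_eng_and_math, p_english)
print(round(p_math_given_english, 2))  # 0.57
```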

________________________________________________________________

Probabilistic reasoning in Artificial intelligence

Uncertainty:

So far, we have learned knowledge representation using first-order logic and
propositional logic with certainty, which means we were sure about the
predicates. With this knowledge representation we might write A→B, which
means if A is true then B is true. But consider a situation where we are not
sure whether A is true or not; then we cannot express this statement. This
situation is called uncertainty.

So to represent uncertain knowledge, where we are not sure about the
predicates, we need uncertain reasoning or probabilistic reasoning.

From self-driving cars to virtual personal assistants, AI technologies have
become integral to our daily routines. However, one of the key challenges AI
systems face is dealing with uncertainty. Uncertainty arises from various
factors such as unreliable sources of information, experimental errors,
equipment faults, temperature variations, and climate change, among others. To
address this challenge, probabilistic reasoning techniques have gained
significant importance in AI, allowing machines to make decisions and
predictions under uncertainty. Probabilistic reasoning is a technique used in
AI to address uncertainty by modeling and reasoning with probabilistic
information. It allows AI systems to make decisions and predictions based on
the probabilities of different outcomes, taking into account uncertain or
incomplete information. Probabilistic reasoning provides a principled approach
to handling uncertainty, allowing machines to reason about uncertain
situations in a rigorous and quantitative manner.

Causes of uncertainty:

Following are some leading causes of uncertainty to occur in the real world.

1. Information obtained from unreliable sources.
2. Experimental errors.
3. Equipment faults.
4. Temperature variation.
5. Climate change.

Information Obtained from Unreliable Sources:


AI systems rely on data to make decisions and predictions. However, data
obtained from various sources may not always be reliable. Data can be
incomplete, inconsistent, or biased, leading to uncertainty in the outcomes
generated by AI systems.

Experimental Errors:
In scientific research and experimentation, errors can occur at various
stages, such as data collection, measurement, and analysis. These errors
can introduce uncertainty in the results and conclusions drawn from the
experiments.

Equipment Fault:
In many AI systems, machines and sensors are used to collect data and
make decisions. However, these machines can be subject to faults,
malfunctions, or inaccuracies, leading to uncertainty in the outcomes
generated by AI systems.

Temperature Variation:
Many real-world applications of AI, such as weather prediction,
environmental monitoring, and energy management, are sensitive to
temperature variations. However, temperature measurements can be
subject to uncertainty due to factors such as sensor accuracy, calibration
errors, and environmental fluctuations.
Climate Change:
Climate change is a global phenomenon that introduces uncertainty in
various aspects of our lives. For example, predicting the impacts of
climate change on agriculture, water resources, and infrastructure requires
dealing with uncertain data and models.

Probabilistic reasoning:

Probabilistic reasoning is a way of knowledge representation where we apply
the concept of probability to indicate the uncertainty in knowledge. In
probabilistic reasoning, we combine probability theory with logic to handle the
uncertainty.

We use probability in probabilistic reasoning because it provides a way to
handle the uncertainty that results from someone's laziness and ignorance.

In the real world, there are many scenarios where the certainty of something is
not confirmed, such as "It will rain today," "the behavior of someone in a
given situation," or "a match between two teams or two players." These are
probable sentences for which we can assume something will happen but cannot be
sure, so here we use probabilistic reasoning.

Probabilistic reasoning is a key aspect of artificial intelligence (AI) that
allows for handling uncertainty and ambiguity in decision-making. It is a
powerful technique that enables AI systems to make informed decisions even
when faced with incomplete or noisy data. Probabilistic reasoning is widely
used in various AI applications such as machine learning, natural language
processing, robotics, computer vision, and many more.

Need of probabilistic reasoning in AI:

o When there are unpredictable outcomes.
o When specifications or possibilities of predicates become too large to
handle.
o When an unknown error occurs during an experiment.

The need for probabilistic reasoning in AI arises because uncertainty is
inherent in many real-world applications. For example, there is often
uncertainty in the symptoms, test results, and patient history in medical
diagnosis. In autonomous vehicles, there is uncertainty in the sensor
measurements, road conditions, and traffic patterns. In financial markets,
there is uncertainty in stock prices, economic indicators, and investor
behavior. Probabilistic reasoning techniques allow AI systems to deal
with these uncertainties and make informed decisions.

In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:

o Bayes' rule
o Bayesian Statistics

 Bayes' Rule:
Bayes' rule is a fundamental theorem in probability theory that allows
updating probabilities based on new evidence. It provides a principled
way to combine prior knowledge with new data to update the
probabilities of different outcomes. Bayes' rule has been widely used in
AI for classification, prediction, and decision-making tasks where
uncertainty needs to be addressed.

Mathematically, Bayes' Theorem is expressed as:

P(A|B) = (P(B|A) * P(A)) / P(B)

Where:

 P(A|B) represents the posterior probability, which is the probability of
event A occurring given that event B has occurred.
 P(B|A) represents the likelihood, which is the probability of observing
event B given that event A has occurred.
 P(A) represents the prior probability, which is the initial probability of
event A occurring before considering any new evidence.
 P(B) represents the marginal likelihood, which is the probability of
observing event B, regardless of whether event A has occurred.
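A minimal sketch of Bayes' rule in Python; the disease-screening numbers below are illustrative assumptions, not values from the text:

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B).

def bayes_rule(p_b_given_a, p_a, p_b):
    """Posterior P(A|B) from the likelihood, prior, and marginal likelihood."""
    return p_b_given_a * p_a / p_b

# Hypothetical screening test: 1% prevalence, 90% sensitivity,
# 8% false-positive rate among healthy patients.
p_disease = 0.01
p_pos_given_disease = 0.90
p_pos_given_healthy = 0.08

# Marginal P(positive) by the law of total probability.
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

posterior = bayes_rule(p_pos_given_disease, p_disease, p_pos)
print(round(posterior, 3))  # 0.102
```

Note how a positive result raises the probability of disease from 1% to only about 10%, because false positives among the many healthy patients dominate.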

Bayesian Statistics:
Bayesian statistics is a branch of statistics that uses probabilistic reasoning to
analyze and interpret data. It provides a framework for making statistical
inferences and estimating probabilities based on data and prior knowledge.
Bayesian statistics has been applied in various fields, such as medical
research, environmental modeling, and social sciences, to deal with uncertainty
and make informed decisions.
Example:

Let's consider an example of a medical diagnosis system that uses
probabilistic reasoning to handle uncertainty. The system is designed to
diagnose a specific disease based on a patient's symptoms, medical history,
and test results.

A Bayesian network (also known as a Bayes network, Bayes net, belief
network, or decision network) is a probabilistic graphical model that
represents a set of variables and their conditional dependencies via a
directed acyclic graph (DAG).
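The idea can be sketched with a two-node network; the Rain → WetGrass structure and all probabilities below are illustrative assumptions:

```python
# A minimal Bayesian network: Rain -> WetGrass, each node with a distribution.

p_rain = {True: 0.2, False: 0.8}              # prior P(Rain)
p_wet_given_rain = {True: 0.9, False: 0.1}    # CPT: P(WetGrass=true | Rain)

def joint(rain, wet):
    """Chain rule on the DAG: P(rain, wet) = P(rain) * P(wet | rain)."""
    p_wet = p_wet_given_rain[rain]
    return p_rain[rain] * (p_wet if wet else 1 - p_wet)

# Inference by enumeration: P(Rain=true | WetGrass=true).
num = joint(True, True)
den = joint(True, True) + joint(False, True)
print(round(num / den, 3))  # 0.692
```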

________________________________________________________________

16.5 DECISION NETWORKS


In this section, we look at a general mechanism for making rational decisions.
The notation is often called an influence diagram (Howard and Matheson,
1984), but we will use the more descriptive term decision network. Decision
networks combine Bayesian networks with additional node types for actions and
utilities. We use airport siting as an example.

16.5.1 Representing a decision problem with a decision network

In its most general form, a decision network represents information about the
agent’s current state, its possible actions, the state that will result from the
agent’s action, and the utility of that state. It therefore provides a substrate for
implementing utility-based agents of the type first introduced in Section 2.4.5.
Figure 16.6 shows a decision network for the airport siting problem. It
illustrates the three types of nodes used:
Chance nodes (ovals) represent random variables, just as they do in Bayesian
networks. The agent could be uncertain about the construction cost, the level of
air traffic, and the potential for litigation, as well as the Deaths, Noise, and total Cost
variables, each of which also depends on the site chosen. Each chance node has
associated with it a conditional distribution that is indexed by the state of the
parent nodes. In decision networks, the parent nodes can include decision nodes
as well as chance nodes. Note that each of the current-state chance nodes could
be part of a large Bayesian network for assessing construction costs, air traffic
levels, or litigation potentials.
Decision nodes (rectangles) represent points where the decision maker has a
choice of actions. In this case, the AirportSite action can take on a different
value for each
site under consideration. The choice influences the cost, safety, and noise that
will result. In this chapter, we assume that we are dealing with a single decision
node. Chapter 17 deals with cases in which more than one decision must be
made.

Utility nodes (diamonds) represent the agent’s utility function. The utility node
has as parents all variables describing the outcome that directly affect utility.
Associated with the utility node is a description of the agent’s utility as a
function of the parent attributes. The description could be just a tabulation of the
function, or it might be a parameterized additive or linear function of the
attribute values.
A simplified form is also used in many cases. The notation remains identical,
but the chance nodes describing the outcome state are omitted. Instead, the
utility node is connected directly to the current-state nodes and the decision
node. In this case, rather than representing a utility function on outcome states,
the utility node represents the expected utility associated with each action, as
defined in Equation (16.1) on page 611; that is, the node is associated with an
action-utility function (also known as a Q-function in reinforcement learning, as
described in Chapter 21). Figure 16.7 shows the action-utility representation of
the airport siting problem.
Notice that, because the Noise, Deaths, and Cost chance nodes in Figure 16.6
refer to future states, they can never have their values set as evidence variables.
Thus, the simplified version that omits these nodes can be used whenever the
more general form can be used. Although the simplified form contains fewer
nodes, the omission of an explicit description of the outcome of the siting
decision means that it is less flexible with respect to changes in circumstances.
For example, in Figure 16.6, a change in aircraft noise levels can be reflected by
a change in the conditional probability table associated with the Noise node,
whereas a change in the weight accorded to noise pollution in the utility
function can be reflected by a change in the utility table. In the action-utility
diagram, Figure 16.7, on the
other hand, all such changes have to be reflected by changes to the action-utility
table. Essentially, the action-utility formulation is a compiled version of the
original formulation.

16.5.2 Evaluating decision networks


Actions are selected by evaluating the decision network for each possible
setting of the decision node. Once the decision node is set, it behaves exactly
like a chance node that has been set as an evidence variable. The algorithm for
evaluating decision networks is the following:
1. Set the evidence variables for the current state.
2. For each possible value of the decision node:
(a) Set the decision node to that value.
(b) Calculate the posterior probabilities for the parent nodes of the utility node,
using a standard probabilistic inference algorithm.
(c) Calculate the resulting utility for the action.
3. Return the action with the highest utility.
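The three-step procedure can be sketched as follows; the `posterior` argument stands in for a standard probabilistic inference routine, and the site names, distributions, and utilities are hypothetical:

```python
# Evaluate a decision network: try each action, compute its expected utility,
# and return the action with the highest expected utility.

def evaluate_decision_network(actions, evidence, posterior, utility):
    best_action, best_eu = None, float("-inf")
    for action in actions:                          # step 2: each decision value
        outcome_dist = posterior(evidence, action)  # steps 2(a)-(b): inference
        eu = sum(p * utility(outcome, action)       # step 2(c): expected utility
                 for outcome, p in outcome_dist.items())
        if eu > best_eu:
            best_action, best_eu = action, eu
    return best_action                              # step 3: best action

# Toy usage with fixed outcome distributions per candidate site.
dists = {"site1": {"quiet": 0.7, "noisy": 0.3},
         "site2": {"quiet": 0.5, "noisy": 0.5}}
utilities = {"quiet": 10, "noisy": -5}
best = evaluate_decision_network(
    ["site1", "site2"], {},
    posterior=lambda ev, a: dists[a],
    utility=lambda outcome, a: utilities[outcome])
print(best)  # site1
```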

________________________________________________________________
EXPERT SYSTEM
 In artificial intelligence (AI), an expert system is a computer system
emulating the decision-making ability of a human expert.
 Expert systems are designed to solve complex problems by reasoning
through bodies of knowledge, represented mainly as if–then rules rather
than through conventional procedural code.

 An expert system is a computer program that uses artificial intelligence
(AI) technologies to simulate the judgment and behavior of a human or an
organization that has expertise and experience in a particular field.
 Expert systems are usually intended to complement, not replace, human
experts.
 The concept of expert systems was developed in the 1970s by computer
scientist Edward Feigenbaum, a computer science professor at Stanford
University and founder of Stanford's Knowledge Systems Laboratory.
 These systems can improve their performance over time as they gain
more experience, just as humans do.
 Expert systems accumulate experience and facts in a knowledge base and
integrate them with an inference or rules engine -- a set of rules for
applying the knowledge base to situations provided to the program.
 An expert system relies on having a good knowledge base. Experts add
information to the knowledge base, and nonexperts use the system to
solve complex problems that would usually require a human expert.
 The process of building and maintaining an expert system is
called knowledge engineering.
 Knowledge engineers ensure that expert systems have all the necessary
information to solve a problem. They use various knowledge
representation methodologies, such as symbolic patterns, to do this. The
system's capabilities can be enhanced by expanding the knowledge base
or creating new sets of rules.
Characteristics of an Expert System:
 Human experts are perishable, but an expert system is permanent.
 It helps to distribute the expertise of a human.
 One expert system may contain knowledge from more than one human
expert, thus making the solutions more efficient.
 It decreases the cost of consulting an expert for various domains such as
medical diagnosis.
 They use a knowledge base and inference engine.
 Expert systems can solve complex problems by deducing new facts
through existing facts of knowledge, represented mostly as if-then rules
rather than through conventional procedural code.
 Expert systems were among the first truly successful forms of artificial
intelligence (AI) software.

What are the components of an expert system?
There are three main components of an expert system:

 The knowledge base. This is where the information the expert system
draws upon is stored. Human experts provide facts about the expert
system's particular domain or subject area, which are organized in the
knowledge base. The knowledge base often contains a knowledge
acquisition module that enables the system to gather knowledge from
external sources and store it in the knowledge base.
Or
The knowledge base represents facts and rules. It consists of knowledge
in a particular domain as well as rules to solve a problem, procedures
and intrinsic data relevant to the domain.

 The inference engine. This part of the system pulls relevant
information from the knowledge base to solve a user's problem. It is
a rules-based system that maps known information from the
knowledge base to a set of rules and makes decisions based on those
inputs. Inference engines often include an explanation module that
shows users how the system came to its conclusion.
Or
The function of the inference engine is to fetch the relevant knowledge
from the knowledge base, interpret it and to find a solution relevant to
the user’s problem. The inference engine acquires the rules from its
knowledge base and applies them to the known facts to infer new facts.
Inference engines can also include explanation and debugging
abilities.

 The user interface. This is the part of the expert system that end users
interact with to get an answer to their question or problem.
Or
This module makes it possible for a non-expert user to interact with the
expert system and find a solution to the problem.

 Knowledge Acquisition and Learning Module – The function of this
component is to allow the expert system to acquire more and more
knowledge from various sources and store it in the knowledge base.
 Explanation Module –
This module helps the expert system to give the user an explanation about
how the expert system reached a particular conclusion.

The Inference Engine generally uses two strategies for acquiring knowledge
from the Knowledge Base, namely –
 Forward Chaining
 Backward Chaining
Forward Chaining –
Forward Chaining is a strategy used by the Expert System to answer the
question "What will happen next?". It is mostly used for tasks like deriving a
conclusion, result, or effect. Example: predicting share market movement.

Forward Chaining

1. Forward chaining reads and processes a set of facts to make a logical
prediction about what will happen next. An example of forward
chaining would be making predictions about the movement of the stock
market.
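A minimal forward-chaining sketch in Python, using a toy rule base about share-market movement (all rules and facts are illustrative):

```python
# Forward chaining: repeatedly fire if-then rules on known facts
# until no new facts can be derived.

rules = [
    ({"quarterly profits up", "sector rally"}, "stock likely to rise"),
    ({"stock likely to rise", "low volatility"}, "buy signal"),
]

def forward_chain(facts, rules):
    """Return all facts derivable from the initial facts via the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)   # all premises hold: fire the rule
                changed = True
    return derived

facts = {"quarterly profits up", "sector rally", "low volatility"}
print("buy signal" in forward_chain(facts, rules))  # True
```

Note that the second rule can only fire after the first has added its conclusion, which is why the loop repeats until nothing changes.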

Backward Chaining –
Backward Chaining is a strategy used by the Expert System to answer the
question "Why did this happen?". It is mostly used to find the root cause or
reason behind something, given what has already happened. Example: diagnosis
of stomach pain, blood cancer, or dengue.

Backward Chaining
1. Backward chaining reads and processes a set of facts to reach a
logical conclusion about why something happened. An example
of backward chaining would be examining a set of symptoms to reach
a medical diagnosis.
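A minimal backward-chaining sketch in Python, using a toy diagnostic rule base (all rules and symptoms are illustrative):

```python
# Backward chaining: start from a goal (a diagnosis) and work backwards,
# checking whether known facts support the rules leading to it.

rules = {
    "dengue": [{"fever", "joint pain", "rash"}],
    "fever": [{"high temperature"}],
}

def backward_chain(goal, facts, rules):
    """True if the goal is a known fact or all premises of some rule hold."""
    if goal in facts:
        return True
    for premises in rules.get(goal, []):
        if all(backward_chain(p, facts, rules) for p in premises):
            return True
    return False

symptoms = {"high temperature", "joint pain", "rash"}
print(backward_chain("dengue", symptoms, rules))  # True
```

The recursion mirrors the "why" direction: to support "dengue" the system must first support "fever", which it traces back to the observed high temperature.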

These systems have played a large role in many industries, including the
following:

 Financial services, where they make decisions about asset
management, act as robo-advisors and make predictions about the
behavior of various markets and other financial indicators.
 Mechanical engineering, where they troubleshoot complex
electromechanical machinery.
 Telecommunications, where they are used to make decisions about
network technologies used and maintenance of existing networks.
 Healthcare, where they assist with medical diagnoses.
 Agriculture, where they forecast crop damage.
 Customer service, where they help schedule orders, route customer
requests and solve problems.
 Transportation, where they contribute in a range of areas, including
pavement conditions, traffic light control, highway design, bus and
train scheduling and maintenance, and aviation flight patterns and air
traffic control.
 Law, where automation is starting to be used to deliver legal services,
and to make civil case evaluations and assess product liability.
What are some examples of expert systems?
Expert systems that are in use include the following examples:

 CaDet (Cancer Decision Support Tool) is used to identify cancer in its
earliest stages.
 DENDRAL helps chemists identify unknown organic molecules.
 DXplain is a clinical support system that diagnoses various diseases.
 MYCIN identifies bacteria that cause infections such as bacteremia and
meningitis, and recommends antibiotics and dosages.
 PXDES determines the type and severity of lung cancer a person has.
 R1/XCON is an early manufacturing expert system that automatically
selects and orders computer components based on customer
specifications.
What are the advantages of expert systems?
Expert systems have several benefits over the use of human experts:

 Accuracy. Expert systems are not prone to human error or emotional
influence. They make decisions based on defined rules and facts.
 Permanence. Human experts eventually leave their role, and a lot of
specific knowledge may go with them. Knowledge-based systems
provide a permanent repository for knowledge and information.
 Logical deduction. Expert systems draw conclusions from existing
facts using various types of rules, such as if-then rules.
 Cost control. Expert systems are relatively inexpensive compared to
the cost of employing human experts. They can help reach decisions
more efficiently, which saves time and cuts costs.
 Multiple experts. Multiple experts contribute to an expert system's
knowledge base. This provides more knowledge to draw from and
prevents any one expert from skewing the decision-making.
What are the challenges of expert systems?
Among expert systems' shortcomings are the following:

 Linear thinking. Expert systems lack true problem-solving ability.
One of the advantages of human intelligence is that it can reason in
nonlinear ways and use ancillary information to draw conclusions.
 Lack of intuition. Human intuition enables people to use common
sense and gut feelings to solve problems. Machines don't have
intuition. And emulating gut-feeling decision-making using
mechanical logic could take much longer than an expert using
intrinsic heuristic knowledge to come to a quick conclusion.
 Lack of emotion. In some cases -- medical diagnoses, for example --
human emotion is useful and necessary. For example, the disclosure of
sensitive medical information to a patient requires emotional
intelligence that an expert system may not have.
 Points of failure. Expert systems are only as good as the quality of
their knowledge base. If they are supplied with inaccurate
information, it can compromise their decisions.
