Object Automation Software Solutions Pvt Ltd, in collaboration with SRM Ramapuram, delivered a Workshop for Skill Development on Artificial Intelligence.
Uncertain Knowledge and Reasoning, presented by Mr. Abhishek Sharma, Research Scholar at Object Automation.
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* Algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
The document provides an overview of constraint satisfaction problems (CSPs). It defines a CSP as consisting of variables with domains of possible values, and constraints specifying allowed value combinations. CSPs can represent many problems using variables and constraints rather than explicit state representations. Backtracking search is commonly used to solve CSPs by trying value assignments and backtracking when constraints are violated.
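As a rough illustration of that backtracking idea (not taken from the slides; the map-colouring variables, domains, and constraints below are invented for the example):

```python
# Minimal backtracking search for a CSP (illustrative sketch, not from the slides).
# Variables: three map regions; domains: colours; constraints: adjacent regions differ.

def backtrack(assignment, variables, domains, constraints):
    if len(assignment) == len(variables):
        return assignment                              # every variable assigned: solution found
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]                            # violated or dead end: backtrack
    return None

def different(a, b):
    # Constraint is satisfied (or not yet decidable) unless both variables are assigned and equal.
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
constraints = [different("WA", "NT"), different("WA", "SA"), different("NT", "SA")]

print(backtrack({}, variables, domains, constraints))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```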
Problem Characteristics in Artificial Intelligence (Bharat Bhushan)
Artificial Intelligence is a “way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think”.
Since artificial intelligence (AI) is mainly related to the search process, it is important to have some methodology to choose the best possible solution.
To choose an appropriate method for a particular problem, we first need to categorize the problem based on the following characteristics.
Is the problem decomposable into small sub-problems which are easy to solve?
Can solution steps be ignored or undone?
Is the universe of the problem predictable?
Is a good solution to the problem absolute or relative?
Is the solution to the problem a state or a path?
What is the role of knowledge in solving a problem using artificial intelligence?
Does the task of solving a problem require human interaction?
1. Is the problem decomposable into small sub-problems which are easy to solve?
Can the problem be broken down into smaller problems to be solved independently?
A decomposable problem can be solved easily.
Example: In this case, the problem is divided into smaller problems. The smaller problems are solved independently. Finally, the result is merged to get the final result.
[Figure: Is the problem decomposable?]
2. Can solution steps be ignored or undone?
In the Theorem Proving problem, a lemma that has been proved can be ignored for the next steps.
Such problems are called Ignorable problems.
In the 8-Puzzle, Moves can be undone and backtracked.
Such problems are called Recoverable problems.
In Playing Chess, moves cannot be retracted.
Such problems are called Irrecoverable problems.
Ignorable problems can be solved using a simple control structure that never backtracks. Recoverable problems can be solved using backtracking. Irrecoverable problems must be solved with careful planning, since solution steps cannot be undone once taken.
3. Is the universe of the problem predictable?
In Playing Bridge, we cannot know exactly where all the cards are or what the other players will do on their turns.
Uncertain outcome!
For certain-outcome problems, planning can be used to generate a sequence of operators that is guaranteed to lead to a solution.
For uncertain-outcome problems, a sequence of generated operators can only have a good probability of leading to a solution. Plan revision is made as the plan is carried out and the necessary feedback is provided.
4. Is a good solution to the problem absolute or relative?
In the Travelling Salesman Problem, we have to try all paths to find the shortest one.
Any-path problems can be solved using heuristics that suggest good paths to explore.
For best-path problems, a much more exhaustive search is required.
5. Is the solution to the problem a state or a path?
In the Water Jug Problem, the path that leads to the goal must be reported, not just the final state.
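Because the answer here is the path rather than the final state, a search routine must record the sequence of actions. Below is a minimal breadth-first sketch, assuming the usual 4-gallon/3-gallon formulation with a goal of 2 gallons in the larger jug:

```python
# Breadth-first search over the water jug state space (illustrative sketch).
# State = (x, y): gallons in the 4-gallon and 3-gallon jugs; goal: x == 2.
from collections import deque

def successors(x, y):
    return {
        "fill 4-gal": (4, y), "fill 3-gal": (x, 3),
        "empty 4-gal": (0, y), "empty 3-gal": (x, 0),
        "pour 3->4": (min(4, x + y), y - (min(4, x + y) - x)),
        "pour 4->3": (x - (min(3, x + y) - y), min(3, x + y)),
    }

def solve(start=(0, 0), goal_amount=2):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == goal_amount:
            return path                                # the answer is the action sequence, not the state
        for action, state in successors(x, y).items():
            if state not in seen:
                seen.add(state)
                frontier.append((state, path + [action]))
    return None

print(solve())
# ['fill 4-gal', 'pour 4->3', 'empty 3-gal', 'pour 4->3', 'fill 4-gal', 'pour 4->3']
```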
1. Planning involves finding a sequence of actions that achieves a goal starting from an initial state. It uses a set of operators that define the possible actions and their effects.
2. A plan is a sequence of operator instances that transforms the initial state into a goal state. Classical planning assumes fully observable, deterministic environments.
3. Planning problems can be represented using a logical language that describes states, goals, actions and their preconditions and effects. This representation allows planning algorithms to operate over problems.
This document summarizes topics covered in an artificial intelligence session, including game theory, optimal decision making in games, alpha-beta search, Monte Carlo tree search, and constraint satisfaction problems. It provides details on algorithms like min-max, alpha-beta pruning, and modifying search algorithms to use heuristic evaluation functions and cutoffs. Evaluation functions are described as a way to estimate game positions and combine features through expected value or weighted linear combinations. The next session will cover Monte Carlo tree search.
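As a rough sketch of the alpha-beta idea mentioned above (the toy game tree and its leaf evaluations are invented):

```python
# Alpha-beta pruning over a toy game tree (illustrative sketch).
# Leaves are static evaluations; internal nodes are lists of children.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):                 # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                          # remaining children cannot change the result
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]             # classic textbook tree
print(alphabeta(tree, maximizing=True))                # 3
```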
The document discusses knowledge representation using propositional logic and predicate logic. It begins by explaining the syntax and semantics of propositional logic for representing problems as logical theorems to prove. Predicate logic is then introduced as being more versatile than propositional logic for representing knowledge, as it allows quantifiers and relations between objects. Examples are provided to demonstrate how predicate logic can formally represent statements involving universal and existential quantification.
Knowledge-based agents can accept new tasks in the form of explicitly described goals and adapt to changes in their environment by updating relevant knowledge. They maintain a knowledge base of facts about the environment and use an inference engine to deduce new information and determine what actions to take. The knowledge base stores sentences expressed in a knowledge representation language and the inference engine applies logical rules to deduce new facts or answer queries. Propositional logic is often used to represent knowledge, where sentences consist of proposition symbols connected by logical connectives like AND, OR, and NOT.
This document discusses inference in first-order logic. It defines sound and complete inference and introduces substitution. It then discusses propositional vs first-order inference and introduces universal and existential quantifiers. The key techniques of first-order inference are unification, which finds substitutions to make logical expressions identical, and forward chaining inference, which applies rules like modus ponens to iteratively derive new facts from a knowledge base.
Artificial Intelligence (AI) | Prepositional logic (PL) and first order predic... (Ashish Duggal)
The topics in this presentation are Propositional Logic (PL) and First-order Predicate Logic (FOPL), which are used for knowledge representation in artificial intelligence (AI).
There are also sub-topics in this presentation, such as logical connectives, atomic sentences, complex sentences, and quantifiers.
This PPT is very helpful for Computer Science and Computer Engineering students (B.C.A., M.C.A., B.Tech., M.Tech.).
This document summarizes key topics from a session on problem solving by search algorithms in artificial intelligence. It discusses uninformed search strategies like breadth-first search and depth-first search. It also covers informed, heuristic search strategies such as greedy best-first search and A* search which use heuristic functions to estimate distance to the goal. Examples are provided to illustrate best first search, and it describes how this algorithm expands nodes and uses priority queues to order nodes by estimated cost. The next session is slated to cover the A* search algorithm in more detail.
The document discusses classical AI planning and different planning approaches. It introduces state-space planning which searches for a sequence of state transformations, and plan-space planning which searches for a plan satisfying certain conditions. It also discusses hierarchical planning which decomposes tasks into simpler subtasks, and universal classical planning which uses different refinement techniques including state-space and plan-space refinements. Classical planning makes simplifying assumptions but its principles can still be applied to games with some workarounds.
The document discusses problem solving by searching. It describes problem solving agents and how they formulate goals and problems, search for solutions, and execute solutions. Tree search algorithms like breadth-first search, uniform-cost search, and depth-first search are described. Example problems discussed include the 8-puzzle, 8-queens, and route finding problems. The strategies of different uninformed search algorithms are explained.
This document discusses various heuristic search techniques used in artificial intelligence. It begins by defining heuristics as techniques that find approximate solutions faster than classic methods when exact solutions are not possible or not feasible due to time or memory constraints. It then describes heuristic search, hill climbing, simulated annealing, A* search, and best-first search. Hill climbing is presented as an example heuristic technique that evaluates neighboring states to move toward an optimal solution. The document also discusses problems that can occur with hill climbing like getting stuck in local maxima.
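A minimal hill-climbing sketch that exhibits exactly the local-maximum problem described above (the one-dimensional landscape values are invented):

```python
# Steepest-ascent hill climbing over a 1-D landscape (illustrative sketch).
# The climber moves one step left or right and stops when no neighbour is higher.

landscape = [1, 3, 5, 4, 2, 6, 8, 9, 7]                # local maximum at index 2, global at index 7

def hill_climb(start):
    current = start
    while True:
        neighbours = [i for i in (current - 1, current + 1) if 0 <= i < len(landscape)]
        best = max(neighbours, key=lambda i: landscape[i])
        if landscape[best] <= landscape[current]:
            return current                             # no uphill neighbour: a (possibly local) maximum
        current = best

print(hill_climb(0))   # 2 -> stuck on the local maximum (value 5)
print(hill_climb(4))   # 7 -> reaches the global maximum (value 9)
```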
The document discusses the greedy method algorithmic approach. It provides an overview of greedy algorithms including that they make locally optimal choices at each step to find a global optimal solution. The document also provides examples of problems that can be solved using greedy methods like job sequencing, the knapsack problem, finding minimum spanning trees, and single source shortest paths. It summarizes control flow and applications of greedy algorithms.
This document summarizes a session on problem solving by search in artificial intelligence. It discusses uninformed and informed search strategies like breadth-first search, uniform cost search, depth-first search, greedy best-first search, and A* search. It also covers searching with non-deterministic actions, partial observations, and online search agents operating in unknown environments. Examples discussed include the vacuum world problem and how search trees are used to handle non-determinism through contingency planning. The next session will cover online search agents operating in unknown environments.
The Certainty Factor Theory uses numeric values between -1 and 1 to represent the likelihood or certainty of statements or hypotheses being true based on evidence. It was developed for artificial intelligence systems to represent uncertain or incomplete information. The Certainty Factor can be calculated based on the Measure of Belief and Measure of Disbelief of hypotheses given evidence, and formulas are provided to combine Certainty Factors from multiple pieces of evidence. However, the theory has limitations, such as difficulty accurately assigning certainty values and the limited numeric range. The Dempster-Shafer Theory was introduced to address some of the limitations of probability theory. It defines a mass function over all subsets of a set of possible conclusions to represent degrees of belief, and uses belief and plausibility functions to bound the likelihood of each conclusion.
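As a rough sketch of how certainty factors can be computed and combined, assuming the MYCIN-style combination formulas (the MB/MD values below are invented):

```python
# Certainty-factor arithmetic in the MYCIN style (illustrative sketch).
# CF = MB - MD, and two CFs for the same hypothesis are combined piecewise.

def certainty_factor(mb, md):
    return mb - md                                     # MB and MD each lie in [0, 1], so CF is in [-1, 1]

def combine(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

cf_a = certainty_factor(mb=0.8, md=0.1)                # evidence A supports the hypothesis: 0.7
cf_b = certainty_factor(mb=0.6, md=0.2)                # evidence B also supports it: 0.4
print(combine(cf_a, cf_b))                             # about 0.82 -> belief strengthened by both pieces of evidence
```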
The document discusses planning and problem solving in artificial intelligence. It describes planning problems as finding a sequence of actions to achieve a given goal state from an initial state. Common assumptions in planning include atomic time steps, deterministic actions, and a closed world. Blocks world examples are provided to illustrate planning domains and representations using states, goals, and operators. Classical planning approaches like STRIPS are summarized.
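To illustrate the state/operator style of representation referred to above, here is a minimal STRIPS-like sketch for one blocks-world action (the predicate names and the unstack operator are assumptions for illustration, not the exact notation from the slides):

```python
# A STRIPS-style operator applied to a blocks-world state (illustrative sketch).
# A state is a set of ground facts; an operator has preconditions, an add list and a delete list.

state = {("on", "A", "B"), ("clear", "A"), ("ontable", "B"), ("handempty",)}

unstack_A_B = {
    "pre": {("on", "A", "B"), ("clear", "A"), ("handempty",)},
    "add": {("holding", "A"), ("clear", "B")},
    "del": {("on", "A", "B"), ("clear", "A"), ("handempty",)},
}

def applicable(state, op):
    return op["pre"] <= state                          # every precondition must hold in the state

def apply_op(state, op):
    return (state - op["del"]) | op["add"]             # remove the delete list, assert the add list

if applicable(state, unstack_A_B):
    print(apply_op(state, unstack_A_B))
# {('ontable', 'B'), ('holding', 'A'), ('clear', 'B')}  (set printing order may vary)
```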
The Wumpus World is a simulated cave environment where an agent must explore rooms connected by passageways to find gold and escape without being eaten by the Wumpus or falling in a pit. The agent has sensors to detect stench, breeze, glitter, bumps and screams but can only see local information. It can move between rooms or use actions like shoot, grab, and climb out. The goal is to get the highest score by finding gold and escaping while taking the fewest actions and avoiding dangers.
The document discusses randomized algorithms. Randomized algorithms employ randomness as part of their logic, using random bits to guide their behavior. This allows them to achieve good average performance over many trials. The output or running time of a randomized algorithm is a random variable. Advantages include simplicity, efficiency through testing many possibilities, and better complexity bounds than deterministic algorithms. Disadvantages include potential for hardware failures from long runtimes, high memory usage for repeated processes, and longer runtimes as operations split into many parts.
This document discusses state space representation in artificial intelligence. It provides examples of how state space representation can be used to model problems. Specifically, it describes:
1) The water jug problem, where the goal is to fill a 4 gallon jug with 2 gallons using only a 3 gallon jug. The initial and goal states are defined along with the possible state transitions.
2) Production rules for solving the water jug problem by pouring water between the jugs or emptying jugs.
3) The step-by-step solution to the water jug problem by applying the production rules to reach the goal state of filling the 4 gallon jug with 2 gallons.
"Problem-Solving Strategies in Artificial Intelligence" delves into the core techniques and methods employed by AI systems to address complex problems. This exploration covers the two main categories of search strategies: uninformed and informed, revealing how they navigate the solution space. It also investigates the use of heuristics, which provide a shortcut for guiding the search, and local search algorithms' role in tackling optimization problems. The description offers insights into the critical concepts and strategies that power AI's ability to find solutions efficiently and effectively in various domains.
In "Problem-Solving Strategies in Artificial Intelligence," we dive deeper into the foundational techniques and methodologies that AI systems rely on to tackle challenging problems. This comprehensive exploration begins with an in-depth examination of search strategies. Uninformed search strategies, often referred to as blind searches, are dissected, along with informed search strategies that harness domain-specific knowledge and heuristics to guide the search process more intelligently.
The role of heuristics in AI problem-solving is thoroughly investigated. These problem-solving techniques employ domain-specific rules of thumb to estimate the quality of potential solutions, aiding in decision-making and prioritization. The famous A* search algorithm, which combines actual cost and heuristic estimation, is highlighted as a prime example of informed search.
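A compact sketch of that idea, ordering the frontier by f(n) = g(n) + h(n); the graph, step costs, and heuristic values are invented and chosen so the heuristic never overestimates:

```python
# A* search: order the frontier by f(n) = g(n) + h(n) (illustrative sketch).
import heapq

graph = {                                              # neighbour: step cost
    "S": {"A": 1, "B": 4},
    "A": {"B": 2, "G": 6},
    "B": {"G": 3},
    "G": {},
}
h = {"S": 4, "A": 3, "B": 2, "G": 0}                   # admissible: never overestimates the true cost

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]         # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node].items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("S", "G"))   # (['S', 'A', 'B', 'G'], 6)
```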
Local search algorithms, another critical component, are discussed in the context of optimization problems. These algorithms excel in finding the best solution within a local neighborhood of the current solution and are particularly valuable for various optimization challenges. You'll explore methods like hill climbing and simulated annealing, which are vital for optimizing solutions in constrained problem spaces.
This insightful exploration provides a comprehensive understanding of the problem-solving strategies employed in AI, offering a solid foundation for those seeking to apply AI techniques to real-world challenges and further the field of artificial intelligence.
Reasoning is the process of deriving logical conclusions from facts or premises. There are several types of reasoning including deductive, inductive, abductive, analogical, and formal reasoning. Reasoning is a core component of artificial intelligence as AI systems must be able to reason about what they know to solve problems and draw new inferences. Formal logic provides the foundation for building reasoning systems through symbolic representations and inference rules.
The document describes logical agents and knowledge representation. It contains the following key points:
- Logical agents use knowledge representation and reasoning to solve problems and generate new knowledge. This enables intelligent behavior in partially observable environments.
- A knowledge-based agent's central component is its knowledge base, which contains sentences in a formal language that can be queried or added to.
- Wumpus World is described as an example environment, where the agent must navigate, avoid dangers, and find gold using limited sensory information and logical reasoning.
- Propositional and predicate logic are introduced as knowledge representation languages. Forward and backward chaining are also described as techniques for logical inference.
Fundamentals of the Analysis of Algorithm Efficiency (Saranya Natarajan)
This document discusses analyzing the efficiency of algorithms. It introduces the framework for analyzing algorithms in terms of time and space complexity. Time complexity indicates how fast an algorithm runs, while space complexity measures the memory required. The document outlines steps for analyzing algorithms, including measuring input size, determining the basic operations, calculating frequency counts of operations, and expressing efficiency in Big O notation order of growth. Worst-case, best-case, and average-case time complexities are also discussed.
This document discusses different types of intelligent agents. It describes four basic types of agent programs: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Simple reflex agents select actions based only on the current percept, while model-based reflex agents maintain an internal model of the world. Goal-based agents use goals to determine desirable situations. Utility-based agents maximize an internal utility function that represents the performance measure. The document also discusses agent functions, percepts, environments, and the PEAS properties of task environments.
The document discusses probability and acting under uncertainty in artificial intelligence. It covers several key concepts:
1) Agents must often act under uncertainty due to partial observability or non-determinism. They rely on belief states representing possible world states and generating contingency plans, but these can become large and unwieldy.
2) Probabilistic reasoning uses probability distributions over possible world states to represent an agent's beliefs. Bayes' rule allows computing conditional probabilities given evidence to update these beliefs.
3) Independence assumptions allow factoring full joint probability distributions over all variables, making computation more tractable when variables are conditionally independent.
Mathematical Background for Artificial Intelligence (ananth)
Mathematical background is essential for understanding and developing AI and Machine Learning applications. In this presentation we give a brief tutorial that encompasses basic probability theory, distributions, mixture models, anomaly detection, graphical representations such as Bayesian Networks, etc.
This document provides an introduction to probabilistic and stochastic models in machine learning. It discusses key concepts like probabilistic modeling, importance of probabilistic ML models, Bayesian inference, basics of probability theory, Bayes' rule, and examples of how to apply Bayes' theorem in machine learning. The document covers conditional probability, prior and posterior probability, and how a Naive Bayes classifier uses Bayes' theorem for classification tasks.
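A tiny worked example of Bayes' theorem as a Naive Bayes classifier would apply it (the class priors and word likelihoods are invented):

```python
# Bayes' rule: P(class | words) is proportional to P(words | class) * P(class).
# A Naive Bayes classifier multiplies per-feature likelihoods under a conditional-independence
# assumption and normalises to obtain posteriors (all numbers below are invented).

priors = {"spam": 0.4, "ham": 0.6}
likelihood = {                                         # P(word | class)
    "spam": {"offer": 0.30, "meeting": 0.02},
    "ham":  {"offer": 0.05, "meeting": 0.20},
}

def posterior(words):
    scores = {}
    for c, prior in priors.items():
        score = prior
        for w in words:
            score *= likelihood[c].get(w, 1e-6)        # naive independence assumption
        scores[c] = score
    total = sum(scores.values())                       # normalise so the posteriors sum to 1
    return {c: s / total for c, s in scores.items()}

print(posterior(["offer"]))                            # spam is more probable (posterior about 0.8)
print(posterior(["offer", "meeting"]))                 # "meeting" pulls the posterior back toward ham
```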
This document discusses handling uncertainty through probabilistic reasoning and machine learning techniques. It covers sources of uncertainty like incomplete data, probabilistic effects, and uncertain outputs from inference. Approaches covered include Bayesian networks, Bayes' theorem, conditional probability, joint probability distributions, and Dempster-Shafer theory. It provides examples of calculating conditional probabilities and using Bayes' theorem. Bayesian networks are defined as directed acyclic graphs representing probabilistic dependencies between variables, and examples show how to represent domains of uncertainty and perform probabilistic reasoning using a Bayesian network.
This document discusses different types of data and statistical concepts. It begins by describing the major types of data: numerical, categorical, and ordinal. Numerical data represents quantitative measurements, categorical data has no inherent mathematical meaning, and ordinal data consists of categories with a meaningful mathematical order. It then discusses statistical measures like the mean, median, mode, standard deviation, variance, percentiles, moments, covariance, correlation, conditional probability, and Bayes' theorem. Examples are provided to help explain each concept.
Dr. Abhay Pratap Pandey introduces statistical inference and its key concepts. Statistical inference allows making conclusions about a population based on a sample. It involves estimation and hypothesis testing. Estimation determines population parameters using sample statistics. Hypothesis testing determines if sample data provides sufficient evidence to reject claims about population parameters. The document defines key terms like population, sample, parameter, statistic, and discusses properties of estimators like unbiasedness and consistency. It also explains hypothesis testing concepts like null and alternative hypotheses, types of errors, and steps to conduct hypothesis tests on a population mean. An example demonstrates hypothesis testing for a population mean using a z-test.
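A minimal sketch of a one-sample z-test of the kind described above (the hypothesised mean, sample statistics, and significance level are assumptions):

```python
# One-sample z-test for a population mean (illustrative sketch; numbers are invented).
# H0: mu = 50   vs   H1: mu != 50, with a known population standard deviation.
import math

mu0, sigma = 50.0, 8.0          # hypothesised mean and known population std dev
n, xbar = 36, 53.2              # sample size and sample mean
alpha = 0.05

z = (xbar - mu0) / (sigma / math.sqrt(n))              # test statistic
z_crit = 1.96                                          # two-tailed critical value at alpha = 0.05

print(f"z = {z:.2f}")                                  # z = 2.40
print("reject H0" if abs(z) > z_crit else "fail to reject H0")
```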
Informed search algorithms use heuristics to more efficiently find goal nodes in large search spaces. Heuristics estimate how close a state is to the goal and help guide the search. The heuristic function must be admissible, meaning its estimated cost must be less than or equal to the actual cost. Bayes' theorem allows calculating conditional probabilities and is fundamental to probabilistic reasoning, which represents knowledge with uncertainty using probabilities. Fuzzy set theory introduces vagueness by assigning membership degrees between 0 and 1 to represent how well something belongs to a set, like how sunny a day is based on cloud cover.
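A toy membership function for the "how sunny is the day" example above (the cloud-cover breakpoints are assumptions):

```python
# Fuzzy membership: the degree to which a day is "sunny", as a function of cloud cover (%).
# Membership is 1 up to 20% cloud cover, falls linearly, and reaches 0 at 80% (assumed breakpoints).

def sunny(cloud_cover_pct):
    if cloud_cover_pct <= 20:
        return 1.0
    if cloud_cover_pct >= 80:
        return 0.0
    return (80 - cloud_cover_pct) / 60.0               # linear ramp between the breakpoints

for pct in (10, 35, 50, 90):
    print(pct, round(sunny(pct), 2))                   # 1.0, 0.75, 0.5, 0.0
```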
The document discusses discrete and continuous probability distributions, explaining that a discrete distribution applies to variables that can take on countable values while a continuous distribution is used for variables that can take any value within a range. It provides examples of discrete variables like coin flips and continuous variables like weights. The document also outlines the differences between discrete and continuous probability distributions in how they are represented and calculated.
This presentation contains my one-day lectures, which introduce fuzzy set theory, operations on fuzzy sets, and some engineering control applications using the Mamdani model.
STATISTICS AND PROBABILITY THEORY.pptx (VenuKumar65)
The document discusses key concepts in probability theory including probability, random experiments, sample spaces, events, random variables, probability distributions, and Bayes' theorem. It covers the binomial, Poisson, and normal distributions and their characteristics and applications. Decision theory is introduced as analyzing choices under uncertainty involving defining problems, identifying outcomes, assessing criteria, and evaluating alternatives to make optimal decisions.
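As a quick illustration of one of the discrete distributions mentioned above, the binomial probability mass function can be evaluated directly (the coin-flip parameters are assumptions):

```python
# Binomial probability mass function: P(X = k) = C(n, k) * p**k * (1 - p)**(n - k).
from math import comb

n, p = 10, 0.5                                         # e.g. ten fair coin flips
for k in (3, 5, 8):
    print(k, round(comb(n, k) * p**k * (1 - p)**(n - k), 4))
# 3 0.1172
# 5 0.2461
# 8 0.0439
```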
CHAPTER 1 THEORY OF PROBABILITY AND STATISTICS.pptx (anshujain54751)
Probability theory is a branch of mathematics that uses concepts like sample space, probability distributions, and random variables to assign numerical likelihoods to the chances of outcomes occurring in random phenomena. It involves both theoretical and experimental approaches. Key aspects of probability theory include defining events and random variables, understanding independent and dependent events, and using formulas to calculate probabilities. Probability theory has various applications, like in finance to model markets, in product design to reduce failure probabilities, and in casinos to shape games of chance.
Statistical hypothesis testing is an important tool for scientists to critically evaluate hypotheses using empirical data. It helps keep scientists honest by requiring them to statistically test their ideas rather than accepting them uncritically. One should be skeptical of any paper that claims an alternative hypothesis is supported without providing a statistical test. A key statistical test is the chi-square test, which compares observed data to expected frequencies under the null hypothesis. It calculates a test statistic and compares it to critical values in tables to determine if the null hypothesis can be rejected in favor of the alternative hypothesis. Proper use of statistical testing is part of the scientific method and moral imperative for scientists.
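A minimal chi-square goodness-of-fit calculation in the spirit of that description (the observed counts and the 3:1 expected ratio are invented):

```python
# Chi-square goodness-of-fit test (illustrative sketch; counts are invented).
# Compare observed counts with the frequencies expected under the null hypothesis.

observed = [290, 110]                 # e.g. two phenotype classes
expected = [300, 100]                 # expected under a 3:1 null hypothesis (n = 400)

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1
critical = 3.841                      # chi-square critical value for df = 1, alpha = 0.05

print(f"chi-square = {chi_sq:.3f}, df = {df}")         # chi-square = 1.333, df = 1
print("reject H0" if chi_sq > critical else "fail to reject H0")
```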
This document provides an overview of hypothesis testing in the context of regression analysis with a single regressor. It discusses stating the population parameter of interest, using sample data to estimate this parameter, determining the standard error of the estimator, and conducting hypothesis tests by comparing the test statistic to critical values. Examples are provided to illustrate testing whether the slope coefficient is statistically different from zero, and interpreting results based on p-values and significance levels.
This interactive course aims to equip students with an in-depth comprehension of data science principles and methodologies, with a strong emphasis on practical applications.
This document summarizes Avesta Sasan's background and research focus. It begins with an introduction of Avesta Sasan and their position as an Associate Professor at UC Davis. The document then discusses some challenges with keeping hardware development pace with the rapid growth and increasing resource demands of artificial intelligence. This includes charts showing the growth in model size and training costs outpacing hardware improvements. It argues that contributions are needed from across machine learning, hardware architecture, circuit design, and manufacturing to improve AI speed and efficiency. The document proposes using machine learning techniques in electronic design automation flows to help automate and accelerate physical design and RTL synthesis steps in order to help close the gap between hardware and AI.
The CHIPS Alliance is a Linux Foundation project that develops open source hardware specifications, implementations, verification tools, and IP blocks. It aims to lower the costs of hardware development through collaboration and shared resources. Members include companies and organizations working on CPUs, interconnects, I/O, machine learning accelerators, and more. The CHIPS Alliance uses Apache 2.0 licensing to encourage IP contribution and participation while allowing commercial use of outputs. It provides a neutral environment for hardware collaboration across companies and countries.
The document discusses various RTL design methodologies including modular design, hierarchical design, design abstraction, RTL coding guidelines, design verification, timing constraints, power optimization, area optimization, and design reuse. It defines each methodology, provides examples, and outlines the advantages of applying these methodologies in RTL design.
This document discusses the challenges of designing AI chips and how high-level synthesis (HLS) can help address them. It notes that AI design requirements change rapidly, requiring a nimble design approach. HLS allows designs to be specified at a high level in languages like C++ and optimized through automated exploration, enabling faster design cycles compared to manual RTL development. Case studies demonstrate how HLS can achieve better power, performance and area than hand-coded RTL. The document argues HLS is well-suited for AI chip design due to its ability to rapidly apply design intent and optimize architectures through automation.
AI-Inspired IOT Chiplets and 3D Heterogeneous Integration (Object Automation)
Ultra low power processor cores and 2.5D/3D heterogeneous chiplet integration are required for emerging IOT applications. Wafer-Level-Substrate demonstrates solid RF performance with sub-1um L/S capability, pad-less vias, and active/passive embedding capabilities, enabling multi-die/chiplet and silicon photonic packaging. 3DHI stacking using high bandwidth substrate enables modular testing and provides effective thermal management.
This document discusses the state of artificial intelligence (AI) and NASSCOM's role in enabling AI adoption in India. It covers three key drivers of the AI revolution: vast amounts of data, mega computing power, and massive funding. NASSCOM's focus areas include accelerating India's digital transformation, developing talent and infrastructure for AI, and addressing barriers to responsible AI adoption such as skills shortages and regulatory uncertainty. The document presents data on global AI readiness and the potential economic impact of generative AI technologies.
CDAC presentation as part of Global AI Festival and Future (Object Automation)
This document discusses generative artificial intelligence (GenAI) and its opportunities and challenges. It provides an overview of what GenAI is, including its evolution from earlier forms of AI. It discusses GenAI applications in areas like advisory systems, assistants, cooperation, augmentation, and autonomous systems. Examples are given of image captioning, screenplay assistance, and cybersecurity applications like flagging unusual network behavior. Challenges around invisible data perturbation and GenAI attacks are mentioned. The document concludes with a discussion of Gartner's hype cycle for GenAI, opportunities, challenges, and recommendations around GenAI threat mitigation including trust, risk management, democratization, and governance frameworks.
Global AI Festival and Future is a digital broadcast of thought-provoking discussions and insights from world AI leaders. The event covers global and regional streams, helping you learn about the latest technological improvements, practical use cases, and industry trends.
Key Outcomes of the Event
- Gain knowledge about Latest Technology Trends
- Networking Opportunity with Technical Leaders
- Opportunities to become a thought leader
- Learn and Understand Industry-based AI Solutions
- Get Access to the World's Super-fast GPU Compute
- International Internship opportunities
- Quiz Competition to win prizes and placements
Object Automation, a technology company based in California, has been concentrating on the latest technologies and emerging tech partnerships. These include research and solution development, the development of onshore and offshore technology projects, the establishment of tech centers of excellence in AI, quantum, and chip design, technology workshops and boot camps for corporates, special labs for universities, and cutting-edge industry projects.
This document provides an overview of an AI and Applications bootcamp program. The program includes a variety of courses that provide both theoretical foundations and practical skills in AI, machine learning, and related topics. It utilizes a blended learning approach including online videos, live virtual classes, projects, and masterclasses. The program aims to help professionals gain expertise in in-demand AI skills and advance their careers. It covers topics such as deep learning, computer vision, natural language processing, and more.
The document provides information on various technology course offerings from Object Automation System Solutions Inc. The courses include AI in Enterprise, Generative AI, UI Developer, Cyber Crime, Integrated Azure, and Chip Design. For each course, a brief description is given of the topics that will be covered. The document also provides information on who should take each course and highlights of the course delivery approach, which includes classroom and online live classes, pre-recorded classes, placement support, and implementing concepts in real-time projects.
The document outlines a 40-hour training agenda on enterprise artificial intelligence covering machine learning libraries and algorithms, building and deploying machine learning models using structured and unstructured databases, digital agriculture using deep learning, and model operations for production. The agenda is split over 5 days and covers topics such as regression, classification, model building with Docker, GitHub, IBM DB2, Cassandra, and deployment operations.
AI and Data Privacy in 2025: Global Trends (InData Labs)
In this infographic, we explore how businesses can implement effective governance frameworks to address AI data privacy. Understanding it is crucial for developing effective strategies that ensure compliance, safeguard customer trust, and leverage AI responsibly. Equip yourself with insights that can drive informed decision-making and position your organization for success in the future of data privacy.
This infographic contains:
-AI and data privacy: Key findings
-Statistics on AI data privacy in today’s world
-Tips on how to overcome data privacy challenges
-Benefits of AI data security investments.
Keep up-to-date on how AI is reshaping privacy standards and what this entails for both individuals and organizations.
This is the keynote of the Into the Box conference, highlighting the release of the BoxLang JVM language, its key enhancements, and its vision for the future.
Big Data Analytics Quick Research Guide by Arthur Morgan
This is a Quick Research Guide (QRG).
QRGs include the following:
- A brief, high-level overview of the QRG topic.
- A milestone timeline for the QRG topic.
- Links to various free online resource materials to provide a deeper dive into the QRG topic.
- Conclusion and a recommendation for at least two books available in the SJPL system on the QRG topic.
QRGs planned for the series:
- Artificial Intelligence QRG
- Quantum Computing QRG
- Big Data Analytics QRG
- Spacecraft Guidance, Navigation & Control QRG (coming 2026)
- UK Home Computing & The Birth of ARM QRG (coming 2027)
Any questions or comments?
- Please contact Arthur Morgan at [email protected].
100% human made.
Andrew Marnell: Transforming Business Strategy Through Data-Driven Insights (Andrew Marnell)
With expertise in data architecture, performance tracking, and revenue forecasting, Andrew Marnell plays a vital role in aligning business strategies with data insights. Andrew Marnell’s ability to lead cross-functional teams ensures businesses achieve sustainable growth and operational excellence.
HCL Nomad Web – Best Practices and Managing Multiuser Environments (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-and-managing-multiuser-environments/
HCL Nomad Web is heralded as the next generation of the HCL Notes client, offering numerous advantages such as eliminating the need for packaging, distribution, and installation. Nomad Web client upgrades will be installed “automatically” in the background. This significantly reduces the administrative footprint compared to traditional HCL Notes clients. However, troubleshooting issues in Nomad Web present unique challenges compared to the Notes client.
Join Christoph and Marc as they demonstrate how to simplify the troubleshooting process in HCL Nomad Web, ensuring a smoother and more efficient user experience.
In this webinar, we will explore effective strategies for diagnosing and resolving common problems in HCL Nomad Web, including
- Accessing the console
- Locating and interpreting log files
- Accessing the data folder within the browser’s cache (using OPFS)
- Understand the difference between single- and multi-user scenarios
- Utilizing Client Clocking
Mobile App Development Company in Saudi Arabia (Steve Jonas)
EmizenTech is a globally recognized software development company, proudly serving businesses since 2013. With 11+ years of industry experience and a team of 200+ skilled professionals, we have successfully delivered 1200+ projects across various sectors. As a leading Mobile App Development Company in Saudi Arabia, we offer end-to-end solutions for iOS, Android, and cross-platform applications. Our apps are known for their user-friendly interfaces, scalability, high performance, and strong security features. We tailor each mobile application to meet the unique needs of different industries, ensuring a seamless user experience. EmizenTech is committed to turning your vision into a powerful digital product that drives growth, innovation, and long-term success in the competitive mobile landscape of Saudi Arabia.
Special Meetup Edition - TDX Bengaluru Meetup #52.pptx (shyamraj55)
We’re bringing the TDX energy to our community with 2 power-packed sessions:
🛠️ Workshop: MuleSoft for Agentforce
Explore the new version of our hands-on workshop featuring the latest Topic Center and API Catalog updates.
📄 Talk: Power Up Document Processing
Dive into smart automation with MuleSoft IDP, NLP, and Einstein AI for intelligent document workflows.
#StandardsGoals for 2025: Standards & certification roundup - Tech Forum 2025 (BookNet Canada)
Book industry standards are evolving rapidly. In the first part of this session, we’ll share an overview of key developments from 2024 and the early months of 2025. Then, BookNet’s resident standards expert, Tom Richardson, and CEO, Lauren Stewart, have a forward-looking conversation about what’s next.
Link to recording, transcript, and accompanying resource: https://bnctechforum.ca/sessions/standardsgoals-for-2025-standards-certification-roundup/
Presented by BookNet Canada on May 6, 2025 with support from the Department of Canadian Heritage.
Technology Trends in 2025: AI and Big Data Analytics (InData Labs)
At InData Labs, we have been keeping an ear to the ground, looking out for AI-enabled digital transformation trends coming our way in 2025. Our report will provide a look into the technology landscape of the future, including:
-Artificial Intelligence Market Overview
-Strategies for AI Adoption in 2025
-Anticipated drivers of AI adoption and transformative technologies
-Benefits of AI and Big data for your business
-Tips on how to prepare your business for innovation
-AI and data privacy: Strategies for securing data privacy in AI models, etc.
Download your free copy now and implement the key findings to improve your business.
UiPath Community Berlin: Orchestrator API, Swagger, and Test Manager API (UiPathCommunity)
Join this UiPath Community Berlin meetup to explore the Orchestrator API, Swagger interface, and the Test Manager API. Learn how to leverage these tools to streamline automation, enhance testing, and integrate more efficiently with UiPath. Perfect for developers, testers, and automation enthusiasts!
📕 Agenda
Welcome & Introductions
Orchestrator API Overview
Exploring the Swagger Interface
Test Manager API Highlights
Streamlining Automation & Testing with APIs (Demo)
Q&A and Open Discussion
Perfect for developers, testers, and automation enthusiasts!
👉 Join our UiPath Community Berlin chapter: https://community.uipath.com/berlin/
This session streamed live on April 29, 2025, 18:00 CET.
Check out all our upcoming UiPath Community sessions at https://community.uipath.com/events/.
Noah Loul Shares 5 Steps to Implement AI Agents for Maximum Business Efficien... (Noah Loul)
Artificial intelligence is changing how businesses operate. Companies are using AI agents to automate tasks, reduce time spent on repetitive work, and focus more on high-value activities. Noah Loul, an AI strategist and entrepreneur, has helped dozens of companies streamline their operations using smart automation. He believes AI agents aren't just tools—they're workers that take on repeatable tasks so your human team can focus on what matters. If you want to reduce time waste and increase output, AI agents are the next move.
Artificial Intelligence is providing benefits in many areas of work within the heritage sector, from image analysis, to ideas generation, and new research tools. However, it is more critical than ever for people, with analogue intelligence, to ensure the integrity and ethical use of AI. Including real people can improve the use of AI by identifying potential biases, cross-checking results, refining workflows, and providing contextual relevance to AI-driven results.
News about the impact of AI often paints a rosy picture. In practice, there are many potential pitfalls. This presentation discusses these issues and looks at the role of analogue intelligence and analogue interfaces in providing the best results to our audiences. How do we deal with factually incorrect results? How do we get content generated that better reflects the diversity of our communities? What roles are there for physical, in-person experiences in the digital world?
Linux Support for SMARC: How Toradex Empowers Embedded Developers (Toradex)
Toradex brings robust Linux support to SMARC (Smart Mobility Architecture), ensuring high performance and long-term reliability for embedded applications. Here’s how:
• Optimized Torizon OS & Yocto Support – Toradex provides Torizon OS, a Debian-based easy-to-use platform, and Yocto BSPs for customized Linux images on SMARC modules.
• Seamless Integration with i.MX 8M Plus and i.MX 95 – Toradex SMARC solutions leverage NXP’s i.MX 8 M Plus and i.MX 95 SoCs, delivering power efficiency and AI-ready performance.
• Secure and Reliable – With Secure Boot, over-the-air (OTA) updates, and LTS kernel support, Toradex ensures industrial-grade security and longevity.
• Containerized Workflows for AI & IoT – Support for Docker, ROS, and real-time Linux enables scalable AI, ML, and IoT applications.
• Strong Ecosystem & Developer Support – Toradex offers comprehensive documentation, developer tools, and dedicated support, accelerating time-to-market.
With Toradex’s Linux support for SMARC, developers get a scalable, secure, and high-performance solution for industrial, medical, and AI-driven applications.
Do you have a specific project or application in mind where you're considering SMARC? We can help with Free Compatibility Check and help you with quick time-to-market
For more information: https://www.toradex.com/computer-on-modules/smarc-arm-family
Massive Power Outage Hits Spain, Portugal, and France: Causes, Impact, and On...Aqusag Technologies
In late April 2025, a significant portion of Europe, particularly Spain, Portugal, and parts of southern France, experienced widespread, rolling power outages that continue to affect millions of residents, businesses, and infrastructure systems.
AI EngineHost Review: Revolutionary USA Datacenter-Based Hosting with NVIDIA ...SOFTTECHHUB
I started my online journey with several hosting services before stumbling upon Ai EngineHost. At first, the idea of paying one fee and getting lifetime access seemed too good to pass up. The platform is built on reliable US-based servers, ensuring your projects run at high speeds and remain safe. Let me take you step by step through its benefits and features as I explain why this hosting solution is a perfect fit for digital entrepreneurs.
HCL Nomad Web – Best Practices und Verwaltung von Multiuser-Umgebungenpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-nomad-web-best-practices-und-verwaltung-von-multiuser-umgebungen/
HCL Nomad Web wird als die nächste Generation des HCL Notes-Clients gefeiert und bietet zahlreiche Vorteile, wie die Beseitigung des Bedarfs an Paketierung, Verteilung und Installation. Nomad Web-Client-Updates werden “automatisch” im Hintergrund installiert, was den administrativen Aufwand im Vergleich zu traditionellen HCL Notes-Clients erheblich reduziert. Allerdings stellt die Fehlerbehebung in Nomad Web im Vergleich zum Notes-Client einzigartige Herausforderungen dar.
Begleiten Sie Christoph und Marc, während sie demonstrieren, wie der Fehlerbehebungsprozess in HCL Nomad Web vereinfacht werden kann, um eine reibungslose und effiziente Benutzererfahrung zu gewährleisten.
In diesem Webinar werden wir effektive Strategien zur Diagnose und Lösung häufiger Probleme in HCL Nomad Web untersuchen, einschließlich
- Zugriff auf die Konsole
- Auffinden und Interpretieren von Protokolldateien
- Zugriff auf den Datenordner im Cache des Browsers (unter Verwendung von OPFS)
- Verständnis der Unterschiede zwischen Einzel- und Mehrbenutzerszenarien
- Nutzung der Client Clocking-Funktion
3. Agenda
Uncertain Knowledge and Reasoning
Causes of uncertainty
Probabilistic Reasoning
Bayes' Rule & use cases
Axioms of probability
Use case of Bayes Theorem in AI
Project – Signature Forgery Detection using a CNN Siamese network
4. Knowledge Representation and Reasoning in AI
• Humans are best at understanding, reasoning, and interpreting knowledge. A human knows things (knowledge) and, based on that knowledge, performs various actions in the real world. How machines can do these things is the subject of knowledge representation and reasoning.
• Knowledge representation and reasoning (KR, or KRR) is the part of AI concerned with how AI agents think and how that thinking contributes to their intelligent behavior.
• It is responsible for representing information about the real world so that a computer can understand it and use this knowledge to solve AI problems.
• It also describes how we can represent knowledge in AI.
• Knowledge representation is not just storing data in a database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.
5. Knowledge that needs to be represented in AI systems
• Objects: All the facts about objects in our domain. E.g., guitars have strings; trumpets are brass instruments.
• Events: Events are the actions that occur in our world.
• Performance: Describes behavior that involves knowledge about how to do things.
• Meta-knowledge: Knowledge about what we know.
• Facts: Facts are the truths about the real world that we represent.
• Knowledge-Base: The central component of a knowledge-based agent is the knowledge base, represented as KB. A knowledge base is a group of sentences (here, "sentence" is used as a technical term and is not identical to a sentence in the English language).
• Knowledge: Knowledge is awareness or familiarity gained by experience of facts, data, and situations. The types of knowledge in artificial intelligence are declarative, structural, procedural, meta-, and heuristic knowledge.
6. Issues in knowledge representation
• Relationships among attributes: The attributes used to describe objects are themselves entities. However, the attributes of an object do not depend on the specific knowledge being encoded.
• Choosing the granularity of representation: While deciding the granularity, it is necessary to know: (i) what the primitives are and at what level the knowledge should be represented; (ii) whether the number of low-level primitives or high-level facts should be small or large. High-level facts may be insufficient to draw conclusions, while low-level primitives may require a lot of storage.
• Representing sets of objects: Some properties hold for a set of objects as a whole but not for its individual members. Example: consider the assertions "There are more sheep than people in Australia" and "English speakers can be found all over the world." These facts can be described by attaching assertions to the sets representing people, sheep, and English speakers.
• Finding the right structure as needed: To describe a particular situation, it is important to access the right structure. This can be done by selecting an initial structure and then revising the choice as needed.
7. Uncertainty
• If knowledge is represented with certainty, it means we are sure about the predicates. With this kind of knowledge representation we might write A→B, which means if A is true then B is true.
• But consider a situation where we are not sure whether A is true or not; then we cannot express this statement. This situation is called uncertainty.
• So to represent uncertain knowledge, where we are not sure about the predicates, we need uncertain reasoning or probabilistic reasoning.
Causes of uncertainty
• Information obtained from unreliable sources
• Experimental errors
• Equipment faults
• Temperature variation
• Climate change
8. Uncertain Knowledge and Reasoning
• Uncertain knowledge: when the available knowledge involves multiple causes leading to multiple effects, or when knowledge of causality in the domain is incomplete.
• Uncertain data: data that is missing, unreliable, inconsistent, or noisy.
• Uncertain knowledge representation: a representation that provides only a restricted model of the real system or has limited expressiveness.
• Inference: with incomplete or default reasoning methods, the conclusions drawn might not be completely accurate.
9. Uncertainty can be dealt with using
• Probability theory
• Truth Maintenance systems
• Fuzzy logic.
10. Probabilistic reasoning
• It is a way of knowledge representation where we apply the concept of probability to
indicate the uncertainty in knowledge.
• We combine probability theory with logic to handle the uncertainty.
• We use probability in probabilistic reasoning because it provides a way to handle the
uncertainty that is the result of someone's laziness and ignorance.
• In the real world there are many scenarios where the certainty of something is not confirmed, such as "It will rain today," "the behavior of someone in some situation," or "a match between two teams or two players."
• These are probable statements, for which we can assume that they will happen but cannot be sure, so here we use probabilistic reasoning.
11. Need of probabilistic reasoning in AI
• When there are unpredictable outcomes.
• When the specifications or possibilities of predicates become too large to handle.
• When an unknown error occurs during an experiment.
• In probabilistic reasoning, there are two ways to solve problems with uncertain
knowledge:
• Bayes' rule
• Bayesian Statistics
12. Probability
• Probability can be defined as the chance that an uncertain event will occur. It is a numerical measure of the likelihood that an event will occur. The value of a probability always lies between 0 and 1, the two extremes of complete uncertainty and complete certainty.
• 0 ≤ P(A) ≤ 1, where P(A) is the probability of an event A.
• P(A) = 0 indicates total uncertainty in an event A.
• P(A) = 1 indicates total certainty in an event A.
13. Probability
• P(¬A) = probability of event A not happening.
• P(¬A) + P(A) = 1.
• Event: Each possible outcome of a variable is called an event.
• Sample space: The collection of all possible events is called the sample space.
• Random variables: Random variables are used to represent events and objects in the real world.
• Prior probability: The prior probability of an event is the probability computed before observing new information.
• Posterior probability: The probability calculated after all evidence or information has been taken into account. It is a combination of the prior probability and the new information.
14. Axioms of Probability
• Three axioms of probability form the foundation of probability theory.
• Axiom 1: Probability of an Event
• The probability of an event is always between 0 and 1. A value of 1 indicates that some outcome of the event will definitely occur, and 0 indicates that no outcome of the event is possible.
• Axiom 2: Probability of the Sample Space
• The probability of the entire sample space is 1.
• Axiom 3: Mutually Exclusive Events
• The probability of an event containing any possible outcome of two mutually disjoint events is the sum of their individual probabilities.
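A minimal sketch of these three axioms in Python on a finite sample space (the fair-die example and the prob helper are illustrative assumptions, not from the slides):

from fractions import Fraction

# Hypothetical example: a fair six-sided die as a finite sample space.
sample_space = {1, 2, 3, 4, 5, 6}
P = {outcome: Fraction(1, 6) for outcome in sample_space}   # uniform distribution

def prob(event):
    # Probability of an event, i.e. a subset of the sample space.
    return sum(P[o] for o in event)

A = {2, 4, 6}   # "roll an even number"
B = {1, 3}      # disjoint from A

assert 0 <= prob(A) <= 1                   # Axiom 1: 0 <= P(A) <= 1
assert prob(sample_space) == 1             # Axiom 2: P(sample space) = 1
assert prob(A | B) == prob(A) + prob(B)    # Axiom 3: disjoint events add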
15. Probability of Event
• The first axiom of probability is that the probability of any event is between 0 and 1.
• The probability of an event is the number of outcomes in the event divided by the number of outcomes in the sample space.
• Since the event is a subset of the sample space, the event cannot have more outcomes than the sample space.
• Clearly, this value lies between 0 and 1, since the denominator is always greater than or equal to the numerator.
16. Probability of Sample Space
• The second axiom is that the probability of the entire sample space equals 1: P(S) = 1.
17. Mutually Exclusive Event
• When there is nothing in common between two events A and B, they are called mutually exclusive events.
• Mutually exclusive events cannot occur together; they have no common outcomes, so their intersection is empty (zero/null). We can represent such events as: P(A ⋀ B) = 0, and hence P(A ∨ B) = P(A) + P(B).
18. Conditional probability
• Conditional probability is the probability of an event occurring given that another event has already happened.
• Suppose we want to calculate the probability of event A when event B has already occurred, "the probability of A under the condition B". It can be written as:
• P(A|B) = P(A ⋀ B) / P(B), where P(A ⋀ B) = joint probability of A and B,
• and P(B) = marginal probability of B.
• If the probability of A is given and we need to find the probability of B, then it is given as:
• P(B|A) = P(A ⋀ B) / P(A)
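A minimal sketch of this formula in Python (the helper name and the sample numbers are illustrative assumptions, not from the slides):

# Conditional probability from a joint and a marginal probability.
def conditional(p_joint, p_conditioning):
    # P(A|B) = P(A ⋀ B) / P(B); the conditioning event must have positive probability.
    if p_conditioning <= 0:
        raise ValueError("the conditioning event must have positive probability")
    return p_joint / p_conditioning

p_a_and_b = 0.12   # P(A ⋀ B), assumed
p_b = 0.30         # P(B), assumed
print(conditional(p_a_and_b, p_b))   # P(A|B) ≈ 0.4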
20. Example
• In a class, 70% of the students like English and 40% of the students like both English and Mathematics. What percentage of the students who like English also like Mathematics?
• Let A be the event that a student likes Mathematics and B be the event that a student likes English.
• P(A|B) = P(A ⋀ B) / P(B) = 0.40 / 0.70 ≈ 0.57
• Hence, about 57% of the students who like English also like Mathematics.
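The same arithmetic as a short Python check (the variable names are illustrative):

p_english = 0.70              # P(B): student likes English
p_english_and_math = 0.40     # P(A ⋀ B): student likes both
p_math_given_english = p_english_and_math / p_english
print(round(p_math_given_english, 2))   # 0.57, i.e. about 57%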
22. Bayes' theorem
• Bayes' theorem is also known as Bayes' rule, Bayes' law, or Bayesian reasoning, which
determines the probability of an event with uncertain knowledge.
• In probability theory, it relates the conditional probability and marginal probabilities of
two random events.
• Bayes' theorem was named after the British mathematician Thomas Bayes. The Bayesian
inference is an application of Bayes' theorem, which is fundamental to Bayesian statistics.
• It is a way to calculate the value of P(B|A) with the knowledge of P(A|B).
• Bayes' theorem allows us to update the probability estimate of an event by observing new information about the real world.
23. Example
• If the likelihood of cancer depends on a person's age, then by using Bayes' theorem we can determine the probability of cancer more accurately with the help of age.
• Bayes' theorem can be derived using the product rule and the conditional probability of event A with known event B:
• From the product rule we can write: P(A ⋀ B) = P(A|B) P(B)
• Similarly, for the probability of event B with known event A: P(A ⋀ B) = P(B|A) P(A)
• Equating the right-hand sides of both equations, we get:
• P(A|B) = P(B|A) P(A) / P(B)   ...(a)
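A small numerical sanity check of equation (a) in Python, using an assumed joint distribution over two binary events (the numbers are illustrative, not from the slides):

# Assumed joint distribution over two binary events A and B.
joint = {(True, True): 0.10, (True, False): 0.20,
         (False, True): 0.30, (False, False): 0.40}

p_a = sum(v for (a, b), v in joint.items() if a)   # P(A)
p_b = sum(v for (a, b), v in joint.items() if b)   # P(B)
p_a_and_b = joint[(True, True)]                    # P(A ⋀ B)

p_a_given_b = p_a_and_b / p_b   # from P(A ⋀ B) = P(A|B) P(B)
p_b_given_a = p_a_and_b / p_a   # from P(A ⋀ B) = P(B|A) P(A)

# Equation (a): P(A|B) = P(B|A) P(A) / P(B)
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12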
24. Example
• The above equation (a) is called Bayes' rule or Bayes' theorem. This equation is the basis of most modern AI systems for probabilistic inference.
• It shows the simple relationship between joint and conditional probabilities. Here,
• P(A|B) is known as the posterior, which we need to calculate; it is read as the probability of hypothesis A given that evidence B has been observed.
• P(B|A) is called the likelihood: assuming the hypothesis is true, we calculate the probability of the evidence.
• P(A) is called the prior probability, the probability of the hypothesis before considering the evidence.
• P(B) is called the marginal probability, the probability of the evidence on its own.
• In equation (a), P(B) can in general be written as P(B) = Σi P(Ai) P(B|Ai), hence Bayes' rule can be written as:
• P(Ai|B) = P(B|Ai) P(Ai) / Σk P(Ak) P(B|Ak)
• where A1, A2, A3, ..., An is a set of mutually exclusive and exhaustive events.
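A minimal Python sketch of this generalized form, using an assumed three-event partition (all numbers are illustrative):

# Hypothetical partition A1..A3 (mutually exclusive and exhaustive).
priors = {"A1": 0.5, "A2": 0.3, "A3": 0.2}        # P(Ai)
likelihoods = {"A1": 0.9, "A2": 0.5, "A3": 0.1}   # P(B|Ai)

# Total probability: P(B) = sum_i P(Ai) * P(B|Ai)
p_b = sum(priors[a] * likelihoods[a] for a in priors)

# Generalized Bayes' rule: P(Ai|B) = P(B|Ai) * P(Ai) / P(B)
posteriors = {a: likelihoods[a] * priors[a] / p_b for a in priors}
print(posteriors)   # the posterior probabilities sum to 1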
25. Applying Bayes' rule
• Bayes' rule allows us to compute the single term P(B|A) in terms of P(A|B), P(B), and P(A).
• This is very useful in cases where we have good estimates of these three terms and want to determine the fourth one.
• Suppose we want to perceive the effect of some unknown cause and want to compute that cause; then Bayes' rule becomes:
• P(cause|effect) = P(effect|cause) P(cause) / P(effect)
26. Example-1
• A doctor is aware that the disease meningitis causes a patient to have a stiff neck 80% of the time. He is also aware of some more facts, which are given as follows:
• The known probability that a patient has meningitis is 1/30,000.
• The known probability that a patient has a stiff neck is 2%.
• Let a be the proposition that the patient has a stiff neck and b be the proposition that the patient has meningitis. Then we can calculate the following:
• P(a|b) = 0.8
• P(b) = 1/30000
• P(a) = 0.02
• P(b|a) = P(a|b) P(b) / P(a) = (0.8 × 1/30000) / 0.02 ≈ 0.00133 = 1/750
• Hence, we can assume that about 1 in 750 patients with a stiff neck has meningitis.
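The same calculation as a short Python sketch (mirroring the numbers above; variable names are illustrative):

# P(b|a) = P(a|b) * P(b) / P(a)
p_stiff_given_meningitis = 0.8   # P(a|b)
p_meningitis = 1 / 30000         # P(b)
p_stiff_neck = 0.02              # P(a)
p_meningitis_given_stiff = p_stiff_given_meningitis * p_meningitis / p_stiff_neck
print(p_meningitis_given_stiff)  # ≈ 0.00133, i.e. about 1 in 750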
27. Question
From a standard deck of playing cards, a single card is drawn. The probability that the card is a king is 4/52. Calculate the posterior probability P(King|Face), i.e. the probability that a drawn face card is a king.
• P(King): probability that the card is a king = 4/52 = 1/13
• P(Face): probability that the card is a face card = 12/52 = 3/13
• P(Face|King): probability that the card is a face card given that it is a king = 1
• Putting all the values into Bayes' rule, we get: P(King|Face) = P(Face|King) P(King) / P(Face) = (1 × 1/13) / (3/13) = 1/3
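A quick check of this result in Python using exact fractions (the variable names are illustrative):

from fractions import Fraction

p_king = Fraction(4, 52)             # P(King)
p_face = Fraction(12, 52)            # P(Face): J, Q, K of each of the four suits
p_face_given_king = Fraction(1)      # P(Face|King): every king is a face card
p_king_given_face = p_face_given_king * p_king / p_face
print(p_king_given_face)             # 1/3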
28. Application of Bayes' theorem in Artificial intelligence
• Bayes' theorem is widely used in the fields of machine learning and AI,
• including its use in a probability framework for fitting a model to a training dataset, referred to as maximum a posteriori (MAP) estimation, and in developing models for classification predictive modeling problems such as the Bayes Optimal Classifier and Naive Bayes.
• It is used to calculate the next step of a robot when the already executed step is given.
• Bayes' theorem is helpful in weather forecasting.
• It can solve the Monty Hall problem (see the simulation sketch below).
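As an illustration of the last point, here is a small Monte Carlo sketch of the Monty Hall problem in Python (an assumed simulation, not part of the original slides); switching wins about 2/3 of the time, which matches the Bayesian analysis:

import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(monty_hall_trial(True) for _ in range(trials)) / trials)    # ≈ 0.667 (switch)
print(sum(monty_hall_trial(False) for _ in range(trials)) / trials)   # ≈ 0.333 (stay)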