
UNIT 5 Artificial Intelligence

The document provides an overview of learning, learning agents, and various learning methods including supervised, unsupervised, and reinforcement learning. It discusses key concepts such as decision trees, information theory, and information gain, along with their applications in machine learning. Additionally, it covers statistical learning methods, neural networks, and reinforcement learning elements and processes.

Uploaded by Harshini

1. Learning

 Definition: Learning is the process of improving system performance by acquiring knowledge or skills through experience or data.
 Applications: Image recognition, language processing, autonomous vehicles.

2. Learning Agents

 Definition: Agents designed to learn and improve their performance based on interactions with their environment.
 Key Components:
o Learning Element: Responsible for improving performance.
o Performance Element: Executes the learned knowledge.
o Critic: Provides feedback for learning.
o Problem Generator: Suggests exploratory actions.

3. Classification of Learning

 Supervised Learning: Learns a function from labeled data. (e.g., Spam email
detection)
 Unsupervised Learning: Finds patterns in unlabeled data. (e.g., Clustering
customers)
 Reinforcement Learning: Learns by interacting with the environment and receiving
rewards/punishments. (e.g., Chess-playing bots)

4. Learning Elements

 Definition: Essential components that facilitate learning in a system.
 Components:
o Data (experience)
o Model (hypothesis space)
o Feedback mechanism

5. Inductive Learning Methods

 Definition: Methods that generalize from specific examples to form general rules.
 Common Techniques:
o Decision Trees
o Neural Networks
o Rule-based learning
6. Learning Decision Tree

 Definition: A tree-like model for classification or regression tasks.
 Construction Steps:
o Select the best attribute based on criteria like information gain.
o Split the dataset into subsets.
o Repeat recursively until stopping conditions are met.

7. Attribute-Based Representation

 Definition: Representation of objects or data instances using attribute-value pairs.
 Example: A car can be represented using attributes like color, brand, and engine capacity.

8. Choosing an Attribute

 Definition: Selecting the best attribute for splitting data during decision tree
construction.
 Criteria:
o Information gain
o Gini index
o Gain ratio

9. Laboratory 13: Program to Demonstrate the Precedence Properties of Operators in C Language

 Objective: To illustrate how operators in C are evaluated based on their precedence and associativity.
 Example Program:

#include <stdio.h>
int main() {
    int result = 10 + 5 * 2; // Multiplication has higher precedence than addition.
    printf("Result: %d\n", result); // Outputs 20
    return 0;
}

 Key Concepts:
o Precedence determines the order of evaluation.
o Associativity determines the direction of evaluation.
10. Decision Tree Learning

 Definition: A supervised learning method that builds a model resembling a tree structure for decision-making.
 Applications:
o Loan approval systems
o Medical diagnosis

11. Hypothesis Spaces

 Definition: The set of all possible hypotheses that a learning algorithm can consider
when searching for a solution to a problem.
 Key Points:
o A hypothesis is a proposed explanation or model that predicts outcomes based
on input features.
o The hypothesis space is determined by the model's parameters and structure
(e.g., linear models, decision trees, neural networks).

Information Theory

Definition:
Information theory is a mathematical framework for quantifying the transmission, processing,
and storage of information. It measures the uncertainty, or lack of predictability, in a dataset
and helps evaluate how much information is gained or lost in communication or decision-
making processes.

Key Concepts of Information Theory:

1. Entropy (H):
o A measure of uncertainty or randomness in a dataset.
o Higher entropy indicates higher uncertainty, while lower entropy indicates greater
predictability.
o Formula: H(X) = −∑ P(x_i) log₂ P(x_i), where P(x_i) is the probability of outcome x_i.

Example:

o A fair coin toss has two outcomes (Heads, Tails), each with a probability of 0.5:
H = −(0.5 · log₂ 0.5 + 0.5 · log₂ 0.5) = 1 bit.
2. Redundancy:
o Excess information that does not add value or meaning.
o Example: Repeating the same data multiple times during communication.
3. Mutual Information:
o Quantifies how much information two variables share.
4. Applications of Information Theory:
o Data compression (e.g., JPEG, MP3).
o Error detection and correction in communication.
o Decision-making in machine learning (e.g., feature selection).

Information Gain

Definition:
Information gain (IG) measures the reduction in uncertainty (entropy) of a dataset when it is
split based on a specific attribute. It is a key metric in decision trees for selecting the best
attribute to split the data.

Formula:
IG = H(Parent) − ∑ (|Child| / |Parent|) · H(Child)

Where:

 H(Parent): Entropy of the dataset before splitting.
 H(Child): Entropy of each subset after splitting.
 |Child| / |Parent|: Proportion of instances in the child subset.

Steps to Compute Information Gain:

1. Calculate the entropy of the entire dataset.
2. Split the dataset into subsets based on an attribute.
3. Compute the entropy of each subset.
4. Find the weighted average entropy of the subsets.
5. Subtract the weighted average entropy from the original dataset entropy.

Example:

Dataset:

 10 instances: 6 positive (Yes) and 4 negative (No).
 Entropy of parent dataset: H(Parent) = −(6/10) log₂(6/10) − (4/10) log₂(4/10) = 0.971 bits.

Split the data based on an attribute (e.g., Color: Red, Blue):

 Red subset: 4 Yes, 1 No. H(Red) = 0.721.
 Blue subset: 2 Yes, 3 No. H(Blue) = 0.971.

Weighted entropy:

H(Split) = (5/10) · 0.721 + (5/10) · 0.971 = 0.846.

Information Gain:

IG = H(Parent) − H(Split) = 0.971 − 0.846 = 0.125.

Applications of Information Gain:

1. Decision Trees:
o Used to decide the best attribute for splitting data.
2. Feature Selection:
o Helps identify the most informative features in machine learning models.
3. Classification Tasks:
o Improves prediction accuracy by prioritizing attributes that reduce uncertainty.

1. Explanation-Based Learning (EBL)

 Focuses on using prior knowledge to generalize from a single example.
 Derives a general rule from a specific example by analyzing the structure and reasoning behind it.

2. Hypothesis

 Represents assumptions about patterns or relationships in data.
 A hypothesis can be tested to validate its accuracy or usefulness in predictions.

3. Statistical Learning Methods

 Involves applying statistical models to infer patterns from data.
 Techniques: Linear regression, logistic regression, and Bayesian inference.

4. Naïve Bayes

 A probabilistic classifier using Bayes’ theorem.
 Assumes all features are independent.
 Applications: Spam filtering, text classification.

5. Laboratory 14: Program to Calculate Factorial of a Number

 A factorial of a number n is the product of all positive integers less than or equal to n.
 Example:

Input: 5
Output: 120 (5! = 5 × 4 × 3 × 2 × 1)

6. Instance-Based Learning

 Memorizes instances and uses them directly for predictions.
 Example: k-Nearest Neighbors.

7. Neural Networks

 Mimics the human brain to process data.
 Components:
o Input Layer: Accepts features.
o Hidden Layers: Perform computation.
o Output Layer: Produces predictions.
 Example: Image recognition.

8. Reinforcement Learning (RL)

 An agent interacts with the environment to learn a policy that maximizes cumulative
rewards.

9. Elements of Reinforcement Learning

 Agent: Learns to make decisions.
 Environment: Where the agent operates.
 Reward: Feedback based on the agent's actions.

10. Reinforcement Learning Problem


 A scenario where the agent aims to maximize rewards by exploring different strategies.

11. Agent-Environment Interface

 Interaction mechanism between the agent and environment.
 Includes actions taken by the agent and feedback from the environment.

12. Steps for Reinforcement Learning

1. Initialize policy and value functions.
2. Observe the environment.
3. Perform actions based on policy.
4. Update policy using rewards.

13. Problem Solving Methods for RL

 Approaches include Q-Learning, SARSA, and Deep Q-Networks.

14. Laboratory 15: Program to Implement Five House Logic Puzzle Problem

 A logic puzzle involving constraints where each house has unique attributes (e.g.,
color, nationality, pet, drink).
 Goal: Use logical reasoning to determine attributes for all houses.
