AI
- **Artificial Intelligence** is the branch of computer science that focuses on creating systems capable
of performing tasks that require human intelligence. These tasks include learning, reasoning, problem-
solving, understanding natural language, perception, and decision-making.
- AI systems can be categorized into **narrow AI**, which is designed for specific tasks (e.g., facial
recognition, voice assistants), and **general AI**, which aims to mimic human intelligence across a
broad range of tasks (though this is still theoretical).
- AI combines computational models, algorithms, and data to create systems that exhibit traits such as
adaptability, learning from experience, and autonomous decision-making.
- The concept of AI began with the development of computers, where scientists like **Alan Turing**
proposed machines that could mimic human reasoning. In 1950, Turing proposed the **Turing Test**, a
way to evaluate a machine's ability to exhibit intelligent behavior indistinguishable from a human.
- AI as a field was formally established at the **Dartmouth Conference** in 1956, organized by **John
McCarthy**, **Marvin Minsky**, and others. This conference coined the term "Artificial Intelligence"
and set the stage for future research.
- Early AI research was highly optimistic, with the belief that creating human-like intelligence was
possible in a short time. Programs like **Logic Theorist** and **General Problem Solver**
demonstrated AI’s potential to solve mathematical and logical problems.
- The rise of **machine learning** in the late 1990s and 2000s, and of **deep learning** in the
2010s, along with increased computational power, massive datasets, and breakthroughs in neural
networks, led to the resurgence of AI.
1. **Mathematics**:
- AI heavily relies on areas like **probability**, **statistics**, and **linear algebra**. For instance,
machine learning models depend on probability theory for making decisions under uncertainty.
2. **Computer Science**:
- The development of algorithms, data structures, and computational theory are central to AI. These
elements enable the efficient processing of information, which is crucial for tasks like search
optimization, knowledge representation, and decision-making.
3. **Neuroscience and Cognitive Science**:
- AI draws inspiration from the structure and functioning of the human brain. Understanding how
humans learn, perceive, and reason informs the development of AI algorithms.
4. **Linguistics**:
- Natural language processing (NLP), a branch of AI, is rooted in linguistics. It helps computers
understand, interpret, and generate human languages.
5. **Philosophy**:
- AI's theoretical foundation explores questions related to intelligence, consciousness, and ethics,
drawing heavily from philosophy. Concepts like reasoning, decision-making, and learning are examined
from an AI perspective.
6. **Control Theory and Cybernetics**:
- These fields contribute to AI’s understanding of feedback mechanisms, optimization, and stability,
which are important in designing intelligent control systems.
- **Intelligent Agents** are autonomous entities that observe and act upon an environment to achieve
specific goals. They can sense their environment through sensors, make decisions using reasoning or
learning algorithms, and act on the environment through actuators.
3. **Learning and Adaptation**: They can learn from past experiences and improve performance over
time.
- Examples include:
- **Simple Reflex Agents**: Act based on the current situation (e.g., thermostats).
- **Model-Based Agents**: Use internal models to predict and make informed decisions (e.g., self-
driving cars).
- **Learning Agents**: Continuously improve their behavior based on past experiences (e.g.,
recommendation systems).
- AI systems can process vast amounts of data and find patterns that are difficult or impossible for
humans to detect (e.g., in finance, healthcare, and marketing).
3. **Improved Decision-Making**:
- AI algorithms can support better decision-making by analyzing trends and predicting future outcomes
(e.g., fraud detection, medical diagnostics).
4. **Enhancing Personalization**:
- AI-driven personalization can enhance user experiences by tailoring recommendations (e.g., Netflix,
Amazon).
5. **Availability**:
- AI systems can work 24/7 without fatigue, providing consistent service (e.g., chatbots for customer
service).
- AI is used to address issues like climate change through predictive models, healthcare improvements
via precision medicine, and disaster management.
1. **Data Dependency**:
- AI systems rely heavily on data. Without high-quality, relevant data, AI models can underperform or
make incorrect predictions.
2. **Lack of Generalization**:
- Most current AI systems are narrow in scope, excelling in specific tasks but unable to perform
generalized human-like reasoning across multiple domains.
3. **Ethical and Bias Issues**:
- AI systems can perpetuate existing biases if trained on biased data, leading to unfair outcomes (e.g.,
in hiring, criminal justice).
5. **Job Displacement**:
- Automation through AI could lead to job loss in industries like manufacturing, retail, and services,
creating economic and social challenges.
6. **Security and Privacy Concerns**:
- AI systems can be vulnerable to attacks, and their use raises concerns about data privacy, especially
in applications like surveillance and facial recognition.
### Assignment - 2
- **Uninformed Search**, also known as **blind search**, refers to search strategies that explore the
search space without any domain-specific knowledge. These algorithms do not have additional
information about the goal's location or the cost of paths leading to the goal. They only use the
information provided by the problem's structure.
- Key characteristics:
1. **No Heuristics**: Uninformed search algorithms do not use any heuristics or estimates of how far
a node is from the goal.
2. **Exploration Based on Structure**: They rely solely on exploring the structure of the state space
(e.g., through adjacency lists or graphs) to find solutions.
3. **Complete**: Most uninformed searches (e.g., BFS) guarantee that a solution will be found if one exists in a finite search space.
4. **Time-Consuming**: Since these searches don't use additional information, they may explore
many unnecessary paths before finding the solution.
- Examples of uninformed search include Breadth-First Search (BFS), Depth-First Search (DFS), and Uniform-Cost Search.
1. **Breadth-First Search (BFS)** is an uninformed search algorithm that explores all the nodes at the
present depth level before moving on to the nodes at the next depth level. It systematically expands the
shallowest nodes first, guaranteeing that it finds the shortest path in an unweighted graph.
2. **Algorithm Steps**:
1. **Initialize the Queue**: Start by placing the root (or starting node) into a queue.
2. **Dequeue Node**: Remove the node from the front of the queue and mark it as "visited."
3. **Expand Node**: Add all of its unvisited child nodes to the queue (if not already visited or in the
queue).
4. **Repeat**: Continue dequeuing nodes, visiting them, and enqueuing their unvisited children until
the queue is empty or the goal node is found.
3. **Key Features**:
- **Completeness**: BFS is complete, meaning it will always find a solution if one exists.
- **Optimality**: BFS is optimal in terms of path length if all edges have equal cost, as it finds the
shallowest (or shortest) solution first.
- **Time and Space Complexity**: BFS requires large memory space, as it stores all nodes at each
depth level. The time and space complexity are both **O(b^d)**, where **b** is the branching factor and **d** is the depth of the shallowest solution.
- Suppose you are searching for a specific number in a graph where each node represents a number.
BFS will first examine all numbers at level 1, then level 2, and so on, until it finds the desired number.
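The steps above can be sketched in Python; the adjacency-list `graph` below is a made-up example for illustration, not from the text:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list graph.
    Returns the shortest path (fewest edges) from start to goal, or None."""
    queue = deque([[start]])      # queue of partial paths, shallowest first
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:          # goal test on dequeue
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:   # skip nodes already visited or queued
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
bfs(graph, "A", "D")  # ["A", "B", "D"]
```

Because the queue is first-in-first-out, all depth-1 nodes are expanded before any depth-2 node, which is what guarantees the shallowest solution is found first.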
1. **Hill Climbing** is a heuristic search algorithm that continuously moves towards the direction of
increasing value, as defined by a heuristic function. It seeks to improve its position by iteratively
choosing the neighboring state with the highest value until it reaches a peak, which may be a local
maximum, plateau, or global maximum.
2. **Algorithm Steps**:
1. **Initial State**: Begin from an arbitrary starting state (the current node).
2. **Evaluation**: Calculate the value of the current state using a heuristic or evaluation function.
3. **Generate Neighbors**: Expand the current node and evaluate the neighboring nodes
(successors).
4. **Move to Neighbor**: Move to the neighbor with the highest value, provided it offers an
improvement over the current state.
5. **Repeat**: Continue this process until no better neighboring state is found (local or global
maximum).
3. **Variants**:
- **Simple Hill Climbing**: Chooses the first better neighbor without comparing all options.
- **Steepest Ascent Hill Climbing**: Examines all neighbors and chooses the one with the highest
increase in value.
- **Stochastic Hill Climbing**: Chooses a random better neighbor rather than the best one.
4. **Limitations**:
- **Local Maxima**: The algorithm can get stuck at local maxima (points higher than surrounding
neighbors but not the highest overall).
- **Plateaus**: Large flat regions where all neighbors have the same value, leading to inefficiency.
- **Ridges**: Narrow areas that are difficult to climb because progress can only be made by moving
indirectly.
5. **Example**:
- Consider a terrain map with varying altitudes. Hill climbing would move from one point to
another based on increasing altitude until it reaches the highest reachable point (which may be a
local, not global, maximum).
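A minimal steepest-ascent sketch in Python, using a hypothetical one-dimensional altitude function with a single peak at x = 3:

```python
def hill_climb(evaluate, start, neighbors):
    """Steepest-ascent hill climbing: repeatedly move to the best-scoring
    neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=evaluate)
        if evaluate(best) <= evaluate(current):
            return current        # a peak (possibly only a local maximum)
        current = best

# Toy 1-D landscape with a single peak at x = 3: altitude(x) = -(x - 3)^2
altitude = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]   # neighbors: one unit left or right
hill_climb(altitude, 0, step)     # 3
```

On this convex landscape the climb always reaches the global peak; with multiple peaks the same loop would stop at whichever local maximum it reaches first.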
1. **Informed Search**:
- Uses domain-specific knowledge or heuristics to guide the search process towards the goal more
efficiently.
- It can prioritize nodes that are more likely to lead to the goal, reducing time and space complexity.
- **Example**: A* Algorithm uses heuristics to estimate the distance from the current state to the
goal.
2. **Uninformed Search**:
- Does not use any problem-specific information beyond the definition of the state space.
- It explores the search space blindly, examining all possible paths systematically.
- **Example**: Breadth-First Search (BFS) explores all nodes level by level without any knowledge of
the goal's location.
3. **Key Differences**:
1. **Knowledge**:
- Informed search uses problem-specific knowledge (heuristics), while uninformed search relies only on the problem definition.
2. **Efficiency**:
- Informed search is typically more efficient and faster than uninformed search due to guidance from
heuristics.
3. **Optimality**:
- Both can be optimal, but informed searches can handle more complex spaces efficiently by focusing
on the most promising paths.
1. **Best-First Search** is an informed search algorithm that uses a priority queue to explore the
most promising node first, based on a given evaluation function. It expands the node that appears to be the
"best" according to a heuristic function.
2. **Algorithm Steps**:
1. **Initialization**: Start by adding the initial node to a priority queue, with its priority determined by
the heuristic evaluation function.
2. **Node Selection**: Dequeue the node with the highest priority (lowest heuristic cost) and expand
it.
3. **Goal Test**: If the selected node is the goal, terminate the search.
4. **Update Queue**: Add the neighboring nodes to the queue based on their heuristic values.
5. **Repeat**: Continue the process until the goal is reached or the queue is empty.
3. **Heuristic Function**:
- The heuristic function, often denoted as **h(n)**, estimates the cost of the cheapest path from the
current node **n** to the goal.
4. **Example**:
- Imagine a robot navigating a city to reach a specific location. The Best-First Search would choose the
roads that seem closest to the destination (based on straight-line distance or travel time) until it reaches
the goal.
5. **Advantages**:
- Best-First Search is faster and more efficient than uninformed searches because it uses the heuristic
to focus the search on the most promising areas of the search space.
6. **Disadvantages**:
- It may not always find the optimal solution because it focuses on the node that seems best without
considering the overall cost of the path.
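A greedy best-first sketch in Python using a priority queue (`heapq`); the graph and the straight-line-style heuristic `h` are invented for illustration:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node with the
    lowest heuristic value h(n)."""
    frontier = [(h(start), start, [start])]   # (heuristic, node, path so far)
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (h(neighbor), neighbor, path + [neighbor]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}.get   # pretend straight-line estimates
best_first_search(graph, h, "S", "G")      # ["S", "A", "G"]
```

Note that the priority is h(n) alone, not the cost accumulated so far, which is exactly why the result can be non-optimal (A* fixes this by ordering on g(n) + h(n)).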
- **Key Concepts**:
4. **Equilibrium**: A situation where no player can improve their payoff by changing their strategy
unilaterally (e.g., Nash Equilibrium).
- **Applications in AI**: AI uses game theory in multi-agent systems, economic simulations, competitive
games (like chess), and collaborative settings like negotiations.
- **Alpha-Beta Pruning** is an optimization technique for the **Minimax algorithm** used in game
tree search. It reduces the number of nodes evaluated by the Minimax algorithm by pruning branches
that won't affect the final decision. This makes the search more efficient without sacrificing correctness.
- **Key Concepts**:
1. **Alpha**: The best value that the maximizer (Max player) can guarantee so far.
2. **Beta**: The best value that the minimizer (Min player) can guarantee so far.
- **Algorithm Steps**:
- If the current node is worse than a previously explored option for the opponent, stop exploring this
branch (prune it).
- **Benefits**:
- Significantly reduces the number of nodes explored, speeding up decision-making in games like chess
or checkers.
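The pruning rule can be sketched as a Minimax variant in Python; the toy game tree below (Max to move at the root, Min choosing among leaf values) is a hypothetical example:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax with alpha-beta pruning. `children(n)` yields successor nodes;
    `value(n)` scores terminal nodes (leaves)."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if beta <= alpha:   # Min has a better option elsewhere: prune the rest
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
        beta = min(beta, best)
        if beta <= alpha:       # Max has a better option elsewhere: prune the rest
            break
    return best

# Toy tree: Max picks a branch, Min then picks a leaf; integers are leaf values.
tree = {"root": ["L", "R"], "L": [3, 5], "R": [2, 9]}
children = lambda n: tree.get(n, [])
value = lambda n: n if isinstance(n, int) else 0
alphabeta("root", 3, float("-inf"), float("inf"), True, children, value)  # 3
```

In this tree, once Min finds the leaf 2 under "R", the branch is worse for Max than the 3 already guaranteed under "L", so the leaf 9 is never evaluated. The result is identical to plain Minimax.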
- **Monte Carlo Tree Search (MCTS)** is a heuristic search algorithm for decision-making in games. It
relies on random simulations to evaluate the potential of different moves and selects the move with the
highest chance of success.
- **Algorithm Steps**:
1. **Selection**: Starting from the root node, select a child node that maximizes a certain exploration-
exploitation trade-off (like UCB1).
2. **Expansion**: If the selected node is not terminal, expand it by adding one or more child nodes.
3. **Simulation**: Simulate a random playthrough (or rollout) from the expanded node until a
terminal state (win, loss, draw) is reached.
4. **Backpropagation**: Update the node's statistics (e.g., win rate) based on the simulation result,
and propagate this information back to the root.
- **Applications**:
- MCTS is widely used in complex games like **Go** or **Chess**, where the search space is vast, and
traditional methods are less efficient.
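The exploration-exploitation trade-off in the selection step is commonly scored with UCB1; a small Python helper (the constant `c = sqrt(2)` is a common default, not specified in the text):

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    """UCB1: win-rate exploitation plus an exploration bonus that decays
    as a node accumulates visits."""
    if visits == 0:
        return float("inf")   # unvisited children are always selected first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# With equal win rates, the less-visited child scores higher,
# so selection keeps probing under-explored moves.
```

During selection, MCTS descends the tree by repeatedly picking the child with the highest UCB1 score.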
1. **Computational Complexity**:
- The time and space complexity of game search algorithms like Minimax and MCTS can grow
exponentially with the size of the game tree (e.g., depth and branching factor).
2. **Imperfect Information**:
- Many real-world games involve incomplete information, which traditional game search algorithms
like Minimax can't handle well.
3. **Heuristic Dependence**:
- Performance of game search algorithms often depends on the quality of the heuristic or evaluation
function, which may not always accurately represent the true value of a game state.
4. **Pruning Errors**:
- Alpha-Beta Pruning can sometimes miss optimal moves if the evaluation function or depth of search
is not tuned correctly.
5. **Non-Optimal Play**:
- Algorithms like MCTS rely on simulations and might not guarantee optimal play, especially in games
where perfect knowledge of the future states is required.
- **Optimal Decisions** in games refer to choosing actions that maximize an agent's chances of
achieving the best possible outcome (usually winning), assuming that all players act rationally.
- **In Zero-Sum Games** (e.g., chess), the optimal strategy minimizes the opponent's best outcome
while maximizing one's own.
- **Minimax Algorithm** is used to make optimal decisions by considering the best possible moves for
the opponent at every stage and countering them accordingly. If combined with **Alpha-Beta
Pruning**, it ensures the most efficient path to the optimal solution.
---
- **Knowledge** in AI refers to the information, facts, rules, and relationships that an AI system uses to
understand, reason, and make decisions about the world. Knowledge is represented in ways that
machines can interpret and manipulate, enabling intelligent behavior.
- **Types of Knowledge**:
1. **Declarative Knowledge**:
2. **Procedural Knowledge**:
3. **Key Differences**:
- **Propositional Logic** is a formal system in AI used to represent and reason about simple statements
(propositions) that can either be true or false. It involves logical connectives like AND (∧), OR (∨), NOT
(¬), and implications (→).
- **Example**:
- Let *P* = "It is raining" and *Q* = "The ground is wet." The rule "If it is raining, then the ground is wet" is written **P → Q**.
- If *P* is true (it is raining) and the implication holds, then *Q* must also be true (the ground is wet).
- **Applications**:
- Propositional logic is used in rule-based systems, expert systems, and automated reasoning tasks.
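Material implication can be checked truth-functionally; a tiny Python sketch of the rain example above:

```python
def implies(p, q):
    """Material implication: P -> Q is logically equivalent to (not P) or Q."""
    return (not p) or q

# P = "it is raining", Q = "the ground is wet"
implies(True, True)    # True: raining and wet ground satisfy the rule
implies(True, False)   # False: raining with dry ground violates P -> Q
implies(False, False)  # True: the rule says nothing when it is not raining
```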
### 4. **What is Predicate Logic? How is it Different from Propositional Logic?**
1. **Predicate Logic**:
- **Predicate Logic** (also known as First-Order Logic) extends propositional logic by dealing with
predicates, which express properties of objects or relationships between objects, rather than whole
propositions.
- **Syntax**: It includes variables, quantifiers (∀ for "for all," ∃ for "there exists"), and predicates that
can take arguments.
**Example**: ∀x (Human(x) → Mortal(x)), i.e., "All humans are mortal."
2. **Key Differences**:
1. **Level of Expressiveness**:
- Propositional logic deals with whole statements that are either true or false.
- Predicate logic allows for the representation of complex relationships between objects and the use
of quantifiers.
2. **Syntax**:
- Predicate logic includes variables (like x) and quantifiers (like ∀, ∃), allowing for more detailed
representations.
3. **Application Scope**:
- Predicate logic is used for more complex reasoning involving objects, relationships, and their
properties.
- **Resolution Steps**:
1. **Convert to Conjunctive Normal Form (CNF)**: All propositions are rewritten as a conjunction of
disjunctions of literals.
2. **Resolve Contradictions**: For two clauses where one contains a literal and the other contains its
negation, eliminate these literals and combine the remaining parts.
- **Example**:
- Given:
- *P ∨ Q*
- *¬Q ∨ R*
- Resolving on *Q* (which appears positively in one clause and negated in the other) yields the resolvent *P ∨ R*.
- **Applications**:
- Resolution is widely used in automated theorem proving, logic programming, and in AI systems for
reasoning.
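A single resolution step can be sketched in Python, representing clauses as sets of literal strings with `~` as an assumed negation marker:

```python
def resolve(c1, c2):
    """Apply one resolution step to two clauses (sets of literal strings).
    Returns the resolvent clause, or None if no complementary pair exists."""
    for lit in c1:
        complement = lit[1:] if lit.startswith("~") else "~" + lit
        if complement in c2:
            # drop the complementary pair and union the remaining literals
            return (c1 - {lit}) | (c2 - {complement})
    return None

resolve({"P", "Q"}, {"~Q", "R"})  # {"P", "R"}
```

This reproduces the example above: from *P ∨ Q* and *¬Q ∨ R*, resolving on *Q* gives *P ∨ R*.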