
A Look Back: The Research History of Neural Networks

The concept of neural networks dates back to the early 1940s with pioneering work
by Warren McCulloch and Walter Pitts. However, limitations in computing power and
theoretical understanding hindered significant progress for decades.

Here are some key milestones in Neural Network research:

• 1950s: The Perceptron, a simple neural network model, was introduced. However, limitations were discovered that hindered its ability to solve complex problems.
• 1960s-1980s: A period of decline due to theoretical limitations and lack of
computational resources.
• 1980s onwards: A resurgence fueled by advancements in computing power
and the development of new algorithms like backpropagation, which allowed
for efficient training of more complex networks.
• Today: Deep Learning, a subfield of neural networks using many layers, has
revolutionized AI, achieving state-of-the-art performance in various
domains like speech recognition, computer vision, and natural language
processing.

Understanding these historical developments is crucial because they shaped the current state of neural networks. This history highlights the importance of continuous research and technological advancement in driving AI progress.

Model of an Artificial Neuron

Let's consider the example of determining whether to buy a smartphone based on its
features:

1. Inputs: When you're deciding whether to buy a smartphone, you consider various features like camera quality, battery life, storage capacity, and price. These are the inputs to your decision-making process.
2. Weights: Now, not all features are equally important to you. For example, you
might prioritize camera quality and battery life over storage capacity. So, you
assign weights to each feature based on their importance to you. Let's say you
assign a higher weight to camera quality and battery life and a lower weight
to storage capacity.
3. Summation Function: After assigning weights, you sum up all the weighted
inputs. Let's say you're considering a smartphone with a camera score of 8/10
(weighted by 0.7), battery life of 2 days (weighted by 0.8), storage capacity of
64GB (weighted by 0.5), and a price of $600 (weighted by -0.6, as higher prices
are less desirable). The weighted sum would be 0.7×8 + 0.8×2 + 0.5×64 − 0.6×600. This
gives you a total weighted sum.
4. Activation Function: Now, you set a threshold for what features are good
enough for you to buy the smartphone. Let's say your threshold is 20. If the
total weighted sum exceeds 20, you decide to buy the smartphone; otherwise,
you don't.
5. Output: Based on whether the total weighted sum crosses the threshold or
not, you make your decision. If it crosses the threshold, you decide to buy the
smartphone. Otherwise, you don't.

So, in this example, the artificial neuron (your decision-making process) takes in
inputs like camera quality, battery life, storage capacity, and price, weighs them
based on their importance, adds them up, compares the total to a threshold, and
then decides whether to output "buy the smartphone" or "don't buy the
smartphone."

Acting Under Uncertainty


Let's dive into the topic of "Acting under Uncertainty" within the realm of artificial intelligence.

When we talk about "uncertainty" in AI, we're referring to situations where we don't have complete information or where outcomes are not entirely predictable. This is common in real-world scenarios because the world is inherently uncertain.

Now, when an AI system needs to make decisions or take actions in such uncertain environments, it needs to be smart about it. It's like when you're playing a game and you don't know what move your opponent will make next. You have to think about all the possible moves they might make and then decide on your next move based on that uncertainty.

In AI, we use different techniques to handle this uncertainty. One important technique is called probabilistic reasoning. This involves assigning probabilities to different outcomes based on the available information. So instead of saying, "I know exactly what will happen next," the AI might say, "There's a 70% chance that this will happen and a 30% chance that that will happen."

Another key concept is decision theory. This is about figuring out the best course of action to take given the uncertainties. It's like weighing the potential risks and rewards of different actions and choosing the one that seems most promising.
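
To tie these two ideas together, here is a minimal Python sketch that assigns probabilities to outcomes (echoing the 70%/30% example above), attaches a utility to each action-outcome pair, and picks the action with the highest expected utility. The umbrella scenario, the actions, and the utility numbers are made up purely for illustration.

    # A minimal sketch of probabilistic reasoning + decision theory:
    # choose the action with the highest expected utility.
    # The 70% / 30% outcome probabilities echo the example above;
    # the actions and utility values are hypothetical.

    outcome_probs = {"rain": 0.7, "no_rain": 0.3}

    # Utility of each action under each outcome (made-up numbers).
    utilities = {
        "take_umbrella": {"rain": 8, "no_rain": 5},
        "leave_umbrella": {"rain": -10, "no_rain": 10},
    }

    def expected_utility(action):
        # Weight each outcome's utility by its probability.
        return sum(p * utilities[action][outcome] for outcome, p in outcome_probs.items())

    best_action = max(utilities, key=expected_utility)
    print(best_action, expected_utility(best_action))
    # take_umbrella: 0.7*8 + 0.3*5 = 7.1 ; leave_umbrella: 0.7*(-10) + 0.3*10 = -4.0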

To put it simply, "acting under uncertainty" in AI is all about making smart decisions when you don't have all the answers. It's about being flexible, adaptive, and able to handle whatever the world throws at you.

Unit 2
Alright, class, today we'll be diving into the fascinating world of heuristic functions in
AI. Imagine you're lost in a maze, and you need to find the exit. Exhaustively
checking every path would be slow and inefficient. That's where heuristics come in –
they're like experienced guides that help you prioritize which paths to explore first.
What is a Heuristic Function?
A heuristic function, often simply called a heuristic, is an educated guess that
estimates the cost of reaching the goal state from any given state within a problem.
Think of it as a rule of thumb that helps AI algorithms make informed decisions
during the search process.
Here's a diagram to illustrate:
     Current State (A)
            |
            V
     State B ----- State C (Goal)
          \           /
           \         /
            State D

The heuristic function, denoted by h(n), takes a state (like A, B, C, or D) as input and
outputs an estimated cost to get from that state to the goal (State C in this case).
While not always perfect, a good heuristic significantly reduces the search space by
prioritizing states closer (according to the estimate) to the goal.
How are Heuristics Calculated?
The way we calculate heuristics depends on the specific problem. Here are some
common approaches:
• Distance-based heuristics: In maze problems, we might use the Manhattan
distance (sum of the absolute differences in coordinates) between the current
state and the goal.
• Misplaced tile heuristic: For an 8-puzzle, this heuristic counts the number of
tiles out of place compared to the goal state (both of these are sketched in code after this list).
• Domain-specific knowledge: For chess, a heuristic might evaluate the
material advantage (number of pieces) or the king's safety.
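
As a rough sketch, the first two heuristics above might look like this in Python; the coordinate pairs and the flat tuple representation of the 8-puzzle are assumptions chosen just for illustration.

    # Minimal sketches of two common heuristics (the state representations are assumptions).

    def manhattan_distance(state, goal):
        # Distance-based heuristic for maze/grid problems:
        # sum of the absolute differences in x and y coordinates.
        (x1, y1), (x2, y2) = state, goal
        return abs(x1 - x2) + abs(y1 - y2)

    def misplaced_tiles(state, goal):
        # 8-puzzle heuristic: number of tiles not in their goal position
        # (the blank, represented here as 0, is not counted).
        return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

    print(manhattan_distance((1, 2), (4, 6)))            # 3 + 4 = 7
    print(misplaced_tiles((1, 2, 3, 4, 5, 6, 8, 7, 0),
                          (1, 2, 3, 4, 5, 6, 7, 8, 0)))  # 2 tiles out of place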
Why Use Heuristics?
There are two main reasons why heuristics are crucial in AI:
• Efficiency: By prioritizing states closer to the goal, heuristics significantly
reduce the number of states explored, leading to faster solutions.
• Intractability: Many real-world problems have enormous search spaces,
making it impossible to explore them all. Heuristics make these problems
tractable by guiding the search towards promising areas.
Hill Climbing Algorithm: Real-Life Use Cases

Here are some real-life use cases of hill climbing (a sketch of the basic loop follows this list):

1. Route Optimization: In navigation systems, hill climbing can be used to iteratively improve route efficiency by making local adjustments based on current traffic conditions or road closures.
2. Machine Learning: Hill climbing algorithms are used in some optimization techniques within machine learning, such as feature selection or parameter tuning in models like neural networks.
3. Network Routing: In telecommunications, hill climbing can help optimize data packet routing by dynamically adjusting routes based on network congestion or failures.
4. Game Playing: In certain types of game-playing AI, hill climbing can be employed to make local decisions, such as determining the next move in a chess game based on immediate board evaluation.
5. Financial Optimization: In financial markets, hill climbing can be applied to optimize investment portfolios by iteratively adjusting asset allocations based on local market conditions.
6. Resource Allocation: Hill climbing algorithms can be used in resource allocation problems, such as scheduling tasks in a manufacturing environment or assigning resources in project management.
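
To ground these use cases, here is a minimal, generic Python sketch of the basic hill climbing loop; the neighbours and score functions are placeholders that each specific problem (a route, a parameter vector, a portfolio allocation) would supply.

    # A minimal sketch of basic hill climbing: repeatedly move to the best
    # neighbouring solution, and stop when no neighbour improves the score.
    # `neighbours` and `score` are problem-specific placeholders.

    def hill_climb(start, neighbours, score):
        current = start
        while True:
            candidates = neighbours(current)
            if not candidates:
                return current
            best = max(candidates, key=score)
            if score(best) <= score(current):
                return current          # no neighbour improves: local optimum reached
            current = best

    # Toy usage: maximize f(x) = -(x - 3)**2 by taking integer steps of +/- 1.
    f = lambda x: -(x - 3) ** 2
    result = hill_climb(0, neighbours=lambda x: [x - 1, x + 1], score=f)
    print(result)  # 3 (a local maximum, which here is also the global one)

Because only improving moves are accepted, the loop stops at a local optimum; real systems often rerun it from several random starting points.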
