
ARTIFICIAL INTELLIGENCE

CLASS 10

AI PROJECT CYCLE
AI Project Cycle
Make a greeting card for your mother for her birthday. Let's see some ideas:
1. You might go online and check out some videos.
2. After finalising the design, make a list of things that are required.
3. Check if you have the material with you or not. If not, you would go out and get
all the items ready.
4. Once you have everything with you, you would start making the card.
5. If you make a mistake in the card somewhere which cannot be rectified, you
would start remaking it.
6. Once the greeting card is made, you would gift it to your mother.
Are these steps relatable?
Do you think your steps might differ?
Similarly, the AI Project Cycle provides us with an appropriate framework which
can lead us towards our goal. The AI Project Cycle mainly has 5 stages:
● Problem scoping – understanding the problem and the parameters related to it.
● Data acquisition – collecting data from various reliable and authentic
sources.
● Data exploration – arranging the data and finding patterns in it.
● Modelling – selecting models which give a suitable output, developing the
algorithm, and testing the models to figure out which is the most efficient one.
● Evaluation – testing the trained model and calculating its performance.
Finally, after evaluation, the project cycle is complete and what
you get is your AI project.
1. Problem Scoping
Identifying the problem and having a vision to solve it is
what Problem Scoping is about. Take a look at the Sustainable
Development Goals.

17 goals have been announced by the United Nations, which are termed
the Sustainable Development Goals. The aim is to achieve these goals
by the end of 2030.

https://www.youtube.com/watch?v=M-iJM02m_Hg
Many goals correspond to problems which we might observe around us too.
One should look for such problems and try to solve them, as this would make
many lives better. Scoping a problem is not that easy, as we need a
deeper understanding of it so that the picture becomes clearer while we
are working to solve it.

The 4Ws Problem Canvas helps in identifying the key elements related to the
problem.
Who?
This block helps you analyse the people getting affected directly or
indirectly by the problem. Find out who the 'Stakeholders' of this problem are
and what you know about them.
Stakeholders are the people who face this problem and would
benefit from the solution.
What?
What is the problem, and how do you know that it is a
problem?
Gather evidence to prove that the problem you have selected actually exists:
newspaper articles, media reports, announcements, etc.
Where?
Now that you know who is associated with the problem and what the problem
actually is, you need to focus on the context/situation/location of the
problem. This block will help you look into the situation in which the
problem arises, the context of it, and the locations where it is prominent.
Why?
Now that you know what is to be solved and where the solution will be
deployed, think in the "Why" canvas about the benefits which the
stakeholders would get from the solution and how it would benefit them
as well as society.
After filling the 4Ws Problem Canvas, you now need to summarise all the
cards into one template. The Problem Statement Template helps us summarise
all the key points into one single template, so that whenever there is a
need in future to look back at the basis of the problem, we can take a look
at the Problem Statement Template and understand its key elements.
2. Data Acquisition
Data means a piece of information, or facts and statistics collected together for reference or
analysis. Whenever we want an AI project to be able to predict an output, we need to train it first using
data.
For example, to predict the salary of an employee based on his previous salaries, you would feed the
data of his previous salaries into the machine. This is the data with which the machine is trained.
Once it is ready, it will predict his next salary efficiently.
The previous salary data here is known as Training Data and the next salary prediction data set is
known as the Testing Data.
Training data needs to be relevant and authentic. In the previous example, if the training data had not
been his previous salaries but his expenses, the machine would have been trained on irrelevant data.
And if the previous salary data was not authentic, the prediction could have gone wrong.
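To make the idea concrete, here is a minimal sketch of training on past salaries and then predicting the next one. The figures and the choice of a simple linear model are assumptions made purely for illustration:

```python
# A minimal sketch of training data vs. testing data, with made-up
# salary figures (hypothetical numbers, not from the document).
import numpy as np
from sklearn.linear_model import LinearRegression

# Training data: year of employment vs. salary drawn that year
years = np.array([[1], [2], [3], [4], [5]])
salaries = np.array([30000, 33000, 36500, 40000, 44000])

model = LinearRegression()
model.fit(years, salaries)               # the machine trains on past salaries

# Testing: ask the trained model to predict the 6th-year salary
print(model.predict(np.array([[6]])))    # approx. 47200
```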
Data Features

Data features refer to the type of data you want to collect.


In the previous example, the data features would be salary amount, increment
percentage, increment period, bonus, etc.
There are various ways to collect data:
● Sometimes you acquire data from random websites. Such data might not
be authentic, as its accuracy cannot be proved.
● Find a reliable source of data from which some authentic information
can be taken.
● Collect open-sourced data, not someone's property. One of the most
reliable and authentic sources of information is the open-sourced
websites hosted by the government. These government portals have
general information collected in a suitable format which can be
downloaded and used wisely.
● Some government portals are data.gov.in and india.gov.in.
3. Data Exploration
Data exploration is the process of exploring a large dataset in an unstructured way. It
can use a combination of manual methods and automated tools such as data
visualizations, charts, and initial reports.

Example

If you go to the library and pick up a random book, you first try to go
through its content quickly by turning the pages and reading the description
before borrowing it, because this helps you understand whether the book is
appropriate to your needs and interests or not.
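In code, a quick first exploration might look like the sketch below; the tiny dataset is invented purely to stand in for real acquired data:

```python
# A hedged sketch of data exploration: summaries and a quick chart.
import pandas as pd

df = pd.DataFrame({                       # made-up stand-in dataset
    "years_of_service": [1, 2, 3, 4, 5],
    "salary": [30000, 33000, 36500, 40000, 44000],
})

print(df.head())         # skim the first rows, like flipping pages
print(df.describe())     # summary statistics: ranges, averages, outliers
print(df.isna().sum())   # check whether any values are missing

# A chart often reveals a trend faster than raw numbers (needs matplotlib)
df.plot(x="years_of_service", y="salary", kind="scatter")
```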
4. Modelling
In data exploration, we have seen various types of graphical
representations which can be used for representing different parameters
of data. The graphical representation makes the data easy for humans to understand.
But when it comes to machines accessing and analysing data, they need
the data in the most basic form of numbers (which is binary – 0s and 1s),
and when it comes to discovering patterns and trends in data, the
machine goes in for mathematical representations of the same.
AI models have two kinds of approaches:
• Rule based: decision trees
• Learning based: machine learning and deep learning
Rule Based Approach
The machine follows the rules or instructions mentioned by the developer.
For example, you have a dataset which tells you about the conditions on the basis of which
one can decide if an elephant may be spotted or not while on safari.
The parameters are Outlook, Temperature, Humidity and Wind. The dataset covers the
various possibilities of these parameters and shows in which cases the elephant may be
spotted and in which it may not. You feed this data into the machine along with the rules
which tell the machine all the possibilities.
The machine trains on this data and is now ready to be tested. While testing the
machine, you tell it that Outlook = Overcast, Temperature = Normal,
Humidity = Normal and Wind = Weak. On the basis of this testing dataset, the
machine will now predict whether the elephant will be spotted or not and will display the
prediction to you. This is known as a rule-based approach.
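A rule-based approach can be pictured as nothing more than developer-written conditions. The rules in the sketch below are invented for illustration and are not the actual safari dataset:

```python
# A toy rule-based model: the machine only follows rules the
# developer wrote; it cannot learn new ones. Rules are assumptions.
def elephant_spotted(outlook, temperature, humidity, wind):
    if outlook == "Overcast":
        return True
    if outlook == "Sunny" and humidity == "Normal":
        return True
    if outlook == "Rain" and wind == "Weak":
        return True
    return False

# Testing with the conditions from the text
print(elephant_spotted("Overcast", "Normal", "Normal", "Weak"))  # True
```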
Decision trees
A decision tree is a series of nodes: a directional graph that starts at the base with a single
node and extends to the many leaf nodes that represent the categories the tree can
classify. Another way to think of a decision tree is as a flow chart, where the flow starts at
the root node and ends at the leaves; it looks like an upside-down tree. It is a rule-based AI model
which helps the machine predict what an element is with the help of various
decisions (or rules) fed to it.
This tree consists of the following components:
• Questions/conditions are the nodes or roots.
• Yes/No options represent the edges or branches.
• End actions are the leaves of the tree.
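The nodes, edges and leaves can be sketched as a nested structure that a program walks from the root down to a leaf. The questions and end actions below are made up purely to illustrate the components:

```python
# A decision tree as nested questions (nodes), Yes/No branches
# (edges) and end actions (leaves). The tree itself is invented.
tree = {
    "question": "Is the outlook overcast?",
    "yes": "Go on safari",                   # leaf
    "no": {                                  # node
        "question": "Is the wind weak?",
        "yes": "Go on safari",               # leaf
        "no": "Stay home",                   # leaf
    },
}

def decide(node, answers):
    # Follow Yes/No edges from the root until a leaf (a string) is reached.
    while isinstance(node, dict):
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node

print(decide(tree, {"Is the outlook overcast?": False,
                    "Is the wind weak?": True}))   # -> Go on safari
```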
Advantage: interpretability – it can be easily understood by
people without an analytical or mathematical background.
A drawback of this approach is that the learning is static. Once
trained, the machine does not take into consideration any
changes made in the original training dataset.
If you try testing the machine on a dataset which is different
from the rules and data you fed it at the training stage, the
machine will fail and will not learn from its mistake.
Once trained, the model cannot improve itself on the basis of
feedback.
Learning Based Approach
Relationships or patterns in the data are not defined by the developer; the machine
learns them itself. This gives it the ability to simulate intelligence.
Example
Take a dataset comprising 100 images each of apples and bananas, depicting them in
various shapes and sizes. These images are labelled, so that all apple images carry the
label 'apple' and all banana images carry the label 'banana'. The AI model is trained with
this dataset and programmed in such a way that it can distinguish between an apple image
and a banana image according to their features and can predict the label of any image
fed to it. After training, the machine is fed with testing data. The model adapts to the
features on which it has been trained and accordingly predicts whether the image is of
an apple or a banana. In this way, the machine learns by itself, adapting to the new data
which flows in.
https://www.youtube.com/watch?v=1FZ0A1QCMWc
Difference between the rule-based and learning-based approaches: https://www.youtube.com/watch?v=xtOg44r6dsE
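To see the learning-based idea in miniature, the sketch below reduces each fruit image to two hypothetical features (redness and elongation) and lets a simple algorithm, k-nearest neighbours here as one possible choice, find the pattern from the labels:

```python
# A tiny learning-based sketch: no rules are written by hand; the
# model infers the pattern from labelled examples. Features invented.
from sklearn.neighbors import KNeighborsClassifier

features = [
    [0.9, 0.2], [0.8, 0.3], [0.85, 0.25],   # apples: red and round
    [0.2, 0.9], [0.3, 0.8], [0.25, 0.85],   # bananas: yellow-ish, long
]
labels = ["apple"] * 3 + ["banana"] * 3

model = KNeighborsClassifier(n_neighbors=3)
model.fit(features, labels)            # the machine learns from the labels

print(model.predict([[0.7, 0.3]]))     # testing data -> ['apple']
```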
Supervised Learning
Supervised learning comes from the idea that an algorithm
learns from a labelled training dataset.

https://www.youtube.com/watch?v=iiaMsi7BGjY
There are two types of supervised learning: regression and classification.
https://www.youtube.com/watch?v=iAFJUCQNIFI
Numerical data can be discrete or continuous:
discrete data is counted, while continuous data is measured.
Discrete Data
Discrete data can only take certain values.
Example: the number of students in a class – we can't have half a student!
Example: the results of rolling 2 dice – only the values 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12 are possible.
Continuous Data
Continuous data can take any value (within a range).
Examples:
• A person's height: could be any value within the range of human heights, not just certain fixed heights.
• Time in a race: you could even measure it to fractions of a second.
• A dog's weight.
• The length of a leaf.
Classification
A classification problem is when the
output variable is a category, such as
"black" or "coloured", "good" or "bad".
It separates the data into multiple discrete
classes or labels.
• It answers "What class?"
• Applied when the output has finite and
discrete values.
E.g. given the age and salary of consumers,
predict whether they will be interested in
buying a house (see the sketch below).
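A hedged sketch of this example, with invented ages and salaries (in thousands) and logistic regression as one possible classifier:

```python
# Classification: the output is a discrete class (interested or not),
# not a quantity. All numbers here are invented for illustration.
from sklearn.linear_model import LogisticRegression

X = [[25, 30], [32, 60], [45, 90], [23, 25], [51, 120], [29, 40]]
y = [0, 1, 1, 0, 1, 0]           # 1 = interested in buying, 0 = not

clf = LogisticRegression()
clf.fit(X, y)

print(clf.predict([[40, 80]]))   # answers "What class?" -> e.g. [1]
```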
Regression
Regression is the process of
finding a model that predicts a
continuous value based on its input
variables.
• It answers "How much?"
• Applied when the output is a continuous
number.
• A simple regression model is the
straight line y = mx + c.
Example:
The relationship between
environmental temperature and
humidity levels, as sketched below.
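As a worked sketch of y = mx + c, the line below is fitted to invented temperature and humidity readings:

```python
# Regression: predicting a continuous value ("How much?").
import numpy as np

temperature = np.array([20, 24, 28, 32, 36])   # x values (invented)
humidity = np.array([80, 72, 65, 55, 48])      # y values (invented)

m, c = np.polyfit(temperature, humidity, 1)    # fit the line y = mx + c
print(f"humidity = {m:.2f} * temperature + {c:.2f}")
print(m * 30 + c)                              # predicted humidity at 30 degrees
```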
Unsupervised Learning
Unsupervised learning works on unlabelled datasets. Unsupervised learning models are used
to identify relationships, patterns and trends in the data fed into them. They
help the user understand what the data is about and what its major
features are, as identified by the machine.
Example
Suppose you have random data of 1000 dog images and you wish to find some pattern
in it. You would feed this data into an unsupervised learning model and
train the machine on it. After training, the machine would come up with patterns
which it was able to identify in the data. These might be patterns
already known to the user, like colour, or something very unusual,
like the size of the dogs.
Unsupervised learning models are divided into two categories: clustering and
association.
Types of Unsupervised Learning Algorithms
• Clustering: classifies the data into groups. This is the case of
customer segmentation according to what customers have purchased
(see the sketch after this list).
• Association: discovers rules within the data set. For instance,
customers who buy a car also take out insurance, which is why the
algorithm detects this rule.
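A minimal clustering sketch of the customer-segmentation case, with invented spending figures and k-means as one common clustering algorithm:

```python
# Clustering: no labels are supplied; the algorithm groups the
# customers by similarity on its own. Figures are invented.
from sklearn.cluster import KMeans

customers = [
    [200, 3], [250, 4], [220, 2],        # low spend, few purchases
    [900, 20], [950, 25], [870, 18],     # high spend, many purchases
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = kmeans.fit_predict(customers)

print(groups)   # e.g. [0 0 0 1 1 1] -- two customer segments discovered
```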
Classification vs Regression vs Clustering
Clustering: "Clustering" is the process of grouping similar entities together.
The goal of this unsupervised machine learning technique is to find
similarities in the data points and group similar data points together.
Association: finding associations amongst items within large commercial
databases.
Dimensionality Reduction:
Dimensionality reduction is the process of reducing the number of features (or
dimensions) in a dataset while retaining as much information as possible. This can be
done for a variety of reasons, such as to reduce the complexity of a model, to improve the
performance of a learning algorithm, or to make it easier to visualize the data.
Dimensionality reduction is a key technique within unsupervised learning.
We humans are able to visualise only up to 3 dimensions, but according to many theories and
algorithms, various entities exist beyond 3 dimensions. For example, in Natural
Language Processing, words are considered to be N-dimensional entities, which means
we cannot visualise them, as they exist beyond our visualisation ability. Hence, to make sense
of them, we need to reduce their dimensions; this is where a dimensionality reduction algorithm
is used. As we reduce the dimensions of an entity, the information it contains starts getting
distorted.
For example, a ball in our hand is 3-dimensional. But if we take its
picture, the data transforms to 2-D, as an image is a 2-dimensional entity. As soon as we
drop one dimension, at least 50% of the information is lost, as we no longer know about the
back of the ball: was it the same colour at the back, or was it just a
hemisphere? If we reduce the dimensions further, more and more information is lost.
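A short sketch of dimensionality reduction using PCA (one widely used algorithm for it) on randomly generated 3-D points:

```python
# Dimensionality reduction: squash 3-D data to 2-D while keeping
# as much information (variance) as possible -- like photographing
# the ball. The points are randomly generated stand-ins.
import numpy as np
from sklearn.decomposition import PCA

points_3d = np.random.rand(100, 3)          # 100 made-up 3-D points

pca = PCA(n_components=2)
points_2d = pca.fit_transform(points_3d)    # drop one dimension

print(points_2d.shape)                      # (100, 2)
print(pca.explained_variance_ratio_)        # how much information survived
```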
REINFORCEMENT LEARNING
In reinforcement learning, a program (the “agent”) interacts with an environment
dynamically, making choices for its next course of action. Based on the current state of
the environment, the positive and negative rewards, and actions taken, the agent
must learn the best method to accomplish the task.
Here are some important terms used in reinforcement learning:
Agent: an assumed entity which performs actions in an environment to gain some
reward.
Environment (e): a scenario that an agent has to face.
Reward (R): an immediate return given to an agent when it performs a specific
action or task.
State (s): the current situation returned by the environment.
Punishment: a negative reward.
Reinforcement learning: https://youtu.be/TnUYcTuZJpM
Example: A robot takes a big step
forward; however, it loses balance
and falls. The next time, it takes a
smaller step and is able to hold its
balance. The robot tries variations like
this many times; eventually, it learns the
right size of steps to take and walks
steadily.
The robot learns how to walk based on
rewards (staying in balance) and
punishments (falling down). This
feedback is considered "reinforcement"
for doing or not doing an action.
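The reward-and-punishment loop can be sketched as a toy program; the "environment" below, in which only small steps keep balance, is an invented stand-in for real physics:

```python
# A toy reinforcement-learning loop for the walking robot: try an
# action, receive a reward or punishment, and update its value.
import random

step_sizes = [1, 2, 3, 4, 5]              # actions the agent can take
value = {s: 0.0 for s in step_sizes}      # learned value of each action

def reward(step):
    return 1 if step <= 2 else -1         # +1 for balance, -1 for falling

for episode in range(200):
    if random.random() < 0.1:             # sometimes explore a new step
        step = random.choice(step_sizes)
    else:                                 # otherwise exploit the best so far
        step = max(step_sizes, key=value.get)
    value[step] += 0.1 * (reward(step) - value[step])

print(max(step_sizes, key=value.get))     # the step size it learned to prefer
```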
5. Evaluation
Once a model has been made and trained, it needs to go
through proper testing so that one can calculate its efficiency and
performance. Hence, the model is tested with the help of
Testing Data (which was separated out of the acquired dataset at the Data
Acquisition stage), and the efficiency of the model is calculated on the
basis of evaluation parameters such as accuracy, precision, recall and F1 score.
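As a small illustration, accuracy (one of the parameters above) is simply the share of testing examples the model gets right; the labels below are invented:

```python
# Evaluating a trained model on testing data using accuracy.
actual    = ["spam", "spam", "ham", "ham", "spam", "ham"]   # true labels
predicted = ["spam", "ham",  "ham", "ham", "spam", "spam"]  # model output

correct = sum(a == p for a, p in zip(actual, predicted))
accuracy = correct / len(actual)
print(f"Accuracy: {accuracy:.0%}")   # 4 of 6 correct -> 67%
```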
Neural Networks
A neural network is system software or hardware that
works in a way similar to the neurons of the human brain.
Neural networks are loosely modelled after how neurons in the human
brain behave. Their key advantage is that they are
able to extract data features automatically, without needing input from
the programmer.
They are a fast and efficient way to solve problems for which the
dataset is very large, such as with images.

https://www.youtube.com/watch?v=rEDzUT3ymw4
https://www.youtube.com/watch?v=vpOLiDyhNUA
Artificial Intelligence, Machine Learning, Deep Learning
Machine Learning (ML)
Machine learning is an application of AI.
It provides systems the ability to automatically learn and improve from
experience without being explicitly programmed. So instead of writing
the code yourself, you feed data to a generic algorithm and the machine
builds its logic based on the given data.
Machine learning focuses on the development of computer programs
that can access data and use it to learn for themselves.

https://www.youtube.com/watch?v=hvTomFHBEKo
Advantages:
• Easily identifies trends and patterns: it can review large amounts of data
and discover trends and patterns in it.
• No human intervention needed.
• Continuous improvement.
Disadvantages:
• Time and resources: it needs a lot of time and data to train a model.
• High error-susceptibility: machine learning is autonomous but highly
susceptible to errors. Suppose you train an algorithm with datasets
small enough not to be inclusive; you end up with biased predictions
coming from a biased training set.
• Can't offer rational reasons: machine learning systems can't offer
rational reasons for a particular prediction or decision. They are limited
to answering questions rather than posing them.
Machine Learning Examples from Day-to-Day Life
• Virtual personal assistants (Alexa, Siri, Google Assistant)
• Social media services
• Email spam filtering
• Online customer support
• Online transportation networks
• Search engine result refining
• Handwriting recognition
• Speech recognition
Deep Learning [deep neural learning or deep neural network]
Deep learning is an AI function that imitates the workings of the human
brain in processing data and creating patterns for use in decision making.

https://www.youtube.com/watch?v=6M5VXKLf4D4
Real-World Applications of Deep Learning
• Computer vision: self-driving cars
• Virtual assistants
• Entertainment (Netflix)
• Visual recognition
Neural networks are loosely modelled after how neurons in the human brain
behave. A neural network is essentially a system of organizing machine
learning algorithms to perform certain tasks. It is a fast and efficient
way to solve problems for which the dataset is very large, such as with images.
A Neural Network is divided into multiple layers and each layer is
further divided into several blocks called nodes. Each node has its own
task to accomplish which is then passed to the next layer. The first layer
of a Neural Network is known as the input layer.

The job of an input layer is to acquire data and feed it to the Neural
Network; no processing occurs at the input layer. Next to it are the
hidden layers, which is where all of the processing occurs. Their name
essentially means that these layers are hidden and not visible to the user.
Each node of these hidden layers has its own machine learning algorithm which it
executes on the data received from the input layer. The processed output is then fed
to the subsequent hidden layer of the network. There can be multiple hidden layers
in a neural network system and their number depends upon the complexity of the
function for which the network has been configured. Also, the number of nodes in
each layer can vary accordingly. The last hidden layer passes the final processed
data to the output layer, which then gives it to the user as the final output. Like
the input layer, the output layer does not process the data it acquires; it is
meant for the user interface.
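The flow of data through these layers can be sketched as below; the weights are random stand-ins rather than a trained network:

```python
# Data flowing input layer -> hidden layer -> output layer.
import numpy as np

def relu(x):
    return np.maximum(0, x)          # a common node activation

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # 3 inputs feed 4 hidden nodes
W_output = rng.normal(size=(4, 1))   # 4 hidden nodes feed 1 output

x = np.array([0.5, -0.2, 0.8])       # input layer: only receives data

hidden = relu(x @ W_hidden)          # hidden layer: all processing here
output = hidden @ W_output           # output layer: hands result to user
print(output)
```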
These are some of the key features of a Neural Network.
AI in our daily life
https://www.youtube.com/watch?v=Y46zXHvUB1s
THANK YOU
