AI_slides1

Slides on artificial intelligence

Uploaded by

ik241168

Fast progression of AI

Need for Data Labeling: Supervised or Unsupervised Way
Supervised Learning:
1. Create a set of labeled data, i.e. "correct" data with both input and output information:
pictures of cars and trucks, each with the corresponding class name.

2. Feed the model with that labeled training dataset:

The Machine Learning algorithm begins to "see" patterns between input (image) and output
(class). The algorithm might learn complex relationships such as "the distance between the wheels is
larger for trucks" – note that in reality it is usually hard to interpret algorithms in such a way.

3. Test the model on unseen data and measure how accurately it predicts the class.

The term supervised learning stems from the fact that, at the start, we give the algorithm
a dataset in which the "correct answers" are provided. This is the key difference from
unsupervised learning.
Ways of Data Labeling
There are two main tasks of supervised learning:

•Regression: Predict a continuous numerical value. Example: "How long will it
take you to drive home from work, given distance, traffic, time, and day of the
week?"

•Classification: Assign a label. Example: "Is this a picture of a car or a truck?"
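The labeled-data workflow above (train on known answers, then predict on unseen inputs) can be sketched with a deliberately tiny classifier, 1-nearest-neighbour. The feature (wheel distance), its values, and the class names are made up for the example:

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# Feature values and classes are illustrative, not real measurements.

def predict_1nn(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# 1. Labeled data: (wheel_distance_m, class) pairs.
train = [(2.5, "car"), (2.7, "car"), (4.0, "truck"), (4.3, "truck")]

# 2./3. "Train" (here: simply store the data), then test on unseen inputs.
print(predict_1nn(train, 2.6))   # expect "car"
print(predict_1nn(train, 4.1))   # expect "truck"
```

Predicting a number instead of a label (e.g. the mean of the nearest neighbours) would turn the same sketch into a regression.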


Unsupervised Learning
• The model is not provided with the correct results
during training.
• Can be used to cluster the input data into classes on
the basis of their statistical properties only.
• Cluster significance and labeling.
• Labeling can be carried out even if the labels are
available only for a small number of objects
representative of the desired classes.
Reinforcement learning

In reinforcement learning, the algorithm (in this context also often referred to as an agent) learns
through trial and error using feedback from its actions. Rewards and punishments operate as
signals for desired and undesired behavior.
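A minimal sketch of this trial-and-error loop is tabular Q-learning on an assumed toy environment (a 5-state corridor with a reward only at the right end); all constants below are illustrative choices, not from the slides:

```python
import random

# Toy reinforcement learning: tabular Q-learning on a 5-state corridor.
# Actions: 0 = left, 1 = right; reward +1 only for reaching the last state.
# Behaviour here is purely random trial and error; Q-learning is off-policy,
# so it still learns the greedy (always-right) policy from that feedback.

random.seed(0)
n_states = 5
Q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action]
alpha, gamma = 0.5, 0.9                     # learning rate, discount factor

for _ in range(300):                        # episodes
    s = 0
    while s != n_states - 1:
        a = random.choice((0, 1))                    # act at random (explore)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0       # reward as feedback signal
        # Update: move Q towards reward + discounted best future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(n_states - 1)]
print(greedy)   # the learned greedy policy: always move right
```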
Examples of Supervised and Unsupervised Learning Algorithms
Decision Trees
• Decision trees are another classification method.
• A decision tree is a set of simple rules, such as "if the
sepal length is less than 5.45, classify the specimen as
setosa."
• Decision trees are also nonparametric because they do
not require any assumptions about the distribution of
the variables in each class.
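A tree of such rules can be written directly as nested conditionals. The sketch below reuses the slide's sepal-length rule and adds an assumed petal-width split purely for illustration:

```python
# Hand-written decision tree in the slide's rule style.
# The 5.45 cm sepal-length threshold is the slide's example rule;
# the 1.75 cm petal-width split is an assumed second rule.

def classify_iris(sepal_length, petal_width):
    if sepal_length < 5.45:
        return "setosa"            # the rule quoted on the slide
    elif petal_width < 1.75:       # assumed extra split for the example
        return "versicolor"
    else:
        return "virginica"

print(classify_iris(5.0, 0.2))   # setosa
print(classify_iris(6.0, 1.3))   # versicolor
print(classify_iris(6.5, 2.0))   # virginica
```

In practice such thresholds are learned from labeled data rather than written by hand.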
Types of Clustering
• Types of clustering:
– HIERARCHICAL: finds successive clusters using previously
established clusters
• agglomerative (bottom‐up): start with each element in a separate cluster
and merge them according to a given property
• divisive (top‐down)
– PARTITIONAL: usually determines all clusters at once
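The agglomerative (bottom-up) strategy above can be sketched in pure Python: start with one cluster per element and repeatedly merge the two closest clusters. Single-linkage distance and the 1-D data points are assumptions for the example:

```python
# Agglomerative (bottom-up) clustering sketch with single linkage:
# the distance between two clusters is the smallest point-to-point distance.

def single_link(a, b):
    return min(abs(x - y) for x in a for y in b)

def agglomerate(points, k):
    clusters = [[p] for p in points]          # one cluster per element
    while len(clusters) > k:
        # find the closest pair of clusters...
        i, j = min(((i, j) for i in range(len(clusters))
                           for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]))
        # ...and merge them
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

print(agglomerate([1.0, 1.3, 1.5, 8.0, 8.1], 2))
# [[1.0, 1.3, 1.5], [8.0, 8.1]]
```

A divisive (top-down) method would instead start from one all-encompassing cluster and split it recursively.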
Distances
• Determine the similarity between two clusters and
the shape of the clusters.
In case of strings…
• The Hamming distance between two strings of equal length is
the number of positions at which the corresponding symbols
are different.
– measures the minimum number of substitutions required to
change one string into the other
• The Levenshtein (edit) distance is a metric for measuring the
amount of difference between two sequences.
– is defined as the minimum number of edits needed to transform
one string into the other.

Example (Hamming, on bit strings):
    1001001
    1000100        HD(1001001, 1000100) = 3

Example (Levenshtein):
    BIOLOGY -> BIOLOGI   (substitution Y -> I)
    BIOLOGI -> BIOLOGIA  (insertion of A)
    LD(BIOLOGY, BIOLOGIA) = 2
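Both string distances are straightforward to compute; here is a pure-Python sketch of each (the Levenshtein version uses the standard dynamic-programming recurrence):

```python
def hamming(s, t):
    """Substitutions needed between two equal-length strings."""
    if len(s) != len(t):
        raise ValueError("Hamming distance needs equal-length strings")
    return sum(a != b for a, b in zip(s, t))

def levenshtein(s, t):
    """Minimum insertions, deletions and substitutions to turn s into t."""
    prev = list(range(len(t) + 1))            # distances for the empty prefix
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # substitution (or match)
        prev = cur
    return prev[-1]

print(hamming("1001001", "1000100"))        # 3
print(levenshtein("BIOLOGY", "BIOLOGIA"))   # 2
```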
Normalization
VAR: the mean of each attribute
of the transformed set of data
points is reduced to zero by
subtracting the mean of each
attribute from the values of the
attributes and dividing the result
by the standard deviation of the
attribute.

RANGE (Min‐Max Normalization): subtracts the minimum value of an attribute from each value
of the attribute and then divides the difference by the range of the attribute. It has the
advantage of preserving exactly all relationships in the data, without adding any bias.

SOFTMAX: is a way of reducing the influence of extreme values or outliers in the data without
removing them from the data set. It is useful when you have outlier data that you wish to
include in the data set while still preserving the significance of data within a standard deviation
of the mean.
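The three normalizations can be sketched in pure Python on a single attribute. The softmax version below uses one common parameterization (a logistic squashing of the standardized value); that choice, and the data, are assumptions for the example:

```python
import math

def zscore(xs):                       # "VAR": zero mean, unit std. deviation
    mu = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return [(x - mu) / sd for x in xs]

def min_max(xs):                      # "RANGE": rescale into [0, 1]
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def softmax_scale(xs, r=1.0):         # "SOFTMAX": squash through a logistic
    return [1.0 / (1.0 + math.exp(-z / r)) for z in zscore(xs)]

data = [2.0, 4.0, 6.0, 100.0]         # note the outlier
print(min_max(data))                  # the outlier dominates the range
print([round(v, 3) for v in softmax_scale(data)])  # the outlier is squashed
```

Comparing the two printed lists shows the trade-off the slide describes: min-max compresses the three normal values near zero, while softmax keeps them spread out and caps the outlier.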
KMeans: how it works
KMeans: Pros and Cons
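How k-means works can be summarized as a loop: pick k initial centroids, assign each point to its nearest centroid, move each centroid to the mean of its points, and repeat until nothing changes. A minimal pure-Python sketch (Lloyd's algorithm) on made-up 1-D data:

```python
import random

def kmeans(points, k, iters=100):
    random.seed(1)                            # deterministic initialization
    centroids = random.sample(points, k)      # k initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            i = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)
        new = [sum(c) / len(c) if c else centroids[i]   # update step
               for i, c in enumerate(clusters)]
        if new == centroids:                  # converged: centroids stable
            break
        centroids = new
    return sorted(centroids)

print(kmeans([1.0, 1.2, 1.4, 8.0, 8.2, 8.4], 2))   # two centroids near 1.2 and 8.2
```

This illustrates one of the usual caveats as well: the result depends on the random initialization, which is why k-means is typically restarted several times.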
Summary
• Finding the optimal approach
• Supervised Models
– Neural Networks
– Multi-Layer Perceptron
– Decision Trees
• Unsupervised Models
– Different Types of Clustering
– Distances and Normalization
– KMeans
• Combining different models
– Committee Machines
– Introducing a Priori Knowledge
– Sleeping Expert Framework
Deep Learning vs Classical Machine Learning
▪ Traditional Machine Learning algorithms have a simpler structure, such as linear regression or a decision
tree.

▪ Deep Learning is based on an artificial neural network. This multi-layered ANN is, like a human brain,
complex and intertwined.

▪ Deep Learning algorithms require much less human intervention. With traditional Machine Learning,
manual feature engineering and classifier selection are needed to sort images, check whether the output
is as required, and adjust the algorithm if it is not.

▪ With a Deep Learning algorithm, by contrast, the features are extracted automatically, and the algorithm
learns from its own errors.

▪ Deep Learning requires much more data than a traditional Machine Learning algorithm to function
properly. Due to its complex multi-layer structure, a deep learning system needs a large dataset to
average out fluctuations and make high-quality interpretations.
Sources of Data
Increasing numbers of smartphones and internet devices

Growth of social media

New data collection and storage technologies (IoT)


Sources of Data : Characteristics
Another Classification:

Expert System

Predictive AI

Generative AI
Predictive AI
Generative AI
Selection of Model
Considerations for Opting Computer Vision
Equipment Intelligence
Energy providers are continually seeking to improve the management of their transmission
network through efficient network investment opportunities.

Asset inspections, vegetation management, works delivery and capital projects represent a
large part of an energy provider's spend and involve managing a number of critical
operational risks.

Using drones powered by computer vision enables monitoring of hard-to-reach places,
enhancing maintenance schedules and the allocation of workforce, material and equipment
to jobs.

Structural assessment of the technical condition of various infrastructure elements,
structural alignment validation, detection of overheating elements and identification of
corrosion.
Product Intelligence
Amazon Go is a new kind of store with no checkout required and
utilises the world’s most advanced shopping technology so
customers never have to wait in line.

The store concept uses several technologies similar to those found in self-driving cars,
including computer vision, deep learning algorithms, and sensor fusion to automate the
purchase, checkout, and payment steps associated with a retail transaction.

Cameras within the store recognise individuals, track them around the store, know which
account is linked to each customer, understand exactly which products and how many of
each are put into their bag, and tally it all up with high confidence.

As a second layer of security and verification, the store is equipped with pressure sensors
allowing the computer to detect when and from where an item is removed.

The next layer of confidence is built on customer history: the more you shop at Amazon Go,
the more informed the computer will be about your shopping habits and history.
Document Intelligence
The digitisation of enterprise data is a requirement for organisations on their digital
transformation journey and for complying with increasing regulatory requirements.

Document organisation, summarisation, digitisation, transcription, translation etc. can all
be helped by AI solutions.

Computer Vision, OCR and Natural Language Processing (NLP) have made a huge impact in
the healthcare sector across the globe.

Using CV in conjunction with NLP has increased the accuracy with which doctors' and
nurses' handwritten notes are read, significantly reducing the number of errors made when
categorising patients and helping predict high-risk individuals.
People Intelligence

▪ A facial recognition system uses biometrics to map facial features from a digital image or
video source and is capable of identifying or verifying a person by comparing the
information to a database of known faces.

▪ There are a number of commercial applications using facial recognition, from surveillance
to marketing.

▪ Gait recognition can also be used to identify individuals.

▪ Skeletal motion analysis can be used to detect suspicious activities and violence.
Security Intelligence
Computer Vision is giving surveillance cameras autonomous intelligence, letting them
analyse live video with no human involvement. This could be good news for public safety,
helping police and first responders more easily spot crimes and accidents.

Computer Vision applications in the security space have been implemented to identify:
• Foreign objects in a public place (e.g. unattended bag, litter)
• Hazards (e.g. spills)
• Disorderly behaviour (e.g. violence)
• Crowd control
• Individuals in distress (e.g. injured, panicking)
• Workplace safety breaches
• Theft
• Unauthorised access
• Other crimes and accidents
Process Intelligence
Object detection using Regression
