
Intro ML Lecture 1

The document outlines the Machine Learning course (EC 326) including its examination pattern, syllabus, and historical context. It covers various machine learning concepts such as supervised, unsupervised, and reinforcement learning, along with notable algorithms and applications. Additionally, it provides a timeline of significant milestones in the development of machine learning and artificial intelligence from the 1940s to recent advancements.


2/27/2023

Machine Learning
Subject code: EC 326
Scheme: 3(L)-0(T)-2(P)-3(C)

Subject faculties:
• Dr. Kishor P. Upla (Subject coordinator)
• Ms. Anjali Sarvaiya (Laboratory)

Machine Learning: Examination Pattern

Total Marks: 100 (Theory) + 50 (Laboratory)

Theory (100 Marks):
• Mid-semester examination: 30 marks
• End-semester examination: 50 marks
• Class tests/assignments: 20 marks

Laboratory (50 Marks):
• Continuous evaluation + project assignment + practical exam

Machine Learning: Syllabus (Overview)
• Machine Learning Basics
• Supervised Learning
  • Classification
    • Artificial Neural Network
    • k-Nearest Neighbour
    • SVM
    • Decision Tree
    • Naïve Bayes
    • etc.
  • Regression
    • Linear and Logistic
    • Tree-based
  • Other related theory
• Unsupervised Learning
  • Clustering
  • Terms and parameters related to unsupervised learning
• Reinforcement Learning
  • Markov Decision Process (MDP)
  • Q-learning
  • Value function approximation
  • etc.
• Dimensionality Reduction
  • Principal Component Analysis (PCA)
  • Singular Value Decomposition (SVD)
  • etc.


What is ML?
Machine Learning is a subset of artificial intelligence concerned mainly with the development of algorithms that allow a computer to learn from data and past experience on its own.
OR
Machine learning enables a machine to automatically learn from data, improve its performance with experience, and make predictions without being explicitly programmed.

The term "machine learning" was first introduced by Arthur Samuel in 1959.

Arthur Lee Samuel (December 5, 1901 – July 29, 1990) was an American pioneer in the field of computer gaming and AI.

Applications of ML
• Image and speech recognition (e.g., Alexa, Google Home, and name tagging in photos as seen on Facebook)
• Google Maps (provides traffic information before the start of a journey)
• Online customer support (widely used in medicine, banking, health care, the stock market, etc.)
• Google Translate
• Prediction (in banking or for forecasting)
• Feature extraction
• Self-driving cars
• Advertisement recommendations
• Video surveillance
• Email and spam filtering
• Real-time dynamic pricing
• Gaming and education
• Virtual assistants


AI/ML/DL Timeline

Working of ML
With the help of sample historical data, known as training data, machine learning algorithms build a mathematical model that makes predictions or decisions without being explicitly programmed.
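As a toy illustration of this "training data, model, prediction" workflow (a sketch with invented data, not part of the course material), the snippet below fits a simple least-squares line to training samples and then predicts on an unseen input:

```python
# Minimal illustration of the "training data -> model -> prediction" workflow.
# The data points are made up for illustration; the "model" is a simple
# least-squares line y = a*x + b fitted to the training data.

def fit_line(points):
    """Build a mathematical model (slope a, intercept b) from training data."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in points) / \
        sum((x - mean_x) ** 2 for x, _ in points)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    """Use the learned model to predict; no explicit rules are coded."""
    a, b = model
    return a * x + b

training_data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # historical samples
model = fit_line(training_data)
print(predict(model, 5))  # the model generalizes to unseen input
```

No rule such as "y is twice x" is ever written down: the relationship is learned from the data, which is the point of the definition above.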

The field started in the 1940s, with a very important book on human cognition, and it has been accelerating only recently, due both to the development of new algorithms and methods and to the wide availability of the technology itself.

Warren McCulloch (left) and Walter Pitts (right).

1943
Machine learning history starts with the first mathematical model of neural networks, presented in the scientific paper "A logical calculus of the ideas immanent in nervous activity" by Warren McCulloch and Walter Pitts.

'The Organization of Behavior' by Donald Hebb, New York (1949).

1949
The book "The Organization of Behavior" by Donald Hebb is published. Its theories on how behavior relates to neural networks and brain activity would become one of the monumental pillars of machine learning development.
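Hebb's postulate is often summarized as "neurons that fire together wire together". A common modern formalization (not spelled out in the 1949 book itself) is the weight update Δw = η·x·y; a minimal sketch with invented numbers:

```python
# A common formalization of Hebb's postulate (not stated explicitly in the
# 1949 book): the connection between two units strengthens when they are
# active together, delta_w = eta * x * y.

def hebbian_update(w, x, y, eta=0.1):
    """Return the weight vector after one Hebbian step for input x, output y."""
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]
x = [1.0, 0.0, 1.0]   # pre-synaptic activity
y = 1.0               # post-synaptic activity
w = hebbian_update(w, x, y)
print(w)  # weights grow only where input and output co-fire
```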

Alan M. Turing.

1950 – The Turing Test
The Turing Test was proposed in 1950 by Alan Turing, an English computer scientist, as a measure of a computer's intelligence. The test has been criticized on the grounds that it is difficult to create a fair and accurate test, and that intelligence is not adequately measured by this test alone. However, it remains an essential milestone in the history of artificial intelligence research.

Arthur Samuel and the IBM 700 (February 24, 1956).

1950s
Arthur Samuel, a pioneer in machine learning, created a program for playing championship-level computer checkers.
• Samuel utilized a minimax algorithm (still widely used for games today) to find the optimal move, assuming that the opponent also plays optimally.
• He also designed mechanisms for his program to improve continuously, for instance by remembering previous checkers moves and comparing them with the chances of winning.
• Arthur Samuel was the first person to come up with and popularize the term "machine learning".


Marvin Minsky (left), Dean Edmonds (right) and the SNARC machine with 40 Hebb synapses.

1951
When most computers still ran on punched cards, Marvin Minsky and Dean Edmonds built the first artificial neural network machine, consisting of 40 interconnected neurons with short- and long-term memory.

Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (1955).

1956
The Dartmouth Workshop is sometimes referred to as "the birthplace of artificial intelligence". Over a two-month period, a group of prominent scientists from mathematics, engineering, computer science, and the cognitive sciences gathered to establish and brainstorm the fields of AI and ML research.

Alexey (Oleksii) Ivakhnenko and the first deep neural network (1967).

1967
Ukrainian-born Soviet scientists Alexey (Oleksii) Ivakhnenko and Valentin Lapa developed a hierarchical representation of a neural network that uses polynomial activation functions and is trained using the Group Method of Data Handling (GMDH). It is considered the first ever multilayer perceptron, and Ivakhnenko is often regarded as the father of deep learning.

First page of the article 'Nearest Neighbor Pattern Classification' by Thomas Cover (bottom) and Peter Hart (top).

1967
Thomas Cover and Peter E. Hart of Stanford publish an article in IEEE Transactions on Information Theory (Volume 13, Issue 1, January 1967, pages 22-27) about the nearest neighbor algorithm, used for classification and regression in machine learning.
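The nearest neighbor rule itself is simple enough to sketch: label a query point with the class of its closest training example. The 2-D points and labels below are invented for illustration.

```python
# A minimal 1-nearest-neighbour classifier in the spirit of Cover and
# Hart's rule: a query point gets the class of its closest training
# example. The 2-D points and labels are invented for illustration.
import math

def nearest_neighbour(train, query):
    """train: list of ((x, y), label); returns the label of the closest point."""
    def dist(example):
        return math.dist(example[0], query)  # Euclidean distance
    return min(train, key=dist)[1]

train = [((0.0, 0.0), "A"), ((0.0, 1.0), "A"),
         ((5.0, 5.0), "B"), ((6.0, 5.0), "B")]
print(nearest_neighbour(train, (0.5, 0.5)))  # near the "A" cluster -> A
print(nearest_neighbour(train, (5.5, 4.0)))  # near the "B" cluster -> B
```

The k-nearest-neighbour variant covered later in the course generalizes this by taking a majority vote over the k closest examples instead of just one.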

The first "AI" winter
• The period from 1974 to 1980 was a tough time for AI and ML researchers and came to be called the "AI winter".
• During this period, machine translation failed to deliver, public interest in AI fell, and government funding for research was reduced.

Stanford cart autonomously avoids objects.

1975
The Stanford Cart, a project in development since the 1960s, reached an important milestone: a remotely controlled robot that could move around a space autonomously, with 3D mapping and navigation.


Kunihiko Fukushima and the architecture of the Neocognitron.

1979
Japanese computer scientist Kunihiko Fukushima publishes his work on the neocognitron, a hierarchical multilayered network used to detect patterns, which inspired convolutional neural networks, the systems used nowadays for analyzing images.

NETtalk network architecture from Sejnowski and Rosenberg.

1985
Terrence Sejnowski, combining his knowledge of biology and neural networks, invents NETtalk, a program intended to break down and simplify models of human cognitive tasks so that machines can potentially learn to perform them. The program learns to pronounce English words much the way a baby does.

The second AI winter (1987-1993)
• The years 1987 to 1993 marked the second AI winter.
• Investors and governments again stopped funding AI research because of high costs without efficient results, even though expert systems such as XCON had initially been very cost-effective.

Paul Smolensky and the scheme of the Restricted Boltzmann Machine (RBM).

1986
Cognitive scientist Paul Smolensky comes up with the Restricted Boltzmann Machine (RBM), which can analyze a set of inputs and learn a probability distribution from them. Nowadays the algorithm is popular for topic modeling (for instance, determining an article's likely topics from its most frequent words) and for AI-powered recommendations (predicting what a customer is likely to buy next based on previous purchases).

Tin Kam Ho, Random decision forests (1995).

1995
Random decision forests are introduced in a paper published by Tin Kam Ho. The algorithm creates multiple decision trees and merges them into a "forest". By relying on many different decision trees, the model significantly improves its accuracy and decision-making.

Garry Kasparov (left) and the IBM Deep Blue chess computer with operator (right).

1997
IBM's chess computer Deep Blue beats world champion Garry Kasparov. At the time, this achievement was seen as proof of machines catching up to human intelligence.
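Only the "merge many trees by majority vote" part of the 1995 random-forest idea is illustrated below, with hand-made stub "trees"; Ho's actual method also trains each tree on a random subspace of the features.

```python
# Sketch of the ensemble idea only: several (here hand-made, stub) "trees"
# each vote, and the forest returns the majority class. Ho's real method
# additionally randomizes the features each tree is trained on.
from collections import Counter

def tree_a(x):  # stub decision tree: threshold on feature 0
    return "spam" if x[0] > 0.5 else "ham"

def tree_b(x):  # stub decision tree: threshold on feature 1
    return "spam" if x[1] > 0.8 else "ham"

def tree_c(x):  # stub decision tree: threshold on the feature sum
    return "spam" if x[0] + x[1] > 1.0 else "ham"

def forest_predict(trees, x):
    """Majority vote over the individual tree predictions."""
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

forest = [tree_a, tree_b, tree_c]
print(forest_predict(forest, (0.9, 0.4)))  # 2 of 3 trees vote "spam"
```

Even when one stub tree is wrong, the majority vote can still produce the right answer, which is the accuracy improvement the paragraph describes.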


Igor Aizenberg, 'Multi-Valued and Universal Binary Neurons' book cover (2000).

2000
First mention of the term "deep learning" by Ukrainian-born neural networks researcher Igor Aizenberg, in the context of Boolean threshold neurons.

Fei-Fei Li, creator of ImageNet.

2009
ImageNet, a massive visual database of labeled images, is launched by Fei-Fei Li. Li wanted to expand the data available for training algorithms, since she believed that AI and ML need good training data reflecting the real world in order to be truly practical and useful. The Economist described the creation of this database as an exceptional event for popularizing AI throughout the tech community, marking a new era in deep learning history.

Google Brain learns to identify cats in photos (right: Andrew Ng, Head of Google Brain, 2011-2012).

2012
Having an extensive machine learning background, Google's X Lab team developed the artificial intelligence system Google Brain, which in 2012 became famously good at image processing, being able to identify cats in pictures.

Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio.

2014
A group of prominent scientists (Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio) develop the Generative Adversarial Network (GAN) framework, which teaches AI to generate new data based on the training set.

DeepFace working scheme (by Facebook).

2014
Facebook's research team develops DeepFace, a deep learning facial recognition system: a nine-layer neural network trained on 4 million images of Facebook users. The system can spot human faces in images with roughly the same accuracy as humans (approximately 97.35%).

Google Sibyl logo.

2014
Google introduces Sibyl, a large-scale machine learning system, to the public. Many novel algorithms are presented together with the system itself, for instance parallel boosting, column-oriented data, and statistics kept in RAM, all for performance improvement. Sibyl is largely used for Google's prediction models, specifically for ranking products and pages, measuring user behavior, and advertising.


Eugene Goostman user interface.

2014
Eugene Goostman is the first chatbot that some regard as having passed the Turing test. It was developed by three programmer friends: Vladimir Veselov, Eugene Demchenko, and Sergey Ulasen. Eugene Goostman was portrayed as a 13-year-old boy from Odessa, Ukraine, who has a pet guinea pig and a father who is a gynecologist. On 7 June 2014, in a Turing test competition at the Royal Society, Goostman won after convincing 33% of the judges that the bot was human.

DeepMind's AlphaGo.

2015
The AlphaGo program is the first AI to beat a professional Go player. Go is one of the oldest and hardest abstract strategy games, previously thought to be nearly impossible to teach to a computer.

Waymo launches commercial service in Phoenix (2017)

2017
Waymo starts testing autonomous cars in the US with backup
drivers only at the back of the car. Later the same year they
introduce completely autonomous taxis in the city of Phoenix.

Classification of Machine Learning

What is Supervised/Unsupervised/Reinforcement Learning?


Reinforcement Learning

Comparison

Criteria      | Supervised ML                   | Unsupervised ML                              | Reinforcement ML
Definition    | Learns by using labelled data   | Trained on unlabelled data without guidance  | Works by interacting with the environment
Type of data  | Labelled data                   | Unlabelled data                              | No predefined data
Supervision   | Extra supervision               | No supervision                               | No supervision
Aim           | Calculate outcomes              | Discover underlying patterns                 | Learn a series of actions
Application   | Risk evaluation, sales forecast | Recommendation systems, anomaly detection    | Self-driving cars, gaming, healthcare

Popular Machine Learning Algorithms
• Naive Bayes algorithm
• KNN algorithm
• Decision tree
• SVM algorithm
• Linear regression
• Logistic regression
• K-means
• Random forest algorithm
• Dimensionality reduction algorithms
• Gradient boosting algorithm and AdaBoost algorithm
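Of the topics above, Q-learning (listed in the syllabus under reinforcement learning) can be sketched on a toy problem. The corridor environment, rewards, and hyperparameters below are invented for illustration; the update rule is the standard Q(s,a) += α·(r + γ·max Q(s',·) − Q(s,a)).

```python
# Minimal tabular Q-learning on a toy 1-D corridor (states 0..3, reward
# only for reaching state 3). Environment and hyperparameters are invented
# for illustration.
import random

N_STATES, ACTIONS = 4, (-1, +1)            # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:               # state 3 is terminal
        if random.random() < epsilon:      # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right in every non-terminal state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

This shows the "learn a series of actions by interacting with the environment" column of the table in miniature: no labels are given, only a delayed reward.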
