Intro ML Lecture 1
Machine Learning
Subject code: EC 326
Scheme: 3(L)-0(T)-2(P)-3(C)
Subject faculties:
Overview
• Machine Learning Basics
• Supervised Learning
• Classification: Artificial Neural Network, k-Nearest Neighbour, SVM, Decision Tree, Naïve Bayes, etc.
• Regression: Linear and Logistic, Tree-based
• Other related theory
• Unsupervised Learning
• Clustering
• Terms and parameters related to unsupervised learning
• Reinforcement Learning
• Markov Decision Process (MDP)
• Q-learning
• Value function approximation
• etc.
• Dimensionality Reduction
• Principal Component Analysis (PCA)
• Singular Value Decomposition (SVD)
• etc.
What is ML?
Machine Learning is a subset of artificial intelligence that is mainly concerned with the development of algorithms which allow a computer to learn from data and past experiences on its own.
OR
Machine learning enables a machine to automatically learn from data, improve its performance with experience, and make predictions without being explicitly programmed.
Arthur Lee Samuel (December 5, 1901 – July 29, 1990) was an American pioneer in the field of computer gaming and AI.
Applications of ML
• Image and speech recognition (e.g., Alexa and Google Home for speech; automatic tagging of names on photos, as seen on Facebook)
• Google Maps (provides traffic information before the start of our journey)
• Online customer support (widely used in medicine, banking, health, the stock market, etc.)
• Google Translate
• Prediction (in banking or for forecasting)
• Feature extraction
• Self-driving cars
• Advertisement recommendations
• Video surveillance
• Email and spam filtering
• Real-time dynamic pricing
• Gaming and education
• Virtual assistants
Working of ML
With the help of sample historical data, which is known as training data, machine learning algorithms build a mathematical model that helps in making predictions or decisions without being explicitly programmed.
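As a minimal illustration of this training-data-to-prediction workflow, consider the sketch below. It is only an example: it assumes scikit-learn is installed, and the tiny dataset and labels are invented for illustration.

```python
# Minimal sketch of the ML workflow: training data -> model -> prediction.
# Assumes scikit-learn is available; the tiny dataset below is invented
# purely for illustration.
from sklearn.neighbors import KNeighborsClassifier

# "Historical" training data: [height_cm, weight_kg] with known labels
X_train = [[170, 65], [180, 85], [160, 55], [175, 80], [155, 50], [185, 90]]
y_train = ["medium", "large", "small", "large", "small", "large"]

# The algorithm builds a model from the training data (no hand-written rules)
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# The model then predicts a label for new, unseen data
print(model.predict([[172, 70]]))  # -> ['large'] (majority of its 3 nearest neighbours)
```

The k-Nearest Neighbour classifier used here is one of the supervised methods listed in the course overview; any other learner would follow the same fit-then-predict pattern.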
AI/ML/DL TIMELINE
Warren McCulloch (left) and Walter Pitts (right).
'The Organization of Behavior' by Donald Hebb, New York (1949).
1943
Machine learning history starts with the first mathematical model of neural networks, presented in the scientific paper "A logical calculus of the ideas immanent in nervous activity" by Warren McCulloch and Walter Pitts.
1949
The book "The Organization of Behavior" by Donald Hebb is published. The book contains theories on how behavior relates to neural networks and brain activity, and would become one of the monumental pillars of machine learning development.
Alan M. Turing.
Arthur Samuel and the IBM 700 (February 24, 1956).
1950 – The Turing Test
-- The Turing Test was proposed by Alan Turing, an English computer scientist, in 1950 as a measure of a computer's intelligence.
-- The Turing Test has been criticized on the grounds that it is difficult to create a fair and accurate test, and because intelligence is not adequately measured by this test alone. However, it remains an essential milestone in the history of artificial intelligence research.
1950s
Arthur Samuel, a pioneer in machine learning, created a program for playing championship-level computer checkers.
• Additionally, Samuel utilized a minimax algorithm (which is still widely used for games today) to find the optimal move, assuming that the opponent is also playing optimally (a minimal sketch of the idea follows below).
• He also designed mechanisms for his program to continuously improve, for instance by remembering previous checker moves and comparing them with chances of winning.
• Arthur Samuel is the first person to come up with and popularize the term "machine learning".
Marvin Minsky (left), Dean Edmonds (right) and the SNARC machine with 40 Hebb synapses.
Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (1955).
1951
When most computers still ran on punched cards, Marvin Minsky and Dean Edmonds built the first artificial neural network (SNARC), consisting of 40 interconnected neurons with short- and long-term memory.
1956
The Dartmouth Workshop is sometimes referred to as "the birthplace of artificial intelligence". During a two-month period, a group of prominent scientists from the fields of mathematics, engineering, computer and cognitive sciences gathered to establish and brainstorm the fields of AI and ML research.
1995
Random decision forests are introduced in a paper published by Tin Kam Ho. This algorithm builds multiple decision trees and merges their decisions into a "forest"; by relying on many different trees, the model significantly improves its accuracy and decision-making (see the short sketch below).
1997
Deep Blue beats world champion Garry Kasparov in chess. At the time, this achievement was seen as proof of machines catching up to human intelligence.
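A small illustrative sketch of the "forest of trees" idea, not Ho's original method: it assumes scikit-learn is installed and uses its bundled iris dataset rather than any data from the paper.

```python
# Sketch: many randomized decision trees voting together ("a forest")
# typically generalize better than a single tree.
# Assumes scikit-learn is installed; uses its bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("single tree accuracy :", single_tree.score(X_te, y_te))
print("random forest accuracy:", forest.score(X_te, y_te))
```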
Igor Aizenberg, 'Multi-Valued and Universal Binary Neurons' book cover (2000).
Fei-Fei Li, creator of ImageNet.
2000
First mention of the term "deep learning" by Ukrainian-born neural networks researcher Igor Aizenberg, in the context of Boolean threshold neurons.
2009
ImageNet, a massive visual database of labeled images, is launched by Fei-Fei Li. Li wanted to expand the data available for training algorithms, since she believed that AI and ML must have good training data reflecting the real world in order to be truly practical and useful. The Economist described the creation of this database as an exceptional event for popularizing AI throughout the whole tech community, marking a new era in deep learning history.
Having an extensive machine learning background, Google's X Lab team developed an artificial intelligence algorithm, Google Brain, which later, in 2012, became famously good at image processing, being able to identify cats in pictures.
2014
A group of prominent scientists (Goodfellow, Pouget-Abadie, Mirza, Xu, Warde-Farley, Ozair, Courville, Bengio) develop the Generative Adversarial Network (GAN) framework, which teaches AI how to generate new data based on the training set.
2014
Facebook's research team develops DeepFace, a deep learning facial recognition system: a nine-layer neural network trained on 4 million images of Facebook users. This AI is able to spot human faces in images with the same accuracy as humans do (approximately 97.35%).
2014
Google introduces Sibyl, a large-scale machine learning system, to the public. Many novel algorithms are presented together with the system itself, for instance parallel boosting, column-oriented data and statistics in RAM, all for performance improvement. Sibyl is largely used for Google's prediction models, specifically ranking products and pages, measuring user behavior, and advertising.
2017
Waymo starts testing autonomous cars in the US with backup drivers seated only in the back of the car. Later the same year, they introduce completely autonomous taxis in the city of Phoenix.
What is Supervised / Unsupervised / Reinforcement Learning?
Reinforcement Learning