
DEEP LEARNING AND ITS APPLICATIONS

By: Divakar Keshri
PhD, NIT Trichy
CA Department
405120005
CONTENTS (UNIT 1)
 Various types of Learning.
 Machine Learning: issues and
challenges.
 CPU vs. GPU.
 Massive parallelism.
 Introduction to Deep Learning.
 Deep Learning Models: CNN, RNN, AE, GAN.
 Real world applications of Deep
Learning.
 Packages used for Deep Learning.
VARIOUS TYPES OF LEARNING.

 What is Machine Learning?


 Machine Learning is an application of
Artificial Intelligence that enables
systems to learn from vast volumes of
data and solve specific problems.
 It uses computer algorithms whose performance improves automatically through experience.
 There are primarily three types of
machine learning: Supervised,
Unsupervised, and Reinforcement
Learning.
OVERVIEW:

 Supervised Learning
 Supervised learning is a type of machine
learning that uses labeled data to train
machine learning models.
 In labeled data, the output is already
known. The model just needs to map
the inputs to the respective outputs.
 An example of supervised learning is training a system to identify the animal shown in an image.
EXAMPLE
ALGORITHMS:

 Algorithms:
 Some of the most popularly used supervised learning algorithms are listed below, followed by a small sketch using one of them:
 Linear Regression
 Logistic Regression
 Support Vector Machine
 K Nearest Neighbor
 Decision Tree
 Random Forest
 Naive Bayes
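
A minimal sketch of supervised learning with one of the algorithms above (a decision tree). It assumes scikit-learn is installed and uses its bundled, already-labeled Iris dataset as stand-in labeled data; the animal-image example would work the same way, with image features as inputs:

# Supervised learning sketch: labeled data, known outputs (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)          # inputs X with known labels y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                # learn the mapping from inputs to labels
print(accuracy_score(y_test, model.predict(X_test)))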
UNSUPERVISED LEARNING
 Unsupervised learning is a type of machine learning
that uses unlabeled data to train machines.

 Unlabeled data doesn’t have a fixed output variable.

 The model learns from the data, discovers the patterns and features in the data, and returns the output.

 An unsupervised learning technique can, for example, use images of vehicles to work out whether each one is a bus or a truck.

 The model learns by identifying the parts of a vehicle, such as the length and width of the vehicle, the front and rear end covers, roof hoods, the types of wheels used, etc. Based on these features, the model classifies whether the vehicle is a bus or a truck.
EXAMPLE
ALGORITHMS:

 Algorithms:
 Selecting the right algorithm depends on the type of problem you are trying to solve. Some common unsupervised learning algorithms are listed below, followed by a short clustering sketch:

 K Means Clustering
 Hierarchical Clustering
 DBSCAN
 Principal Component Analysis
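
A minimal unsupervised-learning sketch using K-Means from the list above (assumes scikit-learn is installed). The "vehicle" features and their values are purely illustrative assumptions:

# Unsupervised learning sketch: no labels, the model discovers groups itself.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled vehicle features, e.g. [length, width] -- illustrative numbers only.
X = np.array([[12.0, 2.5], [11.5, 2.6], [12.2, 2.4],    # bus-like
              [6.0, 2.0], [5.8, 2.1], [6.3, 1.9]])      # truck-like

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)          # cluster assignments found without any labels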
REINFORCEMENT LEARNING

 Reinforcement Learning trains a machine to take suitable actions and maximize its rewards in a particular situation.
 It uses an agent and an environment to
produce actions and rewards.
 The agent has a start and an end state.
But, there might be different paths for
reaching the end state.
 In this learning technique, there is no
predefined target variable.
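
A toy Q-learning sketch of the agent/environment/reward loop described above: the agent starts at one end of a small chain of states and is rewarded only on reaching the end state. The environment, state count, and learning constants are illustrative assumptions, not from the slides:

# Tabular Q-learning on a 1-D chain of 5 states (assumes NumPy).
import numpy as np

n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(300):
    state = 0                                          # start state
    for _ in range(100):                               # safety cap per episode
        if np.random.rand() < epsilon or not Q[state].any():
            action = np.random.randint(n_actions)      # explore / break ties randomly
        else:
            action = int(np.argmax(Q[state]))          # exploit what has been learned
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state
        if state == n_states - 1:                      # reached the end state
            break

print(np.round(Q, 2))   # learned values favour moving right toward the rewarding end state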
EXAMPLE
MACHINE LEARNING: ISSUES
 Issues in Machine Learning are:
 Inadequate Training Data.
 Poor quality of data.
 Non-representative training data.
 Overfitting and Underfitting.
 Monitoring and maintenance.
 Getting bad recommendations.
 Lack of skilled resources.
 Customer Segmentation.
 Process Complexity of Machine
Learning.
 Data Bias.
 Lack of Explainability.
 Slow implementations and results.
CHALLENGES OF MACHINE
LEARNING
 Challenges of Machine Learning are:

 Technological Singularity: Interesting questions arise when we contemplate the potential use of autonomous systems, such as self-driving vehicles.

 AI Impact on Jobs: While much of the public discussion about artificial intelligence centres on job loss, the concern should probably be reframed.

 Privacy: Privacy is frequently discussed in relation to data privacy, data protection, and data security.

 Bias and Discrimination: For example, what kind of data could you analyse when evaluating a candidate for a particular job?

 Accountability: There isn't significant legislation to regulate AI practices, and there is no enforcement mechanism to make sure that ethical AI is actually used.
CPU VS. GPU
 Difference between CPU and GPU:
 The basic difference between a CPU and a GPU is that a CPU emphasizes low latency, whereas a GPU emphasizes high throughput.
 A CPU needs more memory than a GPU.
 A CPU is slower than a GPU for parallel workloads.
 A CPU contains a few powerful cores, while a GPU contains many smaller cores.
 A CPU is suited to serial instruction processing, while a GPU is suited to parallel instruction processing; a small device-placement sketch follows this list.
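
A minimal device-placement sketch, assuming TensorFlow is installed; the matrix size is an arbitrary illustration, and the GPU branch only runs if a GPU is actually visible:

# CPU vs. GPU: the same operation placed on different devices (assumes TensorFlow).
import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))      # empty list if no GPU is visible

with tf.device("/CPU:0"):
    a = tf.random.normal((4096, 4096))
    cpu_result = tf.matmul(a, a)                   # runs on the CPU's few, latency-optimised cores
print("CPU result:", cpu_result.shape)

if tf.config.list_physical_devices("GPU"):
    with tf.device("/GPU:0"):
        b = tf.random.normal((4096, 4096))
        gpu_result = tf.matmul(b, b)               # same op spread across thousands of throughput-oriented cores
    print("GPU result:", gpu_result.shape)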
CENTRAL PROCESSING
UNIT (CPU):
 The CPU is known as the brain of every embedded system.
 A CPU comprises the Arithmetic Logic Unit (ALU), used to quickly store information and perform calculations, and the Control Unit (CU), which performs instruction sequencing and branching.
 The CPU interacts with other computer components, such as memory and input/output, to execute instructions.
CPU
GPU
 A GPU is used to render the images in computer games.
 A GPU is faster than a CPU and emphasizes high throughput.
 It is generally integrated with the rest of the electronic equipment, sharing RAM with it, which is suitable for most computing tasks.
 It contains more ALUs than a CPU.
GPU
MASSIVE PARALLELISM
 The term "massively parallel" refers to the use of a
huge number of computer processors (or
independent computers) to perform a set of
coordinated computations simultaneously.
 MPP (massively parallel processing) is a processing architecture in which numerous processors carry out program activities in a coordinated manner.
 MPP databases can manage vast volumes of data and run analytics on large datasets considerably faster.
 Massively parallel processor arrays (MPPAs) are integrated circuits containing hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks.
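
A very small-scale analogue of coordinated parallel processing, using only Python's standard multiprocessing module; the worker count and workload are illustrative assumptions, not from the slides:

# Split one computation into chunks and run them simultaneously on several processes.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]         # split the work 8 ways
    with Pool(processes=8) as pool:
        total = sum(pool.map(partial_sum, chunks))  # workers compute in parallel, results combined
    print(total)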
INTRODUCTION TO DEEP LEARNING.
 Deep learning is a subfield of
machine learning focusing on
learning data representations as
successive layers of increasingly
meaningful representations.
ML VS. DL
ANATOMY OF DL
DEEP LEARNING MODELS: CNN, RNN, AE, GAN
 Convolutional neural network (CNN, ConvNet):
 Dense or fully-connected: each neuron is connected to all neurons in the previous layer.
 CNN: each neuron is connected only to a small “local” set of neurons.
 This radically reduces the number of network connections; a short parameter-count sketch follows.
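
A quick parameter-count comparison, assuming TensorFlow/Keras is installed; the 28x28 grayscale input and layer sizes are illustrative assumptions:

# Fully-connected vs. convolutional connectivity on the same input.
import tensorflow as tf

dense = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128),                 # every neuron sees every pixel
])
conv = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, kernel_size=3),   # each neuron sees only a 3x3 local patch
])
print(dense.count_params(), conv.count_params())   # 100,480 vs. 80 parameters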
CNN
 Convolutional Neural Network (CNN):
 A Convolutional Neural Network (CNN) is a type of deep learning algorithm that is particularly well-suited for image recognition and processing tasks.
 It is made up of multiple layers, including convolutional layers, pooling layers, and fully connected layers.
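
A minimal Keras sketch mirroring the convolutional, pooling, and fully connected layers described above; the input size, filter counts, and 10-class output are illustrative assumptions, not a reference implementation:

# Small image-classification CNN (assumes TensorFlow/Keras).
import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),                       # e.g. small RGB images
    tf.keras.layers.Conv2D(16, 3, activation="relu"),        # convolutional layer
    tf.keras.layers.MaxPooling2D(),                          # pooling layer
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),         # fully connected output layer
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
cnn.summary()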
TYPES OF CNN MODELS
 Different Types of CNN Models:

 LeNet

 AlexNet

 ResNet

 GoogleNet

 MobileNet

 VGG
RNN
 A Recurrent Neural Network (RNN) is a type of neural network where the output from the previous step is fed as input to the current step.
 In traditional neural networks, all the inputs and
outputs are independent of each other, but in
cases like when it is required to predict the next
word of a sentence, the previous words are
required and hence there is a need to remember
the previous words.
 Thus RNN came into existence, which solved this
issue with the help of a Hidden Layer.
 The main and most important feature of an RNN is the hidden state, which remembers some information about a sequence.
RNN
 RNNs have a “memory” which remembers all information about what has been calculated.
 An RNN uses the same parameters for each input, as it performs the same task on all the inputs or hidden layers to produce the output.
 This reduces the complexity of parameters, unlike other neural networks.
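
A minimal Keras sketch of a recurrent model in the spirit of the next-word-prediction example above; the vocabulary size, sequence length, and layer sizes are illustrative assumptions:

# Simple recurrent model (assumes TensorFlow/Keras).
import tensorflow as tf

rnn = tf.keras.Sequential([
    tf.keras.Input(shape=(20,), dtype="int32"),                # a sequence of 20 token ids
    tf.keras.layers.Embedding(input_dim=5000, output_dim=32),
    tf.keras.layers.SimpleRNN(64),                             # same weights at every step; hidden state is the "memory"
    tf.keras.layers.Dense(5000, activation="softmax"),         # e.g. predict the next word
])
rnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
rnn.summary()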
AE
 Autoencoders (AE) are neural networks that aim to copy their inputs to their outputs.
 They work by compressing the input into a
latent-space representation, and then
reconstructing the output from this
representation.
 This kind of network is composed of two parts:
 Encoder: This is the part of the network that
compresses the input into a latent-space
representation. It can be represented by an
encoding function h=f(x).
 Decoder: This part aims to reconstruct the input
from the latent space representation. It can be
represented by a decoding function r=g(h).
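
A minimal Keras autoencoder matching the encoder h = f(x) and decoder r = g(h) above; a 784-dimensional input (e.g. a flattened 28x28 image) and a 32-dimensional latent space are illustrative assumptions:

# Autoencoder: compress to a latent code, then reconstruct (assumes TensorFlow/Keras).
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))
h = tf.keras.layers.Dense(32, activation="relu")(inputs)        # encoder: h = f(x)
r = tf.keras.layers.Dense(784, activation="sigmoid")(h)         # decoder: r = g(h)

autoencoder = tf.keras.Model(inputs, r)
autoencoder.compile(optimizer="adam", loss="mse")               # trained to copy input to output
autoencoder.summary()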
AE
GAN
 Generative adversarial networks (GANs)
are an exciting recent innovation in
machine learning.
 GANs are generative models: they
create new data instances that
resemble your training data.
 For example, GANs can create images
that look like photographs of human
faces, even though the faces don't
belong to any real person.
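
A structural Keras sketch of a GAN's two parts: a generator that maps random noise to fake samples and a discriminator that scores samples as real or fake. The training loop is omitted, and the noise dimension and layer sizes are illustrative assumptions:

# Generator and discriminator definitions only (assumes TensorFlow/Keras).
import tensorflow as tf

generator = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),                        # random noise vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),    # fake 28x28 image, flattened
])

discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                        # a real or generated image
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # probability the sample is real
])

generator.summary()
discriminator.summary()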
GAN
PACKAGES USED FOR DEEP LEARNING.
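
The original slide presumably lists specific packages; as a hedged illustration, the following assumes some widely used ones (NumPy, scikit-learn, TensorFlow, PyTorch) are installed and simply prints their versions:

# Quick sanity check of commonly used deep-learning packages.
import numpy, sklearn, tensorflow, torch

for pkg in (numpy, sklearn, tensorflow, torch):
    print(pkg.__name__, pkg.__version__)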
