Deep Learning Week 101

Uploaded by Adan Gomez

Artificial Intelligence (AI)

* Like electricity roughly 100 years ago, AI is transforming multiple industries.
* We are living in an AI-powered society.
Machine Learning Algorithms
* Supervised learning: learn from many examples of input-output mappings, then apply them to categorize new inputs.
* Unsupervised learning: discover patterns and statistics in data without any initial input-output mappings.
* Transfer learning: transfer mappings learned on one task with many examples to another task with less data.
* Reinforcement learning: learn by continually receiving feedback like "good computer/bad computer."
Iterative process of developing machine learning models: Idea -> Code -> Experiment -> (repeat). Better algorithms improve the time needed to run an experiment or to train a model, and thus enable us to produce better models. E.g., the ReLU function vs. the sigmoid function.
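The ReLU-vs-sigmoid point can be made concrete with a small NumPy sketch: the sigmoid's gradient collapses toward zero for large-magnitude inputs (the vanishing-gradient problem), while ReLU keeps a constant gradient of 1 for any positive input, which speeds up training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # peaks at 0.25, tiny for large |z|

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    return (z > 0).astype(float)  # exactly 1 wherever the unit is active

z = np.array([-4.0, 0.5, 4.0])
print(sigmoid_grad(z))  # gradients saturate near 0 at z = -4 and z = 4
print(relu_grad(z))     # [0. 1. 1.]
```

In a deep network these per-layer gradients multiply, so sigmoid's saturation compounds with depth while ReLU's does not.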
Neural Networks
* Universal function approximators -> given training examples, they learn to approximate any continuous function mapping inputs X to outputs Y.
* Learn hierarchical representations -> break complex relationships into simpler components.
* Parameters (weights and biases) grant them the flexibility to fit a wide range of functions.
* Nonlinear activation functions -> model complex, nonlinear relationships between inputs and outputs.
* Optimization algorithms (e.g., gradient descent) -> adjust parameters to minimize the difference between predicted and actual outputs over many training examples.
* Limited computational power during the 80s hindered the development and training of deep neural networks.
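A minimal sketch of what "gradient descent adjusts parameters to minimize prediction error" means, using the simplest possible model (one weight w and one bias b, fit to synthetic data; the target values 3.0 and 0.5 are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=100)
y = 3.0 * X + 0.5            # ground-truth function the model must recover

w, b = 0.0, 0.0              # parameters, initialized at zero
lr = 0.1                     # learning rate
for _ in range(500):
    pred = w * X + b
    err = pred - y
    # gradients of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(err * X)
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w         # step each parameter downhill
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 3.0 and 0.5
```

Training a real network is the same loop with millions of parameters and gradients computed by backpropagation instead of by hand.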
Recurrent Neural Networks (RNNs)
* Used when the input and/or output is a sequence (e.g., a sequence of words).
* Why are RNNs used for machine translation? Translation involves sequential input (the source-language sentence) and sequential output (the target-language translation). It can be trained as a supervised learning problem using labeled data, where the input is a sentence in the source language (e.g., English) and the corresponding output is the translated sentence in the target language (e.g., French).

CNNs vs. RNNs
* Convolutional Neural Networks (CNNs) are designed to process grid-like data, such as images; they capture spatial dependencies and local patterns in the input data -> computer vision tasks.
* RNNs are designed to process sequential data, such as time series or natural language; they handle temporal dependencies and sequences -> NLP tasks.
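How an RNN "handles sequences" can be shown with a toy forward pass: the hidden state h is updated one token at a time, so each step's output depends on everything seen so far. The vocabulary size, hidden size, and token ids below are arbitrary illustrative choices, not part of any real translation model.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, hidden = 5, 8
Wxh = rng.normal(0, 0.1, (hidden, vocab))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (hidden, hidden))  # hidden-to-hidden (recurrent) weights
bh = np.zeros(hidden)

def rnn_forward(token_ids):
    """Process a token sequence step by step, carrying the hidden state."""
    h = np.zeros(hidden)
    states = []
    for t in token_ids:
        x = np.zeros(vocab)
        x[t] = 1.0                            # one-hot encode the current token
        h = np.tanh(Wxh @ x + Whh @ h + bh)   # new state mixes input and old state
        states.append(h)
    return states

states = rnn_forward([0, 3, 1, 4])
print(len(states), states[-1].shape)  # one hidden state per input token
```

The recurrent term `Whh @ h` is what gives the network memory of earlier tokens; a CNN has no such term and instead shares weights across spatial positions.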
Data Type
* Structured -> well-defined schema; can be organized into a tabular format, like a spreadsheet or a database table. E.g., a demographic dataset with statistics on different cities' population, GDP per capita, and economic growth.
* Unstructured -> does not have a predefined schema or format. E.g., text documents, images, audio files, and videos.
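The distinction can be sketched with the Python standard library: structured data has named columns you can query directly, while unstructured text only supports generic operations. The city names and figures below are made-up illustrative values.

```python
import csv
import io

# Structured: every row conforms to a fixed schema (city, population, gdp_per_capita)
structured = io.StringIO(
    "city,population,gdp_per_capita\n"
    "Lyon,513000,41000\n"
    "Porto,231000,25000\n"
)
rows = list(csv.DictReader(structured))
print(rows[0]["city"], rows[1]["population"])  # fields are addressable by name

# Unstructured: free text has no predefined fields to query
unstructured = "Lyon is a city in France with about half a million residents."
print(len(unstructured.split()))  # only generic operations (counting, searching) apply
```

Extracting the population from the unstructured sentence would itself require an NLP model, which is exactly why unstructured data is where deep learning adds the most value.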
