
Computer Vision Nanodegree

Syllabus

Contact Info
While going through the program, if you have questions about anything, you can reach us at
[email protected]. For help from Udacity Mentors and your peers visit the Udacity Classroom.

Nanodegree Program Info


Version: 4.0.0

Length of Program: 69 Days*

* This is a self-paced program; the length is an estimate of the total time the average student may take to complete all required coursework, including lecture and project time. Actual hours may vary.

Part 1: Introduction to Computer Vision

Project: Facial Keypoint Detection

Apply your knowledge of image processing and deep learning to create a CNN that detects facial keypoints (eyes, mouth, nose, etc.).

Supporting Lessons

Welcome to Computer Vision
Welcome to the Computer Vision Nanodegree program! You are starting a challenging but rewarding journey!

Knowledge, Community, and Careers
Take 5 minutes to read how to get help with projects and content.

Get Help with Your Account
What to do if you have questions about your account or general questions about the program.

Image Representation & Classification
Learn how images are represented numerically and implement image processing techniques, such as color masking and binary classification.
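Color masking of the kind this lesson covers can be sketched in a few lines. The snippet below is a minimal illustration in plain Python (the course itself uses NumPy/OpenCV); the image, threshold values, and the `blue_mask` helper are all made up for the example.

```python
# A tiny 2x2 "image": each pixel is an (R, G, B) tuple.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(10, 12, 250), (250, 250, 250)],
]

def blue_mask(img, threshold=200):
    """Return a binary mask: 1 where blue dominates, 0 elsewhere."""
    mask = []
    for row in img:
        mask.append([1 if (b > threshold and r < 100 and g < 100) else 0
                     for (r, g, b) in row])
    return mask

mask = blue_mask(image)
# Only the bottom-left pixel (10, 12, 250) passes the mask.
```

The same mask, applied to a green-screen image, is what lets you swap in a new background pixel wherever the mask is 1.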

Convolutional Filters and Edge Detection
Learn about frequency in images and implement your own image filters for detecting edges and shapes in an image. Use a computer vision library to perform face detection.
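The edge filters this lesson has you implement are small convolution kernels slid over the image. Here is a minimal sketch, in plain Python with a hand-written `convolve` helper (the lesson itself works with NumPy/OpenCV); the 1x4 test image is invented for illustration.

```python
# 3x3 Sobel kernel that responds to horizontal intensity changes
# (i.e., it highlights vertical edges).
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve(image, kernel):
    """Valid-mode 2D filtering for small grayscale grids (lists of lists)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for ki in range(kh):
                for kj in range(kw):
                    acc += image[i + ki][j + kj] * kernel[ki][kj]
            row.append(acc)
        out.append(row)
    return out

# A dark-to-bright vertical edge: left half 0, right half 255.
img = [[0, 0, 255, 255]] * 3
edges = convolve(img, SOBEL_X)  # large values mark the edge
```

A uniform region would produce all zeros; the strong response here comes entirely from the 0-to-255 jump.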

Types of Features & Image Segmentation
Program a corner detector and learn techniques, like k-means clustering, for segmenting an image into unique parts.
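K-means segmentation alternates two steps: assign each pixel to its nearest cluster center, then move each center to the mean of its assigned pixels. A minimal sketch on scalar grayscale intensities (the `kmeans_1d` helper and the pixel values are invented; the lesson works on full color images):

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny k-means on scalar intensities: returns final centers and labels."""
    # Seed with the extremes for k=2; any distinct starting points work.
    centers = [min(values), max(values)] if k == 2 else list(values[:k])
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

pixels = [12, 15, 10, 200, 210, 205]          # dark vs. bright pixels
centers, labels = kmeans_1d(pixels)           # splits into two clusters
```

With k=2 the dark pixels and bright pixels land in separate clusters, which is exactly the segmentation the lesson builds toward.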

Feature Vectors
Learn how to describe objects and images using feature vectors.

CNN Layers and Feature Visualization
Define and train your own convolutional neural network for clothing recognition. Use feature visualization techniques to see what a network has learned.

Project: Optimize Your GitHub Profile

Other professionals are collaborating on GitHub and growing their networks. Submit your profile to ensure it is on par with leaders in your field.

Supporting Lessons

Jobs in Computer Vision
Learn about common jobs in computer vision, and get tips on how to stay active in the community.

Part 2: Optional: Cloud Computing

Part 3: Advanced Computer Vision & Deep Learning


Project: Image Captioning

Train a CNN-RNN model to predict captions for a given image. Your main task will be to implement an
effective RNN decoder for a CNN encoder.

Supporting Lessons

Advanced CNN Architectures
Learn about advances in CNN architectures and see how region-based CNNs, like Faster R-CNN, have allowed for fast, localized object recognition in images.

YOLO
Learn about the YOLO (You Only Look Once) multi-object detection model and work with a YOLO implementation.

RNNs
Explore how memory can be incorporated into a deep learning model using recurrent neural networks (RNNs). Learn how RNNs can learn from and generate ordered sequences of data.

Long Short-Term Memory Networks (LSTMs)
Luis explains Long Short-Term Memory Networks (LSTMs) and similar architectures, which have the benefit of preserving long-term memory.

Hyperparameters
Learn about a number of different hyperparameters that are used in defining and training deep learning models. We'll discuss starting values and intuitions for tuning each hyperparameter.

Optional: Attention Mechanisms
Attention is one of the most important recent innovations in deep learning. In this section, you'll learn how attention models work and go over a basic code implementation.
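At its core, an attention step scores a query against each key, softmaxes the scores into weights, and returns a weighted sum of the values. The sketch below is a plain-Python illustration of that idea (the course's implementation uses a deep learning framework; the toy vectors here are invented):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Basic dot-product attention over toy vectors."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    # Weighted sum of the value vectors, one output per dimension.
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
context = attention(q, keys, values)
# The query aligns with the first key, so the context leans toward the first value.
```

In an image-captioning decoder, the keys/values would come from CNN feature maps and the query from the RNN state, but the mechanics are the same.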

Image Captioning
Learn how to combine CNNs and RNNs to build a complex, automatic image captioning model.

Project: Improve Your LinkedIn Profile

Find your next job or connect with industry peers on LinkedIn. Ensure your profile attracts relevant leads that
will grow your professional network.

Part 4: Object Tracking and Localization

Project: Landmark Detection & Tracking (SLAM)

Implement SLAM, a robust method for tracking an object over time and mapping out its surrounding
environment, using elements of probability, motion models, and linear algebra.
Supporting Lessons

Introduction to Motion
This lesson introduces a way to represent motion mathematically, outlines what you'll learn in this section, and introduces optical flow.

Robot Localization
Learn to implement a Bayesian filter to locate a robot in space and represent uncertainty in robot motion.
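A Bayesian filter of this kind alternates two operations: `sense` sharpens the belief using a measurement (Bayes' rule), and `move` shifts it with the robot's motion. Here is a minimal 1D sketch in plain Python; the world colors and the hit/miss probabilities are invented for illustration, and motion is treated as exact for brevity:

```python
def sense(belief, world, measurement, p_hit=0.6, p_miss=0.2):
    """Bayes update: reweight belief by how well each cell matches the measurement."""
    posterior = [b * (p_hit if world[i] == measurement else p_miss)
                 for i, b in enumerate(belief)]
    total = sum(posterior)
    return [p / total for p in posterior]        # normalize to sum to 1

def move(belief, step):
    """Shift the belief by `step` cells in a cyclic 1D world."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

world = ['green', 'red', 'red', 'green', 'green']
belief = [0.2] * 5                     # uniform prior: robot could be anywhere
belief = sense(belief, world, 'red')   # belief concentrates on the red cells
belief = move(belief, 1)               # belief shifts one cell to the right
```

Repeating sense/move cycles is exactly what localizes the robot over time; real motion would also blur the belief to model motion noise.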

Mini-project: 2D Histogram Filter
Write and debug the sense and move functions of a 2D histogram filter!

Introduction to Kalman Filters
Learn the intuition behind the Kalman filter, a vehicle tracking algorithm, and implement a one-dimensional tracker of your own.

Representing State and Motion
Learn about representing the state of a car in a vector that can be modified using linear algebra.

Matrices and Transformation of State
Linear algebra is a rich branch of math and a useful tool. In this lesson you'll learn about the matrix operations that underlie multidimensional Kalman filters.
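The central matrix operation here is the state transition: a state vector (say, position and velocity) is advanced one time step by multiplying it with a transition matrix. A plain-Python sketch of a constant-velocity model, with made-up numbers:

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a column vector (list)."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

dt = 0.1
# Constant-velocity transition matrix: position += velocity * dt,
# velocity unchanged.
F = [[1.0, dt],
     [0.0, 1.0]]

state = [4.0, 3.0]           # [position, velocity]
state = mat_vec(F, state)    # advance one time step
```

In a full multidimensional Kalman filter, the same matrix F also propagates the covariance (roughly F P Fᵀ plus process noise), which is where the matrix algebra this lesson covers comes in.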

Simultaneous Localization and Mapping
Learn how to implement SLAM: simultaneously localize an autonomous vehicle and create a map of landmarks in an environment.

Optional: Vehicle Motion and Calculus
Review the basics of calculus and see how to derive the x and y components of a self-driving car's motion from sensor measurements and other data.

Udacity

Generated Mon Jul 13 04:57:25 PDT 2020
