Presented By: Aayush Srivastava
& Divyank Saxena
Custom ML Model
Deployment using
AWS Sagemaker
Lack of etiquette and manners is a huge turn off.
KnolX Etiquettes
Punctuality
Join the session 5 minutes prior to
the session start time. We start on
time and conclude on time!
Feedback
Make sure to submit a constructive
feedback for all sessions as it is
very helpful for the presenter.
Silent Mode
Keep your mobile devices in silent
mode, feel free to move out of
session in case you need to attend
an urgent call.
Avoid Disturbance
Avoid unwanted chit chat during
the session.
Our Agenda
01 What is Machine Learning
02 What is AWS Sagemaker
03 How Does Sagemaker Work
04 Deploying Custom ML Model
05 Demo
06 Benefits of AWS Sagemaker
What is Machine Learning?
● According to Arthur Samuel (1959), Machine Learning algorithms enable computers to learn from
data, and even improve themselves, without being explicitly programmed.
● A few day-to-day applications of Machine Learning:
1. Image recognition
2. Speech recognition
3. Product recommendation
4. Virtual personal assistants
Machine Learning Life Cycle
● Amazon Web Services (AWS) is an on-demand cloud platform offered by Amazon that provides
services over the internet. AWS services can be used to build, monitor, and deploy any type of application
in the cloud. This is where AWS SageMaker comes into play.
● AWS is a broadly adopted cloud platform that offers several on-demand services, such as compute
power, database storage, and content delivery, to help businesses scale and grow.
What is AWS
● Amazon SageMaker is a managed service in the Amazon Web Services (AWS) public cloud. It
provides the tools to build, train, and deploy machine learning models for predictive analytics
applications. The platform automates the tedious work of building a production-ready artificial
intelligence (AI) pipeline.
● Deploying ML models is challenging, even for experienced application developers. Amazon
SageMaker aims to simplify the process.
● SageMaker also supports AutoML, the process of automating the tasks of applying machine
learning to real-world problems.
What is Amazon Sagemaker
Some Benefits of Using AWS SageMaker
•Highly scalable
•Helps create and manage compute instances in minimal time
•Stores all ML components in one place
•All your logs are easily stored in CloudWatch Logs
•Amazon's own pre-built models are highly optimized to run on AWS
•Maintains uptime: processes keep running without interruption
•High data security
Benefits of Amazon Sagemaker
● AWS SageMaker simplifies ML modeling into three steps:
1. Build/Train
2. Test and Tune
3. Deploy
● The following diagram shows how machine learning works with AWS SageMaker; let's look at each
of these steps in more detail.
How does Amazon SageMaker work?
Train
● AWS SageMaker helps developers customize Machine Learning instances with the Jupyter
notebook interface
● It provides more than 15 widely used ML algorithms for training purposes
● It gives the capability to select the required server size for your notebook instance
● A user can write code (for creating model training jobs) using a notebook instance
Train, Test and Tune , Deploy
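The training step above can be sketched as a CreateTrainingJob request made through the SageMaker API. This is an illustrative sketch, not the deck's demo code: the role ARN, bucket, image URI, and job name below are all placeholders.

```python
# Sketch of a SageMaker training-job request for a built-in algorithm.
# All ARNs, buckets, and image URIs are placeholders, not real resources.
def build_training_job_request(job_name, role_arn, image_uri, bucket):
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,       # built-in algorithm image
            "TrainingInputMode": "File",
        },
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/train/",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/output/"},
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",   # the "server size" you select
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

def start_training_job(request):
    import boto3  # deferred import; running this requires AWS credentials
    boto3.client("sagemaker").create_training_job(**request)
```

From a notebook instance, `build_training_job_request(...)` produces the payload and `start_training_job(...)` submits it.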
Train
● After creating a notebook instance, a user can select the type of environment they will be using.
● The necessary libraries come pre-installed according to your notebook instance type
Train, Test and Tune , Deploy
Tune
● Set up and import required libraries
● Define a few environment variables and manage them for training the model
● Train and tune the inbuilt Sagemaker Algorithm by tweaking the hyperparameters according to your
need
Train, Test and Tune , Deploy
Tune
● Hyperparameter tuning is achieved by adding a suitable combination of algorithm parameters like:
Train, Test and Tune , Deploy
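The slide's table of parameter combinations is not reproduced here; as a hypothetical example, a combination for SageMaker's built-in XGBoost algorithm might look like this (names and values are illustrative):

```python
# Hypothetical hyperparameter combination for SageMaker's built-in
# XGBoost algorithm. The API expects values as strings.
hyperparameters = {
    "max_depth": "5",                 # deeper trees fit more, risk overfitting
    "eta": "0.2",                     # learning rate
    "subsample": "0.8",               # fraction of rows sampled per tree
    "objective": "reg:squarederror",  # training objective
    "num_round": "100",               # number of boosting rounds
}
```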
Testing
● It helps us manage model performance using manual testing
● Evaluating the accuracy of the ML model
● It helps ensure that the achieved loss is acceptable for your task
● Helpful in checking model performance on real data
Train, Test and Tune , Deploy
Deploy
● Once tuning is done, models can be deployed to SageMaker endpoints
● In the endpoints, a real-time prediction is performed
● For deployment use:
● Now, evaluate your model and determine whether you have achieved your business goals
● To test your endpoint for prediction you can use the same Jupyter notebook instance
Train, Test and Tune , Deploy
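Testing the endpoint from a notebook, as described above, can be sketched with the SageMaker runtime client. The endpoint name and payload shape are placeholders for whatever model you deployed.

```python
# Sketch: invoke a deployed SageMaker endpoint for a real-time prediction.
# Endpoint name and payload are placeholders; running this requires AWS
# credentials and a live endpoint.
import json

def invoke(endpoint_name, payload):
    import boto3  # deferred import so the sketch is readable without AWS
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    return json.loads(response["Body"].read())
```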
Deploying Fully Custom ML model
● What if you have a model trained outside SageMaker? Can you still deploy it?
● What if you want to use algorithms or frameworks that a SageMaker instance does not support? Is
it possible?
● What if you want to create a REST endpoint for your custom model? Is it possible?
Answer: YES
If you want to use SageMaker as the service to deploy your model, it involves three AWS
services: AWS SageMaker, AWS Elastic Container Registry (ECR), which provides versioning and
access control for container images, and AWS Simple Storage Service (S3). The diagram below
describes the process in detail.
Deploying Fully Custom ML model
Deploying Fully Custom ML model
Requirements
1. Docker
○ Docker is a software platform that allows you to build, test, and deploy applications quickly. Using
Docker, you can quickly deploy and scale applications in any environment and know your code will run.
○ We will be using Docker to containerize our application along with its dependencies.
○ Installation: $ sudo snap install docker
2. Flask
○ Flask is a web development framework written in Python
○ We are using it to specify the logic of how to handle your ML inference requests. It lets you respond to
/ping and /invocations
○ Installation: $ pip install Flask
3. AWS CLI
○ The AWS command line interface is used to push our Docker container image to AWS ECR
○ Installation: $ sudo apt-get install awscli
Deploying Fully Custom ML model
Let’s break the process into 4 steps:
● Step 1: Building the model and saving the artifacts.
● Step 2: Defining the server and Inference code.
● Step 3: Build a Sagemaker Container.
● Step 4: Creating Model, Endpoint Configuration, and Endpoint.
Deploying Fully Custom ML model
Step 1: Building the model and saving the artifacts.
● Build the model and serialize the object used for prediction. In this demo we are using Rasa to
train an NLU model.
● Once you train the model, save the artifact as a tar.gz archive. You can upload the artifact to an S3
bucket or include it in the container.
Deploying Fully Custom ML model
Step 2: Defining the server and inference code.
● When an endpoint is invoked, SageMaker interacts with the Docker container, which runs the inference
code for hosting services, processes the request, and returns the response. Containers need to
implement a web server that responds to /invocations and /ping on port 8080.
● The inference code in the container receives GET requests on /ping from the infrastructure and should
respond to SageMaker with an HTTP 200 status code and an empty body, indicating that the container is
ready to accept inference requests at the invocations endpoint.
● /invocations is the endpoint that receives POST requests and responds in the format specified by the
algorithm. To expose the model as a REST API, you need Flask, which is a WSGI (Web Server
Gateway Interface) application framework.
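A minimal Flask server implementing this contract might look like the sketch below. The `predict` stub is a placeholder; the deck's demo loads a Rasa NLU model there instead.

```python
# Minimal inference-server sketch for the SageMaker container contract:
# GET /ping for health checks, POST /invocations for predictions.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(payload):
    # Placeholder for real model inference (e.g. a loaded Rasa NLU model).
    return {"echo": payload}

@app.route("/ping", methods=["GET"])
def ping():
    # SageMaker health check: respond 200 with an empty body.
    return "", 200

@app.route("/invocations", methods=["POST"])
def invocations():
    payload = request.get_json(force=True)
    return jsonify(predict(payload))

if __name__ == "__main__":
    # SageMaker routes traffic to port 8080 inside the container.
    app.run(host="0.0.0.0", port=8080)
```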
Deploying Fully Custom ML model
Step 3: Sagemaker Container.
● SageMaker uses Docker containers extensively. You can put your scripts, algorithms, and inference code
for your models in containers, which include the runtime, system tools, libraries, and other code needed to
deploy your models, giving you the flexibility to run your own model.
● You create Docker containers from images that are saved in a repository. You build the images from
scripted instructions provided in a Dockerfile.
● The Dockerfile describes the image that you want to build, including a complete operating system
installation of the system that you want to run.
● Copy the project folder to /opt/code and make it the working directory.
● The Amazon SageMaker Containers library places the scripts that the container will run in the /opt/code/
directory
Deploying Fully Custom ML model
● How SageMaker Runs Your Inference Image
To configure a container to run as an executable, use an ENTRYPOINT instruction in a Dockerfile. Note the
following:
For model inference, SageMaker runs the container as:
● docker run image serve
SageMaker overrides default CMD statements in a container by specifying the serve argument after the
image name. The serve argument overrides arguments that you provide with the CMD command in the
Dockerfile. We recommend that you use the exec form of the ENTRYPOINT instruction:
● ENTRYPOINT ["executable", "param1", "param2"]
For example:
ENTRYPOINT ["python", "k_means_inference.py"]
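Putting these pieces together, a hypothetical Dockerfile for such an inference container might look like this; the file name app.py, the base image, and the dependency list are illustrative, not the demo's actual build file:

```dockerfile
# Hypothetical Dockerfile for a Flask-based inference container.
FROM python:3.9-slim

RUN pip install --no-cache-dir flask

# Copy the project into /opt/code and make it the working directory.
COPY . /opt/code
WORKDIR /opt/code

# Exec-form ENTRYPOINT; SageMaker appends "serve" after the image name,
# which arrives as an argument to this command.
ENTRYPOINT ["python", "app.py"]
```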
Deploying Fully Custom ML model
Step 4: Creating Model, Endpoint Configuration, and Endpoint.
● Models can be created via the API or the AWS Management Console. Provide the model name and an IAM role.
● In the container definition, choose to provide artifacts and the inference image location, and provide the S3
location of the artifacts and the image URI.
● After creating the model, create an endpoint configuration and add the model that was created.
● Create an endpoint using the existing configuration.
● Now your custom model has been deployed, and you can hit the invocations API via Postman.
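The API route for Step 4 can be sketched as three calls in sequence: model, endpoint configuration, endpoint. All names, ARNs, and URIs below are placeholders; `sm` would be `boto3.client("sagemaker")` in a real run.

```python
# Sketch of Step 4 via the SageMaker API: create a model, then an
# endpoint configuration, then the endpoint itself. All identifiers
# are placeholders.
def create_endpoint(sm, name, role_arn, image_uri, model_data_s3):
    sm.create_model(
        ModelName=name,
        ExecutionRoleArn=role_arn,
        PrimaryContainer={"Image": image_uri, "ModelDataUrl": model_data_s3},
    )
    sm.create_endpoint_config(
        EndpointConfigName=f"{name}-config",
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": name,
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }],
    )
    sm.create_endpoint(EndpointName=name, EndpointConfigName=f"{name}-config")
```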
Demo
Thank You !
Get in touch with us:
Lorem Studio, Lord Building
D4456, LA, USA
Ad

Recommended

Centralized logging
Centralized logging
blessYahu
 
Integrating Apache Kafka Into Your Environment
Integrating Apache Kafka Into Your Environment
confluent
 
Introduction to Azure Functions
Introduction to Azure Functions
Callon Campbell
 
Technical Deep Dive: Using Apache Kafka to Optimize Real-Time Analytics in Fi...
Technical Deep Dive: Using Apache Kafka to Optimize Real-Time Analytics in Fi...
confluent
 
Log analysis using elk
Log analysis using elk
Rushika Shah
 
AWS 클라우드 이해하기-사례 중심 (정민정) - AWS 웨비나 시리즈
AWS 클라우드 이해하기-사례 중심 (정민정) - AWS 웨비나 시리즈
Amazon Web Services Korea
 
Building Azure Logic Apps
Building Azure Logic Apps
BizTalk360
 
Centralized Logging System Using ELK Stack
Centralized Logging System Using ELK Stack
Rohit Sharma
 
Introduction to Azure Databricks
Introduction to Azure Databricks
James Serra
 
Serverless Application Development with Azure
Serverless Application Development with Azure
Callon Campbell
 
The A-Z of Data: Introduction to MLOps
The A-Z of Data: Introduction to MLOps
DataPhoenix
 
API Gateway report
API Gateway report
Gleicon Moraes
 
Presto: SQL-on-anything
Presto: SQL-on-anything
DataWorks Summit
 
Ml ops on AWS
Ml ops on AWS
PhilipBasford
 
Migrating Oracle database to PostgreSQL
Migrating Oracle database to PostgreSQL
Umair Mansoob
 
MLOps Virtual Event: Automating ML at Scale
MLOps Virtual Event: Automating ML at Scale
Databricks
 
Airflow at lyft
Airflow at lyft
Tao Feng
 
Introduction to Kafka Streams
Introduction to Kafka Streams
Guozhang Wang
 
Oracle to Postgres Migration - part 1
Oracle to Postgres Migration - part 1
PgTraining
 
Grafana introduction
Grafana introduction
Rico Chen
 
Amazon SageMaker for MLOps Presentation.
Amazon SageMaker for MLOps Presentation.
Knoldus Inc.
 
Dynatrace が特別な7つの理由
Dynatrace が特別な7つの理由
Harry Hiyoshi
 
Google Vertex AI
Google Vertex AI
VikasBisoi
 
Introduction to Apache Kafka
Introduction to Apache Kafka
Shiao-An Yuan
 
AWS basics
AWS basics
mbaric
 
Einführung in AWS - Übersicht über die wichtigsten Services
Einführung in AWS - Übersicht über die wichtigsten Services
AWS Germany
 
Azure DataBricks for Data Engineering by Eugene Polonichko
Azure DataBricks for Data Engineering by Eugene Polonichko
Dimko Zhluktenko
 
RabbitMQ
RabbitMQ
Sarunyhot Suwannachoti
 
Amazon SageMaker workshop
Amazon SageMaker workshop
Julien SIMON
 
Demystifying Machine Learning with AWS (ACD Mumbai)
Demystifying Machine Learning with AWS (ACD Mumbai)
AWS User Group Pune
 

More Related Content

What's hot (20)

Introduction to Azure Databricks
Introduction to Azure Databricks
James Serra
 
Serverless Application Development with Azure
Serverless Application Development with Azure
Callon Campbell
 
The A-Z of Data: Introduction to MLOps
The A-Z of Data: Introduction to MLOps
DataPhoenix
 
API Gateway report
API Gateway report
Gleicon Moraes
 
Presto: SQL-on-anything
Presto: SQL-on-anything
DataWorks Summit
 
Ml ops on AWS
Ml ops on AWS
PhilipBasford
 
Migrating Oracle database to PostgreSQL
Migrating Oracle database to PostgreSQL
Umair Mansoob
 
MLOps Virtual Event: Automating ML at Scale
MLOps Virtual Event: Automating ML at Scale
Databricks
 
Airflow at lyft
Airflow at lyft
Tao Feng
 
Introduction to Kafka Streams
Introduction to Kafka Streams
Guozhang Wang
 
Oracle to Postgres Migration - part 1
Oracle to Postgres Migration - part 1
PgTraining
 
Grafana introduction
Grafana introduction
Rico Chen
 
Amazon SageMaker for MLOps Presentation.
Amazon SageMaker for MLOps Presentation.
Knoldus Inc.
 
Dynatrace が特別な7つの理由
Dynatrace が特別な7つの理由
Harry Hiyoshi
 
Google Vertex AI
Google Vertex AI
VikasBisoi
 
Introduction to Apache Kafka
Introduction to Apache Kafka
Shiao-An Yuan
 
AWS basics
AWS basics
mbaric
 
Einführung in AWS - Übersicht über die wichtigsten Services
Einführung in AWS - Übersicht über die wichtigsten Services
AWS Germany
 
Azure DataBricks for Data Engineering by Eugene Polonichko
Azure DataBricks for Data Engineering by Eugene Polonichko
Dimko Zhluktenko
 
RabbitMQ
RabbitMQ
Sarunyhot Suwannachoti
 
Introduction to Azure Databricks
Introduction to Azure Databricks
James Serra
 
Serverless Application Development with Azure
Serverless Application Development with Azure
Callon Campbell
 
The A-Z of Data: Introduction to MLOps
The A-Z of Data: Introduction to MLOps
DataPhoenix
 
Migrating Oracle database to PostgreSQL
Migrating Oracle database to PostgreSQL
Umair Mansoob
 
MLOps Virtual Event: Automating ML at Scale
MLOps Virtual Event: Automating ML at Scale
Databricks
 
Airflow at lyft
Airflow at lyft
Tao Feng
 
Introduction to Kafka Streams
Introduction to Kafka Streams
Guozhang Wang
 
Oracle to Postgres Migration - part 1
Oracle to Postgres Migration - part 1
PgTraining
 
Grafana introduction
Grafana introduction
Rico Chen
 
Amazon SageMaker for MLOps Presentation.
Amazon SageMaker for MLOps Presentation.
Knoldus Inc.
 
Dynatrace が特別な7つの理由
Dynatrace が特別な7つの理由
Harry Hiyoshi
 
Google Vertex AI
Google Vertex AI
VikasBisoi
 
Introduction to Apache Kafka
Introduction to Apache Kafka
Shiao-An Yuan
 
AWS basics
AWS basics
mbaric
 
Einführung in AWS - Übersicht über die wichtigsten Services
Einführung in AWS - Übersicht über die wichtigsten Services
AWS Germany
 
Azure DataBricks for Data Engineering by Eugene Polonichko
Azure DataBricks for Data Engineering by Eugene Polonichko
Dimko Zhluktenko
 

Similar to AWS ML Model Deployment (20)

Amazon SageMaker workshop
Amazon SageMaker workshop
Julien SIMON
 
Demystifying Machine Learning with AWS (ACD Mumbai)
Demystifying Machine Learning with AWS (ACD Mumbai)
AWS User Group Pune
 
ACDKOCHI19 - Demystifying amazon sagemaker
ACDKOCHI19 - Demystifying amazon sagemaker
AWS User Group Kochi
 
Demystifying Amazon Sagemaker (ACD Kochi)
Demystifying Amazon Sagemaker (ACD Kochi)
AWS User Group Pune
 
Where ml ai_heavy
Where ml ai_heavy
Randall Hunt
 
Build, Train and Deploy ML Models using Amazon SageMaker
Build, Train and Deploy ML Models using Amazon SageMaker
Hagay Lupesko
 
Quickly and easily build, train, and deploy machine learning models at any scale
Quickly and easily build, train, and deploy machine learning models at any scale
AWS Germany
 
Build, train and deploy ML models with SageMaker (October 2019)
Build, train and deploy ML models with SageMaker (October 2019)
Julien SIMON
 
Build, train and deploy your ML models with Amazon Sage Maker
Build, train and deploy your ML models with Amazon Sage Maker
AWS User Group Bengaluru
 
End-to-End Machine Learning with Amazon SageMaker
End-to-End Machine Learning with Amazon SageMaker
Sungmin Kim
 
AWS reinvent 2019 recap - Riyadh - AI And ML - Ahmed Raafat
AWS reinvent 2019 recap - Riyadh - AI And ML - Ahmed Raafat
AWS Riyadh User Group
 
Amazon SageMaker Build, Train and Deploy Your ML Models
Amazon SageMaker Build, Train and Deploy Your ML Models
AWS Riyadh User Group
 
Custom processing and modeling with Amazon SageMaker - 2024-09-26
Custom processing and modeling with Amazon SageMaker - 2024-09-26
Alessandra Bilardi
 
Strata CA 2019: From Jupyter to Production Manu Mukerji
Strata CA 2019: From Jupyter to Production Manu Mukerji
Manu Mukerji
 
Scale Machine Learning from zero to millions of users (April 2020)
Scale Machine Learning from zero to millions of users (April 2020)
Julien SIMON
 
Advanced Machine Learning with Amazon SageMaker
Advanced Machine Learning with Amazon SageMaker
Julien SIMON
 
Data Summer Conf 2018, “Build, train, and deploy machine learning models at s...
Data Summer Conf 2018, “Build, train, and deploy machine learning models at s...
Provectus
 
AWS re:Invent 2018 - ENT321 - SageMaker Workshop
AWS re:Invent 2018 - ENT321 - SageMaker Workshop
Julien SIMON
 
Machine Learning with Amazon SageMaker
Machine Learning with Amazon SageMaker
Vladimir Simek
 
AI Stack on AWS: Amazon SageMaker and Beyond
AI Stack on AWS: Amazon SageMaker and Beyond
Provectus
 
Amazon SageMaker workshop
Amazon SageMaker workshop
Julien SIMON
 
Demystifying Machine Learning with AWS (ACD Mumbai)
Demystifying Machine Learning with AWS (ACD Mumbai)
AWS User Group Pune
 
ACDKOCHI19 - Demystifying amazon sagemaker
ACDKOCHI19 - Demystifying amazon sagemaker
AWS User Group Kochi
 
Demystifying Amazon Sagemaker (ACD Kochi)
Demystifying Amazon Sagemaker (ACD Kochi)
AWS User Group Pune
 
Build, Train and Deploy ML Models using Amazon SageMaker
Build, Train and Deploy ML Models using Amazon SageMaker
Hagay Lupesko
 
Quickly and easily build, train, and deploy machine learning models at any scale
Quickly and easily build, train, and deploy machine learning models at any scale
AWS Germany
 
Build, train and deploy ML models with SageMaker (October 2019)
Build, train and deploy ML models with SageMaker (October 2019)
Julien SIMON
 
Build, train and deploy your ML models with Amazon Sage Maker
Build, train and deploy your ML models with Amazon Sage Maker
AWS User Group Bengaluru
 
End-to-End Machine Learning with Amazon SageMaker
End-to-End Machine Learning with Amazon SageMaker
Sungmin Kim
 
AWS reinvent 2019 recap - Riyadh - AI And ML - Ahmed Raafat
AWS reinvent 2019 recap - Riyadh - AI And ML - Ahmed Raafat
AWS Riyadh User Group
 
Amazon SageMaker Build, Train and Deploy Your ML Models
Amazon SageMaker Build, Train and Deploy Your ML Models
AWS Riyadh User Group
 
Custom processing and modeling with Amazon SageMaker - 2024-09-26
Custom processing and modeling with Amazon SageMaker - 2024-09-26
Alessandra Bilardi
 
Strata CA 2019: From Jupyter to Production Manu Mukerji
Strata CA 2019: From Jupyter to Production Manu Mukerji
Manu Mukerji
 
Scale Machine Learning from zero to millions of users (April 2020)
Scale Machine Learning from zero to millions of users (April 2020)
Julien SIMON
 
Advanced Machine Learning with Amazon SageMaker
Advanced Machine Learning with Amazon SageMaker
Julien SIMON
 
Data Summer Conf 2018, “Build, train, and deploy machine learning models at s...
Data Summer Conf 2018, “Build, train, and deploy machine learning models at s...
Provectus
 
AWS re:Invent 2018 - ENT321 - SageMaker Workshop
AWS re:Invent 2018 - ENT321 - SageMaker Workshop
Julien SIMON
 
Machine Learning with Amazon SageMaker
Machine Learning with Amazon SageMaker
Vladimir Simek
 
AI Stack on AWS: Amazon SageMaker and Beyond
AI Stack on AWS: Amazon SageMaker and Beyond
Provectus
 
Ad

More from Knoldus Inc. (20)

Angular Hydration Presentation (FrontEnd)
Angular Hydration Presentation (FrontEnd)
Knoldus Inc.
 
Optimizing Test Execution: Heuristic Algorithm for Self-Healing
Optimizing Test Execution: Heuristic Algorithm for Self-Healing
Knoldus Inc.
 
Self-Healing Test Automation Framework - Healenium
Self-Healing Test Automation Framework - Healenium
Knoldus Inc.
 
Kanban Metrics Presentation (Project Management)
Kanban Metrics Presentation (Project Management)
Knoldus Inc.
 
Java 17 features and implementation.pptx
Java 17 features and implementation.pptx
Knoldus Inc.
 
Chaos Mesh Introducing Chaos in Kubernetes
Chaos Mesh Introducing Chaos in Kubernetes
Knoldus Inc.
 
GraalVM - A Step Ahead of JVM Presentation
GraalVM - A Step Ahead of JVM Presentation
Knoldus Inc.
 
Nomad by HashiCorp Presentation (DevOps)
Nomad by HashiCorp Presentation (DevOps)
Knoldus Inc.
 
Nomad by HashiCorp Presentation (DevOps)
Nomad by HashiCorp Presentation (DevOps)
Knoldus Inc.
 
DAPR - Distributed Application Runtime Presentation
DAPR - Distributed Application Runtime Presentation
Knoldus Inc.
 
Introduction to Azure Virtual WAN Presentation
Introduction to Azure Virtual WAN Presentation
Knoldus Inc.
 
Introduction to Argo Rollouts Presentation
Introduction to Argo Rollouts Presentation
Knoldus Inc.
 
Intro to Azure Container App Presentation
Intro to Azure Container App Presentation
Knoldus Inc.
 
Insights Unveiled Test Reporting and Observability Excellence
Insights Unveiled Test Reporting and Observability Excellence
Knoldus Inc.
 
Introduction to Splunk Presentation (DevOps)
Introduction to Splunk Presentation (DevOps)
Knoldus Inc.
 
Code Camp - Data Profiling and Quality Analysis Framework
Code Camp - Data Profiling and Quality Analysis Framework
Knoldus Inc.
 
AWS: Messaging Services in AWS Presentation
AWS: Messaging Services in AWS Presentation
Knoldus Inc.
 
Amazon Cognito: A Primer on Authentication and Authorization
Amazon Cognito: A Primer on Authentication and Authorization
Knoldus Inc.
 
ZIO Http A Functional Approach to Scalable and Type-Safe Web Development
ZIO Http A Functional Approach to Scalable and Type-Safe Web Development
Knoldus Inc.
 
Managing State & HTTP Requests In Ionic.
Managing State & HTTP Requests In Ionic.
Knoldus Inc.
 
Angular Hydration Presentation (FrontEnd)
Angular Hydration Presentation (FrontEnd)
Knoldus Inc.
 
Optimizing Test Execution: Heuristic Algorithm for Self-Healing
Optimizing Test Execution: Heuristic Algorithm for Self-Healing
Knoldus Inc.
 
Self-Healing Test Automation Framework - Healenium
Self-Healing Test Automation Framework - Healenium
Knoldus Inc.
 
Kanban Metrics Presentation (Project Management)
Kanban Metrics Presentation (Project Management)
Knoldus Inc.
 
Java 17 features and implementation.pptx
Java 17 features and implementation.pptx
Knoldus Inc.
 
Chaos Mesh Introducing Chaos in Kubernetes
Chaos Mesh Introducing Chaos in Kubernetes
Knoldus Inc.
 
GraalVM - A Step Ahead of JVM Presentation
GraalVM - A Step Ahead of JVM Presentation
Knoldus Inc.
 
Nomad by HashiCorp Presentation (DevOps)
Nomad by HashiCorp Presentation (DevOps)
Knoldus Inc.
 
Nomad by HashiCorp Presentation (DevOps)
Nomad by HashiCorp Presentation (DevOps)
Knoldus Inc.
 
DAPR - Distributed Application Runtime Presentation
DAPR - Distributed Application Runtime Presentation
Knoldus Inc.
 
Introduction to Azure Virtual WAN Presentation
Introduction to Azure Virtual WAN Presentation
Knoldus Inc.
 
Introduction to Argo Rollouts Presentation
Introduction to Argo Rollouts Presentation
Knoldus Inc.
 
Intro to Azure Container App Presentation
Intro to Azure Container App Presentation
Knoldus Inc.
 
Insights Unveiled Test Reporting and Observability Excellence
Insights Unveiled Test Reporting and Observability Excellence
Knoldus Inc.
 
Introduction to Splunk Presentation (DevOps)
Introduction to Splunk Presentation (DevOps)
Knoldus Inc.
 
Code Camp - Data Profiling and Quality Analysis Framework
Code Camp - Data Profiling and Quality Analysis Framework
Knoldus Inc.
 
AWS: Messaging Services in AWS Presentation
AWS: Messaging Services in AWS Presentation
Knoldus Inc.
 
Amazon Cognito: A Primer on Authentication and Authorization
Amazon Cognito: A Primer on Authentication and Authorization
Knoldus Inc.
 
ZIO Http A Functional Approach to Scalable and Type-Safe Web Development
ZIO Http A Functional Approach to Scalable and Type-Safe Web Development
Knoldus Inc.
 
Managing State & HTTP Requests In Ionic.
Managing State & HTTP Requests In Ionic.
Knoldus Inc.
 
Ad

Recently uploaded (20)

Oh, the Possibilities - Balancing Innovation and Risk with Generative AI.pdf
Oh, the Possibilities - Balancing Innovation and Risk with Generative AI.pdf
Priyanka Aash
 
" How to survive with 1 billion vectors and not sell a kidney: our low-cost c...
" How to survive with 1 billion vectors and not sell a kidney: our low-cost c...
Fwdays
 
Salesforce Summer '25 Release Frenchgathering.pptx.pdf
Salesforce Summer '25 Release Frenchgathering.pptx.pdf
yosra Saidani
 
OpenACC and Open Hackathons Monthly Highlights June 2025
OpenACC and Open Hackathons Monthly Highlights June 2025
OpenACC
 
The Future of Product Management in AI ERA.pdf
The Future of Product Management in AI ERA.pdf
Alyona Owens
 
AI VIDEO MAGAZINE - June 2025 - r/aivideo
AI VIDEO MAGAZINE - June 2025 - r/aivideo
1pcity Studios, Inc
 
Curietech AI in action - Accelerate MuleSoft development
Curietech AI in action - Accelerate MuleSoft development
shyamraj55
 
OpenPOWER Foundation & Open-Source Core Innovations
OpenPOWER Foundation & Open-Source Core Innovations
IBM
 
10 Key Challenges for AI within the EU Data Protection Framework.pdf
10 Key Challenges for AI within the EU Data Protection Framework.pdf
Priyanka Aash
 
Techniques for Automatic Device Identification and Network Assignment.pdf
Techniques for Automatic Device Identification and Network Assignment.pdf
Priyanka Aash
 
GenAI Opportunities and Challenges - Where 370 Enterprises Are Focusing Now.pdf
GenAI Opportunities and Challenges - Where 370 Enterprises Are Focusing Now.pdf
Priyanka Aash
 
Coordinated Disclosure for ML - What's Different and What's the Same.pdf
Coordinated Disclosure for ML - What's Different and What's the Same.pdf
Priyanka Aash
 
Using the SQLExecutor for Data Quality Management: aka One man's love for the...
Using the SQLExecutor for Data Quality Management: aka One man's love for the...
Safe Software
 
Cluster-Based Multi-Objective Metamorphic Test Case Pair Selection for Deep N...
Cluster-Based Multi-Objective Metamorphic Test Case Pair Selection for Deep N...
janeliewang985
 
UserCon Belgium: Honey, VMware increased my bill
UserCon Belgium: Honey, VMware increased my bill
stijn40
 
Smarter Aviation Data Management: Lessons from Swedavia Airports and Sweco
Smarter Aviation Data Management: Lessons from Swedavia Airports and Sweco
Safe Software
 
9-1-1 Addressing: End-to-End Automation Using FME
9-1-1 Addressing: End-to-End Automation Using FME
Safe Software
 
AI Agents and FME: A How-to Guide on Generating Synthetic Metadata
AI Agents and FME: A How-to Guide on Generating Synthetic Metadata
Safe Software
 
Raman Bhaumik - Passionate Tech Enthusiast
Raman Bhaumik - Passionate Tech Enthusiast
Raman Bhaumik
 
Mastering AI Workflows with FME by Mark Döring
Mastering AI Workflows with FME by Mark Döring
Safe Software
 
Oh, the Possibilities - Balancing Innovation and Risk with Generative AI.pdf
Oh, the Possibilities - Balancing Innovation and Risk with Generative AI.pdf
Priyanka Aash
 
" How to survive with 1 billion vectors and not sell a kidney: our low-cost c...
" How to survive with 1 billion vectors and not sell a kidney: our low-cost c...
Fwdays
 
Salesforce Summer '25 Release Frenchgathering.pptx.pdf
Salesforce Summer '25 Release Frenchgathering.pptx.pdf
yosra Saidani
 
OpenACC and Open Hackathons Monthly Highlights June 2025
OpenACC and Open Hackathons Monthly Highlights June 2025
OpenACC
 
The Future of Product Management in AI ERA.pdf
The Future of Product Management in AI ERA.pdf
Alyona Owens
 
AI VIDEO MAGAZINE - June 2025 - r/aivideo
AI VIDEO MAGAZINE - June 2025 - r/aivideo
1pcity Studios, Inc
 
Curietech AI in action - Accelerate MuleSoft development
Curietech AI in action - Accelerate MuleSoft development
shyamraj55
 
OpenPOWER Foundation & Open-Source Core Innovations
OpenPOWER Foundation & Open-Source Core Innovations
IBM
 
10 Key Challenges for AI within the EU Data Protection Framework.pdf
10 Key Challenges for AI within the EU Data Protection Framework.pdf
Priyanka Aash
 
Techniques for Automatic Device Identification and Network Assignment.pdf
Techniques for Automatic Device Identification and Network Assignment.pdf
Priyanka Aash
 
GenAI Opportunities and Challenges - Where 370 Enterprises Are Focusing Now.pdf
GenAI Opportunities and Challenges - Where 370 Enterprises Are Focusing Now.pdf
Priyanka Aash
 
Coordinated Disclosure for ML - What's Different and What's the Same.pdf
Coordinated Disclosure for ML - What's Different and What's the Same.pdf
Priyanka Aash
 
Using the SQLExecutor for Data Quality Management: aka One man's love for the...
Using the SQLExecutor for Data Quality Management: aka One man's love for the...
Safe Software
 
Cluster-Based Multi-Objective Metamorphic Test Case Pair Selection for Deep N...
Cluster-Based Multi-Objective Metamorphic Test Case Pair Selection for Deep N...
janeliewang985
 
UserCon Belgium: Honey, VMware increased my bill
UserCon Belgium: Honey, VMware increased my bill
stijn40
 
Smarter Aviation Data Management: Lessons from Swedavia Airports and Sweco
Smarter Aviation Data Management: Lessons from Swedavia Airports and Sweco
Safe Software
 
9-1-1 Addressing: End-to-End Automation Using FME
9-1-1 Addressing: End-to-End Automation Using FME
Safe Software
 
AI Agents and FME: A How-to Guide on Generating Synthetic Metadata
AI Agents and FME: A How-to Guide on Generating Synthetic Metadata
Safe Software
 
Raman Bhaumik - Passionate Tech Enthusiast
Raman Bhaumik - Passionate Tech Enthusiast
Raman Bhaumik
 
Mastering AI Workflows with FME by Mark Döring
Mastering AI Workflows with FME by Mark Döring
Safe Software
 

AWS ML Model Deployment

  • 1. Presented By: Aayush Srivastava & Divyank Saxena Custom ML Model Deployment using AWS Sagemaker
  • 2. Lack of etiquette and manners is a huge turn off. KnolX Etiquettes Punctuality Join the session 5 minutes prior to the session start time. We start on time and conclude on time! Feedback Make sure to submit a constructive feedback for all sessions as it is very helpful for the presenter. Silent Mode Keep your mobile devices in silent mode, feel free to move out of session in case you need to attend an urgent call. Avoid Disturbance Avoid unwanted chit chat during the session.
  • 3. Our Agenda 01 What is Machine Learning 02 What is AWS Sagemaker 03 How Does Sagemaker Work 04 Deploying Custom ML Model 05 Demo 05 06 Benefits of AWS Sagemaker
  • 4. . What is Machine Learning ? ● According to Arthur Samuel(1959), Machine Learning algorithms enable the computers to learn from data, and even improve themselves, without being explicitly programmed. ● Few Day to Day Applications of Machine learning 1.Image recognition 2.Speech Recognition 3.Product Recommendation 4.Virtual Personal Assistant
  • 6. ● Amazon Web Services (AWS) is an on-demand cloud platform offered by Amazon, that provides service over the internet. AWS services can be used to build, monitor, and deploy any application type in the cloud. Here's where the AWS Sagemaker comes into play. ● AWS is a broadly adopted cloud platform that offers several on-demand operations like compute power, database storage, content delivery, etc., to help corporates scale and grow. What is AWS
  • 7. ● Amazon SageMaker is a managed service in the Amazon Web Services (AWS) public cloud. It provides the tools to build, train and deploy machine learning models for predictive analytics applications. The platform automates the tedious work of building a production-ready artificial intelligence (AI) pipeline. ● Deploying ML models is challenging, even for experienced application developers. Amazon SageMaker aims to simplify the process. ● AutoML is also supported by sagemaker. It process of automating the tasks of applying machine learning to real-world problems. What is Amazon Sagemaker
  • 8. Some Benefits of Using AWS SageMaker • Highly scalable • It helps in creating and managing compute instances in the least amount of time • It helps in storing all ML components in one place • All your logs get easily stored in CloudWatch Logs • Amazon's own pre-built models are highly optimized to run on AWS • Maintains uptime: processes keep running without any stoppage • High data security Benefits of Amazon Sagemaker
  • 9. ● AWS SageMaker simplifies ML modeling into three steps: 1. Build/Train 2. Test and Tune 3. Deploy ● The following diagram shows how machine learning works with AWS SageMaker. Let's look at each of these steps in more detail. How does Amazon SageMaker work?
  • 10. Train ● AWS SageMaker lets developers customize Machine Learning instances through the Jupyter notebook interface ● It provides more than 15 widely used ML algorithms for training purposes ● It gives the capability to select the required server size for our notebook instance ● A user can write code (for creating model training jobs) using a notebook instance Train, Test and Tune, Deploy
  • 11. Train ● After creating a notebook instance, a user can select the type of environment to use. ● The necessary libraries come pre-installed according to your notebook instance type Train, Test and Tune, Deploy
  • 12. Tune ● Set up and import the required libraries ● Define a few environment variables and manage them for training the model ● Train and tune the built-in SageMaker algorithm by tweaking the hyperparameters according to your needs Train, Test and Tune, Deploy
  • 13. Tune ● Hyperparameter tuning is achieved by searching for a suitable combination of algorithm parameters Train, Test and Tune, Deploy
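As a sketch, a tuning job with the SageMaker Python SDK over hypothetical hyperparameter ranges might look like the following (the estimator, metric name, parameter names, and S3 path are all placeholders for your own setup):

```python
def tune(estimator):
    # sagemaker SDK import kept local so the sketch loads without AWS access
    from sagemaker.tuner import (
        HyperparameterTuner,
        ContinuousParameter,
        IntegerParameter,
    )

    tuner = HyperparameterTuner(
        estimator=estimator,                      # a configured SageMaker Estimator
        objective_metric_name="validation:rmse",  # metric emitted by the algorithm
        hyperparameter_ranges={
            "eta": ContinuousParameter(0.01, 0.3),
            "max_depth": IntegerParameter(3, 10),
        },
        max_jobs=10,            # total training jobs to launch
        max_parallel_jobs=2,    # jobs run concurrently
    )
    # hypothetical S3 location of the training channel
    tuner.fit({"train": "s3://my-bucket/train"})
    return tuner
```

The tuner launches multiple training jobs, each with a different parameter combination, and keeps the one with the best objective metric.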
  • 14. Testing ● It helps us assess model performance through manual testing ● Evaluating the accuracy of the ML model ● It helps us make sure that the achieved loss is acceptable for your task ● Helpful in checking model performance on real data Train, Test and Tune, Deploy
  • 15. Deploy ● Once tuning is done, models can be deployed to SageMaker endpoints ● In the endpoints, a real-time prediction is performed ● For deployment use: ● Now, evaluate your model and determine whether you have achieved your business goals ● To test your endpoint for prediction you can use the same Jupyter notebook instance Train, Test and Tune , Deploy
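The deployment snippet referenced on the slide can be sketched with the SageMaker Python SDK; the image URI, artifact path, and IAM role below are placeholders for your own values:

```python
def deploy_model(image_uri, model_data, role):
    # sagemaker SDK import kept local so the sketch loads without AWS credentials
    from sagemaker.model import Model

    model = Model(image_uri=image_uri, model_data=model_data, role=role)
    # deploy() creates the SageMaker model, endpoint configuration,
    # and real-time endpoint in one call
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.m5.large",
    )
    return predictor
```

The returned predictor can then be used for real-time prediction (`predictor.predict(...)`) from the same Jupyter notebook instance.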
  • 16. Deploying Fully Custom ML model ● What if you have a model trained outside SageMaker: can you still deploy it? ● What if you want to use algorithms or frameworks that are not supported by a SageMaker instance: is it possible? ● What if you want to create a REST endpoint for your custom model: is it possible? Answer: YES. If you want to use SageMaker as the service to deploy your model, it involves three AWS services: AWS SageMaker, AWS Elastic Container Registry (ECR), which provides versioning and access control for container images, and Amazon Simple Storage Service (S3). The diagram below describes the process in detail.
  • 18. Deploying Fully Custom ML model Requirements 1. Docker ○ Docker is a software platform that allows you to build, test, and deploy applications quickly. Using Docker, you can quickly deploy and scale applications in any environment and know your code will run. ○ We will be using Docker to containerize our application along with its dependencies. ○ Installation: $ sudo snap install docker 2. Flask ○ Flask is a web development framework written in Python ○ We are using it to specify the logic for handling your ML inference requests. It lets you respond to /ping and /invocations ○ Installation: pip install Flask 3. AWS CLI ○ The AWS command line interface is used to push our Docker container image to AWS ECR ○ Installation: $ sudo apt-get install awscli
  • 19. Deploying Fully Custom ML model Let's break the process into four steps: ● Step 1: Building the model and saving the artifacts. ● Step 2: Defining the server and inference code. ● Step 3: Building a SageMaker container. ● Step 4: Creating the model, endpoint configuration, and endpoint.
  • 20. Deploying Fully Custom ML model Step 1: Building the model and saving the artifacts. ● Build the model and serialize the object that is used for prediction. In this demo we are using Rasa to train an NLU model. ● Once you train the model, save the artifact as a tar.gz archive. You can upload the artifact to an S3 bucket, or you can include it in the container.
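Packaging and uploading the artifact can be sketched as below; the bucket and key names in the upload helper are placeholders, and the boto3 call requires AWS credentials to actually run:

```python
import tarfile


def package_artifacts(model_dir, archive_path="model.tar.gz"):
    # SageMaker expects model artifacts packaged as a gzipped tarball.
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(model_dir, arcname=".")
    return archive_path


def upload_artifact(archive_path, bucket, key):
    # boto3 import kept local; running this needs AWS credentials configured.
    import boto3

    boto3.client("s3").upload_file(archive_path, bucket, key)
```

For example, `upload_artifact("model.tar.gz", "my-bucket", "models/model.tar.gz")` would place the archive at the S3 location you later reference as the ModelDataUrl.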
  • 21. Deploying Fully Custom ML model Step 2: Defining the server and inference code. ● When an endpoint is invoked, SageMaker interacts with the Docker container, which runs the inference code for hosting services, processes the request, and returns the response. Containers need to implement a web server that responds to /invocations and /ping on port 8080. ● The container receives GET requests on /ping from the infrastructure and should respond with an HTTP 200 status code and an empty body, which indicates that the container is ready to accept inference requests at the invocations endpoint. ● /invocations is the endpoint that receives POST requests and responds in the format specified by the algorithm. To expose the model as a REST API, you need Flask, which is a WSGI (Web Server Gateway Interface) application framework.
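A minimal Flask inference server implementing the two required routes might look like this; the model loading and prediction logic are placeholders for your own artifact (here the request is simply echoed back):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
model = None  # load your serialized model artifact here at startup


@app.route("/ping", methods=["GET"])
def ping():
    # SageMaker health check: respond 200 with an empty body when ready.
    return "", 200


@app.route("/invocations", methods=["POST"])
def invocations():
    payload = request.get_json(force=True)
    # Replace this echo with your real model's predict call.
    prediction = {"echo": payload}
    return jsonify(prediction)


if __name__ == "__main__":
    # SageMaker routes traffic to port 8080 inside the container.
    app.run(host="0.0.0.0", port=8080)
```

In production containers, Flask's development server is typically fronted by a WSGI server such as gunicorn, but the routes stay the same.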
  • 22. Deploying Fully Custom ML model Step 3: SageMaker container. ● SageMaker uses Docker containers extensively. You can put your scripts, algorithms, and inference code for your models in containers, which include the runtime, system tools, libraries, and other code needed to deploy your models, giving you the flexibility to run your own model. ● You create Docker containers from images that are saved in a repository. You build the images from scripted instructions provided in a Dockerfile. ● The Dockerfile describes the image that you want to build, including a complete operating system installation of the system that you want to run. ● You need to copy the project folder into the image at /opt/ml/code and make it the working directory. ● The Amazon SageMaker Containers library places the scripts that the container will run in the /opt/ml/code/ directory
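A minimal Dockerfile sketch for such an inference image; the base image, dependency list, and the entrypoint script name `serve.py` are assumptions for illustration:

```dockerfile
FROM python:3.9-slim

# Install the inference dependencies (extend with your model's libraries).
RUN pip install --no-cache-dir flask

# Copy the project folder (inference code + model artifact) into the image.
COPY . /opt/ml/code
WORKDIR /opt/ml/code

# SageMaker starts the container with the "serve" argument;
# the exec-form ENTRYPOINT launches the inference server directly.
ENTRYPOINT ["python", "serve.py"]
```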
  • 23. Deploying Fully Custom ML model ● How SageMaker Runs Your Inference Image To configure a container to run as an executable, use an ENTRYPOINT instruction in the Dockerfile. Note the following: For model inference, SageMaker runs the container as: ● docker run image serve SageMaker overrides default CMD statements in a container by specifying the serve argument after the image name. The serve argument overrides any arguments that you provide with the CMD instruction in the Dockerfile. We recommend that you use the exec form of the ENTRYPOINT instruction: ● ENTRYPOINT ["executable", "param1", "param2"] For example: ENTRYPOINT ["python", "k_means_inference.py"]
  • 24. Deploying Fully Custom ML model Step 4: Creating the model, endpoint configuration, and endpoint. ● Models can be created via the API or the AWS Management Console. Provide a model name and an IAM role. ● In the container definition, choose to provide the artifacts and inference image location, and supply the S3 location of the artifacts and the image URI. ● After creating the model, create an endpoint configuration and add the model that has been created. ● Create an endpoint using the existing configuration. ● Now your custom model has been deployed, and you can hit the invocations API via Postman.
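The same three console steps can be scripted with the low-level boto3 API; the instance type is an example, and the image URI, artifact URL, and role ARN are placeholders you supply:

```python
def create_endpoint(model_name, image_uri, model_data_url, role_arn):
    # boto3 import kept local; running this needs AWS credentials configured.
    import boto3

    sm = boto3.client("sagemaker")

    # 1. Create the model: inference image + S3 artifact + execution role.
    sm.create_model(
        ModelName=model_name,
        PrimaryContainer={"Image": image_uri, "ModelDataUrl": model_data_url},
        ExecutionRoleArn=role_arn,
    )

    # 2. Create the endpoint configuration referencing that model.
    config_name = model_name + "-config"
    sm.create_endpoint_config(
        EndpointConfigName=config_name,
        ProductionVariants=[{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.m5.large",   # example instance type
            "InitialInstanceCount": 1,
        }],
    )

    # 3. Create the real-time endpoint from the configuration.
    sm.create_endpoint(
        EndpointName=model_name + "-endpoint",
        EndpointConfigName=config_name,
    )
```

Once the endpoint status is InService, POST requests to its invocations URL (for example via Postman or `sagemaker-runtime invoke-endpoint`) return your model's predictions.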
  • 25. Demo
  • 26. Thank You ! Get in touch with us