7 - From ML To Production

Uploaded by Sefouane bechikh

Chapter 7.

From ML to
Production
Model Deployment
Building a machine learning project is a joy! Whether you're working
on your own cool project or part of a team of data wizards tackling a tricky
problem, the ultimate dream is to launch your models into action.

Now, how do you do that? Well, there are plenty of fun ways! You can turn your
model into a web service, expose it as an API, put it on a Raspberry Pi, or
even make it live on a mobile device. In this chapter, we'll dive into the exciting
world of deployment – we'll explore different methods, share some cool tricks,
introduce useful tools, talk about best practices, and even throw in
some MLOps architecture magic! Let's make our models shine!
What is model deployment in machine
learning?
Model deployment in machine learning is like bringing your hard work to life.
Picture this: you and your team spent months creating a super-smart model
that can spot fraudulent products in your online store with almost perfect
accuracy. Now, here's the next step – you don't just want it to sit there, you
want it to actively prevent fraud in real-time, keeping your ecommerce haven
safe from shady stuff.

So, what's model deployment? It's the thrilling move of taking your machine
learning model from the lab (where it's been fine-tuned and polished) and
putting it to work in the real world. In this live environment, your model starts
making predictions based on actual, real-world data.
Steps to take before deploying
an ML model
Just like DevOps has become a well-polished set of practices over the years,
MLOps is all about getting your model out into the world – and keeping it
there. MLOps sits right at the crossroads of DevOps and the ever-evolving
world of machine learning. While a DevOps pipeline typically ends with a
solid push to production, MLOps doesn't stop there – it keeps on working
even after your model has hit the ground running. It's like the difference between
delivering a package and then making sure it's doing its job perfectly by keeping
an eye on it and giving it a little extra training if needed. MLOps is the superhero
that never takes a break!

What comes next is often referred to as ML architecture, and before you dive
into deploying your models, here are some handy tips to keep in mind.

First off, even if your model boasts impressive features that deliver over 95%
accuracy, be cautious – loading a feature-heavy model into production can
slow down or even crash your serving system.
Now, let's talk about a few key principles:

• Portability: Imagine your model as a globetrotter – the more portable it is,
the easier it can hop from one machine to another. A portable model
means quicker load times and less stress when it comes to rewriting code.

• Scalability: A model needs to be like a superhero cape – ready to adapt to
real-world challenges, process business data, predict outcomes, and
scale up as the problems grow.

• Reproducibility: Your model should be a magician's hat, able to recreate
any past training run or prediction scenario that might pop up in the future.

• Testing: Think of it as trying on different superhero costumes – your
model should have the ability to be tested in various versions even
after it's out there in the real world.

• Automation: Let your model take care of some tasks itself –
automation is like giving your model superpowers in the MLOps
pipeline, making things smoother and faster.
So, before you launch your model into action, make sure it's not just accurate
but also flexible, adaptable, and ready for the unpredictable adventures of the
real world!
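Before we move on, here's what the reproducibility principle looks like in practice. This tiny sketch uses only Python's standard library (the function name is ours, purely illustrative): fixing a random seed makes a "random" sample identical across runs – the same idea behind the `random_state` argument in libraries like scikit-learn.

```python
import random

# A minimal sketch of reproducibility: a seeded random generator makes a
# "random" subsample of training rows identical run after run, so the
# experiment can be replayed exactly.
def sample_training_rows(n_rows, k, seed):
    rng = random.Random(seed)  # seeded generator -> repeatable choices
    return rng.sample(range(n_rows), k)

run_1 = sample_training_rows(1000, 5, seed=42)
run_2 = sample_training_rows(1000, 5, seed=42)

assert run_1 == run_2  # identical subsets: the experiment is reproducible
```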

Containers:
Imagine containers as magical boxes that revolutionized how we deploy
software. They're like mini-universes where everything your program needs to
run smoothly is neatly packed inside. Now, in MLOps, these containers are like
superhero suits – they help us create the perfect environments super quickly.
Pair them up with CI/CD workflows (which we'll talk about next), and voila!
Scaling becomes a piece of cake! Just make sure every step of your machine
learning journey is packed snugly in its own container for maximum
awesomeness!
CI/CD:
Alright, now picture this: Continuous Integration (CI) and Continuous
Deployment (CD) are like your trusty sidekicks in the world of MLOps.
They're the ones who make sure your whole ML process is running
smoothly from start to finish. With automated testing at every step, your
deployments become as smooth as butter melting on a hot pancake!
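To make that concrete, here's a hedged sketch of the kind of automated test a CI pipeline might run on every commit. `predict` is a toy stand-in for your real inference function, and the test names and checks are illustrative, not from any particular framework.

```python
# A toy stand-in for a trained model's decision function.
def predict(features):
    return 1 if sum(features) > 10 else 0

def test_predict_returns_valid_label():
    # CI fails the build if the model stops returning a known class.
    assert predict([12, 3]) in (0, 1)

def test_predict_is_deterministic():
    # The same input must always produce the same output.
    assert predict([1, 2, 3]) == predict([1, 2, 3])

# In a real project a runner like pytest would discover these tests;
# here we just call them directly.
test_predict_returns_valid_label()
test_predict_is_deterministic()
```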
Testing
Testing time! There are tons of ways to put your model through its paces before
it hits the big leagues of production. But not all tests are created equal for our
ML pipeline. Let's dive into a few fun ones:

• Differential Tests: Run the previous model and the new candidate on the
same data and compare their predictions. It's like checking whether your
superhero got stronger over time or needs a power-up!

• Benchmark Tests: Ever wonder how your model stacks up against a known
baseline or its older versions? Benchmark tests are here to save the day! They
help you measure your model's performance and decide if it's time to add
some cool new features or just sit back and relax.

• Load/Stress Tests: This one's all about putting your CPUs and GPUs
through their paces in big ML projects. It's like making sure your
superheroes can handle all the action without breaking a sweat!
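The differential test idea can be sketched in a few lines. Both "models" below are toy stand-ins for real loaded model objects, and the 10% disagreement tolerance is illustrative, not a standard.

```python
# Differential test sketch: run old and new model on the same inputs and
# fail the pipeline if they disagree too often.
def old_model(x):
    return 1 if x >= 5 else 0

def new_model(x):
    return 1 if x >= 5 else 0

inputs = [1, 4, 5, 9, 12]
disagreements = sum(old_model(x) != new_model(x) for x in inputs)
disagreement_rate = disagreements / len(inputs)

# Illustrative threshold: tolerate at most 10% prediction changes.
assert disagreement_rate <= 0.10
```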

So, there you have it – containers, CI/CD, and testing in MLOps, making
sure your machine learning journey is as smooth and fun as possible!
The Deployment Pipeline Step
by Step
Let's break down the deployment process step by step in simple terms:

• Preparation: First things first, gather all your tools and make sure your
model is ready to go. It's like gearing up before a big adventure!

• Containerize Your Model: Imagine putting your model in a magical box
called a container. This makes it easy to transport and ensures everything it
needs to run smoothly is packed inside.

• Choose Your Deployment Platform: Decide where you want your model to
live – whether it's on a web server, a cloud platform, or even a Raspberry Pi.
It's like picking the perfect home for your model superhero!

• Deploy Your Container: Time to release your model into the wild! Deploy
your container to your chosen platform and let it start making predictions in
the real world.
• Monitor and Test: Keep an eye on your model's performance and run tests
regularly to make sure everything is running smoothly. It's like checking in on
your superhero to make sure they're saving the day without any hiccups.

• Scale if Needed: If your model starts getting really popular and needs to
handle more data or requests, you can scale it up by adding more resources.
It's like giving your superhero sidekick reinforcements when they're facing a big
challenge.

• Update and Iterate: As time goes on, you might need to tweak your model or
add new features. Keep updating and iterating to make sure your model stays
ahead of the game.

MLOps is all about automating and optimizing the deployment process,
making it faster, more efficient, and easier to manage. With MLOps by your
side, deploying models becomes a breeze!
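The whole journey above can be sketched as a toy pipeline. Every function below is a placeholder for real tooling (Docker for the container step, a cloud CLI for deployment, a monitoring service for health checks); the names and image tag are ours, purely illustrative.

```python
# Toy deployment pipeline mirroring the steps above.
def containerize(model_file):
    # Stand-in for building a Docker image around the model file.
    return {"image": "myapp:latest", "model": model_file}

def deploy(container, platform):
    # Stand-in for pushing the container to a web server, cloud, or Pi.
    return {"platform": platform, "status": "running", **container}

def is_healthy(deployment):
    # Monitoring boils down to checks like this, run on a schedule.
    return deployment["status"] == "running"

container = containerize("your_trained_model.pkl")
deployment = deploy(container, platform="cloud")
assert is_healthy(deployment)
```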
Deploying a machine learning
model on the web
Streamlit
Streamlit is an open-source Python library that makes it easy to create web
applications for machine learning and data science projects. It allows you to
build interactive and user-friendly web interfaces directly from your Python
scripts, without needing to write HTML, CSS, or JavaScript code.

Here is a tutorial on Streamlit that you can go through:
https://docs.streamlit.io/get-started

Note: in general, an AI project follows a standard structure. You may check this
GitHub repository for more information.
Saving the model

After training a model of your choice, save it, for example using the following
code:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import joblib

# Load example dataset
iris = load_iris()
X, y = iris.data, iris.target

# Train a model (Random Forest Classifier, for example)
model = RandomForestClassifier()
model.fit(X, y)

# Save the model to a file
joblib.dump(model, 'your_trained_model.pkl')
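joblib is the usual choice for scikit-learn models because it handles large NumPy arrays efficiently; the save-and-restore round trip itself can be sketched with the standard-library pickle module that joblib builds on (the "model" here is just a stand-in dictionary, not a real estimator):

```python
import io
import pickle

# A stand-in "model": any Python object holding learned parameters.
model_params = {"weights": [0.2, 0.8], "bias": -0.1}

# Round-trip through bytes - the same idea as joblib.dump / joblib.load
# with a .pkl file on disk.
buffer = io.BytesIO()
pickle.dump(model_params, buffer)
buffer.seek(0)
restored = pickle.load(buffer)

assert restored == model_params  # the saved model survives the round trip
```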
Step 1: Install Streamlit

• Create a folder named myapp
• Open the folder in Visual Studio Code
• Inside that project, create a virtual environment using `python -m venv env`
• Activate the env environment using the activate script
• Install Streamlit using `pip install streamlit`
• Install all the packages you used when training and testing the model
Step 2: Create a file for the app

• Inside the folder, create a file called app.py
• Copy the following code:
import streamlit as st
import pandas as pd
import joblib

# Load the trained model
model = joblib.load('your_trained_model.pkl')

# Define a function to make predictions
def predict(input_features):
    # Perform any necessary preprocessing on input_features here,
    # then make predictions using the loaded model
    prediction = model.predict(input_features)
    return prediction

# Create the web interface
def main():
    st.title('Your Machine Learning Model Deployment')
    st.write('Enter the features below to get predictions:')

    # Create input fields for the user to enter data
    feature1 = st.number_input('Feature 1')
    feature2 = st.number_input('Feature 2')
    # Add more input fields as needed

    # Combine input features into a DataFrame
    input_data = pd.DataFrame({'Feature 1': [feature1],
                               'Feature 2': [feature2]})
    # Add more features as needed

    if st.button('Predict'):
        prediction = predict(input_data)
        st.write('Prediction:', prediction)

if __name__ == '__main__':
    main()
Step 3: Run the app

• Run the Streamlit app by executing the following command in your
terminal: `streamlit run app.py`
The realm of MLOps is vast and continually expanding, encompassing a diverse
array of practices and tools aimed at streamlining the machine learning lifecycle.

At its core, MLOps integrates principles from DevOps with the unique challenges
posed by machine learning workflows, ensuring efficient model development,
deployment, and maintenance. One critical aspect of MLOps is monitoring, which
involves continuously tracking model performance, data drift, and system health
in real-time. Monitoring enables teams to detect and address issues promptly,
ensuring that machine learning models operate effectively in production
environments.
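As a flavor of what monitoring code can look like, here is a minimal, hedged sketch of data-drift detection: compare the mean of a live feature against its training-time baseline and raise a flag on large shifts. The function name, threshold, and values are illustrative – real systems use proper statistical tests and rolling baselines.

```python
# Toy drift monitor: flag the feature if its live mean moves more than
# `threshold` (relative) away from the training-time mean.
def drift_alert(train_values, live_values, threshold=0.25):
    train_mean = sum(train_values) / len(train_values)
    live_mean = sum(live_values) / len(live_values)
    # Relative shift of the live mean from the training baseline.
    shift = abs(live_mean - train_mean) / (abs(train_mean) or 1.0)
    return shift > threshold

baseline = [1.0, 1.1, 0.9, 1.0]   # feature values seen at training time
stable = [1.05, 0.95, 1.0, 1.0]   # production data that looks the same
drifted = [2.0, 2.2, 1.9, 2.1]    # production data that has shifted

assert drift_alert(baseline, stable) is False
assert drift_alert(baseline, drifted) is True
```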

With the rapid evolution of technology and the increasing adoption of artificial
intelligence across industries, MLOps and monitoring play pivotal roles in
maximizing the value and reliability of machine learning systems, driving
innovation, and empowering organizations to leverage the full potential of their
data-driven solutions.
