MLOps: Continuous Delivery & Automation Pipelines in Machine Learning

MLOps is an ML philosophy and practice designed to unify the development of ML (Dev) and the operations of ML (Ops). It ensures automation and monitoring in all aspects of ML system development, such as integration, testing, release, deployment and infrastructure maintenance. Key aspects of MLOps include training reproducibility, autoscaling compute resources, efficient workflows with continuous integration and deployment, and capabilities to meet governance objectives. MLOps re-engineers models for production by addressing issues such as model degradation over time and by tracking statistical results to trigger alerts or rollbacks when needed. It also applies the DevOps principles of continuous integration and delivery to ML systems through an automated CI/CD pipeline.


MLOps: A Reality!!!!
Data science and ML have become essential capabilities for solving dynamic real-world problems, transforming industries and generating value in every field. At present, the building blocks for the successful application of ML are widely available:

• Large datasets
• Cheap on-demand computing resources
• Specialized ML accelerators on the major cloud platforms
• Rapid progress in many fields of ML research (such as computer vision, natural language understanding and recommender systems)

Many organisations are also engaging their data science teams and ML resources to guide decisions that create value for their customers.

MLOps is an ML philosophy and practice designed to unify the development of ML (Dev) and the operation of ML (Ops). MLOps ensures that all aspects of ML system development, such as integration, testing, release, deployment and infrastructure maintenance, are automated and monitored.

FEATURES OF MLOps

• Training reproducibility: advanced tracking of datasets, code, experiments and environments in a rich model registry.
• Autoscaling: powerful managed compute, no-code deployment and tooling for easy model training and deployment.
• Efficient workflows: scheduling and management capabilities to build and deploy with continuous integration/continuous deployment (CI/CD).
• Governance: advanced capabilities to meet governance and control objectives and to promote model transparency and fairness.
DRIVERS TO MLOps:

Data has proved to be a strategic differentiator over the decades. Where reports were once generated exclusively by IT from overnight data warehouses, top performers have since moved from passive reporting to predictive and prescriptive analytics, expanded their data science expertise, and changed established paradigms to advance their enterprises. In recent years, rapidly declining processing costs and improved productivity have given organisations new opportunities to get more out of their data. Many organisations have been gathering data for years or even decades in their data centres, data marts, data lakes and organisational hubs. Given appropriate training data for their use case, data scientists can implement and train an ML model with good predictive performance on an offline holdout dataset. However, the significant problem isn't building an ML model; the complexity lies in building an integrated ML system and continuously operating it in production. As Google's long history of running production ML services shows, there are many pitfalls in operating ML-based systems in production.

MLOps: Re-engineering Models


DevOps Vs MLOps:

DevOps offers benefits such as shorter development cycles, faster deployment and more dependable releases. To achieve these benefits, you apply two principles in the development of software systems:
• Continuous integration (CI)
• Continuous delivery (CD)

An ML system is a software system, so you can develop and operate ML systems reliably with similar practices. However, MLOps differs from DevOps in several ways:

o Team skills: In an ML project, the team usually includes data scientists or ML researchers who concentrate on data exploration, model development and experimentation. These individuals are not necessarily professional software engineers who can build production-class services.
o Development: ML is experimental in nature. To find what works for the problem as quickly as possible, you try out different features, algorithms, modelling techniques and parameter configurations. The challenge is to track what worked and what didn't, and to maintain reproducibility while maximising the reusability of code.
o Testing: Testing an ML system is more involved than testing other software systems. In addition to conventional unit and integration tests, you need data validation, trained-model quality evaluation and model validation.
o Deployment: Deploying an offline-trained ML model as a prediction service is not the whole story. ML systems can require a multi-step pipeline to automatically retrain and deploy the model; this pipeline adds complexity, because steps that data scientists perform manually before deployment have to be automated.
o Production: ML models can lose performance not only because of suboptimal coding, but also because of continuously evolving data profiles. In other words, models can degrade in more ways than typical software systems do, and this degradation needs to be taken into account. You therefore need to track summary statistics of your data and monitor the online performance of your model, so that you can send alerts or roll back when values deviate from your expectations (see the monitoring sketch at the end of this section).

ML systems are similar to other software systems when it comes to continuous integration of source control, unit testing, integration testing and continuous delivery of a software module or package. However, there are a few important differences in ML:
• CI is no longer only about testing and validating code and components, but also about testing and validating data, data schemas and models.
• CD is no longer about a single software package or service, but about a system (the ML training pipeline) that automatically deploys another service (the model prediction service).
• CT (continuous training) is a new property, unique to ML systems, that is concerned with automatically retraining and serving the models.

Adopting ML involves a cultural change and a technical foundation of people, processes and platforms that operate in a responsive, agile manner: an approach that can be called MLOps. It cannot be built overnight; learning from those at the forefront of ML how to map the capabilities that drive MLOps against an organisation's unique needs and resources is the right place to start.
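To make the Production point concrete, the snippet below is a minimal, hypothetical sketch (not from the original slides) of comparing summary statistics of live feature values against training-time values and flagging drift; the feature names, example numbers and the 0.5 threshold are illustrative assumptions, and a real system would use a proper statistical drift test.

```python
import numpy as np

# Illustrative sketch: flag features whose live distribution has drifted away
# from the training distribution, as a trigger for an alert or rollback.
# Feature names, data and the threshold are made-up assumptions.

def drift_score(train_values: np.ndarray, live_values: np.ndarray) -> float:
    """Shift of the live mean from the training mean, in training standard deviations."""
    train_std = float(train_values.std()) or 1.0  # avoid division by zero
    return abs(float(live_values.mean()) - float(train_values.mean())) / train_std

def check_feature_drift(train_data: dict, live_data: dict, threshold: float = 0.5):
    """Return (feature, score) pairs for features that drifted past the threshold."""
    alerts = []
    for feature, train_values in train_data.items():
        score = drift_score(np.asarray(train_values, dtype=float),
                            np.asarray(live_data[feature], dtype=float))
        if score > threshold:
            alerts.append((feature, round(score, 2)))
    return alerts

if __name__ == "__main__":
    train = {"age": [31, 42, 27, 55, 39], "income": [48000, 52000, 61000, 45000, 58000]}
    live = {"age": [33, 40, 29, 51, 38], "income": [92000, 88000, 95000, 90000, 87000]}
    # "income" drifts, "age" does not -> send an alert or roll back the model.
    print(check_feature_drift(train, live))
```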

CI/CD PIPELINE AUTOMATION

[Diagrams: "Implementation of ML using CI/CD" and "Characteristics of Automated Pipelines" — figures not reproduced here]


MLOps CI/CD AUTOMATION:

A robust automated CI/CD system is required for fast and reliable updates of the pipelines in production. This automated CI/CD system helps your data scientists quickly develop new ideas for feature engineering; these ideas can be implemented, and the new pipeline components are automatically built, tested and deployed to the target environment.

The MLOps setup includes the following components:
• Source control
• Test & build services
• Deployment services
• Model registry
• Feature store
• ML metadata store
• ML pipeline orchestrator

CHARACTERISTICS:

The pipeline consists of the following stages:
• Development and experimentation: You iteratively try out new ML algorithms and new modelling, where the experiment steps are orchestrated. The output of this stage is the source code of the ML pipeline steps, which is then pushed to a source repository.
• Pipeline continuous integration: You build and test the source code. The outputs of this stage are pipeline components (packages, executables and artefacts) to be deployed in a later stage.
• Pipeline continuous delivery: The artefacts from the CI stage are deployed to the target environment. The output of this stage is a deployed pipeline with the new model implementation.
• Automated triggering: The deployed pipeline is executed automatically in production, on a schedule or in response to a trigger. The output of this stage is a trained model that is pushed to the model registry.
• Model continuous delivery: You serve the trained model as a prediction service. The output of this stage is a deployed model prediction service.
• Monitoring: Statistics on model performance are collected from live data. The output of this stage is a trigger to execute the pipeline or to start a new experiment cycle.

The data analysis step is still a manual process for data scientists before the pipeline starts a new iteration of an experiment; the model analysis step is also still manual. A sketch of this stage flow appears below.
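The stages above can be read as a chain in which each stage's output feeds the next. The following is a minimal, hypothetical Python sketch of that chain; every function body is a placeholder standing in for a real build system, orchestrator, model registry and serving platform, and all names, URIs and return values are illustrative assumptions rather than anything from the source.

```python
# Hypothetical sketch of the stage flow described above; all names are placeholders.

def continuous_integration(source_repo: str) -> list:
    """Build and test source code; output: pipeline components (packages, artefacts)."""
    return [f"{source_repo}/artefacts/preprocess.whl", f"{source_repo}/artefacts/trainer.whl"]

def pipeline_continuous_delivery(artefacts: list, target_env: str) -> str:
    """Deploy the CI artefacts; output: a deployed pipeline in the target environment."""
    return f"pipeline@{target_env} ({len(artefacts)} components)"

def automated_trigger(deployed_pipeline: str) -> str:
    """Run the deployed pipeline on a schedule or trigger; output: a trained model."""
    return "model-registry://models/example/42"  # pushed to the model registry

def model_continuous_delivery(model_uri: str) -> str:
    """Serve the trained model; output: a prediction service endpoint."""
    return "https://example.internal/predict"  # placeholder endpoint

def monitoring(endpoint: str) -> bool:
    """Collect live statistics; output: whether to trigger the pipeline again."""
    return True

if __name__ == "__main__":
    artefacts = continuous_integration("git://ml-pipeline")
    pipeline = pipeline_continuous_delivery(artefacts, target_env="prod")
    model = automated_trigger(pipeline)
    service = model_continuous_delivery(model)
    if monitoring(service):
        print("drift or decay detected -> re-run pipeline / start a new experiment cycle")
```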
ADDITIONAL COMPONENTS:

ML pipeline triggers: Depending on your use case, you can automate the ML production pipelines to retrain the models with new data:
• On demand: ad-hoc manual execution of the pipeline.
• On a schedule: New data is routinely available for the ML system on a daily, weekly or monthly basis. The retraining frequency depends on how often the data patterns change and how expensive it is to retrain your models.
• On availability of new training data: The pipeline is triggered when new data is collected and made available in the source databases (see the sketch below).
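As a small illustration (my own sketch, not from the deck), the three trigger types could be combined into a single retraining decision; the seven-day interval and the 10,000-row threshold are arbitrary assumptions.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the three trigger types described above.
# The schedule interval and row-count threshold are arbitrary assumptions.

def should_retrain(manual_request: bool,
                   last_trained: datetime,
                   new_rows_available: int,
                   schedule_interval: timedelta = timedelta(days=7),
                   min_new_rows: int = 10_000) -> bool:
    if manual_request:                                          # on demand
        return True
    if datetime.now() - last_trained >= schedule_interval:      # on a schedule
        return True
    if new_rows_available >= min_new_rows:                      # on new training data
        return True
    return False

if __name__ == "__main__":
    # Trained two days ago, but 25,000 new rows have landed -> retrain.
    print(should_retrain(False, datetime.now() - timedelta(days=2), new_rows_available=25_000))
```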

DATA SCIENCE STEPS FOR ML

1. Data extraction: You select and integrate the relevant data from the different data sources for the ML task.
2. Data analysis: You perform exploratory data analysis (EDA) to understand the data available for building the ML model.
3. Data preparation: The data is prepared for the ML task. This includes data cleaning and splitting the data into training, validation and test sets.
4. Model training: The data scientist uses various algorithms and techniques to train different ML models on the prepared data.
5. Model evaluation: The model is evaluated on a holdout test set to assess its quality. The output of this step is a set of metrics describing the model's quality.
6. Model validation: The model is confirmed to be adequate for deployment, that is, its predictive performance is better than a certain baseline.
7. Model serving: The validated model is deployed to a target environment to serve predictions. The deployment may be:
• An embedded model on an edge or mobile device.
• Part of a batch prediction system.
8. Model monitoring: The model's predictive performance is monitored, potentially triggering a new iteration of the ML process.
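Below is a hypothetical end-to-end sketch of steps 1–7 using scikit-learn; the bundled dataset, the logistic-regression model and the 0.7 accuracy baseline are illustrative assumptions rather than anything prescribed by the slides.

```python
# Hypothetical walk-through of steps 1-7 with scikit-learn; choices are illustrative.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 1. Data extraction: load data from a source (a bundled dataset stands in here).
X, y = load_breast_cancer(return_X_y=True)

# 2. Data analysis (EDA): inspect shapes, ranges, class balance, etc.
print("rows, features:", X.shape, "positive rate:", round(float(y.mean()), 2))

# 3. Data preparation: clean/transform and split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 4. Model training: fit one or more candidate models on the prepared data.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 5. Model evaluation: measure quality on the holdout test set.
accuracy = accuracy_score(y_test, model.predict(X_test))
print("holdout accuracy:", round(accuracy, 3))

# 6. Model validation: accept the model only if it beats an agreed baseline (assumed 0.7).
BASELINE = 0.7
assert accuracy > BASELINE, "model does not beat the baseline; do not deploy"

# 7. Model serving: persist the validated model so a prediction service can load it.
joblib.dump({"scaler": scaler, "model": model}, "model.joblib")
```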
CONTINUOUS INTEGRATION:

In this setup, the pipeline and its components are built, tested and packaged whenever new code is committed or pushed to the source code repository. Besides building packages, container images and executables, the CI process can include the following checks (some of which are sketched as unit tests after the Continuous Delivery list below):
• Unit testing your feature engineering logic.
• Unit testing the different methods implemented in your model. For example, you have a function that accepts a categorical column and encodes it as a one-hot feature.
• Testing that your model training converges (that is, the loss of your model goes down with iterations and the model overfits a few sample records).
• Testing that your model training does not produce NaN values due to division by zero or the manipulation of very large or very small values.
• Testing that each component in the pipeline produces the expected artefacts.
• Integration testing between the components of the pipeline.

CONTINUOUS DELIVERY:

In this stage, your system continuously delivers new pipeline implementations to the target environment, which in turn deliver prediction services of the newly trained model. For fast and reliable continuous delivery of pipelines and models, take the following into account:
• Before deploying your model, verify its compatibility with the target infrastructure. For example, confirm that the packages the model requires are installed in the serving environment and that the required memory, compute and accelerator resources are available.
• Test the prediction service by calling the service API and verifying that you get the response you expect. This test typically catches problems that can occur when the model version changes and expects a different input.
• Automated deployment to a test environment, for example a deployment triggered by pushing code to the development branch.
• Semi-automated deployment to a pre-production environment, for example a deployment triggered by merging code into the main branch after reviewers approve the changes.
• Manual deployment to the production environment after several successful runs of the pipeline in the pre-production environment.
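The CI checks listed above can be written as ordinary unit tests. The following pytest-style sketch (my own, not from the slides) covers three of them: the feature engineering (one-hot) helper, the no-NaN check and the convergence check; the helper, the toy gradient-descent model and the test data are illustrative assumptions.

```python
# Pytest-style sketch of three CI checks: feature engineering logic, no NaNs
# produced by training, and training loss decreasing over iterations.
import numpy as np

def one_hot(values, categories):
    """Encode a categorical column as one-hot vectors (illustrative helper)."""
    index = {c: i for i, c in enumerate(categories)}
    encoded = np.zeros((len(values), len(categories)))
    for row, value in enumerate(values):
        encoded[row, index[value]] = 1.0
    return encoded

def train_linear_model(X, y, lr=0.1, steps=50):
    """Tiny gradient-descent linear regression; returns per-step losses and weights."""
    w = np.zeros(X.shape[1])
    losses = []
    for _ in range(steps):
        pred = X @ w
        losses.append(float(np.mean((pred - y) ** 2)))
        w -= lr * (2 / len(y)) * X.T @ (pred - y)
    return losses, w

def test_one_hot_encoding():
    encoded = one_hot(["red", "blue"], categories=["red", "green", "blue"])
    assert encoded.tolist() == [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]

def test_training_produces_no_nans():
    X, y = np.array([[1.0], [2.0], [3.0]]), np.array([2.0, 4.0, 6.0])
    losses, w = train_linear_model(X, y)
    assert not np.isnan(w).any() and not np.isnan(losses).any()

def test_loss_decreases_with_iterations():
    X, y = np.array([[1.0], [2.0], [3.0]]), np.array([2.0, 4.0, 6.0])
    losses, _ = train_linear_model(X, y)
    assert losses[-1] < losses[0]
```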
