MLOps Components, Tools, Process, and Metrics - A Systematic Literature Review
This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2025.3534990
ABSTRACT With the growing popularity of machine learning, implementations of environments for developing and maintaining machine-learning models, known as MLOps, are becoming more common. The number of publications in this area is relatively small, although it is growing rapidly. Our goal was to review the current state of the literature in the MLOps area and answer the following research questions: What classes of tools are used in MLOps environments? Which tool implementations are the most popular? What processes are implemented within MLOps? What metrics are used to measure the effectiveness of MLOps implementation? Based on this review, we identified the classes of tools included in the MLOps architecture, along with their most popular implementations. While some tools originate from DevOps practices, others, such as Model Orchestrators, Feature Stores, and Model Repositories, are unique to MLOps. We propose a reference MLOps architecture based on these findings and outline the stages of the model production process. We also sought metrics that would allow us to assess and compare the effectiveness of MLOps practices, but unfortunately, we were unable to find a satisfactory answer in this area.
This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
implementation of MLOps in organizations. From this goal, we derived the following research questions:
RQ1. What classes of tools do MLOps solutions comprise?
RQ2. What tools are used to implement MLOps in organizations?
RQ3. What processes are included in MLOps solutions?
RQ4. What metrics are used to assess the success of an MLOps implementation?
In this study, the term 'MLOps solution' refers to a comprehensive framework of tools and processes designed to support the development and deployment of ML models within organizations. We use the term 'component' for a class of tools and define 'tool' as one of the numerous implementations of a component. Thus, each component can have various implementations as different tools from different vendors. The MLOps process refers to a set of activities performed by data scientists or fully automated; these activities involve the use of MLOps tools, but the two are not synonymous.
The remainder of this paper is organized as follows. Section 2 discusses the related literature and the rationale for conducting this review. Section 3 gives a detailed explanation of the steps taken to prepare the review and of its limitations. Section 4 provides answers to the research questions. The paper's summary and plans for future work are outlined in Section 5.

II. RELATED LITERATURE
The existing literature includes several reviews related to MLOps; we have found five reviews related to ours. First, the review [3] examines MLOps tools and their features. This is an area that we also explore in this review. However, we would like to present the current results for 2024, whereas in the article in question the latest references are from 2020. In addition, its authors used non-peer-reviewed online sources, which is justified by the early stage of MLOps development in 2020 and the limited number of publications and tools in this area. Review [4], which includes articles published up to 2022, leverages peer-reviewed sources such as IEEE Xplore, ACM, Springer, and Elsevier. However, it was limited to 18 discussed articles, which is also justified by the stage of development of the MLOps area. The other literature reviews have a similar limitation, and we will not raise this argument when discussing subsequent reviews. The review [4] discussed how MLOps is understood in the literature, how it differs from DevOps, and what the most important topics discussed in it are. However, we want to focus on the architectural dimension of MLOps and to answer other research questions. The next review we discuss is [5]. The authors answer questions about what the MLOps literature focuses on in Data Science projects, what activities it automates, and what the applications of MLOps are. The authors also present a high-level categorization of articles into areas such as tool, review, framework and application. Therefore, they focus more on the management area than on the architectural area. Ask Berstad Kolltveit, et al. [6] identified ways of operationalizing models: packaging and integration (containerization), deployment, serving and inference, and monitoring and logging. They describe challenges during those steps and the tools used during the operationalization of models. The review [7] is the most comprehensive among those discussed. The questions answered by the authors concern three issues: terms used to describe AI production processes, tasks included in these processes, and challenges related to them. Based on our literature analysis, we did not find an up-to-date review that answered our research questions.

III. RESEARCH METHOD

A. Research query
When working on the literature review, we followed the guidelines by Kitchenham et al. [8]. We started the literature review by selecting the databases to be searched. We selected the most popular databases that have been used in other reviews, namely:
A. EBSCO
B. IEEE
C. ProQuest
D. Science Direct
E. Springer
We performed the following query on these databases: mlops OR "machine learning operations" OR "ML operations" OR "AI operations" OR "Artificial intelligence operations".
Each database had different detailed search options related to the query. Therefore, we selected the following search options in the individual databases:
• EBSCO: We excluded the following databases: GreenFILE, Health Source, MEDLINE, Agricola.
• Science Direct: We filtered the results to include the following subject areas: Computer Science; Engineering; Decision Sciences; Business, Management and Accounting.
• IEEE: We did not filter the results.
• ProQuest: We filtered the results to include Scholarly Journals.
• Springer: We selected the following filters: Language – English; Disciplines – Computer Science, Engineering, Business and management; Subdisciplines – Artificial intelligence, Computational intelligence, Machine learning, Computer communication networks, Computer applications, Computer engineering, networks.

B. Selection process
From the results obtained in response to the query, we selected the texts that were included in the literature review. The selection of articles comprised two stages. The first involved selecting articles based on titles and abstracts. At this stage, we excluded articles clearly unrelated to our field and retained
those potentially relevant. In the second stage, the content of the articles was analyzed to select the final set from those selected in the first stage. To make the selection process clear and repeatable, we always followed the inclusion and exclusion criteria. We included publications that met the following criteria:
• describe the components of MLOps,
• describe MLOps processes,
• describe tools used to implement MLOps,
• describe metrics that allow for assessing the effects of MLOps implementation.
We excluded publications that met the following criteria:
• present a technical tutorial of a single tool,
• focus on AutoML processes without touching on the MLOps area,
• present an implementation of models whose production use they call MLOps,
• present notation (UML or similar) for modeling MLOps,
• present network solutions that can support the work of ML models.
For each article that passed the second stage, we recorded for statistical purposes:
• what classes of tools are used in MLOps architecture design,
• what solutions are used that implement those classes of tools,
• what steps are proposed in MLOps processes,
• what KPIs were used to assess the effects of MLOps implementation.
The selection of publications began with 2615 search results. In the first stage, publications that would definitely not be included in the review based on the title and abstract were eliminated; 135 publications passed the first stage. Finally, 41 publications were selected for the review after the second stage. The distribution of articles by stages and databases is presented in Fig. 2, which shows that not all text databases gave equally satisfactory results. Although Springer produced the highest number of initial results, none of its publications met the inclusion criteria. The IEEE database was characterized by the highest efficiency, as 8.5% of its articles were used in the review. The exact percentage distribution is shown in Fig. 3. Fig. 4 shows the division of publications by year and makes clear that the MLOps area is gaining popularity. Because the database queries were made in April 2024, the result for 2024 was low.

FIGURE 3. Percentage of query results passed into the 2nd stage

FIGURE 4. Number of publications by year after the 2nd stage

C. Limitations
In our work, we found two significant limitations that may affect the results. The first one is the data sources. MLOps is of interest not only to scientists but also to private companies. We assume that a lot of work in this area has been done in organizations that do not publish their results. The second limitation is the short development time of MLOps combined with its rapid evolution. At the moment, it is not possible to define the MLOps architecture definitively for years to come, and at the same time it is necessary to update the results frequently.
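The two-stage selection funnel reported in the selection process above can be tallied directly from the counts given in the text (2615 query results, 135 after title-and-abstract screening, 41 after full-text analysis); the variable names below are ours, not from any reviewed source:

```python
# Selection funnel from the review: raw query hits, survivors of stage 1
# (title/abstract screening) and stage 2 (full-text analysis).
funnel = {"query_results": 2615, "after_stage_1": 135, "after_stage_2": 41}

stage1_pass = funnel["after_stage_1"] / funnel["query_results"]   # screening pass rate
stage2_pass = funnel["after_stage_2"] / funnel["after_stage_1"]   # full-text pass rate
overall = funnel["after_stage_2"] / funnel["query_results"]       # end-to-end yield

print(f"{stage1_pass:.1%} {stage2_pass:.1%} {overall:.1%}")  # 5.2% 30.4% 1.6%
```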
IV. RESULTS
TABLE I
REFERENCES TO CLASSES OF MLOPS TOOLS IN LITERATURE
includes measures of model quality. Depending on the model type, different metrics may be stored.
The next most popular tool classes were the Model Orchestrator (20 occurrences) and CI/CD (19 occurrences). These tools may seem similar to each other, but they have different responsibilities in the MLOps architecture. The Model Orchestrator handles the processes of training, monitoring, and scoring models. The CI/CD tool was used to build containers that support processes executed in the Model Orchestrator. Additionally, CI/CD tools were used in software testing and deployment. This especially affected those models that were exposed through a REST API, because the model was then "wrapped" in a REST framework such as DjangoRestFramework or Flask. What both classes of tools have in common is the ability to execute pipelines. Therefore, it was possible to find a proposal for CI/CD-class tools to act as a Model Orchestrator (such as Jenkins). However, such situations were rare, and most often the tools belonging to the two groups were separate.
Another group of tool classes found in MLOps solutions are classic DevOps tools (which also include CI/CD tools). Here we can include such classes as Code Repository, Containerization, Container Repository, Container Orchestration and Queue Management. Although these tools are not explicitly mentioned in all publications, we can suspect that they are included in most solutions in the MLOps area. Using a Code Repository and Containerization along with tools to manage containerization (a repository and an orchestrator) is a modern standard of software development, and this also applies to software in Machine Learning. Using a queue management system is not standard, but it is used to perform tasks that take longer than would be acceptable over a REST API. This applies in particular to scoring tasks, as training is most often managed by the Model Orchestrator.
The next classes of tools are Feature Store (12 occurrences) and Database (9 occurrences). Both classes play a similar role in the MLOps architecture, and they are often combined to get the best effect. Databases store data that is then used in MLOps processes. A Feature Store differs from a database by emphasizing data access and organization specifically for machine learning purposes. Using it, you can organize data as features. Data organized in this way can then be retrieved from the database using a special API provided by the Feature Store, most often through a ready-made Python library. Some Feature Store solutions are equipped with their own database and make up a complete data access and storage solution. Others focus on the aspects of data access and management but require a database on which they can operate. During the review, we encountered every combination:
• database without a feature store,
• stand-alone feature store,
• feature store with an external database.
Another class of tools in the MLOps architecture is Model Monitoring (11 occurrences). These tools monitor the quality of models. This does not just apply to storing model metrics, which is often done by a model repository; it is primarily about calculating these metrics, as well as other auxiliary indicators such as data drift, thanks to which you can improve the model in advance, before the results deteriorate significantly.
The last class of tools we encountered is Compute Spread, although we found it in only 3 publications. Its task is to distribute computations between servers, but at a lower level than a Model Orchestrator does: the Model Orchestrator operates at the level of individual steps in the pipeline, while the Compute Spread server distributes the calculations within a single step, such as the training of a neural network.
Table 1 summarizes the occurrences of each tool class in the reviewed publications, indicating where they were mentioned.
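The Feature Store access pattern described above, with data organized as named features and fetched per entity through a small API rather than by ad-hoc queries, can be illustrated with a self-contained sketch; the class and method names below are illustrative assumptions, not the API of any particular Feature Store product:

```python
from collections import defaultdict

class ToyFeatureStore:
    """Minimal illustration of the Feature Store access pattern: features
    are written under names per entity and read back in a fixed order,
    hiding the backing storage from the model code."""

    def __init__(self):
        # backing "database": entity_id -> {feature_name: value}
        self._rows = defaultdict(dict)

    def ingest(self, entity_id, features):
        """Write a batch of feature values for one entity."""
        self._rows[entity_id].update(features)

    def get_features(self, entity_id, feature_names):
        """Read only the requested features, in a stable order,
        as a training/scoring-ready vector."""
        row = self._rows[entity_id]
        return [row.get(name) for name in feature_names]

store = ToyFeatureStore()
store.ingest("customer_42", {"age": 31, "avg_basket": 57.0, "churned": 0})
vector = store.get_features("customer_42", ["age", "avg_basket"])
print(vector)  # [31, 57.0]
```

A production-grade Feature Store additionally versions the feature definitions and serves the same features consistently to both training pipelines and online scoring, which is what distinguishes it from a plain database.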
TABLE II
REFERENCES TO MLOPS TOOLS IN LITERATURE
In monitoring, there are four solutions that are not mutually exclusive. These include Evidently AI, Prometheus, Grafana and Neptune AI. The functions of the latter also allow it to be classified in the Model Repository class. Prometheus and Grafana are also popular solutions in DevOps and allow monitoring of metrics and logs provided by running containers. What deserves special mention is Evidently AI, which is a solution dedicated to the needs of machine learning. It allows the calculation of many metrics that support the assessment of the quality and variability of data and model results during operation. It also has ready-made dashboards that can be run in Grafana.
The next classes of tools are Code Repository and CI/CD. In the first category, the most popular is GitHub (3 occurrences), and in the second, Jenkins (4 occurrences). Jenkins is still the most popular tool in this category, mainly because of its long and rich history, which has led to its dominance in this area. Recently, however, solutions that are easier to maintain, such as GitHub Actions and GitLab CI, have been gaining popularity. The only implementation in queue management that we encountered is Kafka.
Within the analyzed materials, we identified two solutions belonging to the Distributed Computation class: Flower and Determined. Both frameworks enable the distribution of tasks within the model training process, particularly for hyperparameter search.

C. Ad RQ3: MLOps process
The process of creating and maintaining models comprises different steps depending on the source. These differences have two causes. The first is differing interpretations of the process, leading to variations in how the activities are named and grouped into steps, even though the core actions remain consistent. Differences also arise from model
TABLE III
REFERENCES TO MLOPS PROCESS STEPS IN LITERATURE
Model evaluation: [42], [45], [43], [44], [38], [46], [26], [47], [36], [48] (10)
Model delivery – Model registration: [45], [21], [23], [26], [36] (5)
CI/CD: [42], [45], [21], [23], [38], [46], [26], [36], [48] (9)
Model Monitoring: [42], [45], [21], [44], [23], [46], [47], [36], [48] (9)
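The staged process behind the steps tabulated above can be sketched as a linear pipeline of step functions, in the way a Model Orchestrator would chain them; the toy dataset, the closed-form linear model and the registry layout are illustrative assumptions, not taken from any reviewed paper:

```python
# Sketch of the model production process: Data Preparation -> Model
# Preparation (training, evaluation) -> Model delivery (registration).

def collect_data():
    # Data Preparation: a toy dataset of (feature, label) pairs
    return [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

def preprocess(rows):
    # normalize the feature to the [0, 1] range
    xs = [x for x, _ in rows]
    lo, hi = min(xs), max(xs)
    return [((x - lo) / (hi - lo), y) for x, y in rows]

def train(rows):
    # Model Preparation: fit y ~ a*x + b by ordinary least squares
    n = len(rows)
    sx = sum(x for x, _ in rows)
    sy = sum(y for _, y in rows)
    sxx = sum(x * x for x, _ in rows)
    sxy = sum(x * y for x, y in rows)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def evaluate(model, rows):
    # mean squared error, stored with the model in the repository
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in rows) / len(rows)

def register(model, mse, registry):
    # Model delivery: record the model and its metrics, mark it production
    registry.append({"model": model, "mse": mse, "stage": "production"})

registry = []
rows = preprocess(collect_data())
model = train(rows)
register(model, evaluate(model, rows), registry)
print(registry[0]["stage"])  # production
```

In a real deployment each function would be a separate pipeline step executed by the orchestrator, with the registry role played by a Model Repository.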
architectures and usage methods. For example, model scoring can be performed cyclically within the pipeline of the Model Orchestrator or online using the REST API served by a container. This results in significantly different actions required to use the model.
All the steps that we extracted from the literature are included in Fig. 5, and references to the descriptions of the steps are provided in Table 3. We have divided the steps into four stages. The first stage involves the analysis of the business problem; it was not further developed in the analyzed literature because it is adjacent to, but outside, the scope of MLOps.
The name we chose for the second stage is 'Data Preparation'. It includes the steps: Data collection, Feature selection, Data validation, Data analysis, and Data preprocessing. The names of the steps we mention are not exactly the same in the cited sources. For example, you may encounter both the names feature selection and feature engineering. Since these steps involve similar activities, we decided on a common name; other steps, which may appear under different names in different sources, follow the same principle. The feature selection step aims to extract features from the data that will be used in subsequent steps of model preparation. The Data validation step covers slightly broader activities than those typically implemented in software: in addition to verifying data structures and correctness, their completeness, timeliness, consistency, reliability, etc. are also checked. Data analysis includes detecting relationships between data and assessing the possibility of their use in the model. The data preprocessing step includes transforming data into the forms required by the trained model, such as variable coding and normalization.
The next stage is Model Preparation. It includes Model training and Model evaluation. Model training can be performed directly by the orchestrator (e.g. Kubeflow), the orchestrator can ask another tool to distribute the computation, or the distribution can be performed in parallel at both levels. After the model is trained, it is evaluated. Model evaluation parameters are calculated, depending on the model type, and are ultimately sent to the repository together with the model.
After the model is prepared, it is deployed. Depending on how the model is used, this may be a simpler or more complex operation. A simpler variant is when the implementation involves registering a new model in the registry and then marking it as production. In this variant, the software that uses the model will download its new version from the repository; in particular, it may be a pipeline managed by a model orchestrator or a service that uses models as part of a provided REST API. A more complex variant of this process involves implementing a microservice that exposes the model via an API. In this variant, in addition to the previously mentioned steps, CI/CD processes must occur, the scope of which is analogous to the traditional DevOps process.
After model deployment, the last step is to monitor the solution. It includes both data monitoring and metrics that evaluate the quality of model operation. Monitoring data is especially important when data drift occurs. Regardless of the monitored area, the detection of anomalies may cause the decision to start the entire process from the beginning.

D. Ad RQ4: MLOps efficiency metrics
Unfortunately, we did not find an answer to Research Question 4 in the analyzed literature. None of the reviewed articles presented metrics that could measure the effectiveness of an MLOps implementation in an organization. However, we found two publications that presented methods for assessing the maturity of MLOps processes in an organization, namely [2] and [49]. The content of these articles is closest to the answer to the question we asked, but they do not answer it.
In [2], the authors list the following stages of the MLOps Maturity Model: Ad hoc, DataOps, Manual MLOps, Automated MLOps, Kaizen. For each stage, they describe the criteria that are necessary to achieve it. The text [49] presents methods for evaluating the capability to produce models. They list the different components of the assessment in the areas of Data Integration, Data Preparation, Modeling, and Deployment.
We identified several potential reasons for the absence of publications addressing metrics. The most likely reason is the short history of MLOps implementations. In order to develop metrics, and above all to discuss market standards for these metrics, there must be a sufficient base for their comparison. Therefore, MLOps would have to have a more established form. This is especially true for stabilizing the scope of MLOps, because different authors interpret it differently. There is also a rich and stable base of metrics related to classic software development, which may satisfy some needs for metrics in the MLOps area. The last reason we suspect is that organizations have developed individual metrics to evaluate MLOps implementations but have not published them. This leads us to a proposal for further research in this area. Future research should analyze MLOps implementations in organizations to identify existing metrics and establish a standardized framework. In the absence of such metrics, we would propose metrics based on the analysis of MLOps processes.

V. CONCLUSION AND FUTURE WORK
MLOps is a young and growing field of machine learning. In the reviewed articles, one could find many architectural proposals, used tools and processes. Despite the variety of approaches, certain foundational elements and leading solutions can be identified as the core of MLOps. These solutions can be divided into two parts: basic DevOps and dedicated MLOps. The first category includes the CI/CD tool classes, Code Repository, Container Orchestrator. The
second category includes primarily the following: Model Orchestrator, Model Repository and Feature Store. A summary of MLOps tool classes, together with the most popular solutions, is presented in Fig. 6. It is also the MLOps reference architecture, which can be referred to when building dedicated solutions. We also defined the model production process in the MLOps environment. However, market standards for some steps, such as model monitoring, have not yet been developed. Finally, the absence of publications detailing metrics for evaluating MLOps implementations was unexpected. Perhaps this is due to the immaturity of this area, and measurable indicators will yet be developed to assess the quality of an implemented MLOps environment and compare these indicators between companies.
In the future, we plan to work in three areas. First, we want to address the problem of limited answers to the question about metrics. We want to conduct research in private companies and prepare a proposal of metrics for MLOps implementations. Second, we want to continuously update the MLOps architecture and publish when significant changes to this work occur. Third, we want to expand the sources of information about MLOps implementations to include private companies that do not publish their results but agree to share them with us.

REFERENCES
[1] Mohammad Heydari, Zahra Rezvani, Challenges and Experiences of Iranian Developers with MLOps at Enterprise, 2023 7th Iranian Conference on Advances in Enterprise Architecture (ICAEA 2023), 2023, DOI: 10.1109/ICAEA60387.2023.10414442
[2] Meenu Mary John, et al., Advancing MLOps from Ad hoc to Kaizen, 2023 49th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2023, DOI: 10.1109/IC2E52221.2021.00034
[3] Gilberto Recupito, et al., A Multivocal Literature Review of MLOps Tools and Features, 2022 48th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2022, DOI: 10.1109/SEAA56994.2022.00021
[4] Tsakani Mboweni, et al., A Systematic Review of Machine Learning DevOps, Proc. of the International Conference on Electrical, Computer and Energy Technologies (ICECET 2022), 2022, DOI: 10.1109/ICECET55527.2022.9872968
[5] Christian Haertel, et al., MLOps in Data Science Projects: A Review, 2023 IEEE International Conference on Big Data (BigData), 2023, DOI: 10.1109/BigData59044.2023.10386139
[6] Ask Berstad Kolltveit, Jingyue Li, Operationalizing Machine Learning Models - A Systematic Literature Review, Workshop on Software Engineering for Responsible AI, 2022,
[7] Monika Steidl, et al., The pipeline for the continuous development of artificial intelligence models - Current state of research and practice, The Journal of Systems & Software, 2023, DOI:
[8] B. Kitchenham and S. Charters, Guidelines for performing systematic literature reviews in software engineering, 2007,
[9] Chorwon Kim, et al., A Microservice-based MLOps Platform for Efficient Development of AI Services in an Edge-Cloud Environment, 2023 14th International Conference on Information and Communication Technology Convergence, 2023, DOI: 10.1109/ICTC58733.2023.10392296
[10] Daniel Seaman, et al., An Approach To Experiment Reproducibility Through MLOps and Semantic Web Technologies, 2023 XLIX Latin American Computer Conference, 2023, DOI: 10.1109/CLEI60451.2023.10346140
[11] Emmanuel Raj, et al., Edge MLOps: An Automation Framework for AIoT Applications, 2021 IEEE International Conference on Cloud Engineering, 2021,
[12] Gregor Cerar, et al., Feature Management for Machine Learning Operation Pipelines in AI Native Networks, 2023 International Balkan Conference on Communications and Networking (BalkanCom), 2023, DOI: 10.1109/BalkanCom58402.2023.10167936
[13] Anas Bodor, et al., From Development to Deployment: An Approach to MLOps Monitoring for Machine Learning Model Operationalization, 2023 14th International Conference on Intelligent Systems: Theories and Applications, 2023, DOI: 10.1109/SITA60746.2023.10373733
[14] T Vishwambari, Sonali Agrawal, Integration of Open-Source Machine Learning Operations Tools into a Single Framework, 2023 International Conference on Computing, Communication, and Intelligent Systems, 2023, DOI: 10.1109/ICCCIS60361.2023.10425558
[15] Dominik Kreuzberger, et al., Machine Learning Operations (MLOps): Overview, Definition, and Architecture, IEEE Access, 2023, DOI: 10.1109/ACCESS.2023.3262138
[16] Satvik Garg, et al., On Continuous Integration / Continuous Delivery for Automated Deployment of Machine Learning Models using MLOps, 2021 IEEE Fourth International Conference on Artificial Intelligence and Knowledge Engineering, 2021, DOI: 10.1109/AIKE52691.2021.00010
[17] Peini Liu, et al., Scanflow-K8s: Agent-based Framework for Autonomic Management and Supervision of ML Workflows in Kubernetes Clusters, 2022 22nd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), 2022, DOI: 10.1109/CCGRID54584.2022.00047
[18] Mariam Barry, et al., StreamMLOps: Operationalizing Online Learning for Big Data Streaming & Real-Time Applications, 2023 IEEE 39th International Conference on Data Engineering (ICDE), 2023, DOI: 10.1109/ICDE55515.2023.00272
[19] Filippo Lanubile, et al., Teaching MLOps in Higher Education through Project-Based Learning, 2023 IEEE/ACM 45th International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), 2023, DOI: 10.1109/ICSE-SEET58685.2023.00015
[20] Meenu Mary John, et al., Towards MLOps: A Framework and Maturity Model, 2021 47th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2021, DOI:
[25] Enrico Alberti, et al., AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0, Systems 2024, 2024, DOI: https://doi.org/10.3390/systems12020048
[26] Ioannis Karamitsos, et al., Applying DevOps Practices of Continuous Automation for Machine Learning, Information 2020, 11, 363, 2020, DOI: 10.3390/info11070363
[27] Antonio Carlos Cob-Parro, et al., Fostering Agricultural Transformation through AI: An Open-Source AI Architecture Exploiting the MLOps Paradigm, Agronomy 2024, 14, 259, 2023, DOI: https://doi.org/10.3390/agronomy14020259
[28] Desta Haileselassie Hagos, et al., Scalable Artificial Intelligence for Earth Observation Data Using Hopsworks, Remote Sens. 2022, 14, 1889, 2022, DOI: https://doi.org/10.3390/rs14081889
[29] Ruibo Chen, et al., An automatic model management system and its implementation for AIOps on microservice platforms, The Journal of Supercomputing (2023) 79:11410–11426, 2023, DOI: https://doi.org/10.1007/s11227-023-05123-4
[30] Dhaval Gajjar, How to Implement MLOps, , 2024,
[31] Ayush Shridhar, Deepak Nadig, Heuristic-based Resource Allocation for Cloud-native Machine Learning Workloads, 2022 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), 2022, DOI: 10.1109/ANTS56424.2022.10227727
[32] Le Phi Hung, et al., IncWAD: An Incremental Learning Approach for Web Attack Detection using MLOps, 2023 International Conference on Advanced Technologies for Communications (ATC), 2023, DOI: 10.1109/ATC58710.2023.10318852
[33] Niranjan DR, Mohana, Jenkins Pipelines: A Novel Approach to Machine Learning Operations (MLOps), Proceedings of the International Conference on Edge Computing and Applications (ICECAA 2022), 2022, DOI: 10.1109/ICECAA55415.2022.9936252
[34] Yue Zhou, et al., Towards MLOps: A Case Study of ML Pipeline Platform, 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE), 2020, DOI: 10.1109/ICAICE51518.2020.00102
[35] Amirhossein Hossein Nia, et al., Unlocking the Power of Data in Telecom: Building an Effective MLOps Infrastructure for Model Deployment, 2023 7th Iranian Conference on Advances in Enterprise Architecture (ICAEA 2023), 2023, DOI: 10.1109/ICAEA60387.2023.10414445
[36] Rakshith Subramanya, et al., From DevOps to MLOps: Overview and Application to Electricity Market Forecasting, Applied Sciences 2022, 12, 9851, 2022, DOI: https://doi.org/10.3390/app12199851
[37] Huan Chen, et al., A Federated implementation for MLOps framework based on non-intrusive load monitoring, 2023 IEEE 5th Eurasia Conference on IOT, Communication and Engineering (ECICE), 2023, DOI: 10.1109/ECICE59523.2023.10383048
[38] Gabriele Baldoni, et al., A Dataflow-Oriented Approach for Machine-Learning-Powered Internet of Things Applications, Electronics 2023, 12, 3940, 2023, DOI: https://doi.org/10.3390/electronics12183940
[39] Konstantinos Filippou, et al., Structure Learning and Hyperparameter Optimization Using an Automated Machine Learning (AutoML) Pipeline, Information 2023, 14, 232, 2023, DOI: https://doi.org/10.3390/info14040232
[40] Mattia Antonini, et al., Tiny-MLOps: a framework for orchestrating ML applications at the far edge of IoT systems, 2022 IEEE
10.1109/SEAA53835.2021.00050 International Conference on Evolving and Adaptive Intelligent
[21] Tim Raffin, et al., A reference architecture for the operationalization Systems (EAIS), 2022, DOI: 10.1109/EAIS51927.2022.9787703
of machine learning models in manufacturing, 10th CIRP Global Web [41] Raúl Miñón, et al., Pangea An MLOps Tool for Automatically
Conference – Material Aspects of Manufacturing Processes, 2022, Generating Infrastructure and Deploying Analytic Pipelines in Edge,
[22] Juan Pineda-Jaramillo, Francesco Viti, MLOps in freight rail Fog and Cloud Layers, Sensors 2022, 22, 4425, 2022, DOI:
operations, Engineering Applications of Artificial Intelligence 123 https://doi.org/10.3390/s22124425
(2023) 106222, 2023, [42] Matteo Testi et al., MLOps: A Taxonomy and a Methodology, IEEE
[23] Tim Raffin, et al., Qualitative assessment of the impact of Access, 2022, DOI: 10.1109/ACCESS.2022.3181730
manufacturing-specific influences on Machine Learning Operations, [43] Florian Bachinger et al., Automated Machine Learning for Industrial
Procedia CIRP 115 (2022) 136–141, 2022, Applications Challenges and Opportunities, Procedia Computer
[24] Mandepudi Nobel Chowdary, et al., Accelerating the Machine Science 232 (2024) 1701–1710, 2024, DOI:
Learning Model Deployment using MLOps, 4th International 10.1016/j.procs.2024.01.168
Conference on Intelligent Circuits and Systems, 2022, [44] Leif Sundberg, Jonny Holmström, Democratizing artificial
doi:10.1088/1742-6596/2327/1/012027 intelligence: How no-code AI can leverage machine learning
operations, 2023 Kelley School of Business, Indiana University, 2023,
DOI: https://doi.org/10.1016/j.bushor.2023.04.003
[45] Alexandre Carqueja, et al., On the Democratization of Machine Learning Pipelines, 2022 IEEE Symposium Series on Computational Intelligence (SSCI), 2022, DOI: 10.1109/SSCI51031.2022.10022107
[46] Gonca Gürses-Tran, Antonello Monti, Advances in Time Series Forecasting Development for Power Systems’ Operation with MLOps, Forecasting 2022, 4, 501–524, 2022, DOI: 10.3390/forecast4020028
[47] Vlad Stirbu, et al., Continuous design control for machine learning in certified medical systems, Software Quality Journal (2023) 31:307–333, 2022, DOI: 10.1007/s11219-022-09601-5
[48] Anas Bodor, et al., Machine Learning Models Monitoring in MLOps Context: Metrics and Tools, International Journal of Interactive Mobile Technologies (iJIM), 2023, DOI: 10.3991/ijim.v17i23.43479
[49] Henrik Heymann, et al., Assessment Framework for Deployability of Machine Learning Models in Production, 16th CIRP Conference on Intelligent Computation in Manufacturing Engineering, 2023, DOI: 10.1016/j.procir.2023.06.007
[50] Grand View Research, Artificial Intelligence Market Size, Share & Trends Analysis Report By Solution, By Technology (Deep Learning, Machine Learning, NLP, Machine Vision, Generative AI), By Function, By End-use, By Region, And Segment Forecasts, 2024–2030, Report ID: GVR-1-68038-955-5

… at the Military University of Technology. She also has over 5 years of experience in conducting IT business and system analysis. She is currently pursuing a PhD in the area of machine learning.