Deploy Machine
Learning Models
to Production
With Flask, Streamlit, Docker, and
Kubernetes on Google Cloud Platform
—
Pramod Singh
Deploy Machine Learning Models to Production
Pramod Singh
Bangalore, Karnataka, India
Table of Contents

Deep Learning
    Human Brain Neuron vs. Artificial Neuron
    Activation Functions
    Neuron Computation Example
    Neural Network
    Training Process
    Role of Bias in Neural Networks
    CNN
    RNN
Industrial Applications and Challenges
    Retail
    Healthcare
    Finance
    Travel and Hospitality
    Media and Marketing
    Manufacturing and Automobile
    Social Media
    Others
    Challenges
Requirements
Conclusion
Index
About the Author
Pramod Singh is a manager of data science at
Bain & Company. He has more than 11 years
of rich experience in the data science field
working with multiple product- and service-
based organizations. He has been part of
numerous large-scale ML and AI projects. He
has published three books on large-scale data
processing and machine learning. He is also a
regular speaker at major AI conferences such
as O’Reilly AI and Strata.
About the Technical Reviewer
Manohar Swamynathan is a data science
practitioner and an avid programmer, with
14+ years of experience in various data science
areas that include data warehousing, business
intelligence (BI), analytical tool development,
ad hoc analysis, predictive modeling, data
science product development, consulting,
formulating strategy, and executing analytics
programs. He’s had a career covering the
life cycle of data across different domains
such as US mortgage banking, retail/e-commerce, insurance, and
industrial IoT. He has a bachelor’s degree with a specialization in
physics, mathematics, and computers, and a master’s degree in project
management. He’s currently living in Bengaluru, the Silicon Valley of India.
He has also been the technical reviewer of books such as Data Science
Using Python and R.
Acknowledgments
I want to take a moment to thank the most important person in my life: my
wife, Neha. Without her support, this book wouldn’t have seen the light of
day. She is the source of my energy, motivation, and happiness and keeps
me going despite challenges and hardships. I dedicate this book to her.
I also want to thank a few other people who helped a great deal during these months and provided a lot of support. Let me start with Aditee, who was very patient and kind, understood the situation, and helped reorganize the schedule. Thanks to Celestian John as well for offering me another opportunity to write for Apress. Last but not least, my mentors: Barron Beranjan, Janani Sriram, Sebastian Keupers, Sreenivas Venkatraman, Dr. Vijay Agneeswaran, Shoaib Ahmed, and Abhishek Kumar. Thank you for your continuous guidance and support.
Introduction
This book helps upcoming data scientists who have never deployed a machine learning model. Most data scientists spend a lot of time analyzing data and building models in Jupyter notebooks but never get an opportunity to take them to the next level, where those ML models are exposed as APIs. This book is for those people in particular who want to deploy ML models in production and use the power of these models in the background of a running application.

The term ML productionization covers lots of components and platforms. The core idea of this book is not to look at each of the options available but rather to provide a holistic view of the frameworks for productionizing models, from basic ML-based apps to complex ones. Once you know how to take an ML model and put it in production, you will become more confident working on complicated applications and big deployments. This book covers different options to expose an ML model as a web service using frameworks such as Flask and Streamlit. It also helps readers understand the usage of Docker in machine learning apps and the end-to-end process of deployment on Google Cloud Platform using Kubernetes.

I hope there is some useful information for every reader, and that they can apply it in their workstreams to go beyond Jupyter notebooks and productionize some of their ML models.
CHAPTER 1
Introduction to
Machine Learning
In this first chapter, we are going to discuss some of the fundamentals
of machine learning and deep learning. We are also going to look at
different business verticals that are being transformed by using machine
learning. Finally, we are going to go over the traditional steps of training
and building a rather simple machine learning model and deep learning
model on a cloud platform (Databricks) before moving on to the next set
of chapters on productionization. If you are aware of these concepts and
feel comfortable with your level of expertise on machine learning already,
I encourage you to skip the next two sections and move on to the last
section, where I mention the development environment and give pointers
to the book’s accompanying codebase and data download information so
that you are able to set up the environment appropriately. This chapter is divided into three sections. The first section covers the fundamentals of machine learning. The second section dives into the basics of deep learning and the details of widely used deep neural networks. Each of these sections is followed by the code to build a model on the cloud platform. The final section covers the requirements and environment setup for the remainder of the chapters in the book.
History
Machine learning and deep learning are not new; in fact, they go back to the 1940s, when the first attempts were made to build something with some amount of built-in intelligence. The great Alan Turing worked on building a unique machine that could decrypt German code during World War II. That was the beginning of the machine intelligence era, and within a few years researchers started exploring this field in great detail across many countries. ML/DL was considered significantly powerful in terms of transforming the world at that time, and enormous amounts of funding were granted to bring it to life. Nearly everybody was very optimistic. By the late 1960s, people were already working on machine vision and developing robots with machine intelligence.

While it all looked good on the surface, there were some serious challenges impeding progress in this field. Researchers were finding it extremely difficult to create intelligence in machines, primarily for a couple of reasons. One was that the processing power of computers in those days was not enough to handle and process large amounts of data; the other was the limited availability of relevant data itself. Despite government support and the availability of sufficient funds, ML/AI research hit a roadblock from the late 1960s to the early 1990s. This period is also known as the "AI winters" among the community members.
In the late 1990s, corporations once again became interested in AI.
The Japanese government unveiled plans to develop a fifth-generation
computer to advance machine learning. AI enthusiasts believed that soon
computers would be able to carry on conversations, translate languages,
interpret pictures, and reason like people. In 1997, IBM’s Deep Blue
became the first computer to beat a reigning world chess champion, Garry
Kasparov. Some AI funding dried up when the dot-com bubble burst in the
early 2000s. Yet machine learning continued its march, largely thanks to
improvements in computer hardware.
The current resurgence of machine learning is driven primarily by two factors:

• Rise in data
• Improved ML algorithms
Rise in Data
The first and most prominent reason for this trend is the massive rise in data generation in the past couple of decades. Data was always present, but it's imperative to understand the exact reason behind this abundance of data. In the early days, data was generated by the employees or workers of particular organizations as they saved data into their systems, but there were limited data points holding only a few variables. Then came the revolutionary Internet, and generic information became accessible to virtually everyone. With the Internet, users got the ability to enter and generate their own data. This was a colossal shift, as the total number of Internet users in the world grew at an exploding rate, and the amount of data created by these users grew at an even higher rate. All of this data—login/sign-up forms capturing user details, photo and video uploads on various social platforms, and other online activities—led to the coining of the term Big Data. As a result, the challenges that ML and AI researchers faced in earlier times due to a lack of data points were eliminated, and this proved to be a major enabler for the adoption of ML and AI.

Finally, from a data perspective, we have already reached the next level, as machines are now generating and accumulating data. Every device around us, such as cars, buildings, mobile phones, watches, and flight engines, is embedded with multiple monitoring sensors and records data every second. This data is even greater in magnitude than the user-generated data and is commonly referred to as Internet of Things (IoT) data.
Improved ML Algorithms
Over the last few years, there has been tremendous progress in the availability of new and upgraded algorithms that have not only improved prediction accuracy but also solved multiple challenges that traditional ML faced. In the first phase, which used rule-based systems, one had to define all the rules first and then design the system within that set of rules. It became increasingly difficult to control and update the number of rules as the environment was too dynamic. Hence, traditional ML came into the picture to replace rule-based systems. The challenge with this approach was that the data scientist had to spend a lot of time hand-designing the features for building the model (known as feature engineering), and there was an upper threshold of prediction accuracy that these models could never exceed, no matter how much the input data size increased. The third phase was the introduction of deep neural networks, where the network figures out the most important features on its own and also outperforms other ML algorithms. In addition, some other approaches that have been creating a lot of buzz over the last few years are as follows:

• Meta learning
• Capsule networks
Machine Learning
Now that we know a little of the history of machine learning, we can go over its fundamentals. We can break ML down into four parts, as shown in Figure 1-1.
• Binary classification
• Multiclassification
Unsupervised Learning
Unsupervised learning is another category of machine learning that is used heavily in business applications. It differs from supervised learning in terms of the output labels. In unsupervised learning, we build models on a similar sort of data as in supervised learning, except that the dataset does not contain any label or outcome column. Essentially, we apply the model to the data without any right answers. The machine then tries to find hidden patterns and useful signals in the data that can later be used for other applications. The main objective is to probe the data and come up with hidden patterns and a similarity structure within the dataset, as shown in Figure 1-5. One use case is to find patterns within customer data and group the customers into different clusters. Unsupervised learning can also identify the attributes that distinguish any two groups. From a validation perspective, there is no measure of accuracy for unsupervised learning: the clustering done by person A can be totally different from that of person B, based on the parameters used to build the model. There are different types of unsupervised learning.

• K-means clustering
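The customer-grouping use case above can be sketched with K-means in scikit-learn. The two synthetic "spend" features, the cluster count, and all numbers below are illustrative assumptions, not details from the book's dataset.

```python
# A sketch of clustering customers with K-means. The two synthetic
# "spend" features and the choice of two clusters are illustrative
# assumptions, not details from the book.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two well-separated synthetic customer groups: low spend and high spend
low_spend = rng.normal(loc=[20.0, 5.0], scale=2.0, size=(50, 2))
high_spend = rng.normal(loc=[80.0, 40.0], scale=2.0, size=(50, 2))
X = np.vstack([low_spend, high_spend])

# Fit K-means with k=2 and read back the cluster assignment per customer
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = model.labels_
```

Because there is no ground truth, choosing k and interpreting the resulting clusters is up to the analyst, which is exactly the validation caveat noted above.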
Semi-supervised Learning
As the name suggests, semi-supervised learning lies somewhere between supervised and unsupervised learning; in fact, it uses both techniques. This type of learning is mainly relevant when we are dealing with a mixed sort of dataset that contains both labeled and unlabeled data. Sometimes the data is completely unlabeled, but we label some part of it manually. The whole idea of semi-supervised learning is to use this small portion of labeled data to train a model and then use that model to label the remaining data, which can then be used for other purposes. This is also known as pseudo-labeling, as it labels the unlabeled data using the predictions made by the supervised model. To give a simple example, say we have lots of images of different brands from social media, and most of them are unlabeled. Using semi-supervised learning, we can label some of these images manually and then train our model on the labeled images. We then use the model's predictions to label the remaining images, transforming the unlabeled data into labeled data completely.

The next step in semi-supervised learning is to retrain the model on the entire labeled dataset. The advantage this offers is that the model gets trained on a bigger dataset than before and is now more robust and better at predictions. The other advantage is that semi-supervised learning saves a lot of the effort and time that would otherwise go into manually labeling the data. The flip side is that it's difficult to get high performance from pseudo-labeling, as it uses only a small part of the labeled data to make the predictions. However, it is still a better option than manually labeling the data, which can be both expensive and time-consuming. This is how semi-supervised learning uses both supervised and unsupervised learning to generate labeled data. Businesses that face challenges regarding the costs associated with the labeling process usually go for semi-supervised learning.
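The pseudo-labeling steps described above can be sketched as follows. The synthetic dataset, the classifier, and the 100-row labeled portion are illustrative assumptions, not the book's example.

```python
# A sketch of pseudo-labeling: train on a small labeled subset, label the
# rest with that model's predictions, then retrain on the combined data.
# The dataset and classifier here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Pretend only the first 100 rows arrived with labels
X_labeled, y_labeled = X[:100], y[:100]
X_unlabeled = X[100:]

# Step 1: train on the small labeled portion
base_model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# Step 2: pseudo-label the unlabeled portion with the model's predictions
pseudo_labels = base_model.predict(X_unlabeled)

# Step 3: retrain on the full, now fully labeled, dataset
X_full = np.vstack([X_labeled, X_unlabeled])
y_full = np.concatenate([y_labeled, pseudo_labels])
final_model = LogisticRegression(max_iter=1000).fit(X_full, y_full)
```

In practice one would usually keep only high-confidence pseudo-labels, since errors from the base model propagate into the retrained one, which is the performance caveat mentioned above.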
Reinforcement Learning
Reinforcement learning is the fourth kind of learning and is a little different in terms of data usage and predictions. Reinforcement learning is a big research area in itself, and an entire book could be written just on it. The main difference between the other kinds of learning and reinforcement learning is that the others need data, mainly historical data, to train the models, whereas reinforcement learning works on a reward system, as shown in Figure 1-6. It is primarily decision-making based on certain actions that an agent takes to change its state while trying to maximize the rewards. Let's break this down into individual elements using a visualization.
Gradient Descent
At the end of the day, a machine learning model is only as good as the loss it's able to minimize in its predictions. There are different types of loss functions pertaining to specific categories of problems; most often, in typical classification or regression tasks, we try to minimize log loss or mean squared error during training and cross-validation. If we think of the loss as a curve, as shown in Figure 1-7, gradient descent helps us reach the point where the loss value is at its minimum. We start at a random point, based on the initial weights or parameters in the model, and move in the direction in which the loss starts reducing. One thing worth remembering here is that gradient descent takes big steps when it's far away from the actual minimum, whereas once it reaches a nearby value, the step sizes become very small so as not to miss the minimum.
To move toward the minimum value point, gradient descent starts by taking the derivative of the error with respect to the parameters/coefficients (weights in the case of neural networks) and tries to find the point where the slope of this error curve is equal to zero. One of the important components in gradient descent is the learning rate, as it decides how quickly or how slowly we descend toward the lowest error value. If the learning rate is set too high, chances are that it might skip the lowest value; on the contrary, if the learning rate is too small, it will take a long time to converge. Hence, the learning rate is an important part of the overall gradient descent process.

The overall aim of gradient descent is to reach the combination of input coefficients that reflects the minimum error on the training data. So, in a way, we try to change these coefficient values from their earlier values so as to have minimum loss. This is achieved by subtracting the product of the learning rate and the slope (the derivative of the error with respect to the coefficient) from the old coefficient value. This alteration in coefficient values keeps happening until there is no more change in the coefficients/weights of the model, which signifies that gradient descent has reached the minimum value point in the loss curve.
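The update rule described above can be sketched for a one-parameter model y = w * x with a mean squared error loss: repeatedly subtract the learning rate times the slope from the old coefficient. The data, starting point, and learning rate below are illustrative assumptions.

```python
# A sketch of gradient descent for a single coefficient. The loss is mean
# squared error for the one-parameter model y = w * x; the data, starting
# point, and learning rate are illustrative assumptions.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x          # the true coefficient is 2.0

w = 0.0              # starting point for the coefficient
learning_rate = 0.01

for _ in range(500):
    predictions = w * x
    # derivative of mean squared error with respect to w
    gradient = 2.0 * np.mean((predictions - y) * x)
    w = w - learning_rate * gradient  # the update rule from the text

print(round(w, 4))  # → 2.0
```

Raising the learning rate toward the stability limit makes the iterates overshoot and oscillate, while a much smaller value converges far more slowly, which is the trade-off described above.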
• Number of trees
Performance Metrics
There are different ways in which the performance of a machine learning model can be evaluated, depending on the nature of the algorithm used. As mentioned previously, there are broadly two categories of models: regression and classification. For models that predict a continuous target, metrics such as R-squared and root mean squared error (RMSE) can be used, whereas for classification, an accuracy measure is the standard metric. However, in cases where there is class imbalance and the business needs to focus on only one of the positive or negative classes, measures such as precision and recall can be used.
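These metrics can be illustrated with scikit-learn on hard-coded predictions; the numbers below are made up for illustration. Note how accuracy looks healthy on the imbalanced classification example even though precision and recall are only 0.5.

```python
# A sketch of the metrics mentioned above on made-up predictions.
from sklearn.metrics import (accuracy_score, mean_squared_error,
                             precision_score, r2_score, recall_score)

# Classification with class imbalance: 8 negatives, 2 positives
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)    # 0.8, despite the misses
precision = precision_score(y_true, y_pred)  # 0.5: one of two flagged is right
recall = recall_score(y_true, y_pred)        # 0.5: one of two positives caught

# Regression: RMSE and R-squared for a continuous target
y_reg_true = [3.0, 5.0, 7.0]
y_reg_pred = [2.5, 5.0, 7.5]
rmse = mean_squared_error(y_reg_true, y_reg_pred) ** 0.5
r_squared = r2_score(y_reg_true, y_reg_pred)
```

On the imbalanced example, always predicting the negative class would also score 0.8 accuracy while catching no positives at all, which is why precision and recall matter here.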
Now that we have gone over the fundamentals and important concepts
in machine learning, it’s time for us to build a simple machine learning
model on a cloud platform, namely, Databricks.
Note Sign up for the Databricks community edition to run this code.
The first step is to start a new cluster with the default settings, as we are not building a complicated model here. Once the cluster is up and running, we simply upload the data to Databricks from the local system. We then create a new notebook and attach it to the cluster we created earlier. The final step is to import all the required libraries and confirm that the data was uploaded successfully.
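A first notebook cell along these lines might look like the sketch below. The CSV path and column names are assumptions (on the Databricks community edition an uploaded file typically lands under /FileStore); a small hard-coded frame stands in for the uploaded data so the example is self-contained.

```python
# A hedged sketch of a first Databricks notebook cell: load data, confirm
# it, and fit a simple model. Path and column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# On Databricks the uploaded file would be read back with something like:
# df = pd.read_csv("/dbfs/FileStore/tables/your_data.csv")  # hypothetical path
df = pd.DataFrame({
    "age":     [25, 32, 47, 51, 62, 23, 43, 36],
    "balance": [100, 2500, 4000, 300, 8000, 50, 1200, 600],
    "label":   [0, 1, 1, 0, 1, 0, 1, 0],
})
print(df.shape)  # confirm the data loaded; prints (8, 3)

X_train, X_test, y_train, y_test = train_test_split(
    df[["age", "balance"]], df["label"], test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
```

Running the cell and checking `df.shape` (or `display(df)` on Databricks) is the quickest way to confirm the upload worked before moving on to modeling.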