Google Kubernetes Engine by Google
Between 2017 and 2018, the number of organizations using containers for software
development and service deployment doubled.
The trend shows no signs of slowing. For this reason, container knowledge and
skill with Kubernetes are increasingly important for the job of a Cloud Architect.
And, of course, if you need these skills for the job, you will also need them to
prepare for the exam.
Enterprise and Continuous Deployment
Docker builds containers
[Slide diagram: code and a Dockerfile feed into Docker or Cloud Build, which
produce containers stored in Container Registry; source files are managed in
Cloud Source Repositories.]
Docker is software that builds containers. You supply application code and
instructions, called a Dockerfile; Docker follows the instructions and assembles
the code and its dependencies into a container. A container can be run, much as an
application can run, but it is a self-contained environment that can run on many
platforms.
Google Cloud offers a service called Cloud Build, which functions similarly to Docker: it
accepts code and configuration and builds containers. Cloud Build offers many
features and services that are geared towards professional development. It is
designed to fit into a continuous integration / continuous deployment workflow, and
it is designed to scale to handle many application developers working on and
continuously updating a live global service.
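As a sketch, a minimal Cloud Build configuration (cloudbuild.yaml) could look like the following; the image name py-web-server is an assumption borrowed from the Docker example later in this section:

```yaml
steps:
# Build the container image from the Dockerfile in the current directory.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/py-web-server', '.']
# Listing the image here tells Cloud Build to push it to Container Registry.
images:
- 'gcr.io/$PROJECT_ID/py-web-server'
```

Submitting this file with `gcloud builds submit` runs the build on Google's infrastructure rather than on a developer's machine.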
If you had a hundred developers sharing source files, you would need a system for
managing, tracking, and versioning them, and for enforcing a check-in, review,
and approval process. Cloud Source Repositories is a cloud-based solution for this.
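For example, a repository can be created and cloned with a couple of gcloud commands; the repository name here is illustrative:

```shell
# Create a repository in Cloud Source Repositories.
gcloud source repos create my-app
# Clone it locally; afterwards, normal git push/pull works against it.
gcloud source repos clone my-app
```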
If you were deploying hundreds of containers, you would not be keeping them to
yourself. One of the reasons to use containers is to share them with others. So you
need a way to manage and share them, and this is the purpose of Container Registry.
Container Registry has various integrations with Continuous Integration / Continuous
Deployment services.
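Sharing a locally built image through Container Registry is a tag-and-push operation; PROJECT_ID is a placeholder for your own project:

```shell
# Tag the local image with a Container Registry path.
docker tag py-web-server gcr.io/PROJECT_ID/py-web-server
# Push it so other teams and services can pull it.
docker push gcr.io/PROJECT_ID/py-web-server
```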
What is really in a container...
Dockerfile:
    FROM ubuntu:15.04
    COPY . /app
    RUN make /app
    CMD python /app/app.py

Container layers (top to bottom):
    Thin R/W container layer    91e54dfb1179    0 B
    Image layers (R/O):
        d74508fb6632    1.895 KB
        c22013c84729    194.5 KB
        d3a1f33e8a5a    188.1 MB    (ubuntu:15.04 base image)

docker commands:
    $> docker build -t py-web-server .
    $> docker run -d py-web-server
    $> docker images
    $> docker ps
    $> docker logs <container id>
    $> docker stop py-web-server
The layered design inside of a container isolates functions. This is what makes the
container stable and portable.
You can run a container in Docker itself, as you saw with the "docker run" command.
App Engine supports containers as custom runtimes. The main difference between
the App Engine Standard environment and the App Engine Flexible environment is
that Flexible hosts applications in Docker containers. It creates Docker containers and
persists them in Container Registry.
Kubernetes is open-source software, so you can run a Kubernetes cluster in your own
data center. Google Kubernetes Engine provides Kubernetes as a managed service.
Kubernetes cluster has nodes, pods, and containers
[Slide diagram: a cluster contains nodes, nodes host pods, and each container runs
in a pod.]
Each pod hosts, manages, and runs one or more containers. The containers in a pod
share networking and storage.
So typically, there is one container per pod, unless the containers hold closely related
applications. For example, a second container might contain the logging system for
the application in the first container.
A pod can be moved from one node to another without reconfiguring or rebuilding
anything.
This design enables advanced controls and operations that give systems built on
Kubernetes unique qualities.
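The pod-and-sidecar pattern described above, an application container plus a logging container sharing storage, could be sketched in Pod YAML like this; all names and images are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger            # illustrative pod name
spec:
  containers:
  - name: app
    image: gcr.io/PROJECT_ID/py-web-server   # placeholder image
    volumeMounts:
    - name: app-logs               # the app writes its logs here...
      mountPath: /var/log/app
  - name: log-collector            # hypothetical logging sidecar
    image: gcr.io/PROJECT_ID/log-collector   # placeholder image
    volumeMounts:
    - name: app-logs               # ...and the sidecar reads them from the same volume
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}                   # shared scratch storage, lives as long as the pod
```

Because both containers are in one pod, they also share the pod's network identity.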
Kubernetes jobs run containers on nodes
[Slide diagram: Pod YAML and Deployment YAML files are submitted to the Master, and
the scheduler assigns pods to nodes.]
Each cluster has a Master that determines what happens on the cluster. There
are usually at least three Masters for availability, and they can be located across
zones. A Kubernetes job makes changes to the cluster.
For example, a Pod YAML file provides the information to start up and run a pod on a
node. If for some reason the pod stops running or the node is lost, the pod will not
automatically be replaced. The Deployment YAML tells Kubernetes how many pods
you want running, so the Kubernetes Deployment is what keeps a number of pods
running. The Deployment YAML also defines a ReplicaSet, which specifies how many
copies of a container you want running. The Kubernetes scheduler determines on which
node and in which pod the replica containers are to be run.
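A minimal Deployment YAML expressing "keep three replicas of this container running" might look like the following; the name and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: py-web-server          # illustrative Deployment name
spec:
  replicas: 3                  # the ReplicaSet keeps three pods running at all times
  selector:
    matchLabels:
      app: py-web-server       # must match the pod template's labels
  template:                    # pod template the ReplicaSet creates pods from
    metadata:
      labels:
        app: py-web-server
    spec:
      containers:
      - name: app
        image: gcr.io/PROJECT_ID/py-web-server:v1   # placeholder image
```

If a pod in this Deployment dies or its node is lost, the ReplicaSet starts a replacement, which is exactly the behavior a bare Pod YAML does not provide.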
Advanced operations: A/B testing, Rolling updates
One of the advanced things that Kubernetes Deployments allow you to do is roll out
software to some pods and not others. You can keep version A in
production on most of the pods and try out version B with a sample group in other
pods. This is called A/B testing, and it is valuable because you can test the new
software in the real production environment without risking the integrity of the entire service.
Another thing you can do with deployments is a rolling update. Basically, you load up
the new software in a replacement pod, switch the load to the new pod, and turn
down the old one. This allows you to perform a controlled and gradual roll-out of the
new software across the service. If something goes wrong, you can detect the
problem and roll back to the previous software.
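With kubectl, a rolling update and a rollback are each a single command; the Deployment and container names (py-web-server, app) are illustrative:

```shell
# Trigger a rolling update by pointing the Deployment at a new image version.
kubectl set image deployment/py-web-server app=gcr.io/PROJECT_ID/py-web-server:v2
# Watch the rollout replace old pods gradually.
kubectl rollout status deployment/py-web-server
# If something goes wrong, roll back to the previous version.
kubectl rollout undo deployment/py-web-server
```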
Really, if you are going to run an enterprise production service, you will need these
kinds of operations. And that is one major reason to adopt Kubernetes.
There are a number of subjects that were not covered in this brief overview. For
example, how containers running in the same pod can share resources, how
containers running in different pods can communicate, and how networking is handled
between a node's IP and the applications. These subjects and more are covered in
the course "Getting Started with Google Kubernetes Engine" or you can find more
information in the online documentation.