Microservices, Containers and Kubernetes in 10 minutes
MAR 4, 2019 BY EV KONTSEVOY
What is a Microservice?
What is a microservice? Should you be using microservices? How are microservices
related to containers and Kubernetes? If these things keep coming up in your day-to-
day and you need an overview in 10 minutes, this blog post is for you.
Consider the product listing page of a typical web store. If you were to quickly write
the code that serves such a page, the simple approach would look something like this:
When a user's request comes from a browser, it is served by a web application
(a Linux or Windows process). Usually, the application code fragment that gets
invoked is called a request handler. The logic inside the handler sequentially
makes several calls to databases, fetches the information needed to render the
page, stitches it together, and returns a finished web page to the user. Simple,
right? In fact, many Ruby on Rails books feature tutorials and examples that look
like this. So, why complicate things, you may ask?
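To make that concrete, here is a minimal sketch of such a monolithic handler in plain Python. The in-memory "databases" and the product, review, and recommendation lookups are invented stand-ins for whatever data stores the real application would query:

```python
# A monolithic request handler: a single process queries every data
# source it needs and stitches the results into one response.
# The in-memory "databases" below are stand-ins for real queries.

PRODUCTS = {42: {"name": "Mechanical keyboard", "price": 99}}
REVIEWS = {42: [{"stars": 5, "text": "Great keyboard"}]}

def fetch_product(product_id):
    return PRODUCTS.get(product_id, {})

def fetch_reviews(product_id):
    return REVIEWS.get(product_id, [])

def fetch_recommendations(user_id):
    # In the real application this would call the recommendation engine.
    return [{"name": "Wrist rest"}, {"name": "Keycap set"}]

def handle_product_page(user_id, product_id):
    # Sequential calls, then one stitched-together response for the browser.
    return {
        "product": fetch_product(product_id),
        "reviews": fetch_reviews(product_id),
        "recommendations": fetch_recommendations(user_id),
    }

print(handle_product_page(user_id=7, product_id=42))
```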
Imagine what happens as the application grows and more and more engineers
become involved. The recommendation engine alone in the example above is
maintained by a small army of programmers and data scientists, and dozens of
different teams are responsible for some component of rendering that page.
Each of those teams usually wants the freedom to pick its own tools, deploy on its
own schedule, and scale its component as it sees fit.
As you can imagine, getting all of these teams to agree on everything required to ship
a new version of the web store application becomes more difficult over time.
The solution is to split up the components into smaller, separate services (aka,
microservices).
The application process becomes smaller and dumber. It is essentially a proxy that
breaks the incoming page request into several specialized requests and
forwards them to the corresponding microservices, which are now their own processes
running elsewhere. The “application microservice” is just an aggregator
of the data returned by the specialized services. You may even get rid of it entirely and
offload that job to the user’s device, running this code in the browser as a single-page
JavaScript app.
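As a rough sketch of that aggregator, assuming each specialized service exposes a small HTTP API (the service URLs and paths below are made up for illustration), the page handler now fans the request out and merges the results:

```python
# The "aggregator" version of the same handler: it no longer talks to
# databases directly, but calls specialized microservices over HTTP
# and merges their responses. The service URLs are illustrative only.
import requests

PRODUCT_SVC = "http://product-service:8000"
REVIEWS_SVC = "http://reviews-service:8000"
RECS_SVC = "http://recommendations-service:8000"

def handle_product_page(user_id, product_id):
    product = requests.get(f"{PRODUCT_SVC}/products/{product_id}", timeout=2).json()
    reviews = requests.get(f"{REVIEWS_SVC}/reviews/{product_id}", timeout=2).json()
    recommendations = requests.get(
        f"{RECS_SVC}/recommendations/{user_id}", timeout=2
    ).json()
    # Each service now owns its own data store, deployment cadence and
    # scaling; this process only stitches the pieces together.
    return {
        "product": product,
        "reviews": reviews,
        "recommendations": recommendations,
    }
```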
The other microservices are now separated out and each development team working
on their microservice can:
• Deploy their service as frequently as they wish without disrupting other teams.
• Scale their service as they see fit, for example by using the AWS instance types of
their choice or perhaps running on specialized hardware.
• Have their own monitoring, backups and disaster recovery that are specific to
their service.
Note that while containers and microservices are both useful, neither depends on the other.
Another win of adopting microservices is the ability to pick the best tool for the job.
Some parts of your application can benefit from the speed of C++, while others benefit
from the increased productivity of higher-level languages such as Python or
JavaScript.
If an application and its development team are small enough and the workload isn’t
demanding, there is usually no need to adopt microservices and throw engineering
resources at problems you do not have yet. However, if you are
starting to see the benefits of microservices outweigh the disadvantages, here are
some specific design considerations:
1. Separation of compute and storage. As your needs for CPU power and
storage grow, these resources have very different scaling costs and
characteristics. Not relying on local storage from the beginning will allow
you to adapt to future workloads with relative ease. This applies both to simple
storage forms like file systems and to more complex solutions such as databases.
(A short sketch of this idea appears after this list.)
2. Asynchronous processing. The traditional approach of gradually building
applications by adding more and more subroutines or objects that call each other
stops working as workloads grow and the application must be stretched
across multiple machines or even data centers. Re-architecting the application
around an event-driven model becomes necessary: instead of calling a function and
synchronously waiting for its result, you emit an event and move on.
3. Embrace the message bus. This is a direct consequence of adopting an
asynchronous processing model. As your monolithic application gets broken
into event handlers and event emitters, a robust, performant and
flexible message bus becomes a requirement. There are numerous options, and the
choice depends on application scale and complexity. For a simple use case, something
like Redis will do (see the pub/sub sketch after this list). If you need your application
to be truly cloud-native and scale itself up and down, you may need the ability to
process events from multiple sources: from streaming pipelines like Kafka to
infrastructure and even monitoring events.
4. API versioning. As your microservices will be using each other’s APIs to
communicate over the bus, designing a scheme for maintaining
backward compatibility is critical. Simply by deploying the latest version of
one microservice, a developer should not be forcing everyone else to
upgrade their code; that would be a step backward toward the monolithic approach,
just spread across application domains. Development teams must agree
on a reasonable compromise between supporting old APIs forever and
keeping a high velocity of development. This also means that API design
becomes an important skill: frequent breaking API changes are one of the reasons
teams fail to stay productive while developing complex microservices. (A small
versioning sketch follows this list.)
5. Rethink your security. Many developers do not realize it, but migrating to
microservices creates an opportunity for a much better security model. Because every
microservice is a specialized process, it is a good idea to allow it to access
only the resources it needs. That way, a vulnerability in one microservice will not
expose the rest of your system to an attacker. This is in contrast with a large
monolith, which tends to run with elevated privileges (a superset of what everyone
needs), leaving limited opportunity to restrict the impact of a breach.
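To illustrate point 1, here is a small sketch that keeps application data in an S3-compatible object store via boto3 instead of on a local disk, so any instance of the service can read or write it; the bucket and key names are invented for illustration:

```python
# Sketch: write and read application data through an object store
# instead of the local filesystem, so compute nodes stay stateless
# and storage can scale independently of CPU.
import boto3

s3 = boto3.client("s3")

def save_report(report_id: str, contents: bytes) -> None:
    # Any instance of the service can write the report...
    s3.put_object(Bucket="webstore-reports", Key=f"reports/{report_id}", Body=contents)

def load_report(report_id: str) -> bytes:
    # ...and any other instance can read it back later.
    obj = s3.get_object(Bucket="webstore-reports", Key=f"reports/{report_id}")
    return obj["Body"].read()
```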
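For points 2 and 3, here is a minimal sketch of event-driven processing over Redis pub/sub; the channel name and event fields are invented, and a production system might prefer a durable log such as Kafka. The emitter publishes an event and moves on instead of calling downstream code synchronously, while a separate handler process consumes it:

```python
# Sketch: event-driven processing over a simple Redis pub/sub bus.
# The channel name and event fields are illustrative, not a standard.
import json
import redis

bus = redis.Redis(host="localhost", port=6379)

def place_order(order_id: str, user_id: str) -> None:
    # Emit an event and return immediately; nobody waits for the
    # downstream work (emails, inventory, analytics) to finish.
    event = {"version": 1, "type": "order.placed",
             "order_id": order_id, "user_id": user_id}
    bus.publish("orders", json.dumps(event))

def run_email_worker() -> None:
    # A separate process (its own microservice) reacts to the event.
    pubsub = bus.pubsub()
    pubsub.subscribe("orders")
    for message in pubsub.listen():
        if message["type"] != "message":
            continue  # skip subscription confirmations
        event = json.loads(message["data"])
        if event["type"] == "order.placed":
            print(f"Sending confirmation email for order {event['order_id']}")
```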
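And for point 4, one common approach (though certainly not the only one) is to version event payloads and have consumers tolerate every version they know about; this sketch continues the invented order.placed event from above:

```python
# Sketch: tolerant, versioned event handling. Consumers keep accepting
# older payload versions while newer producers roll out gradually.
# The schema and field names continue the invented example above.

def handle_order_placed(event: dict) -> dict:
    version = event.get("version", 1)
    if version == 1:
        # v1 payloads only carried a single user_id field.
        return {"order_id": event["order_id"], "recipients": [event["user_id"]]}
    if version == 2:
        # v2 (hypothetical) added support for multiple recipients.
        return {"order_id": event["order_id"], "recipients": event["recipients"]}
    # Unknown future versions are rejected explicitly rather than
    # silently misread, so incompatibilities surface early.
    raise ValueError(f"unsupported order.placed version: {version}")
```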
Kubernetes solves these problems quite elegantly and provides a common framework
to describe, inspect and reason about infrastructure resource sharing and utilization.
That’s why adopting Kubernetes as part of your microservice re-architecture is a good
idea.
Kubernetes, however, is a complex technology to learn and even harder to manage.
Take advantage of a hosted Kubernetes service from your cloud provider if you can.
That is not always viable, though, for companies that need to run
their own Kubernetes clusters across multiple cloud providers and enterprise data
centers.
For such use cases, we recommend trying out Gravity, an open source Kubernetes
packaging solution that removes the need for Kubernetes administration. Gravity
works by creating Kubernetes clusters from a single image file, a “Kubernetes
appliance” that can be downloaded, moved, created and destroyed by the hundreds,
making it possible to treat Kubernetes clusters like cattle, not pets.
The Conclusion
To summarize:
1. Microservices are not new. They are an old software design pattern that has been
growing in popularity with the increasing scale of Internet companies.
2. Small projects should not shy away from the monolithic design. It offers higher
productivity for smaller teams.
3. Kubernetes is a great platform for complex applications composed of multiple
microservices.
4. Kubernetes is also a complex system and hard to run. Consider using hosted
Kubernetes if you can.
5. If you must run your own K8s clusters or if you need to publish your K8s
applications as downloadable appliances, consider the open source
solution, Gravity.