
Self-managing cloud-native applications: Design, implementation, and experience

Giovanni Toffetti*, Sandro Brunner, Martin Blöchlinger, Josef Spillner, Thomas Michael Bohnert

Zurich University of Applied Sciences, School of Engineering, Service Prototyping Lab (blog.zhaw.ch/icclab/), 8401 Winterthur, Switzerland

* Corresponding author. E-mail addresses: [email protected] (G. Toffetti), [email protected] (S. Brunner), [email protected] (M. Blöchlinger), [email protected] (J. Spillner), [email protected] (T.M. Bohnert).

http://dx.doi.org/10.1016/j.future.2016.09.002

highlights
• A definition of cloud-native applications and their desired characteristics.
• A distributed architecture for self-managing (micro) services.
• A report on our experiences and lessons learnt applying the proposed architecture to a legacy application brought to the cloud.

Article info

Article history: Received 30 November 2015; Received in revised form 30 June 2016; Accepted 3 September 2016; Available online xxxx.

Keywords: Microservices; Cloud-native applications; Container-based applications; Distributed systems; Auto-scaling; Health management

Abstract

Running applications in the cloud efficiently requires much more than deploying software in virtual machines. Cloud applications have to be continuously managed: (1) to adjust their resources to the incoming load, and (2) to face transient failures by replicating and restarting components to provide resiliency on unreliable infrastructure. Continuous management monitors application and infrastructural metrics to provide automated and responsive reactions to failures (health management) and changing environmental conditions (auto-scaling), minimizing human intervention.

In current practice, management functionalities are provided as infrastructural or third-party services. In both cases they are external to the application deployment. We claim that this approach has intrinsic limits, namely that separating management functionalities from the application prevents them from naturally scaling with the application and requires additional management code and human intervention. Moreover, using infrastructure provider services for management functionalities results in vendor lock-in, effectively preventing cloud applications from adapting and running on the most effective cloud for the job.

In this paper we discuss the main characteristics of cloud-native applications, propose a novel architecture that enables scalable and resilient self-managing applications in the cloud, and report on our experience in porting a legacy application to the cloud applying cloud-native principles.

© 2016 Elsevier B.V. All rights reserved.

1. Introduction

After a phase driven mainly by early adopters, cloud computing is now being embraced by most companies. Not only are new applications developed to run in the cloud; legacy workloads are also increasingly being adapted and transformed to leverage the dominant cloud computing models. A suitable cloud application design was published previously by the authors [1] in the proceedings of the First International Workshop on Automated Incident Management in the Cloud (AIMC'15). With respect to that initial position paper, this article reports on our experience implementing the proposed design with a specific set of technologies and evaluates the non-functional behavior of the implementation with respect to scalability and resilience.

There are several advantages in embracing the cloud, but in essence they typically fall into two categories: operational (flexibility/speed) or economic (cost). From the former perspective, cloud computing offers fast self-service provisioning and task automation through application programming interfaces (APIs), which allow resources to be deployed and removed instantly, reduce wait times for provisioning development/test/production environments, and enable improved agility and time-to-market
in the face of business changes. The bottom line is increased productivity. From the economic perspective, the pay-per-use model means that no upfront investment is needed for acquiring IT resources or for maintaining them, as companies pay only for allocated resources and subscribed services. Moreover, by handing off the responsibility of maintaining physical IT infrastructure, companies can avoid capital expenses (capex) in favor of usage-aligned operational expenses (opex) and can focus on development rather than operations support.

An extensive set of architectural patterns and best practices for cloud application development has been distilled, see for instance [2-4]. However, day-to-day cloud application development is still far from fully embracing these patterns. Most companies have just reached the point of adopting hardware virtualization (i.e., VMs). Innovation leaders have already moved on to successfully deploying newer, more productive patterns, like microservices, based on light-weight virtualization (i.e., containers).

On one hand, a pay-per-use model only brings cost savings with respect to a dedicated (statically sized) system solution if (1) an application has varying load over time and (2) the application provider is able to allocate the "right" amount of resources to it, avoiding both over-provisioning (paying for unneeded resources) and under-provisioning (resulting in QoS degradation). On the other hand, years of cloud development experience have taught practitioners that commodity server hardware and network switches break often. Failure domains help isolate problems, but one should "plan for failure", striving to produce resilient applications on unreliable infrastructure without compromising their elastic scalability.

In this article we report on our experience in porting a legacy Web application to the cloud, adopting a novel design pattern for self-managing cloud-native applications. This enables vendor independence and reduced costs with respect to relying on IaaS/PaaS and third-party vendor services.

The main contributions of this article are: (1) a definition of cloud-native applications and their desired characteristics, (2) a distributed architecture for self-managing (micro) services, and (3) a report on our experiences and lessons learnt applying the proposed architecture to a legacy application brought to the cloud.

2. Cloud-native applications

Any application that runs on a cloud infrastructure is a "cloud application", but a "cloud-native application" (CNA from here on) is an application that has been specifically designed to run in a cloud environment.

2.1. CNA: definitions and requirements

We can derive the salient characteristics of CNA from the main aspects of the cloud computing paradigm. As defined in [5], there are five essential characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. In actual practice the cloud infrastructure is the enabler of these essential characteristics. Due to the economy of scale, infrastructure installations are large and typically built of commodity hardware, so that failures are the norm rather than the exception [6]. Finally, cloud applications often rely on third-party services, as part of the application functionality, as support (e.g., monitoring), or both. Third-party services might also fail or offer insufficient quality of service. Given the considerations above, we can define the main requirements of CNA as:

• Resilience: CNA have to anticipate failures and fluctuations in the quality of both cloud resources and the third-party services needed to implement an application, in order to remain available during outages. Resource pooling in the cloud implies that unexpected fluctuations of infrastructure performance (e.g., the noisy neighbor problem in multi-tenant systems) need to be expected and managed accordingly.
• Elasticity: CNA need to support adjusting their capacity by adding or removing resources to provide the required QoS in the face of load variation, avoiding both over- and under-provisioning. In other words, cloud-native applications should take full advantage of the cloud being a measured service offering on-demand self-service and rapid elasticity.

It should be clear that resilience is the first goal to be attained to achieve a functioning and available application in the cloud, while scalability deals with load variation and operational cost reduction. Resilience in the cloud is typically addressed using redundant resources. Formulating the trade-off between redundancy and operational cost reduction is a business decision.

The principles identified in the "12 factor app" methodology [7] focus not only on several aspects that impact the resiliency and scalability of Web applications (e.g., dependencies, configuration in environment, backing services as attached resources, stateless processes, port-binding, concurrency via process model, disposability), but also on the more general development and operations process (e.g., one codebase, build-release-run, dev/prod parity, administrative processes). Many of the best practices in current cloud development stem from these principles.

2.2. Current state of cloud development practice

Cloud computing is novel and economically more viable with respect to traditional enterprise-grade systems also because it relies on self-managed software automation (restarting components) rather than more expensive hardware redundancy to provide resilience and availability on top of commodity hardware [8]. However, many applications deployed in the cloud today are simply legacy applications that have been placed in VMs without changes to their architecture or to their assumptions on the underlying infrastructure. Failing to adjust cost, performance, and complexity expectations, and assuming the same reliability of resources and services in a traditional data center as in a public cloud, can cost dearly, both in terms of technical failure and economic loss.

In order to achieve resilience and scalability, cloud applications have to be continuously monitored, analyzing their application-specific and infrastructural metrics to provide automated and responsive reactions to failures (health management functionality) and changing environmental conditions (auto-scaling functionality), minimizing human intervention.

The current state of the art in monitoring, health management, and scaling consists of one of the following options: (a) using services from the infrastructure provider (e.g., Amazon CloudWatch (https://aws.amazon.com/cloudwatch) and Auto Scaling (https://aws.amazon.com/autoscaling), or Google Instance Group Manager (https://cloud.google.com/compute/docs/autoscaler)) with a default or a custom-provided policy, (b) leveraging a third-party service (e.g., Rightscale (http://www.rightscale.com), New Relic (https://newrelic.com)), or (c) building an ad-hoc solution using available components (e.g., Netflix Scryer (http://techblog.netflix.com/2013/11/scryer-netflixs-predictive-auto-scaling.html), logstash (https://www.elastic.co/products/logstash)). Both
infrastructure providers and third-party services are footsteps on a path leading to vendor lock-in, are paid services, and may moreover themselves suffer from outages. Ad-hoc solutions can be hard to engineer, especially because they have to scale seamlessly with the application they monitor and manage; in other words, they have to be themselves resilient and scalable.

All the management approaches listed above have one common characteristic: their logic is run in isolation from the managed application, as an external service/process. In this article we claim that this approach has intrinsic limits and we argue that one possible solution is to build management functionalities within the managed application itself, resulting in monitoring, health management, and scaling functionalities that naturally adapt to the managed application and its dynamic nature. Moreover, self-managing applications are fundamental enablers of vendor-independent multi-cloud applications. We propose a decomposition of the application into stateful and stateless containers, following the microservices paradigm.

3. Self-managing (micro) services

The main contribution of this article is a high-level distributed architecture that can be used to implement self-managing cloud-native applications.

The idea is that just as there are best practices to build reliable services on the cloud by leveraging distributed algorithms and components, so can management functionalities (e.g., health management, auto-scaling, adaptive service placement) be implemented as resilient distributed applications.

More in detail, the idea is to leverage modern distributed in-memory key–value store solutions (KV-stores; e.g., Consul (https://www.consul.io), Zookeeper (https://zookeeper.apache.org/), Etcd (https://github.com/coreos/etcd), Amazon Dynamo [9], Pahoehoe [10]) with strong or eventual consistency guarantees. They are used both to store the "state" of each management functionality and to facilitate the internal consensus algorithm for leader election and for the assignment of management functionalities to cluster nodes. In this way, management functionalities become stateless and, if any of the management nodes were to fail, the corresponding logic can be restarted on another one with the same state. More concretely, any management functionality (e.g., the autoscaling logic) can be deployed within an atomic service as a stateless application component to make the service self-managing in that aspect. If the autoscaling logic or the machine hosting it were to fail, the health management functionality would restart it, and the distributed key–value store would still hold its latest state.

With the same approach, hierarchies of configuration clusters can be used to delegate atomic service scaling to the components, and atomic service composition and lifecycle to service-elected leaders. What we propose integrates naturally with the common best practices of cloud orchestration and distributed configuration that we will discuss in the following sections.
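To make the idea concrete, the following minimal sketch (illustrative only, not the implementation described later in this article) shows how an auto-scaling function could persist its whole state in Etcd through the v2 REST API, so that whichever node takes over the functionality resumes from the latest stored state; the key path, the requests-based client, and the pluggable decide() policy are assumptions.

    import json
    import requests

    ETCD_KEYS = "http://127.0.0.1:2379/v2/keys"   # assumed local Etcd endpoint (v2 API)
    STATE_KEY = "/management/autoscaler/state"      # illustrative key path

    def load_state():
        # Fetch the last persisted auto-scaler state; start fresh if none exists.
        resp = requests.get(ETCD_KEYS + STATE_KEY)
        if resp.status_code == 200:
            return json.loads(resp.json()["node"]["value"])
        return {"desired_replicas": 1}

    def save_state(state):
        # Persist the full state after every decision, so a restarted instance
        # of the management logic can continue where this one left off.
        requests.put(ETCD_KEYS + STATE_KEY, data={"value": json.dumps(state)})

    def autoscaling_step(decide):
        # One stateless iteration: read state, apply the (pluggable) policy,
        # write the state back to the KV-store.
        state = load_state()
        state = decide(state)
        save_state(state)
        return state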
Self-managing microservice compositions. By generalization, and building on the concept of service composability, the same architecture can be employed to deploy self-managing service compositions or applications using the microservice architectural pattern [11].

A microservice-oriented application can be represented with a type graph of microservices that invoke each other, and an instance graph representing the multiple instances of microservices that are running to provide resilience and performance guarantees (e.g., as in Fig. 1).

Fig. 1. Type graph and instance graph of an example microservices-based application.

In microservice architectures, several patterns are used to guarantee resilient, fail-fast behavior, for instance the circuit-breaker pattern [12] or client-side load balancing such as in the Netflix Ribbon library (http://techblog.netflix.com/2013/01/announcing-ribbon-tying-netflix-mid.html). The typical deployment has multiple instances of the same microservice running at the same time, possibly with underlying data synchronization mechanisms for stateful services. The rationale behind this choice is to be able to deploy microservice instances across data centers and infrastructure service providers, and to let each microservice quickly adjust to failures by providing alternative endpoints for each service type.

In Fig. 2, we provide an intuitive representation of how multiple KV-store clusters can be used to implement self-managing microservice applications across cloud providers. Each microservice is deployed with its own KV-store cluster for internal configuration management and discovery among components. Local management functionalities (e.g., component health management, scaling components) are delegated to nodes in the local cluster. Another KV-store cluster is used at the "global" (application) level. This "composition cluster" is used both for endpoint discovery across microservices and for leader election to start monitoring, auto-scaling, and health management functionalities at the service composition level. Other application-level decisions, like for instance microservice placement across clouds depending on latencies and costs, or traffic routing across microservices, can be implemented as management logic in the composition cluster. Combined with placement across failure domains, the proposed architecture enables distributed hierarchical self-management, akin to an organism (i.e., the composed service) that is able to recreate its cells to maintain its morphology while each cell (i.e., each microservice) is a living self-managing element.

Fig. 2. Hierarchical KV-store clusters for microservices management.

3.1. Atomic service example

In this subsection we provide an example of how to apply the concept of self-managing services to a monolithic Web application
acting as a single atomic service which is gradually converted to a CNA.

The functionalities needed in our example are component discovery and configuration, health management, monitoring, and auto-scaling. In order to introduce them, we also introduce the concepts of orchestration, distributed configuration, and service endpoint discovery in the following paragraphs.

3.1.1. Cloud service orchestration

Infrastructure as a service offers APIs to deploy and dismiss compute, network, and storage resources. However, the advantages of on-demand resource deployment would be limited if it could not be automated. Services and applications typically use a set of interconnected compute, storage, and network resources to achieve their specific functionality. In order to automate their deployment and configuration in a consistent and reusable manner, deployment automation tools and languages (e.g., Amazon CloudFormation (https://aws.amazon.com/cloudformation/), OpenStack Heat (https://wiki.openstack.org/wiki/Heat), TOSCA (http://docs.oasis-open.org/tosca/TOSCA/v1.0/os/TOSCA-v1.0-os.html)) have been introduced. Generalizing, they typically consist of a language for the declarative description of the needed resources and their interdependencies (service template), combined with an engine that builds a dependency graph from the template and manages the ordered deployment of resources.

Cloud orchestration [13] is an abstraction of deployment automation. Services are defined by a type graph representing the needed resources and connection topology. Each time a service of a given type needs to be instantiated, an orchestrator is started with the aim of deploying and configuring the needed resources (possibly using deployment automation tools as actuators). The abstraction w.r.t. deployment automation comes from the fact that cloud orchestration is arbitrarily composable: the orchestration logic of a composed service triggers the orchestration of its (atomic or composed) service components, creating and running as many orchestrator instances as needed.

Each orchestrator has its own representation of the deployed service topology and its components in the instance graph. In Fig. 3 we provide an example of a type graph (TG) for a simple Web application with caching (left). The application topology allows a load balancer (LB) to forward requests to up to 20 application servers (AS), which are connected to at most 5 database (DB) and 4 caching (CA) instances, respectively. Cardinalities on edges represent the minimum and maximum number of allowed connections among instances. For example, a CA node can serve up to 5 AS nodes. These cardinalities are typically derived from engineering experience [14]. The right graph in Fig. 3 is the instance graph (IG) for the same application. In algebraic graph terminology we can say that the instance graph "is typed" over the type graph, with the semantics that the topology respects the type graph's topology and cardinalities [15].

Fig. 3. Type graph of a simple web application with caching (left), example of instance graph of the same application (right).

3.1.2. Distributed configuration and service endpoint discovery

A common problem in cloud computing development is the configuration of service components and their dependencies. The main reason it is challenging is the dynamic nature of cloud applications. Virtual machines (or containers) are dynamically provisioned and their endpoints (the IP addresses and ports at which services can be reached) are only known after resources have been provisioned and components started. This is what is commonly known as service endpoint discovery. Different solutions for distributed cloud configuration have been proposed both in academic literature and open source communities, most of them sharing common characteristics such as a consensus mechanism and a distributed KV-store API, as presented in the self-managing microservices concept. In this work we consider Etcd, which is our preferred choice due to its simplicity of deployment and its extensive documentation.

According to its self-description, Etcd is a "distributed, consistent key–value store for shared configuration and service discovery". The high-level function is simple: multiple nodes run an Etcd server and are connected with each other forming a cluster, and a consensus algorithm (Raft (http://raftconsensus.github.io/) in this case) is used for fault tolerance and consistency of the KV-store.

Nodes that form part of the same cluster share a common token and can discover each other by using a global public node registry service. Alternatively, a dedicated private node registry service can be run anywhere (see, for example, the "dedicated Etcd host" in http://blog.zhaw.ch/icclab/setup-a-kubernetes-cluster-on-openstack-with-heat).

In Etcd the key space is hierarchical and organized in directories. Both keys and directories are generally referred to as "nodes". Node values can be set and retrieved by Etcd clients over a REST interface. Node values can also be "watched" by clients, which receive a notification whenever the value of a node changes.

The typical usage of Etcd for service configuration is to automatically update component configurations whenever there is a relevant change in the system. For instance, referring to our example application in Fig. 3, the load balancer component can use Etcd to watch a directory listing all the application servers and reconfigure its forwarding policy as soon as a new application server is started. The "contract" for this reconfiguration simply requires application servers to know where to register when they are started.
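This contract can be sketched against Etcd's v2 REST API as follows (an illustrative sketch only: the /instances/as/... directory layout and the reconfigure_lb() callback are our own assumptions, not the actual implementation):

    import requests

    ETCD_KEYS = "http://127.0.0.1:2379/v2/keys"   # assumed Etcd endpoint (v2 API)

    def register_app_server(uuid, endpoint):
        # An application server advertises its endpoint under its component type.
        requests.put(f"{ETCD_KEYS}/instances/as/{uuid}/endpoint",
                     data={"value": endpoint})

    def current_app_servers():
        # List all registered application server endpoints.
        listing = requests.get(f"{ETCD_KEYS}/instances/as",
                               params={"recursive": "true"}).json()
        return [leaf["value"]
                for child in listing["node"].get("nodes", [])
                for leaf in child.get("nodes", [])
                if leaf["key"].endswith("/endpoint")]

    def watch_app_servers(reconfigure_lb):
        # The load balancer blocks on a recursive watch of the directory and
        # regenerates its forwarding configuration on every change.
        while True:
            requests.get(f"{ETCD_KEYS}/instances/as",
                         params={"wait": "true", "recursive": "true"})
            reconfigure_lb(current_app_servers())  # e.g., rewrite and reload the LB config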
The consensus algorithm underneath Etcd also provides leader election functionality, so that one node of the cluster is recognized as the sole "coordinator" by all other nodes. We will extensively use this functionality in the self-managing architecture we propose in the next section.

3.2. Component deployment and discovery

The initial deployment of a self-managing atomic service is achieved through cloud orchestration as described in [13]. All deployed software components (be they VMs or containers) know their role in the type graph (e.g., whether they are an LB, AS, DB, or CA in our example). Each component is assigned a universally unique identifier (UUID). All components can access the Etcd cluster and discover each other.

The Etcd directory structure can be used to represent both the service type graph and the instance graph of the deployed components and their interconnections, as in Fig. 4. When a new component is deployed and started, it (1) joins the Etcd cluster and (2) advertises its availability by registering a new directory under its component type and saving relevant connection information
there. For instance, in our example in Fig. 4, a new CA instance adds a new directory with its UUID (uuid1) and saves a key with its endpoint to be used by the application server components. Edges in the instance graph are used to keep track of component connections in order to enforce the cardinalities on connections specified in the type graph. The auto-scaling manager (described in the following subsections) is responsible for deciding how many components per type are needed, while the health manager will make sure that exactly as many instances as indicated by the auto-scaling logic are running and that their interconnections match the type graph. Component information (e.g., endpoints) is published by each component to Etcd periodically, with a period of 5 s and a time to live (TTL) of 10 s. Whenever a component fails or is removed, its access information is automatically removed from Etcd, and the health manager and all dependent components can be notified of the change.

Fig. 4. An example snippet of the representation of a type graph (left) and instance graph (right) using the Etcd directory structure.
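The periodic publication with a TTL can be sketched as follows (illustrative code against Etcd's v2 API; the 5 s period and 10 s TTL follow the description above, while the key layout and helper name are assumptions):

    import time
    import requests

    ETCD_KEYS = "http://127.0.0.1:2379/v2/keys"  # assumed Etcd endpoint (v2 API)

    def heartbeat(component_type, uuid, endpoint, period_s=5, ttl_s=10):
        # Re-publish the component endpoint every 5 s with a 10 s TTL.
        # If the component dies, the key expires and watchers are notified.
        key = f"{ETCD_KEYS}/instances/{component_type}/{uuid}/endpoint"
        while True:
            requests.put(key, data={"value": endpoint, "ttl": ttl_s})
            time.sleep(period_s)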
Once the orchestrator has deployed the initial set of required components for the service, it sets the status of the service on Etcd to "active". Once this happens, the component which was elected leader of the Etcd cluster will start the self-managing functionality with the auto-scaling and health management logic.

3.2.1. Monitoring

Before discussing the auto-scaling functionality, we describe how Etcd can also be used to store a partial and aggregated subset of monitoring information in order to allow auto-scaling decisions to be taken. The rationale behind storing monitoring information in Etcd is to allow resilience of the auto-scaling logic by making it stateless. Even if the VM or container where the auto-scaling logic has been running fails, a new component can be started to take over the auto-scaling logic and its knowledge base from where it was left.

The common practice in cloud monitoring is to gather both low-level metrics from the virtual systems, such as CPU, I/O, and RAM usage, as well as higher-level and application-specific metrics such as response times and throughputs [16]. Considering the latter metrics, full response time distributions are typically relevant in system performance evaluation, but for the sake of QoS management, high percentiles (e.g., 95th, 99th) over time windows of a few seconds are in general adequate to assess the system behavior. We assume that each relevant component runs internal monitoring logic that performs metrics aggregation and publishes aggregated metrics to Etcd. The actual directory structure and format in which to save key performance indicators (KPIs) depends on the auto-scaling logic to be used and is beyond the scope of this work. For instance, in our example the load balancer can use its own internal metrics in combination with the logstash aggregator (http://logstash.net/) to provide the average request rate, response time, and queue length over the last 5, 10, and 30 s and 1, 5, and 10 min. These metrics are typically enough for an auto-scaling logic to take decisions on the number of needed application servers.

3.2.2. Auto-scaling

The auto-scaling component uses a performance model to control horizontal scalability of the components. Its main function is to decide how many instances of each component need to be running to grant the desired QoS. Auto-scaling is started by the leader node. Its logic collects the monitoring information and the current system configuration from Etcd, and outputs the number of required components for each component type. This information is stored in the type graph for each node under the cardinality folder with the key "req" (required), as in Fig. 4.

3.2.3. Health management

The node that is assigned health management functionalities compares the instance graph with the desired state of the system (as specified by the auto-scaling logic) and takes care of (1) terminating and restarting unresponsive components, (2) instantiating new components, (3) destroying no longer needed components, and (4) configuring the connections among components in the instance graph so that cardinalities are enforced.
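A minimal sketch of this reconciliation is given below (illustrative only: the Etcd paths loosely mirror Fig. 4, while start_instance() and destroy_instance() stand for whatever actuator is used, e.g., the orchestrator or, in our implementation, Fleet):

    import requests

    ETCD_KEYS = "http://127.0.0.1:2379/v2/keys"  # assumed Etcd endpoint (v2 API)

    def list_instances(component_type):
        # Return the UUIDs currently registered under a component type.
        resp = requests.get(f"{ETCD_KEYS}/instances/{component_type}").json()
        return [n["key"].rsplit("/", 1)[-1]
                for n in resp["node"].get("nodes", [])]

    def required_instances(component_type):
        # Read the cardinality required by the auto-scaling logic ("req" key).
        resp = requests.get(
            f"{ETCD_KEYS}/types/{component_type}/cardinality/req").json()
        return int(resp["node"]["value"])

    def reconcile(component_type, start_instance, destroy_instance):
        # Align the number of running instances with the desired cardinality.
        running = list_instances(component_type)
        desired = required_instances(component_type)
        for _ in range(desired - len(running)):
            start_instance(component_type)          # e.g., submit a new unit to Fleet
        for uuid in running[:max(0, len(running) - desired)]:
            destroy_instance(component_type, uuid)  # e.g., destroy the Fleet unit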
3.2.4. Full life-cycle

Fig. 5 depicts a simplified sequence diagram putting all the pieces together. The orchestrator sets up the initial deployment of the service components. They register to Etcd and watch relevant Etcd directories to perform configuration updates (reconfiguration parts for AS and CA components are omitted). Once all initial components are deployed, the orchestrator sets the service state to "active". Components generating monitoring information save it periodically in Etcd.

Each component runs a periodic check on the service state. If the service is active and a node detects that it is the Etcd cluster leader, it starts the auto-scale and health management processes. Alternatively, auto-scale and health management components can be started on other nodes depending on their utilization. A watch mechanism can be implemented from the cluster leader to indicate to a component that it should start a management functionality.
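The periodic check can be sketched as follows (an illustration under the assumption that the local Etcd member's v2 statistics endpoint is used to detect Raft leadership; the start_autoscaler() and start_health_manager() helpers are placeholders):

    import time
    import requests

    ETCD = "http://127.0.0.1:2379"  # assumed local Etcd member (v2 API)

    def service_is_active():
        resp = requests.get(f"{ETCD}/v2/keys/service/state")
        return resp.status_code == 200 and resp.json()["node"]["value"] == "active"

    def i_am_leader():
        # /v2/stats/self reports "StateLeader" on the member currently leading Raft.
        return requests.get(f"{ETCD}/v2/stats/self").json().get("state") == "StateLeader"

    def lifecycle_loop(start_autoscaler, start_health_manager, check_every_s=10):
        # Periodic check run by every node: the elected leader (and only the
        # leader) starts the management functionalities.
        started = False
        while True:
            if service_is_active() and i_am_leader() and not started:
                start_autoscaler()
                start_health_manager()
                started = True
            time.sleep(check_every_s)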

Fig. 5. Sequence diagram depicting a simplified service instantiation and deinstantiation. For simplicity we represent Etcd as a single process.

3.2.5. Self-healing properties

By placing components across different failure domains (e.g., availability zones in the same data center, or different data centers), the architecture described above is resilient to failure and is able to guarantee that failed components will be restarted within seconds. The fact that any remaining node can be elected leader, and that the desired application state and monitoring data are shared across an Etcd cluster, makes the health management and auto-scaling components stateless, and allows the atomic service to be correctly managed as long as the cluster is composed of the minimum number of nodes required for consensus, which is three.

4. Experience

In the context of the Cloud-Native Applications (CNA) research initiative at the Service Prototyping Lab at Zurich University of Applied Sciences (http://blog.zhaw.ch/icclab/), we designed and evaluated various different forms of CNA applications. Amongst the practical results are CNA guidelines with the most common problems and pitfalls of application development specifically for the cloud. Here, we report on our experiences with a specific focus on applying the self-managing principles exposed in the previous sections to an existing application with one specific CNA support stack.

4.1. Implementation

Step 1: Use case identification. The goal of our experiments was on one hand to gather hands-on experience with the latest technologies supporting the design patterns for cloud-based applications, and on the other hand to successfully apply these patterns to a traditional business application which was not designed to run in the cloud. Rather than starting from scratch with an application designed from inception for the cloud, we wanted to show that decomposition into smaller components (even by component functionality rather than application feature) often makes it possible to achieve resilience and elasticity even in legacy applications.

For the evaluation of a suitable application, we decided to uphold the following criteria. The application should be:

• available as open source, to guarantee the reproducibility of our experiments;
• a "business" application, to promote adoption of the CNA methods also for legacy applications;
• a commonly used type of application, to achieve representative results.

We took some time evaluating several well-known and positively reviewed open source business applications and came up with a list of about ten applications, such as Customer Relationship Management (CRM), Enterprise Resource Planning (ERP), and Document Management Systems (DMS). At the very end of our evaluation we were left with two choices: SuiteCRM (https://suitecrm.com/) and Zurmo (http://zurmo.org/).

In the end we decided to go with Zurmo. The reasons behind this choice were that Zurmo:

• Is developed by a team with extensive experience with CRMs (formerly offering a modified version of SugarCRM at Intelestream)
• Follows test-driven development (TDD) practices in its development
• Has all the core functionality of a CRM without offering an overwhelming amount of features
• Has a modern look and feel to it.

The first two reasons have given us confidence that the code is of high quality and that our changes will not just break the system in an unforeseen or hidden way. While evaluating CRMs, we repeatedly encountered statements saying that one of the main problems of CRMs is that people are not using them. The last two reasons for choosing Zurmo address exactly this issue. After evaluating alternative products, we think Zurmo could be a CRM solution which would actually be used by its end-users.

Zurmo CRM (see Fig. 6) is a PHP application employing the MVC pattern (plus a front-controller which is responsible for handling/processing the incoming HTTP requests) based on the Yii web framework. Apache is the recommended web server, MySQL is used as the backend datastore, and Memcached for caching. It is pretty much a typical monolithic 3-tier application with an additional caching layer. The recommended way of running Zurmo is via Apache's PHP module. Thus, the logic to handle the HTTP requests and the actual application logic are somewhat tightly coupled.

Fig. 6. Zurmo initial architecture.

Step 2: Platform. The subsequent step consisted of choosing a platform upon which to run the application. We wanted to address both private and public cloud scenarios. Given the availability of an OpenStack deployment at our lab, we chose to use both our internal private cloud and Amazon Web Services (AWS).

We used CoreOS (https://coreos.com/) as the basic VM image and Fleet (https://coreos.com/fleet) as a basic container/health-management component. Fleet is a distributed systemd (boot manager) cluster combined with a distributed Etcd (key–value store). After using it for some time, we can definitely confirm what CoreOS states about Fleet in its documentation: it is very low-level, and other tools (e.g., Kubernetes (http://kubernetes.io)) are more appropriate for managing application-level containers. Our experiences with the CoreOS + Fleet stack were not always positive and we encountered some known bugs that made the system more unstable than we expected (e.g., failing to correctly pull containers concurrently from Docker Hub (https://hub.docker.com)). Also, it is sometimes pretty hard to find out why a container is not scheduled for execution in Fleet. More verbose output of commands and logging would be much more helpful to developers approaching Fleet for the first time.

Step 3: Architectural changes. We need to make every part of the application scalable and resilient. The first thing we did was to split the application using different Docker (https://www.docker.com) containers to run the basic components (e.g., the Apache Web server, Memcached, the MySQL RDBMS).

We decided to first scale out the web server. Since the application core is, in its original configuration, tightly coupled to the web server, every Apache process comes with an embedded PHP interpreter. When we scale the web server, we automatically also scale the application core. To achieve this, all we need is a load balancer which forwards incoming HTTP requests to the web servers. In the original implementation, Zurmo saved session-based information locally in the web server.

We modified the session handling so that it saves the session state in the cache as well as in the database. We can now access it quickly from the cache or, should we encounter a cache miss, we can still recover it from the database. After this change the architecture looks exactly the same, but the overall application is scalable and resilient. After this modification there is no more need to use sticky sessions in the Web servers. In other words, we made the web server tier stateless so that users can be evenly distributed among the existing web servers and, if one of the web servers or the caching system should crash, users will not be impacted by it.

Memcached already allows horizontally scaling its service by adding additional servers to a cluster. We then replaced the single-server MySQL setup with a MySQL Galera Percona cluster to CNA-ify more parts of the application (see Fig. 7).

Fig. 7. Zurmo CNA architecture.

Step 4: Monitoring. We implemented a generic monitoring system that can be easily reused for any CNA application. It consists of the so-called ELK stack (https://www.elastic.co/webinars/introduction-elk-stack), log-courier, and collectd. The ELK stack in turn consists of Elasticsearch, Logstash, and Kibana. Logstash collects log lines, transforms them into a unified format and sends them to a pre-defined output. Collectd collects system metrics and stores them in a file. We use Log-Courier to send the application and system metric log files from the container in which a service runs to Logstash. The output lines of Logstash are transmitted to Elasticsearch, which is a full-text search server. Kibana is a dashboard and visualization web application which gets its input data from Elasticsearch. It is able to display the gathered metrics in a meaningful way for human administrators. To provide the generated metrics to the scaling engine, we developed a new output adapter for Logstash which sends the processed data directly to Etcd. The overall implementation is depicted in Fig. 8. The monitoring component is essential to our experimental evaluation.

Fig. 8. Monitoring and logging.
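Conceptually, the adapter simply writes each aggregated metric under a well-known Etcd key where the scaling engine can read it. The following Python sketch illustrates the idea (the real adapter is a Logstash output plugin; the key layout shown here is an assumption):

    import requests

    ETCD_KEYS = "http://127.0.0.1:2379/v2/keys"  # assumed Etcd endpoint (v2 API)

    def publish_metric(service, name, value, ttl_s=30):
        # Publish an aggregated metric (e.g., the 95th percentile response time
        # over the last 15 s) where the scaling engine can read it. A short TTL
        # ensures stale values disappear if the monitoring pipeline stops.
        requests.put(f"{ETCD_KEYS}/metrics/{service}/{name}",
                     data={"value": str(value), "ttl": ttl_s})

    # Example: what such an adapter would emit for the Apache tier.
    publish_metric("apache", "rt_95th_ms", 873.4)
    publish_metric("apache", "request_rate", 18.2)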
Step 5: Autoscaling. We also implemented our own scaling engine for container-based applications: Dynamite. Dynamite is an open-source Python application leveraging Etcd and Fleet. It takes care of automatic horizontal scaling, but also of the initial deployment of an application, orchestrating the instantiation of a set of components (Fleet units) specified in a YAML configuration file. This configuration strategy allows Dynamite to be used to recursively instantiate service compositions, by having a top-level YAML configuration specify a list of Dynamite instances, each with its own configuration file. Deploying the top Dynamite instance enables the "orchestration of a set of orchestrators", each responsible for the independent scaling and management of a microservice. Dynamite uses system metrics and application-related information to decide whether a group of containers should be scaled out or in. If a service should be scaled out, Dynamite
creates a new component instance (i.e., a "unit" in Fleet parlance) and submits it to Fleet. Otherwise, if a scale-in is requested, it instructs Fleet to destroy a specific unit.
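The actuation side of this mechanism can be sketched with the fleetctl command-line client (a simplified illustration, not Dynamite's actual code; the apache@.service unit template name is an assumption and the template must have been submitted to Fleet beforehand):

    import subprocess

    def scale_out(unit_template="apache@.service", instance_id="5"):
        # Instantiate a new unit from a systemd/Fleet unit template and start it.
        unit = unit_template.replace("@", f"@{instance_id}")
        subprocess.check_call(["fleetctl", "start", unit])

    def scale_in(unit_name="[email protected]"):
        # Ask Fleet to stop and remove a specific unit instance.
        subprocess.check_call(["fleetctl", "destroy", unit_name])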
Dynamite is itself designed according to CNA principles. If it crashes, it is restarted and re-initialized using the information stored in Etcd. This way, Dynamite can be run in a CoreOS cluster resiliently. Even if the entire node Dynamite is running on were to crash, Fleet would re-schedule the service to another machine and start Dynamite there, where it could still restore the state from Etcd. For more details, we refer the reader to the documentation of the Dynamite implementation (https://github.com/icclab/dynamite/blob/master/readme.md) as well as our work previously published in [17].

5. Experimental results

In this section we report on our resilience and scalability experiments with our cloud-native Zurmo implementation. We provide the complete source code of the application in our GitHub repository (https://github.com/icclab/cna-seed-project).

All the experiments we discuss here have been executed on Amazon AWS (eu-central) using 12 t2.medium-sized virtual machines. We also ran the same experiments on our local OpenStack installation. The OpenStack results are in line with AWS and we avoid reporting them here because they do not provide any additional insight. Instead, in the spirit of enabling verification and repeatability, we decided to focus on the AWS experiments. They can be easily repeated and independently verified by deploying the CloudFormation template we provide in the "aws" directory of the implementation.

The experiments are aimed at demonstrating that the proposed self-managing architecture and our prototypical implementation correctly address the requirements we identified for cloud-native applications: elasticity and resilience. In other words, we pose ourselves the following questions:

• Does the application scale (out and in) according to load variations?
• Is the application resilient to failures?

In order to demonstrate resilience we emulate IaaS failures by killing containers and VMs, respectively. Scaling of the application is induced by a load generator whose intensity varies over time. The load generation tool we used is called Tsung (http://tsung.erlang-projects.org). We created a Zurmo navigation scenario by capturing it through a browser extension, then generalized and randomized it. You can also find this in our repository, in the "zurmo_tsung" component. In our experiments the load was generated from our laptops running Tsung locally. We simulated a gradually increasing number of users (from 10 up to 100) with a random think time between requests of 5 s on average. This yields a request rate of 0.2 requests per second per user, and a theoretical maximum expected rate of 20 requests per second with 100 concurrent users. The load is mainly composed of read (HTTP GET) operations, around 200, and roughly 30 write (HTTP POST) requests involving database writes. It is important to notice that, due to our choice of avoiding sticky HTTP sessions, any request saving data in the HTTP session object also results in database writes.

5.1. Scaling

In order to address the first question we configured Dynamite to scale out the service, creating a new Apache container instance every time the 95th percentile of the application response time (RT) continuously exceeds 1000 ms in a 15 s window. The scale-in logic instead will shut down any Apache container whose CPU utilization has been lower than 10% for a period of at least 30 s. Given that we are managing containers, scaling in and out is a very quick operation, and we can afford to react to short-term signals (e.g., RT over a few seconds).
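These rules correspond to decision logic of the following shape (an illustrative sketch; in practice Dynamite is driven by thresholds in its YAML configuration rather than hard-coded rules, and the metric inputs are assumed to be the aggregated values published to Etcd by the monitoring pipeline):

    import time

    def scaling_decision(rt_95th_ms, cpu_by_container, low_cpu_since):
        # Scale out when the 95th percentile RT stays above 1000 ms over the
        # 15 s aggregation window; scale in a container whose CPU stayed below
        # 10% for at least 30 s.
        now = time.time()
        if rt_95th_ms > 1000:
            return ("scale_out", None)
        for container, cpu in cpu_by_container.items():
            if cpu < 0.10:
                since = low_cpu_since.setdefault(container, now)
                if now - since >= 30:
                    return ("scale_in", container)
            else:
                low_cpu_since.pop(container, None)
        return ("noop", None)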
Since we used Fleet and CoreOS for the experiments, and not directly an IaaS solution billing per container usage, we also needed to manage our own virtual machines. We used 10 VMs that are pre-started before initiating the load and that are not part of the scaling exercise. The assumption is that future container-native applications will only be billed per container usage in seconds, and developers will only scale applications through containers. The actual distribution of containers upon virtual machines is decided by the Fleet scheduler, and in general results in a uniform distribution across VMs.

Using our own internal monitoring system allows the application to scale on high-level performance metrics (e.g., 95th percentiles) that are computed at regular intervals by the logstash component and saved to Etcd to be accessed by Dynamite.

Fig. 9 shows one example run using the scaling engine to withstand a load of 10 concurrent users growing to 100. In the upper graph we plot the application response time in milliseconds (red continuous line, left axis) and the request rate in requests per second (green dashed line, right axis). The request rate grows from roughly 2 requests per second up to 20, while the response time is kept at bay by adaptively increasing the number of running Apache containers. The bottom part of the graph shows the number of running Apache containers at any point in time (red continuous line) as well as the number of simulated users. As soon as the generated traffic ends, the number of Apache containers is reduced.

This simple experiment shows the feasibility of an auto-scaling mechanism according to our self-managing cloud-native application principles. For this example we only implemented a simple rule-based solution and we make no claims concerning its optimality with respect to minimizing operational costs. More advanced adaptive model-based solutions (for instance the one in [18]) could be easily integrated using the same framework.

5.2. Resilience to container failures

In order to emulate container failures, we extended the Multi-Cloud Simulation and Emulation Tool (MC-EMU, https://github.com/serviceprototypinglab/mcemu). MC-EMU is an extensible open-source tool for the dynamic selection of multiple resource services according to their availability, price and capacity. We have extended MC-EMU with an additional unavailability model and hooks for enforcing container service unavailability. The container service hook connects to a Docker interface per VM to retrieve available container images and running instances. Following the model's determination of unavailability, the respective containers are forcibly stopped remotely. It is the task of the CNA framework to ensure that in such cases the desired number of instances per image is only briefly underbid and that replacement instances are launched quickly. Therefore, the overall application's availability should be close to 100% even if the container instances are emulated with 90% estimated availability.
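The effect of this hook can be approximated with a few lines of Python driving the Docker CLI (a simplified stand-in for MC-EMU's remote mechanism, not its actual code):

    import random
    import subprocess

    def kill_containers_randomly(probability=0.10, name_filter="apache"):
        # On every invocation (e.g., once per minute), stop each matching
        # running container with the given probability to emulate reduced
        # container availability.
        ids = subprocess.check_output(
            ["docker", "ps", "-q", "--filter", f"name={name_filter}"],
            text=True).split()
        for container_id in ids:
            if random.random() < probability:
                subprocess.check_call(["docker", "stop", container_id])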


Fig. 10 depicts the results of an example run in which we forced containers to fail with a 10% probability every minute. With respect to the previous example, one can clearly notice the oscillating number of Apache Web servers in the bottom of the figure, and the effect this has on the application response time. Figs. 11 and 12 show a glimpse of the monitoring metrics we were able to track and visualize through Kibana while running the experiment. We plot the average and percentile response times, the response time per Apache container, the request rate, the HTTP response codes, the number of running Apache containers, and the CPU, memory, and disk utilization for each.

5.3. Resilience to VM failures

We also emulated VM failures, although without the automation models of MC-EMU or similar tools like Chaos Monkey (https://github.com/Netflix/SimianArmy). Instead, we simply used the AWS console to manually kill one or more VMs at a given point in time to pinpoint critical moments.

Fig. 9. Response time, request rate, number of users and Apache servers running the system without externally induced failures.

Fig. 10. Response time, request rate, number of users and Apache servers running the system inducing probabilistic container failures.

The effects of killing entire VMs in our prototype implementation vary a lot depending on the role of the VM in the Etcd cluster as well as the type of containers it is running. As one could expect, killing VMs hosting only "stateless" (or almost stateless) containers (e.g., Apache, Memcached) only has small and transitory effects on the application quality of service. However, terminating a VM running stateful components (e.g., the database) has much more noticeable effects.

There are two types of VMs which we explicitly did not target for termination:

• the VM running logstash;
• the VMs acting as "members" of the Etcd cluster.

The reason for the former exclusion is trivial and easily amendable: we simply did not have time to implement logstash as a load-balanced service with multiple containers. Killing the logstash container results in a period of a few seconds without visible metrics in Kibana, which would have defeated the goals of our experiment. The solution to this shortcoming is straightforward engineering.

Concerning the Etcd cluster member VMs, the issue is that the discovery token mechanism used for Etcd cluster initialization works only for cluster bootstrap. In order to keep the consensus quorum small, the default cluster is only composed of three members, while other nodes join as "proxies" (they just read cluster state). Any VM termination of one of the 3 member nodes in AWS would restart the VM, which would try to use Etcd discovery again to rejoin the cluster, but this would fail. In other failure scenarios, the machine might even change its IP address, requiring manual deletion and addition of the new endpoint. This problem is fairly common for Etcd in AWS, so much so that we found an implementation of a containerized solution for it (http://engineering.monsanto.com/2015/06/12/etcd-clustering/). However, we did not yet integrate it into our stack and will leave a comparison to future work.

In order to show in practice how different the effects of killing VMs can be, we report here a run in which we target VMs running different types of components. Fig. 13 depicts a run in which we killed 2 of the VMs, running half of the 4 MySQL Galera cluster nodes, roughly 3 min into the run (manually induced failures of two VMs each time are marked with blue vertical dashed lines). Together with the database containers, one can see that some Apache containers were also terminated. Moreover, having only two Galera nodes left, one of which was acting as a replication source for the Galera nodes newly (re)spawned by Fleet, means that the database response time became really high for a period, with a clearly visible effect on the Web application response time. Another two VMs at a time were killed, respectively, 6 and 9 min into the run,
but since no database components were hit, apart from the graph of the number of Apache instances, no major effects are perceived in the application response time.

Fig. 11. The real-time monitoring metrics while running the experiment depicted in Fig. 10.

5.4. Lessons learnt

Implementing our self-managing cloud-native application design and applying it to an existing legacy Web application have proven to be valuable exercises in assessing the feasibility of our approach through a proof-of-concept implementation and in identifying its weaknesses.

As is mostly the case when realizing a theoretical design in practice, we were faced with several issues that hindered our progress. Some of them were a consequence of adopting fairly new technologies lacking mature and battle-tested implementations. Here we report, in a bottom-up fashion, the main problems we encountered with the technological stack we used for our implementation.

CoreOS. During about one year of research on CNA we used different releases of CoreOS stable. The peskiest issue we had with it took us quite some time to figure out. The symptoms were that the same containers deployed on the same OS would randomly refuse to start. This caused Fleet/systemd to give up trying to bring up units after too many failed restarts. Fleet was failing to bring up replicas of components we needed to be redundant, which made it extremely hard to hope to achieve a reliable system in those conditions. These failures in starting containers happened sporadically and we could not reproduce them at will. This is not the behavior one expects with containers: one of their key selling points is to offer consistency between development and production environments.

It took us a while to understand that the random failures were due to a bug (https://github.com/coreos/bugs/issues/471) in the Docker version included in CoreOS 766.3.0. In very few cases, concurrently pulling multiple containers resulted in some container layers being only partially downloaded, but Docker would consider them complete and would refuse to pull again. The problem was aggravated by the fact that we used unit dependencies in Fleet, requiring some units to start together on the same machine. In this case a failing container would cause multiple units to be disabled by Fleet.

It is hence always worth repeating: tight coupling is bad, especially if it implies cascading failures while building a resilient system.

Fig. 12. The real-time monitoring metrics while running the experiment depicted in Fig. 10.

Fig. 13. Response time, request rate, number of users and apache servers running the system inducing VM failures for stateful components.

Etcd. The biggest issue we had with Etcd was already mentioned in the previous section. We use Etcd as the foundation of the whole distributed solution, both as a distributed key-value store and for leader election. We expected that after a machine failure (when the machine gets rebooted or another machine takes its place) rejoining the cluster would be automatic; however, this is not the case on AWS. Artificially causing machine failures, as we did to test the reliability of our implementation, often caused the Etcd cluster to become unhealthy and unresponsive.

Another issue we experienced is that Etcd stops responding to requests (including read requests!) if the machine disk is full. In this case the cluster might again fail, and Fleet would stop behaving correctly across all VM instances.
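To make the role Etcd plays in our design concrete, the following minimal sketch shows how a stateless manager instance could compete for leadership through the etcd v2 HTTP API using a TTL'd key; the endpoint, key name, node identifier, and timing values are illustrative assumptions and not taken from our code base.

import time

import requests

ETCD_LEADER_KEY = "http://127.0.0.1:2379/v2/keys/cna/leader"  # assumed etcd v2 endpoint
NODE_ID = "manager-node-1"                                    # illustrative node identifier
TTL = 30                                                      # leadership lease in seconds


def try_acquire_leadership():
    # Atomically create the key only if it does not exist yet (compare-and-swap).
    r = requests.put(ETCD_LEADER_KEY,
                     data={"value": NODE_ID, "ttl": TTL},
                     params={"prevExist": "false"})
    return r.ok


def renew_leadership():
    # Refresh the TTL only if we still hold the key; otherwise leadership is lost.
    r = requests.put(ETCD_LEADER_KEY,
                     data={"value": NODE_ID, "ttl": TTL},
                     params={"prevValue": NODE_ID})
    return r.ok


while True:
    if try_acquire_leadership() or renew_leadership():
        pass  # we are (still) the leader: run health management and auto-scaling here
    time.sleep(TTL / 3)

Because the lease expires if the leader dies, any surviving node can take over within one TTL, which is the property we rely on when a VM disappears.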
Fleet. Fleet is admittedly not a tool meant to be used directly for container management. Our lesson learnt here is that managed approaches like Kubernetes should be preferred. Apart from this, we often had issues with failed units not being correctly removed from systemd on some nodes and, in general, with misalignment between the systemd state of some hosts and the units Fleet was aware of. Some command line interface mistakes that can easily happen (e.g., trying to run a unit template without giving it an identifier) result in units failing to be removed from systemd and in Fleet hanging in a loop requesting their removal, preventing any other command from being executed.

Another unexpected behavior we managed to trigger while developing is due to the interplay of Fleet and Docker. Fleet is expected to automatically restart failed containers; however, Docker volumes are not removed by default (the rationale being that they might be remounted by other containers). The net effect is that after a while machines with problematic containers run out of disk space, Etcd stops working, the cluster becomes unhealthy, and the whole application is left running on its own without Fleet. The CoreOS bug we mentioned above also caused this on long-running applications, effectively bringing down the service.

These are all minor issues due to the fact that most of the tools we use are themselves still in development. However, any of these problems might become a blocker for developers using these tools for the first time.
Self-managing Zurmo. Finally, some considerations concerning our own design. The first thing to discuss is that we did not go all the way and implement Zurmo as a set of self-managing microservices, each with its own specific application-level functionality.

The main reason is that we did not want to get into Zurmo's code base to split its functionality into different services. This would have meant investing a large amount of time to understand the code and the database (which has more than 150 tables). Instead, we preserved the monolithic structure of the application core written in PHP. What we did was replicate the components and put a load balancer in front of them (e.g., for Apache or the MySQL Galera cluster). So, in a way, we created a microservice for each type of component, with a layer of load balancers in front. This is not the "functional" microservice decomposition advocated by Lewis and Fowler [11]; however, we showed experimentally that for all the purposes of resilience and elasticity it still works. Where it would not work is in fostering and simplifying development by multiple development teams (each catering for one or more microservices as a product) in parallel. This for us means that the microservices idea is actually more a way to scale the development process itself rather than the running application.

We used Etcd for component discovery, for instance for the Galera nodes to find each other, for the load balancers to find their backends, and for Apache to find the Galera cluster endpoint and the Memcached instances. Breaking the application into microservices based at least on component types would, in hindsight, have been a cleaner option.

One of the negative effects of having automatic component reconfigurations upon changes of the component endpoints registered in Etcd is that circular dependencies cause ripple effects propagating through most components. This, for instance, happened when we initially replaced a single MySQL instance with a set of MySQL Galera nodes that needed to self-discover. A much more elegant solution is to put one or more load balancers in front of every microservice and register them as the endpoint for the service. An even better solution is to use the concept of services and an internal DNS to sort out service dependencies, as done in Kubernetes. This solution does not even require reconfigurations upon failures.
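As an illustration of this discovery mechanism, the sketch below shows how a backend could register itself under a TTL'd key and how a load balancer could long-poll the same etcd v2 directory to rebuild its backend list; the key layout, addresses, and reload hook are illustrative assumptions rather than excerpts from our implementation.

import threading
import time

import requests

ETCD = "http://127.0.0.1:2379/v2/keys"   # assumed etcd v2 endpoint
SERVICE_DIR = "/services/apache"         # one directory per component type (assumed layout)
MY_ENDPOINT = "10.0.0.12:80"             # illustrative backend address
TTL = 15


def register_forever():
    # Each backend periodically re-registers itself under a TTL'd key, so a
    # crashed instance silently disappears from the service directory.
    while True:
        requests.put(f"{ETCD}{SERVICE_DIR}/{MY_ENDPOINT}",
                     data={"value": MY_ENDPOINT, "ttl": TTL})
        time.sleep(TTL / 3)


def watch_backends(on_change):
    # A load balancer long-polls the directory and regenerates its backend list
    # whenever any key below it changes (etcd v2 "wait" semantics).
    while True:
        try:
            requests.get(f"{ETCD}{SERVICE_DIR}",
                         params={"wait": "true", "recursive": "true"}, timeout=60)
        except requests.exceptions.Timeout:
            pass  # no change within the long-poll window; refresh the list anyway
        listing = requests.get(f"{ETCD}{SERVICE_DIR}").json()
        nodes = listing.get("node", {}).get("nodes", []) or []
        on_change(sorted(n["value"] for n in nodes))
        time.sleep(1)


threading.Thread(target=register_forever, daemon=True).start()
watch_backends(lambda backends: print("regenerate load balancer config with", backends))

Registering only the load balancers as service endpoints in this way confines reconfiguration ripples to a single hop instead of propagating them through every dependent component.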
A very positive aspect of our implementation is that we now have a self-managing solution that works seamlessly on OpenStack, AWS, and Vagrant. The internal monitoring stack can easily be reused for other applications, and the decomposition into Docker containers allowed us to hit the ground running when starting to port the solution to Kubernetes, which is our ongoing work.

Another aspect to notice is that when we started our implementation work, container managers were in their infancy and we had to build a solution based on IaaS (managing VMs and their clustering) rather than directly using APIs to manage sets of containers. Already now, the availability of container managers has improved, and we expect the commercial market to grow fast in this segment.

If one is charged per VM in an IaaS model, then auto-scaling only containers does not reduce costs. In practice, what can be done is to use, for example on AWS, Auto Scaling groups for the VMs and custom metrics generated from within the application to trigger the instantiation and removal of VMs. The work is conceptually simple, but we have not implemented it yet and are not aware of existing reusable solutions.
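Since we have not implemented this VM-level scaling path, the snippet below is only a hedged sketch of what it could look like: it publishes an application-generated metric to CloudWatch with boto3 so that an alarm attached to an Auto Scaling group could add or remove VMs. The namespace, metric, and group names are illustrative.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")


def publish_load_metric(active_sessions_per_vm, asg_name="cna-zurmo-asg"):
    # The self-managing logic inside the application decides when more or fewer
    # VMs are needed and exposes that decision as a single scalar metric.
    cloudwatch.put_metric_data(
        Namespace="CNA/Zurmo",
        MetricData=[{
            "MetricName": "ActiveSessionsPerVM",
            "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
            "Value": float(active_sessions_per_vm),
            "Unit": "Count",
        }],
    )


# A CloudWatch alarm on this metric would then trigger scaling policies on the
# Auto Scaling group (e.g., add one VM above a high threshold, remove one below a low one).
publish_load_metric(active_sessions_per_vm=42)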
Although our own experience using Fleet to implement the proposed solution was somewhat difficult, we can already report the excellent results we are having by porting the entire architecture to Kubernetes. It is still work in progress, but the whole effort basically amounts to converting Fleet unit files into replication controller and service descriptors for Kubernetes, with no need for component discovery since "services" are framework primitives. All in all, the availability of more mature container management solutions will only simplify the adoption of microservices architectures.

6. Related work

To the best of our knowledge, the work in [1] that we extend here was the first attempt to bring management functionalities within cloud-based applications, leveraging orchestration and the consensus algorithm offered by distributed service configuration and discovery tools to achieve stateless and resilient behavior of the management functionalities according to cloud-native design patterns. The idea builds on the results of, and can benefit from, a number of research areas, namely cloud orchestration, distributed configuration management, health management, auto-scaling, and cloud development patterns.

We already discussed the main orchestration approaches in the literature, as this work reuses much of the ideas from [13]. With respect to the practical orchestration aspects of microservices management, a very recent implementation34 adopts a solution similar to what we proposed in our original position paper. We had some exchanges of views with the authors, but we do not know whether our work had any influence, even a minimal one, on the Autopilot35 cloud-native design pattern recently promoted by Joyent. Either way, we consider the fact that other independent researchers came up with a very similar idea an encouraging sign for our work.

Several tools provide distributed configuration management and discovery (e.g., Etcd, ZooKeeper, Consul). From the research perspective, what is most relevant to this work is the possibility of counting on a reliable implementation of the consensus algorithm.

Much of the health management functionality described in the paper is inspired by Kubernetes [19], although to the best of our knowledge Kubernetes was originally "not intended to span multiple availability zones".36 Ubernetes37 is a project aiming to overcome this limit by federation.

A set of common principles concerning the automated management of applications is making its way into container management and orchestration approaches (e.g., Kubernetes, Mesos,38 Fleet, Docker Compose39) with the identification, conceptualization, and instantiation of management control loops as primitives of the underlying management API. To give a concrete example, "replication controllers" in Kubernetes are a good representative of this: "A replication controller ensures the existence of the desired number of pods for a given role (e.g., "front end"). The autoscaler, in turn, relies on this capability and simply adjusts the desired number of pods, without worrying about how those pods are created or deleted. The autoscaler implementation can focus on demand and usage predictions, and ignore the details of how to implement its decisions" [20]. Our proposed approach leverages basic management functionalities where present, but proposes a way to achieve them as a part of the application itself when the application is deployed on a framework or infrastructure that does not support them. Moreover, we target not only the atomic service level, managing components (akin to what Kubernetes does for containers), but also the service composition level, managing multiple microservice instances. In [20], the authors also advocate control of multiple microservices through choreography rather than "centralized orchestration" to achieve emergent behavior. In our view, once applications are deployed across different cloud vendors, orchestration (albeit with distributed state, as we propose) is still the only way to achieve coherent, coordinated behavior of the distributed system.

Horizontal scaling and the more general problem of quality of service (QoS) of applications in the cloud have been addressed by a multitude of works. We reported extensively on the self-adaptive approaches in [21] and here give only a brief intuition of the most relevant ones. We can cite the contributions of Nguyen et al. [22] and Gandhi et al. [14], which use, respectively, a resource pressure model and a model of the non-linear relationship between server load and the number of requests in the system, together with the maximum load sustainable by a single server, to decide when to allocate new VMs. A survey dealing in particular with the modeling techniques used to control QoS in cloud computing is available in [23]. With respect to the whole area of auto-scaling and elasticity in cloud computing, including the works referenced from the surveys cited above, this work does not directly address the problem of how to scale a cloud application to achieve a specific quality of service. Works in current and past elasticity/auto-scaling literature focus either on the models used or on the actual control logic applied to achieve some performance guarantees. In [1] we propose an approach that deploys the management (e.g., auto-scaling) functionalities within the managed application. This not only falls in the category of self-*/autonomic systems applied to auto-scaling surveyed in [21] (the application becomes self-managing), but, with respect to the state of the art, brings the additional (and cloud-specific) contribution of making the managing functionalities stateless and resilient according to cloud-native design principles. In this respect, the works listed above are related only in desired functionality, but not relevant to the actual contribution we claim, as any of the scaling mechanisms proposed in the literature can be used to perform the actual scaling decision.

Finally, considering cloud patterns and work on porting legacy applications to the cloud, the work of [24] is worth considering when addressing the thorny problem of re-engineering the database layer of existing applications to achieve scalable cloud deployment. In this respect, in our implementation work we simply migrated a single MySQL node to a multi-master cluster whose scalability is, in the end, still limited.

34 https://www.joyent.com/blog/app-centric-micro-orchestration [retrieved on 2016.06.10].
35 http://autopilotpattern.io/.
36 https://github.com/GoogleCloudPlatform/kubernetes/blob/master/DESIGN.md (retrieved 03/03/2015).
37 https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md.
38 http://mesos.apache.org.
39 https://docs.docker.com/compose.

7. Conclusion

In this experience report article, we have introduced an architecture that leverages the concepts of cloud orchestration and distributed configuration management with consensus algorithms to enable self-management of cloud-based applications. More in detail, we build on the distributed storage and leader election functionalities that are commonly available tools in current cloud application development practice to devise a resilient and scalable managing mechanism that provides health management and auto-scaling functionality for atomic and composed services alike. The key design choice enabling resilience is for both functionalities to be stateless, so that in case of failure they can be restarted on any node, collecting shared state information through the configuration management system.

Concerning future work, we plan to extend the idea to incorporate the choice of geolocation and multiple cloud providers in the management functionality. Another aspect we look forward to tackling is continuous deployment management, including adaptive load routing.

Acknowledgments

This work has been partially funded by an internal seed project at ICCLab40 and the MCN project under Grant No. [318109] of the EU 7th Framework Programme. It has also been supported by an AWS in Education Research Grant award, which helped us to run our experiments on a public cloud.

Finally, we would like to acknowledge the help and feedback from our colleagues Andy Edmonds, Florian Dudouet, Michael Erne, and Christof Marti in setting up the ideas and implementation. A big hand goes to Özgür Özsu, who ran most of the experiments and collected all the data during his internship at the lab.

40 http://blog.zhaw.ch/icclab/.

References

[1] G. Toffetti, S. Brunner, M. Blöchlinger, F. Dudouet, A. Edmonds, An architecture for self-managing microservices, in: V.I. Munteanu, T. Fortis (Eds.), Proceedings of the 1st International Workshop on Automated Incident Management in Cloud, AIMC@EuroSys 2015, Bordeaux, France, April 21, 2015, ACM, ISBN: 978-1-4503-3476-1, 2015, pp. 19-24. http://dx.doi.org/10.1145/2747470.2747474.
[2] B. Wilder, Cloud Architecture Patterns, O'Reilly, 2012.
[3] C. Fehling, F. Leymann, R. Retter, W. Schupeck, P. Arbitter, Cloud Computing Patterns, Springer, 2014.
[4] A. Homer, J. Sharp, L. Brader, N. Masashi, T. Swanson, Cloud Design Patterns - Prescriptive Architecture Guidance for Cloud Applications, Microsoft, 2014.
[5] P. Mell, T. Grance, The NIST Definition of Cloud Computing, September 2011.
[6] A. Verma, L. Pedrosa, M. Korupolu, D. Oppenheimer, E. Tune, J. Wilkes, Large-scale cluster management at Google with Borg, 2015.
[7] A. Wiggins, The twelve-factor app, January 2012. http://12factor.net/ (Retrieved: 10.06.16).
[8] S. Wardley, Private vs Enterprise Clouds, February 2011. http://blog.gardeviance.org/2011/02/private-vs-enterprise-clouds.html (Retrieved: 16.03.15).
[9] G. DeCandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, W. Vogels, Dynamo: Amazon's highly available key-value store, SIGOPS Oper. Syst. Rev. (ISSN: 0163-5980) 41 (6) (2007) 205-220. http://dx.doi.org/10.1145/1323293.1294281.
[10] E. Anderson, X. Li, M.A. Shah, J. Tucek, J.J. Wylie, What consistency does your key-value store actually provide?, in: Sixth USENIX Workshop on Hot Topics in System Dependability, HotDep, October 2010.
[11] J. Lewis, M. Fowler, Microservices, March 2014. http://martinfowler.com/articles/microservices.html (Retrieved: 10.06.16).
[12] M.T. Nygard, Release It!: Design and Deploy Production-Ready Software, Pragmatic Bookshelf, 2007.
[13] G. Karagiannis, A. Jamakovic, A. Edmonds, C. Parada, T. Metsch, D. Pichon, M. Corici, S. Ruffino, A. Gomes, P.S. Crosta, et al., Mobile cloud networking: Virtualisation of cellular networks, in: 2014 21st International Conference on Telecommunications (ICT), IEEE, 2014, pp. 410-415.
[14] A. Gandhi, M. Harchol-Balter, R. Raghunathan, M.A. Kozuch, AutoScale: Dynamic, robust capacity management for multi-tier data centers, ACM Trans. Comput. Syst. (2012) 1-33.
[15] H. Ehrig, K. Ehrig, U. Prange, G. Taentzer, Fundamentals of Algebraic Graph Transformation, in: Monographs in Theoretical Computer Science. An EATCS Series, Springer-Verlag New York, Inc., Secaucus, NJ, USA, ISBN: 3540311874, 2006.
[16] G. Aceto, A. Botta, W. De Donato, A. Pescapè, Cloud monitoring: A survey, Comput. Netw. 57 (9) (2013) 2093-2115.
[17] S. Brunner, M. Blöchlinger, G. Toffetti, J. Spillner, T.M. Bohnert, Experimental evaluation of the cloud-native application design, in: 4th International Workshop on Clouds and (eScience) Application Management, CloudAM, December 2015.
[18] A. Gambi, M. Pezzè, G. Toffetti, Kriging-based self-adaptive cloud controllers, IEEE Trans. Serv. Comput. (ISSN: 1939-1374) PP (99) (2015) 368-381. http://dx.doi.org/10.1109/TSC.2015.2389236.
[19] D. Bernstein, Containers and cloud: From LXC to Docker to Kubernetes, IEEE Cloud Comput. (3) (2014) 81-84.
[20] B. Burns, B. Grant, D. Oppenheimer, E. Brewer, J. Wilkes, Borg, Omega, and Kubernetes, Commun. ACM 59 (5) (2016) 50-57.
[21] A. Gambi, G. Toffetti, M. Pezzè, Assurance of self-adaptive controllers for the cloud, in: J. Cámara, R. de Lemos, C. Ghezzi, A. Lopes (Eds.), Assurances for Self-Adaptive Systems - Principles, Models, and Techniques, in: Lecture Notes in Computer Science, vol. 7740, Springer, ISBN: 978-3-642-36248-4, 2013, pp. 311-339. http://dx.doi.org/10.1007/978-3-642-36249-1_12.
[22] H. Nguyen, Z. Shen, X. Gu, S. Subbiah, J. Wilkes, AGILE: Elastic distributed resource scaling for infrastructure-as-a-service, in: Proc. of the International Conference on Autonomic Computing and Communications, 2013, pp. 69-82. ISBN: 978-1-931971-02-7.
[23] D. Ardagna, G. Casale, M. Ciavotta, J.F. Pérez, W. Wang, Quality-of-service in cloud computing: modeling techniques and their applications, J. Internet Serv. Appl. 5 (1) (2014) 1-17.
[24] M. Ellison, R. Calinescu, R. Paige, Re-engineering the database layer of legacy applications for scalable cloud deployment, in: Proceedings of the 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing, UCC '14, IEEE Computer Society, Washington, DC, USA, ISBN: 978-1-4799-7881-6, 2014, pp. 976-979. http://dx.doi.org/10.1109/UCC.2014.160.

Giovanni Toffetti is a senior researcher at the InIT Cloud Computing Lab at the Zurich University of Applied Sciences. He received his Ph.D. in information technology from Politecnico di Milano in 2007. Before joining ZHAW he was with the University of Lugano (USI), University College London (UCL), and the IBM Haifa research labs, where he was part of the cloud operating systems team. He is the author of several publications in the areas of Web engineering, content-based routing, and cloud computing. His main research interests are currently cloud robotics and cloud-native applications with a focus on elasticity/scalability/availability, Web engineering, IaaS/PaaS and cluster schedulers.

Sandro Brunner is a researcher at the ICCLab. During his industry tenure he gained broad knowledge and practical skill in software and systems engineering. In his bachelor thesis, Concept of Migrating an Application into the Cloud, he devised a generic migration method with which applications can be migrated into the cloud. This was the starting point of his research on applications designed to run on the cloud, out of which the Cloud-Native Applications initiative was born, which is currently his main field of research.

Martin Blöchlinger is a researcher at the ICCLab. After an IT apprenticeship and an additional year of programming experience he decided to study at the ZHAW. In summer 2014 he graduated (Bachelor of Science ZFH in Informatics) and a few weeks later started to work at the InIT in the focus area Distributed Software Systems.

Josef Spillner is affiliated with Zurich University of Applied Sciences as senior lecturer and head of the Service Prototyping Lab in conjunction with the InIT Cloud Computing Lab. His background for nearly 15 years, including studies of computer science, doctoral and post-doc projects, has been with Technische Universität Dresden, focusing in particular on service ecosystems, personal clouds and multi-cloud systems. Part of his research was conducted at SAP, NTUU, UFCG and UniBZ.

Thomas Michael Bohnert is Professor at Zurich University of Applied Sciences. His interests are focused on enabling ICT infrastructures, coarsely ranging across mobile/cloud computing, service oriented infrastructure, and carrier grade service delivery (Telco + IT). Prior to being appointed by ZHAW he was with SAP Research, SIEMENS Corporate Technology, and ran an IT consultancy named BNCS. His works have been published in several books, journals and conferences. He serves as regional correspondent (Europe) for the IEEE Communications Magazine news section (GCN). He is the founder of the IEEE Broadband Wireless Access Workshop and holds several project and conference chairs.
