Docker Management Design Patterns
Swarm Mode on Amazon Web Services

Deepak Vohra
Docker Management Design Patterns: Swarm Mode on Amazon Web Services
Deepak Vohra
White Rock, British Columbia, Canada
ISBN-13 (pbk): 978-1-4842-2972-9 ISBN-13 (electronic): 978-1-4842-2973-6
https://doi.org/10.1007/978-1-4842-2973-6
Library of Congress Control Number: 2017955383
Copyright © 2017 by Deepak Vohra
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage
and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or
hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with
every occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an
editorial fashion and to the benefit of the trademark owner, with no intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are
not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to
proprietary rights.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the material
contained herein.
Cover image by Freepik (www.freepik.com)
Managing Director: Welmoed Spahr
Editorial Director: Todd Green
Acquisitions Editor: Steve Anglin
Development Editor: Matthew Moodie
Technical Reviewers: Michael Irwin and Massimo Nardone
Coordinating Editor: Mark Powers
Copy Editor: Kezia Endsley
Distributed to the book trade worldwide by Springer Science+Business Media New York,
233 Spring Street, 6th Floor, New York, NY 10013. Phone 1-800-SPRINGER, fax (201) 348-4505, e-mail
[email protected], or visit www.springeronline.com. Apress Media, LLC is a California LLC
and the sole member (owner) is Springer Science + Business Media Finance Inc (SSBM Finance Inc).
SSBM Finance Inc is a Delaware corporation.
For information on translations, please e-mail [email protected], or visit http://www.apress.com/
rights-permissions.
Apress titles may be purchased in bulk for academic, corporate, or promotional use. eBook versions
and licenses are also available for most titles. For more information, reference our Print and eBook Bulk
Sales web page at http://www.apress.com/bulk-sales.
Any source code or other supplementary material referenced by the author in this book is available to
readers on GitHub via the book’s product page, located at www.apress.com/9781484229729. For more
detailed information, please visit http://www.apress.com/source-code.
Printed on acid-free paper
Contents at a Glance

Chapter 1: Getting Started with Docker .......... 1
Chapter 2: Using Docker in Swarm Mode .......... 9
Chapter 3: Using Docker for AWS to Create a Multi-Zone Swarm .......... 31
Chapter 4: Docker Services .......... 55
Chapter 5: Scaling Services .......... 85
Chapter 6: Using Mounts .......... 97
Chapter 7: Configuring Resources .......... 115
Chapter 8: Scheduling .......... 131
Chapter 9: Rolling Updates .......... 155
Chapter 10: Networking .......... 179
Chapter 11: Logging and Monitoring .......... 201
Chapter 12: Load Balancing .......... 219
Chapter 13: Developing a Highly Available Website .......... 241
Chapter 14: Using Swarm Mode in Docker Cloud .......... 271
Chapter 15: Using Service Stacks .......... 297
Index .......... 317
Contents

Chapter 1: Getting Started with Docker .......... 1
  Setting the Environment .......... 1
  Running a Docker Application .......... 3
  Summary .......... 7
Chapter 2: Using Docker in Swarm Mode .......... 9
  The Problem .......... 9
  The Solution .......... 10
  Docker Swarm Mode .......... 10
    Nodes .......... 10
    Service .......... 11
    Desired State of a Service .......... 11
    Manager Node and Raft Consensus .......... 11
    Worker Nodes .......... 12
    Quorum .......... 12
  Removing a Service .......... 83
  Creating a Global Service .......... 83
  Summary .......... 84
Chapter 5: Scaling Services .......... 85
  The Problem .......... 85
  The Solution .......... 86
  Setting the Environment .......... 87
  Creating a Replicated Service .......... 87
  Scaling Up a Service .......... 88
  Scaling Down a Service .......... 91
  Removing a Service .......... 92
  Global Services Cannot Be Scaled .......... 92
  Scaling Multiple Services Using the Same Command .......... 93
  Service Tasks Replacement on a Node Leaving the Swarm .......... 95
  Summary .......... 96
Chapter 6: Using Mounts .......... 97
  The Problem .......... 97
  The Solution .......... 97
  Volume Mounts .......... 97
  Bind Mounts .......... 98
  Setting the Environment .......... 99
  Creating a Named Volume .......... 100
  Using a Volume Mount .......... 102
  Removing a Volume .......... 112
  Creating and Using a Bind Mount .......... 112
  Summary .......... 114
Chapter 7: Configuring Resources .......... 115
  The Problem .......... 115
  The Solution .......... 116
  Setting the Environment .......... 118
  Creating a Service Without Resource Specification .......... 119
  Reserving Resources .......... 120
  Setting Resource Limits .......... 120
  Creating a Service with Resource Specification .......... 121
  Scaling and Resources .......... 121
  Reserved Resources Must Not Be More Than Resource Limits .......... 122
  Rolling Update to Modify Resource Limits and Reserves .......... 124
  Resource Usage and Node Capacity .......... 125
  Scaling Up the Stack .......... 127
  Summary .......... 130
Chapter 8: Scheduling .......... 131
  The Problem .......... 131
  The Solution .......... 132
  Setting the Environment .......... 135
  Creating and Scheduling a Service: The Spread Scheduling .......... 136
  Desired State Reconciliation .......... 138
Chapter 15: Using Service Stacks .......... 297
  The Problem .......... 297
  The Solution .......... 297
  Setting the Environment .......... 299
  Configuring a Service Stack .......... 303
  Creating a Stack .......... 304
  Listing Stacks .......... 305
  Listing Services .......... 306
  Listing Docker Containers .......... 307
  Using the Service Stack .......... 308
  Removing a Stack .......... 314
  Summary .......... 315
Index .......... 317
About the Author
About the Technical Reviewers
Michael Irwin is an Application Architect at Virginia Tech (Go Hokies!) where he’s both a developer and
evangelist for cutting-edge technologies. He is helping Virginia Tech adopt Docker, cloud services, single-page
applications, CI/CD pipelines, and other current development practices. As a Docker Captain and a local
meetup organizer, he is very active in the Docker community giving presentations and trainings to help others
learn how to best utilize Docker in their organizations. Find him on Twitter at @mikesir87.
Introduction
Docker, made available as open source in March 2013, has become the de facto containerization platform.
The Docker Engine by itself does not provide functionality to create a distributed Docker container cluster
or the ability to scale a cluster of containers, schedule containers on specific nodes, or mount a volume. The
book is about orchestrating Docker containers with the Docker-native Swarm mode, which was introduced in
July 2016 with Docker 1.12. Docker Swarm mode should not be confused with the legacy standalone Docker
Swarm, which is not discussed in the book. The book discusses all aspects of orchestrating/managing Docker,
including creating a Swarm, using mounts, scheduling, scaling, resource management, rolling updates, load
balancing, high availability, logging and monitoring, using multiple zones, and networking. The book also
discusses the managed services for Docker Swarm: Docker for AWS and Docker Cloud Swarm mode.
Chapter 3 discusses the managed service Docker for AWS, which provisions a Docker Swarm from
user-supplied Swarm parameters, including the number of managers and workers and the type of EC2
instances to use. Docker for AWS uses an AWS CloudFormation template to create the resources for a Swarm
and makes it feasible to create a Swarm across multiple AWS zones.
Chapter 4 is about Docker services. Two types of services are defined—replicated and global. Chapter 4
discusses creating a service (replicated and global), scaling a replicated service, listing service tasks, and
updating a service.
Chapter 5 discusses scaling replicated services in more detail, including scaling multiple services
simultaneously. Global services are not scalable.
In Chapter 6, two types of mounts are defined: a bind mount and volume mount. This chapter discusses
creating and using each type of mount.
Chapter 7 is about configuring and using resources in a Swarm. Two types of resources are supported
for configuration: memory and CPU. Two types of resource configurations are defined: reserves and limits.
It discusses creating a service with and without a resource specification.
Chapter 8 discusses scheduling service tasks with the default and custom scheduling. Scheduling
constraints are also discussed.
Chapter 9 discusses rolling updates, including setting a rolling update policy. Different types of rolling
updates are demonstrated, including updating to a different Docker image tag, adding/removing environment
variables, updating resource limits/reserves, and updating to a different Docker image.
Chapter 10 is about networking in Swarm mode, including the built-in overlay networking called ingress
and support for creating a custom overlay network.
Chapter 11 is about logging and monitoring in a Swarm, which does not provide built-in support for either.
Logging and monitoring are added to a Swarm with a Sematext Docker agent, which sends metrics to an
SPM dashboard and logs to a Logsene user interface and Kibana.
Chapter 12 discusses load balancing across service tasks with ingress load balancing. An external AWS
elastic load balancer may also be added for distributing client requests across the EC2 instances on which a
Swarm is based.
Chapter 13 discusses developing a highly available website that uses Amazon Route 53 to create a
hosted zone with resource record sets configured in a Primary/Secondary failover mode.
Chapter 14 discusses another managed service, Docker Cloud, which may be used to provision a
Docker Swarm or connect to an existing Swarm.
Chapter 15 discusses Docker service stacks. A stack is a collection of services that have dependencies
among them and are defined in a single configuration file for deployment.
CHAPTER 1
Getting Started with Docker
Docker has become the de facto containerization platform. The main appeal of Docker over virtual
machines is that it is lightweight. Whereas a virtual machine packages a complete OS in addition to the
application binaries, a Docker container is a lightweight abstraction at the application layer, packaging
only the code and dependencies required to run an application. Multiple Docker containers run as isolated
processes on the same underlying OS kernel. Docker is supported on most commonly used OSes, including
several Linux distributions, Windows, and MacOS. Installing Docker on any of these platforms involves
running several commands and also setting a few parameters. CoreOS Linux has Docker installed out-
of-the-box. We will get started with using Docker Engine on CoreOS in this chapter. This chapter sets the
context of the subsequent chapters, which discuss design patterns for managing Docker Engine using the
Swarm mode. This chapter does not use Swarm mode and provides a contrast to using the Swarm mode.
This chapter includes the following sections:
• Setting the environment
• Running a Docker application
From Choose an Instance Type, choose the t2.micro Type and click on Next. In Configure Instance
Details, specify the number of instances as 1. Select a network or click on Create New VPC to create a new
VPC. Select a subnet or click on Create New Subnet to create a new subnet. Select Enable for Auto-Assign
Public IP. Click on Next.
From Add Storage, select the default settings and click on Next. In Add Tags, no tags need to be added.
Click on Next. From Configure Security Group, add a security group to allow all traffic of any protocol in all
port ranges from any source (0.0.0.0/0). Click on Review and Launch and subsequently click on Launch.
Select a key pair and click on Launch Instances in the Select an Existing Key Pair or Create a New Key
Pair dialog, as shown in Figure 1-2.
An EC2 instance with CoreOS is launched. Obtain the public DNS or IPv4 public IP address of the EC2
instance from the EC2 Console, as shown in Figure 1-3, to SSH login into the instance.
core@ip-172-30-4-75 ~ $ docker
Usage: docker [OPTIONS] COMMAND [arg...]
docker [ --help | -v | --version ]
A self-sufficient runtime for containers.
Options:
--config=~/.docker Location of client config files
-D, --debug Enable debug mode
-H, --host=[] Daemon socket(s) to connect to
-h, --help Print usage
-l, --log-level=info Set the logging level
--tls Use TLS; implied by --tlsverify
--tlscacert=~/.docker/ca.pem Trust certs signed only by this CA
--tlscert=~/.docker/cert.pem Path to TLS certificate file
--tlskey=~/.docker/key.pem Path to TLS key file
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
Commands:
attach Attach to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
Output the Docker version using the docker version command. For native Docker Swarm support, the
Docker version must be 1.12 or later as listed in the bash shell output.
Server:
Version: 1.12.6
API version: 1.24
Go version: go1.7.5
Git commit: a82d35e
Built: Mon Jun 19 23:04:34 2017
OS/Arch: linux/amd64
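The listing for starting the application is not reproduced here; a command along the following lines (the exact flags and port mapping are assumptions based on the rest of this chapter) runs the tutum/hello-world image as a detached container with host port 8080 published.

docker run -d -p 8080:80 tutum/hello-world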
The Docker image is pulled and a Docker container is created, as shown in the following listing.
The port mapping for the Docker container is also listed using the docker ps command, but it may also
be obtained using the docker port <container> command.
Using the 8080 port and localhost, invoke the Hello World application with curl.
curl localhost:8080
The HTML markup for the Hello World application is output, as shown here.
Using the public DNS for the EC2 instance, the Hello World application may also be invoked in a
browser. This is shown in the web browser in Figure 1-4.
The docker stop <container> command stops a Docker container. The docker rm <container>
command removes a Docker container. You can list Docker images using the docker images command.
A Docker image may be removed using the docker rmi <image> command.
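As a quick sketch, the cleanup sequence for the Hello World application would look like the following; the container placeholder stands for the container ID listed by docker ps.

docker stop <container>
docker rm <container>
docker images
docker rmi tutum/hello-world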
Summary
This chapter sets the basis for subsequent chapters by using a single Docker Engine on CoreOS. Subsequent
chapters explore the different design patterns for managing distributed Docker applications in a cluster. The
next chapter introduces the Docker Swarm mode.
CHAPTER 2
Using Docker in Swarm Mode
The Docker Engine is a containerization platform for running Docker containers. Multiple Docker
containers run in isolation on the same underlying operating system kernel, with each container having its
own network and filesystem. Each Docker container is an encapsulation of the software and dependencies
required for an application and does not incur the overhead of packaging a complete OS, which could
be several GB. Docker applications are run from Docker images in Docker containers, with each Docker
image being specific to a particular application or software. A Docker image is built from a Dockerfile, with
a Dockerfile defining the instruction set to be used to download and install software, set environment
variables, and run commands.
The Problem
While the Docker Engine pre-1.12 (without native Swarm mode) is well designed for running applications in
lightweight containers, it lacks some features, the following being the main ones.
• No distributed computing—No distributed computing is provided, as a Docker
Engine is installed and runs on a single node or OS instance.
• No fault tolerance—As shown in the diagram in Figure 2-1, if the single node on
which a Docker Engine is running fails, the Docker applications running on the
Docker Engine fail as well.
The Solution
With Docker Engine version 1.12 onward, Docker container orchestration is built into the Docker Engine
in Swarm mode and is native to the Docker Engine. Using the Swarm mode, a swarm (or cluster) of nodes
distributed across multiple machines (OS instances) may be run in a manager/worker pattern. Docker Swarm
mode is not enabled in the Docker Engine by default and has to be initialized using a docker command.
Next, as an introduction to the Docker Swarm mode, we introduce some terminology.
Nodes
An instance of a Docker host (a Docker Engine) is called a node. Two types of node roles are provided:
manager nodes and worker nodes.
Service
A service is an abstraction for a collection of tasks (also called replicas or replica tasks) distributed across
a Swarm. As an example, a service could be running three replicas of an Nginx server. Default scheduling,
which is discussed in Chapter 8, uses the “spread” scheduling strategy, which spreads the tasks across
the nodes of the cluster based on a computed node rank. A service consists of one or more tasks that run
independently of each other, implying that stopping a task or starting a new task does not affect the other
running tasks. The Nginx service running on three nodes could consist of three replica tasks. Each task runs a Docker
container for the service. One node could be running multiple tasks for a service. A task is an abstraction for
the atomic unit of scheduling, a “slot” for the scheduler to run a Docker container.
Worker Nodes
A worker node actually runs the service replica tasks and the associated Docker containers. The
differentiation between node roles as manager nodes and worker nodes is not handled at service
deployment time but is handled at runtime, as node roles may be promoted/demoted. Promoting/demoting
a node is discussed in a later section. Worker nodes do not affect the manager Raft consensus. Worker
nodes only increase the capacity of the Swarm to run service replica tasks. The worker nodes themselves do
not contribute to the voting and state held in the Raft, but the fact that they are worker nodes is held within
the Raft. As running a service task requires resources (CPU and memory) and a node has a fixed amount of
allocatable resources, the capacity of a Swarm is limited by the number of worker nodes in the Swarm.
Quorum
A quorum refers to agreement among the majority of Swarm manager nodes or managers. If a Swarm loses
quorum it cannot perform any management or orchestration functions. The service tasks already scheduled
are not affected and continue to run. The new service tasks are not scheduled and other management
decisions requiring a consensus, such as adding or removing a node, are not performed. All Swarm
managers are counted toward determining majority consensus for fault tolerance. For leader election only
the reachable manager nodes are included for Raft consensus. Any Swarm update, such as the addition or
removal of a node or the election of a new leader, requires a quorum. Raft consensus and quorum are the
same. For high availability, three to five Swarm managers are recommended in production. An odd number
of Swarm managers is recommended in general. Fault tolerance refers to the tolerance for failure of Swarm
manager nodes or the number of Swarm managers that may fail without making a Swarm unavailable.
Mathematically, “majority” refers to more than half, but for the Swarm mode Raft consensus algorithm, Raft
tolerates (N-1)/2 failures and a majority for Raft consensus is determined by (N/2)+1. N refers to the Swarm
size or the number of manager nodes in the Swarm.
As an example, Swarm sizes of 1 and 2 each have a fault tolerance of 0, as Raft consensus cannot be
reached for the Swarm size if any of the Swarm managers were to fail. More manager nodes increase fault
tolerance. For an odd number N, the fault tolerance is the same for a Swarm size N and N+1.
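Applying these formulas gives the following quick reference for small Swarms; the values follow directly from (N/2)+1 and (N-1)/2 with integer division.

Managers (N)    Majority    Fault Tolerance
1               1           0
2               2           0
3               2           1
4               3           1
5               3           2
7               4           3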
As an example, a Swarm with three managers has a fault tolerance of 1, as shown in Figure 2-3. Fault
tolerance and Raft consensus do not apply to worker nodes, as Swarm capacity is based only on the worker
nodes. Even if two of the three worker nodes were to fail, the one remaining worker node, even if the
manager nodes are manager-only nodes, would keep the Swarm available, though with a reduction in Swarm
capacity, and some of the running tasks could transition to a non-running state.
Docker Swarm mode is available starting with Docker version 1.12. Verify that the Docker version is at
least 1.12 using the docker --version command.
To initialize the Swarm, use the docker swarm init command with the appropriate options. Some of the
options the command supports are listed in Table 2-1.
Use the default values for all options except the --advertise-addr for which a default value is not
provided. Use the private address for the advertised address, which may be obtained from the EC2 console,
as shown in Figure 2-5. If the EC2 instances on AWS were in different regions, the external public IP address
should be used to access the manager node, which may also be obtained from the EC2 console.
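Using the private IP address of this manager instance (172.30.5.70, the address that also appears in the listings later in the chapter), the initialization command takes the following form.

docker swarm init --advertise-addr 172.30.5.70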
As the output in the following listing indicates, Swarm is initialized and the current node is a manager
node. The command to add a worker node is also included in the output. The command to obtain the
command to add a manager node is also output. Copy the docker swarm join command to add a worker
node to the Swarm.
Run the docker info command to get system-wide information about the Docker Engine. The
command outputs the total number of Docker containers that are running, paused, or stopped; partial
output is listed.
The Storage Driver is overlay and the backing filesystem is extfs. The logging driver is json-file,
which is covered in Chapter 11 on logging. The Swarm is shown to be active. Information about the node
such as NodeID, whether the node is a manager, the number of managers in the Swarm, and the number of
nodes in the Swarm, is also listed.
The resource capacity (CPU and memory) of the node is also listed. Chapter 7 discusses more about
resource usage. The node name is the private DNS of the EC2 instance on which the Swarm is initialized.
List the nodes in the Swarm with the following command:
docker node ls
A single node gets listed including the node ID, which is the only unique parameter for a node.
The hostname is also unique if a node has not been made to leave the Swarm and rejoined.
The * after the node ID indicates that this is the current node. The nodes in the Swarm also have a
STATUS, AVAILABILITY, and MANAGER STATUS columns. STATUS can be one of the values listed in Table 2-2.
Status Description
Ready Ready for use
Down Not ready for use
Unknown Not known
Availability Description
Active Scheduler may assign tasks to the node.
Pause Scheduler does not assign new tasks to the node but existing tasks keep running.
Drain Scheduler does not assign new tasks to the node and existing tasks are shut down.
Replacement tasks are started on other nodes.
MANAGER STATUS can be one of the values listed in Table 2-4. If the MANAGER STATUS column has no
value, it indicates a worker node.
If an unreachable manager node cannot be restored, the following should be implemented: demote and
remove the failed node (docker node demote <NODE> and docker node rm <id-node>), and then either add
another manager node with docker swarm join or promote a worker node to a manager node with
docker node promote.
The Leader status indicates the primary manager node that performs all the Swarm management and orchestration.
The command to add a manager node may be found using the following command.
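As the docker swarm init output shown later in the chapter also indicates, this is the join-token command for managers.

docker swarm join-token manager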
A reason for adding a worker node is that the service tasks scheduled on some of the nodes are not
running and are in Allocated state. A reason for adding a manager node is that another manager node has
become unreachable.
The node to join, manager or worker, must have Docker Engine version at least 1.12 installed. Next, you
add two worker nodes. Obtain the public IP address of an EC2 instance started for a worker node. SSH login
to the worker instance.
Run the docker swarm join command, which has the following syntax, to join the node to the Swarm
as a worker node.
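docker swarm join [OPTIONS] HOST:PORT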
The options supported by the docker swarm join command are listed in Table 2-5.
Run the docker swarm join command that was output during the initialization of the Swarm mode to join the
worker instance to the Swarm. As the output message indicates, “The node joined the Swarm as a worker.”
Run the same docker swarm join command on the second worker instance and the second node joins the Swarm as a worker node.
The following sequence of events takes place when the docker swarm join command runs to join a
worker node to the Swarm.
1. The Swarm mode for the Docker Engine on the node is enabled.
2. A request for a TLS certificate is sent to the manager.
3. The node is named with the machine hostname.
4. The current node joins the Swarm at the manager listen address. Based on the token, the node is joined as a worker node or a manager node.
5. Sets the current node to Active availability.
6. The ingress overlay network is extended to the current node.
When a node is joined to the Swarm using the manager token, the node joins as a manager node.
The new manager nodes should be Reachable and only the first manager node is the leader. Leader election
to a different manager node occurs only if the initial leader node were to fail or be demoted.
The worker nodes differ from the manager nodes in another regard. A worker node cannot be used to
view or modify the cluster state. Only the manager node can be used to view the cluster state such as the
nodes in the Swarm. Only the manager node can be used to modify a cluster state such as remove a node.
If the docker node ls command is run on a worker node, the following error message is generated.
docker node ls
How do you tell if a node is a manager node or a worker node? From the Manager Status column. If the
Manager Status is empty, the node is a worker node and if the Manager Status has a value, which would be
one of the values discussed in Table 2-4, the node is a manager node. Two worker nodes and one manager
node are listed.
We already discussed that worker nodes can’t be used to view or modify cluster state. Next, create a
Docker service using the docker service create command, which becomes available only if the Swarm
mode is enabled. Using Docker image alpine, which is a Linux distribution, create two replicas and ping the
docker.com domain from the service containers.
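The original listing is not reproduced here; a command along the following lines (the service name helloworld matches the listings that follow) creates such a service.

docker service create --replicas 2 --name helloworld alpine ping docker.com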
If the preceding command runs without an error, the Docker Swarm was installed correctly. The command
returns the service ID.
docker service ls
The service helloworld is listed and the number of replicas is listed as 2/2, which implies that two
replicas exist and meet the desired state of two replicas. The REPLICAS column output is ordered “actual/
desired”. The Docker image is alpine and the command to run the service is ping docker.com.
The docker service inspect command is used to find more information about the service.
The detailed information about the helloworld service—including the container spec, resources,
restart policy, placement, mode, update config, and update status—is listed.
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause"
},
"EndpointSpec": {
"Mode": "vip"
}
},
"Endpoint": {
"Spec": {}
},
"UpdateStatus": {
"StartedAt": "0001-01-01T00:00:00Z",
"CompletedAt": "0001-01-01T00:00:00Z"
}
}
]
The replicas and the nodes on which the replicas are placed may be listed with the following command
syntax.
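docker service ps <SERVICE>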
The <SERVICE> placeholder is either a service name (like helloworld) or the actual service ID
(like bkwskfzqa173 for this example). For the helloworld service, the command becomes:
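docker service ps helloworld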
The preceding command also lists the node on which a replica is running. The Docker containers
started for a service are listed with the same command as before, the docker ps command.
The docker ps command is not a Swarm mode command, but may be run on the worker nodes to find
the service containers running on a worker node. The docker ps command gives you all containers running
on a node, even if they are not service containers.
core@ip-172-30-5-108 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
74ea31054fb4 alpine:latest "ping docker.com" About a minute ago Up About a minute
helloworld.2.6twq1v0lr2gflnb6ae19hrpx9
Only two nodes are listed by the docker service ps helloworld command on which replicas are
scheduled, the manager node and one of the worker nodes. The docker ps command on the other worker
node does not list any Docker containers.
core@ip-172-30-5-31 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The docker node inspect <node> command is used to get detailed information about a node, such as
the node role, availability, hostname, resources capacity, plugins, and status.
{
"Type": "Network",
"Name": "null"
},
{
"Type": "Network",
"Name": "overlay"
},
{
"Type": "Volume",
"Name": "local"
}
]
}
},
"Status": {
"State": "ready"
},
"ManagerStatus": {
"Leader": true,
"Reachability": "reachable",
"Addr": "172.30.5.70:2377"
}
}
]
A service may be removed with the docker service rm <service> command. Subsequently, the
docker service inspect <service> command should not list any replicas, and running docker ps should
show no Docker containers for the service.
The command must be run from the leader node. As an example, promote the node ip-172-30-5-108.
ec2.internal. As the output indicates, the node gets promoted to a manager node. Subsequently list the
nodes in the Swarm and the node promoted should have manager status as Reachable.
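The promote command for this example, using the node hostname (the node ID could be used instead), has the following form.

docker node promote ip-172-30-5-108.ec2.internal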
A worker node should preferably be promoted using the node ID, for reasons discussed
subsequently. Promote another worker node using the node ID. Subsequently, both the worker nodes are
listed as Reachable in the Manager Status column.
Any manager node, including the leader node, may be demoted. As an example, demote the manager
node ip-172-30-5-108.ec2.internal.
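The demote command, following the same form as the promote command, would be as follows.

docker node demote ip-172-30-5-108.ec2.internal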
Once demoted, commands such as docker node ls that can be run only from a manager node
can no longer be run on the node. The docker node ls command lists the demoted node as a worker
node; no MANAGER STATUS is listed for a worker node.
A node should preferably be promoted/demoted, or otherwise referred to in any command directed at
the node, using the node ID, which is unique to a node. The reason is that a demoted node, if promoted back,
could be added with a different node ID, and the docker node ls command could then list two node IDs for
the same hostname. If the hostname is used to refer to such a node, it could result in the “node is ambiguous”
error message.
Run the docker swarm leave command on a worker node to make the node leave the Swarm. As the message output indicates, the node has left the Swarm.
After a worker node has left the Swarm, the node itself is not removed and continues to be listed with
the docker node ls command with a Down status.
Add the --force option to the docker swarm leave command on the manager node to cause the
manager node to leave the Swarm.
If the only manager node is removed, the Swarm no longer exists. The Swarm must be initialized again
if the Swarm mode is to be used.
A new Swarm is created with only the manager node and the Swarm has only one node initially.
If a Swarm has two manager nodes, making one of the manager nodes leave the Swarm has a different
effect. With two managers, the fault tolerance is 0, as discussed earlier. To create a Swarm with two manager
nodes, start with a Swarm that has one manager node and two worker nodes.
Run the docker swarm leave command from a manager node that’s not the leader node. The following
message is generated.
You are attempting to leave the swarm on a node that is participating as a manager.
Removing this node leaves one manager out of two. Without a Raft quorum, your Swarm will be
inaccessible. The only way to restore a Swarm that has lost consensus is to reinitialize it with --force-new-
cluster. Use --force to suppress this message.
To make the manager node leave, you must add the --force option to the command.
When one of the two managers has left the Swarm, the Raft quorum is lost and the Swarm becomes
inaccessible. As indicated, the Swarm must be reinitialized using the --force-new-cluster option.
Reinitializing a Cluster
A Swarm that has lost quorum cannot be reinitialized using the command used to initialize a Swarm. If the
same command runs on a Swarm that has lost quorum, a message indicates that the node is already in the
Swarm and must first be made to leave the Swarm.
To reinitialize the Swarm, the --force-new-cluster option must be added to the docker swarm init command.

core@ip-172-30-5-70 ~ $ docker swarm init --advertise-addr 172.30.5.70 --force-new-cluster
Swarm initialized: current node (cnyc2w3n8q8zuxjujcd2s729k) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-4lxmisvlszjgck4ly0swsxubejfx0phlne1xegho2fiq99amqf-
11mpscd8gs6bsayzren8fa2ki \
172.30.5.70:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the
instructions.
The Swarm is reinitialized and the docker swarm join command to add a worker node is output.
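To drain a worker node, the docker node update command is run with --availability set to drain; for the worker node used in this example, the command would take the following form.

docker node update --availability drain ip-172-30-5-108.ec2.internal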
The worker node is drained. All service tasks on the drained node are shut down and started on other
nodes that are available. The output from the docker node ls command lists the node with the status set to
Drain.
The node detail (partial output is listed) for the drained worker node lists the node availability as "drain".

core@ip-172-30-5-70 ~ $ docker node inspect ip-172-30-5-108.ec2.internal
[
{
"ID": "bhuzgyqvb83dx0zvms54o0a58",
"Version": {
"Index": 49
},
"CreatedAt": "2017-07-22T19:30:31.544403951Z",
"UpdatedAt": "2017-07-22T19:33:37.45659544Z",
"Spec": {
"Role": "worker",
"Availability": "drain"
},
"Description": {
"Hostname": "ip-172-30-5-108.ec2.internal",
All service tasks on the drained node are shut down and started on other nodes that are available.
The node availability with the docker node ls is listed as Drain.
A drained node can be made active again using the docker node update command with
--availability set to Active.
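For the node drained earlier, the command would be as follows.

docker node update --availability active ip-172-30-5-108.ec2.internal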
The drained node becomes active and is listed with the status set to Active.
Removing a Node
One or more nodes may be removed from the Swarm using the docker node rm command, which is run
from any manager node.
The difference between docker swarm leave and docker node rm is that the docker node rm may be run
only from a manager node. A demoted node can only be removed from the Swarm with the docker node rm
command. The sequence to remove a manager node without using the --force option is the following.
1. Demote the manager node, which makes it a worker node.
2. Drain the worker node.
3. Make the worker node leave the Swarm.
4. Remove the node.
Summary
This chapter discussed using Docker in Swarm mode. First, you initialized the Swarm mode with the docker
swarm init command to make the current node the manager node in the Swarm. Subsequently, you joined
worker nodes to the Swarm with the docker swarm join command. The chapter also discussed promoting
a worker node to a manager node/demoting a manager node to a worker node, making a worker node leave
a Swarm and then rejoin the Swarm, making a manager node leave a Swarm, reinitializing a Swarm, and
modifying node availability and removing a node. The next chapter introduces Docker for AWS, which is a
managed service for Docker Swarm mode.
CHAPTER 3
Using Docker for AWS to Create a Multi-Zone Swarm
Docker Swarm is provisioned by first initiating a Swarm to create a manager node and subsequently joining
worker nodes to that manager node. Docker Swarm provides distributed service deployment for Docker
applications.
The Problem
By default, a Docker Swarm is provisioned in a single zone on AWS, as illustrated in Figure 3-1. With the
manager nodes and all the worker nodes in the same AWS zone, failure of the zone would make the Swarm
unavailable. A single-zone Swarm is not a highly available Swarm and has no fault tolerance.
The Solution
Docker and AWS have partnered to create a Docker for AWS deployment platform that provisions a Docker
Swarm across multiple zones on AWS. Docker for AWS does not require users to run any commands on a
command line and is graphical user interface (GUI) based. With manager and worker nodes in multiple
zones, failure of a single AWS zone does not make the Swarm unavailable, as illustrated in Figure 3-2. Docker
for AWS provides fault tolerance to a Swarm.
Docker for AWS is a managed service for Docker Swarm on the AWS cloud platform. In addition to
multiple zones, Docker for AWS has several other benefits:
• All the required infrastructure is provisioned automatically.
• Automatic upgrade to new software versions without service interruption.
• A custom Linux distribution optimized for Docker. The custom Linux distribution is
not available separately on AWS and uses the overlay2 storage driver.
• Unused Docker resources are pruned automatically.
• Auto-scaling groups for managing nodes.
• Log rotation native to the host to avoid chatty logs consuming all the disk space.
• Centralized logging with AWS CloudWatch.
• A bug-reporting tool based on a docker-diagnose script.
Two editions of Docker for AWS are available:
• Docker Enterprise Edition (EE) for AWS
• Docker Community Edition (CE) for AWS
We use the Docker Community Edition (CE) for AWS in this chapter to create a multi-zone Swarm.
This chapter includes the following topics:
• Setting the environment
• Creating a AWS CloudFormation stack for the Docker Swarm
• Connecting with the Swarm manager
• Using the Swarm
• Deleting the Swarm
Set the permissions on the docker.pem to 400, which gives only read permissions and removes all other
permissions.
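For example:

chmod 400 docker.pem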
The Create Stack wizard is started with the provision to either design a new template or choose the
default CloudFormation template for Docker on AWS. Select the Specify an Amazon S3 Template URL option
for which a URL is pre-specified, as shown in Figure 3-5. Click on Next.
In Specify Details, specify a stack name (DockerSwarm). The Swarm Parameters section has the fields
listed in Table 3-1.
Parameter Description
Number of Swarm managers? Number of Swarm manager nodes. Valid values are 1, 3, and 5.
Number of Swarm worker nodes? Number of worker nodes in the Swarm (0-1000).
Keep the default settings of 3 for Number of Swarm Managers and 5 for Number of Swarm Worker
nodes, as shown in Figure 3-6.
In the Which SSH key to use? property, select the docker SSH key. The Swarm properties are shown in
Figure 3-7.
The Swarm Manager properties are as shown in Figure 3-8. Specify the Swarm Worker properties, as
discussed in Table 3-4.
Next, specify the options for the stack. Tags (key-value pairs) may be specified for resources in a stack.
For permissions, an IAM role for CloudFormation may be chosen. None of these options is required to be
set, as shown in Figure 3-9.
For Advanced options, the Notification options are set to No Notification. Set Rollback on Failure to
Yes, as shown in Figure 3-10. Click on Next.
Select the acknowledgement checkbox and then click on Create, as shown in Figure 3-12.
A new stack begins to be created. Click on the Refresh button to refresh the stacks listed, as shown in
Figure 3-13.
Figure 3-13. Refresh
A new stack based on a CloudFormation template for Docker Swarm starts to be created, as indicated
by the status CREATE_IN_PROGRESS shown in Figure 3-14.
The different tabs are provided for the different stack details. The Resources tab shows the AWS
resources created by the CloudFormation template, as shown in Figure 3-15.
The Events tab shows the events that occur in creating a CloudFormation stack, as shown in Figure 3-16.
When the stack creation completes, the status says CREATE_COMPLETE, as shown in Figure 3-17.
All the required resources—including auto-scaling groups, EC2 Internet Gateway, EC2 security groups,
Elastic Load Balancer, IAM policy, Log Group, and VPC Gateway—are created, as shown in Figure 3-18.
The Outputs tab lists the Default DNS target, the zone availability comment about the number of
availability zones, and the manager nodes, as shown in Figure 3-19.
Figure 3-19. Outputs
To list the EC2 instances for the Swarm managers, click on the link in Managers, as shown in Figure 3-20.
The three manager instances are all in different availability zones. The public/private IP addresses and
the public DNS name for each EC2 instance may be obtained from the EC2 console, as shown in Figure 3-21.
The AMI used for the EC2 instances may be found using the AMI ID, as shown in Figure 3-22. A Moby
Linux AMI is used for this Swarm, but the AMI could be different for different users and in different AWS
regions.
You can list all the EC2 instances by setting Instance State to Running. The Docker Swarm manager
nodes (three) and worker nodes (five) are listed, as shown in Figure 3-23. The manager and worker nodes
are in three different availability zones.
Figure 3-23. Swarm managers and workers in three different availability zones
Select Load Balancers in the EC2 dashboard and the provisioned Elastic Load Balancer is listed, as
shown in Figure 3-24. Click on the Instances tab to list the instances. All instances should have a status set to
InService, as shown in Figure 3-24.
Select Launch Configurations from the EC2 dashboard. The two launch configurations—one for the
managers and one for the worker nodes—will be listed, as shown in Figure 3-25.
Select Auto Scaling Groups in the EC2 dashboard. The two auto-scaling groups—one for the managers
and one for the worker nodes—will be listed, as shown in Figure 3-26.
Welcome to Docker!
The Docker version of the Swarm node may be listed using docker --version. The version will be 17.06
or greater. Swarm mode is supported on Docker 1.12 or greater.
~ $ docker --version
Docker version 17.06.0-ce, build 02c1d87
~ $ docker node ls
The leader node and two other manager nodes indicated by Manager Status of Leader and Reachable
are listed. The worker nodes are all available, as indicated by Active in the Availability column.
Docker services are introduced in the next chapter, but you can run the following docker service
create command to create an example Docker service for a MySQL database.
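The exact listing is omitted here; a command along the following lines (the MYSQL_ROOT_PASSWORD value is an assumption) creates a MySQL service named mysql, matching the service listed next.

docker service create --env MYSQL_ROOT_PASSWORD='mysql' --name mysql mysql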
List the service with the docker service ls command, which is also discussed in the next chapter, and
the service ID, mode, replicas, and image are listed.
~ $ docker service ls
Scale the service to three replicas with the docker service scale command. The three replicas
are scheduled—one on the leader manager node and two on the worker nodes. The docker service ps
command to list service replicas is also discussed in more detail in the next chapter.
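The scale command, reconstructed from the output that follows, is as follows.

docker service scale mysql=3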
mysql scaled to 3
Deleting a Swarm
To delete a Swarm, choose Actions ➤ Delete Stack from the CloudFormation console, as shown in Figure 3-27.
In the Delete Stack confirmation dialog, click on Yes, Delete, as shown in Figure 3-28.
As each of the stack’s resources is deleted, its status becomes DELETE_COMPLETE, as shown for some of
the resources on the Events tab in Figure 3-30.
Figure 3-30. Events list some of the resources with a status of DELETE_COMPLETE
When the EC2 instances have been deleted, the EC2 console lists their status as terminated, as shown
in Figure 3-31.
Summary
This chapter discussed creating a multi-zone Docker Swarm provisioned by a CloudFormation template
using the Docker for AWS service. You learned how to connect to the Swarm manager to run docker
service commands. The next chapter introduces Docker services.
CHAPTER 4
Docker Services
A Docker container contains all the binaries and dependencies required to run an application. A user only
needs to run a Docker container to start and access an application. The CoreOS Linux operating system has
Docker pre-installed, so Docker commands may be run without installing Docker separately.
The Problem
A Docker container, by default, is started only on a single node. However, for production environments,
where uptime and redundancy matter, you need to run your applications on multiple hosts.
When a Docker container is started using the docker run command, the container starts only on
a single host, as illustrated in Figure 4-1. Software is usually not designed to run on a single host only. A
MySQL database in a production environment, for example, may need to run across a cluster of hosts for
redundancy and high availability. Applications that are designed for a single host should be able to scale up
to multiple hosts as needed. But distributed Docker applications cannot run on a single Docker Engine.
[Figure 4-1 illustrates a docker run -d -p 8080 tutum/hello-world command starting a Docker container on a single Docker Engine]
The Solution
Docker Swarm mode enables a Docker application to run across a distributed cluster of Docker Engines
connected by an overlay network, as illustrated in Figure 4-2. A Docker service may be created with a specific
number of replicas, with each replica potentially running on a different host in a cluster. A Swarm consists of
one or more manager nodes with a single leader for Swarm management and orchestration. Worker nodes
run the actual service tasks with the manager nodes being worker nodes by default. A Docker service may
be started only from the leader node. Service replicas scheduled on the worker nodes, as a result, run a
distributed application. Distributed applications provide several benefits, such as fault tolerance, failover,
increased capacity, and load balancing, to list a few.
Figure 4-2. Docker service tasks and containers spread across the nodes
Three nodes should get listed in the Swarm with the docker node ls command—one manager node
and two worker nodes.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ilru4f0i280w2tlsrg9hglwsj ip-172-31-10-132.ec2.internal Ready Active
w5to186ipblpcq390625wyq2e ip-172-31-37-135.ec2.internal Ready Active
zkxle7kafwcmt1sd93kh5cy5e * ip-172-31-13-155.ec2.internal Ready Active Leader
A worker node may be promoted to a manager node using the docker node promote <node id or hostname> command.
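As an example (using one of the node IDs from the preceding docker node ls listing):
~ $ docker node promote ilru4f0i280w2tlsrg9hglwsj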
If you list the nodes again, two manager nodes should be listed. A manager node is identified by a value
in the Manager Status column. One node has a Manager Status of Reachable and the other says Leader.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ilru4f0i280w2tlsrg9hglwsj ip-172-31-10-132.ec2.internal Ready Active Reachable
w5to186ipblpcq390625wyq2e ip-172-31-37-135.ec2.internal Ready Active
zkxle7kafwcmt1sd93kh5cy5e * ip-172-31-13-155.ec2.internal Ready Active Leader
The manager node that is the Leader performs all the swarm management and orchestration. The
manager node that is Reachable participates in the raft consensus quorum and is eligible for election as the
new leader if the current leader node becomes unavailable.
Having multiple manager nodes adds fault tolerance to the Swarm, but one and two Swarm managers provide the same fault tolerance, because a Raft quorum requires a majority of managers and a two-manager Swarm cannot tolerate the loss of either manager. If required, one or more of the worker nodes could also be promoted to manager nodes to increase fault tolerance.
For connectivity to the Swarm instances, modify the inbound rules of the security groups associated
with the Swarm manager and worker instances to allow all traffic. The inbound rules for the security group
associated with a Swarm node are shown in Figure 4-4.
Figure 4-4. Setting inbound rules on a security group to allow all traffic
The outbound rules for the security group associated with the Swarm manager are shown in Figure 4-5.
Figure 4-5. Setting outbound rules on a security group to allow all traffic
The subcommands of the docker service command are listed below.
Command Description
docker service create Creates a new service.
docker service inspect Displays detailed information on one or more services.
docker service logs Fetches the logs of a service. The command was added in Docker 17.06.
docker service ls Lists services.
docker service ps Lists the tasks of one or more services.
docker service rm Removes one or more services.
docker service scale Scales one or multiple replicated services.
docker service update Updates a service.
Types of Services
Docker Swarm mode supports two types of services, also called service modes—replicated services and
global services. Global services run one task only on every node in a Docker Swarm. Replicated services run
as a configured number of tasks, which are also referred to as replicas, the default being one. The number of
replicas may be specified when a new service is created and may be updated later. The default service type is
a replicated service. A global service requires the --mode option to be set to global. Only replicated services
may be scaled; global services cannot be scaled.
We start off by creating a replicated service. Later in the chapter, we also discuss creating a global
service.
Creating a Service
The command syntax to create a Docker service is as follows.
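docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]
Some of the options supported by the docker service create command are listed below.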
Option Description
--constraint Placement constraints.
--container-label Container labels.
--env, -e Sets environment variables.
--env-file Reads in a file of environment variables. Option not added until Docker
1.13.
--host Sets one or more custom host-to-IP mappings. Option not added until
Docker 1.13. Format is host:ip.
--hostname Container hostname. Option not added until Docker 1.13.
--label, -l Service labels.
--limit-cpu Limits CPUs. Default value is 0.000.
--limit-memory Limits memory. Default value is 0.
--log-driver Logging driver for service.
--log-opt Logging driver options.
--mode Service mode. Value may be replicated or global. Default is replicated.
--mount Attaches a filesystem mount to the service.
--name Service name.
--network Network attachments. By default, the “ingress” overlay network is used.
--publish, -p Publishes a port as a node port.
--read-only Mounts the container's root filesystem as read only. Option not added until Docker 17.03. Default is false.
As an example, create a service called hello-world with Docker image tutum/hello-world consisting
of two replicas. Expose the service on port 8080 on the host. The docker service create command outputs
a service ID if successful.
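A command along these lines (a sketch reconstructed from the listings that follow) creates the service.
~ $ docker service create --name hello-world --replicas 2 --publish 8080:80 tutum/hello-world
The service tasks may then be listed with the docker service ps hello-world command.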
The ID column lists the task ID. The task name is in the format servicename.n; hello-world.1 and
hello-world.2 for the two replicas. The Docker image is also listed. The NODE column lists the private DNS
of the node on which the task is scheduled. The DESIRED STATE is the state that is desired as defined in the
service definition. The CURRENT STATE is the actual state of the task. At times, a task could be in a pending
state because of lack of resource capacity in terms of CPU and memory.
A service task is a slot for running a Docker container. On each node on which a task is running, a
Docker container should also be running. Docker containers may be listed with the docker ps command.
~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
0ccdcde64e7d tutum/hello-world:latest "/bin/sh -c 'php-f..." 2 minutes ago
Up 2 minutes 80/tcp hello-world.2.kezidi82ol5ct81u59jpgfhs1
~ $ curl ec2-34-200-225-39.compute-1.amazonaws.com:8080
<html>
<head>
<title>Hello world!</title>
<link href='http://fonts.googleapis.com/css?family=Open+Sans:400,700' rel='stylesheet'
type='text/css'>
<style>
body {
background-color: white;
text-align: center;
padding: 50px;
font-family: "Open Sans","Helvetica Neue",Helvetica,Arial,sans-serif;
}
#logo {
margin-bottom: 40px;
}
</style>
</head>
<body>
The detailed information, obtained with the docker service inspect hello-world command, includes the container specification, resources, restart policy, placement,
mode, update config, ports (target port and published port), virtual IPs, and update status.
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 2
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080,
"PublishMode": "ingress"
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080,
"PublishMode": "ingress"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 80,
"PublishedPort": 8080,
"PublishMode": "ingress"
}
],
"VirtualIPs": [
{
"NetworkID": "y3k655bdlp3x102a2bslh4swh",
"Addr": "10.255.0.5/16"
}
]
}
}
]
Similarly, you can obtain the public DNS of an EC2 instance on which a Swarm worker node is hosted, as shown in Figure 4-7.
Figure 4-7. Obtaining the public DNS for an EC2 instance on which a Swarm worker node is hosted
Invoke the service using the PublicDNS:8080 URL in a browser, as shown in Figure 4-8.
Figure 4-8. Invoking a service in a browser using the public DNS for an EC2 instance on which a Swarm worker node is hosted
A manager node is also a worker node by default and service tasks also run on the manager node.
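A MySQL database service may be created with a command along these lines (a sketch; the root password value is illustrative):
~ $ docker service create --name mysql --replicas 1 --env MYSQL_ROOT_PASSWORD='mysql' mysql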
A service gets created for MySQL database and the service ID gets output.
List the services with the docker service ls command; the mysql service should be listed.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
gzl8k1wy8kf3 mysql replicated 1/1 mysql:latest
vyxnpstt3511 hello-world replicated 2/2 tutum/hello-world:latest *:8080->80/tcp
List the service tasks/replicas with the docker service ps mysql command. One task is running; it is scheduled on the manager node, which also acts as a worker node.
How service tasks are scheduled, including node selection based on node ranking, is discussed in
Chapter 8, which covers scheduling.
Scaling a Service
Next, we scale the mysql service. Only replicated services can be scaled and the command syntax to scale
one or more services is as follows.
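docker service scale SERVICE=REPLICAS [SERVICE=REPLICAS...]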
To scale the mysql service to three tasks, run the following command.
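~ $ docker service scale mysql=3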
The mysql service gets scaled to three, as indicated by the command output.
Option Description
--filter, -f Filters output based on conditions provided. The following filters are supported:
id=<task id>
name=<task name>
node=<node id or name>
desired-state=(running | shutdown | accepted)
--no-resolve Whether to map IDs to names. Default value is false.
--no-trunc Whether to truncate output. Option not added until Docker 1.13. Default value is false.
--quiet, -q Whether to only display task IDs. Option not added until Docker 1.13. Default value is false.
As an example, you can list only the service tasks that are running.
All tasks are running; therefore, the effect of using the filter is not very apparent. But, in a subsequent
example, you’ll list running service tasks when some tasks are not running.
Not all worker nodes are utilized for running service tasks if the number of nodes is more than the
number of tasks, as when the hello-world and mysql services had fewer than three tasks running. A node
could have more than one service task running if the number of replicas is more than the number of nodes
in a Swarm. Scaling up to five replicas starts more than one replica on two of the nodes.
Only one mysql service replica is running on the manager node; therefore, only one Docker container
for the mysql service is running on the manager node.
~ $ docker ps
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
6bbe40000874 mysql:latest "docker-entrypoint..."
About a minute ago Up About a minute 3306/tcp mysql.2.s4flvtode8odjjere2zsi9gdx
The number of Docker containers for the mysql service on the manager node increases to three for the
three tasks running on the manager node.
~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
15e3253f69f1 mysql:latest "docker-entrypoint..." 50 seconds ago
Up 49 seconds 3306/tcp mysql.8.lousvchdirn9fv8wot5vivk6d
cca7ab20c914 mysql:latest "docker-entrypoint..." 50 seconds ago
Up 49 seconds 3306/tcp mysql.10.pd40sd7qlk3jc0i73huop8e4r
6bbe40000874 mysql:latest "docker-entrypoint..." 2 minutes ago
Up 2 minutes 3306/tcp mysql.2.s4flvtode8odjjere2zsi9gdx
Because you’ll learn more about Docker services with the MySQL database service example in later
sections, and also for completeness, next we discuss using a Docker container for MySQL database to create
a database table.
Start the MySQL CLI with the mysql command as user root. Specify the password when prompted;
the password used to create the service was specified in the --env option to the docker service create
command using environment variable MYSQL_ROOT_PASSWORD. The mysql> CLI command prompt is
displayed.
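A representative sequence (a sketch; the container ID is taken from the preceding docker ps listing, and a bash shell is started in the task container first):
~ $ docker exec -it 15e3253f69f1 bash
root@15e3253f69f1:/# mysql -u root -p
Enter password:
mysql>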
Set the database to use as mysql with the use mysql command.
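mysql> use mysql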
Database changed
Add some data to the wlslog table with the following SQL commands run from the MySQL CLI.
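An illustrative table and row (the column names and values here are assumptions for the example, not the original listing's exact SQL):
mysql> CREATE TABLE wlslog(time_stamp VARCHAR(45) PRIMARY KEY, category VARCHAR(25), type VARCHAR(35), servername VARCHAR(35), code VARCHAR(35), msg VARCHAR(45));
mysql> INSERT INTO wlslog VALUES('Apr-8-2014-7:06:16-PM-PDT','Notice','WebLogicServer','AdminServer','BEA-000365','Server state changed to STANDBY');
mysql> SELECT * FROM wlslog;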
Exit the MySQL CLI and the bash shell using the exit command.
mysql> exit
Bye
root@15e3253f69f1:/# exit
exit
Updating a Service
A service may be updated subsequent to being created with the docker service update command, which
has the following syntax:
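docker service update [OPTIONS] SERVICE
Some of the options supported by the docker service update command are listed below.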
Option Description
--args Args for the command.
--constraint-add Adds or updates a placement constraint.
--constraint-rm Removes a placement constraint.
--container-label-add Adds or updates a Docker container label.
--container-label-rm Removes a container label by its key.
--env-add Adds or updates an environment variable.
--env-rm Removes an environment variable.
--force Whether to force an update even if no changes require it. Option added in
Docker 1.13. Default is false.
--group-add Adds an additional supplementary user group to the container. Option
added in Docker 1.13.
--group-rm Removes a previously added supplementary user group from the
container. Option added in Docker 1.13.
--host-add Adds or updates a custom host-to-IP mapping (host:ip). Option added in
Docker 1.13.
--host-rm Removes a custom host-to-IP mapping (host:ip). Option added in
Docker 1.13.
--hostname Updates the container hostname. Option added in Docker 1.13.
--image Updates the service image tag.
--label-add Adds or updates a service label.
--label-rm Removes a label by its key.
--limit-cpu Updates the limit CPUs. Default value is 0.000.
--limit-memory Updates the limit memory. Default value is 0.
--log-driver Updates logging driver for service.
--log-opt Updates logging driver options.
--mount-add Adds or updates a mount on a service.
--mount-rm Removes a mount by its target path.
--publish-add Adds or updates a published port.
--publish-rm Removes a published port by its target port.
A service from Docker image mysql:5.6 is created and the service ID is output.
Update the number of replicas to five using the docker service update command. If the command is
successful, the service name is output from the command.
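A sketch of the command and its output:
~ $ docker service update --replicas 5 mysql
mysql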
Setting replicas to five does not just start four new tasks to make a total of five tasks. When a service
is updated to change the number of replicas, all the service tasks are shut down and new tasks are started.
Subsequently listing the service tasks lists the first task as being shut down and five new tasks as being
started.
You can list detailed information about the service with the docker service inspect command. The
image listed in the ContainerSpec is mysql:latest. The PreviousSpec is also listed.
The update does not get completed immediately even though the docker service update command
does. While the service is being updated, the UpdateStatus for the service is listed with State set to
"updating" and the Message of "update in progress".
"UpdateStatus": {
"State": "updating",
"StartedAt": "2017-07-23T19:24:15.539042747Z",
"Message": "update in progress"
}
When the update completes, the UpdateStatus State becomes "completed" and the Message becomes
"update completed".
"UpdateStatus": {
"State": "completed",
"StartedAt": "2017-07-23T19:24:15.539042747Z",
"CompletedAt": "2017-07-23T19:25:25.660907984Z",
"Message": "update completed"
}
While the service is updating, the service tasks are shutting down and the new service tasks are starting.
When the update is starting, some of the running tasks might be based on the previous image mysql:5.6
whereas others could be based on the new image mysql:latest.
The desired state of the tasks with image mysql:5.6 is set to Shutdown. Gradually, all the new service
tasks based on the new image mysql:latest are started.
2t8j1zd8uts1 \_ mysql.4 mysql:5.6 ip-172-31-10-132.ec2.internal
Shutdown Shutdown 44 seconds ago
hppq840ekrh7 mysql.5 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 39 seconds ago
8tf0uuwb8i31 \_ mysql.5 mysql:5.6 ip-172-31-10-132.ec2.internal
Shutdown Failed 44 seconds ago "task: non-zero exit (1)"
Filtering the service tasks with the -f option was introduced earlier. To find which, if any, tasks are
scheduled on a particular node, you run the docker service ps command with the filter set to the node.
Filtered tasks, both Running and Shutdown, are then listed.
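For example, using one of the node hostnames from the earlier listings:
~ $ docker service ps -f node=ip-172-31-13-155.ec2.internal mysql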
Service tasks may also be filtered by desired state. To list only running tasks, set the desired-state filter
to running.
Likewise, only the shutdown tasks are listed by setting the desired-state filter to shutdown.
bswz4sm8e3vj mysql.3 mysql:5.6 ip-172-31-37-135.ec2.internal
Shutdown Shutdown 2 minutes ago
ktrwxnn13fug \_ mysql.3 mysql:5.6 ip-172-31-37-135.ec2.internal
Shutdown Failed 3 minutes ago "task: non-zero exit (1)"
wj1x26wvp0pt mysql.4 mysql:latest ip-172-31-13-155.ec2.internal
Shutdown Failed 2 minutes ago "task: non-zero exit (1)"
2t8j1zd8uts1 \_ mysql.4 mysql:5.6 ip-172-31-10-132.ec2.internal
Shutdown Shutdown 3 minutes ago
8tf0uuwb8i31 mysql.5 mysql:5.6 ip-172-31-10-132.ec2.internal
Shutdown Failed 3 minutes ago "task: non-zero exit (1)"
It may take a while (a few seconds to a few minutes) for the desired state of a service to be reconciled. During this time, tasks could still be running on manager nodes even though the node.role constraint is set to worker, or fewer than the required number of tasks could be running. When the update has completed (the update status may be found from the docker service inspect command), listing the running tasks for the mysql service indicates that the tasks are running only on the worker nodes.
As another example, service tasks for the mysql service may be constrained to run on only manager
nodes. Starting with service tasks running on both manager and worker nodes and with no other constraints
added, run the following command to place all tasks on the manager nodes.
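~ $ docker service update --constraint-add node.role==manager mysql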
The tasks are not shut down on the worker nodes and started on the manager nodes immediately; initially, tasks may continue to run on the worker nodes.
List the service replicas again after a while. You’ll see that all the tasks are listed as running on the
manager nodes.
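Environment variables may be added with the --env-add option of the docker service update command; a sketch (the variable name and value are illustrative):
~ $ docker service update --env-add MYSQL_DATABASE=mysqldb mysql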
When the update has completed, the docker service inspect command lists the environment
variables added.
Updating the environment variables causes the containers to restart. So, simply adding environment
variables doesn’t cause the new database to be created in the same container. A new container is started
with the updated environment variables.
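The image for the service may be updated from mysql to postgres with the --image option (a sketch consistent with the discussion that follows):
~ $ docker service update --image postgres mysql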
After the update has completed, showing the running service tasks lists new tasks for the postgres
image. The service name stays the same and the Docker image is updated to postgres.
Updating the Docker image does not remove the environment variables associated with the mysql
Docker image, which are still listed in the service detail.
The added environment variables for the MySQL database need to be removed, as the PostgreSQL
database Docker image postgres does not use the same environment variables. Remove all the environment
variables from the mysql service with the --env-rm option to the docker service update command.
To remove an environment variable, only its name needs to be specified, not its value.
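A representative command (the variable names to remove depend on which variables were added):
~ $ docker service update --env-rm MYSQL_ROOT_PASSWORD --env-rm MYSQL_DATABASE mysql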
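A container label may be added with the --container-label-add option; a sketch (the label key and value are illustrative):
~ $ docker service update --container-label-add com.example.environment=test mysql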
On listing detailed information about the service, the added label is listed in the ContainerSpec labels.
The label added may be removed with the --container-label-rm option. To remove a label, only its key needs to be specified, not the label value.
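The resource limits for the service may be updated with the --limit-cpu and --limit-memory options (and, similarly, the reserves with --reserve-cpu and --reserve-memory); a sketch consistent with the Resources JSON that follows, in which 500000000 NanoCPUs is 0.5 CPU and 1073741824 bytes is 1GB:
~ $ docker service update --limit-cpu 0.5 --limit-memory 1GB mysql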
The resources settings are updated. Service detail lists the updated resource settings in the Resources
JSON object.
"NanoCPUs": 500000000,
"MemoryBytes": 1073741824
}
},
...
]
Removing a Service
The docker service rm command removes a service. If the output of the command is the service name, the
service has been removed. All the associated service tasks and Docker containers also are removed.
A replicated mode service is created with the default number of replicas, which is 1. List the services
with the docker service ls command. The nginx service is listed with one replica.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
no177eh3gxsy nginx replicated 1/1 nginx:latest
A global service runs one task on each node in a Swarm by default. A global service may be required
at times such as for an agent (logging/monitoring) that needs to run on each node. A global service is used
for logging in Chapter 11. Next, we create a nginx Docker image-based service that’s global. Remove the
replicated service nginx with the docker service rm nginx command. A service name must be unique
even if different services are of different modes. Next, create a global mode nginx service with the same
command as for the replicated service, except that the --mode option is set to global instead of replicated.
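A minimal sketch of the commands (the original replicated service may have included additional options, such as published ports, that are omitted here):
~ $ docker service rm nginx
~ $ docker service create --mode global --name nginx nginx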
A global mode service is created. The docker service ls command lists the service. For a global service, the MODE column lists global, and the REPLICAS column lists the number of running tasks out of the number of nodes (3/3) rather than a configured replica count, as no replicas are configured for a global service.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
5prj6c4v4be6 nginx global 3/3 nginx:latest
A service task is created for a global service on each node in the Swarm on which a task can run.
Scheduling constraints may be used with a global service to prevent running a task on each node.
Scheduling is discussed in Chapter 8. Global services cannot be scaled.
Summary
This chapter introduced Docker services running on a Docker Swarm. A service consists of service tasks or
replicas. A Docker Swarm supports two types of services—replicated services and global services. A replicated
service has the assigned number of replicas and is scalable. A global service has a task on each node in a
Swarm. The term “replica” is used in the context of a replicated service to refer to the service tasks that are
run across the nodes in a Swarm. A replicated service could run a specified number of tasks for a service,
which could imply running no tasks or running multiple tasks on a particular node. The term “replica” is
generally not used in the context of a global service, which runs only one task on each node in the Swarm.
Each task (replica) is associated with a Docker container. We started with a Hello World service and invoked
the service with curl on the command line and in a browser. Subsequently, we discussed a service for
a MySQL database. We started a bash shell for a MySQL service container and created a database table.
Scaling, updating, and removing a service are some of the other service features this chapter covered.
The chapter concluded by creating a global service. The next chapter covers scaling Docker Swarm services in more detail.
CHAPTER 5
Scaling Services
Docker Engine is suitable for developing lightweight applications that run in Docker containers that are
isolated from each other. Docker containers are able to provide their own networking and filesystem.
The Problem
Docker Engine (prior to native Swarm mode) was designed to run Docker containers that must be started
separately. Consider the use case in which multiple replicas or instances of a service need to be created.
As client load on an application running in a Docker container increases, the application may need to be
run on multiple nodes. A limitation of Docker Engine is that the docker run command must be run each
time a Docker container is to be started for a Docker image. If a Docker application must run on three nodes,
the docker run <img> command must run on each of the nodes as well, as illustrated in Figure 5-1.
No provision to scale an application or run multiple replicas is provided in the Docker Engine
(prior to Docker 1.12 native Swarm mode support).
The Solution
The Docker Swarm mode has the provision to scale a Docker service. A service abstraction is associated
with zero or more replicas (tasks) and each task starts a Docker container for the service. The service
may be scaled up or down to run more/fewer replicas, as required. With a single docker service
scale <svc>=<replicas> command, a service can run the required number of replicas, as illustrated in
Figure 5-2. If 10 service replicas are to be started across a distributed cluster, a single command is able to
provision scaling.
Figure 5-2. A single docker service scale <svc>=<number of tasks> command provisions Docker containers across the nodes
Scaling is supported only for replicated services. A global service runs one service task on each node
in a Swarm. Scaling services was introduced in Chapter 3 and, in this chapter, we discuss some of the other
aspects of scaling services not discussed in Chapter 3. This chapter covers the following topics:
• Setting the environment
• Creating a replicated service
• Scaling up a service
• Scaling down a service
• Removing a service
• Global services cannot be scaled
• Scaling multiple services in the same command
• Service replicas replacement on a node leaving the Swarm
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ilru4f0i280w2tlsrg9hglwsj ip-172-31-10-132.ec2.internal Ready Active
w5to186ipblpcq390625wyq2e ip-172-31-37-135.ec2.internal Ready Active
zkxle7kafwcmt1sd93kh5cy5e * ip-172-31-13-155.ec2.internal Ready Active Leader
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ndu4kwqk9ol7 mysql replicated 1/1 mysql:latest
As service replicas take a while (albeit a few seconds) to start, initially 0/1 replicas could be listed in the
REPLICAS column, which implies that the desired state of running one service replica has not been achieved
yet. Run the same command after a few seconds and 1/1 REPLICAS should be listed as running.
Optionally, the docker service create command may also be run by setting the --mode option.
Remove the mysql service if it was created previously and use the --mode option as follows.
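A sketch of the commands (the environment variable value is illustrative):
~ $ docker service rm mysql
~ $ docker service create --mode replicated --name mysql --env MYSQL_ROOT_PASSWORD='mysql' mysql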
The mysql service is created just as it is without the --mode replicated option. List the service replicas or tasks
with docker service ps mysql. A single replica is listed.
One service replica is created by default if the --replicas option is omitted. It should be mentioned
that running multiple replicas of the MySQL database does not automatically imply that they are sharing
data, so accessing one replica will not give you the same data as another replica. Sharing data using mounts
is discussed in Chapter 6.
Scaling Up a Service
The docker service scale command, which has the following syntax, may be used to scale up/down a
service, which changes the desired state of the service.
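docker service scale SERVICE=REPLICAS [SERVICE=REPLICAS...]
As an example (a minimal sketch), scale the mysql service to three replicas.
~ $ docker service scale mysql=3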
Subsequently, three tasks are listed as scheduled on the three nodes in the Swarm.
In addition to one replica on the manager node, one replica each is started on each of the two worker
nodes. If the docker ps command is run on the manager node, only one Docker container for the mysql
Docker image is listed.
~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d2161a3b282 mysql:latest "docker-entrypoint..." 50 seconds ago Up 49 seconds 3306/tcp mysql.1.yrikmh7mciv7dsmql1nhdi62l
A service may also be scaled using the docker service update command with the --replicas option.
As an example, scale it to 50 replicas.
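~ $ docker service update --replicas 50 mysql
mysql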
The service is scaled to 50 replicas and, subsequently, 50 service tasks are listed.
wiou769z8xeh mysql.21 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 47 seconds ago
5exwimn64w94 mysql.22 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 48 seconds ago
agqongnh9uu3 mysql.23 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 45 seconds ago
ynkvjwgqqqlx mysql.24 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 47 seconds ago
yf87kbsn1cga mysql.25 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 10 seconds ago
xxqj62007cxd mysql.26 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 45 seconds ago
50ym9i8tjwd5 mysql.27 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 45 seconds ago
7btl2pga1l5o mysql.28 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 46 seconds ago
62dqj60q1ol8 mysql.29 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 45 seconds ago
psn7zl4th2zb mysql.30 mysql:latest ip-172-31-37-135.ec2.internal
Running Preparing 16 seconds ago
khsj2an2f5gk mysql.31 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 45 seconds ago
rzpndzjpmuj7 mysql.32 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 45 seconds ago
9zrcga93u5fi mysql.33 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 45 seconds ago
x565ry5ugj8m mysql.34 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 48 seconds ago
o1os5dievj37 mysql.35 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 46 seconds ago
dritgxq0zrua mysql.36 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 45 seconds ago
n8hs01m8picr mysql.37 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 47 seconds ago
dk5w0qnkfb63 mysql.38 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 45 seconds ago
joii103na4ao mysql.39 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 45 seconds ago
db5hz7m2vac1 mysql.40 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 46 seconds ago
ghk6s12eeo48 mysql.41 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 45 seconds ago
jbi8aksksozs mysql.42 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 47 seconds ago
rx3rded30oa4 mysql.43 mysql:latest ip-172-31-37-135.ec2.internal
Running Running 47 seconds ago
c3zaacke440s mysql.44 mysql:latest ip-172-31-13-155.ec2.internal
Running Running 45 seconds ago
l6ppiurx4306 mysql.46 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 46 seconds ago
of06zibtlsum mysql.47 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 46 seconds ago
kgjjwlc9zmp8 mysql.48 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 46 seconds ago
rw1icgkyw61u mysql.49 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 46 seconds ago
j5jpl9a5jgbj mysql.50 mysql:latest ip-172-31-10-132.ec2.internal
Running Running 47 seconds ago
A small-scale MySQL database service probably wouldn’t benefit from scaling to 50 replicas, but an
enterprise-scale application could use 50 or even more replicas.
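The service may be scaled down to zero replicas, for example:
~ $ docker service scale mysql=0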
The service gets scaled down to no replicas, and no running service replicas are listed.
The actual service tasks could take a while to shut down, but the desired state of all tasks is set to
Shutdown.
Scaling a service to no tasks does not run any tasks, but the service is not removed. The mysql service
may be scaled back up again from none to three tasks as an example.
Removing a Service
A service may be removed using the docker service rm command.
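~ $ docker service rm mysql
mysql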
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
Multiple services may be removed using the docker service rm command. To demonstrate, you can
create two services, hello-world and nginx.
Subsequently, remove both the services with one docker service rm command. The services removed
are output if the command is successful.
The global service is created and listing the service should indicate a Mode set to global.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
nxhnrsiulymd mysql-global global 3/3 mysql:latest
If another node is added to the Swarm, a service task automatically starts on the new node.
If the docker service scale command is run for the global service, the service does not get scaled.
Instead, the following message is output.
A global service may be removed just as a replicated service, using the docker service rm command.
List the two services. One replica for each service should be running.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
1umb7e2gr68s mysql replicated 1/1 mysql:latest
u6i4e8eg720d nginx replicated 1/1 nginx:latest
Scale the nginx service and the mysql service with a single command. Different services may be scaled
to a different number of replicas.
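~ $ docker service scale mysql=5 nginx=10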
The mysql service gets scaled to five tasks and the nginx service gets scaled to 10 replicas. Initially, some
of the new tasks for a service may not have started, as for the nginx service, which lists only 8 of the 10 tasks
as running.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
1umb7e2gr68s mysql replicated 5/5 mysql:latest
u6i4e8eg720d nginx replicated 8/10 nginx:latest
After a while, all service tasks should be listed as running, as indicated by 10/10 for the nginx service.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
1umb7e2gr68s mysql replicated 5/5 mysql:latest
u6i4e8eg720d nginx replicated 10/10 nginx:latest
The service tasks for the two services may be listed using a single docker service ps command.
p84m8yh5if5t nginx.7 nginx:latest ip-172-31-37-135.ec2.internal
Running Running 41 seconds ago
7yp8m7ytt7z4 nginx.8 nginx:latest ip-172-31-26-234.ec2.internal
Running Running 24 seconds ago
zegs90r015nn nginx.9 nginx:latest ip-172-31-37-135.ec2.internal
Running Running 41 seconds ago
qfkpvy28g1g6 nginx.10 nginx:latest ip-172-31-26-234.ec2.internal
Running Running 24 seconds ago
A replacement service task for the service task that was running on the node that left the Swarm gets scheduled on another node.
Make the other worker node also leave the Swarm. The service replicas on the other worker node also
get shut down and scheduled on the only remaining node in the Swarm.
If only the replicas with desired state as running are listed, all replicas are listed as running on the
manager node.
Summary
This chapter discussed service scaling in Swarm mode. Only a replicated service can be scaled and not a
global service. A service may be scaled up to as many replicas as resources can support and can be scaled
down to no replicas. Multiple services may be scaled using the same command. Desire state reconciliation
ensures that the desired number of service replicas are running. The next chapter covers Docker service
mounts.
CHAPTER 6
Using Mounts
A service task container in a Swarm has access to the filesystem inherited from its Docker image. The data is
made integral to a Docker container via its Docker image. At times, a Docker container may need to store or
access data on a persistent filesystem. While a container has a filesystem, it is removed once the container
exits. In order to store data across container restarts, that data must be persisted somewhere outside the
container.
The Problem
Data stored only within a container could result in the following issues:
• The data is not persistent. The data is removed when a Docker container is stopped.
• The data cannot be shared with other Docker containers or with the host filesystem.
The Solution
Modular design based on the Single Responsibility Principle (SRP) recommends that data be decoupled
from the Docker container. Docker Swarm mode provides mounts for sharing data and making data
persistent across a container startup and shutdown. Docker Swarm mode provides two types of mounts for
services:
• Volume mounts
• Bind mounts
The default is the volume mount. A mount for a service is created using the --mount option of the
docker service create command.
Volume Mounts
Volume mounts are named volumes on the host mounted into a service task’s container. The named
volumes on the host persist even after a container has been stopped and removed. The named volume may
be created before creating the service in which the volume is to be used or the volume may be created at
service deployment time. Named volumes created at deployment time are created just prior to starting a
service task’s container. If created at service deployment time, the named volume is given an auto-generated
name if a volume name is not specified. An example of a volume mount is shown in Figure 6-1, in which
a named volume mysql-scripts, which exists prior to creating a service, is mounted into service task
containers at the directory path /etc/mysql/scripts.
Each container in the service has access to the same named volume on the host on which the container
is running, but the host named volume could store the same or different data.
When using volume mounts, contents are not replicated across the cluster. For example, if you put
something into the mysql-scripts directory you’re using, those new files will only be accessible to other
tasks running on that same node. Replicas running on other nodes will not have access to those files.
Bind Mounts
Bind mounts are filesystem paths on the host on which the service task is to be scheduled. The host
filesystem path is mounted into a service task’s container at the specified directory path. The host filesystem
path must exist on each host in the Swarm on which a task may be scheduled prior to a service being
created. If certain nodes are to be excluded for service deployment, using node constraints, the bind mount
host filesystem does not have to exist on those nodes. When using bind mounts, keep in mind that the
service using a bind mount is not portable as such. If the service is to be deployed in production, the host
directory path must exist on each host in the Swarm in the production cluster.
The host filesystem path does not have to be the same as the destination directory path in a task
container. As an example, the host path /db/mysql/data is mounted as a bind mount into a service’s
containers at directory path /etc/mysql/data in Figure 6-2. A bind mount is read-write by default, but
could be made read-only at service deployment time. Each container in the service has access to the
same directory path on the host on which the container is running, but the host directory path could store
different or the same data.
Swarm mode mounts provide shareable named volumes and filesystem paths on the host that persist
across a service task startup and shutdown. A Docker image’s filesystem is still at the root of the filesystem
hierarchy and a mount can only be mounted on a directory path within the root filesystem.
This chapter covers the following topics:
• Setting the environment
• Types of mounts
• Creating a named volume
• Using a volume mount to get detailed info about a volume
• Removing a volume
• Creating and using a bind mount
Figure 6-3. EC2 instances for Docker for AWS Swarm nodes
List the nodes in the Swarm. A manager node and two worker nodes are listed.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
8ynq7exfo5v74ymoe7hrsghxh ip-172-31-33-230.ec2.internal Ready Active
o0h7o09a61ico7n1t8ooe281g * ip-172-31-16-11.ec2.internal Ready Active Leader
yzlv7c3qwcwozhxz439dbknj4 ip-172-31-25-163.ec2.internal Ready Active
Table 6-1. Options for the docker volume create Command for a Named Volume
Create a named volume called hello using the docker volume create command.
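~ $ docker volume create hello
hello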
Subsequently, list the volumes with the docker volume ls command. The hello volume is listed in
addition to other named volumes that may exist.
~ $ docker volume ls
DRIVER VOLUME NAME
local hello
You can find detailed info about the volume using the following command.
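~ $ docker volume inspect hello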
In addition to the volume name and driver, the mountpoint of the volume also is listed.
The scope of a local driver volume is local. The other supported scope is global. A local volume is
created on a single Docker host and a global volume is created on each Docker host in the cluster.
Some of the mount options are only supported for volume mounts and are discussed in Table 6-3.
The options discussed in Table 6-4 are supported only with a mount of type tmpfs.
Next, we will use the named volume hello in a service created with Docker image tutum/hello-world.
In the following docker service create command, the --mount option specifies the src as hello and
includes some volume-label labels for the volume.
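A command along these lines (a sketch; the mount destination path and the label value are illustrative) creates the service with the volume mount.
~ $ docker service create --name hello-world --replicas 2 --publish 8080:80 --mount 'src=hello,dst=/hello,volume-label="msg=hello"' tutum/hello-world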
In the preceding example, a named volume is created before using the volume in a volume mount. As
another example, create a named volume at deployment time. In the following docker service create
command, the --mount option is set to type=volume with the source set to nginx-root. The named volume
nginx-root does not exist prior to creating the service.
When the command is run, a service is created. Service description includes the volume mount in
mounts.
The named volume nginx-root was not created prior to creating the service and is therefore created
before starting containers for service tasks. The named volume nginx-root is created only on nodes on
which a task is scheduled. One service task is scheduled on each of the three nodes.
As a task is scheduled on the manager node, a named volume called nginx-root is created on the
manager node, as listed in the output of the docker volume ls command.
~ $ docker volume ls
DRIVER VOLUME NAME
local hello
local nginx-root
Service tasks and task containers are started on each of the two worker nodes. A nginx-root named
volume is created on each of the worker nodes. Listing the volumes on the worker nodes lists the nginx-root
volume.
A named volume was specified in src in the preceding example. The named volume may be omitted as
in the following service definition.
The service is created with a replica and is scheduled on each of the Swarm nodes.
Named volumes with auto-generated names are created when a volume name is not specified explicitly. One such volume is created on each node on which a service task is run. One of the volumes listed on the manager node is a named volume with an auto-generated name.
~ $ docker volume ls
DRIVER VOLUME NAME
local 305f1fa3673e811b3b320fad0e2dd5786567bcec49b3e66480eab2309101e233
local hello
local nginx-root
As another example of using named volumes as mounts in a service, create a named volume called
mysql-scripts for a MySQL database service.
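~ $ docker volume create mysql-scripts
mysql-scripts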
~ $ docker volume ls
DRIVER VOLUME NAME
local 305f1fa3673e811b3b320fad0e2dd5786567bcec49b3e66480eab2309101e233
local hello
local mysql-scripts
local nginx-root
The volume description lists the scope as local and lists the mountpoint.
Next, create a service that uses the named volume in a volume mount.
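A command along these lines (a sketch consistent with the listings that follow; the root password value is illustrative) creates the service.
~ $ docker service create --name mysql --replicas 2 --publish 3306:3306 --env MYSQL_ROOT_PASSWORD='mysql' --mount src=mysql-scripts,dst=/etc/mysql/scripts mysql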
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
8ily37o72wyx hello-world replicated 2/2 tutum/hello-world:latest *:8080->80/tcp
cghaz4zoxurp ysql replicated 1/2 mysql:latest *:3306->3306/tcp
Listing the service tasks indicates that the tasks are scheduled on the manager node and one of the
worker nodes.
The destination directory for the named volume is created in the Docker container. The Docker
container on the manager node may be listed with docker ps and a bash shell on the container may be
started with the docker exec -it <containerid> bash command.
~ $ docker ps
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
a855826cdc75 mysql:latest "docker-entrypoint..."
22 seconds ago Up 21 seconds 3306/tcp mysql.2.zg7wrludkr84zf8vhdkf8wnlh
~ $ docker exec -it a855826cdc75 bash
root@a855826cdc75:/#
Change the directory to /etc/mysql/scripts in the container. Initially, the directory is empty.
root@a855826cdc75:/# cd /etc/mysql/scripts
root@a855826cdc75:/etc/mysql/scripts# ls -l
total 0
root@a855826cdc75:/etc/mysql/scripts# exit
exit
A task container for the service is created on one of the worker nodes and may be listed on the worker
node.
~ $ docker ps
CONTAINER ID IMAGE COMMAND
CREATED STATUS PORTS NAMES
eb8d59cc2dff mysql:latest "docker-entrypoint..."
8 minutes ago Up 8 minutes 3306/tcp mysql.1.xjmx7qviihyq2so7n0oxi1muq
Start a bash shell for the Docker container on the worker node. The /etc/mysql/scripts directory on
which the named volume is mounted is created in the Docker container.
If a service using an auto-generated named volume is scaled to run a task on nodes on which a task was
not running previously, named volumes are auto-generated on those nodes also. As an example of finding
the effect of scaling a service when using an auto-generated named volume as a mount in the service, create
a MySQL database service with a volume mount. The volume mysql-scripts does not exist prior to creating
the service; remove the mysql-scripts volume if it exists.
List the nodes; the node on which the service task is scheduled is the manager node.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
o5hyue3hzuds8vtyughswbosl ip-172-31-11-41.ec2.internal Ready Active
p6uuzp8pmoahlcwexr3wdulxv ip-172-31-23-247.ec2.internal Ready Active
qnk35m0141lx8jljp87ggnsnq * ip-172-31-13-122.ec2.internal Ready Active Leader
A named volume mysql-scripts and an ancillary named volume with an auto-generated name are
created on the manager node on which a task is scheduled.
~ $ docker volume ls
DRIVER VOLUME NAME
local a2bc631f1b1da354d30aaea37935c65f9d99c5f084d92341c6506f1e2aab1d55
local mysql-scripts
The worker nodes do not list the mysql-scripts named volume, as a task is not scheduled on the
worker nodes.
~ $ docker volume ls
DRIVER VOLUME NAME
Scale the service to three replicas. A replica is scheduled on each of the three nodes.
A named volume mysql-scripts and an ancillary named volume with an auto-generated name are
created on the worker nodes because a replica is scheduled.
~ $ docker volume ls
DRIVER VOLUME NAME
local 431a792646d0b04b5ace49a32e6c0631ec5e92f3dda57008b1987e4fe2a1b561
local mysql-scripts
[root@localhost ~]# ssh -i "docker.pem" [email protected]
Welcome to Docker!
~ $ docker volume ls
DRIVER VOLUME NAME
local afb2401a9a916a365304b8aa0cc96b1be0c161462d375745c9829f2b6f180873
local mysql-scripts
Named volumes that are generated automatically at service deployment, such as mysql-scripts, are persistent and do not get removed when a service replica is shut down. The ancillary volumes with auto-generated names, in contrast, are not persistent. As an example, scale the service back to one replica. Two of the replicas shut down, including the replica on the manager node.
But the named volume mysql-scripts on the manager node is not removed even though no Docker
container using the volume is running.
~ $ docker volume ls
DRIVER VOLUME NAME
local mysql-scripts
Similarly, the named volume on a worker node on which a service replica is shut down also does not get
removed even though no Docker container using the named volume is running. The named volume with the
auto-generated name is removed when no container is using it, but the mysql-scripts named volume is not.
Removing a Volume
A named volume may be removed using the following command.
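~ $ docker volume rm mysql-scripts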
If the volume you try to delete is used in a Docker container, an error is generated instead and the
volume will not be removed. Even a named volume with an auto-generated name cannot be removed if it’s
being used in a container.
Similarly, create a directory and add a SQL script to the worker nodes.
Create a service with a bind mount that’s using the host directory. The destination directory is specified
as /scripts.
Start a bash shell for the service container from the node on which a task is scheduled. The destination
directory /scripts is listed.
core@ip-10-0-0-143 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
e71275e6c65c mysql:latest "docker-entrypoint.sh" 5 seconds ago
Up 4 seconds 3306/tcp mysql.1.btqfrx7uffym2xvc441pubaza
Change the directory (cd) to the destination mount path /scripts. The createtable.sql script is listed
in the destination mount path of the bind mount.
root@e71275e6c65c:/# cd /scripts
root@e71275e6c65c:/scripts# ls -l
-rw-r--r--. 1 root root 1478 Jul 24 20:44 createtable.sql
Each service task Docker container has its own copy of the file on the host. Because, by default, the
mount is read-write, the files in the mount path may be modified or removed. As an example, remove the
createtable.sql script from a container.
A mount may be made read-only by including an additional option in the --mount arg, as discussed
earlier. To demonstrate a readonly mount, first remove the mysql service that’s already running. Create
a service and mount a readonly bind with the same command as before, except include an additional
readonly option.
root@3bf9cf777d25:/scripts# rm createtable.sql
rm: cannot remove 'createtable.sql': Read-only file system
Summary
This chapter introduced mounts in Swarm mode. Two types of mounts are supported—bind mount and
volume mount. A bind mount mounts a pre-existing directory or file from the host into each container of a
service. A volume mount mounts a named volume, which may or may not exist prior to creating a service,
into each container in a service. The next chapter discusses configuring resources.
CHAPTER 7
Configuring Resources
Docker containers run in isolation on the underlying OS kernel and require resources to run. Docker Swarm
mode supports two types of resources—CPU and memory—as illustrated in Figure 7-1.
The Problem
By default, Docker Swarm mode does not impose any limit on how many resources (CPU cycles or memory)
a service task may consume. Nor does Swarm mode guarantee minimum resources. Two issues can result if
no resource configuration is specified in Docker Swarm mode.
Some of the service tasks could consume a disproportionate amount of resources, while the other
service tasks are not able to get scheduled due to lack of resources. As an example, consider a node
with resource capacity of 3GB and 3 CPUs. Without any resource guarantees and limits, one service
task container could consume most of the resources (2.8GB and 2.8 CPUs), while two other service task
containers each have only 0.1GB and 0.1 CPU of resources remaining to be used and do not get scheduled,
as illustrated in Figure 7-2. A Docker service task that does not have enough resources to get scheduled is
put in Pending state.
The second issue that can result is that the resource capacity of a node can get fully used up without any
provision to schedule any more service tasks. As an example, a node with a resource capacity of 9GB and 9
CPUs has three service task containers running, with each using 3GB and 3 CPUs, as illustrated in Figure 7-3.
If a new service task is created for the same or another service, it does not have any available resources on
the node.
The Solution
Docker Swarm mode has a provision to set resource guarantees (or reserves) and resource limits, as
illustrated in Figure 7-4. A resource reserve is the minimum amount of a resource that is guaranteed or
reserved for a service task. A resource limit is the maximum amount of a resource that a service task can use
regardless of how much of a resource is available.
Figure 7-4. Managing Swarm resources with resource reserves and limits
With resource reserves, each service task container in the first example discussed previously can be guaranteed 1 CPU and 1GB, as illustrated in Figure 7-5.
And, if resource limits are implemented for service task containers, excess resources would be available
to start new service task containers. In the example discussed previously, a limit of 2GB and 2 CPUs per
service task would keep the excess resources of 3GB and 3 CPUs available for new service task containers, as
illustrated in Figure 7-6.
List the Swarm nodes; a manager node and two worker nodes are listed.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
8ynq7exfo5v74ymoe7hrsghxh ip-172-31-33-230.ec2.internal Ready Active
o0h7o09a61ico7n1t8ooe281g * ip-172-31-16-11.ec2.internal Ready Active Leader
yzlv7c3qwcwozhxz439dbknj4 ip-172-31-25-163.ec2.internal Ready Active
A single service replica is created. The output of the command is the service ID (shown in italics).
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
2kcq6cf72t4w mysql replicated 1/1 mysql:latest
List the service tasks. The only service task is running on a worker node.
On inspecting the service, the container spec does not include any resources, limits, or reserves. The
single service task may use all of the available resources on the node on which it’s scheduled.
Reserving Resources
Swarm mode provides two options for resource reserves in the docker service create and docker
service update commands, as listed in Table 7-1.
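The reserve options are --reserve-cpu and --reserve-memory; the corresponding limit options are --limit-cpu and --limit-memory. A service with resource reserves and limits may be created with a command along these lines (a sketch; the values are illustrative):
~ $ docker service create --name mysql --env MYSQL_ROOT_PASSWORD='mysql' --reserve-cpu .25 --reserve-memory 128mb --limit-cpu 1 --limit-memory 256mb mysql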
On inspecting the service, the resources limits and reserves are listed, which contrasts with the empty
settings for resources when a service is created without the resources definition.
The CPU limit on each service task created in the preceding section is also 1 CPU. When scaling, the
total of the resource limits for all service tasks on a node may exceed the node's capacity. However, the total
of resource reserves must not exceed node capacity.
As an example, scale to five replicas.
Scaling to five schedules two replicas on the manager node, two replicas on one of the worker nodes,
and one replica on the other worker node. The aggregate of the resource limits on some of the nodes exceeds node capacity, but the aggregate of the resource reserves is within each node's capacity.
The service configuration has the resource reserves exceeding the resource limits.
The resource reserves are within the node capacity, but because the resource limits are less than the
resource reserves, the newly started service task fails and is shut down. The service task keeps getting
restarted and shut down.
The service task resource limits can be the same as the resource reserves. Remove the mysql service and
create it again with the resource limits the same as the resource reserves. The output of the command is the
service ID (shown in italics).
The service is created and the single task is scheduled. The service task does not fail as when the
resource reserves exceeded the resource limit.
~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
14d5553f0393 mysql:latest "docker-entrypoint..." 34 seconds ago Up 33 seconds
3306/tcp mysql.1.4i1fpha53absl4qky9dgafo8t
The resources are updated. Updating the resource specification for a service shuts down the service
replica and starts a new replica with the new resource specification.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
81bu63v97p9r mysql replicated 1/1 mysql:latest
~ $ docker service ps mysql
ID NAME IMAGE NODE
DESIRED STATE CURRENT STATE ERROR PORTS
xkis4mirgbtv mysql.1 mysql:latest ip-172-31-33-230.ec2.internal
Running Running 14 seconds ago
4i1fpha53abs \_ mysql.1 mysql:latest ip-172-31-16-11.ec2.internal
Shutdown Shutdown 15 seconds ago
"Reservations": {
"NanoCPUs": 1000000000,
"MemoryBytes": 268435456
}
},
]
None of the service replicas is scheduled, as indicated by the Replicas column value of 0/3, because
the requested capacity is more than the node capacity of a single node.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
cgrihwij2znn mysql replicated 0/3 mysql:latest
If a service that was previously running with all replicas scheduled is scaled up, some or all of the new replicas could remain unscheduled. This happens if the resources required to run the new replicas exceed the available node
capacity. As an example, remove the mysql service and create a new mysql service with resource settings
within the provision of a node. The output of the command is the service ID (shown in italics).
The service is created and the single replica is running as indicated by the Replicas column value of 1/1.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 1/1 mysql:latest
Incrementally scale up the service to determine if all of the service replicas are scheduled. First, scale up
to three replicas.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 3/3 mysql:latest
The service replicas are scheduled, one replica on each node in the Swarm, using the spread scheduling
strategy, which is discussed in more detail in Chapter 8.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 3/10 mysql:latest
Some of the replicas are Allocated but not scheduled for running on any node due to insufficient
resources. The service replicas not running are listed with Current State set to Pending.
Adding one or more new worker nodes could make the service reconcile its desired state and cause all the replicas to run. To demonstrate, we next update the CloudFormation stack to increase the number of worker nodes.
The Update Docker Stack wizard starts; it's similar to the Create Stack wizard. On the Select Template screen, click on Next without modifying any settings. On the Specify Details screen, increase Number of Swarm Worker Nodes? to 10, as shown in Figure 7-9. Click on Next.
When the update completes, the stack’s status becomes UPDATE_COMPLETE, as shown in Figure 7-11.
The Swarm gets eight new worker nodes, for a total of 10 worker nodes. List the service description periodically (at intervals of a few seconds); as new worker nodes are created, new replicas start to reconcile the current state with the desired state. The number of replicas in the REPLICAS column increases gradually. All the replicas for the mysql service start running, as indicated by 10/10 in the service listing.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 3/10 mysql:latest
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 6/10 mysql:latest
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 9/10 mysql:latest
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 10/10 mysql:latest
Listing the service replicas lists all replicas as Running. The previously Pending replicas are scheduled
on the new nodes.
If the stack is updated again to decrease the number of worker nodes, some of the replicas shut down
and are de-scheduled. After decreasing the number of worker nodes, the Replicas column lists only 5/10
replicas as running.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
ysef8n02mhuw mysql replicated 5/10 mysql:latest
Some of the service tasks are listed as Shutdown because some of the worker nodes have been removed
from the Swarm.
Summary
This chapter discussed the resources model of Docker Swarm mode, which is based on resource reserves
and resource limits. Reserved resources cannot be more than resource limits and resource allocation to
service tasks is limited by the node capacity. The next chapter discusses scheduling in Docker Swarm mode.
CHAPTER 8
Scheduling
In Chapter 2, the Docker Swarm was introduced, and in Chapter 4, Docker Swarm services were introduced. A service consists of zero or more service tasks (replicas), which it schedules on the nodes in a Swarm. The desired state of a service includes the number of tasks that must be run. Scheduling is the process of placing a service task on a node in the Swarm so that the desired state of the service is maintained, as illustrated in Figure 8-1. A service task may only be scheduled on a worker node; a manager node is also a worker node by default.
Figure 8-1. Scheduling
The Problem
Without a scheduling policy, the service tasks could get scheduled on a subset of nodes in a Swarm. As
an example, all three tasks in a service could get scheduled on the same node in a Swarm, as illustrated in
Figure 8-2.
Figure 8-2. All the tasks of a service scheduled on the same node in the Swarm
The Solution
To overcome the issues discussed in the preceding section, service task scheduling in a Docker Swarm is
based on a built-in scheduling policy. Docker Swarm mode uses the spread scheduling strategy to rank nodes
for placement of a service task (replica). Node ranking is computed for scheduling of each task and a task is
scheduled on the node with the highest computed ranking. The spread scheduling strategy computes node
rank based on the node's available CPU, RAM, and the number of containers already running on the node. The
spread strategy optimizes for the node with the least number of containers. Load sharing is the objective of the
spread strategy and results in tasks (containers) spread thinly and evenly over several machines in the Swarm.
The expected outcome of the spread strategy is that if a single node or a small subset of nodes goes down or becomes unavailable, only a few tasks are lost and the majority of tasks in the Swarm continue to be available.
■■Note Because a container consumes resources during all states, including when it is exited, the spread
strategy does not take into consideration the state of a container. It is recommended that a user remove
stopped containers, because a node that would otherwise be eligible and suitable for scheduling a new task
becomes unsuitable if it has several stopped containers.
The spread scheduling strategy does not take into consideration for which service a task is scheduled.
Only the available and requested resources are used to schedule a new task. Scheduling using the spread
scheduling policy is illustrated in Figure 8-3.
Figure 8-3. Spread scheduling of service tasks across the nodes in the Swarm
As a hypothetical example:
1. Start with three nodes, each with a capacity of 3GB and 3 CPUs and no containers running.
2. Create a mysql service with one replica, which requests resources of 1GB and 1 CPU. Because all three nodes have the same ranking, the first replica is scheduled randomly on one of them.
3. Scale the mysql service to three tasks. As one of the nodes is already loaded, the two new tasks are scheduled on the other two nodes, one task on each node.
4. Scale the mysql service to five tasks. Two new tasks must be started, and all the nodes have the same ranking because they have the same available resource capacity and the same number of containers running. The two new tasks are scheduled randomly on two of the nodes. As a result, two nodes have two tasks each and one node has one task.
5. Create another service for the nginx server with a desired state of two tasks, each task requesting 0.5GB and 0.5 CPU. Both tasks are scheduled on the node that has only one mysql task, as it is the least loaded. As a result, two nodes have two mysql tasks and an available capacity of 1GB and 1 CPU each, and one node has two nginx tasks and one mysql task, also with an available capacity of 1GB and 1 CPU.
6. Scale the nginx service to three. Even though all nodes have the same available CPU and RAM, the new task is not scheduled randomly across the three nodes; it is scheduled on a node with the least number of containers. As a result, the new nginx task is scheduled randomly on one of the two nodes that have two mysql tasks each. If nodes have the same available CPU and RAM, the node with fewer containers (running or stopped) is selected for scheduling the new task.
This chapter covers the following topics:
• Setting the environment
• Creating and scheduling a service—the spread scheduling
• Desired state reconciliation
• Scheduling tasks limited by node resource capacity
• Adding service scheduling constraints
• Scheduling on a specific node
• Adding multiple scheduling constraints
• Adding node labels for scheduling
• Adding, updating, and removing service scheduling constraints
• Spread scheduling and global services
Log in with SSH to the Swarm manager using the public IP address, which may be obtained from the EC2 console, as shown in Figure 8-5.
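On Docker for AWS the SSH user is docker; the key file name here is illustrative:
~ $ ssh -i "docker-swarm.pem" docker@<manager-public-ip>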
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER
STATUS
0waa5g3b6j641xtwsygvjvwc1 ip-172-31-0-147.ec2.internal Ready Active
e7vigin0luuo1kynjnl33v9pa ip-172-31-29-67.ec2.internal Ready Active
ptm7e0p346zwypos7wnpcm72d * ip-172-31-25-121.ec2.internal Ready Active Leader
Subsequently, list the services using docker service ls. Initially, the REPLICAS column could be 0/5,
indicating that none of the replicas are scheduled and running yet.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
1onpemnoz4x1 mysql replicated 0/5 mysql:latest
Run the command again after a while; all the replicas should be running as indicated by a 5/5 in the
REPLICAS column. List the service replicas using the docker service ps mysql command. The tasks should
be running or preparing to run.
Following the spread scheduling strategy, two of the replicas are listed as scheduled on one of the
worker nodes, two on the other worker node, and one on the manager node. Because of the odd number of
replicas, the placement cannot be completely evenly distributed, but a single node does not have more than
two replicas.
To see how the spread scheduling strategy distributes the replicas evenly across a Swarm, scale the
service to six replicas. The output of the docker service scale command is in italics.
Subsequently, list the replicas. Each node has two replicas scheduled on it, as the spread scheduling
policy is designed to schedule.
As a service replica or task is nothing but a slot to run a container, each node runs two containers for the
mysql service.
To further demonstrate spread scheduling, scale down the service to three tasks. The command output
is in italics.
List the service tasks. Each node has one task running on it, which again is an evenly spread scheduling
of tasks.
In an earlier task listing, all tasks were in the current state preparing and the desired state running.
Swarm mode is designed to reconcile the desired state as much as feasible, implying that if node
resources are available, the desired number of replicas runs. To demonstrate, update the Docker for AWS
CloudFormation stack by choosing Actions ➤ Update Stack, as shown in Figure 8-6.
Decrease the number of worker nodes from two to one, as shown in Figure 8-7.
Subsequently, list the service replicas from the Swarm manager node.
The service replicas running on the Swarm worker node that was made to leave the Swarm are listed as
shutdown. New replicas are started on the remaining two nodes in the Swarm to reconcile the desired state.
Listing only the replicas with a desired state of running, the six replicas are listed as scheduled evenly
between the two nodes—three replicas on the manager node and three replicas on the worker node.
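The running replicas may be listed with the desired-state filter:
~ $ docker service ps -f desired-state=running mysql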
The spread scheduling strategy does not reschedule already running replicas to achieve even spread
across a Swarm if new nodes are added to the Swarm. To demonstrate this, we increase the number of
worker nodes back to two, as shown in Figure 8-8.
Adding a node to a swarm does not shut down replicas on other nodes and start replicas on the new
node. Listing the running replicas does not indicate a replacement of the service replicas. Service replicas
continue to run on the nodes they were running on before the new node was added—three on the manager
node and three on the worker node.
Listing the services indicates that one replica of the service is created.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
0qe2thy0dlvi mysql replicated 1/1 mysql:latest
The single replica is scheduled on the manager node, which is chosen randomly if all nodes in a Swarm
have the same node ranking.
Next, to potentially make the service replicas consume more resources than available, scale the service
to five replicas.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
0qe2thy0dlvi mysql replicated 3/5 mysql:latest
Listing the service replicas indicates that some of the replicas are pending instead of running.
The pending state implies that the replicas are allocated to the service but not scheduled on any node
yet. Only three replicas could run based on the requested resources and available node resources, one on
each node.
Because the replicas are not scheduled due to lack of resources, we add one or more new worker nodes
to potentially schedule the replicas to reconcile the desired state. Increase the number of worker nodes to
five, as shown in Figure 8-9.
The Swarm should list six nodes after the new worker nodes are added. As resources become available for the pending tasks, the tasks get scheduled and start running.
If the number of worker nodes is decreased, some of the tasks are descheduled, as indicated by the
shutdown desired state.
Updating a service to lower its reserved CPU and memory resource usage only updates the UpdateConfig for the service. It does not lower the resource usage of the already running tasks or make pending or shutdown tasks run. As an example, lower the resource reserves and limits for the mysql service while some of the tasks are pending or shut down due to lack of resources.
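A sketch of such an update, with illustrative lower values:
~ $ docker service update \
> --reserve-cpu 0.5 --reserve-memory 128mb \
> --limit-cpu 0.5 --limit-memory 128mb \
> mysql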
The UpdateConfig gets modified, but only applies to new replicas created after that point.
Only three of the replicas in the mysql service are actually running.
To force the service tasks to use the new resource settings, scale down the service to one task and then
scale back up to five tasks.
A placement constraint may be added using the --constraint option with the docker service create
command. For an already running service, constraints may be added and removed with the --constraint-add
and --constraint-rm options, respectively, with the docker service update command. The node
attributes discussed in Table 8-1 may be used to specify constraints.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER
STATUS
81h6uvu8uq0emnovzkg6v7mzg ip-172-31-2-177.ec2.internal Ready Active
e7vigin0luuo1kynjnl33v9pa ip-172-31-29-67.ec2.internal Ready Active
ptm7e0p346zwypos7wnpcm72d * ip-172-31-25-121.ec2.internal Ready Active Leader
We can schedule a service by node role. Create a mysql service with the placement constraint that the service tasks be scheduled on worker nodes only. First, remove the mysql service if it's already running.
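A sketch of such a command; the replica count and password are illustrative, while the node.role constraint is the one the example relies on:
~ $ docker service rm mysql
~ $ docker service create \
> --replicas 3 \
> --constraint node.role==worker \
> --env MYSQL_ROOT_PASSWORD='mysql' \
> --name mysql \
> mysql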
The service is created and three tasks are scheduled only on the two worker nodes, as listed in the
running service tasks.
Next, we use the node ID to schedule a service's tasks. Copy the node ID for the manager node, which, being the only manager node, is also the leader of the Swarm. Substitute the node ID in the following command to create a service for the MySQL database and schedule replicas only on the manager node.
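A sketch of such a command, using the node ID of the leader listed previously (the other options are illustrative):
~ $ docker service create \
> --replicas 3 \
> --constraint node.id==ptm7e0p346zwypos7wnpcm72d \
> --env MYSQL_ROOT_PASSWORD='mysql' \
> --name mysql \
> mysql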
All the three replicas of the service are scheduled on the manager node only.
A service gets created. Listing the services lists 3/3 replicas as running.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
87g0c8kauhz8 mysql replicated 3/3 mysql:latest
Listing the service tasks indicates that all tasks are scheduled on a single worker node. The two
constraints are met: the node is a worker node and not the worker node with hostname ip-172-31-2-177.ec2.
internal.
If the mysql service is updated to remove the constraints, the spread scheduling strategy reschedules the tasks based on node ranking. As an example, update the service to remove the two placement constraints added. A constraint is removed with the --constraint-rm option of the docker service update command.
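A sketch of such an update, removing the two constraints from the preceding example:
~ $ docker service update \
> --constraint-rm node.role==worker \
> --constraint-rm node.hostname!=ip-172-31-2-177.ec2.internal \
> mysql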
When a service is updated to remove constraints, all the service tasks are shut down and new service
tasks are started. The new service tasks are started, one each on the three nodes in the Swarm.
List only the running tasks. One task is listed running on each node.
Similarly, multiple node constraints could be used to run replicas only on a manager node. Next, we
update the mysql service to run on a specific manager node. First, promote one of the worker nodes to
manager.
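A sketch, promoting the worker node that is subsequently listed as Reachable:
~ $ docker node promote ip-172-31-2-177.ec2.internal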
Subsequently, two manager nodes are listed as indicated by the Manager Status for two of the nodes.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER
STATUS
81h6uvu8uq0emnovzkg6v7mzg ip-172-31-2-177.ec2.internal Ready Active Reachable
e7vigin0luuo1kynjnl33v9pa ip-172-31-29-67.ec2.internal Ready Active
ptm7e0p346zwypos7wnpcm72d * ip-172-31-25-121.ec2.internal Ready Active Leader
Update the mysql service to add multiple node constraints to run replicas only on a specific manager
node. Constraints are added using the --constraint-add option of the docker service update command.
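A sketch of such an update; the constraints pin the tasks to the manager node promoted in the previous step:
~ $ docker service update \
> --constraint-add node.role==manager \
> --constraint-add node.hostname==ip-172-31-2-177.ec2.internal \
> mysql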
Again, all service tasks are shut down and new tasks are started, all on the specified manager node that
was promoted from the worker node.
As an example, add the label db=mysql to the node with a hostname set to ip-172-31-25-121.ec2.
internal, which is the leader node.
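A sketch of the command that adds the label:
~ $ docker node update --label-add db=mysql ip-172-31-25-121.ec2.internal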
A node label is added. On inspecting the node, the label is listed in the Labels field.
Next, create a service that uses the node label to add a placement constraint. The --constraint option
for the label must include the prefix node.labels.
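A sketch of such a command; the replica count and password are illustrative:
~ $ docker service create \
> --replicas 3 \
> --constraint node.labels.db==mysql \
> --env MYSQL_ROOT_PASSWORD='mysql' \
> --name mysql \
> mysql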
The service is created. Listing the tasks lists all the tasks on the Leader manager node, which is what the
node label constraint specified.
The label added may be removed with the --label-rm option of the docker node update command, in which only the label key is specified.
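A sketch of removing the db label:
~ $ docker node update --label-rm db ip-172-31-25-121.ec2.internal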
A mysql service gets created with three replicas scheduled on the three nodes in the Swarm, using the
spread policy.
Next, update the service with the docker service update command to add a constraint for the service
replicas to run only on the manager nodes.
In a Swarm with two manager nodes, all the service tasks are shut down and new tasks are started only
on the manager nodes.
Scheduling constraints may be added and removed in the same docker service update command.
As an example, remove the constraint for the node to be a manager and add a constraint for the node to be a
worker.
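A sketch of such an update:
~ $ docker service update \
> --constraint-rm node.role==manager \
> --constraint-add node.role==worker \
> mysql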
Again, all the service tasks are shut down and new tasks are started only on the worker nodes.
If the only scheduling constraint that specifies the node role as worker is removed, the spread
scheduling strategy starts new tasks spread evenly across the Swarm. To demonstrate, remove the constraint
for the node role to be a worker.
Subsequently, new tasks are spread across the nodes in the Swarm.
The global service is created. Listing the service tasks for the tasks with desired state as running lists
only the tasks on the worker nodes.
If created without the constraint to schedule on worker nodes only, a global service schedules one task
on each node, as demonstrated by the following example.
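A sketch of creating a global service without a placement constraint; the service name and password are illustrative:
~ $ docker service create \
> --mode global \
> --env MYSQL_ROOT_PASSWORD='mysql' \
> --name mysql-global \
> mysql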
Summary
This chapter discussed the spread scheduling policy used in the Docker Swarm mode, whereby service replicas are spread evenly across nodes in a Swarm based on node ranking; a node with a higher ranking gets priority for service replica placement. We also discussed the effect of limited node resource capacity and how
to alleviate it by adding new nodes to the Swarm. We discussed placement constraints for scheduling new
replicas. The spread scheduling policy is not relevant for global services as they create one service task on
each node by default. However, scheduling constraints may be used with global services. In the next chapter
we discuss rolling updates to Docker services.
CHAPTER 9
Rolling Updates
The Docker Swarm mode provisions services consisting of replicas that run across the nodes in the Swarm. A service definition is created when a service is first defined, with the docker service create command. That command provides several options, including those for adding
placement constraints, container labels, service labels, DNS options, environment variables, resource
reserves and limits, logging driver, mounts, number of replicas, restart condition and delay, update delay,
failure action, max failure ratio, and parallelism, most of which were discussed in Chapter 4.
The Problem
Once a service definition has been created, it may become necessary to update some of the service options, such as increasing/decreasing the number of replicas, adding/removing placement constraints, updating resource reserves and limits, adding/removing mounts, adding/removing environment variables, adding/removing container and service labels, adding/removing DNS options, and modifying restart and update parameters. If a service must be shut down as a whole to update service definition options, an interruption of service is the result.
The Solution
Docker Swarm mode includes the provision for rolling updates. In a rolling update, the service is not shut
down, but individual replicas/tasks in the service are shut down one at a time and new service replicas/
tasks based on the new service definition are started one at a time, as illustrated in Figure 9-1. As a result, the service continues to be available during the rolling update. The tasks that serve a client during a rolling update could be from both the old and the new service definitions. As an example, if the rolling update moves to a more recent image tag, the tasks serving external clients during the rolling update could be a mix of the old and the new image tags.
Figure 9-1. Rolling update of service replicas
A rolling update creates a new service definition and a new desired state for a service. A rolling update involves shutting down all service replicas and starting new service replicas; it does not apply to service replicas that have not yet been scheduled, due to lack of resources for example. Even updating just the number of replicas in a rolling update shuts down or fails all the old replicas and starts all new replicas.
The following sequence is used by the scheduler during a rolling update.
1. The first task is stopped.
2. An update for the stopped task is scheduled.
3. A Docker container for the updated task is started.
4. If the update to a task returns RUNNING, wait for the duration specified in --update-delay and start the update to the next task.
5. If, during the update, a task returns FAILED, perform the --update-failure-action, which is to pause the update by default.
6. Restart a paused update with docker service update <SERVICE-ID>.
7. If an update failure is repeated, find the cause of the failure and reconfigure the service by supplying other options to the docker service update.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
81h6uvu8uq0emnovzkg6v7mzg ip-172-31-2-177.ec2.internal Ready Active
e7vigin0luuo1kynjnl33v9pa ip-172-31-29-67.ec2.internal Ready Active
ptm7e0p346zwypos7wnpcm72d * ip-172-31-25-121.ec2.internal Ready Active Leader
To configure the rolling update policy at service deployment time, the options to be configured must
be supplied when the service is created. As an example, create a service for MySQL database and specify the
update policy options --update-delay and --update-parallelism.
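A sketch of such a command; the delay and parallelism values are illustrative:
~ $ docker service create \
> --replicas 1 \
> --name mysql \
> --update-delay 10s \
> --update-parallelism 1 \
> --env MYSQL_ROOT_PASSWORD='mysql' \
> mysql:5.6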
The service is created. Listing the services may not list all replicas as running initially, as indicated by
0/1 in the REPLICAS column.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
wr0z48v1uguk mysql replicated 0/1 mysql:5.6
Running the same command after a while should list all replicas as running, as indicated by 1/1 in
REPLICAS column.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
wr0z48v1uguk mysql replicated 1/1 mysql:5.6
The single service replica is scheduled on the manager node itself and the Docker container for the
replica is started.
Creating a service using rolling update options does not by itself demonstrate a rolling update. It only
defines the UpdateConfig settings of the service. In the next section we perform a rolling update.
Subsequently, the service listing may show some of the replicas as not yet started in the output of the docker service ls command. However, running the command again after a while should list all replicas as running.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
wr0z48v1uguk mysql replicated 5/5 mysql:5.6
During the rolling update, all the running tasks are shut down and new tasks are started. The desired
state of the mysql.1 task gets updated to shutdown and the current state is set to failed. A new task mysql.1
is started.
When scaling from one to five replicas, first a few new tasks are started and then the task running
initially is shut down so that the service continues to be available during the rolling update. If the only task in
the service were to be shut down first before starting any new tasks, the service wouldn’t have any running
tasks for a short while.
The desired state of running five replicas is not immediately reconciled during a rolling update. Fewer
than five tasks could be running while the rolling update is in progress. Listing the running service tasks lists
only three tasks as running.
When the rolling update has completed, five tasks are running.
Inspecting the service should list the updated number of replicas. The UpdateConfig is also listed with
the docker service inspect command.
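Next, a rolling update to a different image tag may be started with the --image option of the docker service update command; a sketch that updates the service to mysql:latest:
~ $ docker service update \
> --image mysql:latest \
> mysql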
The service rolling update gets started. Listing the service replicas lists mysql:5.6 image-based replicas
as shutting down, as indicated by the shutdown desired state and mysql:latest image-based replicas as
starting, as indicated by the running desired state.
While the rolling update is in progress, some of the running tasks could be based on the previous
service specification (mysql:5.6), while others are based on the new service specification (mysql:latest).
When the rolling update has completed, all running tasks are based on the new service specification.
When the update has completed, the UpdateStatus state becomes "completed" and the Message
becomes "update completed".
As indicated by the StartedAt and CompletedAt timestamp, the rolling update takes about two minutes.
Listing only tasks with desired state of running indicates that one task has been running for 21 seconds and
another task has been running for two minutes.
jir97p344kol mysql.4 mysql:latest ip-172-31-29-67.ec2.internal
Running Running about a minute ago
5rly53mcc8yq mysql.5 mysql:latest ip-172-31-2-177.ec2.internal
Running Running 45 seconds ago
The environment variables added may be removed with another docker service update command, using the --env-rm option for each environment variable to be removed. Only the environment variable name is specified in --env-rm, not its value.
Another rolling update gets performed. All service tasks get shut down and new service tasks based
on the new service specification are started. The service definition lists only the mandatory environment
variable MYSQL_ROOT_PASSWORD.
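Resource limits and reserves may also be set in a rolling update; a sketch whose values correspond to the limits of 2 CPUs/512MB and the reservations of 1 CPU/256MB shown in the service specification that follows:
~ $ docker service update \
> --limit-cpu 2 --limit-memory 512mb \
> --reserve-cpu 1 --reserve-memory 256mb \
> mysql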
New resource limits and reserves are configured, as listed in the service specification. The
PreviousSpec indicates that no Resources Limits and Reservations are configured to start with.
"Resources": {
"Limits": {
"NanoCPUs": 2000000000,
"MemoryBytes": 536870912
},
"Reservations": {
"NanoCPUs": 1000000000,
"MemoryBytes": 268435456
}
},
... },
"PreviousSpec": {
...
"Name": "mysql",
"Resources": {
"Limits": {},
"Reservations": {}
},
"UpdateStatus": {
"State": "updating",
"StartedAt": "2017-07-25T19:23:44.004458295Z",
"Message": "update in progress"
}
}
]
Setting new resource limits and reserves is subject to node capacity limits. If the requested resources exceed the node capacity, the rolling update may continue to run and not get completed, with some tasks in the pending current state.
If some tasks are pending, adding resources to the Swarm could make the pending tasks run. We
can update the CloudFormation stack to increase the number of worker nodes from 2 to 3, as shown in
Figure 9-2.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
81h6uvu8uq0emnovzkg6v7mzg ip-172-31-2-177.ec2.internal Ready Active
e7vigin0luuo1kynjnl33v9pa ip-172-31-29-67.ec2.internal Ready Active
ptm7e0p346zwypos7wnpcm72d * ip-172-31-25-121.ec2.internal Ready Active Leader
t4d0aq9w2a6avjx94zgkwc557 ip-172-31-42-198.ec2.internal Ready Active
With increased resources in the Swarm, the pending tasks also start to run.
The mysql:latest image-based tasks start to get shut down and postgres image-based replacement
tasks begin to get started one task at a time. The rolling update does not get completed immediately and
listing the service tasks with the desired state as running lists some tasks based on the postgres:latest
image, while other tasks are still using the mysql:latest image.
One replica at a time, the mysql image-based replicas are shut down and postgres image-based
replicas are started. After about two minutes, all tasks have updated to the postgres:latest image.
The service name continues to be the same and the replica names also include the mysql prefix. The
mysql service definition ContainerSpec lists the image as postgres. Updating the image to postgres does
not imply that all other service definition settings are updated for the new image. The postgres image
does not use the MYSQL_ROOT_PASSWORD, but the environment variable continues to be in the service
specification.
The MYSQL_ROOT_PASSWORD environment variable may be removed with another update command.
Subsequently, the ContainerSpec does not include the MYSQL_ROOT_PASSWORD environment variable.
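A sketch of the env removal:
~ $ docker service update \
> --env-rm MYSQL_ROOT_PASSWORD \
> mysql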
"Env": [
"MYSQL_ROOT_PASSWORD=mysql"
],
... },
"UpdateStatus": {
"State": "updating",
"StartedAt": "2017-07-25T20:42:56.651025816Z",
"Message": "update in progress"
}
}
]
A rolling update to remove an environment variable involves shutting down all service tasks and
starting all new tasks. The update takes about two minutes to complete.
Listing the running tasks indicates that tasks have only been running two minutes at the maximum.
With the env variable MYSQL_ROOT_PASSWORD removed, the mysql service is fully updated to use the Docker image postgres. The service name itself cannot be updated. The service may be updated back to the mysql image and the mandatory environment variable MYSQL_ROOT_PASSWORD added with another rolling update.
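A sketch of such an update:
~ $ docker service update \
> --image mysql \
> --env-add MYSQL_ROOT_PASSWORD='mysql' \
> mysql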
Again, listing the replicas with a desired state as running lists the postgres image-based replicas being
replaced by mysql image-based replicas. One replica at a time, the postgres image-based replicas are
replaced by mysql image-based replicas.
Within a minute or two, all the postgres image replicas are replaced by mysql image-based replicas.
The service specification is updated to the mysql image and the mandatory environment variable
MYSQL_ROOT_PASSWORD is added. When the update has completed, the UpdateStatus State becomes completed.
... },
"PreviousSpec": {
"Name": "mysql",
"ContainerSpec": {
"Image": "postgres:latest@sha256:e92fe21f695d27be7050284229a1c8c63ac10d8
8cba58d779c243566e125aa34",
... },
"UpdateStatus": {
"State": "completed",
"StartedAt": "2017-07-25T20:45:54.104241339Z",
"CompletedAt": "2017-07-25T20:47:47.996420791Z",
"Message": "update completed"
}
}
]
Rolling Restart
Docker 1.13 added a new option, --force, to the docker service update command to perform a rolling restart even when no update is required based on the update options. As an example, starting with the mysql service with an update config of --update-parallelism 1 and --update-delay 20s, an update command that does not change any service options would not by itself perform a rolling update; adding the --force option forces a rolling restart even though no changes are being made to the service.
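A sketch of a forced rolling restart:
~ $ docker service update --force mysql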
Service tasks begin to get shut down and new service tasks are started even though no update is made
to the service specification. Some tasks are listed as having started a few seconds ago.
"UpdateStatus": {
"State": "completed",
"StartedAt": "2017-07-25T20:49:34.716535081Z",
"CompletedAt": "2017-07-25T20:51:36.880045931Z",
"Message": "update completed"
}
}
]
After the rolling restart has completed, the service has all new service tasks as shown.
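A mount may also be added with the --mount-add option of the docker service update command; a sketch in which the volume name and target directory are illustrative:
~ $ docker service update \
> --mount-add type=volume,source=mysql-scripts,target=/etc/mysql/scripts \
> mysql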
A mount is added to the service and is listed in the service definition. Adding a mount involves shutting
down all service tasks and starting new tasks. The rolling update could take 1-2 minutes.
"UpdateStatus": {
"State": "completed",
"StartedAt": "2017-07-25T20:51:55.205456644Z",
"CompletedAt": "2017-07-25T20:53:56.451313826Z",
"Message": "update completed"
}
}
]
The mount added may be removed with the --mount-rm option of the docker service update
command and by supplying only the mount destination directory as an argument.
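A sketch, removing the mount at the target directory used in the previous sketch:
~ $ docker service update \
> --mount-rm /etc/mysql/scripts \
> mysql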
Another rolling update is performed and the mount is removed. It does not get listed in the service
definition. The PreviousSpec lists the mount. The UpdateStatus indicates the status of the rolling update.
The rolling update is still started and the update status indicates that the update is paused. The update
status message indicates “update paused due to failure or early termination of task”.
Two options are available if a rolling update is paused due to update to a task having failed.
• Restart a paused update using docker service update <SERVICE-ID>.
• If an update failure is repeated, find the cause of the failure and reconfigure the
service by supplying other options to the docker service update <SERVICE-ID>
command.
We start a rolling update to the postgres image from the mysql image.
Subsequently, some of the tasks are based on the postgres image and some on the mysql image.
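While the update is still in progress, it may be rolled back with the --rollback option; a sketch:
~ $ docker service update --rollback mysql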
The postgres image-based tasks start to get shut down and the mysql image-based tasks are started.
The rolling update from mysql to postgres is rolled back. When the rollback has completed, all replicas
are mysql image-based, which is the desired state of the service to start with.
> --name mysql \
> mysql
7nokncnti3izud08gfdovwxwa
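The image may then be updated to the mysql:5.6 tag with the --image option; a sketch:
~ $ docker service update \
> --image mysql:5.6 \
> mysql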
The service is updated. The Spec>ContainerSpec>Image is updated to mysql:5.6 from the PreviousSpec>
ContainerSpec>Image of mysql:latest.
Within a minute, all the new service tasks based on mysql:5.6 are started.
A rolling update cannot be performed on a global service to set replicas with the --replicas option, as indicated by the message output by the docker service update command. As the output indicates, while replicas can be set on a replicated service such as mysql, replicas cannot be set on a global service.
Summary
This chapter discussed rolling updates on a service. A rolling update on a service involves shutting down
previous service tasks and updating the service definition to start new tasks. In the next chapter, we discuss
configuring networking in Swarm mode.
CHAPTER 10
Networking
Networking on a Docker Engine is provided by a bridge network, the docker0 bridge. The docker0 bridge is local in
scope to a Docker host and is installed by default when Docker is installed. All Docker containers run on a Docker
host and are connected to the docker0 bridge network. They communicate with each other over the network.
The Problem
The default docker0 bridge network has the following limitations:
• The bridge network is limited in scope to the local Docker host to provide container-
to-container networking and not for multi-host networking.
• The bridge network isolates the Docker containers on the host from external access.
A Docker container may expose one or multiple ports, and the ports may be published on the host for external client access, as illustrated in Figure 10-1, but by default the docker0 bridge does not provide any external client access outside the network.
Figure 10-1. Docker containers on the docker0 bridge network, with exposed ports published on the host interface for access by an external host
The Solution
The Swarm mode (Docker Engine >=1.12) creates an overlay network called ingress for the nodes in the
Swarm. The ingress overlay network is a multi-host network to route ingress traffic to the Swarm; external
clients use it to access Swarm services. Services are added to the ingress network if they publish a port.
The ingress overlay network has a default gateway and a subnet and all services in the ingress network
are exposed on all nodes in the Swarm, whether a service has a task scheduled on each node or not. In
addition to the ingress network, custom overlay networks may be created using the overlay driver. Custom
overlay networks provide network connectivity between the Docker daemons in the Swarm and are used for
service-to-service communication. Ingress is a special type of overlay network and is not for network traffic
between services or tasks. Swarm mode networking is illustrated in Figure 10-2.
Figure 10-2. Swarm mode networking: the ingress overlay network (example subnet 10.0.0.0/24, gateway 10.0.0.1) spans the Swarm nodes, and a custom Swarm overlay network provides service-to-service networking
The following Docker networks are used or could be used in Swarm mode.
If the <PUBLISHED-PORT> is omitted, the Swarm manager selects a port in the range 30000-32767 to
publish the service.
The following ports must be open between the Swarm nodes to use the ingress network.
• Port 7946 TCP/UDP is used for the container network discovery
• Port 4789 UDP is used for the container ingress network
Obtain the public IP address of the Swarm manager node, as shown in Figure 10-4.
Figure 10-4. Obtaining the public IP address of a Swarm manager node instance
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
npz2akark8etv4ib9biob5yyk ip-172-31-47-123.ec2.internal Ready Active
p6wat4lxq6a1o3h4fp2ikgw6r ip-172-31-3-168.ec2.internal Ready Active
tb5agvzbi0rupq7b83tk00cx3 * ip-172-31-47-15.ec2.internal Ready Active Leader
~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
34a5f77de8cf bridge bridge local
0e06b811a613 docker_gwbridge bridge local
6763ebad69cf host host local
e41an60iwval ingress overlay swarm
eb7399d3ffdd none null local
We discussed most of these networks in a preceding section. The "host" network is the networking
stack of the host. The "none" network provides no networking between a Docker container and the host
networking stack and creates a container without network access.
The default networks are available on a Swarm manager node and Swarm worker nodes even before
any service task is scheduled.
The listed networks may be filtered using the driver filter set to overlay.
Only the ingress network is listed. No other overlay network is provisioned by default.
The network of interest is the overlay network called ingress, but all the default networks are discussed
in Table 10-1 in addition to being discussed in the chapter introduction.
bridge: The bridge network is the docker0 network created on all Docker hosts. The Docker daemon connects containers to the docker0 network by default. Any Docker container started with the docker run command, even on a Swarm node, connects to the docker0 bridge network.
docker_gwbridge: Used for communication among Swarm nodes on different hosts. The network provides external connectivity to a container that lacks an alternative network for connectivity to external networks and other Swarm nodes. When a container is connected to multiple networks, its external connectivity is provided via the first non-internal network, in lexical order.
host: Adds a container to the host's network stack. The network configuration inside the container is the same as the host's.
ingress: The overlay network used by the Swarm for ingress, that is, external access. The ingress network is used only for the routing mesh/ingress traffic.
none: Adds a container to a container-specific network stack; the container lacks a network interface.
The default networks cannot be removed and, other than the ingress network, a user does not need to directly connect to or use them. To find detailed information about the ingress network, run the following command.
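The command is docker network inspect with the network name:
~ $ docker network inspect ingress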
The ingress network's scope is the Swarm and the driver used is overlay. The subnet and gateway are
10.255.0.0/16 and 10.255.0.1, respectively. The ingress network is not an internal network as indicated
by the internal setting of false, which implies that the network is connected to external networks. The
ingress network has an IPv4 address and the network is not IPv6 enabled.
"Name": "ip-172-31-47-123.ec2.internal-d6ebe8111adf",
"IP": "172.31.47.123"
},
{
"Name": "ip-172-31-3-168.ec2.internal-99510f4855ce",
"IP": "172.31.3.168"
}
]
}
]
The service is created and the service task is scheduled on one of the nodes.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
likujs72e46t mysql replicated 1/1 mysql:latest
The mysql service created is not added to the ingress network, as it does not publish a port.
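Next, a service that does publish a port may be created; a sketch that matches the listing that follows (three replicas of tutum/hello-world, with port 8080 published to container port 80):
~ $ docker service create \
> --name hello-world \
> --publish 8080:80 \
> --replicas 3 \
> tutum/hello-world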
The service creates three tasks, one on each node in the Swarm.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
l76ukzrctq22 hello-world replicated 3/3 tutum/hello-world:latest *:8080->80/tcp
~ $ docker service ps hello-world
ID NAME IMAGE NODE
DESIRED STATE CURRENT STATE ERROR PORTS
5ownzdjdt1yu hello-world.1 tutum/hello-world: latest ip-172-31-14-234.ec2.internal
Running Running 33 seconds ago
csgofrbrznhq hello-world.2 tutum/hello-world:latest ip-172-31-47-203.ec2.internal
Running Running 33 seconds ago
sctlt9rvn571 hello-world.3 tutum/hello-world:latest ip-172-31-35-44.ec2.internal
Running Running 32 seconds ago
The service may be accessed on any node instance in the Swarm on port 8080 using the <Public DNS>:
<8080> URL. If an elastic load balancer is created, as for Docker for AWS, the service may be accessed at
<LoadBalancer DNS>:<8080>, as shown in Figure 10-5.
Figure 10-5. Invoking a Docker service in the ingress network using EC2 elastic load balancer public DNS
The <PublishedPort> 8080 may be omitted in the docker service create command.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
pbjcjhx163wm hello-world replicated 3/3 tutum/hello-world:latest *:0->80/tcp
The Swarm manager automatically assigns a published port (30000), as listed in the docker service
inspect command.
"VirtualIPs": [
{
"NetworkID": "bllwwocjw5xejffmy6n8nhgm8",
"Addr": "10.255.0.5/16"
}
]
}
}
]
Even though the service publishes a port (30000 or other available port in the range 30000-32767),
the AWS elastic load balancer for the Docker for AWS Swarm does not add a listener for the published
port (30000 or other available port in the range 30000-32767). We add a listener with <Load Balancer
Port:Instance Port> mapping of 30000:30000, as shown in Figure 10-6.
Invoke the service at the <Load Balancer DNS>:<30000> URL, as shown in Figure 10-7.
> --driver overlay \
> mysql-network
mkileuo6ve329jx5xbd1m6r1o
The custom overlay network is created and listed in networks as an overlay network with Swarm scope.
~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
34a5f77de8cf bridge bridge local
0e06b811a613 docker_gwbridge bridge local
6763ebad69cf host host local
e41an60iwval ingress overlay swarm
mkileuo6ve32 mysql-network overlay swarm
eb7399d3ffdd none null local
Listing only the overlay networks should list the ingress network and the custom mysql-network.
The detailed information about the custom overlay network mysql-network lists the subnets and
gateways.
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": null,
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4097,4098"
},
"Labels": null
}
]
Only a single overlay network can be created for specific subnets, gateways, and IP ranges. Using a
different subnet, gateway, or IP range, a different overlay network may be created.
~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
34a5f77de8cf bridge bridge local
0e06b811a613 docker_gwbridge bridge local
6763ebad69cf host host local
e41an60iwval ingress overlay swarm
mkileuo6ve32 mysql-network overlay swarm
qwgb1lwycgvo mysql-network-2 overlay swarm
eb7399d3ffdd none null local
New overlay networks are made available only to worker nodes that have containers using the overlay. While the new overlay networks mysql-network and mysql-network-2 are available on the manager node, the networks are not extended to the two worker nodes. Log in with SSH to a worker node to verify.
The mysql-network and mysql-network-2 networks are not listed on the worker node.
~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
255542d86c1b bridge bridge local
3a4436c0fb00 docker_gwbridge bridge local
bdd0be4885e9 host host local
e41an60iwval ingress overlay swarm
5c5f44ec3933 none null local
To extend the custom overlay network to worker nodes, create a service in the network that runs a task
on the worker nodes, as we discuss in the next section.
The Swarm mode overlay networking is secure by default. The gossip protocol is used to exchange
overlay network information between Swarm nodes. The nodes encrypt and authenticate the information
exchanged using the AES algorithm in GCM mode. Manager nodes rotate the encryption key for gossip data
every 12 hours by default. Data exchanged between containers on different nodes on the overlay network
may also be encrypted using the --opt encrypted option, which creates IPSEC tunnels between all the
nodes on which tasks are scheduled. The IPSEC tunnels also use the AES algorithm in GCM mode and rotate
the encryption key for gossip data every 12 hours. The following command creates an encrypted network.
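A sketch of such a command, creating the overlay-network-2 network listed next:
~ $ docker network create \
> --driver overlay \
> --opt encrypted \
> overlay-network-2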
~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
34a5f77de8cf bridge bridge local
0e06b811a613 docker_gwbridge bridge local
6763ebad69cf host host local
e41an60iwval ingress overlay swarm
mkileuo6ve32 mysql-network overlay swarm
qwgb1lwycgvo mysql-network-2 overlay swarm
eb7399d3ffdd none null local
aqppoe3qpy6m overlay-network-2 overlay swarm
The mysql-2 service is created. Scale the mysql-2 service to three replicas and lists the service tasks for
the service.
Docker containers in two different networks for the two services—mysql (bridge network) and mysql-2
(mysql-network overlay network)—are running simultaneously on the same node.
A custom overlay network is not extended to all nodes in the Swarm until the nodes have service tasks that use the custom network. The mysql-network does not get extended to, and listed on, a worker node until a service task for mysql-2 has been scheduled on the node.
A Docker container managed by the default Docker Engine bridge network docker0 cannot connect
with a Docker container in a Swarm scoped overlay network. Using a Swarm overlay network in a docker
run command, connecting with a Swarm overlay network with a docker network connect command, or
linking a Docker container with a Swarm overlay network using the --link option of the docker network
connect command is not supported. The overlay networks in Swarm scope can only be used by a Docker
service in the Swarm.
For connecting between service containers:
• Docker containers for the same or different services in the same Swarm scoped
overlay network are able to connect with each other.
• Docker containers for the same or different services in different Swarm scoped
overlay networks are not able to connect with each other.
In the next section, we discuss an internal network, but before we do so, the external network should be
introduced. The Docker containers we have created as of yet are external network containers. The ingress
network and the custom overlay network mysql-network are external networks. External networks provide
a default route to the gateway. The host and the wider Internet network may connect to a Docker container
in the ingress or custom overlay networks. As an example, run the following command to ping google.com
from a Docker container’s bash shell; the Docker container should be in the ingress overlay network or a
custom Swarm overlay network.
A connection is established and data is exchanged. The command output is shown in italics.
> --driver overlay \
> hello-world-network
pfwsrjeakomplo5zm6t4p19a9
The internal network is created and listed just the same as an external network would be.
~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
194d51d460e6 bridge bridge local
a0674c5f1a4d docker_gwbridge bridge local
pfwsrjeakomp hello-world-network overlay swarm
03a68475552f host host local
tozyadp06rxr ingress overlay swarm
3dbd3c3ef439 none null local
Create a service that uses the internal network with the --network option.
~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
d365d4a5ff4c tutum/hello-world:latest "/bin/sh -c 'php-f..." About a minute ago
Up About a minute hello-world.3.r759ddnl1de11spo0zdi7xj4z
A connection is not established because the container is in an internal overlay network. A connection can be established between containers in the same internal network, as the limitation applies only to external connectivity. To demonstrate, obtain the container ID for another container in the same internal network.
~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
b7b505f5eb8d tutum/hello-world:latest "/bin/sh -c 'php-f..." 3 seconds ago
Up 2 seconds hello-world.6.i60ezt6da2t1odwdjvecb75fx
57e612f35a38 tutum/hello-world:latest "/bin/sh -c 'php-f..." 3 seconds ago
Up 2 seconds hello-world.7.6ltqnybn8twhtblpqjtvulkup
d365d4a5ff4c tutum/hello-world:latest "/bin/sh -c 'php-f..." 7 minutes ago
Up 7 minutes hello-world.3.r759ddnl1de11spo0zdi7xj4z
Connect between two containers in the same internal network. A connection is established.
If a service created in an internal network publishes (exposes) a port, the service gets added to the
ingress network and, even though the service is in an internal network, external connectivity is provisioned.
As an example, we add the --publish option of the docker service create command to publish the
service on port 8080.
~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
1c52804dc256 tutum/hello-world:latest "/bin/sh -c 'php-f..." 28 seconds ago
Up 27 seconds 80/tcp hello-world.1.20152n01ng3t6uaiahpex9n4f
Connect from the container in the internal network to the wider external network at google.com, as an
example. A connection is established. Command output is shown in italics.
Deleting a Network
A network that is not in use may be removed with the docker network rm <networkid> command. Multiple
networks may be removed in the same command. As an example, we can list and remove multiple networks.
~ $ docker network ls
NETWORK ID NAME DRIVER SCOPE
34a5f77de8cf bridge bridge local
0e06b811a613 docker_gwbridge bridge local
wozpfgo8vbmh hello-world-network swarm
6763ebad69cf host host local
e41an60iwval ingress overlay swarm
mkileuo6ve32 mysql-network overlay swarm
qwgb1lwycgvo mysql-network-2 overlay swarm
eb7399d3ffdd none null local
aqppoe3qpy6m overlay-network-2 overlay swarm
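The unused networks may then be removed in one command; a sketch (networks still in use by a service are not removed):
~ $ docker network rm mysql-network mysql-network-2 overlay-network-2 hello-world-network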
Networks that are being used by a service are not removed. The command output is shown in italics.
Summary
This chapter discussed the networking used by the Docker Swarm mode. The default networking used in Swarm mode is the overlay network ingress, which is a multi-host network spanning all Docker nodes in the same Swarm to provide a routing mesh, so that each node is able to accept ingress connections for services on published ports. Custom overlay networks may also be used by a Docker service, with the difference that a custom overlay network provides service-to-service communication instead of ingress communication and extends to a Swarm worker node only if a service task using the network is scheduled on the node. The chapter also discussed the difference between an internal and an external network. In the next chapter, we discuss logging and monitoring in Docker Swarm mode.
CHAPTER 11
Logging and Monitoring
Docker includes several built-in logging drivers for containers, such as json-file, syslog, journald, gelf,
fluentd, and awslogs. Docker also provides the docker logs command to get the logs for a container. Docker
1.13 includes an experimental feature for getting a Docker service log using the docker service logs
command.
The Problem
Docker Swarm mode does not include a native monitoring service for Docker services and containers. Also, the experimental feature to get service logs is a command-line feature and must be run per service. A logging service with which all the services' logs and metrics could be collected and viewed in a dashboard is lacking.
The Solution
Sematext is an integrated data analytics platform that provides SPM performance monitoring for metrics and events collection, and Logsene for log collection, including correlation between performance metrics, logs, and events. Logsene is a hosted ELK (Elasticsearch, Logstash, Kibana) stack. The Sematext Docker Agent is required to be installed on each node in the Swarm for continuously collecting logs, metrics, and events, as illustrated in Figure 11-1.
Figure 11-1. A Sematext agent on each Swarm node sends metrics and events to SPM and logs to Logsene
The procedure to use Sematext SPM and Logsene for logging and monitoring with a Docker Swarm is as follows.
1. Create an account at https://apps.sematext.com/ui/registration.
2. Log in to the user account at https://apps.sematext.com/ui/login.
3. Select the integrations (Logsene app and SPM Docker app) from https://apps.sematext.com/ui/integrations?newUser, as listed in Steps 4 and 5.
4. Create an SPM app (a performance monitoring app). An app is like a namespace for data. An SPM token is generated that is used to install a Sematext agent on each Swarm node.
5. Create a Logsene app. A Logsene token is generated that is also used to install a Sematext agent on each Swarm node.
6. Install a Sematext agent on each Swarm node. Docker Swarm metrics, logs, and events start getting collected in the SPM dashboard and the Logsene dashboard.
An SPM App is created, as shown in Figure 11-3. Several client configurations are listed.
Click on the Client Configuration tab for Docker Swarm, as shown in Figure 11-4. The Docker Swarm
tab displays the docker service create command to create a service for a Sematext Docker agent; copy the
command. The command includes a SPM_TOKEN, which is unique for each SPM app.
The SPM app is added to the dashboard, as shown in Figure 11-5. Click on the App link to navigate to
App Reports, which shows the monitoring data, metrics, and events collected by the SPM app and the charts
generated from the data.
As the message in Figure 11-6 indicates, the app has not received any data yet. All the metrics graphs are
empty initially, but they will display the graphs when data starts getting received.
Figure 11-6. The DockerSwarmSPM app has not received any data
In the Add Logsene App dialog, specify an application name (DockerSwarmLogsene) and click on Create
App, as shown in Figure 11-8.
A new Logsene application called DockerSwarmLogsene is created, as shown in Figure 11-9. Copy the
LOGSENE_TOKEN that’s generated, which we will use to create a Sematext Docker agent service in a Docker
Swarm.
Click on the DockerSwarmLogsene app link to display the log data collected by the app. Initially, the app
does not receive any data, as indicated by a message in Figure 11-11, because we have not yet configured
a Sematext Docker agent service on the Docker Swarm. The Logsene UI is integrated with the Kibana
dashboard.
Figure 11-11. The app does not receive any data at first
Select DockerSwarmSPM as the first app and DockerSwarmLogsene as the second app, as shown in
Figure 11-13. Then click on Connect Apps.
Figure 11-13. DockerSwarmLogsene
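The exact agent command, including the app-specific tokens, is generated by the Sematext UI; a typical form is sketched here, in which the docker.sock bind mount and the token placeholders are assumptions:
~ $ docker service create \
> --mode global \
> --name sematext-agent-docker \
> --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
> -e SPM_TOKEN=<SPM_TOKEN> \
> -e LOGSENE_TOKEN=<LOGSENE_TOKEN> \
> sematext/sematext-agent-docker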
A service for the Sematext Docker agent is created; it’s listed using docker service ls.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
oubjk53mpdnj sematext-agent-docker global 3/3 sematext/sematext-agent-docker:latest
List the service tasks. As this is a global service, one task gets started on each node.
If additional nodes are added to the Swarm, the Sematext Docker agent starts a service task on the new
nodes. As an example, update the CloudFormation stack to increase the number of manager nodes to three
and worker nodes to five, as shown in Figure 11-15.
The Swarm nodes are increased to three manager nodes and five worker nodes when the Stack update
is complete.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
8d0qv1epqu8xop4o2f94i8j40 ip-172-31-8-4.ec2.internal Ready Active
9rvieyqnndgecagbuf73r9gs5 ip-172-31-35-125.ec2.internal Ready Active Reachable
j4mg3fyzjtsdcnmr7rkiytltj ip-172-31-18-156.ec2.internal Ready Active
mhbbunhl358chah1dmr0y6i71 ip-172-31-7-78.ec2.internal Ready Active Reachable
r02ftwtp3n4m0cl7v2llw4gi8 ip-172-31-44-8.ec2.internal Ready Active
vdamjjjrz7a3ri3prv9fjngvy ip-172-31-6-92.ec2.internal Ready Active
xks3sw6qgwbcuacyypemfbxyj * ip-172-31-31-117.ec2.internal Ready Active Leader
xxyy4ys4oo30bb4l5daoicsr2 ip-172-31-21-138.ec2.internal Ready Active
Adding nodes to the Swarm starts a Sematext agent on the nodes that were added.
A MySQL database service (mysql) is also created and listed in addition to the Sematext Docker agent service; a sketch of how such a service could be created follows.
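This is an assumption-based sketch only; the replica count of 10 matches the listing below, and the root password value is a placeholder.

~ $ docker service create \
  --name mysql \
  --replicas 10 \
  -e MYSQL_ROOT_PASSWORD=mysql \
  mysql:latest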
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
oubjk53mpdnj sematext-agent-docker global 8/8 sematext/sematext-agent-docker:latest
rmy45fpa31tw mysql replicated 10/10 mysql:latest
The service tasks for the mysql service are also listed.
After the Sematext Docker agent service has been started on the Swarm and a MySQL database service
has been started, both the SPM and Logsene apps start receiving data, as indicated by the Data Received
column in the dashboard. See Figure 11-16.
The Docker container metrics—including Container Count, Container CPU, Container Disk, Container
Memory, and Container Network—may be displayed by selecting Docker in the navigation. The Docker
Container Count metrics are shown in Figure 11-18.
The Docker ➤ Container Network selection displays the network traffic received and transmitted, the
receive rate, and the transmit rate. The OS Disk Space Used may be displayed by choosing OS ➤ Disk. The
metrics collection granularity may be set to auto granularity (default), by month, by week, by day, by hour, by
5 minutes, or by 1 minute. The Logs Overview may be displayed using the Logs button.
Click the Refresh Charts button to refresh the charts if they are not set to auto-refresh, which is the
default.
Detailed logs are displayed using Logsene UI or Kibana 4, which we discuss in the next section.
Logsene collects all the Docker events, such as the Docker pull event for the mysql:latest image,
as shown in Figure 11-20.
Figure 11-20. Logs for Docker event for mysql image pull
Logs for another Docker event, a volume mount, are shown in Figure 11-21.
Summary
This chapter discussed continuous logging and monitoring of a Docker Swarm with Sematext SPM
performance monitoring and Logsene log management. First, you learned how to create an SPM app and a
Logsene app. Then you installed a Sematext agent service on each of the Swarm nodes and monitored the
metrics and events in an SPM dashboard. You also learned how to monitor the logs in the Logsene UI or a
Kibana 4 dashboard. The next chapter discusses load balancing in a Docker Swarm.
CHAPTER 12
Load Balancing
A Docker Swarm mode service provides a distributed application that may be scaled across a cluster of
nodes. Swarm mode provides internal load balancing among the different services in the Swarm based on
the DNS name of a service. Swarm mode also provides ingress load balancing among a service’s different
tasks if the service is published on a host port. Additionally, service tasks may be scheduled on specific
nodes using placement constraints.
Service Discovery
A Swarm has a DNS server embedded in it, and service discovery is based on DNS names. The Swarm manager assigns each service in the Swarm a unique DNS name entry and uses internal load balancing to distribute requests for the different services in the Swarm based on a service's DNS name, as sketched below.
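As a minimal sketch (not from the original text; the network and service names are illustrative), two services attached to the same overlay network can reach each other by service name:

docker network create --driver overlay app-net
docker service create --name nginx-svc --network app-net nginx:alpine
docker service create --name client --network app-net alpine sleep 3600
# From inside a task container of the client service (docker exec into it),
# the nginx-svc service resolves by its DNS name:
#   wget -qO- http://nginx-svc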
Custom Scheduling
Service replicas are scheduled on the nodes in a Swarm using the spread scheduling strategy by default.
A user may configure placement constraints for a service so that replicas are scheduled on specific nodes.
Scheduling using constraints is discussed in Chapter 6.
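As a reminder of the syntax (a sketch; the constraint and image are illustrative), a service may be restricted to worker nodes as follows:

docker service create \
  --name hello-world \
  --constraint node.role==worker \
  --replicas 3 \
  tutum/hello-world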
The Problem
Ingress load balancing is for distributing the load among the service tasks and is used even if a Swarm
consists of a single node. Ingress load balancing for a multi-node Swarm is illustrated in Figure 12-1. A client
may access any node in the Swarm, whether the node has a service task scheduled or not, and the client
request is forwarded to one of the service tasks using ingress load balancing.
Figure 12-1. Ingress load balancing for a multi-node Swarm
A single client accesses a single node and, as a result, the Swarm is under-utilized in terms of
distributing external client load across the Swarm nodes. The client load is not balanced across the Swarm
nodes. A single node does not provide any fault tolerance. If the node fails, the service becomes unavailable
to an external client accessing the service at the node.
The Solution
An AWS Elastic Load Balancer (ELB) is used to distribute client load across multiple EC2 instances. When used with Docker Swarm mode, an AWS Elastic Load Balancer distributes client load across the different EC2 instances that host the Swarm nodes. The external load balancer accesses (listens to) the Swarm on
each EC2 instance at the published ports for the services running in the Swarm using LB listeners. Each LB
listener has an LB port mapped to an instance port (a published port for a service) on each EC2 instance. An
ELB on a Swarm is illustrated in Figure 12-2.
Figure 12-2. An external load balancer (ELB) on a Swarm
Because a client does not access the service at a single host, the service does not become unavailable if a single node goes down; the external load balancer directs the client request to a different node in the Swarm. Even when all the nodes are available, the client traffic is distributed among the different nodes. As an example, a client could be served from one node at a particular time and from a different node shortly thereafter. Thus, an external load balancer serves two functions: load balancing and fault tolerance. Additionally, the cloud provider on which a Swarm is hosted may provide additional features, such as secure and elastic external load balancing. Elastic load balancing, as provided by the AWS Elastic Load Balancer, scales the request handling capacity based on the client traffic.
This chapter discusses load balancing with a user-created Swarm on CoreOS. It also discusses the
automatically provisioned elastic load balancer on Docker for AWS managed services.
Figure 12-3. CoreOS instances on EC2 for a manager and two worker nodes
SSH login into the manager node to initialize Swarm mode. Initializing Swarm mode on CoreOS and joining worker nodes to the Swarm is discussed in Chapter 2. Copy the docker swarm join command output to join the worker nodes to the Swarm. List the Swarm nodes with the docker node ls command.
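As a brief reminder of those Chapter 2 commands (a sketch; the advertise address and join token are placeholders):

core@manager ~ $ docker swarm init --advertise-addr <manager-private-ip>
# Run the docker swarm join command output by the init on each worker node:
core@worker ~ $ docker swarm join --token <worker-token> <manager-private-ip>:2377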
The <PUBLISHED-PORT> is the port exposed on the hosts and the <TARGET-PORT> is the port on which the
Docker container exposes the service. Using the tutum/hello-world Docker image, <PUBLISHED-PORT> as 8080,
<TARGET-PORT> as 80, and <SERVICE-NAME> as hello-world, run the following command to create the service.
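Reconstructed from those values, the command takes the following form; the replica count of 3 is inferred from the 3/3 replicas listed next.

docker service create \
  --name hello-world \
  --publish 8080:80 \
  --replicas 3 \
  tutum/hello-world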
The service is added to the ingress overlay network and the service is exposed at each node on the
Swarm, whether a service task is running on the node or not. The hello-world service lists 3/3 replicas.
List the service tasks using the docker service ps hello-world command and the three tasks are
listed as scheduled, one on each node.
core@ip-10-0-0-226 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
b73cbcd0c37e tutum/hello-world:latest "/bin/sh -c 'php-fpm " 34 seconds ago
Up 32 seconds 80/tcp hello-world.2.5g5d075yib2td8466mh7c01cz
core@ip-10-0-0-198 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
8bf11f2df213 tutum/hello-world:latest "/bin/sh -c 'php-fpm " 38 seconds ago
Up 36 seconds 80/tcp hello-world.1.di5oilh96jmr6fd5haevkkkt2
And the third Docker container is running on the other worker node.
core@ip-10-0-0-203 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
a461bfc8d4f9 tutum/hello-world:latest "/bin/sh -c 'php-fpm " 40 seconds ago
Up 38 seconds 80/tcp hello-world.3.5saarf4ngju3xr7uh7ninho0o
Similarly, to invoke the service at a worker node, obtain the public DNS of the worker instance from the
EC2 console and invoke the service in a web browser at the <PublicDNS>:<PublishedPort> URL, as shown
in Figure 12-5.
Similarly, to invoke the service at the other worker node, obtain the public DNS of the worker instance
from the EC2 console and invoke the service in a web browser at the <PublicDNS>:<PublishedPort> URL, as
shown in Figure 12-6.
While the external AWS Elastic Load Balancer distributes the load among the EC2 instances, the ingress
load balancer distributes the load among the service tasks. In the preceding example, the same service task
is invoked when the service is invoked at the Swarm manager instance and at a Swarm worker instance, as
indicated by the same hostname (Figures 12-4 and 12-6). This demonstrates the ingress load balancing.
A different service task could get invoked if the service is invoked at the same host. As an example,
invoke the service at the Swarm manager instance again. A different service task is served, as indicated by a
different hostname in Figure 12-7. This is in comparison to the hostname served earlier in Figure 12-4, again
demonstrating the ingress load balancing.
Figure 12-7. Different hostname served when invoking the service at the manager node again
AWS Elastic Load Balancing offers two types of load balancers—classic load balancers and application
load balancers. The classic load balancer routes traffic based on either application or network level
information whereas the application load balancer routes traffic based on advanced application-level
information. The classic load balancer should suffice for most simple load balancing of traffic to multiple
EC2 instances and is the one we use for Docker Swarm instances. Select the Classic Load Balancer and then
click on Continue, as shown in Figure 12-9.
In the Define Load Balancer dialog, specify a load balancer name (HelloWorldLoadBalancer) and select
a VPC to create the load balancer in, as shown in Figure 12-10. The VPC must exist prior to creating the load
balancer and must be where the EC2 instances to be load balanced are created. The load balancer protocol
is HTTP and so is the instance protocol, by default. Keeping the default setting of HTTP protocol, specify the
load balancer port and the instance port as 8080, because the Hello World service is exposed at port 8080.
In the Select Subnets tab, click on one or more subnets listed in the Available Subnets table. The subnets
are added to the selected subnets, as shown in Figure 12-11. Click on Next. To provide high availability,
select at least two subnets in different availability zones.
In the Assign Security Groups tab, select Create a New Security Group, as shown in Figure 12-12. In
Type, select Custom TCP Rule. Choose the TCP protocol and the port range as 8080. Select Anywhere for the
source and its value as 0.0.0.0/0. Click on Next.
Click on Next in Configure Security Settings, as we have not used the HTTPS or the SSL protocol. In the
Configure Health Check tab, select HTTP for the ping protocol and 8080 for the ping port. Specify the ping path
as /, as shown in Figure 12-13. Keep the defaults as is in the Advanced Details area and then click on Next.
Select the three Swarm instances listed, as shown in Figure 12-14. Also select Enable Cross-Zone Load
Balancing, which distributes traffic evenly across all backend instances in all availability zones. Click on Next.
In the Add Tags tab, no tags need to be added. In the Review tab, click on Create, as shown in Figure 12-15.
As indicated, the load balancer is an Internet-facing type.
Figure 12-15. Review your settings then create the load balancer
Obtain the DNS name of the load balancer from the EC2 console, as shown in Figure 12-17. Initially, the
status will be “0 of 3 instances in service” because the registration is still in progress.
After a while, the status should become “3 of 3 instances in service” and all the instances should be InService, as shown in Figure 12-18.
The Hello World service may be invoked from the <DNSname>:<LoadBalancerPort> URL in a web
browser, as shown in Figure 12-19.
The external elastic load balancer balances the load among the EC2 instances in the Swarm. Because
the ingress load balancer balances the load among the different service tasks, a different service task could
get invoked if the service is invoked at the ELB DNS name again, as shown in Figure 12-20.
An Internet-facing Elastic Load Balancer is created, as shown in Figure 12-22. The public DNS for the
load balancer may be used to access the Swarm, as discussed later.
Figure 12-22. Load balancer for the Swarm created with Docker for AWS
Select the Instances tab. All the instances in the Swarm, manager or worker, are listed. All the instances
should be InService, as shown in Figure 12-23.
Update the load balancer listeners in the Listeners tab to add/modify a listener with a load balancer
port set to 8080 and an instance port set to 8080, which is the published port for the Hello World service we
create, as shown in Figure 12-24.
Obtain the public IP address of one of the manager nodes from the EC2 console.
SSH login to the manager node.
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
8d0qv1epqu8xop4o2f94i8j40 ip-172-31-8-4.ec2.internal Ready Active
8eckb0twpbuoslfr58lbibplh ip-172-31-32-133.ec2.internal Ready Active
b6f18h4f3o44gkf5dhkzavoy3 ip-172-31-2-148.ec2.internal Ready Active
k9nl2zcmjzobbqu5c5bkd829g ip-172-31-21-41.ec2.internal Ready Active
p0d70jwh5vpjwximc1cpjfjkp * ip-172-31-1-130.ec2.internal Ready Active Leader
r02ftwtp3n4m0cl7v2llw4gi8 ip-172-31-44-8.ec2.internal Ready Active
rd8d0kksuts3aa07orhgkri3i ip-172-31-41-86.ec2.internal Ready Active Reachable
xks3sw6qgwbcuacyypemfbxyj ip-172-31-31-117.ec2.internal Ready Active Reachable
Create a Hello World service and expose the service at port 8080 (published port).
The hello-world service may be created without explicitly specifying a published port.
The Swarm manager automatically assigns a published port in the range 30000-32767; the default being
port 30000 if it’s available. The listener in the load balancer for the Docker for AWS Swarm may need to be
modified to add a mapping for the LoadBalancerPort:ServiceInstancePort, such as 30000:30000.
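As a sketch of that scenario (not from the original text), the service can be created with only the target port and then inspected to find the randomly assigned published port:

docker service create --name hello-world --replicas 3 --publish 80 tutum/hello-world
docker service inspect hello-world --format '{{json .Endpoint.Ports}}'

The PublishedPort value in the output (30000, for example) is the instance port to configure in the load balancer listener.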
Obtain the public DNS for the elastic load balancer, which gets created automatically, as shown in
Figure 12-25.
Figure 12-26. Accessing a Docker service at the elastic load balancer DNS
Summary
This chapter discussed load balancing in Swarm mode. An ingress load balancer is used to distribute
the load among a service’s tasks. Each service in a Swarm is assigned a DNS name and an internal load
balancer balances service requests among the services based on DNS name. We also created an external
load balancer for AWS EC2 instances to distribute load among the EC2 instances. Docker for AWS creates
an external load balancer automatically on AWS. In the next chapter we discuss developing a Docker Swarm
based highly available website.
CHAPTER 13
Developing a Highly Available Website
High availability of a website refers to a website being available continuously without service interruption.
A website is made highly available by provisioning fault tolerance into the Docker Swarm application. High
availability is provided at various levels. The ingress load balancer balances incoming client requests across
the multiple service tasks and provides fault tolerance at the tasks level. If one service task fails, client traffic
is routed to another service task. Using an external load balancer for a Docker Swarm hosted across multiple
availability zones is another method for providing high availability. An external load balancer provides fault
tolerance at the node level. If one node fails, client traffic is routed to service tasks on another node.
The Problem
Using an external load balancer such as an AWS Elastic Load Balancer provides fault tolerance across
multiple availability zones in an AWS region. The elastic load balancer may be accessed at its DNS name by
a client host, as illustrated in Figure 13-1. The Swarm is still not highly available, however, because failure of the single AWS region in which it runs would cause the website to become unavailable.
Figure 13-1. The elastic load balancer may be accessed at its DNS name by a client host
The Solution
Amazon Route 53 provides high availability with various DNS failover options, including active-active and
active-passive failover using alias resource record sets. Amazon Route 53 provides DNS failover across AWS
regions that are geographically spread, as illustrated in Figure 13-2. We use the Amazon Route 53 active-passive failover configuration, based on the primary-secondary architectural pattern, for the load balancer DNS names.
Figure 13-2. Amazon Route 53 provides DNS failover across AWS regions
A domain name must be registered to be used for creating an Amazon Route 53 hosted zone.
Each Docker Swarm has manager and worker nodes spread across the AWS availability zones in an AWS
region. The public IP of a manager node may be obtained from the EC2 console, as shown in Figure 13-5.
Using the public IP address for a manager node in the first Docker Swarm, SSH login to the manager
node EC2 instance.
Welcome to Docker!
~$
Create the other Docker Swarm in the Ohio AWS region as an example, as shown in Figure 13-6.
The regions may be different for different users.
Figure 13-6. CloudFormation stack for the Docker Swarm in one region
The Swarm node EC2 instances for the second Docker Swarm are also spread across the AWS availability
zones in the second AWS region, as shown in Figure 13-7. Obtain the public IP for a manager node.
List the Swarm nodes in a Docker Swarm with the docker node ls command.
~ $ docker node ls
~ $ docker service ls
Scale the service to 10 replicas to provide load distribution. Subsequently, list the services to verify that 10/10 replicas are running.
~ $ docker service scale hello-world=10
hello-world scaled to 10
~ $ docker service ls
Obtain the load balancer DNS for the first Docker Swarm from the EC2 dashboard, as shown in Figure 13-9.
Access the service at <DNS>:<LoadBalancerPort> in a web browser, as shown in Figure 13-10; the load
balancer port is set to 8080, the port at which the service is exposed.
woqx2ltuibv53ctmuvssrsq8j
~ $ docker service ls
hello-world scaled to 10
The service replicas are distributed across the Swarm nodes, as shown in Figure 13-11.
Obtain the DNS of the elastic load balancer for the second Swarm, as shown in Figure 13-12.
Figure 13-12. Obtaining the DNS name for the Swarm ELB
In the Create Hosted Zone dialog, specify a domain name (nosqlsearch.com). The domain name must
be registered with the user. Select Public Hosted Zone for the type, as shown in Figure 13-18.
A new public hosted zone is created, as shown in Figure 13-19. The name servers for the hosted zone
(by default, there are four) are assigned.
Add four name servers (collectively called a delegation set), as shown in Figure 13-21, for the domain for
which a hosted zone is to be created.
In the Create Record Set tab, the type should be set to A –IPv4 address, as shown in Figure 13-23. The
name of each record set ends with the domain name. Select Yes for Alias.
Next, select the alias target as the AWS Elastic Load Balancer DNS for one of the Docker Swarms, as
shown in Figure 13-24.
Select Failover for the routing policy. This configures DNS failover, as shown in Figure 13-26. Select
Failover Record Type as Primary.
For Associate with Health Check, select No. Click on Create, as shown in Figure 13-28.
A primary record set is created, as shown in Figure 13-29; “primary” implies that website traffic will be
first routed to the record set.
To create a secondary record set, click on Create Record Set again, as shown in Figure 13-30.
Select the type as A –IPv4 address and choose Yes for Alias. Select Alias Target as the second ELB DNS,
as shown in Figure 13-31.
Select the Failover routing policy and the secondary Failover Record Type, as shown in Figure 13-32.
Choose Yes for the Evaluate Target Health and No for the Associate with Health Check. Click on Create,
as shown in Figure 13-33.
The secondary record set is created; “secondary” implies that traffic is routed to the record set if the
primary record set fails, as shown in Figure 13-34. Click on Back to Hosted Zones.
The domain (nosqlsearch.com) is configured with four record sets, as shown in Figure 13-35.
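The same primary alias record set could alternatively be created with the AWS CLI; the following is only a sketch, and the hosted zone ID, ELB hosted zone ID, and ELB DNS name are placeholders.

aws route53 change-resource-record-sets --hosted-zone-id <hosted-zone-id> --change-batch '{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "nosqlsearch.com.",
      "Type": "A",
      "SetIdentifier": "primary",
      "Failover": "PRIMARY",
      "AliasTarget": {
        "HostedZoneId": "<elb-hosted-zone-id>",
        "DNSName": "<primary-elb-dns-name>",
        "EvaluateTargetHealth": true
      }
    }
  }]
}'

A second call with SetIdentifier "secondary", Failover "SECONDARY", and the second ELB as the alias target creates the secondary record set.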
To test high availability, delete the CloudFormation stack for the Docker Swarm associated with the
primary record set, as shown in Figure 13-37.
Click on Yes, Delete in the Delete Stack dialog. The stack should start to be deleted, as indicated by the
DELETE_IN_PROGRESS status shown in Figure 13-38.
The DNS fails over to the secondary resource record set and the domain continues to serve the Docker
service, as shown in Figure 13-39.
The hostname in the browser could become different if the request is forwarded to a different service
task replica, as shown in Figure 13-40. But the hostname could also become different regardless of whether
failover has been initiated, because the ingress load balancer distributes traffic among the different service
replicas.
Select the hosted zone to delete and click on Delete Hosted Zone, as shown in Figure 13-44.
Summary
This chapter developed a highly available website using an Amazon Route 53 hosted zone. First, we created
two Docker Swarms using the Docker for AWS managed service and deployed the same Docker service on
each. Each Docker Swarm service may be accessed using the AWS Elastic Load Balancer for the Docker
Swarm created automatically by Docker for AWS. A Route 53 hosted zone was created for a domain to route traffic to the two load balancer DNS names configured in the primary/secondary failover pattern. Subsequently, we
tested that if the Docker Swarm for the primary record set is shut down, the website is still available, as the
hosted zone routes the traffic to the secondary ELB DNS. In the next chapter we discuss using the Docker
Swarm mode in Docker Cloud.
CHAPTER 14
Using Swarm Mode in Docker Cloud
Docker for AWS is a managed service for Docker Swarm based on a custom Linux distribution, and
hosted on AWS with all the benefits inherent with being integrated with the AWS Cloud platform, such as
centralized logging with CloudWatch, custom debugging, auto-scaling groups, elastic load balancing, and a
DynamoDB database.
The Problem
While AWS is a managed cloud platform, it is not a managed service for Docker containers, images, and services per se. Docker image builds and tests still need to be integrated separately.
The Solution
Docker Cloud is a managed service to test code and build Docker images and to create and manage
Docker image repositories in the Docker Cloud registry. Docker Cloud also manages Docker containers,
services, stacks, nodes, and node clusters. A stack is a collection of services and a service is a collection of
containers. Docker Cloud is an integrated cloud service that manages builds and images, infrastructure,
and nodes and apps.
Docker Cloud also introduced a Swarm mode to manage Docker Swarms. In Swarm mode, Docker
Cloud is integrated with Docker for AWS. As a result, Docker Cloud Swarm mode is an integration of two
managed services—Docker for AWS and Docker Cloud.
Docker Cloud provides some Docker images to interact between a Docker Swarm and a Docker host
client, as discussed in Table 14-1.
In this chapter, we discuss the Docker Cloud Swarm mode to provision a Docker Swarm with
infrastructure hosted on AWS. This chapter covers the following topics:
• Setting the environment
• Creating an IAM role
• Creating a Docker Swarm in Docker Cloud
• Connecting to the Docker Swarm from a Docker host
• Connecting to the Docker Swarm from a Swarm manager
• Bringing a Swarm into Docker Cloud
Specify a role name (dockercloud-swarm-role), as shown in Figure 14-3, and click on Next Step.
The Select Role Type page is displayed, as shown in Figure 14-4. As we are linking two services—Docker
Cloud and Docker for AWS—we do not need to select an AWS service role.
Select Role for Cross-Account Access, as shown in Figure 14-5, and select the sub-choice called Provide
Access Between Your AWS Account and a 3rd Party AWS Account using the Select button.
Next, specify the account ID of the third party AWS account whose IAM users will access the AWS
account. A third-party AWS account has been set up for the Docker Cloud service and has an account ID of
689684103426, which may be used by anyone (AWS user) linking Docker Cloud service to their AWS account.
Specify the account ID as 689684103426, as shown in Figure 14-6. The external ID is a user’s Docker ID for
the Docker Cloud service account created at https://cloud.docker.com/. While the account ID will be the
same (689684103426) for everyone, the external ID will be different for different users. Keep the Require MFA
checkbox unchecked. Click on Next Step.
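Behind the scenes, this choice generates a trust relationship (assume role policy) for the role along the following lines; this is a sketch, and the external ID value is a placeholder for your Docker ID.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::689684103426:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "<your-docker-id>" } }
    }
  ]
}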
As we are embedding a custom policy, do not select from any of the listed policies in Attach Policy. Click
on Next Step, as shown in Figure 14-7.
A new AWS IAM role called dockercloud-swarm-role is created, as shown in Figure 14-9. Click on the
dockercloud-swarm-role role name.
Next, we will add an embedded (also called an inline) policy. The Permissions tab should be selected by
default. Click on the v icon to expand the Inline Policies section, as shown in Figure 14-10.
To start, no inline policies are listed. Click on the Click Here link to add an inline policy, as shown in
Figure 14-11.
Figure 14-11. Click on the Click Here link to add an inline policy
In Set Permissions, select Custom Policy using the Select button, as shown in Figure 14-12.
A policy document lists some permissions and the policy document for an IAM role to use Docker
for AWS may be obtained from https://docs.docker.com/docker-for-aws/iam-permissions/. Click on
Validate Policy to validate the policy, as shown in Figure 14-13.
A new inline policy is added for the dockercloud-swarm-role role, as shown in Figure 14-15.
Copy the Role ARN String listed in Figure 14-16, as we need the ARN string to connect to the AWS Cloud
provider from Docker Cloud.
Click on the Swarm Mode slider; the Swarm mode should be enabled, as shown in Figure 14-18.
Two options are available—Bring Your Own Swarm or Create a New Swarm. Click on Create to create a
new Swarm, as shown in Figure 14-20.
Next, we will configure the Swarm, including specifying a Swarm name, selecting a cloud provider, and
selecting cloud provider options. Two Cloud service providers are supported: Amazon Web Services (AWS)
and Microsoft Azure (not yet available). We use AWS in this chapter. We need to configure the cloud settings
for AWS with the ARN string we copied earlier. Cloud settings may be configured with one of the two options.
One option is to select Cloud Settings from the account, as shown in Figure 14-21.
In the Cloud Settings page, click on the plug icon that says Connect Provider for the Amazon Web
Services provider, as shown in Figure 14-22.
The other option to configure the Cloud settings is to click on the Amazon Web Service Service Provider
icon, as shown in Figure 14-24, which also displays the Add AWS Credentials dialog.
Specify the ARN string copied earlier in the Add AWS Credentials dialog and click on Save, as shown in Figure 14-25.
With either option, the service provider Amazon Web Services should be connected, as indicated by the
Connect Provider icon turning to Connected, as shown in Figure 14-26.
The Amazon Web Services option should indicate connected, as shown in Figure 14-27.
Specify a Swarm name. That name should not include any spaces, capitalized letters, or special characters other than ",", "-", and "_", as shown in Figure 14-28.
Specify a valid Swarm name (docker-cloud-swarm), select the Amazon Web Services Service provider,
which is already connected, and click on Create, as shown in Figure 14-29.
Figure 14-29. Creating a Docker Swarm using the AWS service provider
Select a region (us-east-2), the number of Swarm managers (3), the number of Swarm workers (5), the Swarm manager instance type (t2.micro), the agent worker instance type (t2.micro), and the SSH key. Click on Create, as shown in Figure 14-30.
The Swarm should start to get deployed, as indicated by the DEPLOYING message shown in Figure 14-31.
When the Swarm has been deployed, the message becomes Deployed, as shown in Figure 14-32.
The AWS infrastructure for the Swarm is created and configured. A CloudFormation stack is created, as
shown in Figure 14-33.
A new proxy AWS IAM role for the Swarm is added, as shown in Figure 14-34.
Figure 14-34. Proxy role and Docker Cloud Swarm AWS role
EC2 instances for the Swarm manager and worker nodes are started. Each EC2 instance is started with
the proxy IAM role created automatically, as shown for a manager node in Figure 14-35.
Each Docker Cloud account namespace must be associated with only one AWS IAM role. If multiple
Docker Cloud accounts are to access the same AWS account, multiple roles must be created for each Docker
Cloud account or Docker Cloud account namespace. Each AWS IAM role for Docker Cloud to access AWS is
associated with an ARN string. The ARN string for a deployed Swarm may be edited with the Edit Endpoint
link, as shown in Figure 14-36.
If the Swarm endpoint is to be modified, specify a new ARN string (for a different IAM role associated with
a different Docker Cloud namespace) in the Edit Endpoint dialog. Click on Save, as shown in Figure 14-37.
Next, we connect to the Docker Swarm. There are two ways to do so:
• Connect directly from any Docker host
• Obtain the public IP address of a Swarm manager from the EC2 dashboard and SSH
login to the Swarm manager
We discuss each of these options.
Figure 14-38. Listing and copying the docker run command to connect to the Swarm
Start an EC2 instance with CoreOS AMI, which has Docker pre-installed, as shown in Figure 14-39.
Obtain the public IP address of the CoreOS instance from the EC2 console, as shown in Figure 14-40.
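SSH into the CoreOS instance and run the command copied from Docker Cloud (Figure 14-38). The exact command should be taken from the Docker Cloud UI; it typically resembles the following sketch, in which the namespace/Swarm name argument is a placeholder.

docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dockercloud/client \
  <docker-id>/<swarm-name>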
The dockercloud/client Docker image that’s used to connect to Docker Cloud is downloaded.
A username and password prompt should be displayed. Specify the username and password for the Docker
Cloud account in which the Swarm was created.
Run the command. The Swarm is connected to the CoreOS Docker host. List the Swarm nodes using the
docker node ls command.
>export DOCKER_HOST=tcp://127.0.0.1:32768
>docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
The Swarm manager is logged in and the Swarm command prompt is displayed.
Welcome to Docker!
ID HOSTNAME STATUS
Create a service using the docker service create command and list the service with docker service ls.
The hello-world service is created. A Docker Cloud server proxy service is also listed.
~ $ docker service create \
>   --name hello-world \
>   --replicas 1 \
>   tutum/hello-world
hbiejbua8u50skabun3dzkxk4
~ $ docker service ls
Copy the docker swarm join command output to join the worker nodes.
~ $ docker --version
--token SWMTKN-1-23snf1iuieafnyd1zzgf37ucwuz1.khg9atqsmysmvv6iw1.arw0-do29n83jptkkdwss5fjsd3rt \
172.31.23.196:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the
instructions.
Join a worker node on another EC2 instance with Docker 1.13 or later.
A Swarm with two nodes is created, as listed in the output to the docker node ls command, which
runs on the Swarm manager node.
~$ docker node ls
HOSTNAME STATUS
~$
Next, import the Swarm into Docker Cloud. From the Swarm manager node, run the following
command.
Specify the Docker ID at the username prompt and the password at the password prompt.
Specify a cluster name for the Swarm imported into Docker Cloud, or use the default. Specify cluster
as dvohra/dockercloudswarm. The Swarm is registered with Docker Cloud. As for a Swarm created in the
Docker Cloud Swarm mode, the Swarm may be accessed from any Docker host for which a command is
output.
You can now access this cluster using the following command in any Docker Engine
To bring the Swarm into Docker Cloud, click on the Bring Your Own Swarm button in Swarm mode, as
shown in Figure 14-42.
The Swarm registered with Docker Cloud is added to the Docker Cloud Swarms, as shown in Figure 14-43.
Summary
This chapter introduced the Docker Cloud Swarm mode, which is a managed service for linking the Docker
Cloud managed service to an AWS service provider account and provisioning a Swarm from Docker Cloud.
A Swarm created on the command line can be imported into Docker Cloud. In the next chapter we discuss
Docker service stacks.
CHAPTER 15
Using Service Stacks
The Docker Swarm mode is Docker-native as of Docker 1.12 and is used to create distributed and scalable
services for developing Docker applications.
The Problem
While single Docker image applications are also commonly used, a vast majority of Docker enterprise
applications are comprised of multiple images that have dependencies between them. Docker Compose
(standalone in v1 and v2) could be used to declare dependencies between microservices using the links
and depends_on options, but Compose (standalone) is archaic, other than the format for defining services, in
the context of Swarm mode services.
The Solution
Docker Swarm mode has introduced service stacks to define a collection of services (Swarm mode services) that
are automatically linked with each other to provide a logical grouping of services with dependencies between
them. Stacks use stack files that are YAML files in a format very much like the docker-compose.yml format.
There are a few differences such as the absence of links and depends_on options that were used to define
dependencies between microservices in Docker Compose (standalone). YAML (http://www.yaml.org/) is a
data serialization format commonly used for configuration files.
As of Docker v1.13, the docker stack subset of commands has been introduced to create a Docker
stack. Using a stack file that defines multiple services, including services’ configuration such as environment
variables, labels, number of containers, and volumes, a single docker stack deploy command creates a
service stack, as illustrated in Figure 15-1. The services are automatically linked to each other.
Figure 15-1. Service stack created with the docker stack deploy command
Docker Compose versions 3.x and later are fully Docker Swarm mode compatible, which implies
that a Docker Compose v3.x docker-compose.yml file could be used as a Stack file except for a few sub-
options (including build, container_name, external_links, and links) that are not supported in a stack
file. Docker Compose 3.x could still be used standalone to develop non-Swarm mode services, but those
microservices are not usable or scalable with the Docker Swarm mode docker service group of commands.
To use stacks to manage Swarm mode services, the following requirements must be met.
• Docker version must be 1.13 or later
• Swarm mode must be enabled
• Stack file YAML format must be based on Docker Compose v3.x file format
To use service stacks, the Docker Compose version 3 YAML file format is used, but Docker Compose is
not required to be installed.
When using Docker Swarm mode, the Docker version requirement for Swarm mode is 1.12 or later.
Before developing stacks to manage Swarm mode services, verify that the Docker version is at least 1.13.
The Docker version used in this chapter is 17.0x. The docker stack group of commands listed in Table 15-1
becomes available in Docker v1.13 and later.
Command Description
deploy Deploys a service stack or updates an existing stack
ls Lists the stacks
ps Lists the Swarm mode tasks in a stack
rm Removes a stack
services Lists the Swarm mode services in a stack
Run the docker --version command to list the Docker version. To list the commands for stack usage,
run the docker stack command.
Options:
--help Print usage
Commands:
deploy Deploy a new stack or update an existing stack
ls List stacks
ps List the tasks in the stack
rm Remove one or more stacks
services List the services in the stack
Figure 15-2. Deploying the Docker Community Edition for AWS (stable)
Configure a Swarm using the Create Stack wizard as discussed in Chapter 3. You can specify the number
of swarm managers to be 1, 3, or 5 and the number of Swarm worker nodes to be 1-1000. We used one
Swarm manager node and two Swarm worker nodes, as shown in Figure 15-3.
Three EC2 instances—one for the Docker Swarm manager node and two for the Swarm worker nodes—are launched, as shown in Figure 15-5. The Linux distribution used by the CloudFormation stack is Moby Linux, also shown in Figure 15-5.
Figure 15-5. The Moby Linux AMI used for Docker on AWS
Before being able to use Docker on AWS, enable all inbound/outbound traffic between the EC2
instances in the security groups used by the EC2 instances. This is shown for the security group for Swarm
manager node instance inbound rules in Figure 15-6.
Figure 15-6. The security group inbound rules are enabled for all traffic
Obtain the public IP address of the Swarm manager EC2 instance from the AWS management console, as shown in Figure 15-7. Using the key pair used to create the CloudFormation stack, SSH login into the Swarm manager instance.
docker node ls
~ $ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER
STATUS
bf4ifhh86sivqp03ofzhk6c46 ip-172-31-21-175.ec2.internal Ready Active
ozdhl0jtnricny1y95xbnhwtq ip-172-31-37-108.ec2.internal Ready Active
ud2js50r4livrqf3f4l30fv9r * ip-172-31-19-138.ec2.internal Ready Active Leader
Test the Swarm mode by creating and listing a Hello World service.
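The helloworld service shown in the listing below could have been created with a command along these lines (a sketch; the ping target is an assumption):

docker service create --name helloworld --replicas 2 alpine ping docker.com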
docker service ls
The docker service command output indicates that the Docker Swarm service is created and listed.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
q05fef2a7cf9 helloworld replicated 2/2 alpine:latest
~ $
If we were to create a WordPress blog using the wordpress and mysql images with the docker run
command, we would create Docker containers for each of the Docker images separately and link the
containers using the --link option. If we were to use Docker Compose (standalone), we would need to add a
links or depends_on sub-option in the Docker Compose file.
Next, specify the Docker images and environment variables to the stack file for creating a service stack.
To use the Docker Compose YAML file format for Swarm mode stacks, specify the version in the stack file as
3 or a later version such as 3.1. The docker-cloud.yml file is listed:
version: '3'
services:
  web:
    image: wordpress
    links:
      - mysql
    environment:
      - WORDPRESS_DB_PASSWORD="mysql"
    ports:
      - "8080:80"
  mysql:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD="mysql"
      - MYSQL_DATABASE="mysqldb"
The ports mapping of 8080:80 maps the WordPress Docker container port 80 to the host port 8080. Any
stack file options, such as links that are included in the preceding listing that are not supported by docker
stack deploy, are ignored when creating a stack. Store the preceding listing as docker-cloud.yml in the
Swarm manager EC2 instance. Listing the files in Swarm manager should list the docker-cloud.yml file.
~ $ ls -l
total 4
-rwxr-x--- 1 docker docker 265 Jun 17 00:07 docker-cloud.yml
Having configured a stack file with two services, next we will create a service stack.
Creating a Stack
The docker stack deploy command is used to create and deploy a stack. It has the following syntax.
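Per the Docker CLI reference, the syntax is:

docker stack deploy [OPTIONS] STACK

The stack file is specified with the --compose-file (or -c) option.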
Using the stack file docker-cloud.yml, create a Docker stack called mysql with the docker stack
deploy command.
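With the stack file docker-cloud.yml and the stack name mysql, the command takes this form:

~ $ docker stack deploy --compose-file docker-cloud.yml mysql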
A Docker stack is created and the links option, which is not supported in Swarm mode, is ignored. Two
Swarm services—mysql_mysql and mysql_web—are created in addition to a network mysql_default.
Listing Stacks
List the stacks with the following command.
docker stack ls
The mysql stack is listed. The number of services in the stack also are listed.
~ $ docker stack ls
NAME SERVICES
mysql 2
Listing Services
List the services in the mysql stack using the docker stack services command, which has the following
syntax.
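The syntax, per the Docker CLI reference:

docker stack services [OPTIONS] STACK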
To filter the services, add the --filter option. To filter multiple services, add multiple --filter
options, as shown in the following command.
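With the two service names created earlier, the command takes this form:

~ $ docker stack services --filter name=mysql_mysql --filter name=mysql_web mysql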
The filtered stack services are listed. As both services are specified using --filter, both services are listed.
The services created by a stack are Swarm services and may also be listed using the following command.
docker service ls
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE
ixv0ykhuo14c mysql_mysql replicated 1/1 mysql:latest
sl2jmsat30ex helloworld replicated 2/2 alpine:latest
vl7ph81hfxan mysql_web replicated 1/1 wordpress:latest
To list all Docker containers in the mysql stack, run the following command.
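For the mysql stack, that command is:

~ $ docker stack ps mysql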
By default, one replica is created for each service, so one Docker container for each service in the stack
is listed. Both Docker containers are running on a Swarm worker node.
Use the -f option to filter the Docker containers to list only the mysql_web.1 container, as shown below.
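A sketch of the filtered command:

~ $ docker stack ps -f "name=mysql_web.1" mysql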
List all the running containers by setting the desired-state filter to running.
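A sketch of the command with the desired-state filter:

~ $ docker stack ps -f "desired-state=running" mysql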
Open the <public dns>:8080 URL in a browser. The <public dns>:8080/wp-admin/install.php URL is displayed to start the WordPress installation. Select Continue. Specify a site title, username, password, and e-mail, and whether to discourage search engines from indexing the website. Then click on Install WordPress, as shown in Figure 15-9.
Specify a username and password and click on Log In, as shown in Figure 15-11.
To add a new post, select Posts and click on Add New, as shown in Figure 15-13.
In the Add New Post dialog, specify a title and add a blog entry. Click on Publish, as shown in Figure 15-14.
The new post is added. Click on View Post, as shown in Figure 15-15, to display the post.
Removing a Stack
The docker stack rm STACK command is used to remove a stack. Remove the mysql stack using the
following command.
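That command is:

~ $ docker stack rm mysql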
The mysql stack is removed and the docker stack services mysql command does not list any services for the stack, as shown in the output from the command.
Summary
This chapter introduced stacks, a Docker-native feature added in Docker 1.13. A stack is a collection of
related services and is created using a stack file, which is defined in YAML format similar to the Docker
Compose v3.x YAML syntax. This chapter concludes this book about Docker management design patterns.
As new features are added to Docker, other design patterns may be used for developing Docker-native
applications.