Docker Ecosystem Vulnerability Analysis
Abstract
Cloud-based infrastructures have typically leveraged virtualization. However, the need for ever-shorter development cycles, continuous delivery, and cost savings in infrastructures led to the rise of containers. Indeed, containers provide faster deployment than virtual machines and near-native performance. In this paper, we study the security implications of the use of containers in typical use-cases, through a vulnerability-oriented analysis of the Docker ecosystem. Among all container solutions, Docker is currently leading the market; more than a container solution, it is a complete packaging and software delivery tool. We provide several contributions: we first present a thorough survey of related work in the area, organized into security-driven categories, and we then analyze the container security ecosystem. In particular, using a top-down approach, we identify several vulnerabilities in the different components of the Docker environment, either present by design or introduced by some original use-cases. Moreover, we detail real-world scenarios where these vulnerabilities could be exploited, propose possible fixes, and finally discuss the adoption of Docker by PaaS providers.
Keywords:
Security, Containers, Docker, Virtual Machines, DevOps, Orchestration.
Roadmap. This paper is organized as follows: in Section 2, we provide the background information for virtualization and its alternatives. In Section 3 we survey the related work in the area, organizing it in security-driven categories. We then focus on Docker's architecture in Section 4, including its ecosystem. In Section 5 we outline Docker's security architecture. In Section 6, we present Docker's use cases from several points of view and show how they differ from VM and other containers' (Linux-VServer, OpenVZ, etc.) use cases. From these typical use cases we build, in Section 7, a vulnerability-oriented risk analysis, classifying vulnerabilities into five categories. Finally, in Section 8, we discuss the implications of these vulnerabilities in a cloud-based infrastructure and the issues they trigger. Conclusions are drawn in Section 9.

2. Technology background

Cloud applications have typically leveraged virtualization, which can also be used as a security component [50] (e.g., to provide monitoring of VMs, allowing easier management of the security of complex clusters, server farms, and cloud computing infrastructures). Given some fundamental constraints such as performance overhead, flexibility, and scalability, alternatives to virtualization have emerged: unikernels as well as containers. All these approaches are summarized in Fig. 1, and detailed below.

Figure 1: Application runtime models

2.1. Virtual machines (VM)

Virtual machines are the most common way of performing cloud computing: they are fully functional OSs, running on top of an emulated hardware layer provided by the underlying hypervisor. The hypervisor can either run directly on hardware (Fig. 1b, e.g., Xen) or on a host OS (Fig. 1c, for instance KVM). VMs can be cloned, installed within minutes, and booted within seconds, hence allowing them to be stacked and managed with centralized tools. However, the presence of two operating systems (host and guest) along with an additional virtual hardware layer introduces a significant performance overhead. Hardware support for virtualization dramatically reduces this overhead, but performance remains far from bare metal, especially for I/O operations [64] [56] [62].

2.2. Containers

Containers (Fig. 1d) provide near bare-metal performance as opposed to virtualization (Fig. 1a and 1b) [64] [56] [62], with the further possibility of seamlessly running multiple versions of applications on the same machine. For instance, new instances of containers can be created quasi-instantly to face a customer demand peak. Containers have existed for a long time under various forms, which differ by the level of isolation they provide. For example, BSD jails [48] and chroot can be considered as an early form of container technology. As for recent Linux-based container solutions, they rely on kernel support, a userspace library to provide an interface to syscalls, and front-end applications. There are two main kernel implementations: the LXC-based implementation, using cgroups and namespaces, and the OpenVZ patch. The most popular implementations and their dependencies are shown in Table 1.

Containers may be integrated in a multi-tenant environment, thus leveraging resource sharing to increase average hardware use. This goal is achieved by sharing the kernel with the host machine. Indeed, in opposition to VMs, containers do not embed their own kernel but run directly on the host kernel. This shortens the syscall execution path by removing the guest kernel and the virtual hardware layer. Additionally, containers can share software resources (e.g., libraries) with the host, avoiding code duplication. The absence of a kernel and of some system libraries (provided by the host) makes containers very lightweight (image sizes can shrink to a few megabytes). It makes the boot process very fast (about one second [65]). This short startup time is convenient to spawn containers on-demand or to quickly move a service, for instance when implementing Network Function Virtualization (NFV). The deployment of such containers —agnostic of each other even though running on the same shared kernel— requires isolation.

In Sections 4, 5, 6 and 7, we will discuss cgroups- and namespaces-based containers, and especially Docker's containers. Indeed, Docker's popularity, coupled with the extended privileges on the machines it runs on, makes it a high-payoff target for any adversary; that is why we concentrate on the vulnerabilities it is subject to, and on possible countermeasures.

2.3. Unikernels

Unikernels consist of very lightweight operating systems (OS), specifically designed to run in VMs (Fig. 1e). Unikernels were originally designed to run on the Xen hypervisor [34]. They are more performant than classical VMs because they shorten the syscall execution path: they do not embed drivers and the hypervisor does not emulate a hardware layer. The interaction is achieved through a specific API, enabling optimizations that were impossible with the legacy model where an unmodified kernel was running on top of emulated hardware [37]. However, these modifications make unikernels dependent on the hypervisor they run on. Indeed, most current unikernel developments are bound to the Xen hypervisor. Furthermore, unikernels are usually designed to run programs written in a specific programming language. Thus, they only embed the libraries needed by this specific language and are optimized for it (e.g., HaLVM for Haskell, OSv for Java). This decreases the induced overhead relative to a VM. A detailed performance comparison of OSv against Docker and KVM is available [56]. A study on this topic [37] shows that unikernels achieve better performance than VMs, and address some of the security concerns containers suffer from. This is possible since applications running in unikernels do not share the host OS kernel. The study concludes that unikernel implementations are not mature enough for widespread deployment in production. However, latest developments consider unikernels as a serious competitor to containers in the longer term [54] [53], both for security reasons and for their significantly short startup time.

2.4. Comparison

Each of these alternatives provides a different trade-off between multiple factors, including performance, isolation, boot time, storage, OS adherence, density and maturity. Most of these factors are performance-related, but security and maturity of the solutions are also at stake. According to the relative importance of these factors, and driven by the intended use, one specific solution may be preferred. On the one hand, traditional VMs are quite resource-consuming and slow to boot and deploy. However, they provide a strong isolation that has been experienced in production for many years. On the other hand, containers are lightweight, very fast to deploy and boot, impose a low performance overhead —at the price of less isolation— and provide a new environment with almost no feedback from production uses. Unikernels try to achieve a compromise between VMs and containers, providing a fast and lightweight —but still experimental— execution environment running in a hypervisor.

3. Related work

In this section we provide a thorough survey on related work in the area. We first describe the studies that provide a comparison between containers and virtualization techniques. Then we organize the related work into security-driven groups according to the type of security contribution provided (i.e., security aspects of containers, defense against specific attacks, vulnerability analysis, and use-case driven vulnerability analysis).
Table 1: Container solutions
Container and virtualization comparison
A comparison between containers and virtualization is provided in [43]. This article is a general-purpose survey of containers, presenting container concepts, pros and cons, and detailing the features of some container solutions. It concludes on the prediction (the paper dates back to 2014) that containers will be a central technology for future PaaS solutions. However, the comparison is essentially made on a performance basis, and the only security concern mentioned is the need for a better isolation between containers. A thorough performance evaluation of virtualization and containerization technologies —Docker, LXC, KVM, OSv (unikernel)— is provided in [56]. The authors use multiple benchmark tests (Y-Cruncher, NBENCH, Noploop, Linpack, Bonnie++, STREAM, Netperf) to assess CPU, memory and network throughput and disk I/O in different conditions. They show that containers are significantly better than KVM in network and disk I/O, with performance almost equal to native applications. They conclude by mentioning security as the trade-off of performance for containers. A similar performance evaluation is made in [44].

Security aspects of containers
The security aspects of containers are discussed in more detail in [38]. This article details Docker's interaction with the underlying system: on the one hand, internal security relying on namespaces and cgroups, intended and achieved isolation, and per-namespace isolation features; on the other hand, operating system-level security, including host hardening (with a short presentation of AppArmor and SELinux) and capabilities. The authors insist on the need to run containers with the least privilege level (e.g., non-root) and conclude that with this use and the default Docker configuration, containers are fairly safe, providing a high level of isolation.
A security-driven comparison among operating system-level virtualization systems is provided in [58]. In the approach of OS-level virtualization, a number of distinct user space instances (often referred to as containers) are executed on top of a shared operating system kernel. The authors propose a generic model for a typical OS-level virtualization setup with the associated security requirements, and compare a selection of OS-level virtualization solutions with respect to this model.
Bacis et al. [35] focus on SELinux profile management for containers. The authors propose an extension to the Dockerfile specification to let developers include the SELinux profile of their container in the built image, to simplify its installation on the host. This solution attempts to address the problem that the default SELinux Docker profile gives all containers the same type, and all Docker objects the same label, so that it does not protect containers from other containers [55].

Defense against specific attacks
With the widespread usage of containerization technology, numerous articles related to specific defenses have come to light. Given that containers can directly communicate with the kernel of the host, an attacker can perform several escape attacks in order to compromise both the container and the host environment. Once out of the container, she has the potential to cause serious damage by obtaining rights through privilege escalation techniques or by blocking the system (and therefore all the hosted containers) by means of DoS attacks.
In [47], the authors analyze Docker escape attacks and propose a defense method that exploits the namespace status. This method provides a dynamic detection of the namespace status at runtime (i.e., during the execution of the processes). The resulting monitoring is able to detect anomalous processes and can prevent their escape behaviours, increasing the security and the reliability of container operations. In [40] the authors propose a technique based on limiting container memory to reduce the Docker attack surface and protect the container technology from DoS attacks.

Vulnerability Analysis
In [45], the authors provide an analysis of the content of the images available for download on the Docker Hub, from a security point of view. The authors show that a significant amount of official and unofficial images on the Docker Hub embed packages with known security vulnerabilities, which can therefore be exploited by an attacker. Detailed results show that 36% of official images contain high-priority CVE vulnerabilities, and 64% contain medium or high-priority vulnerabilities. Another recent work related to the vulnerabilities inside the images of the Docker Hub is described in [61]. The authors of the paper analyze the Docker Hub images by using the DIVA framework (Docker Image Vulnerability Analysis). With the analysis of exactly 356,218 images they show that both official and unofficial images have more than 180 vulnerabilities on average. Furthermore, many images have not been updated for hundreds of days and the vulnerabilities commonly tend to propagate from parent images to child ones. Lu et al. [52] study the typical penetration testing process in a Docker environment, related to common attacks such as DoS, container escape and side channel. A. Mouat, in [57], provides an overview of some container vulnerabilities, such as kernel exploits, DoS attacks, container breakouts, poisoned images, and compromising secrets. The study describes the Docker technology and provides security tips in order to limit the related attack surface. Although some vulnerabilities in our paper are common to those of the book, our work approaches them from a different point of view (e.g., both works analyze the vulnerabilities inside the images, but only ours considers the vulnerabilities due to the automatic construction of images from the software development platform GitHub). Besides, in our work we study the security implications of the use of containers taking into account the typical use-cases.

(the Docker Hub: Fig. 2 - component c), and other unofficial repositories (Fig. 2 - component d), along with a trademark (Docker Inc.) and bindings with third-party applications (Fig. 2 - component e). The build process implies fetching code from external repositories (containing the packages that will be embedded in the images: Fig. 2 - component g). An orchestrator (Fig. 2 - component f) can be used for managing the lifecycle of the operational infrastructure.
The Docker project is written in the Go language and was first released in March 2013. Since then, it has experienced an explosive diffusion and widespread adoption [21].
Figure 3: Example of image inheritance trees

layer, starting from a base image (generally a lightweight Linux distribution). This way, images are organized in trees and each image has a parent, except for base images, which are the roots of the trees (Fig. 3). This structure allows shipping in an image only the modifications specifically related to that image (app payload). Therefore, if many images on a host inherit from the same base image, or have the same dependencies, these will be fetched only once from the repositories. Additionally, if the local storage driver allows it (with a union file-system, i.e., a read-only file system and some sort of writable overlay on top [63]), they will be stored only once on the disk, leading to substantial resource savings. The detailed specification for Docker images and containers can be found at [20].

Image metadata contain information about the image itself (e.g., ID, checksum, tags, repository, author...), about its parent (ID), along with (optional) default runtime parameters (e.g., port redirections, cgroups configuration). These parameters can be overridden at launch time by the docker run command.

The build of images can be done in two ways. It is possible to launch a container from an existing image (docker run), perform modifications and installations inside the container, stop the container and then save the state of the container as a new image (docker commit). This process is close to a classical VM installation, but has to be performed at each image rebuild (e.g., for an update); since the base image is standardized, the sequence of commands is exactly the same. To automate this process, Dockerfiles allow specifying a base image and a sequence of commands to be performed to build the image, along with other options specific to the image (e.g., exposed ports, entry point...). The image is then built with the docker build command, resulting in another standardized tagged image that can be either run or used as a base image for another build. The Dockerfile reference is available in [25].
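To make the Dockerfile-based build flow above concrete, the sketch below drives the docker build and docker run commands from a small Go program; the image name, the base image and the installed package are hypothetical examples, not taken from this paper.

```go
// build_image.go - illustrative sketch of the Dockerfile-based build flow.
// Image name, base image and installed package are hypothetical examples.
package main

import (
	"log"
	"os"
	"os/exec"
)

// A minimal Dockerfile: start from a base image, add a payload, set the entry point.
const dockerfile = `FROM alpine:3.7
RUN apk add --no-cache python3
ENTRYPOINT ["python3", "--version"]
`

func main() {
	// Write the build recipe; docker build turns it into a tagged, reusable image.
	if err := os.WriteFile("Dockerfile", []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	build := exec.Command("docker", "build", "-t", "example/app:latest", ".")
	build.Stdout, build.Stderr = os.Stdout, os.Stderr
	if err := build.Run(); err != nil {
		log.Fatal(err)
	}
	// Default runtime parameters can be overridden at launch time (here: a memory cap).
	run := exec.Command("docker", "run", "--rm", "-m", "128m", "example/app:latest")
	run.Stdout, run.Stderr = os.Stdout, os.Stderr
	if err := run.Run(); err != nil {
		log.Fatal(err)
	}
}
```

The resulting tagged image behaves like any other image: it can be pushed to a registry or used as the base image of a further build.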
4.2. Docker internals

Docker containers rely on creating a wrapped and controlled environment on the host machine in which arbitrary code could (ideally) be run safely. This isolation is achieved by two main kernel features, kernel namespaces [36] and control groups (cgroups). Note that these features were merged starting from the Linux kernel version 2.6.24 [2]. There are currently seven different namespaces:

• IPC (inter-process communication): provides POSIX message queues, SystemV IPC, shared memory, etc.
• NET: provides network resources — each NET namespace contains its own network stack, including interfaces, routing tables, iptables rules, network sockets, etc.
• MNT: provides file-system mountpoints: each container has its own view of the file-system and mount points —like an enhanced chroot— in order to avoid path traversals, chroot escapes, or information leak / injection through the /proc, /sys and /dev directories.
• PID: provides process-identifier isolation — processes inside a PID namespace only see the processes belonging to the same namespace.
• UTS: provides hostname and domain isolation.
• USER: provides a separate view of users and groups, including UIDs, GIDs, file permissions, capabilities...
• CGROUP: provides a virtualization of the process's cgroups view — each cgroup namespace has its own set of cgroup root directories, which represent its base points.

Each of these namespaces has its own kernel-internal objects related to its type, and provides to processes a local instance of some paths in the /proc and /sys file-systems. For instance, NET namespaces have their own /proc/net directory. A thorough list of per-namespace isolated paths is provided by [29] and their isolation role is detailed in [58].

New namespaces can be created by the clone() and unshare() syscalls, and processes can change their current namespaces using setns(). Processes inherit namespaces from their parent. Each container is created within its own namespaces. Hence, when the main process (the container entry point) is launched, all the container's children processes are restricted to the container's view of the host.

cgroups are a kernel mechanism to restrict the resource usage of a process or group of processes. They prevent a process from taking all available resources and starving other processes and containers on the host. Controlled resources include CPU shares, RAM, network bandwidth, and disk I/O.
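A minimal sketch of these two mechanisms, assuming a Linux host, root privileges and a cgroup v1 memory controller (the cgroup name and the 128 MiB limit are arbitrary): the child shell is placed in fresh UTS, PID and mount namespaces via clone flags, and its memory is capped through the cgroup filesystem.

```go
// ns_sketch.go - minimal illustration of namespaces and cgroups (Linux, run as root).
// Paths and limit values are illustrative; a cgroup v1 memory controller is assumed.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strconv"
	"syscall"
)

func main() {
	// Put the child in its own UTS, PID and mount namespaces, like a container entry point.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Restrict the child's memory with a dedicated cgroup (cgroup v1 layout assumed).
	cg := "/sys/fs/cgroup/memory/demo"
	if err := os.MkdirAll(cg, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(cg, "memory.limit_in_bytes"), []byte("134217728"), 0o644); err != nil {
		log.Fatal(err) // 128 MiB cap
	}
	if err := os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(strconv.Itoa(cmd.Process.Pid)), 0o644); err != nil {
		log.Fatal(err)
	}

	cmd.Wait()
}
```

Inside the spawned shell, hostname changes and the process list are no longer shared with the host; this is essentially the kind of environment Docker sets up for every container, together with the remaining namespaces, capability restrictions and MAC profiles described below.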
4.3. The Docker daemon

The Docker software itself (Fig. 2b) runs as a daemon on the host machine. It can launch containers, control their level of isolation (cgroups, namespaces, capabilities restrictions and SELinux / AppArmor profiles), monitor them to trigger actions (e.g., restart), and spawn shells into running containers for administration purposes. It can change iptables rules on the host and create network interfaces. It is also responsible for the management of container images: pulling and pushing images on a remote registry (e.g., the Docker Hub), building images from Dockerfiles, signing them, etc. The daemon itself runs as root (with full capabilities) on the host, and is remotely controlled through a UNIX socket. The ownership of this socket determines which users can manage containers on the host using the docker command. Alternatively, the daemon can listen on a classical TCP socket, enabling remote container administration without requiring a shell on the host.
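As a sketch of this control channel, the program below speaks the Docker Engine HTTP API directly over the daemon's UNIX socket (/var/run/docker.sock is the conventional default path; GET /containers/json is the call behind docker ps). Whoever can open this socket has, in practice, the same power as the daemon itself.

```go
// list_containers.go - sketch: query the Docker daemon through its UNIX socket.
// /var/run/docker.sock is the conventional default socket path.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Dial the daemon's UNIX socket instead of a TCP address.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}
	// GET /containers/json is the Engine API equivalent of `docker ps`.
	// The host part of the URL is ignored, since the dialer always targets the socket.
	resp, err := client.Get("http://docker/containers/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```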
4.4. The Docker Hub

The Docker Hub (Fig. 2c) is an online repository that allows developers to upload their Docker images and lets users download them. Developers can sign up for a free account, in which all repositories are public, or for a paid account, allowing the creation of private repositories. Repositories from a developer are namespaced, i.e., their name is "developer/repository". There also exist official repositories, directly provided by Docker Inc., whose name is simply "repository". These official repositories stand for the most used base images to build containers. They are "a curated set of Docker repositories that are promoted on Docker Hub" [19].
The Docker daemon, along with the Docker Hub and the repositories, is similar to a package manager, with a local daemon installing software on the host and remote repositories serving it. Some of the repositories are official while others are unofficial, provided by third parties. From this point of view, the Docker Hub security can be compared to that of a classical package manager [39]. This similarity guided our vulnerability analysis study in Section 7.
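The pull side of this package-manager analogy goes through the same Engine API; the sketch below (the image name and tag are arbitrary examples) asks the daemon to download an image from its default registry over the UNIX socket.

```go
// pull_image.go - sketch: ask the daemon to pull an image from its default registry.
// POST /images/create?fromImage=...&tag=... is the Engine API call behind `docker pull`.
package main

import (
	"context"
	"io"
	"net"
	"net/http"
	"os"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}
	// The daemon resolves "alpine" against its configured registry (the Docker Hub by default),
	// downloads the layers and verifies their digests.
	resp, err := client.Post("http://docker/images/create?fromImage=alpine&tag=3.7", "text/plain", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // progress messages are streamed as JSON lines
}
```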
4.6. Docker dedicated operating systems

In addition to the Docker package in mainstream distributions, a number of dedicated distributions have been developed specifically to run Docker or other container solutions. They allow running Docker on host OSs other than Linux when run inside a VM, without the complexity of a full Linux distribution. We experimented with three of these distributions:

• Boot2docker [10], a distribution based on TinyCore Linux, meant to be very lightweight (the bootable .iso weighs 27 MiB). It is mainly used to run Docker containers on OSs other than Linux (e.g., running in VirtualBox on Windows Server). The security advantage, when compared to mainstream distributions, is the reduced attack surface due to the minimal installation.

• CoreOS [12], a distribution dedicated to containers. It can run Docker, along with Rocket, for which it was designed. Rocket is a fork of Docker that only runs containers: in opposition to the monolithic design of Docker, interaction with the ecosystem and image builds are managed by other tools in CoreOS. The OS integrates with Kubernetes [28] to orchestrate container clusters on multiple hosts.

• RancherOS [31], an OS entirely based on Docker, meant to run Docker containers. The init process is a Docker daemon (system-docker) and system services run in (privileged) containers. One of these services is another Docker daemon (user-docker) that itself spawns user-level containers. All installed applications on the system run in Docker containers, so that Docker commands are used to install and update software on the host. No external package manager is required.
5.3. Network security

Network resources are used by Docker for image distribution and for remote control of the Docker daemon.
Concerning image distribution, images downloaded from a remote repository are verified with a hash, while the connection to the registry is made over TLS (except if explicitly specified otherwise). Moreover, starting from version 1.8, issued in August 2015, the Docker Content Trust [16] architecture allows developers to sign their images before pushing them to a repository. Content Trust relies on TUF (The Update Framework [60]). It is specifically designed to address package manager flaws [39]; it can recover from a key compromise.
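For the remote-control side, the daemon is conventionally exposed on TCP port 2376 with mutual TLS; the sketch below queries such a daemon with a client certificate. The host name is a placeholder, and the certificate file names follow the usual docker --tlsverify convention.

```go
// tls_client.go - sketch: remote control of a TLS-protected daemon (tcp://host:2376).
// The certificate file names follow the usual `docker --tlsverify` convention;
// the host name is a placeholder.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Client certificate and key, plus the CA that signed the daemon's certificate.
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
				RootCAs:      pool,
			},
		},
	}
	// Mutual TLS: the daemon only accepts clients whose certificate chains to its CA.
	resp, err := client.Get("https://docker-host.example.com:2376/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```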
• Cloud provider's CaaS use-case, i.e., the usages guided by the cloud providers' implementations to cope with both security and integration within their infrastructure.

6.1. Recommended use-case

Docker developers recommend a micro-services approach [15], meaning that a container must host a single service, in a single process (or a daemon spawning children). Therefore a Docker container is not considered as a VM: there is no package manager, no init process, no sshd to manage it.
All administration tasks (container stop, restart, backups, updates, builds...) have to be performed via the host machine, which implies that the legitimate containers' admin has root access to the host. Indeed, Docker was designed to isolate applications that would otherwise run on the same host, so this root access is assumed to be granted. From a security point of view, the isolation of processes (through namespaces) and the management of resources (through cgroups) make it safer to deploy Docker applications than to run the usual processes directly on the host without container technology.
The main advantage of Docker is the ease of application deployment. It was designed to completely separate the code plane from the data plane: Docker images can be built anywhere through a generic build file (Dockerfile) which specifies the steps to build the image from a base image. This generic way of building images makes the image generation process and the resulting images almost host-agnostic, only depending on the kernel and not on the installed libraries. The considerable effort and associated benefits of adopting the micro-services approach are developed in [49]. Airpair [24] lists eight proven real-world Docker use cases that fit in the official recommendations:

• Simplifying configuration;
• Code pipeline management;
• Developer productivity;
• App isolation;
• Server consolidation;

processes in the containers). Then, with containers embedding enough software to run a full system (logging daemon, ssh server, even sometimes an init process), it is tempting to perform administration tasks from within the container itself. This is completely opposed to Docker's design. Indeed, some of these administration tasks need root access to the container. Some other administration actions (e.g., mounting a volume in a container) may need extra capabilities that are dropped by Docker by default. This kind of usage tends to increase the attack surface. Indeed, it enables more communication channels between host and containers, and between co-located containers, increasing the risk of attacks such as privilege escalation.
Eventually, with the acceleration of software development cycles allowed by Docker, developers cannot maintain each version of their product and only maintain the latest one (tag "latest" on Docker repositories). As a consequence, old images are still available for downloading, but they have not been updated for hundreds of days and can introduce several vulnerabilities [61]. A study [45] has shown that more than 30% of images on the Docker Hub contain high-severity CVE vulnerabilities, and up to 70% contain high or medium severity vulnerabilities.
Note that these images cannot always be reproducibly built: although the Dockerfile is public on the Docker Hub, it often includes a statement such as ADD start.sh /start.sh, which copies an install script from the maintainer's computer (not available on the Docker Hub) into the image and runs it, while the content of this script never appears in the Dockerfile. Some maintainers even remove this script after execution.
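A naive way to spot this pattern when auditing an image's build recipe is to flag ADD/COPY instructions whose source comes from the local build context rather than from a URL; the heuristic below is our own illustrative sketch, not a tool described in this paper.

```go
// audit_dockerfile.go - naive sketch: flag ADD/COPY instructions that pull files
// from the local build context, which make a Docker Hub build non-reproducible.
// The heuristic is purely illustrative.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("Dockerfile")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	line := 0
	for scanner.Scan() {
		line++
		fields := strings.Fields(strings.TrimSpace(scanner.Text()))
		if len(fields) < 2 {
			continue
		}
		instr := strings.ToUpper(fields[0])
		src := fields[1]
		// ADD/COPY of a non-URL source comes from the maintainer's build context:
		// its content is not visible on the Docker Hub page of the image.
		if (instr == "ADD" || instr == "COPY") && !strings.HasPrefix(src, "http") {
			fmt.Printf("line %d: %s pulls %q from the local build context\n", line, instr, src)
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```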
Vulnerability categories and risk level per use-case:

Insecure configuration
• Docker recommended use-case: Moderate. Docker's default configuration on local systems is relatively secure — see Section 7.3. Lowering of the security configuration is possible by the sysadmin or by containers placement.
• Wide-spread use-case (e.g., casting containers as VM): Very high by default. Very likely insecure configuration.
• Cloud Provider CaaS use-case: High. The CIS benchmark on EC2 scores 62% of compliance. Containers in pods share the same NET namespace with Kubernetes.

Vulnerabilities in the image distribution, verification, decompression and storage process
• Docker recommended use-case: Very high. Usage promoted extensively by the DevOps approach. Automation at all layers to bring shorter development cycles and continuous delivery.
• Wide-spread use-case: Moderate. Containers used as VMs, followed by less continuous delivery.
• Cloud Provider CaaS use-case: High. Automation at all layers to bring shorter development cycles and continuous delivery.

Vulnerabilities inside the images
• Docker recommended use-case: Moderate. By default exposing a limited attack surface.
• Wide-spread use-case: Very high. Very likely usage of heavyweight Linux distribution images with an attack surface bigger than micro-services-oriented images.
• Cloud Provider CaaS use-case: Moderate. Depending on what images and where they are retrieved from: both micro-service and VM-like usages are possibly found here.

Vulnerabilities directly linked to Docker or libcontainer
• Docker recommended use-case: Similar level across the use-cases.
• Wide-spread use-case: N. A.
• Cloud Provider CaaS use-case: N. A.

Vulnerabilities in the kernel
• Docker recommended use-case: Similar level across the use-cases.
• Wide-spread use-case: N. A.
• Cloud Provider CaaS use-case: N. A.
Classification of the related work (categories: Container and virtualization comparison; Security aspects of containers; Defense against DoS attacks; Defense against Escape attacks; Vulnerability analysis; Use-cases driven vulnerability analysis):
[43] X
[56] X
[44] X
[38] X
[58] X
[35] X
[55] X
[40] X X
[47] X X
[45] X
[61] X
[52] X X X
[57] X X X
This work X X X X
References

[1] Containers: Real adoption and use cases in 2017. http://en.community.dell.com/techcenter/cloud/m/dell_cloud_resources/20443801.
[2] Notes from a container, October 2007. https://lwn.net/Articles/256389.
[3] Novell AppArmor administration guide, September 2007. https://www.suse.com/documentation/apparmor/pdfdoc/book_apparmor21_admin/book_apparmor21_admin.pdf.
[4] Docker and ssh, June 2014. https://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker.
[5] docker-default AppArmor profile, June 2014. https://wikitech.wikimedia.org/wiki/Docker/apparmor.
[6] Containers are not VMs, March 2016. https://blog.docker.com/2016/03/containers-are-not-vms/.
[7] Amazon EC2 Container Service reference, Last checked January 2018. http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_instances.html.
[8] Azure Container Instances, Last checked January 2018. https://docs.microsoft.com/en-us/azure/container-instances/.
[9] Azure Container Services, Last checked January 2018. https://docs.microsoft.com/en-us/azure/container-service/.
[10] Boot2docker project, Last checked January 2018. http://boot2docker.io/.
[11] CIS Docker benchmark. Tech. rep., Center for Internet Security, January 2018.
[12] CoreOS project, Last checked January 2018. https://coreos.com/docs/.
[13] CVE-2014-9356, Last checked January 2018. https://security-tracker.debian.org/tracker/CVE-2014-9356.
[14] CVE vulnerability statistics on Docker, Last checked January 2018. http://www.cvedetails.com/product/28125/Docker-Docker.html.
[15] Docker best practices, Last checked January 2018. https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices.
[16] Docker Content Trust, official documentation, Last checked January 2018. https://docs.docker.com/engine/security/trust/content_trust.
[17] Docker Hub: Automated builds and webhooks, Last checked January 2018. https://docs.docker.com/docker-hub/builds.
[18] Docker Hub images, Last checked January 2018. https://hub.docker.com/explore.
[19] Docker Hub official repositories, Last checked January 2018. https://docs.docker.com/docker-hub/official_repos.
[20] Docker image specification, Last checked January 2018. https://github.com/docker/docker/blob/master/image/spec/v1.md.
[21] Docker overview, Last checked January 2018. https://www.docker.com/company.
[22] Docker Security Scanning, Last checked January 2018. https://docs.docker.com/docker-cloud/builds/image-scan.
[23] Docker Store, Last checked January 2018. https://docs.docker.com/docker-store.
[24] Docker use-cases, Last checked January 2018. https://www.airpair.com/docker/posts/8-proven-real-world-ways-to-use-docker.
[25] Dockerfile reference, Last checked January 2018. https://docs.docker.com/engine/reference/builder.
[26] Google Compute Engine reference, Last checked January 2018. https://cloud.google.com/compute/docs/.
[27] Kubernetes installation advice, Last checked January 2018. https://kubernetes.io/docs/setup/pick-right-solution/.
[28] Kubernetes orchestrator, Last checked January 2018. http://kubernetes.io/.
[29] Linux kernel namespaces man page, Last checked January 2018. http://man7.org/linux/man-pages/man7/namespaces.7.html.
[30] Overview of Docker Compose, Last checked January 2018. https://docs.docker.com/compose/overview/.
[31] RancherOS project, Last checked January 2018. http://rancher.com/docs/os/v1.1/en/.
[32] Swarm mode, Last checked January 2018. https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/.
[33] Use Compose with Swarm, Last checked January 2018. https://docs.docker.com/compose/swarm/.
[34] Xen wiki page about unikernels, Last checked January 2018. http://wiki.xenproject.org/wiki/Unikernels.
[35] Bacis, E., Mutti, S., Capelli, S., and Paraboschi, S. DockerPolicyModules: mandatory access control for Docker containers. In Communications and Network Security (CNS), 2015 IEEE Conference on (2015), IEEE, pp. 749–750.
[36] Biederman, E. Multiple instances of the global Linux namespaces. In Proceedings of the 2006 Linux Symposium (2006). https://www.kernel.org/doc/ols/2006/ols2006v1-pages-101-112.pdf.
[37] Briggs, I., Day, M., Guo, Y., Marheine, P., and Eide, E. A performance evaluation of unikernels. Tech. rep., 2014.
[38] Bui, T. Analysis of Docker security, 2015. arXiv:1501.02967v1.
[39] Cappos, J., Samuel, J., Baker, S. M., and Hartman, J. H. A look in the mirror: attacks on package managers. In Proceedings of the 2008 ACM Conference on Computer and Communications Security, CCS 2008, Alexandria, Virginia, USA, October 27-31, 2008, pp. 565–574.
[40] Chelladhurai, J., Chelliah, P. R., and Kumar, S. A. Securing Docker containers from denial of service (DoS) attacks. In Services Computing (SCC), 2016 IEEE International Conference on (2016), IEEE, pp. 856–859.
[41] Combe, T., Martin, A., and Di Pietro, R. To Docker or not to Docker: A security perspective. IEEE Cloud Computing 3, 5 (2016), 54–62.
[42] Di Pietro, R., and Lombardi, F. Security for Cloud Computing. Artech House, Boston, 2015. ISBN 978-1-60807-989-6.
[43] Dua, R., Raja, A., and Kakadia, D. Virtualization vs containerization to support PaaS. In Proceedings of the 2014 IEEE International Conference on Cloud Engineering (IC2E) (March 2014), pp. 610–614.
[44] Felter, W., Ferreira, A., Rajamony, R., and Rubio, J. An updated performance comparison of virtual machines and Linux containers. Tech. rep., IBM Research Report, July 2014. http://www.cs.nyu.edu/courses/fall14/CSCI-GA.3033-010/vmVcontainers.pdf.
[45] Gummaraju, J., Desikan, T., and Turner, Y. Over 30% of official images in Docker Hub contain high priority security vulnerabilities. Tech. rep., BanyanOps, May 2015.
[46] Intel. Linux* containers streamline virtualization and complement hypervisor-based virtual machines.
[47] Jian, Z., and Chen, L. A defense method against Docker escape attack. In Proceedings of the 2017 International Conference on Cryptography, Security and Privacy (2017), ACM, pp. 142–146.
[48] Kamp, P.-H., and Watson, R. N. M. Jails: Confining the omnipotent root. In Proceedings of the 2nd International System Administration and Networking Conference (SANE) (May 2000). http://therbelot.free.fr/Install_FreeBSD/jail/jail.pdf.
[49] Killalea, T. The hidden dividends of microservices. Communications of the ACM 59, 8 (2016), 42–45.
[50] Lombardi, F., and Di Pietro, R. Secure virtualization for cloud computing. Journal of Network and Computer Applications 34, 4 (2011), 1113–1122.
[51] Lombardi, F., and Di Pietro, R. Virtualization and cloud security: Benefits, caveats, and future developments. In Cloud Computing, Z. Mahmood, Ed., Computer Communications and Networks. Springer International Publishing, 2014, pp. 237–255.
[52] Lu, T., and Chen, J. Research of penetration testing technology in Docker environment.
[53] Madhavapeddy, A., Leonard, T., Skjegstad, M., Gazagnaire, T., Sheets, D., Scott, D., Mortier, R., Chaudhry, A., Singh, B., Ludlam, J., Crowcroft, J., and Leslie, I. Jitsu: Just-in-time summoning of unikernels. In Proceedings of the 12th USENIX Conference on Networked Systems Design and Implementation (Berkeley, CA, USA, 2015), NSDI'15, USENIX Association, pp. 559–573.
[54] Madhavapeddy, A., Mortier, R., Rotsos, C., Scott, D., Singh, B., Gazagnaire, T., Smith, S., Hand, S., and Crowcroft, J. Unikernels: Library operating systems for the cloud. Vol. 48, ACM, pp. 461–472.
[55] Miller, A., and Chen, L. Securing your containers - an exercise in secure high performance virtual containers. In Proceedings of the International Conference on Security and Management (SAM) (2012), The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), p. 1. http://worldcomp-proceedings.com/proc/p2012/SAM9702.pdf.
[56] Morabito, R., Kjallman, J., and Komu, M. Hypervisors vs. lightweight virtualization: A performance comparison. In Proceedings of the 2015 IEEE International Conference on Cloud Engineering (2015), pp. 386–393.
[57] Mouat, A. Docker Security: Using Containers Safely in Production, 2015.
[58] Reshetova, E., Karhunen, J., Nyman, T., and Asokan, N. Security of OS-level virtualization technologies: Technical report. CoRR abs/1407.4245 (2014).
[59] Ross, R. S. Guide for conducting risk assessments (NIST SP-800-30rev1). The National Institute of Standards and Technology (NIST), Gaithersburg (2012).
[60] Samuel, J., Mathewson, N., Cappos, J., and Dingledine, R. Survivable key compromise in software update systems. In Proceedings of the 17th ACM Conference on Computer and Communications Security (New York, NY, USA, 2010), CCS '10, ACM, pp. 61–72.
[61] Shu, R., Gu, X., and Enck, W. A study of security vulnerabilities on Docker Hub. In Proceedings of the Seventh ACM Conference on Data and Application Security and Privacy (2017), ACM, pp. 269–280.
[62] Soltesz, S., Pötzl, H., Fiuczynski, M. E., Bavier, A., and Peterson, L. Container-based operating system virtualization: A scalable, high-performance alternative to hypervisors. In Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems 2007 (New York, NY, USA, 2007), EuroSys '07, ACM, pp. 275–287.
[63] Wright, C. P., and Zadok, E. Kernel korner: Unionfs: Bringing filesystems together. Linux Journal 2004, 128 (Dec. 2004), 8–.
[64] Xavier, M. G., Neves, M. V., Rossi, F. D., Ferreto, T. C., Lange, T., and De Rose, C. A. F. Performance evaluation of container-based virtualization for high performance computing environments. In Proceedings of the 2013 21st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (Washington, DC, USA, 2013), PDP '13, IEEE Computer Society, pp. 233–240.
[65] Zheng, C., and Thain, D. Integrating containers into workflows: A case study using Makeflow, Work Queue, and Docker. In Proceedings of the 8th International Workshop on Virtualization Technologies in Distributed Computing (New York, NY, USA, 2015), VTDC '15, ACM, pp. 31–38.