Docker PDF
Project Members
Last Name    First Name   Institution                                                Country
Ghanmi       Nidhal                                                                  Tunisia
Lukyamuzi    Edward       Uganda Virus Research Institute                            Uganda
Maslamoney   Suresh       Computational Biology Division, University of Cape Town    South Africa
Kimbowa      Timothy      Uganda Virus Research Institute                            Uganda
Table of contents
1.0 Docker Networking
1.1 Using OVS bridge for Docker networking
1.2 Weave Networking for Docker
2.0 Docker Volumes
2.1 Volumes from Docker Image
2.2 Volumes from Another Container
3.0 Docker security
3.1 Host Configuration
3.1.1 General Configuration
3.1.2 Linux Hosts Specific Configuration
3.2 Docker Daemon Configuration
3.3 Docker Daemon Configuration Files
3.4 Container Runtime
3.5 Container Images and Build file
3.6 Docker Security Operations
3.7 Docker Swarm Configuration
4.0 Linking Docker Containers
4.1 Docker Link Flag
4.2 Docker Compose
5.0 Swarmkit
5.1 Configure Swarm Cluster
6.0 Jupyter notebook on Docker
6.1 Numpy
6.2 Tensorflow
1.0 Docker Networking
1.1 Using OVS bridge for Docker networking
Open vSwitch (OVS) bridges are used as an alternative to the native Linux bridges. They support most features found in a physical switch, including multiple VLANs on a single bridge. OVS is widely used in Docker networking because it is useful for multi-host networking and provides more secure communication than native bridges. Let us now create, add and configure a new OVS bridge to get Docker containers on different networks to connect to each other.
Install OVS
$ sudo apt-get install openvswitch-switch
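The bridge creation and container attachment steps are not shown in this extract; using the ovs-vsctl utility and the ovs-docker helper script distributed with Open vSwitch, they would look roughly like this (the bridge name, addresses and container name are illustrative):
$ sudo ovs-vsctl add-br ovs-br1
$ sudo ip addr add 173.16.1.1/24 dev ovs-br1
$ sudo ip link set dev ovs-br1 up
$ sudo ovs-docker add-port ovs-br1 eth1 container1 --ipaddress=173.16.1.2/24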
1.2 Weave Networking for Docker
Weave creates a virtual network that enables users to connect docker containers on different host
and enable their auto-discovery
Install Weave
$ sudo wget -O /usr/local/bin/weave \
https://github.com/weaveworks/weave/releases/download/latest_release/weave
$ sudo chmod a+x /usr/local/bin/weave
Launch weave: this internally pulls the weave router container and runs it
$ weave launch
Start two application containers on weave network
$ C=$(weave run 10.10.1.1/24 -i -t ubuntu)
$ C12=$(weave run 10.10.1.2/24 -i -t ubuntu)
C and C12 hold the container IDs of the containers created.
The weave run command internally runs docker run -d in order to set the IP address on the weave network and start the ubuntu containers. Test the connection between the two containers connected via the weave network by using the ping command.
$ docker attach $C
$ ping 10.10.1.2 -c 4
2.0 Docker Volumes
2.1 Volumes from Docker Image
Let’s look at how to add a file as a volume using the Dockerfile format, create an image from the Dockerfile, and use the image to create a container and check that the file exists.
Create a Dockerfile in a directory. Make sure log1 exists in the same directory. We are going to add log1 as /h3abionet1/log and declare /h3abionet1 as a volume.
FROM ubuntu:20.04
ADD log1 /h3abionet1/log
VOLUME /h3abionet1
CMD /bin/sh
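The build and run commands are not shown in this extract; given the image name used later in this section, they would presumably be:
$ docker build -t test/volume-by-dockerfile .
$ docker run -it test/volume-by-dockerfile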
Check that the volume /h3abionet1 is mounted using the ls command
# ls
bin boot dev etc home lib .. sbin srv sys tmp h3abionet1 usr var
# ls /h3abionet1
log
Let’s use the command line to mount a host directory into a container created from an image.
Create a directory /h3abionet2 and two files, log1 and log2, in that directory.
$ sudo mkdir -p /h3abionet2
[sudo] password for ubuntu:
$ sudo touch /h3abionet2/log1
$ sudo touch /h3abionet2/log2
Create a container with a volume h3abionet2 from the image test/volume-by-dockerfile by specifying the directory to be mounted on the command line with the -v flag.
$ docker run -it -v /h3abionet2:/h3abionet2 test/volume-by-dockerfile
Check that the directory h3abionet2 got mounted in the docker container. Run ls in the container
shell.
# ls
bin boot dev etc home lib sys tmp h3abionet1 h3abionet2 usr var
# ls h3abionet2
log1 log2
As you can see above both h3abionet1 and h3abionet2 got mounted as volumes.
Container with ReadOnly Volume
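The run command for the read-only case is not shown in this extract; mounting the same host directory with the :ro option would look like this:
$ docker run -it -v /h3abionet2:/h3abionet2:ro test/volume-by-dockerfile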
Try creating a new file in that volume from the bash shell of the container.
# touch /h3abionet2/log3
touch: cannot touch '/h3abionet2/log3': Read-only file system
2.2 Volumes from Another Container
Create a Container with a Volume: Create a container named h3abionet01 from the ubuntu image
$ docker run -it --name h3abionet01 -v /h3abionet1:/h3abionet1 ubuntu
root@############:/# ls h3abionet1
log
Create Second Container with shared volumes: Create a second container h3abionet02 with volumes
from h3abionet01
$ docker run -it --name h3abionet02 --volumes-from h3abionet01 ubuntu
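Inside h3abionet02 the shared volume from h3abionet01 should be visible (expected output shown):
root@############:/# ls /h3abionet1
log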
3.0 Docker security
This section is based on the Center for Internet Security (CIS) benchmark, CIS Docker Benchmark v1.2.0. Here we cover the most important guidelines for running Docker containers in a secure environment.
3.1 Host Configuration
Staying up to date with Docker releases mitigates known vulnerabilities in the Docker software, which an attacker may exploit to gain access or escalate privileges. Not installing regular Docker updates may leave you running vulnerable Docker software and can lead to privilege escalation, unauthorized access or other security breaches. Check the installed version with:
$ docker version
The Docker daemon currently requires ‘root’ privileges, and adding a user to the ‘docker’ group gives that user full ‘root’ access rights. Hence, only trusted users should be added to the docker group.
$ useradd test
$ docker ps
$ ausearch -k docker
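The ausearch key used above assumes that an audit rule for the Docker daemon was configured beforehand; a typical rule (the daemon path is an assumption) would be:
$ sudo auditctl -w /usr/bin/dockerd -k docker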
3.2 Docker Daemon Configuration
By default, unrestricted network traffic is enabled between all containers on the same host. Thus, each container has the potential to read all packets on the container network of that host, which might lead to unintended and unwanted disclosure of information to other containers. Hence, restrict inter-container communication by setting the icc flag to false.
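A minimal sketch of one way to set this, assuming the daemon options live in /etc/docker/daemon.json on a systemd host:
{
  "icc": false
}
$ sudo systemctl restart docker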
● Confirm default cgroup usage
● Do not change base device size until needed
● Enable authorization for Docker client commands
● Configure centralized and remote logging
● Enable live restore
● Disable Userland Proxy
● Appropriately apply daemon-wide custom seccomp profile
● Do not implement experimental features in production
● Restrict containers from acquiring new privileges
● Verify that docker.socket file permissions are set to 644 or more restrictive
3.3 Docker Daemon Configuration Files
If you are using Docker on a machine that uses systemd to manage services, then verify that the ‘docker.service’ file permissions are correctly set to ‘644’ or more restrictive.
As can be seen below, if we set the permission to 666 then the “test” user is also able to access the Docker daemon.
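The command that relaxes the permission is not shown in this extract; it would presumably be:
$ sudo chmod 666 /var/run/docker.sock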
$ ls -l /var/run/docker.sock
$ su test
test@ubuntu:/etc/init.d$ docker ps
As soon as we change the permission back to 660, we can see that the “test” user is no longer able to access the Docker daemon.
$ chmod 660 /var/run/docker.sock
$ su test
test@ubuntu:/etc/init.d$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
3.4 Container Runtime
Do not run containers with the --privileged flag unless absolutely necessary: a privileged container has access to all Linux kernel capabilities on the host, as the number of kernel parameters visible inside it illustrates.
$ docker run --privileged -it centos /bin/bash
[root@######## /]# sysctl -a | wc -l
When a container port is published without specifying a host interface, it is bound on all interfaces (0.0.0.0):
80/tcp -> 0.0.0.0:4915
In order to restrict this, we should bind the container port to a specific host interface IP address using the “-p” flag.
$ docker run -d -p <Host IP>:4915:80 nginx
Containers should not share the host’s process namespace; checking PidMode for running containers reveals any that do:
$ docker ps -q | xargs docker inspect --format '{{ .Id }}: PidMode={{ .HostConfig.PidMode }}'
<CONTAINER ID>: PidMode=host
3.5 Container Images and Build file
Below are recommendations that you should follow for container base images and build files to ensure that your containerized infrastructure is secure.
● Ensure that a user for the container has been created
It is highly recommended to ensure that a non-root user is created for the container and that the container is run using that user.
By default, the CentOS Docker image has a blank user field, which means the container runs as the root user at runtime; this should be avoided.
$ docker inspect centos
While building the Docker image we can add a less-privileged user, “test”, in the Dockerfile as shown below.
$ cd
$ mkdir test-container
$ cd test-container/
$ cat Dockerfile
FROM centos:latest
RUN useradd test
USER test
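A build and a quick check that the image’s default user is now test might look like this (the image tag is illustrative):
$ docker build -t test-container .
$ docker inspect --format '{{ .Config.User }}' test-container
test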
● Avoid Container Sprawl
The flexibility of containers makes it easy to run multiple instances of applications and indirectly
leads to Docker images that exist at varying security patch levels. It also means that you are
consuming host resources that otherwise could have been used for running ‘useful’ containers.
Having more containers on a host than can be properly managed makes the environment vulnerable to mishandling, misconfiguration and fragmentation. Thus, avoid container sprawl and keep the number of containers on a host to a manageable total.
$ docker info
A few containers are reported by the docker info command even though none are running; these stopped containers can be listed with docker ps -a. They are not running but still occupy space on the host and can contribute to container sprawl.
$ docker ps -a
It is always advisable to run a Docker container with the --rm option so that when you exit the container it is removed from the host as well.
$ docker run --rm=true -it h3abionet
$ docker ps -a
In order to remove all the non-running containers from the host, the following command can be used
$ docker rm `docker ps --no-trunc -aq`
4.0 Linking Docker Containers
4.1 Docker Link Flag
To connect multiple Docker containers, or services running inside containers, the ‘--link’ flag can be used to securely connect them and provide a channel for transferring information from one container to another. Let’s use a simple example: a WordPress container linked to a MySQL container.
Pull the latest MySQL image
$ docker pull mysql:latest
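The docker run commands for the linked setup are missing from this extract; a typical sketch (container names, password and port mapping are illustrative) would be:
$ docker pull wordpress:latest
$ docker run --name mysql01 -e MYSQL_ROOT_PASSWORD=sample -d mysql:latest
$ docker run --name wordpress01 --link mysql01:mysql -p 8080:80 -d wordpress:latest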
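4.2 Docker Compose
The same linked setup can be described in a docker-compose.yml file. The beginning of the file is not shown in this extract; a plausible web service definition (the wordpress image and the mysql link are assumptions) would be:
web:
  image: wordpress:latest
  links:
    - mysql
  ports: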
    - "127.0.0.3:8080:80"
mysql:
  image: mysql:latest
  environment:
    - MYSQL_ROOT_PASSWORD=sample
    - MYSQL_DATABASE=wordpress
Get the linked containers up
$ docker-compose up
Creating dockercompose_mysql...
Creating dockercompose_web...
Attaching to dockercompose_mysql, dockercompose_web
mysql | Initializing database
..............
Visit http://127.0.0.3:8080 to see the setup page of the newly created linked WordPress container.
5.0 Swarmkit
First, let’s look at the overall architecture of SwarmKit, a distributed resource manager that can be used to run Docker tasks or other types of tasks.
5.1 Configure Swarm Cluster
Configure Docker Swarm to create a Docker cluster with multiple Docker nodes.
There are two roles in a Swarm cluster: manager nodes and worker nodes. Here we configure a Swarm cluster with three Docker nodes, illustrated as follows.
--------+---------------------------+----------------------+---------
        |                           |                      |
    eth0|10.0.0.51              eth0|10.0.0.52         eth0|10.0.0.53
+-------+-----------+   +-----------+----------+   +-------+---------+
      [node01]                 [node02]                 [node03]
       Manager                  Worker                   Worker
+-------------------+   +----------------------+   +-----------------+
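Steps 1) and 2), initializing the Swarm on the manager node and retrieving the worker join token, are not shown in this extract; with the addresses above they would look roughly like this:
root@node01:~# docker swarm init --advertise-addr 10.0.0.51
root@node01:~# docker swarm join-token worker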
3) Join the Swarm Cluster from all Worker Nodes
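The join command itself is not shown; on each worker it would take roughly this form, using the token printed by docker swarm join-token worker on the manager:
root@node02:~# docker swarm join --token <worker-token> 10.0.0.51:2377
Back on the manager node, confirm that all nodes have joined: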
$ docker node ls
Create the same container image on all nodes for the service first.
For example, let's create a container image which provides an HTTP service on all nodes.
root@node01:~# vi Dockerfile
FROM ubuntu:20.04
# install Apache so that apachectl exists in the image
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y apache2
EXPOSE 80
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
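The image build and service creation commands are missing from this extract; assuming the image is tagged web_server and three replicas are wanted, they would look roughly like:
root@node01:~# docker build -t web_server:latest .
root@node01:~# docker service create --name web_server --replicas 3 --publish 80:80 web_server:latest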
Access the manager node’s hostname or IP address to verify that the service works normally. Note that requests are load-balanced across the nodes in round-robin fashion.
$ docker service ls
$ curl http://node01
$ curl http://node01
$ curl http://node01
6.0 Jupyter notebook on Docker
6.1 Numpy
In this section we learn how to run NumPy programs in a Jupyter notebook served from inside a Docker container.
Setup Docker
Let’s assume you have the latest version of docker running on your computer.
Run the jupyter/scipy-notebook image in detached mode. Please note that container port 8888 is mapped to host port 8888.
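The run command is not shown in this extract; a typical form (the container name notebook01 is an assumption) would be:
$ docker run -d --name notebook01 -p 8888:8888 jupyter/scipy-notebook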
Since the jupyter notebooks from this image have a security token associated, execute the following
command to get the token
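Assuming the container name used above, the token can be read from the container logs; it appears at the end of the notebook URL printed there:
$ docker logs notebook01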
6.2 Tensorflow
In this section we learn how to run TensorFlow programs in a Jupyter notebook served from inside a Docker container.
Setup Docker
We assume you have the latest version of docker running on your computer.
Download and Run the Docker Jupyter Image
Run the jupyter/tensorflow-notebook image in detached mode. Please note that container port 8888 is mapped to host port 8888.
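The run command is not shown in this extract; a typical form (the container name notebook02 and the tensorflow-notebook image are assumptions) would be:
$ docker run -d --name notebook02 -p 8888:8888 jupyter/tensorflow-notebook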
Since the jupyter notebooks from this image have a security token associated, execute the following
command to get the token
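As before, assuming the container name above, the token can be read from the container logs:
$ docker logs notebook02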