Docker


Docker has revolutionized the way software is built, shipped, and run. As an open-source platform for automating the deployment of applications in lightweight containers, Docker ensures that the software behaves the same regardless of the environment in which it’s executed. It has become the cornerstone of modern DevOps and cloud-native strategies, streamlining development and enabling scalable applications.

But what exactly is Docker? How did it come to be such an essential tool in software development? This guide will take you on an insightful journey through Docker, exploring its basics, advanced concepts, real-world applications, and best practices.

What is Docker?

Docker is a platform designed to simplify the development, shipping, and running of applications through containerization. Containers are isolated environments that encapsulate an application along with its dependencies, libraries, and system tools. This isolation guarantees that an application behaves the same way regardless of the environment it runs in, be it a developer’s laptop or a production server.
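As a minimal sketch of this portability (a hypothetical example, not from the original text; it assumes Docker is installed and the daemon is running), the same image produces the same environment on a developer's laptop or a production server:

```shell
# Pull a small Linux image from the default registry (Docker Hub)
docker pull alpine:3.18

# Run a command inside an isolated container; because the image bundles
# its own filesystem and libraries, the output is identical on any host
docker run --rm alpine:3.18 cat /etc/os-release
```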

History of Docker

To truly appreciate Docker, it’s essential to understand its origins.

While containerization concepts date back to the 1970s, Docker as we know it today emerged in 2013. Created by Solomon Hykes at dotCloud, a French Platform as a Service company, Docker was introduced at the PyCon conference that year.

Initially an internal project at dotCloud, Docker quickly gained traction in the wider tech community due to its ability to simplify application deployment and management. Major tech players, including Red Hat, IBM, and Microsoft, soon announced their support. Even Google revealed they had been using similar technology internally.

Docker provides a consistent runtime across all phases of a product cycle: development, testing, and deployment. For example, if the development team upgrades one dependency, other teams must do the same. If they don't, the app may work during development but fail in deployment, or work with unexpected side effects. Docker removes this complexity by providing a consistent environment for your app. Hence, it has become essential to DevOps practice.

Docker containers are smaller and boot up faster than VMs. They're also more cost efficient, since many more containers than VMs can run on a single machine. Docker is open source, and there's freedom of choice since any type of application (legacy, cloud native, monolithic, 12-factor) can run in a Docker container. Security is built into the Docker Engine by default, and it's powered by well-regarded components such as containerd. There's also a powerful CLI and API to manage containers, and certified plugins can extend the capabilities of the Docker Engine.

What do you mean by the term Docker image?

A Docker image is built as multiple layers. Source: Kasireddy 2016.

A Docker image is a read-only template from which containers are created. Here's a useful analogy: just as objects are instantiated from classes in object-oriented languages, Docker containers are instantiated from Docker images.

For example, your application may require an OS and runtimes such as Apache, Java, or Elasticsearch. All of these can be bundled into a Docker image, so your app can have this exact runtime environment wherever it runs.

An image is built in multiple read-only layers. At the bottom, we might have bootfs and an OS base image such as Debian. Higher layers could hold custom software or libraries such as Emacs or Apache. This layering mechanism makes it easy to build new images on top of existing images.
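The layering described above can be sketched in a Dockerfile (a hypothetical example, not from the original text), where each instruction contributes one read-only layer on top of the base image:

```dockerfile
# Bottom layer: a Debian base image
FROM debian:bookworm-slim

# Next layer: install Apache on top of the base
RUN apt-get update && apt-get install -y --no-install-recommends apache2 \
    && rm -rf /var/lib/apt/lists/*

# Another layer: copy application files into the image
COPY ./site/ /var/www/html/
```

Because the base layers are shared and read-only, a second image built `FROM debian:bookworm-slim` reuses them instead of duplicating them.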
When an image gets instantiated into a running container, a thin writable layer, called the Container Layer, is added on top. All changes go into this topmost layer. The underlying filesystem doesn't make a copy of the lower layers, since they are read-only. This helps us bring up containers quickly.

How do developers share Docker images?

Dissecting a full URL as specified with the Docker client. Source: McCarty 2018.

A remote location where Docker images are stored is called a Docker Registry. Images are pulled from and new images are pushed to the registry. Without registries, it would be difficult to share and reuse images across projects. There are many registries online, and Docker Hub is the default one. Just as developers share code via GitHub, Docker images are shared via Docker Hub.

Besides Docker Hub, other registries include GitLab, Quay, Harbor, and Portus. Registries from cloud providers include Amazon Elastic Container Registry, Azure Container Registry, and Google Container Registry.

A collection of images with the same name but different tags is called a Docker Repository. A tag is an identifier for an image. For example, python is the name of the repository, but when we pull the image python:3.7-slim, we refer to the image tagged 3.7-slim. In fact, it's possible to pull by mentioning only the repository name; a default tag (such as latest) will be used and only that image will be pulled. Thus, on a system configured to use Red Hat's registry, these two commands are equivalent: docker pull rhel7 and docker pull registry.access.redhat.com/rhel7:latest.

Which are the essential components of the Docker ecosystem?

A selection of Docker logos. Source: Adapted from Janetakis 2017.

The Docker Engine is a client-server app with two parts: the Docker Client and the Docker Daemon. Docker commands are invoked using the client on the user's local machine.
These commands are sent to the daemon, which may run on the same machine or on a remote host. The daemon acts on these commands to manage images, containers, and volumes.

Using Docker Networking, we can connect Docker containers even if they're running on different machines.

What if your app involves multiple containers? This is where Docker Compose is useful: it can start, stop, or monitor all services of the app.

What if you need to orchestrate containers across many host machines? Docker Swarm allows us to do this by managing a cluster of Docker Engines.

Docker Machine is a CLI tool that simplifies creating virtual hosts and installing Docker on them. Docker Desktop is an application that simplifies Docker usage on macOS and Windows. Among the commercial offerings are Docker Cloud, Docker Data Center, and Docker Enterprise Edition.

Which are the command-line interfaces (CLIs) that Docker provides?

Since Docker has many components, there are also multiple CLIs:

Docker CLI: This is the basic CLI used by Docker clients. For example, docker pull is part of this CLI, with pull being the child command. These commands are invoked by the user via the Docker Client and translated into Docker API calls that are sent to the Docker Daemon.

Docker Daemon CLI: The Docker Daemon has its own CLI, invoked with the dockerd command. For example, the command sudo dockerd -H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock & asks the daemon to listen on both a TCP port and a Unix socket.

Docker Machine CLI: This is invoked with the command docker-machine.

Docker Compose CLI: This is invoked with the command docker-compose.
This uses the Docker CLI under the hood.

DTR CLI: Invoked with docker/dtr, this is the CLI for Docker Trusted Registry (DTR).

UCP CLI: Invoked with docker/ucp, this is the CLI for installing and managing Docker Universal Control Plane (UCP) on a Docker Engine.

What are Volumes in Docker and where are they useful?

Containers are meant to be temporary: changes made at runtime are usually not saved. Sometimes we may save the changes into a new image; otherwise, the changes are discarded. However, there's merit in saving data or state so that other containers can use them. Volumes are therefore used for persistent storage.

A volume is a directory mounted inside a container that points to a filesystem outside the container. We can therefore exit containers without losing app data, and newer containers that come up can access the same data using volumes. The important thing is to implement locks or an equivalent mechanism for concurrent write access. Volumes can be created via a Dockerfile or the Docker CLI.

Apart from sharing data or state across containers, volumes are useful for sharing data (such as code) between containers and the host machine. They're also useful for handling large files (logs or databases), because writing to volumes is faster than writing to Docker's Union File System (UFS), which uses I/O-expensive copy-on-write (CoW).

What's the purpose of a Dockerfile?

Dockerfile commands to build a layered Docker image. Source: Grace 2017.

A Dockerfile is nothing more than a text file containing instructions to build a Docker image. Each instruction creates one read-only layer of the image. When the container runs, a new writable layer is created on top to capture runtime changes.

Let's take the example of a Node.js application.
The instruction FROM node:9.3.0-alpine specifies the base image of Node.js version 9.3.0 running on Alpine Linux. The ADD instruction can be used to add files, and RUN can be used for framework installation or application builds. To expose ports from the container, use EXPOSE. To finally launch the app within the container, use CMD.

For more details, read the Dockerfile Reference and the best practices for writing Dockerfiles.

What are some basic Docker commands that a beginner should know?

A selection of Docker commands showing how they affect containers. Source: Docker Saigon 2016.

We describe some Docker commands listed in the official Docker documentation:

To build a new image from a Dockerfile, use the build command. We can then push this to a registry using push. The commands search and pull can be used to find and download an image from a registry to our local system. To create a new image from a running container's changes, we can use commit. To list images, use images. A downloaded image can be removed using rmi.

Once we have an image, we can create and start a container using create and start. Containers can be stopped using stop or kill. A running container can be restarted using restart. We can use rm to remove containers. The command ps will list all running containers; to list stopped containers as well, use the --all option.

Commands that deal with processes inside containers include run, exec, pause, unpause, and top. Commands that deal with container filesystems include cp, diff, export, and import.
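The container lifecycle described above can be sketched as a short session (a hedged example: the nginx image and the container name web are illustrative, and a running Docker daemon is assumed):

```shell
docker pull nginx:latest           # download an image from Docker Hub
docker images                      # list local images
docker create --name web nginx     # create a container (not yet running)
docker start web                   # start the container
docker ps --all                    # list running and stopped containers
docker exec web nginx -v           # run a command inside the running container
docker stop web                    # stop the container
docker rm web                      # remove the container
docker rmi nginx:latest            # remove the image from the local system
```

Note that run combines create and start (and pull, if the image is absent) into a single command.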
