Docker Unit 1 & 3 Final

Containerization allows software components to be packaged with their dependencies and configuration into isolated containers. Docker is an open platform for developing, shipping, and running containerized applications. It provides tools and a platform to manage the lifecycle of containers from development to production deployment on environments including physical, virtual, and cloud infrastructure.

UNIT-1

Containerization
Containerization entails placing a software component and its
environment, dependencies, and configuration into an isolated unit
called a container. This makes it possible to deploy an application
consistently on any computing environment, whether on-premises or
cloud-based.

Docker overview
Docker is an open platform for developing, shipping, and running applications.
Docker enables you to separate your applications from your infrastructure so you
can deliver software quickly. With Docker, you can manage your infrastructure in
the same ways you manage your applications. By taking advantage of Docker’s
methodologies for shipping, testing, and deploying code quickly, you can
significantly reduce the delay between writing code and running it in production.

The Docker platform


Docker provides the ability to package and run an application in a loosely isolated
environment called a container. The isolation and security allows you to run many
containers simultaneously on a given host. Containers are lightweight and contain
everything needed to run the application, so you do not need to rely on what is
currently installed on the host. You can easily share containers while you work,
and be sure that everyone you share with gets the same container that works in the
same way.
Docker provides tooling and a platform to manage the lifecycle of your containers:

 Develop your application and its supporting components using containers.


 The container becomes the unit for distributing and testing your application.
 When you’re ready, deploy your application into your production
environment, as a container or an orchestrated service. This works the same
whether your production environment is a local data center, a cloud
provider, or a hybrid of the two.

What can I use Docker for?


Fast, consistent delivery of your applications

Docker streamlines the development lifecycle by allowing developers to work in standardized environments
using local containers which provide your applications and services. Containers are great for continuous
integration and continuous delivery (CI/CD) workflows.

Consider the following example scenario:

 Your developers write code locally and share their work with their colleagues using Docker
containers.
 They use Docker to push their applications into a test environment and execute automated and manual
tests.
 When developers find bugs, they can fix them in the development environment and redeploy them to
the test environment for testing and validation.
 When testing is complete, getting the fix to the customer is as simple as pushing the updated image to
the production environment.

Responsive deployment and scaling

Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a
developer’s local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of
environments.

Docker’s portability and lightweight nature also make it easy to dynamically manage workloads, scaling up or
tearing down applications and services as business needs dictate, in near real time.

Running more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual
machines, so you can use more of your server capacity to achieve your business goals. Docker is perfect for
high density environments and for small and medium deployments where you need to do more with fewer
resources.

Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy
lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on
the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and
daemon communicate using a REST API, over UNIX sockets or a network interface. Another Docker client is
Docker Compose, which lets you work with applications consisting of a set of containers.

The Docker daemon


The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images,
containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker
services.

The Docker client


The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use
commands such as docker run, the client sends these commands to dockerd, which carries them out.
The docker command uses the Docker API. The Docker client can communicate with more than one daemon.

Docker Desktop
Docker Desktop is an easy-to-install application for your Mac, Windows or Linux environment that enables
you to build and share containerized applications and microservices. Docker Desktop includes the Docker
daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and
Credential Helper. For more information, see Docker Desktop.

Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is
configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your configured
registry. When you use the docker push command, your image is pushed to your configured registry.
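
As a minimal sketch of this round trip (the repository name myuser/myapp is a placeholder, not from the original text):

$ docker pull ubuntu:22.04                    # download an image from Docker Hub
$ docker tag ubuntu:22.04 myuser/myapp:1.0    # re-tag it under your own repository
$ docker login                                # authenticate with the configured registry
$ docker push myuser/myapp:1.0                # upload the tagged image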

Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other
objects. This section is a brief overview of some of those objects.

Images

An image is a read-only template with instructions for creating a Docker container. Often, an image is based
on another image, with some additional customization. For example, you may build an image which is based
on the ubuntu image, but installs the Apache web server and your application, as well as the configuration
details needed to make your application run.
You might create your own images or you might only use those created by others and published in a registry.
To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create
the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the
Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes
images so lightweight, small, and fast, when compared to other virtualization technologies.
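
You can see the layer cache at work by building the same image twice; on the second run Docker reports unchanged layers as cached. A small sketch, assuming a Dockerfile in the current directory and an arbitrary tag myimage:

$ docker build -t myimage .   # first build executes every instruction
$ docker build -t myimage .   # rebuild reuses unchanged layers ("Using cache")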

Containers

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using
the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even
create a new image based on its current state.

By default, a container is relatively well isolated from other containers and its host machine. You can control
how isolated a container’s network, storage, or other underlying subsystems are from other containers or from
the host machine.

A container is defined by its image as well as any configuration options you provide to it when you create or
start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.

Example docker run command


The following command runs an ubuntu container, attaches interactively to your local command-line session,
and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash

When you run this command, the following happens (assuming you are using the default registry
configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as though
you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running
container to create or modify files and directories in its local filesystem.

4. Docker creates a network interface to connect the container to the default network, since you did not
specify any networking options. This includes assigning an IP address to the container. By default,
containers can connect to external networks using the host machine’s network connection.

5. Docker starts the container and executes /bin/bash. Because the container is running interactively and
attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while
the output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is not removed. You
can start it again or remove it.
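
To illustrate step 6, a stopped container can be listed, restarted, or removed; the container ID below is whatever docker ps -a reports:

$ docker ps -a                      # list all containers, including stopped ones
$ docker start -ai <container-id>   # restart the stopped container, attached and interactive
$ docker rm <container-id>          # or remove it for good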

The underlying technology


Docker is written in the Go programming language and takes advantage of several features of the Linux kernel
to deliver its functionality. Docker uses a technology called namespaces to provide the isolated workspace
called the container. When you run a container, Docker creates a set of namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace and its
access is limited to that namespace.
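
On a Linux host you can observe these namespaces directly through /proc. A minimal sketch, assuming a container named web:

$ docker run -d --name web nginx                        # start a container
$ PID=$(docker inspect --format '{{.State.Pid}}' web)   # its PID on the host
$ sudo ls -l /proc/$PID/ns                              # its namespaces: net, pid, mnt, uts, ipc, ...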
Containers vs. virtual machines
Containers and virtual machines are very similar resource virtualization technologies.
Virtualization is the process by which a single system resource like RAM, CPU, disk,
or networking can be 'virtualized' and represented as multiple resources. The key
differentiator between containers and virtual machines is that virtual machines virtualize
an entire machine down to the hardware layers, while containers only virtualize software
layers above the operating system level.
What is a container?

Containers are lightweight software packages that contain all the


dependencies required to execute the contained software application.
These dependencies include things like system libraries, external third-party
code packages, and other operating system level applications. The
dependencies included in a container exist in stack levels that are higher
than the operating system.
Pros
 Iteration speed
Because containers are lightweight and only include high level software, they are very fast to
modify and iterate on.
 Robust ecosystem
Most container runtime systems offer a hosted public repository of pre-made containers. These
container repositories contain many popular software applications like databases or messaging
systems and can be instantly downloaded and executed, saving time for development teams

Cons
 Shared host exploits
Because containers all share the same underlying hardware system below the operating system layer, it
is possible that an exploit in one container could break out of the container and affect the
shared hardware. Most popular container runtimes have public repositories of pre-built
containers. There is a security risk in using one of these public images, as they may contain
exploits or may be vulnerable to being hijacked by nefarious actors.

Popular container providers


 Docker
Docker is the most popular and widely used container runtime. Docker Hub is a giant public
repository of popular containerized software applications. Containers on Docker Hub can
be instantly downloaded and deployed to a local Docker runtime.
 RKT
Pronounced "Rocket", RKT is a security-first focused container system (the project has since
been discontinued). RKT containers do not allow insecure container functionality unless the
user explicitly enables insecure features. RKT containers aim to address the underlying
cross-contamination security issues that other container runtime systems suffer from.
 Linux Containers (LXC)
The Linux Containers project is an open-source Linux container runtime system. LXC is used to
isolate operating-system-level processes from each other. Docker originally used LXC behind the
scenes (it has since moved to its own runtime). Linux Containers aims to offer a vendor-neutral,
open-source container runtime.
 CRI-O
CRI-O is an implementation of the Kubernetes Container Runtime Interface (CRI) that allows the
use of Open Container Initiative (OCI) compatible runtimes. It is a lightweight alternative to
using Docker as the runtime for Kubernetes.
What is a virtual machine?

Virtual machines are heavy software packages that provide complete emulation of low
level hardware devices like CPU, Disk and Networking devices. Virtual machines may
also include a complementary software stack to run on the emulated hardware. These
hardware and software packages combined produce a fully functional snapshot of a
computational system.

Pros

 Full isolation security


Virtual machines run in isolation as a fully standalone system. This means that virtual machines
are immune to any exploits or interference from other virtual machines on a shared host. An
individual virtual machine can still be hijacked by an exploit but the exploited virtual machine
will be isolated and unable to contaminate any other neighboring virtual machines.
 Interactive development
Containers are usually static definitions of the expected dependencies and configuration needed
to run the container. Virtual machines are more dynamic and can be interactively developed.
Once the basic hardware definition is specified for a virtual machine the virtual machine can
then be treated as a bare bones computer. Software can manually be installed to the virtual
machine and the virtual machine can be snapshotted to capture the current configuration state.
The virtual machine snapshots can be used to restore the virtual machine to that point in time or
spin up additional virtual machines with that configuration.
Cons

 Iteration speed
Virtual machines are time consuming to build and regenerate because they encompass a full
stack system. Any modifications to a virtual machine snapshot can take significant time to
regenerate and validate they behave as expected.
 Storage size cost
Virtual machines can take up a lot of storage space. They can quickly grow to several gigabytes
in size. This can lead to disk space shortage issues on the virtual machines host machine.

Popular virtual machine providers


 Virtualbox
Virtualbox is a free and open source x86 architecture emulation system owned by Oracle.
Virtualbox is one of the most popular and established virtual machine platforms with an
ecosystem of supplementary tools to help develop and distribute virtual machine images.
 VMware
VMware is a publicly traded company that has built its business on one of the first x86 hardware
virtualization technologies. VMware comes included with a hypervisor which is a utility that will
deploy and manage multiple virtual machines. VMware has robust UI for managing virtual
machines. VMware is a great enterprise virtual machine option offering support.
 QEMU
QEMU is the most robust hardware emulation virtual machine option. It has support for any
generic hardware architecture. QEMU is a command-line-only utility and does not offer a
graphical user interface for configuration or execution. This trade-off makes QEMU one of the
fastest virtual machine options.

Which option is better for you?

If you have specific hardware requirements for your project, or you are developing on
one hardware platform and need to target another like Windows vs MacOS, you will
need to use a virtual machine. Most other 'software only' requirements can be met by
using containers.

How can you use containers and virtual machines


together?

It is entirely possible to use containers and virtual machines in unison although the
practical use-cases may be limited. A virtual machine can be created that emulates a
unique hardware configuration. An operating system can then be installed within this
virtual machine's hardware. Once the virtual machine is functional and boots the
operating system, a container runtime can be installed on the operating system. At this
point we have a functional computational system with emulated hardware that we can
install containers on.
Docker Architecture and its
Components
Before learning the Docker architecture, first, you should know about the Docker
Daemon.

What is Docker daemon?


Docker daemon runs on the host operating system. It is responsible for running
containers to manage docker services. Docker daemon communicates with other
daemons. It offers various Docker objects such as images, containers, networking, and
storage.

Docker architecture
Docker follows Client-Server architecture, which includes the three main components
that are Docker Client, Docker Host, and Docker Registry.
1. Docker Client
The Docker client uses commands and REST APIs to communicate with the Docker daemon
(server). When a client runs a docker command in the client terminal, the terminal sends
the command to the Docker daemon, which receives it in the form of a command and
REST API request.

Note: The Docker client has the ability to communicate with more than one Docker daemon.

Docker Client uses Command Line Interface (CLI) to run the following commands -

docker build

docker pull

docker run

2. Docker Host
Docker Host is used to provide an environment to execute and run applications. It
contains the docker daemon, images, containers, networks, and storage.

3. Docker Registry
Docker Registry manages and stores the Docker images.

There are two types of registries in the Docker -

Public Registry - The public registry is also called Docker Hub.

Private Registry - It is used to share images within the enterprise.


Docker Objects or Components
There are the following Docker Objects –

1. Docker Client

The Docker client communicates with the Docker daemon using commands and REST API
requests, as described in the Docker architecture section above.

2. Docker Daemon/Server

The Docker daemon runs on the host operating system and manages Docker objects such as
images, containers, networking, and storage, as described above.
Docker Images
Docker images are the read-only binary templates used to create Docker containers. A
private container registry is used to share container images within an enterprise, while a
public container registry shares them with the whole world. Docker images also carry
metadata that describes the container's abilities.

Ways to create docker images

1. From files
2. From Docker Hub
3. From an existing container

Docker Containers
Containers are the structural units of Docker, used to hold the entire package
that is needed to run the application. The advantage of containers is that they require
very few resources.

In other words, we can say that the image is a template, and the container is a copy of
that template.

Docker Networking
Using Docker networking, isolated containers can communicate with one another. Docker contains
the following network drivers -

o Bridge - Bridge is the default network driver for containers. It is used when
multiple containers communicate on the same Docker host.
o Host - It is used when we don't need network isolation between the container
and the host.
o None - It disables all networking.
o Overlay - Overlay allows Swarm services to communicate with each other. It
enables containers to run on different Docker hosts.
Docker Storage
Docker Storage is used to store data on the container. Docker offers the following
options for the Storage -

o Data Volume - Data Volumes provide the ability to create persistent storage. They
also allow us to name volumes, list volumes, and see which containers are associated
with the volumes.
o Directory Mounts - It is one of the best options for docker storage. It mounts a
host's directory into a container.
o Storage Plugins - It provides an ability to connect to external storage platforms.
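
As a brief sketch of the first two options (the names mydata and /host/dir are placeholders):

$ docker volume create mydata                        # create a named data volume
$ docker volume ls                                   # list volumes
$ docker run -d -v mydata:/var/lib/data nginx        # mount the volume into a container
$ docker run -d -v /host/dir:/container/dir nginx    # directory (bind) mount from the host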

What Are Docker images?

Applications use Docker containers as their main building blocks, with each
container created from an image (a Docker image). Docker images are built up of
layers stacked one on top of the other. Docker reads instructions from a
Dockerfile to build images automatically, using the docker build command.
Each Docker image layer represents an instruction in the Dockerfile.

A Dockerfile refers to a text document consisting of all the commands required to
assemble an image. Image layers are read-only; when you run a container, a writable
layer is added on top. Like the copy-on-write concept, this technology ensures that
the changes you make when running a container from the image are made only to that
top writable layer.

Why Docker Image Layers Are Important

Docker image layers are beneficial in many ways.


 Layers allow you to work with Docker images faster. This is because the
builds avoid unnecessary steps, and the pulling and pushing of images
skips the transfer of a large unchanged amount of data already available
in the intended destination.
 The use of the copy-on-write file system saves disk space for future
containers and images.
 Layers allow you to apply less computational effort (in image building)
and save on bandwidth (in image distribution).

Docker Image Layers

Images contain everything you need to configure and run a container environment.
These include system libraries, dependencies, and tools.
Docker images consist of many layers. Each layer is built on top of another layer to
form a series of intermediate images. This arrangement ensures that each layer
depends on the layer immediately below it. The way layers are placed in a
hierarchy is very significant. It allows you to place the layers that frequently
change high up the hierarchy so that you manage the Docker image’s lifecycle
efficiently.
Changes made to a Docker image layer trigger Docker to rebuild that particular
layer and all the layers built on top of it. Keeping the frequently changing layers
near the top of the stack means fewer layers have to be rebuilt, so the image can be
regenerated with fewer computational resources. In other words, you should keep the
layers with the least or no changes at the bottom of the hierarchy.
To understand this concept in detail, let’s take an example. Assume you have a
Node.js app and want to create a Docker image for this app. Below is the most
basic Dockerfile that you can use to create the Node.js image:
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]
This Dockerfile contains the instructions needed to build a basic Node.js app image on
Docker. When you run a docker build command, Docker starts executing these
instructions one at a time, iteratively.

Docker Commands
First, Docker reads the FROM instruction and pulls a Node.js image from the Docker
registry as the base image. A base image provides the environment to run the application of your
choice, just the same way you would on a local machine.
It is good to note that a base image will have its own image layers, based on how it
was initially created and deployed to the Docker Hub registry.
Once Docker gets the base image, it moves to the next instruction, WORKDIR. At this
point, the Docker build context creates an intermediate image: a new image layer is
created by committing the new intermediate image.
Each time a command is executed from the Dockerfile, a new image layer is created on
top of the existing image. This process is repeated until Docker reads the last command
of the Dockerfile; each instruction creates a new image layer.

At the end of the process, Docker will have created the whole image. However, the
image is a composition of the different image layers described above.
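
You can verify this layer-per-instruction structure with docker history, which lists every layer of a built image together with the instruction that created it (the tag node-app is an assumed name for the image built from the Dockerfile above):

$ docker build -t node-app .    # build the image from the Dockerfile above
$ docker history node-app       # one row per layer, newest layer on top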
Listing Docker Images
Docker images are a big part of the Docker ecosystem.
Docker images are used to define instructions to be executed on your containers.

On Docker images, you may choose to have a specific operating system, to install specific packages
or to execute a set of predefined commands.

However, if you create multiple environments or multiple tools, your list of Docker images will
grow quickly.
As a consequence, you may need commands in order to list your Docker images easily.
In this tutorial, you are going to learn how you can list your Docker images using Docker
commands.

The easiest way to list Docker images is to use the “docker images” command with no arguments.
When using this command, you will be presented with the complete list of Docker images on your
system.

$ docker images

Alternatively, you can use the “docker image” command with the “ls” argument.
$ docker image ls

Note that you will have to make sure that you have written “image” and not “images”.

As an example, let’s say that you want to list Docker images on your current Windows operating
system.
To achieve that, you would run the following command

$ docker images

Congratulations, you successfully listed Docker images on your system!


Using those commands, you will be presented with all the results, but what if you want to restrict
your results to specific words?
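
The docker images command accepts a repository name (with optional wildcards) and a --filter flag to narrow the output, for example:

$ docker images ubuntu                        # only images in the "ubuntu" repository
$ docker images --filter=reference='my*'      # repositories whose names start with "my"
$ docker images --filter "dangling=true"      # untagged (dangling) images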

How to Build Docker Images

A Dockerfile is a script with instructions on how to build a Docker image. These


instructions are, in fact, a group of commands executed automatically in the Docker
environment to build a specific Docker image.

How to Create a Dockerfile


The first thing you need to do is to create a directory in which you can store all the Docker
images you build.

1. As an example, we will create a directory named MyDockerImages with the command:


mkdir MyDockerImages

2. Move into that directory and create a new empty file (Dockerfile) in it by typing:

cd MyDockerImages
touch Dockerfile

3. Open the file with a text editor of your choice. In this example, we opened the file using Nano:

nano Dockerfile

4. Then, add the following content:

FROM ubuntu

MAINTAINER sofija

RUN apt-get update

CMD ["echo", "Hello World"]


 FROM – Defines the base of the image you are creating. You can start from a parent image (as
in the example above) or a base image. When using a parent image, you are using an existing
image on which you base a new one. Using a base image means you are starting from scratch
(which is exactly how you would define it: FROM scratch).
 MAINTAINER – Specifies the author of the image. Here you can type in your first and/or last
name (or even add an email address). You could also use the LABEL instruction to add metadata
to an image.
 RUN – Instructions to execute a command while building an image in a layer on top of it. In this
example, the system searches for repository updates once it starts building the Docker image.
You can have more than one RUN instruction in a Dockerfile.
 CMD – There can be only one CMD instruction inside a Dockerfile. Its purpose is to provide
defaults for an executing container. With it, you set a default command. The system will execute
it if you run a container without specifying a command.

5. Save and exit the file.

6. You can check the content of the file by using the cat command:

cat Dockerfile
Build a Docker Image with Dockerfile
The basic syntax used to build an image using a Dockerfile is:

docker build [OPTIONS] PATH | URL | -

To build a docker image, you would therefore use:

docker build [location of your dockerfile]

If you are already in the directory where the Dockerfile is located, put a . instead of the location:

docker build .

By adding the -t flag, you can tag the new image with a name which will help you when
dealing with multiple images:

docker build -t my_first_image .

Once the image is successfully built, you can verify whether it is on the list of local images with
the command:

docker images

The output should show my_first_image available in the repository.
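
As a quick check, running the new image should execute the default CMD from the Dockerfile and print the message:

$ docker run my_first_image
Hello World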


Docker Push for Publishing
Images to Docker Hub
The use of Dockerfiles in creating Docker containers has been on the rise.
One aspect of Dockerfiles that rarely gets a mention is how easily they let
you create a Docker image that users can push to an online repository on Docker Hub.
This makes it easy to share Docker images across various public and private
repositories and registries, and gives users more flexibility in versioning
Docker images.

In this article, we’ll discuss how to create a Dockerfile, and create an account
in Docker Hub from where we’ll create a repository. We’ll also cover how to
push Docker images to a registry.

This article will be beneficial for individuals who already have knowledge of
Docker and readers familiar with the fundamentals of container technology.

Prerequisites
To get started, we have to install Docker on our system.

Note that we will be using Ubuntu 20.04 in this tutorial.

If you're not using Ubuntu, be sure to review the official documentation on
how to install Docker in your operating system environment.

Creating a Dockerfile
Before we publish a Docker image, it will be appropriate to build one. First,
let’s understand what a Dockerfile entails.

A Dockerfile is a text file comprising the specific commands used to
generate a Docker image. Let's proceed and create one. In your terminal,
create a directory and move into it with the commands below:
mkdir TestDocker
cd TestDocker
Create a file called Dockerfile with the command below:
touch Dockerfile

Since the file we created is empty, open it via a text editor of your choice and
update the file as shown below:
FROM ubuntu

MAINTAINER testUser

RUN apt-get update

CMD ["echo", "Welcome to Dockerfile"]

 FROM - specifies the base of the created image. It can start from a
parent image or a base image.
 MAINTAINER - defines the author of that particular image. It can
take a first name, last name, or email.
 LABEL - can be used to add more metadata about the image. Its use
is optional, depending on how applicable it is when creating your
Dockerfile.
 RUN - carries the set of instructions to execute while building
the image.
 CMD - provides the default command for a container that runs from
the image.

To check the content of the Dockerfile you can use the cat command in the
terminal:
oruko@oruko-ThinkPad-T520:~/Documents/TestDocker$ cat Dockerfile
FROM ubuntu

MAINTAINER testUser

RUN apt-get update


CMD ["echo", "Welcome to Dockerfile"]

Creating a repository on Docker Hub


Now that we have created our Dockerfile, let's create a repository on
Docker Hub to push our image to. If you're well-acquainted
with the way GitHub works, then Docker Hub isn't that different from it.

So head over to Docker Hub and register an account. After signing up, click
the Repositories tab in the navbar and create a new repository.

Create a repository called docker-push as that is the example we’ll be using


throughout the article. Now that our repository is set, let’s create an image from
Docker and push it to the repository we created earlier.

Build Docker Image using Docker Hub


To build an image in Docker the command below is used:
docker build -t username/repository_name .
The -t flag helps in identifying which name an image belongs to when
dealing with various images. The username is your Docker Hub name, and
the repository_name in this case is docker-push, the repository we created
earlier. We add a period to indicate that the Dockerfile is in the current folder.
With this in mind, let's proceed and build our Docker image. Execute the
command below, changing the username to yours as it appears on
Docker Hub and the repository to docker-push:
docker build -t bullet08/docker-push .

The same approach is used when building Docker images for organizations.
All you need to do is replace the username with the organization's account
name and use the organization's Docker Hub repository.

Pushing Docker image


Before we push the Docker image, we need to log into Docker hub. We can
do this effortlessly using the command-line:
docker login

Giving tag to Image


docker tag imagename username/imagename

Once logged in, we can push our image to Docker Hub. To push the
image, we use the command below:
docker push username/imagename
docker push bullet08/docker-push
With that done, our Docker image is now available in Docker Hub. You can
see it by visiting your repository.
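
To confirm the push worked, you can remove the local copy and pull the image back from Docker Hub (replace bullet08 with your own username):

$ docker rmi bullet08/docker-push     # remove the local copy
$ docker pull bullet08/docker-push    # pull it back from Docker Hub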

Conclusion
In this article, we learned about Docker Hub and building Docker images for
both user and organization accounts. We then pushed those Docker images
to our Docker Hub repository.

Removing docker Images


docker image rm

Remove one or more images

Usage
$ docker image rm [OPTIONS] IMAGE [IMAGE...]

Refer to the options section for an overview of available OPTIONS for this command.

Description
See docker rmi for more information.

Options
Name, shorthand Default Description

--force , -f Force removal of the image

--no-prune Do not delete untagged parents

Parent command

Command Description

docker image Manage images

Related commands

Command Description

docker image build Build an image from a Dockerfile

docker image history Show the history of an image

docker image import Import the contents from a tarball to create a filesystem image

docker image inspect Display detailed information on one or more images

docker image load Load an image from a tar archive or STDIN

docker image ls List images

docker image prune Remove unused images

docker image pull Download an image from a registry

docker image push Upload an image to a registry

docker image rm Remove one or more images

docker image save Save one or more images to a tar archive (streamed to STDOUT by default)

docker image tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE


UNIT-3

Introduction To Docker Networking:


Advantages and Working

In the Docker world, network admins have a huge responsibility of understanding the network
components found in virtualization platforms like Microsoft, Red Hat, etc. But, deploying a
container isn’t simple; it requires strong networking skills to configure a container architecture
correctly. To solve this issue, Docker Networking was introduced.

Before understanding Docker Networking, let’s quickly understand the term ‘Docker’ first.

What is Docker?

Docker is a platform that utilizes OS-level virtualization to help users develop, deploy,
manage, and run applications in a Docker Container with all their library dependencies.
Note: Docker Container is a standalone package that includes all the dependencies (frameworks,
libraries, etc.) required to execute an application.

Now, let’s dig into what Docker networking is, and then understand its advantages.

What is Docker Networking?

Docker networking enables a user to link a Docker container to as many networks as he/she
requires. Docker Networks are used to provide complete isolation for Docker containers.

Note: A user can add containers to more than one network.

Let’s move forward and look at the Advantages of networking.

Advantages of Docker Networking

Some of the major benefits of using Docker Networking are:

 Containers share a single operating system while remaining isolated from one another.

 It requires fewer OS instances to run the workload.

 It helps in the fast delivery of software.


 It helps in application portability.

How Does Docker Networking Work?

For a more in-depth understanding, let’s have a look at how Docker Networking works. Below is
a diagrammatic representation of the Docker Networking workflow:

 A Dockerfile builds the Docker image.

 Docker Image is a template with instructions, which is used to build Docker Containers.

 Docker has its own cloud-based registry called Docker Hub, where users store and distribute container
images.

 Docker Container is an executable package of an application and its dependencies together.

Functionalities of the different components:

 The Dockerfile has the responsibility of building a Docker image using the build command
 A Docker image contains all the project’s code.

 Using a Docker image, any user can run the code to create Docker containers.
 Once a Docker image is built, it's uploaded to a registry such as Docker Hub

Now that you know how Docker networking works, it is important to understand the container
network model.

Container Network Model

This concept will help you to build and deploy your applications in the Docker tool.

Let’s discuss the components of the container network model in detail:

Network Sandbox

 It is an isolated sandbox that holds the network configuration of containers

 Sandbox is created when a user requests to generate an endpoint on the network

Endpoints

 A container can have several endpoints in a network, as an endpoint represents the container's
network configuration, such as IP address, MAC address, DNS, etc.
 The endpoint establishes the connectivity for container services (within a network) with other services

Network

 It helps in providing connectivity among the endpoints that belong to the same network and isolates
them from the rest. So, whenever a network is created or a configuration is changed, the corresponding
network driver will be notified with an event

Docker Engine

 It is the base engine installed on your host machine to build and run containers using Docker
components and services

 Its task is to manage the network with multiple drivers

 It provides the entry-point into libnetwork to maintain networks, whereas libnetwork supports multiple
virtual drivers

So, those were the key concepts in the container network model. Going ahead, let’s have a look
at the network drivers.

Network Drivers

Docker supports networking for its containers via network drivers. Docker offers several
network drivers.

In this article, we will be discussing how to connect your containers with suitable network
drivers. The network drivers used in Docker are below:

 Bridge

 Host

 None

 Overlay

 Macvlan
Bridge

 It is a private default network created on the host

 Containers linked to this network have an internal IP address through which they communicate with
each other easily

 The Docker server (daemon) creates a virtual ethernet bridge docker0 that operates automatically, by
delivering packets among various network interfaces

 These are widely used when applications are executed in a standalone container

Host

 It is a public network

 It utilizes the host’s IP address and TCP port space to display the services running inside the container

 It effectively disables network isolation between the Docker host and the Docker containers, which
means that with this driver you cannot run multiple containers that bind to the same port on the same host

None

 In this network driver, the Docker containers will neither have any access to external networks nor
will it be able to communicate with other containers

 This option is used when a user wants to disable the networking access to a container

 In simple terms, with None the container has only a loopback interface and no external network
interfaces

Overlay

 This is utilized for creating an internal private network to the Docker nodes in the Docker
swarm cluster

 Note: Docker Swarm is a service for containers which facilitates developer teams to build and manage
a cluster of swarm nodes within the Docker platform

 It is an important network driver in Docker networking. It helps in providing the interaction between
the stand-alone container and the Docker swarm service
Macvlan

 It simplifies the communication process between containers

 This network driver assigns a MAC address to the Docker container. With this MAC address, the Docker
server (daemon) routes network traffic to the container as if it were a physical device on the network

 Note: Docker Daemon is a server which interacts with the operating system and performs all kind of
services

 It is suitable when a user wants to directly connect the container to the physical network rather than
the Docker host

Basic Docker Networking Commands

Let’s discuss some of the important networking commands that are widely used by the developer
teams.

 List down the Networks associated with Docker

docker network ls

The above command displays all the networks available on the Docker ecosystem

 Connect a Running Container to a Network

$ docker network connect multi-host-network container

You can also use the docker run --network option to start a container
and immediately connect it to a network.

 Specify the IP Address that you want to assign to the Container

$ docker network connect --ip 10.10.36.122 multi-host-network container

In the above command, a user can specify the IP address (for example, 10.10.36.122) that he/she
wants to assign to the container interface.
 Create a Network alias for a Container

$ docker network connect --alias db --alias mysql multi-host-network container2

In the above command, we have specified the aliases db and mysql, which other containers
on multi-host-network can use as additional hostnames for container2

 Disconnect a Container from a Network

$ docker network disconnect multi-host-network container1

In the above command, the disconnect option is used to detach the running container
(container1) from the network (multi-host-network)

 Remove a Network

$ docker network rm network_name

In the above command, the rm option is used to remove a network from the Docker ecosystem

 Remove Multiple Networks

$ docker network rm 3695c422697f network_name

The above command can be used when a user wants to remove multiple networks at a time

 Remove all Unused Networks

$ docker network prune

The above ‘prune’ command can be used when a user wants to remove all unused networks at a
time
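
Putting a few of these commands together, the following sketch creates a bridge network, attaches two containers to it, and checks connectivity by container name (the names mynet, web, and client are placeholders):

$ docker network create mynet                        # create a user-defined bridge network
$ docker run -d --name web --network mynet nginx     # first container on the network
$ docker run --rm --name client --network mynet alpine ping -c 3 web   # reach it by name
$ docker network inspect mynet                       # view connected containers and their IPs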

Conclusion
That concludes the Docker Networking article. In this write-up, we learned what Docker and
Docker Networking are, some of its benefits, how Docker networking works, the Container
network model, network drivers, and finally, we saw some of the basic Docker networking
commands.

Container Network With Docker Compose

Docker Compose is a tool that allows you to define and manage multi-container applications
using a YAML file. It provides a simple way to orchestrate the deployment of multiple
containers and their interconnections. In the context of container networks, Docker Compose
allows you to define and configure networks for your containers to communicate with each
other.

Here are the details of using Docker Compose for container networks:

1. Docker Compose YAML file: To define your container network using Docker Compose, you
start by creating a YAML file (usually named `docker-compose.yml`). This file contains the
configuration details for your application's services, including the network setup.

2. Services and networks sections: In the YAML file, you define your application's services
under the `services` section and networks under the `networks` section. Each service represents a
container in your application, and each network represents a network that the containers can
connect to.

3. Service definition: Under the `services` section, you define your containers by specifying their
names, images, ports, and any other required configuration. For example:

```yaml
services:
  app:
    image: myapp:latest
    ports:
      - "8080:80"
```

In this example, a service named `app` is defined with the image `myapp:latest`, and port
mapping is specified to expose port 80 inside the container as port 8080 on the host machine.

4. Network definition: Under the `networks` section, you define your networks by specifying
their names and any additional configuration. For example:

```yaml
networks:
  mynetwork:
    driver: bridge
```

In this example, a network named `mynetwork` is defined with the `bridge` driver. The bridge
driver is the default network driver in Docker and allows containers to communicate with each
other on the same host.

5. Connecting services to networks: To connect a service to a network, you use the `networks`
property within the service definition. For example:

```yaml
services:
  app:
    image: myapp:latest
    ports:
      - "8080:80"
    networks:
      - mynetwork
```

In this example, the `app` service is connected to the `mynetwork` network. This allows the
containers in the `app` service to communicate with other containers on the same network.
6. Inter-container communication: Once your services are connected to a network, they can
communicate with each other using their service names as hostnames. Docker Compose
automatically sets up DNS resolution between the containers within the same network. For
example, if you have a service named `db` connected to the same network as the `app` service,
you can access it from the `app` container using the hostname `db`.
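
Assuming a Compose file that defines app and db services on the same network, you can verify this DNS-based discovery from inside a running container:

$ docker compose up -d                    # start the services defined in docker-compose.yml
$ docker compose exec app ping -c 3 db    # the hostname "db" resolves to the db container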

These are the basic steps involved in setting up container networks using Docker Compose. You
can also configure additional network settings, such as IP address assignment, external network
connectivity, and network aliases, depending on your application requirements. Docker Compose
provides various options and flexibility to define and manage complex container networks
effectively.

Docker Engine API

The Docker Engine API is a comprehensive interface that allows developers to interact with Docker Engine programmatically. It
provides a set of RESTful endpoints that enable the management and control of Docker containers, images, networks, volumes,
and other Docker resources. Here's a more detailed explanation of the key concepts and functionalities of the Docker Engine API:

1. Container Operations:

- Creating and starting containers: You can use the API to create new containers based on specific images and start them with
desired configurations, such as environment variables, port bindings, volumes, and network settings.

- Inspecting containers: The API provides endpoints to retrieve detailed information about containers, including their status,
resource usage, network configuration, and more.

- Managing container lifecycle: You can start, stop, restart, pause, resume, and remove containers using the API. These
operations allow you to control the execution and behavior of containers.

2. Image Management:

- Building and pulling images: The Docker Engine API allows you to build new container images based on Dockerfiles or pull
existing images from remote repositories.

- Tagging and pushing images: You can tag images with specific names and versions, and push them to remote image
repositories, such as Docker Hub or private registries.
- Inspecting and deleting images: The API provides endpoints to retrieve detailed information about images, including their
layers, metadata, and history. You can also delete unwanted images using the API.

3. Network Configuration:

- Managing networks: The API enables the creation, deletion, and listing of Docker networks. You can define network types
(bridge, overlay, MACVLAN, etc.), IP address allocation methods, and other network-specific configurations.

- Connecting containers to networks: Using the API, you can connect containers to specific networks, enabling inter-container
communication and defining network-related settings for each container.

4. Volume Management:

- Creating and deleting volumes: The Docker Engine API allows you to create, delete, and list volumes. Volumes provide
persistent storage for containers.

- Mounting volumes in containers: You can specify volume mounts for containers through the API, allowing you to access and
persist data beyond the lifespan of containers.

5. Events and Logs:

- Streaming container events: The API provides endpoints to stream real-time events related to containers, such as container
creation, start, stop, and deletion. This feature enables monitoring and reacting to container lifecycle events programmatically.

- Accessing container logs: You can retrieve the logs generated by containers using the API, facilitating troubleshooting and
debugging processes.

6. Authentication and Security:

- Authentication mechanisms: The Docker Engine API supports various authentication methods, such as HTTP basic
authentication, OAuth, and JSON Web Tokens (JWT). These mechanisms ensure secure access to Docker resources and
operations.

- Authorization and access control: The API allows you to define access controls and permissions for users and applications,
ensuring that only authorized entities can interact with Docker Engine through the API.
7. Extensibility:

- Docker plugins and extensions: The Docker Engine API is designed to be extensible. Docker plugins allow you to extend
Docker functionality by introducing new APIs, resource types, and operations. This extensibility enables customization and
integration of Docker with third-party tools and systems.

The Docker Engine API provides a powerful interface for managing and automating Docker resources. By leveraging the API's
capabilities, developers can build applications, command-line tools, and frameworks that interact with Docker Engine
programmatically, integrating Docker into their workflows and systems efficiently.
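
As a rough sketch of what talking to the API looks like, the following curl commands create, start, and inspect a container over the local Unix socket; the version prefix v1.43 is an assumption and may differ on your installation:

# create a container from the ubuntu image (the response body contains an Id)
$ curl --unix-socket /var/run/docker.sock -X POST \
    -H "Content-Type: application/json" \
    -d '{"Image": "ubuntu", "Cmd": ["echo", "hello"]}' \
    http://localhost/v1.43/containers/create

# start it, then inspect it (replace <id> with the Id returned above)
$ curl --unix-socket /var/run/docker.sock -X POST http://localhost/v1.43/containers/<id>/start
$ curl --unix-socket /var/run/docker.sock http://localhost/v1.43/containers/<id>/json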

Managing Images and Containers with the Docker API

Using the Docker Engine API, you can manage images and containers programmatically. Here's an overview of how you can
perform various operations on images and containers using the Docker API:

Managing Images:

1. Pulling Images:

- Use the `/images/create` endpoint to pull an image from a remote registry. Specify the image name and optional tag using the
`fromImage` and `tag` query parameters.

- Authenticate with the registry using the appropriate authentication method if required.

2. Building Images:

- Use the `/build` endpoint to build a new Docker image based on a Dockerfile. You can pass the Dockerfile content or provide
a URL to the Dockerfile in the request payload.

- Customize the build context and specify build arguments if needed.

3. Listing Images:

- Use the `/images/json` endpoint to retrieve a list of images available on the Docker host.

4. Inspecting Images:

- Use the `/images/{image-id}/json` endpoint to get detailed information about a specific image, including its tags, layers, size,
and configuration.
5. Tagging Images:

- Use the `/images/{image-id}/tag` endpoint to add or update tags for an image. Specify the new tag name in the request
payload.

6. Pushing Images:

- Use the `/images/{image-id}/push` endpoint to push an image to a remote registry. Authenticate with the registry using the
appropriate authentication method if required.

7. Removing Images:

- Use the `/images/{image-id}` endpoint with the DELETE method to delete a specific image from the Docker host.
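
For illustration, pulling and listing images through these endpoints with curl might look like this (the v1.43 version prefix is an assumption; /images/create takes the image name in the fromImage query parameter):

# pull ubuntu:latest from the default registry
$ curl --unix-socket /var/run/docker.sock -X POST \
    "http://localhost/v1.43/images/create?fromImage=ubuntu&tag=latest"

# list local images
$ curl --unix-socket /var/run/docker.sock http://localhost/v1.43/images/json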

Managing Containers:

1. Creating Containers:

- Use the `/containers/create` endpoint to create a new container based on an image. Specify the image name, container
configuration (e.g., command, environment variables, ports, volumes), and network settings in the request payload.

2. Starting and Stopping Containers:

- Use the `/containers/{container-id}/start` endpoint to start a previously created container.

- Use the `/containers/{container-id}/stop` endpoint to stop a running container gracefully.

3. Inspecting Containers:

- Use the `/containers/{container-id}/json` endpoint to retrieve detailed information about a specific container, including its
status, resource usage, network configuration, and more.

4. Listing Containers:

- Use the `/containers/json` endpoint to get a list of containers running on the Docker host.

5. Removing Containers:
- Use the `/containers/{container-id}` endpoint with the DELETE method to remove a specific container from the Docker host.

- Add the `v=1` query parameter to the DELETE request to also remove the container's associated anonymous volumes.

6. Container Logs:

- Use the `/containers/{container-id}/logs` endpoint to retrieve the logs generated by a container.
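
For example, fetching a container's output through the logs endpoint, selecting the streams with query parameters (<container-id> is a placeholder and the v1.43 prefix is an assumption):

$ curl --unix-socket /var/run/docker.sock \
    "http://localhost/v1.43/containers/<container-id>/logs?stdout=true&stderr=true"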

These are just a few examples of the operations you can perform on images and containers using the Docker Engine API. The
API provides additional endpoints and functionalities to manage networks, volumes, and other Docker resources as well. By
utilizing the Docker API, you can automate and customize image and container management, integrate Docker into your
applications and workflows, and build powerful tools for working with Docker resources programmatically.

Authenticating the Docker Engine API

To authenticate and secure access to the Docker Engine API, you can use various authentication
methods supported by Docker. Here are some common authentication mechanisms you can utilize:

1. HTTP Basic Authentication:


 With HTTP Basic Authentication, you provide a username and password with each API request.
 To use this authentication method, set the Authorization header in your API requests as follows:
Authorization: Basic BASE64_ENCODED_CREDENTIALS
 BASE64_ENCODED_CREDENTIALS should be the Base64 encoding of the string username:password.
2. OAuth:
 Docker supports OAuth 2.0 authentication for interacting with the Docker Engine API.
 You'll need to register your application with an OAuth provider and obtain client credentials (client
ID and client secret).
 Follow the OAuth provider's instructions to authenticate and obtain an access token.
 Include the access token in the Authorization header of your API requests using the Bearer
scheme:
Authorization: Bearer ACCESS_TOKEN
3. JSON Web Tokens (JWT):
 Docker Engine API can authenticate using JSON Web Tokens (JWT).
 Generate a JWT token with the necessary claims (e.g., user information, access permissions) and sign
it using a shared secret or private key.
 Include the JWT token in the Authorization header of your API requests using the Bearer scheme:
Authorization: Bearer JWT_TOKEN
4. Certificates and TLS:
 You can enable Transport Layer Security (TLS) on the Docker Engine API to secure the
communication between the client and the server.
 Generate and configure TLS certificates on both the client and server side.
 Set the appropriate TLS configurations in your Docker client or application to establish a secure
connection with the Docker Engine API.
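
As a sketch of methods 1 and 4 from the list above, this is how a Basic credentials header can be built and sent with curl, and how a TLS-protected daemon is reached; the host, port, and certificate paths are assumptions, and Basic auth presumes the daemon sits behind a proxy that enforces it:

# HTTP Basic: Base64-encode "username:password" and send it in the header
$ CREDS=$(printf 'username:password' | base64)
$ curl -H "Authorization: Basic $CREDS" https://docker.example.com:2376/v1.43/info

# TLS client certificates: verify the daemon and authenticate the client
$ curl --cacert ca.pem --cert cert.pem --key key.pem \
    https://docker.example.com:2376/v1.43/info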

Note that the authentication methods mentioned above may require additional configuration and
setup. The specific steps may vary depending on your authentication provider or the Docker setup
you're using.

It's important to secure your Docker Engine API by choosing the appropriate authentication method
based on your requirements and environment. By implementing authentication, you can ensure that
only authorized users or applications can access and interact with Docker resources through the API.
