Docker_final_doc

===================================================================================

###### DOCKER ######

===================================================================================

MONOLITHIC: Deploying a monolithic application is more straightforward than
deploying microservices. Developers install the entire application code base and
its dependencies in a single environment.

MICROSERVICES: Microservices are deployed using VMs or containers. Containers are
the preferred deployment route for microservices, as containers are lighter,
portable, and modular. The microservice code is packaged into a container image and
deployed as a container service.
Multiple services are deployed on multiple servers with multiple databases.

note: if the number of applications is large, we will go with the microservices
concept (Docker and Kubernetes)

SELECT THE ARCHITECTURE BASED ON THE NUMBER OF USERS AND THE APP'S COMPLEXITY.

FACTORS THAT FAVOR MICROSERVICES:

FLEXIBILITY
COST
MAINTENANCE
EASY CONTROL

## ----- Docker overview ----------##


Docker is an open platform for developing, shipping, and running applications.
Docker enables you to separate your applications from your infrastructure so you
can deliver software quickly. With Docker, you can manage your infrastructure in
the same ways you manage your applications. By taking advantage of Docker's
methodologies for shipping, testing, and deploying code, you can significantly
reduce the delay between writing code and running it in production.

DOCKER IMAGE:
Docker images are read-only templates that contain instructions for creating a
container. A Docker image is a snapshot or blueprint of the libraries and
dependencies required inside a container for an application to run.

CONTAINERS:
A Docker container is a lightweight, standalone, executable package of software
that includes everything needed to run an application: code, runtime, libraries,
and dependencies.
It behaves much like a server/VM but is operating system independent, since the OS
layer is managed in the image.
(EC2 server = AMI, container = image)

-----docker install process -------


----root------

sudo yum install docker -y # Amazon Linux 2023

sudo systemctl start docker


sudo systemctl status docker

Note: By default Docker runs as the root user, and other users can only access
Docker with sudo commands. However, we can bypass the sudo commands by creating a
new group with the name docker and adding ec2-user to it.

#First let’s create the docker group

if you install on ec2-user

sudo groupadd docker (optional if group is not created)

#Now let’s add ec2-user to docker group

sudo usermod -a -G docker ec2-user

#In order to enable the changes, run the following command

newgrp docker

sudo chmod 666 /var/run/docker.sock # to give access to the Docker daemon socket so
non-root users can run Docker commands

docker --version # to check the Docker version

#If you want to see an extended version of the version details, such as the API
version, Go version, and engine version, use the version command without dashes.
Give the below command:

docker version

# ----commands:---- #

--- to pull base images from the public Docker repository ---

docker pull <imagename>

docker pull nginx (or) docker pull ubuntu

docker image inspect nginx ### to check image details

docker images # to check list of images

docker run -it <imagename> /bin/bash -- will enter into the container's interactive
terminal

docker run -dt <imagename> -- detached terminal; we do not enter the container

docker run -dt --name <name> <imagename>

(to give a custom name, use --name)
docker ps ## to check running containers

docker ps -a ## to check both running and stopped containers

docker ps -a | grep 'Exited' #to check only stopped container
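As an alternative to piping through grep, docker ps has a built-in filter flag; a sketch:

```shell
# list only stopped (exited) containers using docker's own filter
docker ps -a --filter "status=exited"

# print just their IDs (useful for scripting cleanup)
docker ps -aq --filter "status=exited"
```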

Condition-1 ----- login to a running container

## To login to a container ##
docker exec -it <containername or containerid> /bin/bash # to login to a running container

1. To come out of the container without stopping it, press "ctrl p+q" or type exit
(the container was already running when you logged in, so exit has no impact)

Condition-2 ------- create a container from an image and login

## To login to a container ##
docker run -it <imagename> /bin/bash # to create a container from the image in
interactive (login) mode

1. To come out of the container without stopping it, press "ctrl p+q"

2. If you type "exit", the container will also stop

Condition-3 ------- run a container in detached mode

docker run -dt <imagename> (detached mode; you do not interact with the container)

Afterwards, if we want to login: "docker exec -it <containername or containerid> /bin/bash"

ps -ef -- to see how many processes are running; in a VM we can see many
processes, but in a container only a few, because it is lightweight

ps -ef | wc -l # to count the number of processes running in the background

#### how to start container#####

docker start <containerid>


or
docker start <container name>

#### how to stop container#####


docker stop <containerid>
or
docker stop <container name>

## docker kill: terminate a container immediately (no graceful stop)


docker kill <containername>

docker pause cont_name : to pause container


docker unpause cont_name: to unpause container
docker inspect cont_name: to get complete info of a container
docker logs <containerid>: to check the logs

## danger commands##

#### deleting process ######


## Images -----

docker rmi <imagename> ## to delete an image if the image is not tagged to any
container, whether running or stopped

docker rmi <imageID> -f # forcefully delete an image even if a container (running
or stopped) still references it

docker system prune -a # to remove all stopped containers, unused networks, and
unused images

## container ------
docker rm <containername or containerid> ### to remove a stopped container

docker rm -f <containerid> # delete a container, whether running or stopped

docker container prune -- delete all stopped containers

docker rm -f $(docker ps -aq) --- delete all containers (running and stopped)

# --Enabling ports-- #

docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME

docker run -dt -p 3000:3000 --name project-2 ubuntu # along with port exposure

===================================================================================

-------- pull jenkins----------

docker pull jenkins/jenkins

docker run -dt -p <local host port>:<container port> --name <name of the
container> <name of the image>

docker run -dt -p 8081:8080 --name jenkins-container jenkins/jenkins

access Jenkins with the public IP and port number (8081)

============================================================
# IMAGE PUSH #
======================================================================
######### docker push to Docker private repository #########
docker login first
docker tag <image> dockerusername/<image>
docker push <dockerusername>/<image>
docker pull <dockerusername>/<image>
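A worked version of the steps above, assuming a hypothetical Docker Hub username myuser and a local image named myapp:

```shell
docker login                       # authenticate to Docker Hub first
docker tag myapp myuser/myapp:v1   # retag the local image under your namespace
docker push myuser/myapp:v1        # push it to your repository
docker pull myuser/myapp:v1        # pull it back from any machine
```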

================================================================

######### docker push to AWS ECR #########

Retrieve an authentication token and authenticate your Docker client to your
registry.
Use the AWS CLI:

login#

aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin 992382358200.dkr.ecr.ap-south-1.amazonaws.com

Note: If you receive an error using the AWS CLI, make sure that you have the latest
version of the AWS CLI and Docker installed, and create a role with ECR permissions
and attach it to the EC2 instance.

Build your Docker image using the following command. For information on building a
Dockerfile from scratch, see the instructions here. You can skip this step if your
image is already built:

docker build -t maventomcat . # optional


After the build completes, tag your image so you can push the image to this
repository:

Tagging #

docker tag maventomcat:latest 992382358200.dkr.ecr.ap-south-1.amazonaws.com/maventomcat:latest

Run the following command to push this image to your newly created AWS repository:

Push #
docker push 992382358200.dkr.ecr.ap-south-1.amazonaws.com/maventomcat:latest

===================================================================================

------------# Docker file run instructions #----------


A Dockerfile is a simple text file that consists of instructions to build Docker
images.
-----------------------------------
##### Dockerfile instructions ########

FROM : to pull a base image
RUN : to execute commands while building the image
CMD : to provide defaults for an executing container
ENTRYPOINT : to configure a container that will run as an executable
WORKDIR : to set the working directory
COPY : to copy files from the local host into the container
ADD : like COPY but a little more advanced; it can also take a URL and
auto-extract local tar archives
EXPOSE : informs Docker that the container listens on the specified network ports
at runtime
ENV : to set environment variables
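A minimal sketch tying the instructions above together in one hypothetical Dockerfile (the file names, URL, and port are assumptions for illustration):

```dockerfile
FROM ubuntu:20.04                  # pull the base image
ENV APP_HOME=/app                  # set an environment variable
WORKDIR $APP_HOME                  # set the working directory (created if missing)
COPY app.sh .                      # copy a file from the build context
ADD https://example.com/config.tar.gz .   # ADD can also fetch a URL
RUN chmod +x app.sh                # execute a command at build time
EXPOSE 8080                        # document the port the app listens on
ENTRYPOINT ["./app.sh"]            # the container's main executable
CMD ["--default-flag"]             # default argument passed to the ENTRYPOINT
```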

# below commands to build and run a Dockerfile

docker build -t check .

docker run -p <HOST_PORT>:<CONTAINER_PORT> IMAGE_NAME
docker run -dit --name demo -p 8080:80 check
ex: docker run -dt -p 3000:3000 --name project-2 saturday

-------------- ## Docker run with different flags ## ------------------------


The docker run command is used to create and start a new container from a specified
image. Various flags can modify its behavior. Here's a breakdown of the common
flags:

1. --name

Usage: --name container-1
Purpose: This flag allows you to assign a name to the container for easier
reference later. Instead of using the container ID, you can use the name you
specify.

Running the image without any other flags will execute the default command defined
in the Dockerfile of the test image.
If that command is an interactive shell or a long-running process, you will see the
output in your terminal.
If the command completes quickly (e.g., a command that just prints something and
exits), the container will stop, and you won't be able to interact with it.

Example:
docker run --name container-1 test

2. -it
Usage: -it
Purpose: This is a combination of two flags:
-i: Runs the container in interactive mode
-t: Allocates a pseudo-TTY, allowing you to interact with the container as if it
were a terminal.
Example:
docker run -it --name container-1 test

3. -dt (recommended)


Usage: -dt
Purpose: This is a combination of the -d and -t flags:
-d: Runs the container in detached mode.
-t: Allocates a pseudo-TTY. However, when in detached mode, you won't interact with
the terminal directly, but it's useful if you need to allocate a terminal for
future connections or interactions.
Example:
docker run -dt --name container-1 test
Summary of Use Cases

Running in the foreground (without any flag):

docker run --name container-1 test: Starts the container, and you can see logs
directly in the terminal.

Running interactively:

docker run -it --name container-1 test: Starts the container in interactive mode,
allowing you to interact with it directly.

Detached with terminal allocation:

docker run -dt --name container-1 test: Runs the container in the background while
still allocating a pseudo-terminal. You can use it for future commands like docker
exec.

Example Scenarios

Use -it when you need to debug or interact directly with the container.
Use -dt if you want the flexibility of a pseudo-terminal while running the
container in the background.

### Sample Docker file ####

#example-1

FROM ubuntu:20.04

WORKDIR /app

COPY myfile.txt /app/

# Ensure that the container stays running


CMD ["sh", "-c", "cat /app/myfile.txt; sleep 60"]

Case-1:
## Using sh -c
When you use sh -c, you're explicitly invoking a shell to run the commands. This
allows you to chain commands and use shell features such as ; and &&.

For example -----

CMD ["sh", "-c", "cat /app/myfile.txt && sleep 3600"]

Instruction | Behavior
CMD ["sh", "-c", "cat /app/myfile.txt; sleep 60"]   | sleep 60 runs
unconditionally, whether cat succeeds or fails.
CMD ["sh", "-c", "cat /app/myfile.txt && sleep 3600"] | sleep 3600 runs
conditionally, only if cat succeeds.
--------------------------------
##Without sh -c:

## Example -2 dockerfile ###

FROM ubuntu:20.04

WORKDIR /app

COPY myfile.txt /app/


CMD ["cat", "/app/myfile.txt && sleep 60"]
--------------------------------------------

Explanation:

CMD ["command1", "&&", "command2"]


This tries to run command1 with && as an argument, which is not valid.
Summary
Use sh -c when you want to run multiple commands or leverage shell features.
Use the exec form (without sh -c) for straightforward command execution when you
only need to run a single command without chaining or additional shell features.

Single Command Execution: Only the command specified is executed, without any shell
features.

For example:-----

CMD ["cat", "/app/myfile.txt", "&&", "sleep", "3600"]

This would not work as intended because the command cat would look for an argument
called &&, and it would fail because cat doesn't know how to process &&.

No Command Chaining: You cannot chain commands together directly. Each command must
be specified separately, and you can't use shell-specific operators like &&.

-------------------------------------------------------
###### Example 2 ######### COPY ####

FROM ubuntu:20.04

# Set the working directory inside the container


WORKDIR /app

# Copy all files and directories from the current host directory to the container's
/app directory
COPY . .

# Ensure that the container stays running


CMD ["sh", "-c", "cat /app/myfile.txt; sleep 60"]

#Explanation of the Changes


WORKDIR /app:

Sets /app as the working directory for subsequent instructions.


If it doesn't exist, it will be created.
COPY . .:

Copies all files and directories from the current directory on the host (where
docker build is executed) to the /app directory in the container.
CMD ["sh", "-c", "cat /app/myfile.txt; sleep 60"]:

Ensures that the container prints the content of myfile.txt and remains running for
60 seconds.
-------------------- Example 3-----#### ADD ####------

# Use an official Ubuntu as a base image


FROM ubuntu:20.04

# Set the working directory in the container


WORKDIR /app

# Use the ADD instruction to download and extract the tarball from the GitHub
release
ADD https://github.com/torvalds/linux/archive/refs/tags/v5.14.tar.gz /app/

# List the contents of the /app directory and keep the container running with sleep
CMD ["sh", "-c", "ls /app && sleep 3600"]

--------Example- python version ---------

# Use the official Python base image


FROM python:3.12-slim

# Set the working directory in the container


WORKDIR /app

# Check Python version


CMD ["python", "--version"]

# Ex:1 docker pull httpd (readymade)

FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/ # copy present directory files into
the destination path /usr/local/apache2/htdocs/

Ex:2

# httpd with ubuntu (implement from scratch)

FROM ubuntu # pull the ubuntu base image from Docker Hub

RUN apt update
RUN apt-get -y update
RUN apt-get -y install apache2 # using the RUN instruction, install apache2
COPY index.html /var/www/html # copying index.html into /var/www/html
EXPOSE 80 # expose the default port number for the application
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"] # run the apache2 server when a
container starts from the image

Note:
-D FOREGROUND is not a Docker command; it is an Apache server argument that keeps
the web server running in the foreground. If we do not use this argument, the
server will start, daemonize, and the container will then stop.

# Ex:3 httpd with centos

FROM centos:7
RUN yum update -y && yum install -y httpd
COPY index.html /var/www/html/
EXPOSE 80
#httpdserver
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]

--------------------------------- Dockerfiles for different programming languages -------------------------
A Dockerfile is a script containing a series of instructions that tells Docker how
to build a custom container image. It automates the steps required to set up an
environment inside a container, such as installing software, copying files, setting
up configurations, and defining how the container should run.

1.# Use the official Python image as a base


FROM python:3.9

# Set the default command to check the Python version


CMD ["python", "--version"]

------------------------------------------------------------------
2. If Python runs on Ubuntu

# Use the official Ubuntu image as a base


FROM ubuntu:20.04

# Set the environment variable to avoid interactive prompts during package installation

ENV DEBIAN_FRONTEND=noninteractive

# Update the package list and install Python


RUN apt-get update && \
apt-get install -y python3.9 python3-pip && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

# Set the default command to run Python


CMD ["python3.9"]

----------------------------------------------------------------------
3. Deploy a Python Flask application using the Python image directly

FROM python:3.6
MAINTAINER veera "[email protected]"
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]

----------------------------------------------------------------------

4. Deploy a Python Flask application using the Ubuntu base image


FROM ubuntu:20.04

# Set the environment variable to avoid interactive prompts during package installation

ENV DEBIAN_FRONTEND=noninteractive

# Update the package list and install Python


RUN apt-get update && \
apt-get install -y python3.9 python3-pip && \
apt-get clean && \
    rm -rf /var/lib/apt/lists/* # Cleanup: after installing packages with apt-get,
a lot of metadata files are stored in /var/lib/apt/lists/. These files are no
longer needed after the installation is complete.
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
# Set the default command to run Python

ENTRYPOINT ["python3"]
CMD ["app.py"]
(or)
CMD ["python3", "app.py"]
----------------------------------------------
nodejs----------------------------------------------
# Use Node.js based on Debian Slim as the base image
FROM node:16-slim

# Create and set the working directory inside the container


WORKDIR /app

# Copy the entire codebase to the working directory


COPY . .

# Install dependencies
RUN npm install

# Expose the port your app runs on


EXPOSE 3000

# Define the command to start your application


CMD ["npm", "start"]

--------------------------------------------- NodeJS on Ubuntu -------------------

# Use Ubuntu as the base image


FROM ubuntu:20.04

# Install Node.js and npm


RUN apt-get update && \
apt-get install -y curl && \
curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && \
apt-get install -y nodejs && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

# Create and set the working directory inside the container


WORKDIR /app

# Copy the entire codebase to the working directory


COPY . .

# Install dependencies
RUN npm install

# Expose the port your app runs on


EXPOSE 3000

# Define the command to start your application


CMD ["npm", "start"]

==================================================================================

----- ## Docker file single stage and multi stage ##--------


===================================================================================

# scenario :1 single stage: after running Maven manually, deploy the war file on
tomcat webapps by using a Dockerfile

FROM tomcat:latest
COPY webapp/target/webapp.war /usr/local/tomcat/webapps/webapp.war
RUN cp -R /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps

# scenario :2 multi stage -- run a Maven image by using a Dockerfile and deploy on
tomcat webapps

FROM maven:3.8.4-eclipse-temurin-17 AS build


RUN mkdir /app
WORKDIR /app
COPY . .
RUN mvn package

FROM tomcat:latest
COPY --from=build /app/webapp/target/webapp.war /usr/local/tomcat/webapps/webapp.war # copying the war file from the stage-1 maven build into the tomcat path
RUN cp -R /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps # adding
dependencies to serve the base tomcat page

# scenario :3 multi stage from ubuntu scratch by using docker file

FROM ubuntu:latest as builder


RUN apt-get update && \
apt-get install -y openjdk-8-jdk wget unzip

ARG MAVEN_VERSION=3.9.6
RUN wget https://dlcdn.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz && \
    tar -zxvf apache-maven-${MAVEN_VERSION}-bin.tar.gz && \
    rm apache-maven-${MAVEN_VERSION}-bin.tar.gz && \
    mv apache-maven-${MAVEN_VERSION} /usr/lib/maven

ENV MAVEN_HOME /usr/lib/maven


ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
ENV PATH=$MAVEN_HOME/bin:$PATH
RUN mkdir -p /app
COPY . /app
WORKDIR /app
RUN mvn install
FROM tomcat:latest
COPY --from=builder /app/webapp/target/webapp.war
/usr/local/tomcat/webapps/webapp.war
RUN cp -R /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps

===================================================
### Docker file MySQL ##

# Use the official MySQL image from the Docker Hub


FROM mysql:8.0

# Set environment variables for MySQL


ENV MYSQL_ROOT_PASSWORD=Cloud123
ENV MYSQL_DATABASE=test
ENV MYSQL_USER=admin
ENV MYSQL_PASSWORD=Devops123

# Expose MySQL port


EXPOSE 3306

# Copy the initialization script to the container


COPY init.sql /docker-entrypoint-initdb.d/

# Start MySQL server


CMD ["mysqld"]

====================================================
-- Create a sample table in a local script named init.sql

CREATE TABLE IF NOT EXISTS users (


id INT AUTO_INCREMENT PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) NOT NULL
);

-- Insert sample data


INSERT INTO users (username, email) VALUES
('veera', '[email protected]'),
('naresh', '[email protected]');

# docker build -t <imagename> .


# docker run -dt <imagename>
#docker exec -it <containerid> /bin/bash

#mysql -u admin -p
(give the password)

(You will enter the MySQL terminal)

#show databases;
use test; # database named test; please check the Dockerfile
show tables;
select * from users; # table name is users; check the script init.sql

==================================================================================

# Custom Docker file name #

# if we create multiple docker files like Dockerfile and Dockerfile1

Below command example to run

docker build -f <Dockerfilename> -t <imagename> .

docker build -f Dockerfile1 -t image .

====================================================
RUN DOCKER FILE DIRECTLY TAKING SOURCE FROM GITHUB
======================================================
docker build -t <imagename> <github projecturl>

Example:

docker build -t mvntomgit https://github.com/CloudTechDevOps/project-1-maven-jenkins-CICD-docker-eks-.git

===================================================================================

######### CMD vs Entrypoint ###

### CMD ##
CMD Instruction
The CMD instruction in a Dockerfile specifies the command that will run by default
when a container is started.
It can be overridden with command-line arguments during the docker run command.

FROM centos:latest
# Update repository configuration
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|
g' /etc/yum.repos.d/CentOS-Linux-*
# Install httpd by default
CMD ["yum", "-y", "install", "httpd"]

In this example, if you run the container without specifying a command (docker run
my-centos-image), it will execute the default CMD and install httpd.
If you provide additional arguments (docker run my-centos-image yum -y install
git), they will override the default CMD and install git instead.
Note: if we run docker run image along with the argument "yum install -y git", the
new argument overwrites the default CMD; it will install git only and ignore httpd.

##### Entrypoint

The ENTRYPOINT instruction sets the main command that will be run when a container
is started.
It's often used to specify the executable or script that should be run as the
primary process within the container.

FROM centos:latest
# Update repository configuration
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|
g' /etc/yum.repos.d/CentOS-Linux-*
ENTRYPOINT ["yum", "install", "-y", "git"]

In this example, ENTRYPOINT is set to install git. If you run the container without
additional arguments (docker run my-centos-image), it will always install git.
If you provide arguments (docker run my-centos-image httpd), they are appended to
the ENTRYPOINT, so it will install httpd along with git.

note: condition-1: after docker run image ---- it will install git
      condition-2: if we run docker run image httpd ---- it will install git and
httpd also

#Combination of CMD and ENTRYPOINT


When both CMD and ENTRYPOINT are specified in a Dockerfile, CMD provides default
arguments for ENTRYPOINT.
This allows flexibility where CMD can define parameters commonly passed to the main
command specified by ENTRYPOINT

FROM centos:centos7
ENTRYPOINT ["yum", "install", "-y"]
CMD ["git"]

Here, ENTRYPOINT is set to run "yum install -y", and CMD provides "git" as the
default argument (the ENTRYPOINT itself is not overridden).
If you provide arguments (docker run my-centos-image tree), they will override the
default CMD and install tree only.

######## Docker network ##############

#Bridge#

Docker provides a default bridge network. Containers connected to this network can
communicate with each other using their IP addresses

Key Features of Docker Bridge Network


Isolation: Containers on a bridge network are isolated from the host and other
networks, which enhances security.
Communication: Containers on the same user-defined bridge network can communicate
with each other using container names (on the default bridge, they use IP
addresses).
Default Gateway: Containers have a default gateway to the bridge network, allowing
them to connect to external networks through the host.

#host#
Docker host network mode allows a container to share the network stack of the
Docker host.
This means that the container will use the host's IP address and will not have its
own separate network namespace.
When you run a container in host network mode, it directly binds to the host's
network interfaces.

Key Features of Docker host Network

Simplified Network Configuration:


No need to map ports between the container and the host, as the container can use
any port available on the host.
Reduces complexity in scenarios where many ports need to be exposed or where
dynamic port assignments are challenging to manage.

No Docker Networking Overhead:


Since the container uses the host’s networking directly, there is no Docker network
driver overhead.
This can result in lower latency and higher throughput for network traffic.

Performance:

Improved network performance due to the elimination of network address translation


(NAT) and bridge network overhead.
Suitable for network-intensive applications that benefit from direct access to the
host's network.

##None#

In Docker, the none network mode is used to disable networking for a container.
When you run a container with the none network mode, it has no network interfaces
apart from the loopback interface.
This means the container cannot access external networks or communicate with other
containers.

Key Features of Docker None Network

Isolation: Containers are completely isolated from all network communications.


Security: Provides an additional layer of security by preventing any network
access.
Testing: Useful for testing applications in a fully isolated environment or when
network functionality is not required.

::Practical::

docker run -dt --name container1 ubuntu


docker run -dt --name container2 ubuntu

docker network ls
create two containers
container1
container2
docker inspect <containername>

container1: 172.17.0.2
container2: 172.17.0.3

Note: the default network is bridge

login to container
docker exec -it <containernmae> /bin/bash

## need to install ping utilities by using the below commands (if the ping command
does not work)

#apt-get update -y

#apt install iputils-ping

check both containers ips and try to access each other

ping 172.17.0.3

# host type

Docker network host, also known as Docker host networking, is a networking mode in
which a Docker container shares its network namespace with the host machine. The
application inside the container can be accessed using a port at the host's IP
address

docker run -dt --name <container> --network host <image>


ex:docker run -d --network host nginx

#None type

none network in docker means when you don't want any network interface for your
container. If you want to completely disable the networking on a container, you can
use the --network none flag when starting the container.

docker run -dt --name <container> --network none ubuntu

# to remove a network: docker network rm <networkname>

# Create custom network # (optional)

docker network create <networkname> # to maintain a private connection from one
container to another container

docker run -dt --name <container-name> --network <created-networkname> <image-name>

ex: docker run -dt --name container3 --network private ubuntu
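One practical benefit of a user-defined network is built-in DNS: containers on the same custom network can reach each other by container name instead of IP. A sketch, with the network and container names assumed:

```shell
docker network create private
docker run -dt --name container3 --network private ubuntu
docker run -dt --name container4 --network private ubuntu

# login to container4; the name "container3" resolves automatically
docker exec -it container4 /bin/bash
# (inside the container) apt-get update -y && apt install -y iputils-ping
# (inside the container) ping container3
```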

##########################################################
---------------Docker volumes ----------

Volumes are a mechanism for storing data outside containers. All volumes are
managed by Docker and stored in a dedicated directory on your host, usually
/var/lib/docker/volumes for Linux systems.

### to create volume


docker volume create <volume-name>
docker volume ls ##to check list of volumes
docker volume inspect <volumename>
docker volume rm <volumename> ## to remove volume
### to check created volume path
cd /var/lib/docker/volumes

## run a container from the ubuntu image along with the created volume

docker run -ti --name=container1 -v volume1:/volume1 ubuntu # here you can give
any name in place of volume1 if required

echo "Share this file between containers" > /volume1/Example.txt

after deleting the container, the volume still persists --- volumes are stored in
the host path /var/lib/docker/volumes
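To see the persistence and sharing in action, a second container can mount the same volume and read the file written by container1 (a sketch; the container name is assumed):

```shell
docker run -ti --name=container2 -v volume1:/volume1 ubuntu
# inside container2, the file written earlier is visible:
cat /volume1/Example.txt
```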

--------docker compose installation ------------

sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose


- - - - - - - -- - - - - - - - -- - - - - -- - -
docker-compose version

version: '3'
services:
  mydb:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: test
  mysite:
    image: wordpress
    links:
      - mydb:site
    ports:
      - published: 8080
        target: 80

filename : docker-compose.yaml

after run give below details

database name : MySQL
user : root
password : test
database host : mydb

docker-compose -f <filename.yaml> up # need to give this command for a custom file name
docker-compose up ----- # to run docker compose
docker-compose ps ----- # to see container status

########### list of commands on Docker compose ######


Docker-Compose commands.
To pull docker images.
docker-compose pull

To create all containers using docker-compose file.


docker-compose up

To create all containers using docker-compose file with detached mode.


docker-compose up -d

To stop all running containers with docker-compose.


docker-compose stop

To view the config file.


docker-compose config

To remove all stopped containers.


docker-compose rm

To view the logs of all containers.


docker-compose logs

To view all images.


docker-compose images

To view all containers created by the docker-compose file.


docker-compose ps

To restart containers with docker-compose file.


docker-compose restart

------Docker Swarm:------
Docker Swarm is a container orchestration tool for clustering and scheduling Docker
containers. With Swarm, IT administrators and developers can establish and manage a
cluster of Docker nodes as a single virtual system. Docker Swarm lets developers
join multiple physical or virtual machines into a cluster.
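The basic Swarm workflow can be sketched as follows (the join token and manager IP are placeholders):

```shell
docker swarm init                                    # make this node a manager
docker swarm join --token <token> <manager-ip>:2377  # run on each worker node
docker service create --name web --replicas 3 -p 80:80 nginx  # schedule a service
docker service ls                                    # list services in the cluster
```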

Docker Swarm Vs Kubernetes

Docker Swarm does not support autoscaling, but Kubernetes does.

Kubernetes advantages :
It has a large open source community, backed by Google.
It supports every operating system.
It can sustain and manage large architectures and complex workloads.
It is automated and has a self-healing capacity that supports automatic scaling.
It has built-in monitoring and a wide range of integrations available.
It is offered by all three key cloud providers: Google, Azure, and AWS.
