Docker_final_doc
===================================================================================
Note: if the number of applications is large, go with the microservices concept (Docker
and Kubernetes).
DOCKER IMAGE:
Docker images are read-only templates that contain instructions for creating a
container. A Docker image is a snapshot or blueprint of the libraries and
dependencies required inside a container for an application to run.
CONTAINERS:
A Docker container image is a lightweight, standalone, executable package of
software that includes everything needed to run an application: code, runtime,
system tools, and libraries.
A container behaves much like a server/VM, but is operating-system independent.
(EC2 server = AMI, container = image)
The OS layer is managed in the image.
Note: By default Docker works as the root user, and other users can only access
Docker with sudo commands. However, we can avoid sudo by creating a new group
named docker and adding ec2-user to it.
newgrp docker
sudo chmod 666 /var/run/docker.sock # give access to the Docker daemon socket
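The group setup described above can be sketched as follows, assuming an Amazon Linux EC2 host whose default user is ec2-user:

```shell
sudo groupadd docker               # create the docker group (may already exist)
sudo usermod -aG docker ec2-user   # add ec2-user to the group
newgrp docker                      # pick up the new group without logging out
docker ps                          # should now work without sudo
```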
# If you want to see an extended version of the version details, such as the API
version, Go version, and engine version, use the version command without dashes.
Give the below command:
docker version
# ----commands:---- #
docker run -it <imagename> /bin/bash # enter the container's interactive terminal
docker run -dt <imagename> # detached terminal; we do not enter the container
## To log in to a running container ##
docker exec -it <containername or containerid> /bin/bash # log in to a running container
1. To come out of the container without stopping it, press "Ctrl+P, Ctrl+Q" or type exit
(the container was already running when we logged in, so exiting has no impact)
## To create and log in to a container ##
docker run -it <imagename> /bin/bash # create a container from the image in
interactive (login) mode
1. To come out of the container without stopping it, press "Ctrl+P, Ctrl+Q"
docker run -dt <imagename> (detached mode; we do not interact with the container)
ps -ef # shows how many processes are running; on a VM we see many processes, but
inside a container only a few, because containers are lightweight
## danger commands ##
docker rmi <imagename> # delete an image, but only if it is not tagged to any
container, whether running or stopped
docker rmi <imageID> -f # forcefully delete an image even if its containers,
whether running or stopped, still exist
docker system prune -a # remove all stopped containers and all unused images
(including images tagged only to stopped containers)
## container ##
docker rm <containername or containerid> ### to remove a stopped container
# --Enabling ports-- #
docker run -dt -p 3000:3000 --name project-2 ubuntu # run with the port exposed
===================================================================================
docker run -dt -p <host port>:<container port> --name <container name> <image name>
============================================================
# IMAGE PUSH #
======================================================================
######### docker push to a Docker Hub private repository #########
docker login # log in first
docker tag <image> <dockerusername>/<image>
docker push <dockerusername>/<image>
docker pull <dockerusername>/<image>
================================================================
# ECR login #
Note: If you receive an error using the AWS CLI, make sure that you have the latest
versions of the AWS CLI and Docker installed, and create an IAM role with ECR
permissions and attach it to the EC2 instance.
Build your Docker image using the following command. For information on building a
Dockerfile from scratch, see the instructions here. You can skip this step if your
image is already built.
# Tagging #
# Push #
docker push 992382358200.dkr.ecr.ap-south-1.amazonaws.com/maventomcat:latest
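The Tagging and Push headings above have no commands in the notes; a hedged sketch of the usual ECR sequence, taking the region and registry from the push command above and assuming a locally built image named maventomcat:

```shell
# Authenticate Docker to the ECR registry
aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin 992382358200.dkr.ecr.ap-south-1.amazonaws.com
# Tag the local image for ECR
docker tag maventomcat:latest 992382358200.dkr.ecr.ap-south-1.amazonaws.com/maventomcat:latest
# Push the tagged image
docker push 992382358200.dkr.ecr.ap-south-1.amazonaws.com/maventomcat:latest
```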
===================================================================================
This will execute the default command defined in the Dockerfile of the test image.
If that command is an interactive shell or a long-running process, you will see the
output in your terminal.
If the command completes quickly (e.g., a command that just prints something and
exits), the container will stop, and you won't be able to interact with it.
Example:
docker run --name container-1 test
2. -it
Usage: -it
Purpose: This is a combination of two flags:
-i: Runs the container in interactive mode
-t: Allocates a pseudo-TTY, allowing you to interact with the container as if it
were a terminal.
Example:
docker run -it --name container-1 test
docker run -it --name container-1 test: Starts the container in interactive mode,
allowing you to interact with it directly.
Detached with terminal allocation:
docker run -dt --name container-1 test: Runs the container in the background while
still allocating a pseudo-terminal. You can use it for future commands like docker
exec.
Example Scenarios
Use -it when you need to debug or interact directly with the container.
Use -dt if you want the flexibility of a pseudo-terminal while running the
container in the background.
#example-1
FROM ubuntu:20.04
WORKDIR /app
Case-1:
## Using sh -c
When you use sh -c, you're explicitly invoking a shell to run the commands. This
allows you to use shell features such as command chaining with ; and &&.
Instruction                                              Behavior
CMD ["sh", "-c", "cat /app/myfile.txt; sleep 60"]        sleep 60 runs unconditionally, whether cat succeeds or fails.
CMD ["sh", "-c", "cat /app/myfile.txt && sleep 3600"]    sleep 3600 runs conditionally, only if cat succeeds.
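Putting Case-1 together, a minimal sketch of the full Dockerfile (assuming a myfile.txt sits next to the Dockerfile on the host):

```dockerfile
FROM ubuntu:20.04
WORKDIR /app
# Assumption: myfile.txt exists in the build context
COPY myfile.txt /app/myfile.txt
# Shell form via sh -c: sleep runs even if cat fails
CMD ["sh", "-c", "cat /app/myfile.txt; sleep 60"]
```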
--------------------------------
##Without sh -c:
FROM ubuntu:20.04
WORKDIR /app
Explanation:
Single Command Execution: Only the command specified is executed, without any shell
features.
For example, CMD ["cat", "/app/myfile.txt", "&&", "sleep", "60"] would not work as
intended, because cat would treat && and the remaining words as file names to read,
and it would fail because cat doesn't know how to process &&.
No Command Chaining: You cannot chain commands together directly. Each command must
be specified separately, and you can't use shell-specific operators like &&.
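A minimal sketch of the exec-form variant without sh -c (again assuming myfile.txt is in the build context); only the single command runs, with no shell operators:

```dockerfile
FROM ubuntu:20.04
WORKDIR /app
COPY myfile.txt /app/myfile.txt
# Exec form: one command only, no chaining with ; or &&
CMD ["cat", "/app/myfile.txt"]
```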
-------------------------------------------------------
###### Example 2 ######### COPY ####
FROM ubuntu:20.04
# Copy all files and directories from the current host directory to the container's
/app directory
COPY . .
Copies all files and directories from the current directory on the host (where
docker build is executed) to the /app directory in the container.
CMD ["sh", "-c", "cat /app/myfile.txt; sleep 60"]:
Ensures that the container prints the content of myfile.txt and remains running for
60 seconds.
-------------------- Example 3-----#### ADD ####------
# Use the ADD instruction to download and extract the tarball from the GitHub
release
ADD https://github.com/torvalds/linux/archive/refs/tags/v5.14.tar.gz /app/
# List the contents of the /app directory and keep the container running with sleep
CMD ["sh", "-c", "ls /app && sleep 3600"]
Ex: 1
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/ # copy current-directory files into
the destination path, that is /usr/local/apache2/htdocs/
Ex:2
Note:
-D FOREGROUND is not a Docker command; it is an Apache server argument used to run
the web server in the foreground. If we do not use this argument, the server will
start, the main process will exit, and the container will stop.
FROM centos:7
RUN yum update -y && yum install -y httpd
COPY index.html /var/www/html/
EXPOSE 80
#httpdserver
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
------------------------------------------------------------------
2. If Python runs on Ubuntu
----------------------------------------------------------------------
3. Deploy python flask application by cloning direct python image
FROM python:3.6
MAINTAINER veera "[email protected]"
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
----------------------------------------------------------------------
ENTRYPOINT ["python3"]
CMD ["app.py"]
(or)
CMD ["python3", "app.py"]
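A hedged sketch of building and running the Flask image above (the tag, container name, and port are assumptions; Flask's default port is 5000):

```shell
docker build -t flask-app .
docker run -dt -p 5000:5000 --name flask-1 flask-app
curl http://localhost:5000   # check that the app responds
```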
----------------------------------------------
Node.js----------------------------------------------
# Use Node.js based on Debian Slim as the base image
FROM node:16-slim
WORKDIR /app
# Copy package.json and sources before installing
COPY . .
# Install dependencies
RUN npm install
CMD ["node", "app.js"]  # entry point assumed to be app.js
---------------------------------------------Node.js on Ubuntu-------------------
# Install dependencies
RUN npm install
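The Ubuntu variant above is only a fragment; a minimal complete sketch (the base image version, install commands, and app.js entry point are assumptions):

```dockerfile
FROM ubuntu:20.04
# Install Node.js and npm from the Ubuntu repositories
RUN apt-get update -y && apt-get install -y nodejs npm
WORKDIR /app
# Copy package.json and sources before installing
COPY . .
RUN npm install
CMD ["node", "app.js"]
```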
==================================================================================
FROM tomcat:latest
COPY webapp/target/webapp.war /usr/local/tomcat/webapps/webapp.war
RUN cp -R /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps
# Scenario 2: multi-stage -- build with a Maven image in one Dockerfile stage and
deploy the WAR on Tomcat
FROM tomcat:latest
COPY --from=build /app/webapp/target/webapp.war /usr/local/tomcat/webapps/webapp.war # copy files from stage-1 (Maven) of the Dockerfile into the Tomcat path
RUN cp -R /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps # add
dependencies to serve the base Tomcat page
ARG MAVEN_VERSION=3.9.6
RUN wget https://dlcdn.apache.org/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz && \
    tar -zxvf apache-maven-${MAVEN_VERSION}-bin.tar.gz && \
    rm apache-maven-${MAVEN_VERSION}-bin.tar.gz && \
    mv apache-maven-${MAVEN_VERSION} /usr/lib/maven
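Scenario 2 above references --from=build, but the build stage itself is not shown in the notes; a hedged sketch of the full multi-stage Dockerfile (the Maven image tag and project layout are assumptions):

```dockerfile
# Stage 1: build the WAR with Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY . .
RUN mvn -f webapp/pom.xml clean package

# Stage 2: deploy the WAR on Tomcat
FROM tomcat:latest
COPY --from=build /app/webapp/target/webapp.war /usr/local/tomcat/webapps/webapp.war
RUN cp -R /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps
```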
===================================================
### Docker file MySQL ##
====================================================
-- Create a sample table with a local script named init.sql
# mysql -u admin -p
(enter the password)
mysql> show databases;
mysql> use test; # database named test; please check the Dockerfile
mysql> show tables;
mysql> select * from users; # the table name is users; check the script init.sql
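A hedged sketch of what init.sql might contain, matching the test database and users table referenced above (the column names and sample rows are assumptions):

```sql
CREATE DATABASE IF NOT EXISTS test;
USE test;
CREATE TABLE IF NOT EXISTS users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100)
);
INSERT INTO users (name) VALUES ('alice'), ('bob');
```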
===================================================================================
RUN A DOCKERFILE DIRECTLY, TAKING THE SOURCE FROM GITHUB
======================================================
docker build -t <imagename> <github projecturl>
Example:
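The Example heading above is blank in the notes; a hedged illustration of the syntax (the repository URL is a placeholder):

```shell
docker build -t myapp https://github.com/<user>/<repo>.git
```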
===================================================================================
### CMD ##
CMD Instruction
The CMD instruction in a Dockerfile specifies the command that will run by default
when a container is started.
It can be overridden with command-line arguments during the docker run command.
FROM centos:latest
# Update repository configuration
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
# Install httpd by default at container start
CMD ["yum", "-y", "install", "httpd"]
In this example, if you run the container without specifying a command (docker run
my-centos-image), it will execute the default CMD and install httpd.
If you provide additional arguments (docker run my-centos-image yum -y install git),
they override the default CMD and install git instead.
Note: if we run docker run <image> along with the argument yum install -y git, the
new argument overrides the CMD; it downloads git only and ignores httpd.
##### Entrypoint
The ENTRYPOINT instruction sets the main command that will be run when a container
is started.
It's often used to specify the executable or script that should be run as the
primary process within the container.
FROM centos:latest
# Update repository configuration
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-*
RUN sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
ENTRYPOINT ["yum", "install", "-y", "git"]
In this example, ENTRYPOINT is set to install git. If you run the container without
additional arguments (docker run my-centos-image), it will always install git.
If you provide arguments (docker run my-centos-image httpd), they are appended to
the ENTRYPOINT, so it installs httpd along with git.
Note: condition 1: docker run <image>        -- installs git
      condition 2: docker run <image> httpd  -- installs git and httpd
FROM centos:centos7
ENTRYPOINT ["yum", "install", "-y"]
CMD ["git"]
Here, ENTRYPOINT is set to run "yum install -y", and CMD provides "git" as the
default argument (the ENTRYPOINT "yum install -y" is not overridden).
If you provide arguments (docker run my-centos-image tree), they override the
default CMD and install tree only.
#Bridge#
Docker provides a default bridge network. Containers connected to this network can
communicate with each other using their IP addresses.
#host#
Docker host network mode allows a container to share the network stack of the
Docker host.
This means that the container will use the host's IP address and will not have its
own separate network namespace.
When you run a container in host network mode, it directly binds to the host's
network interfaces.
Performance: because there is no NAT or port mapping in host mode, that overhead is avoided and networking can be faster.
##None#
In Docker, the none network mode is used to disable networking for a container.
When you run a container with the none network mode, it has no network interfaces
apart from the loopback interface.
This means the container cannot access external networks or communicate with other
containers.
::Practical::
docker network ls
create two containers
container1
container2
docker inspect <containername>
container1: 172.17.0.2
container2: 172.17.0.3
Log in to a container:
docker exec -it <containername> /bin/bash
## ping may not be available inside the container; install it with the command below
# apt-get update -y && apt-get install -y iputils-ping
ping 172.17.0.3
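The practical steps above can be sketched end-to-end (container names and the image are assumptions; the IP addresses depend on your bridge network):

```shell
docker run -dt --name container1 ubuntu
docker run -dt --name container2 ubuntu
# Print container2's IP on the default bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container2
docker exec -it container1 /bin/bash
# inside container1:
apt-get update -y && apt-get install -y iputils-ping
ping -c 3 172.17.0.3
```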
# host type
Docker network host, also known as Docker host networking, is a networking mode in
which a Docker container shares its network namespace with the host machine. The
application inside the container can be accessed using a port at the host's IP
address
#None type
none network in docker means when you don't want any network interface for your
container. If you want to completely disable the networking on a container, you can
use the --network none flag when starting the container.
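A hedged illustration of selecting these network modes at run time (image and container names are placeholders):

```shell
docker run -dt --network host --name web-host nginx   # shares the host's network stack
docker run -dt --network none --name isolated ubuntu  # loopback only, no external network
```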
##########################################################
---------------Docker volumes ----------
Volumes are a mechanism for storing data outside containers. All volumes are
managed by Docker and stored in a dedicated directory on your host, usually
/var/lib/docker/volumes for Linux systems.
Even after a container is deleted, its volume persists; volumes are stored on the
host under /var/lib/docker/volumes.
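A hedged sketch of the basic volume workflow described above (volume and container names are assumptions):

```shell
docker volume create mydata                            # create a named volume
docker run -dt --name vol-test -v mydata:/data ubuntu  # mount it into a container
docker rm -f vol-test                                  # delete the container
docker volume ls                                       # the volume still exists
```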
version: '3'
services:
  mydb:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: test
  mysite:
    image: wordpress
    links:
      - mydb:site
    ports:
      - published: 8080
        target: 80
filename: docker-compose.yaml
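A hedged note on running the docker-compose.yaml above from the directory that contains it:

```shell
docker-compose up -d   # start mydb and mysite in the background
docker-compose ps      # list the running services
docker-compose down    # stop and remove them
```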
------Docker Swarm:------
Docker Swarm is a container orchestration tool for clustering and scheduling Docker
containers. With Swarm, IT administrators and developers can establish and manage a
cluster of Docker nodes as a single virtual system. Docker Swarm lets developers
join multiple physical or virtual machines into a cluster.
Docker Swarm does not support autoscaling, but Kubernetes does.
Kubernetes advantages :
It has a large open source community, backed by Google.
It supports every operating system.
It can sustain and manage large architectures and complex workloads.
It is automated and has a self-healing capacity that supports automatic scaling.
It has built-in monitoring and a wide range of integrations available.
It is offered by all three key cloud providers: Google Cloud, Azure, and AWS.