BASED ON USERS AND APP COMPLEXITY WE NEED TO SELECT THE ARCHITECTURE.

FACTORS AFFECTING THE USE OF MICROSERVICES:
F-1: COST
F-2: MAINTENANCE

CONTAINERS:
a container is similar to a server/vm.
it will not have its own operating system.
the os comes from the image.
(SERVER = AMI, CONTAINER = IMAGE)
it is free of cost and we can create multiple containers.

DOCKER:
It is a free & opensource tool.
it is platform independent.
used to create, run & deploy applications on containers.
it was introduced in 2013 by Solomon Hykes & Sebastien Pahl.
Docker is developed in the GO language.
here compose files are written in YAML.
before docker, users faced a lot of problems; with docker the application runs consistently.
Docker will use the host resources (cpu, mem, n/w, os).
Docker can run on any OS but it natively supports Linux distributions.

CONTAINERIZATION:
Process of packing an application with its dependencies.
ex: PUBG
APP = PUBG & DEPENDENCY = MAPS
APP = CAKE & DEPENDENCY = KNIFE
It is OS-level virtualization.

VIRTUALIZATION:
able to create resources with our hardware properties.

ARCHITECTURE & COMPONENTS OF DOCKER:
client: it will interact with the user.
the user gives commands and they will be executed by the docker client.
Registry: manages the images.

yum install docker -y       #client
systemctl start docker      #client, Engine
systemctl status docker

COMMANDS:
docker pull ubuntu                  : pull ubuntu image
docker images                       : to see the list of images
docker run -it --name cont1 ubuntu  : to create a container
-it (interactive)                   : to go inside the container
cat /etc/os-release                 : to see the os flavour

apt update -y                       : to update
redhat = yum
ubuntu = apt
without update we can't install any pkg in ubuntu

apt install git -y
apt install apache2 -y
service apache2 start
service apache2 status

ctrl + p + q                        : to exit a container without stopping it
docker ps -a                        : to list all containers
docker attach cont_name             : to go inside a container
docker stop cont_name               : to stop a container
docker start cont_name              : to start a container
docker pause cont_name              : to pause a container
docker unpause cont_name            : to unpause a container
docker inspect cont_name            : to get complete info of a container
docker rm cont_name                 : to delete a container

STOP: will wait for the processes running inside the container to finish.
KILL: won't wait for the processes running inside the container to finish.
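Putting the commands above together, a minimal end-to-end session (the container name cont1 is just an example):

docker pull ubuntu
docker run -it --name cont1 ubuntu     # creates the container and drops us inside

# inside the container: install and start apache
apt update -y
apt install apache2 -y
service apache2 start

# detach with ctrl + p + q, then manage it from the host
docker ps -a
docker stop cont1      # graceful: waits for the processes to finish
docker start cont1
docker kill cont1      # forceful: does not wait
docker rm cont1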
============================================

DOCKERFILE:
it is an automated way to create an image.
here we use components (instructions) to create the image.
in Dockerfile the D must be Capital.
Components are also written in capitals.
This Dockerfile will be Reusable.
here we can create an image directly, without the help of a container.
Name: Dockerfile

docker kill $(docker ps -qa)
docker rm $(docker ps -qa)
docker rmi -f $(docker images -qa)

COMPONENTS:

FROM       : used to select the base image
RUN        : used to run linux commands (during image creation)
CMD        : used to run linux commands (after container creation)
ENTRYPOINT : higher priority than CMD
COPY       : to copy local files to the container
ADD        : to copy internet files to the container
WORKDIR    : to open the required directory
LABEL      : to add labels to docker images
ENV        : to set env variables (inside the container)
ARG        : to pass variables at build time (outside the container)
EXPOSE     : to give the port number

EX-3:
FROM ubuntu
COPY index.html /tmp
ADD http://dlcdn.apache.org/tomcat/tomcat-9/v9.0.89/bin/apache-tomcat-9.0.89.tar.gz /tmp

docker build -t raham:v3 .
docker run -it --name cont3 raham:v3

EX-4:
FROM ubuntu
COPY index.html /tmp
ADD http://dlcdn.apache.org/tomcat/tomcat-9/v9.0.89/bin/apache-tomcat-9.0.89.tar.gz /tmp
WORKDIR /tmp
LABEL author="rahamshaik"

docker build -t raham:v4 .
docker run -it --name cont4 raham:v4

EX-5:
FROM ubuntu
LABEL author="rahamshaik"
ENV client swiggy
ENV server appserver

docker build -t raham:v5 .
docker run -it --name cont5 raham:v5
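As a sketch, one Dockerfile that exercises most of the components above in a single build (the index.html file, the APP_ENV variable and the port are assumptions, not from the examples):

# ARG is a build-time variable; ENV lives inside the running container
ARG version=latest
FROM ubuntu:${version}
LABEL author="rahamshaik"
ENV APP_ENV=dev
WORKDIR /var/www/html
COPY index.html .
RUN apt update && apt install -y apache2
EXPOSE 80
# CMD runs after container creation; an ENTRYPOINT would take priority over it
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

docker build -t demo:v1 .
docker run -it --name demo1 -p 80:80 demo:v1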
VOLUMES:
at a time we can share a single volume with a single container only.
every volume is stored under /var/lib/docker/volumes.

METHOD-1: FROM THE DOCKERFILE:

FROM ubuntu
VOLUME ["/volume1"]

docker build -t raham:v1 .
docker run -it --name cont1 raham:v1
cd volume1/
touch file{1..5}
cat > file1
ctrl + p + q

docker run -it --name cont2 --volumes-from cont1 --privileged=true ubuntu
docker run -it --name cont3 --volumes-from cont1 --privileged=true ubuntu

METHOD-2: FROM THE CLI:

docker run -it --name cont4 -v volume2 ubuntu
cd volume2/
touch java{1..5}
ctrl + p + q

NETFLIX-DEPLOYMENT:

yum install git -y
git clone https://github.com/RAHAMSHAIK007/netflix-clone.git
mv netflix-clone/*

Dockerfile:

FROM ubuntu
RUN apt update
RUN apt install apache2 -y
COPY * /var/www/html/
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

docker build -t netflix:v1 .
docker run -it --name netflix1 -p 80:80 netflix:v1

MULTI-STAGE BUILD:
we build image-1 from one stage of a Dockerfile and use that image-1 to build the final image (a sketch follows below).

Dockerfile -- > image1

COMPRESSING DOCKER IMAGE SIZE:
1. push to dockerhub
2. use multi-stage docker build
3. reduce layers
4. use tar balls
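A minimal multi-stage sketch, reusing the Tomcat tarball URL from EX-3; only the artifact is copied into the final image, so the download tooling never bloats it:

# stage 1 (image1): has the heavy build/download tooling
FROM ubuntu AS build
RUN apt update && apt install -y wget
RUN wget -q http://dlcdn.apache.org/tomcat/tomcat-9/v9.0.89/bin/apache-tomcat-9.0.89.tar.gz -P /tmp

# stage 2: final image keeps only the artifact from image1
FROM ubuntu
COPY --from=build /tmp/apache-tomcat-9.0.89.tar.gz /opt/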
============================================

HIGH AVAILABILITY: more than one server.
why: if one server goes down, the other servers will still serve the app.

DOCKER SWARM:
its an orchestration tool for containers.
used to manage multiple containers on multiple servers.
here we create a cluster (group of servers).
in that cluster, we can create the same container on multiple servers.
here we have the manager node and the worker nodes.
the manager node will create & distribute the containers to the worker nodes.
the worker node's main purpose is to maintain the containers.
without docker engine we can't create the cluster.
Port: 2377
a worker node will join the cluster by using a token.
the manager node will give the token.

SETUP:
create 3 servers
install docker and start the service
hostnamectl set-hostname manager/worker-1/worker-2
Enable port 2377

docker swarm init (manager) -- > copy-paste the token to the worker nodes (see the sketch at the end of this section)
docker node ls

docker service create --name movies --replicas 3 -p 81:80 vijaykumar444p/movies:latest
docker service ls                : to list services
docker service inspect movies    : to get complete info of a service
docker service ps movies         : to list the containers of movies
docker service scale movies=10   : to scale up the containers
docker service scale movies=3    : to scale down the containers
docker service rollback movies   : to go to the previous state
docker service logs movies       : to see the logs
docker service rm movies         : to delete the service

when we scale down it follows the LIFO pattern.
LIFO MEANS LAST-IN FIRST-OUT.

Note: if we delete a container, it will recreate automatically.
it is called self healing.

CLUSTER ACTIVITIES:
docker swarm leave (worker)       : to make a node inactive in the cluster
To activate the node again, copy and paste the token.
docker node rm node-id (manager)  : to delete a worker node which is in the down state
docker node inspect node_id       : to get complete info of a worker node
docker swarm join-token manager   : to generate the token to join as a manager

Note: we can't delete a node which is in the ready state.
if we want to join the node to the cluster again, we need to paste the token on the worker node.
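A sketch of the token flow described above (the IP address is an assumption; the real token comes from the init output):

# on the manager (port 2377 must be open)
docker swarm init --advertise-addr 10.0.0.10
docker swarm join-token worker     # prints the join command with the token

# on each worker: paste the printed command, e.g.
docker swarm join --token <token> 10.0.0.10:2377

# back on the manager
docker node ls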
============================================

COMPONENTS:
MASTER:

execution:
kubectl create -f pod.yml (a sample pod.yml is sketched below)
kubectl get pods/pod/po
kubectl get pod -o wide
kubectl describe pod pod1
kubectl delete -f raham.yml

DRAWBACK: once a pod is deleted we can't retrieve the pod.
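The pod.yml itself is not shown in these notes; a minimal sketch (the name pod1 and the nginx image are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: cont1
    image: nginx

kubectl create -f pod.yml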
============================================

REPLICASET:
rs -- > pods (a sample rs manifest is sketched below)
it will create multiple copies of the same pod.
if we delete one pod, it will automatically create a new pod.
IF A POD WAS CREATED LAST, IT WILL BE DELETED FIRST WHEN WE SCALE DOWN.
LIFO: LAST IN FIRST OUT.

ADV:
Self healing
scaling

DRAWBACKS:
1. we can't roll in and roll out, i.e. we can't update the application in an rs.

To list rs              : kubectl get rs/replicaset
To show additional info : kubectl get rs -o wide
To show complete info   : kubectl describe rs name-of-rs
To delete the rs        : kubectl delete rs name-of-rs
to get labels of pods   : kubectl get pods -l app=paytm
to delete pods by label : kubectl delete po -l app=paytm
To scale the rs         : kubectl scale rs/movies --replicas=10 (LIFO)
to show all pod labels  : kubectl get pods --show-labels
To delete all pods      : kubectl delete pod --all
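The rs manifest is not shown in the notes; a minimal sketch matching the commands above (labels and image are examples):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: paytm
  template:
    metadata:
      labels:
        app: paytm
    spec:
      containers:
      - name: cont1
        image: nginx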
DEPLOYMENT:
deploy -- > rs -- > pods
we can update the application.
its a high-level k8s object.

vim deploy.yml

kubectl rollout history deploy/movies
kubectl rollout undo deploy/movies
kubectl rollout status deploy/movies
kubectl rollout pause deploy/movies
kubectl rollout resume deploy/movies
(an example update that these commands act on is sketched at the end of this section)

COMMANDS FOR SHORTCUTS:

vim .bashrc

alias kgp="kubectl get pods"
alias kgr="kubectl get rs"
alias kgd="kubectl get deploy"

KOPS USER SETUP:
IAM -- > USER -- > CREATE USER -- > NAME: KOPS -- > Attach Policies Directly -- > AdministratorAccess -- > NEXT -- > CREATE USER
USER -- > SECURITY CREDENTIALS -- > CREATE ACCESS KEYS -- > CLI -- > CHECKBOX -- > CREATE ACCESS KEYS -- > DOWNLOAD

aws configure (run this command on the server)

ADMIN ACTIVITIES:
To scale the worker nodes:
kops edit ig --name=rahamdevops.k8s.local nodes-us-east-1a
kops update cluster --name rahamdevops.k8s.local --yes --admin
kops rolling-update cluster --yes
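A sketch of an actual update for the rollout commands above to act on (the :v2 tag is hypothetical; cont1 matches the container name used in the manifests below):

# change the image; this creates a new rs and rolls the pods over
kubectl set image deploy/movies cont1=rahamshaik/moviespaytm:v2
kubectl rollout status deploy/movies
# roll back if the new version misbehaves
kubectl rollout undo deploy/movies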
NAMESPACES - TYPES:

default         : the default namespace; all objects are created here unless a namespace is specified.
kube-node-lease : stores the lease objects used for node heartbeats.
kube-public     : all the public objects are stored here.
kube-system     : k8s itself creates some default objects; those are stored in this ns.

NOTE: Every component of the Kubernetes cluster runs in the form of a pod,
and all of these pods are stored in the KUBE-SYSTEM ns.

kubectl get pod -n kube-system   : to list all pods in the kube-system namespace
kubectl get pod -n default       : to list all pods in the default namespace
kubectl get pod -n kube-public   : to list all pods in the kube-public namespace
kubectl get po -A                : to list all pods in all namespaces
kubectl get po --all-namespaces  : same as above

kubectl create ns dev                                 : to create a namespace
kubectl config set-context --current --namespace=dev  : to switch to the namespace
kubectl config view --minify | grep namespace         : to see the current namespace
kubectl run dev1 --image nginx
kubectl run dev2 --image nginx
kubectl run dev3 --image nginx
kubectl create ns test                                : to create a namespace

SERVICE TYPES:

1. CLUSTERIP: It will work inside the cluster.
it will not expose the app to the outer world.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: ClusterIP
  selector:
    app: movies
  ports:
  - port: 80

DRAWBACK:
We cannot use the app outside the cluster.
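One way to verify a ClusterIP service from inside the cluster (a sketch; the busybox pod is a throwaway, and service1 is the service name defined above):

kubectl run tmp --image=busybox --restart=Never -it --rm -- wget -qO- http://service1:80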
2. NODEPORT: It will expose our application on a particular port of each node.
Range: 30000 - 32767 (in the sg we need to allow this traffic).
if we don't specify a port, the k8s service will take a random port number from that range.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  selector:
    app: movies
  ports:
  - port: 80
    nodePort: 31111

3. LOADBALANCER: It will expose our app and distribute the load b/w the pods.
it will expose the application with dns [Domain Name System] -- > port 53.
to create the dns we use Route53.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: swiggy
  name: swiggy-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
      - name: cont1
        image: rahamshaik/trainservice:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: abc
spec:
  type: LoadBalancer
  selector:
    app: swiggy
  ports:
  - port: 80
    targetPort: 80

============================================

SCALING: increasing the count.
why to scale: to handle the increasing load.

Metrics Server offers:
A single deployment that works on most clusters (see Requirements)
Fast autoscaling, collecting metrics every 15 seconds.
Resource efficiency, using 1 milli core of CPU and 2 MB of memory for each node in a cluster.
Scalable support up to 5,000 node clusters.

You can use Metrics Server for:
CPU/Memory based horizontal autoscaling (Horizontal Autoscaling)
Automatically adjusting/suggesting resources needed by containers (Vertical Autoscaling)

Horizontal: scales by adding NEW pods.
Vertical: resizes the EXISTING pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
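A sketch of wiring the deployment above to horizontal autoscaling (the thresholds are examples; requires the metrics server to be running):

# scales between 3 and 10 replicas when average CPU crosses 50%
kubectl autoscale deploy movies --min=3 --max=10 --cpu-percent=50
kubectl get hpa
kubectl top pods        # verify the metrics server is serving metrics
# note: the pods need CPU requests set for the percentage to mean anything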
============================================

QUOTAS:
a k8s cluster can be divided into namespaces.
By default a pod in K8s will run with no limitations on Memory and CPU.
But we need to give limits for the Pod.
A quota can limit the objects that can be created in a namespace and the total amount of resources.
when we create a pod, the scheduler will check the limits of the node to deploy the pod on it.
here we can set limits on CPU, Memory and Storage.
here CPU is measured in cores and memory in bytes.
1 cpu = 1000 millicpus (half cpu = 500 millicpus (or) 0.5 cpu)

Here Request means how much we want.
Limit means the maximum we can consume.

limits can be given to pods as well as nodes.
the default limit is 0 (i.e. no limit is set).

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "5"
    limits.cpu: "1"
    limits.memory: 1Gi

kubectl create -f dev-quota.yml
kubectl get quota

EX-1: MENTIONING LIMITS = SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "1"
            memory: 512Mi

kubectl create -f dep.yml

the same deployment with smaller limits:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "0.2"
            memory: 100Mi

kubectl create -f dep.yml

EX-2: MENTION LIMITS & REQUESTS = SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "0.2"
            memory: 100Mi

EX-3: MENTION only REQUESTS = NOT SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          requests:
            cpu: "0.2"
            memory: 100Mi
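A quick verification sketch for the quota and limits above (assumes the dev namespace with dev-quota applied, and the metrics server for top):

kubectl describe quota dev-quota -n dev   # shows used vs hard limits
kubectl top pods -n dev                   # actual usage (needs the metrics server)
# a 6th pod in dev would be rejected because pods: "5" is exhausted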