Kubernetes Multi Node Cluster Over AWS Cloud

The document provides the steps to install a multi-node Kubernetes cluster on AWS with kubeadm and to configure flannel as the pod network. It begins with installing Docker and enabling it on each node, then sets up the Kubernetes yum repository and installs kubeadm, kubelet and kubectl. It initializes the control plane with kubeadm, joins the worker node, configures flannel, and validates that the pods are running. Finally, it provides troubleshooting steps for the CoreDNS pods.

Setup Master Node:


# yum install docker -y

# systemctl enable docker --now

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
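
An optional check that the repo was added correctly:

# yum repolist enabled | grep -i kubernetes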

# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# systemctl enable --now kubelet

# kubeadm config images pull
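
To see which control-plane images the pull command fetches, kubeadm can also list them:

# kubeadm config images list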

# kubeadm init --pod-network-cidr=10.240.0.0/16

kubeadm warns when Docker is using the cgroupfs cgroup driver; the recommended driver is systemd, so configure Docker to use it and restart before re-running kubeadm init:

[root@localhost ~]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

[root@localhost ~]# cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}

[root@localhost ~]# systemctl restart docker

[root@localhost ~]# docker info | grep -i cgroup
Cgroup Driver: systemd

# yum install iproute-tc

# kubeadm init --pod-network-cidr=10.240.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
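
At this stage kubectl can reach the API server, but the node normally reports NotReady until a pod network add-on (flannel, applied later) is in place; a quick check:

# kubectl get nodes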

# kubectl get pods --all-namespaces

Free cached memory if the instance is running low (memory preflight errors were ignored above):

# echo 3 > /proc/sys/vm/drop_caches

# kubeadm token create --print-join-command
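
This prints a ready-to-run join command for the workers; its general form (the token and hash differ per cluster) is:

# kubeadm join <master-private-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>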

Setup Worker Node:


# yum install docker -y
# systemctl enable docker --now

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
# yum install -y kubectl kubelet kubeadm --disableexcludes=kubernetes
# systemctl enable kubelet --now

Switch Docker to the systemd cgroup driver on the worker as well:

[root@localhost ~]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs

[root@localhost ~]# cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}

[root@localhost ~]# systemctl restart docker

[root@localhost ~]# docker info | grep -i cgroup
Cgroup Driver: systemd

# yum install iproute-tc

# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# sysctl --system
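
To confirm the setting took effect (if the key is missing, the br_netfilter module may need to be loaded first with modprobe br_netfilter, a step not in the original notes):

# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1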

# kubeadm join 172.31.40.209:6443 --token 3joamn.y8asi8hw2jb41xx5 --discovery-token-ca-cert-hash sha256:86dd3714b1168b8de0abcb63300a458baf838860ff9c6f53bd85707853ff4db7
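
Once the join finishes, the new worker should appear from the master node:

# kubectl get nodes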

# systemctl status kubelet -l


# echo 3 > /proc/sys/vm/drop_caches

Setup Pod Network at Master Node:


https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#cni
https://kubernetes.io/docs/concepts/cluster-administration/addons/

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
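
The flannel DaemonSet pods can also be watched on their own, using the same app=flannel label that the troubleshooting section later uses to delete them:

# kubectl get pods -n kube-system -l app=flannel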

# kubectl get pods --all-namespaces


# kubectl get nodes

------------------------------------------------------------------------------------------------------------------

Troubleshooting of CoreDNS pods:

At master node:

# kubectl get pods --all-namespaces


# kubectl logs coredns-74ff55c5b-c45jz -n kube-system
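
Alongside the logs, the pod events usually show why a CoreDNS pod is stuck (failed image pulls, missing CNI config); a describe on the same pod, reusing the pod name from the listing above:

# kubectl describe pod coredns-74ff55c5b-c45jz -n kube-system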

After checking the following:

● Flannel interfaces existed on each node
● The routes on each node were checked to ensure every other server's subnet was present, and the flannel interfaces on the other servers could be pinged
● Kubelet was running with the parameter --network-plugin=cni (a quick check is sketched after this list)
● kube-controller-manager was running with the parameters --allocate-node-cidrs=true and --cluster-cidr
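
The kubelet flag from the list above can be confirmed the same way the controller-manager flags are checked below, for example:

# ps -aux | grep kubelet | grep network-plugin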

# ps -aux | grep kube-controller-manager | grep cidr


root 8048 0.8 6.2 816460 63408 ? Ssl 16:59 1:05 kube-controller-manager --allocate-node-cidrs=true
--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
--bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.240.0.0/16 --cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
--controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --port=0
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt
--service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true

# kubectl edit configmap kube-flannel-cfg -n kube-system

net-conf.json: |
  {
    "Network": "10.240.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }

The Network value has to be changed because the cluster was bootstrapped with the pod CIDR 10.240.0.0/16, whereas the default flannel deployment ships with the network set to 10.244.0.0/16, which conflicts with the CIDR the cluster was bootstrapped with.
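
An alternative, sketched here as an option rather than what was done in these notes, is to download the manifest and change the network before applying it, so that no post-install configmap edit is needed:

# curl -LO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# sed -i 's#10.244.0.0/16#10.240.0.0/16#' kube-flannel.yml
# kubectl apply -f kube-flannel.yml

With the in-place configmap edit used here, the flannel pods must be deleted so they restart with the new network: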

# kubectl delete pods -l app=flannel -n kube-system

# cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.240.0.0/16
FLANNEL_SUBNET=10.240.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=true

# kubectl get pods -l k8s-app=kube-dns -n kube-system

https://github.com/coreos/flannel-cni/tree/v0.3.0#readme

----------------------------------------------------------------------------------------------------------
