Kubernetes Cluster Creation Using Kubeadm
Prerequisites:
1. A compatible Linux host. The Kubernetes project provides generic instructions for Linux
distributions based on Debian and Red Hat, and those distributions without a package manager.
2. 2 GB or more RAM per machine (any less will leave little room for your apps).
3. 2 CPUs or more.
4. Full network connectivity between all machines in the cluster (public or private network is fine).
5. Unique hostname, MAC address, and product_uuid for every node.
6. Certain ports are open on your machines.
7. Swap disabled. You MUST disable swap for the kubelet to work properly.
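A quick sketch for verifying items 2, 3, and 7 above (this assumes a systemd-based distribution with swap configured in /etc/fstab; adjust for your environment):
# Check available RAM (should be 2 GB or more) and CPU count (should be 2 or more)
free -h
nproc
# Disable swap now, and comment out swap entries in /etc/fstab so it stays disabled after reboot
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab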
Verify the MAC address and product_uuid are unique for every node
You can get the MAC address of the network interfaces using the command ip link or ifconfig -a.
The product_uuid can be checked by using the command sudo cat /sys/class/dmi/id/product_uuid.
Make sure that the br_netfilter module is loaded. This can be done by running lsmod | grep br_netfilter. To load it explicitly, call sudo modprobe br_netfilter.
To load br_netfilter automatically at boot, add it to a modules-load.d configuration file:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
As a requirement for your Linux node's iptables to correctly see bridged traffic, you should also ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g. with the snippet below.
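The standard snippet from the kubeadm installation guide sets this (and the IPv6 equivalent) persistently and applies it immediately:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system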
For more details please see the Network Plugin Requirements page.
Install the container runtime: Docker is the most commonly used runtime and does not require additional configuration on the Kubernetes side.
Use the following command to set up the stable Docker repository. To add the nightly or test repository, add the word nightly or test (or both) after the word stable in the command below. (See Docker's documentation to learn about the nightly and test channels.)
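A sketch of this repository-setup command for Ubuntu, following Docker's install guide (this assumes the Docker GPG key was already saved to /usr/share/keyrings/docker-archive-keyring.gpg in a prior step not shown here):
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null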
Sometimes the above command doesn't work; in that case, double-check the repository entry and re-run sudo apt-get update before retrying.
Then install Docker along with the kubeadm, kubelet, and kubectl packages, and initialize the control plane, for example as sketched below.
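A hedged sketch of these steps (installing kubelet, kubeadm, and kubectl assumes the Kubernetes apt repository has been configured per the official kubeadm install guide, which this document does not show; the pod network CIDR matches the 192.168.0.0/16 range used for Calico later):
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo apt-get install -y kubelet kubeadm kubectl
sudo kubeadm init --pod-network-cidr=192.168.0.0/16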
Follow the instructions in the output of the above command:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You can now join any number of machines by running the following on each node
as root:
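The exact join command, including a cluster-specific token and CA certificate hash, is printed at the end of kubeadm init. Its general form (with placeholders rather than real values) is:
kubeadm join <control-plane-host>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>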
Since you are using the pod network CIDR 192.168.0.0/16 (Calico's default IP pool), Calico is the natural choice of network add-on. For installation instructions, follow the guideline below that matches your cluster size (50 nodes or fewer, or more than 50 nodes):
https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-etcd-datastore
Install Calico
1. Install the Tigera Calico operator and custom resource definitions:
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
2. Install Calico by creating the necessary custom resource. For more information on configuration options available in this manifest, see the installation reference:
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
Note: Before creating this manifest, read its contents and make sure its settings are correct for your environment. For example, you may need to change the default IP pool CIDR to match your pod network CIDR.
3. Confirm that all of the pods are running with the following command:
watch kubectl get pods -n calico-system
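Once every pod in the calico-system namespace reports Running, a standard final check (not part of the original guide) is to confirm that all nodes are Ready:
kubectl get nodes -o wide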