Deploy kubectl on Vagrant Machines

[cisco@centos-host kube-1m-3w]$ vagrant box list

centos/7 (libvirt, 2004.01)


[cisco@centos-host kube-1m-3w]$ vagrant status
Current machine states:

master                    running (libvirt)
worker1                   running (libvirt)
worker2                   running (libvirt)
worker3                   running (libvirt)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
[cisco@centos-host kube-1m-3w]$
[cisco@centos-host kube-1m-3w]$ vagrant reload
==> master: Halting domain...
==> master: Starting domain.
==> master: Waiting for domain to get an IP address...
==> master: Waiting for SSH to become available...
==> master: Creating shared folders metadata...
==> master: Rsyncing folder: /home/cisco/vagrant-projects/kube-1m-3w/ => /vagrant
==> master: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> master: flag to force provisioning. Provisioners marked to run always will still run.
<..>
[cisco@centos-host kube-1m-3w]$ vagrant ssh master
[vagrant@kube-master ~]$
[vagrant@kube-master ~]$

1. Initialize the control plane on the master

[vagrant@kube-master ~]$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=192.168.122.119
W1011 04:36:27.689945 2756 configset.go:348] WARNING: kubeadm cannot validate
component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your
internet connection
<..>
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.119:6443 --token ffdbaq.q784r9bkvunsjpyj \
    --discovery-token-ca-cert-hash sha256:84aac7bc530a15832f195f567c3c08d857c32f0aa45a69b6972e7d273af38733
[vagrant@kube-master ~]$
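
If the join command printed above is lost, or its token expires (kubeadm tokens default to a 24-hour TTL), a fresh one can be generated on the master at any time; a minimal sketch:

    # Print a new 'kubeadm join ...' command with a freshly created token.
    sudo kubeadm token create --print-join-command

    # List existing tokens and their expiry times.
    sudo kubeadm token list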

2. Run the commands mentioned above as a regular user (on the master)

[vagrant@kube-master ~]$ mkdir -p $HOME/.kube
[vagrant@kube-master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[vagrant@kube-master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[vagrant@kube-master ~]$
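
A quick sanity check (not part of the original transcript) confirms that kubectl can reach the new control plane:

    # Verify the kubeconfig copied above points at a live API server.
    kubectl cluster-info

    # The control-plane pods should be listed (CoreDNS stays Pending
    # until a pod network is installed in step 8).
    kubectl get pods -n kube-system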

3. Join worker1 to the cluster

[cisco@centos-host kube-1m-3w]$ vagrant ssh worker1
[vagrant@kube-worker-1 ~]$ sudo kubeadm join 192.168.122.119:6443 --token ffdbaq.q784r9bkvunsjpyj \
>     --discovery-token-ca-cert-hash sha256:84aac7bc530a15832f195f567c3c08d857c32f0aa45a69b6972e7d273af38733
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[vagrant@kube-worker-1 ~]$
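
If a join ever hangs at the TLS bootstrap step, inspecting the kubelet on the worker is a reasonable first check; a sketch (not part of the original run):

    # On the worker: confirm the kubelet service came up after the join.
    sudo systemctl status kubelet --no-pager

    # Scan the most recent kubelet log entries for errors.
    sudo journalctl -u kubelet --no-pager | tail -n 20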

4. Check node status on the master. STATUS is 'NotReady' because no pod network is installed yet.

[vagrant@kube-master ~]$ kubectl get nodes -o wide
NAME            STATUS     ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kube-master     NotReady   master   5m25s   v1.19.2   192.168.122.119   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-1   NotReady   <none>   2m22s   v1.19.2   192.168.122.98    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
[vagrant@kube-master ~]$
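
The NotReady status traces back to the kubelet reporting an uninitialized CNI. This can be confirmed from the node conditions; a sketch (the exact message wording varies by Kubernetes version):

    # The Ready condition's reason stays KubeletNotReady until a pod
    # network add-on (steps 7-8 below) initializes the CNI.
    kubectl describe node kube-master | grep -B1 -A2 'cni config uninitialized'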

5. Join the remaining workers to the cluster


[cisco@centos-host kube-1m-3w]$ vagrant ssh worker2
Last login: Sun Oct 11 04:49:06 2020 from 192.168.122.1
[vagrant@kube-worker-2 ~]$ sudo kubeadm join 192.168.122.119:6443 --token ffdbaq.q784r9bkvunsjpyj \
>     --discovery-token-ca-cert-hash sha256:84aac7bc530a15832f195f567c3c08d857c32f0aa45a69b6972e7d273af38733
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
<..>
[vagrant@kube-worker-2 ~]$
[vagrant@kube-worker-2 ~]$ exit
logout
Connection to 192.168.122.69 closed.
[cisco@centos-host kube-1m-3w]$
[cisco@centos-host kube-1m-3w]$ vagrant ssh worker3
[vagrant@kube-worker-3 ~]$ sudo kubeadm join 192.168.122.119:6443 --token ffdbaq.q784r9bkvunsjpyj \
>     --discovery-token-ca-cert-hash sha256:84aac7bc530a15832f195f567c3c08d857c32f0aa45a69b6972e7d273af38733
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
<..>
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[vagrant@kube-worker-3 ~]$
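
Joining each worker interactively scales poorly. The same joins can be scripted from the Vagrant host with `vagrant ssh -c`; a sketch, reusing the token and hash printed by kubeadm init above:

    # Run the kubeadm join on every worker from the host in one loop.
    for w in worker1 worker2 worker3; do
      vagrant ssh "$w" -c "sudo kubeadm join 192.168.122.119:6443 \
        --token ffdbaq.q784r9bkvunsjpyj \
        --discovery-token-ca-cert-hash sha256:84aac7bc530a15832f195f567c3c08d857c32f0aa45a69b6972e7d273af38733"
    done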

6. Check the node status on the master

[vagrant@kube-master ~]$ kubectl get nodes -o wide
NAME            STATUS     ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kube-master     NotReady   master   15m     v1.19.2   192.168.122.119   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-1   NotReady   <none>   12m     v1.19.2   192.168.122.98    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-2   NotReady   <none>   2m37s   v1.19.2   192.168.122.69    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-3   NotReady   <none>   2m3s    v1.19.2   192.168.122.53    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
[vagrant@kube-master ~]$

7. Download the Calico manifest and update the CIDR range to 10.244.0.0/16

Link - https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less

[vagrant@kube-master ~]$ curl https://docs.projectcalico.org/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  183k  100  183k    0     0   398k      0 --:--:-- --:--:-- --:--:--  401k
[vagrant@kube-master ~]$
[vagrant@kube-master ~]$ ls -lrt
total 188
-rwxr-xr-x. 1 root root 1588 Oct 11 04:11 preparing-kube-environment.sh
-rw-rw-r--. 1 vagrant vagrant 187956 Oct 11 04:54 calico.yaml
[vagrant@kube-master ~]$
[vagrant@kube-master ~]$ grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
[vagrant@kube-master ~]$
[vagrant@kube-master ~]$ vi calico.yaml
[vagrant@kube-master ~]$ grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16"
[vagrant@kube-master ~]$
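
The vi session above simply uncomments the CALICO_IPV4POOL_CIDR block and swaps in the kubeadm pod CIDR. The same edit can be made non-interactively; a sketch, assuming the commented layout shown in the first grep:

    # Uncomment CALICO_IPV4POOL_CIDR and set it to the --pod-network-cidr
    # value passed to kubeadm init.
    sed -i \
      -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
      -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
      calico.yaml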

8. Install Calico

[vagrant@kube-master ~]$ kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[vagrant@kube-master ~]$
[vagrant@kube-master ~]$

9. Check the nodes again. As the Calico pods come up, each node transitions to Ready.

[vagrant@kube-master ~]$ kubectl get nodes -o wide
NAME            STATUS     ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kube-master     NotReady   master   23m   v1.19.2   192.168.122.119   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-1   NotReady   <none>   20m   v1.19.2   192.168.122.98    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-2   NotReady   <none>   11m   v1.19.2   192.168.122.69    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-3   NotReady   <none>   10m   v1.19.2   192.168.122.53    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
[vagrant@kube-master ~]$
[vagrant@kube-master ~]$ kubectl get nodes -o wide
NAME            STATUS     ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kube-master     NotReady   master   24m   v1.19.2   192.168.122.119   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-1   NotReady   <none>   20m   v1.19.2   192.168.122.98    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-2   Ready      <none>   11m   v1.19.2   192.168.122.69    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-3   Ready      <none>   10m   v1.19.2   192.168.122.53    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
[vagrant@kube-master ~]$
[vagrant@kube-master ~]$ kubectl get nodes -o wide
NAME            STATUS     ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
kube-master     Ready      master   24m   v1.19.2   192.168.122.119   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-1   Ready      <none>   21m   v1.19.2   192.168.122.98    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-2   Ready      <none>   11m   v1.19.2   192.168.122.69    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
kube-worker-3   Ready      <none>   10m   v1.19.2   192.168.122.53    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
[vagrant@kube-master ~]$
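
Rather than re-running kubectl get nodes by hand, the transition can be followed with a watch; a sketch (the k8s-app=calico-node label comes from the Calico manifest):

    # Watch nodes flip from NotReady to Ready as calico-node starts on each host.
    kubectl get nodes -w

    # Confirm the Calico daemonset pods are Running on every node.
    kubectl get pods -n kube-system -l k8s-app=calico-node -o wide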
