Introduction
Kubernetes is a very popular open-source container orchestrator, managed by the Cloud Native Computing Foundation and used by organisations of all sizes. Many of us would like to know how to install and use a Kubernetes cluster. A typical Kubernetes cluster looks like the diagram below.
[Figure 1: Kubernetes Architecture]
Users can deploy single-node clusters such as Minikube, Kind, and MicroK8s on their individual machines to practice Kubernetes. Katacoda is another option for playing around with Kubernetes. These are very easy to set up. If you are interested, you can check my previous posts here and here to learn how to set up Minikube and run Spring Boot applications on it.
But when it comes to setting up a multi-node Kubernetes cluster, many of us are clueless. Kubernetes Playground is a good option: it lets you create a multi-node cluster and play with it for 4 hours. A lot of the setup is abstracted away, but it gives you a feel for the environment. Learners who want a permanent cluster might have to go for a commercial cloud offering such as Amazon EKS, VMware Tanzu, Red Hat OpenShift, or Google GKE. Most of these offerings provide a trial period or initial free credits to let developers get acquainted with the platform, and start charging after that. For self-learning purposes these charges are pretty high.
For such learners I am publishing this article, which will help them set up a multi-node bare-metal Kubernetes cluster on their personal computers. We will create one VM for the master node and one VM for the worker node, then initialise the cluster, deploy workloads on it, and access the workloads from outside the cluster. This is done in the steps below.
Prerequisites
Hardware
For this exercise you need a computer with at least 100 GB of free disk space and 16 GB of RAM. These resources cover one master node VM (15 GB disk and 4 GB RAM) and one worker node VM (15 GB disk and 4 GB RAM), which is the minimum requirement. The RAM per VM could be reduced to 3 GB and the disk to 10 GB. If you want more node VMs, more computing resources must be available on the host computer.
Software
The machine needs two pieces of software. It must have Oracle VirtualBox installed; you can refer to the installation manual here. We will use VirtualBox as our hypervisor. We will use Ubuntu images for creating the VMs; you can download the Ubuntu 20.04 image from here and store it on the computer.
Before moving to the next step, VirtualBox must be installed and the Ubuntu image downloaded.
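Optionally, you can verify both prerequisites from a terminal on the host before proceeding. This is a small sketch; the ISO file name below is an assumption, so substitute the name of the file you actually downloaded.
##Optional sanity checks on the host machine
VBoxManage --version [Prints the VirtualBox version if the installation succeeded]
sha256sum ubuntu-20.04-desktop-amd64.iso [Compare the output with the checksum published on the Ubuntu download page]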
Set Up a Master Node
In this step we create the master node VM using VirtualBox. Start VirtualBox and press CTRL + N or choose Machine --> New from the menu bar. Enter the VM name as master, the Type as Linux, and the Version as Ubuntu (64-bit). Use the image below for reference.
[Figure 2 : Step 1 of Master Node VM]
In the next step choose the RAM size for the VM. It should be a minimum of 2048 MB; more is better. Refer to the image below.
[Figure 3 : Step 2 of Master Node VM]
In the next step choose “Create a virtual hard disk now” as displayed in the image below.
[Figure 4 : Step 3 of Master Node VM]
In the next step choose VDI as your hard disk type, as per the image below.
[Figure 5 : Step 4 of Master Node VM]
In the next step choose a “Fixed Size” hard disk, as shown in the image below.
[Figure 6: Step 5 of Master Node VM]
In the next step we allocate disk space for the VM. Allocate 15 GB of disk space, as shown in the image below.
[Figure 7: Step 6 of Master Node VM]
Now the VM will be displayed in VirtualBox. We need to attach the Ubuntu ISO image we downloaded earlier, set up the networking, and add CPUs.
First, select the VM and choose Settings --> System --> Processor as displayed in the image, and change the number of processors to 3.
[Figure 8: Step 7 of Master Node VM]
Then click on the Storage option to attach the Ubuntu image we downloaded. Choose Storage --> Controller: IDE --> Empty, click on the blue CD icon next to IDE Secondary Master, and select Choose/Create a Virtual Optical Disk. Select the location of the Ubuntu .iso file, which will act as the installation image for the VM.
[Figure 9: Step 8 of Master Node VM]
In the next step we set up the networking for the VM. We will configure two network adapters. The first adapter is attached to a Bridged Adapter (to give access between guest and host machines, between guest machines, and to the outside network).
[Figure 10 : Step 9 of Master Node VM]
The second adapter is attached to a Host-only Adapter (to give access between guest and host machines).
[Figure 11: Step 10 of Master Node VM]
Once all these steps are completed, press the OK button. The hardware part of the VM is now ready.
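If you prefer the command line, the same hardware configuration can be scripted from the host with VBoxManage instead of the GUI. The sketch below makes a few assumptions you will need to adjust to your system: the ISO file name, the bridged host interface name (eth0), and the host-only interface name (vboxnet0).
##Create and register the VM
VBoxManage createvm --name master --ostype Ubuntu_64 --register
##4 GB RAM and 3 processors
VBoxManage modifyvm master --memory 4096 --cpus 3
##Adapter 1 bridged, adapter 2 host-only (adjust the interface names)
VBoxManage modifyvm master --nic1 bridged --bridgeadapter1 eth0 --nic2 hostonly --hostonlyadapter2 vboxnet0
##Create and attach a 15 GB fixed-size VDI disk
VBoxManage createmedium disk --filename master.vdi --size 15360 --variant Fixed
VBoxManage storagectl master --name "SATA" --add sata
VBoxManage storageattach master --storagectl "SATA" --port 0 --device 0 --type hdd --medium master.vdi
##Attach the downloaded Ubuntu ISO as a DVD drive
VBoxManage storagectl master --name "IDE" --add ide
VBoxManage storageattach master --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium ubuntu-20.04-desktop-amd64.iso
In the next step we install Ubuntu in the VM and create a user and password. To do so, select the VM in VirtualBox and click the Start button.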
On starting, it will display a window with the options Try Ubuntu and Install Ubuntu. Choose Install Ubuntu and proceed with the wizard.
[Figure 12: Step 11 of Master Node VM]
Choose the Minimal Installation option and select suitable options during the remaining steps. In one of the steps it will ask you to create a machine name, username, and password. Enter master for all three, or choose a username and password of your own. Installing Ubuntu will take some time depending on the available internet speed. After the installation you need to reboot the system.
Next we need to set up a few directories and install Docker (the container engine), kubeadm, kubelet, and kubectl (the Kubernetes client) on the master node. All of this needs to be done as the super user. Open a terminal and execute the commands below one by one to finish the process. Please refer to the Kubernetes documentation on installing the runtime and kubeadm.
sudo su [Press Enter and provide your user password]
mkdir -p /etc/apt/trusted.gpg.d
touch /etc/apt/trusted.gpg.d/docker.gpg
##Install prerequisites for the Docker repository
sudo apt-get update && sudo apt-get install -y \
apt-transport-https ca-certificates curl software-properties-common gnupg2
## Install Repository Key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -
##Add the Docker apt repository
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
##Install the Docker Container Engine
sudo apt-get update && sudo apt-get install -y \
containerd.io=1.2.13-2 \
docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
##Set up Docker Daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
echo 'alias k=kubectl' >> ~/.bashrc
##Disable swap now and on every new root shell; kubelet will not run with swap enabled
swapoff -a
echo 'swapoff -a' >> ~/.bashrc
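Before moving on, it is worth confirming that Docker is running and has picked up the systemd cgroup driver we configured above:
##Optional: verify the Docker installation
sudo systemctl is-active docker [Should print active]
sudo docker info | grep -i "cgroup driver" [Should print Cgroup Driver: systemd]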
Once the Docker installation steps are complete, we proceed to install kubeadm, kubelet, and kubectl. The following commands set things up. All of these commands need to be executed as the super user.
sudo su [Provide your user password when prompted]
##iptables set up
##The bridge sysctl keys below require the br_netfilter kernel module
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
## Setting up the tools
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
##kubelet restart
systemctl daemon-reload
systemctl restart kubelet
##Install Net tools
apt install -y net-tools
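At this point you can quickly confirm that all three Kubernetes tools are installed and held at their current versions:
##Optional: verify the Kubernetes tooling
kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold [Should list kubeadm, kubectl and kubelet]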
So far we have set up Docker, kubeadm, kubelet, and kubectl. We now need to create a few directories and files required by Calico on both the master and worker nodes. These need to be created as the super user. Execute the commands below to create them.
sudo su
mkdir -p /var/lib/calico
touch /var/lib/calico/nodename
mkdir -p /var/run/bird
touch /var/run/bird/bird.ctl
Now we can stop the master VM and proceed to create the worker node.
Set Up Worker Node(s)
We have already created a master node VM. We will clone it to create the worker node(s) instead of going through the whole process of creating a new VM from scratch.
To clone the master node VM, select the VM called master and press CTRL + O, which opens the clone window. Enter the name of the new VM as node01.
[Figure 13: Cloning the master vm ]
In the next step choose the “Full Clone” option, and the new node VM will be ready. Start the VM; the username and password are the same as on the master VM. We need to change the hostname of the VM to node01, which must be done in a terminal as the super user. Open the terminal and execute the commands below.
sudo su [Provide your user password when prompted]
echo node01 > /etc/hostname
The above command changes the hostname of the node from master to node01; the change takes effect on the next boot. After that, stop the VM. If you want to create more nodes, keep repeating the above steps of cloning and renaming.
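Two optional notes on cloning. First, the clone can also be created from the host command line instead of the GUI; a minimal sketch, assuming the master VM is powered off:
VBoxManage clonevm master --name node01 --register --mode all
Second, kubeadm requires every node to have a unique hostname, MAC address, and product_uuid. VirtualBox normally regenerates the MAC address while cloning, but it is worth confirming on each VM that these values differ:
##Run on every node; the values must differ between VMs
hostname
ip link show [Compare the MAC addresses]
sudo cat /sys/class/dmi/id/product_uuid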
Next we would proceed to the master node vm to create the control plane.
Create the Control Plane
Start the master VM and log in. In this step we initialise the single-node cluster with kubeadm, which creates the control plane. You must also deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other; cluster DNS (CoreDNS) will not start up before a network is installed, so we will install the Calico Pod network. First check the IP address of the master VM: execute the ifconfig command and note down the address which starts with 192.168.x.xxx. The host machine and the VMs will have addresses of the same pattern.
Open a terminal and execute the commands below as the super user to create the Kubernetes cluster and control plane and to install the Calico network.
sudo su [Provide your user password when prompted]
ifconfig [It will give you a list of IP addresses; note down the address of the pattern 192.168.x.xxx. That is the IP address of the VM]
kubeadm config images pull
kubeadm init --apiserver-advertise-address=<master-vm-ip> --pod-network-cidr=192.168.0.0/16 [Replace <master-vm-ip> with the address noted above. Copy the kubeadm join command at the end of the output; the worker nodes will use it to join the cluster]
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
##Install the Calico pod network
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get nodes -w [This will provide details about the master node only]
## By default no workloads can be scheduled on the master node. We remove the taint on the master node so that pods can be scheduled on it
kubectl taint nodes --all node-role.kubernetes.io/master-
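Optionally, you can watch the system pods come up before proceeding; the master node reports Ready once the Calico and CoreDNS pods are Running:
##Optional: watch the system pods start
kubectl get pods -n kube-system -w [Press CTRL + C to stop watching once the calico and coredns pods show Running]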
The last part of the output of the kubeadm init command contains a join command of the form “kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>”. Copy this command, as we have to use it on the worker node(s) to join them to the control plane.
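The bootstrap token embedded in the join command expires after 24 hours by default. If you lose the command or the token expires, a fresh one can be printed on the master node:
##Run on the master node to regenerate the join command
kubeadm token create --print-join-command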
Worker Node joins Master
Start the worker node VM and log in with the same credentials as on the master VM. Open the terminal, switch to the super user, and execute the command copied from the master node VM.
sudo su [Provide your user password when prompted]
##Enter the kubeadm join command copied from the master node VM, which is of the pattern below
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Once the command completes, the worker node has joined the control plane and pods can be scheduled on it. If there is more than one worker node, repeat the same process on each.
Deploy and Expose a Workload
To verify whether the worker node(s) have joined the cluster, execute the command below on the master node as the super user.
sudo su
kubectl get nodes -o wide
The output should look somewhat like the image below.
[Figure 14: Node Status]
If all the nodes are Ready, we can create workloads on them. If the status is NotReady, wait a few minutes until the nodes become Ready. Note down the INTERNAL-IP of both nodes; we will access pods using these IPs. Now we execute the commands below to create a namespace, a deployment, and a service that exposes the deployment so that the application can be accessed from outside the cluster. Execute the commands in the terminal.
##Create namespace called alpha
kubectl create namespace alpha
kubectl config set-context --current --namespace=alpha
##Create deployment called pages
kubectl apply -f https://raw.githubusercontent.com/aditya-bhuyan/kube-ws-configs/master/YAML/probe/log-persistent-volumes.yml
##Create a NodePort service to expose the deployment
kubectl expose deploy pages --type=NodePort --port=8080
##Verify all objects are created
kubectl get all
##Verify the service
kubectl get service pages
The output of the last command has a PORT(S) column. Note down the port in the range 30000 – 32767 from that column; it is the NodePort assigned to the service. We can access the service using the URL http://<worker-node-internal-ip>:<nodeport>. Open the URL in the browser and try playing around with the links on the homepage.
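You can also test the service from a terminal instead of the browser. A small sketch; the jsonpath expression assumes the service exposes a single port:
##Fetch the NodePort and call the service
NODE_PORT=$(kubectl get service pages -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<worker-node-internal-ip>:$NODE_PORT/ [Replace the placeholder with the INTERNAL-IP noted earlier]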
Congratulations! You have just deployed a multi-node Kubernetes cluster and run a workload on it.
Conclusion
The cluster is fine to play around with on the host machine. It has one limitation: it is not accessible from outside the host machine. It nevertheless supports a reasonable load and all the features of a full-fledged Kubernetes cluster, and you can scale the cluster up or down based on the hardware capacity of the host machine.
Happy playing.