
In this tutorial we are going to install the Kubernetes packages and set up a Kubernetes (k8s) cluster using kubeadm on Red Hat Enterprise Linux 9. Once that is done, we will deploy a simple webserver pod.

Let's break this installation process into simple steps:

Step 1: Install a container runtime
Step 2: Configure the Kubernetes repo and install kubeadm, kubectl and kubelet
Step 3: Initialize kubeadm to configure the control plane (master node)
Step 4: Deploy a pod network using a plugin
Step 5: Create a webserver pod and test it
Step 6: Debug installation errors/logs

I know there are many cloud providers with managed Kubernetes services nowadays, and hardly anyone bothers to spin up a cluster of their own, especially since it is time consuming and has a lot of moving parts. But I feel this tutorial will help you achieve your goal of installing a self-managed Kubernetes cluster, and it should give you more confidence while studying for the CKA/CKAD exams.

Play around with it, or if you have a larger setup, why not run your own cluster and host your website there 🙂 . I tried my best to keep this tutorial interactive with simple steps.

Install CRI-O

The very first step is to install the CRI-O container runtime.

My server configuration is:

2 vCPUs and 8GB RAM with 10GB storage

More details about CRI-O – https://github.com/cri-o/cri-o/blob/main/install.md#readme

As you can see, the supported versions listed for CRI-O don't cover RHEL 9, so we are going to use the CentOS 8 installation steps.

You might wonder why we didn't install Docker as the runtime; that is because dockershim was removed from Kubernetes in v1.24.

Let's log in to the server and run the commands below to install CRI-O (switch to root).

Install CRI-O and start it using systemctl, and make sure it is running:

# sudo su -
# export VERSION=1.25
# export OS=CentOS_8
# curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
# curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
# yum install -y crio
# systemctl enable --now crio
# systemctl status crio
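Optionally, a quick sanity check before moving on; the socket path below is CRI-O's default, so adjust it if yours differs (kubeadm will talk to this socket later):

# crio version
# ls -l /var/run/crio/crio.sock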

Install Kubernetes

Before we install Kubernetes, let's change the hostname of the server. I am going to name it master-node-new.

Since my machine is running on EC2/AWS, I am changing cloud.cfg to preserve the hostname after reboot.

If you are not running on EC2, this step is optional or depends on your cloud provider.
# hostnamectl set-hostname master-node-new
# hostname
master-node-new
# vi /etc/cloud/cloud.cfg
# (set the following line in the file)
preserve_hostname: true

Now let's add the Kubernetes repo and install kubelet, kubectl and kubeadm. One thing to note: the community repo below is pinned to a single minor release stream (v1.28 in the baseurl), and only versions from that stream are installable. This walkthrough was originally done with 1.25.7 (which is what the kubeadm output further down shows), so if you want that exact version, point the baseurl and gpgkey at the matching stream, e.g. core:/stable:/v1.25/.

If you have any firewall running on the server, disable it.

It's also recommended to set SELinux to permissive mode (if it is enforcing).
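For reference, something like the below works. This is a sketch assuming firewalld is the firewall in use, and it sets SELinux to permissive rather than fully disabling it, which is enough for kubeadm:

# systemctl disable --now firewalld
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config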

If you don't provide a version along with the dnf command, it will install the latest one available in the repo.
Note: the repo below uses the community-owned package repositories (pkgs.k8s.io) instead of the legacy Google-hosted repo (https://packages.cloud.google.com), which has since been deprecated.

The only thing you have to make sure of is that kubelet, kubectl and kubeadm are all on the same version.
[root@master-node-new ~]# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
# Install Kubernetes (kubeadm, kubelet and kubectl). The --disableexcludes flag is needed
# because of the exclude= line in the repo file above.

[root@master-node-new ~]# dnf --showduplicates --disableexcludes=kubernetes list kubeadm
[root@master-node-new ~]# dnf install -y kubeadm-1.25.7 kubelet-1.25.7 kubectl-1.25.7 --disableexcludes=kubernetes

[root@master-node-new ~]# systemctl enable kubelet.service
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@master-node-new ~]#
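Before initializing the cluster, you can double-check that all three components landed on the same version:

[root@master-node-new ~]# kubeadm version -o short
[root@master-node-new ~]# kubelet --version
[root@master-node-new ~]# kubectl version --client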

Create control plane node

Now that we have all the packages installed on the server, the next step is to create the control plane using kubeadm. The output below is truncated for better visibility.

[root@master-node-new containerd]# kubeadm init --pod-network-cidr=10.244.0.0/16
I0309 20:28:13.842343   93002 version.go:256] remote version is much newer: v1.26.2; falling back to: stable-1.25
[init] Using Kubernetes version: v1.25.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.22.251:6443 --token 6t92oe.iuy4w1bdcjwjfe91 \
	--discovery-token-ca-cert-hash sha256:90b8c847bd2872c9a98d7ddb829775b979e115a447cc9baf9fcea14df7ecf2d5
[root@master-node-new ~]#

As you can see, we got the message that our Kubernetes control plane has initialized successfully!

Make a note of the join command towards the bottom. It can be used to add worker nodes to this cluster.
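Keep in mind that the bootstrap token in that join command expires (after 24 hours by default). If you add nodes later, you can print a fresh join command from the control plane:

[root@master-node-new ~]# kubeadm token create --print-join-command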

Before you connect to your cluster, run these steps:

[root@master-node-new containerd]# mkdir -p $HOME/.kube
[root@master-node-new containerd]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-node-new containerd]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

I have passed a --pod-network-cidr value that matches flannel. For whichever pod network add-on you would like to install, verify whether it requires any arguments to be passed to kubeadm init.

Depending on the add-on, you may need to set --pod-network-cidr to a provider-specific value. See "Installing a Pod network add-on" in the Kubernetes docs.
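For flannel specifically, its manifest defaults to 10.244.0.0/16, which is why that value was passed to kubeadm init above. If you initialize with a different CIDR, you would need to edit the Network value in the net-conf.json section of kube-flannel.yml to match; the relevant part of its ConfigMap looks roughly like this:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }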

Deploy pod network

For the pods to communicate, we need to deploy a networking add-on. Here I am deploying flannel.

Before we deploy flannel (or any network add-on), you may notice that if you run kubectl get pods -A, the coredns pods are stuck in ContainerCreating status:

[root@master-node-new ~]# kubectl get pods -A
NAMESPACE     NAME                                  READY   STATUS              RESTARTS   AGE
kube-system   coredns-565d847f94-j97jj              0/1     ContainerCreating   0          19s
kube-system   coredns-565d847f94-lt45q              0/1     ContainerCreating   0          19s
kube-system   etcd-master-node                      1/1     Running             506        34s
kube-system   kube-apiserver-master-node            1/1     Running             501        34s
kube-system   kube-controller-manager-master-node   1/1     Running             0          36s
kube-system   kube-proxy-z6jjj                      1/1     Running             0          19s
kube-system   kube-scheduler-master-node            1/1     Running             527        34s

Let's install flannel and see what happens. Ignore the AGE column 🙂 (I let the cluster run for a while before taking this snapshot, hence most pods show ages over 3 hours).

[root@master-node-new ~]# kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master-node-new ~]#
[root@master-node-new ~]# kubectl get pods -A
NAMESPACE      NAME                                  READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-pp4xn                 1/1     Running   0          80s
kube-system    coredns-565d847f94-j97jj              1/1     Running   1          3h51m
kube-system    coredns-565d847f94-lt45q              1/1     Running   1          3h46m
kube-system    etcd-master-node                      1/1     Running   508        3h52m
kube-system    kube-apiserver-master-node            1/1     Running   504        3h52m
kube-system    kube-controller-manager-master-node   1/1     Running   2          3h52m
kube-system    kube-proxy-z6jjj                      1/1     Running   1          3h51m
kube-system    kube-scheduler-master-node            1/1     Running   529        3h52m

As you can see, after we installed flannel the coredns pods' status changed to Running.
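The node itself should also have moved from NotReady to Ready now that the network add-on is up; you can confirm with:

[root@master-node-new ~]# kubectl get nodes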

Create test webserver pod

Now that we have everything set up, let's deploy a webserver pod. But do you think we will be able to deploy a pod directly on the control plane? (It is always recommended to add additional nodes and deploy pods there, but since this is a tutorial I wanted to show you how to run pods on the master node itself without joining any new nodes.)

Yes, we can, but we need to remove the taint first. Describe the node to find it (output truncated):

 [root@master-node-new containerd]# kubectl describe node master-node-new
  Name:               master-node-new
  Roles:              control-plane
  Labels:             beta.kubernetes.io/arch=amd64
                      beta.kubernetes.io/os=linux
                      kubernetes.io/arch=amd64
                      kubernetes.io/hostname=master-node-new
                      kubernetes.io/os=linux
                      node-role.kubernetes.io/control-plane=
                      node.kubernetes.io/exclude-from-external-load-balancers=
  Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"7a:d6:90:17:df:39"}
                      flannel.alpha.coreos.com/backend-type: vxlan
                      flannel.alpha.coreos.com/kube-subnet-manager: true
                      flannel.alpha.coreos.com/public-ip: 172.31.22.251
                      kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
                      node.alpha.kubernetes.io/ttl: 0
                      volumes.kubernetes.io/controller-managed-attach-detach: true
  CreationTimestamp:  Thu, 09 Mar 2023 20:28:44 +0000
  Taints:             node.kubernetes.io/not-ready:NoSchedule
  Unschedulable:      false
  Lease:

Look at the Taints: node.kubernetes.io/not-ready:NoSchedule line; we are going to remove that taint and then deploy the webserver.

[root@master-node-new containerd]# kubectl taint nodes master-node-new node.kubernetes.io/not-ready:NoSchedule-
node/master-node-new untainted
[root@master-node-new containerd]# 

Now if you describe the node again, the taint will be gone and you are good to deploy a pod on the master node.
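One note here: in my case the taint shown was node.kubernetes.io/not-ready because the describe output was captured before the network add-on settled. On a typical kubeadm cluster the control plane node also carries node-role.kubernetes.io/control-plane:NoSchedule, which you would remove the same way:

[root@master-node-new ~]# kubectl taint nodes master-node-new node-role.kubernetes.io/control-plane:NoSchedule-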

[root@master-node-new ~]# kubectl run myserver --image=nginx
pod/myserver created
[root@master-node-new ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
myserver   1/1     Running   0          3s
[root@master-node-new ~]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
myserver   1/1     Running   0          9s    10.0.0.133   master-node-new   <none>           <none>
[root@master-node-new ~]# curl 10.0.0.133
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master-node-new ~]#

If you look closely at the output of kubectl get pods with the -o wide option, the NODE column for the myserver pod shows master-node-new.

Debugging common errors

OK, if the above steps don't go as expected, you have a couple of options.

1] Check kubelet logs

[root@master-node-new ~]# systemctl status kubelet
[root@master-node-new ~]# journalctl -u kubelet -f

2] Check pod logs

[root@master-node-new ~]# kubectl logs pod/<podname> -n <namespace>
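Alongside the logs, describing the pod and checking recent events usually points at scheduling or image pull problems:

[root@master-node-new ~]# kubectl describe pod <podname> -n <namespace>
[root@master-node-new ~]# kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp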

3] Check using crictl

[root@master-node-new ~]# crictl ps 

Make sure crictl works. If it doesn't, and you are seeing errors like the ones below, make sure the crio process is running, stop the containerd process (if you happen to have installed containerd.io), and check whether the error goes away.

[root@master-node-new ~]# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
E0310 16:21:44.813574  551565 remote_runtime.go:390] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\"" filter="&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},}"

If that doesn't work and you have two runtimes installed, try stopping CRI-O instead and check the crictl ps output again.
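The warnings above are crictl telling you to set the endpoints explicitly instead of relying on the deprecated defaults. One way is an /etc/crictl.yaml pointing at the CRI-O socket (the paths below are CRI-O's defaults):

[root@master-node-new ~]# cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
EOF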

4] If you are seeing messages like these in your kubelet logs, then you have not installed a network plugin, or there is some issue with the network settings for the cluster.

Mar 09 18:24:47 master-node-new kubelet[13936]: I0309 18:24:47.534648   13936 scope.go:115] "RemoveContainer" containerID="8888f8bb5ce359f5e04b17a8eca651204a66af6df18724ac08862a6c301f4846"
Mar 09 18:24:49 master-node-new kubelet[13936]: E0309 18:24:49.532104   13936 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

5] Make sure to disable swap. Identify swap devices using cat /proc/swaps, then turn off all swap devices and files with swapoff -a.
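Note that swapoff -a only lasts until the next reboot. To make it permanent, comment out the swap entries in /etc/fstab as well; the sed below is a sketch that assumes the entries contain a literal swap field:

[root@master-node-new ~]# swapoff -a
[root@master-node-new ~]# sed -i '/\sswap\s/ s/^/#/' /etc/fstab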

Feel free to comment if you are facing any issues following these steps. Thanks!
