Let’s learn Kubernetes architecture in detail.
I assume you have a basic understanding of Kubernetes. If not, check out the following introduction and installation articles.
https://geekflare.com/kubernetes-introduction/
https://geekflare.com/install-kubernetes-on-ubuntu/
Kubernetes follows a master-slave architecture: the cluster has a master node and worker nodes. The master node has four components:
- Kube API server
- controller manager
- scheduler
- etcd
And the worker node has three components:
- kubelet
- kube-proxy
- container runtime
This is what the Kubernetes architecture looks like:
Let me tell you about the components of the master node and worker nodes in detail.
Master Node
The master node manages the Kubernetes cluster, and it is the entry point for all the administrative tasks. You can talk to the master node via the CLI, GUI, or API. For fault tolerance, there can be more than one master node in the cluster. With more than one master node, the cluster runs in high-availability mode, with one leader performing all the operations; all the other master nodes are followers of that leader.
Also, to manage the cluster state, Kubernetes uses etcd. All the master nodes connect to etcd, which is a distributed key-value store.
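For example, here is a minimal sketch of talking to the master node through the API, using the official Kubernetes Python client. It assumes you have a running cluster and a kubeconfig on your machine; the role detection is just a rough illustration.

```python
# pip install kubernetes
from kubernetes import client, config

# Load credentials from the default kubeconfig (~/.kube/config).
config.load_kube_config()

v1 = client.CoreV1Api()

# The API server on the master node answers this request;
# the node objects it returns are stored in etcd.
for node in v1.list_node().items:
    roles = [k for k in (node.metadata.labels or {}) if "node-role" in k]
    print(node.metadata.name, roles or ["worker"])
```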
Let me explain to you about all these components one by one.
API Server
The API server performs all the administrative tasks on the master node. A user sends REST commands to the API server, which validates the requests and then processes and executes them. The resulting state of the cluster is saved in etcd, the distributed key-value store.
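As a small illustration (again with the Python client, and assuming a running cluster; the namespace name is just an example), creating an object is nothing more than a REST request that the API server validates, processes, and then persists to etcd:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# POST /api/v1/namespaces -- the API server validates this request
# and, if it is accepted, writes the resulting state to etcd.
ns = client.V1Namespace(metadata=client.V1ObjectMeta(name="demo-namespace"))
v1.create_namespace(body=ns)
```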
Scheduler
After that, we have the scheduler. As the name suggests, the scheduler schedules the work to the different worker nodes. It has the resource usage information for each worker node, and it also considers the quality-of-service requirements, data locality, and many other such parameters. The scheduler then schedules the work in terms of pods and services.
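The real kube-scheduler is far more sophisticated, but a toy sketch of the idea looks like this: filter out nodes that cannot fit the pod, score the rest by free resources, and pick the best one. The node names and numbers here are made up for illustration; this is not the actual scheduling algorithm.

```python
# A simplified illustration of scheduling, NOT the real kube-scheduler logic.
nodes = {
    "worker-1": {"free_cpu": 2.0, "free_mem_gb": 4.0},
    "worker-2": {"free_cpu": 0.5, "free_mem_gb": 1.0},
}
pod_request = {"cpu": 1.0, "mem_gb": 2.0}

def feasible(node):
    # Filter step: can this node fit the pod at all?
    return (node["free_cpu"] >= pod_request["cpu"]
            and node["free_mem_gb"] >= pod_request["mem_gb"])

def score(node):
    # Scoring step: prefer the node with the most resources left after placement.
    return ((node["free_cpu"] - pod_request["cpu"])
            + (node["free_mem_gb"] - pod_request["mem_gb"]))

candidates = {name: spec for name, spec in nodes.items() if feasible(spec)}
best = max(candidates, key=lambda name: score(candidates[name]))
print("schedule pod on", best)  # -> worker-1
```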
Controller Manager
The controller manager runs the non-terminating control loops that regulate the state of the Kubernetes cluster. Each of these control loops knows the desired state of the object it manages and watches that object's current state through the API server.
If the current state of an object does not match its desired state, the control loop takes corrective steps to bring the current state in line with the desired state. So, the controller manager makes sure that your current state is the same as the desired state.
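The pattern is easy to see in a toy sketch. This is not the real controller code, just the shape of a control loop: observe the current state, compare it with the desired state, and act on the difference. The "replica count" and the helper functions here are hypothetical.

```python
import time

desired_replicas = 3   # the desired state for a hypothetical app
running_replicas = 1   # pretend this is what we observe via the API server

def observe_current_state():
    return running_replicas

def create_replica():
    # Corrective step: bring the current state closer to the desired state.
    global running_replicas
    running_replicas += 1

# A non-terminating control loop (trimmed to a few iterations for the example).
for _ in range(5):
    if observe_current_state() < desired_replicas:
        create_replica()
    print("current:", observe_current_state(), "desired:", desired_replicas)
    time.sleep(0.1)
```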
etcd
etcd is a distributed key-value store that is used to store the cluster state. It can either be part of the Kubernetes master, or it can be configured externally. etcd is written in Go, and it is based on the Raft consensus algorithm.
Raft allows a collection of machines to work as a coherent group that can survive the failure of some of its members. At any given time, one of the nodes in the group is elected the leader, and the rest of them are followers.
There can be only one leader, and all the other nodes follow it. Besides storing the cluster state, etcd is also used to store configuration details such as subnets and ConfigMaps.
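To make this concrete, Kubernetes keeps its objects in etcd under keys prefixed with /registry. The sketch below fakes that layout with a plain dictionary purely to show the idea; it is not how you would actually talk to etcd, and the stored values are invented examples.

```python
# A toy, in-memory stand-in for etcd, only to illustrate the key/value layout.
cluster_state = {}

def put(key, value):
    cluster_state[key] = value

def get(key):
    return cluster_state.get(key)

# Kubernetes stores objects under keys like /registry/<resource>/<namespace>/<name>.
put("/registry/pods/default/nginx-demo", {"phase": "Running", "node": "worker-1"})
put("/registry/configmaps/default/app-config", {"LOG_LEVEL": "debug"})

print(get("/registry/pods/default/nginx-demo"))
```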
Worker Node
A worker node is a virtual or physical server that runs the applications and is controlled by the master node. The pods are scheduled on the worker nodes, which have the necessary tools to run and connect them. A pod is nothing but a collection of one or more containers.
And to access the applications from the external world, you have to connect to the worker nodes and not the master nodes.
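For instance, with the Python client (assuming a running cluster and a kubeconfig; the pod name and image are just examples), you can create a pod and then check which worker node the scheduler placed it on:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="nginx-demo"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="nginx", image="nginx:1.25")]
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# Once the scheduler has assigned the pod, spec.node_name holds the worker node
# (it may take a moment before this field is set).
placed = v1.read_namespaced_pod(name="nginx-demo", namespace="default")
print("pod runs on worker node:", placed.spec.node_name)
```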
Let’s explore the worker node components.
Container Runtime
The container runtime is basically used to run and manage the life cycle of containers on the worker node. Some examples of container runtimes are containerd, rkt, lxc, etc. Docker is often referred to as a container runtime, but to be precise, Docker is a platform that uses containerd as its container runtime.
Kubelet
Kubelet is basically an agent that runs on each worker node and communicates with the master node. So, if you have ten worker nodes, then kubelet runs on each of them. It receives the pod definition by various means (primarily through the API server) and runs the containers associated with that pod. It also makes sure that the containers which are part of the pods are always healthy.
The kubelet connects to the container runtime through the Container Runtime Interface (CRI), which is based on the gRPC framework, to perform container and image operations. The CRI consists of two services: the image service is responsible for all the image-related operations, while the runtime service is responsible for all the pod and container-related operations.
Let me tell you something interesting: container runtimes used to be hard-coded into Kubernetes, but with the development of CRI, Kubernetes can now use different container runtimes without the need to recompile. So, any container runtime that implements CRI can be used by Kubernetes to manage pods, containers, and container images. dockershim and cri-containerd are two examples of CRI shims. With dockershim, containers are created using Docker installed on the worker nodes, and internally Docker uses containerd to create and manage the containers.
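To give you a feel for that split between the two services, here is a highly simplified Python sketch. The real CRI is a gRPC protobuf API; these class and method names are only loosely modeled on it and are meant as an illustration, not the actual interface.

```python
# A simplified, illustrative sketch of the CRI split -- not the real gRPC API.

class ImageService:
    """All image-related operations."""
    def pull_image(self, image): ...
    def list_images(self): ...
    def remove_image(self, image): ...

class RuntimeService:
    """All pod- and container-related operations."""
    def run_pod_sandbox(self, pod_config): ...
    def create_container(self, sandbox_id, container_config): ...
    def start_container(self, container_id): ...
    def stop_container(self, container_id): ...
```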
Kube-proxy
Kube-proxy runs on each worker node as the network proxy. It listens to the API server for the creation or deletion of each service endpoint. For each service endpoint, kube-proxy sets up routes so that traffic can reach it.
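Here is a small sketch of that watch pattern using the Python client (assuming a kubeconfig and a running cluster). Kube-proxy does this internally and then programs the node's routing rules; the sketch only prints what it sees instead.

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Watch the API server for service changes, the same way kube-proxy does.
w = watch.Watch()
for event in w.stream(v1.list_service_for_all_namespaces, timeout_seconds=30):
    svc = event["object"]
    print(event["type"],
          svc.metadata.namespace + "/" + svc.metadata.name,
          "cluster IP:", svc.spec.cluster_ip)
    # A real kube-proxy would now update its routing rules for this service.
```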
Conclusion
I hope this helps you understand the Kubernetes architecture in a better way. Kubernetes skills are always in demand, and if you are looking to learn them and build a career, then check out this Udemy course.