Kubernetes is the most popular open-source platform for automating the deployment, scaling, and management of containerized applications.
According to the annual survey by the Cloud Native Computing Foundation (CNCF), 96% of organizations are either using K8s or evaluating it. By the same count, 5.8 million developers worldwide use Kubernetes, which is 31% of backend developers across the globe.
It is preferred for its ability to improve scalability and availability and to shorten deployment times. While many developers kick-start their container journey with Docker (a tool that relies on the CLI to interact with containers, one at a time), K8s provides high-level abstractions that let you define applications and their infrastructure through declarative manifests you can collaborate on.
If you are new to Kubernetes, this article is tailored to walk you through the basics and share insights to get you started. You’ll learn how K8s can help you as a developer supercharge your digital products.
What Is Kubernetes, and Why Do You Need It?
Kubernetes is an open-source orchestration engine for automating the deployment, scaling, and management of containerized applications; this includes handling reliability and availability.
Simply put, think of an application you have containerized. To serve your application’s users, you need to run several containers. The worry kicks in when you need to manage those containers, which may not even be on the same machine. So, what is the solution to this problem?
Kubernetes comes to your rescue by providing an efficient way to handle all these processes seamlessly. While you might liken K8s to a container engine like Docker, it is a container orchestrator. As a developer starting out, you don’t need to worry about how K8s performs the orchestration, and you likely won’t set up a K8s cluster for your application yourself; more on this shortly.
However, you will interact with clusters that your infrastructure team sets up, so familiarizing yourself with the objects you’ll be working with is crucial. Before doing that, you’ll need a high-level understanding of the architecture to grasp what happens under the hood.
Features of Kubernetes
Kubernetes has several features with a wide scope of capabilities for running containers and other associated infrastructure. Here’s a list:
- Automated rollouts, scaling, and rollbacks – K8s automates creating your specified number of replicas, distributing them across the most suitable hardware, and rescheduling containers if a node goes down. You can scale your replicas instantly based on demand or changing needs such as CPU usage.
- Service discovery, load balancing, and ingress – Kubernetes offers a complete networking solution, including internal service discovery and public exposure of containers.
- Stateful and stateless applications – In its early days, K8s focused mainly on stateless containers. It now ships with built-in objects representing stateful applications, so in practice virtually any application can run on Kubernetes.
- Storage orchestration – Whether on a local filesystem, a network share, or in the cloud, Kubernetes abstracts persistent storage for applications running in containers. The abstraction lets you define storage requirements irrespective of the underlying infrastructure. While the details are beyond the scope of this article, it works through concepts like persistent volumes (PV), persistent volume claims (PVC), storage classes, and volume plugins.
- Declarative state – K8s uses YAML Ain’t Markup Language (YAML) files, called object manifests, to specify the desired state of your cluster. The manifests dictate what your cluster should look like, including, but not limited to, the desired number of application instances and networking rules, among other configurations. When you apply manifests, K8s automatically handles all state transitions – you don’t have to write scripts to do this.
- Multiple working environments – You are not limited to using Kubernetes in the cloud or on your developer workstation. Distributions are available for almost every use case: the major cloud providers (Amazon Web Services, Google Cloud, and Microsoft Azure) all offer managed Kubernetes services, while single-node distributions like Minikube and K3s are available for local use.
- Extensibility – K8s already packs a lot of functionality, and you can extend its capabilities further. You can build custom object types, operators, and controllers to streamline your workloads.
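To make the declarative-state idea concrete, here is a minimal manifest sketch that asks Kubernetes to keep three replicas of a web container running. The names and labels (`web-deployment`, `app: web`) are illustrative, not prescribed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment    # illustrative name
spec:
  replicas: 3             # desired state: three identical pods
  selector:
    matchLabels:
      app: web            # manage pods carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25    # any container image works here
          ports:
            - containerPort: 80
```

Applying this file with `kubectl apply -f deployment.yaml` hands all the state transitions over to Kubernetes: it creates the pods, spreads them across nodes, and replaces any that die.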
Kubernetes Architecture
At its core, a Kubernetes cluster comprises one or more master (control plane) nodes and a set of worker nodes. The master node calls the shots in the cluster, while the worker nodes run the applications as instructed by the master.
Here’s a further breakdown.
The Master Node(s)
The master node dictates the cluster state and decides each node’s actions. Several processes are required to set up the master node.
- API Server
All cluster communication is based here. It’s the gateway that allows all cluster components to exchange information, and it exposes the Kubernetes API. It plays two main roles. The first is acting as the entry point that lets users interact with the cluster, for instance when sending requests via kubectl. The second is gatekeeping: authenticating and validating requests so that only authorized users can execute them.
- Scheduler
The scheduler assigns applications, or Kubernetes workload objects, to worker nodes. It places pods on nodes based on their resource requirements; a pod is simply the smallest unit of deployment in Kubernetes.
- Controller Manager
This unit watches for cluster state changes, such as node failures or dying pods, and works to restore the desired state, including maintaining the correct number of pods. For example, if a pod dies unexpectedly, the controller manager asks the scheduler to decide which node should host a replacement, and the kubelet on that node spins up the new pod.
- etcd
Also referred to as the cluster brain, etcd is a distributed key-value store for the cluster configuration; every cluster state change is recorded here. You can back up a cluster by saving this key-value store. Note, however, that only cluster state data is stored here, not application data. The unit holds cluster state information and makes it available to the other control plane processes so they stay aware of the cluster.
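As a sketch of the backup idea mentioned above, etcd ships with an `etcdctl` CLI that can snapshot the store. The endpoint and certificate paths below are illustrative and vary per cluster:

```shell
# Save a snapshot of the cluster state (paths are illustrative)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```

Restoring that snapshot file is what brings a cluster’s configuration back after a control plane failure; application data still needs its own backup strategy.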
The Worker Node(s)
Every worker node runs three node processes that allow K8s to interact with it and to spin up pods within the node. The required processes are:
- Kubelet
This is Kubernetes’s primary node agent, which drives the container execution layer. Take this unit out, and Kubernetes is little more than a REST API backed by a key-value store. Containers are always isolated from each other and from the underlying host system, which has proven critical to decoupling the management of individual applications from each other and from the physical or virtual infrastructure.
While API admission control can reject pods or add additional constraints, the kubelet is the final arbiter of what pods run on a particular node, not the scheduler or DaemonSets. To sum up, the kubelet interacts with both the node and the containers; it takes configuration files and spins up pods using the container runtime.
- Container runtime
This component runs the containers themselves. Examples include Docker, rkt, and containerd; there’s a little more on how containers work in the containerization section below.
- Kube-proxy
This unit supplies an abstraction layer for groups of pods under common policies, as in the case of load balancing. All nodes run kube-proxy to provide a virtual IP address for clients accessing dynamic pods. This structure handles load balancing while keeping performance overhead low.
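To illustrate the virtual-IP abstraction that kube-proxy implements, a Service manifest such as the following sketch gives a stable address to whatever pods match its selector; the name and label are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service      # illustrative name
spec:
  selector:
    app: web             # traffic is routed to pods carrying this label
  ports:
    - port: 80           # the Service's stable virtual port
      targetPort: 80     # the container port behind it
```

Clients talk to the Service’s address; kube-proxy takes care of spreading those connections across the pods behind it, even as pods come and go.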
How Containerization Works
Containerization entails packaging all the needed pieces of a software application into one unit. Underneath, containers are a collection of libraries, binaries, and all the needed application configuration, but they do not include kernel resources or virtualized hardware.
Instead, containers execute ‘on top of’ a container runtime that provides those resources. Since containers include only the basic components and app dependencies, they are lightweight and thus faster than full virtual machines.
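A quick way to see that containers share the host kernel rather than virtualize it (assuming Docker is installed and running on a Linux host) is to compare kernel versions inside and outside a container:

```shell
# Both commands should print the same kernel release,
# because the container reuses the host kernel
uname -r
docker run --rm alpine uname -r
```

A virtual machine, by contrast, boots its own kernel, which is exactly the overhead containers avoid.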
Also read: Containers vs. Virtual Machines: Explaining the Differences
How to Install and Setup Kubernetes
I’ve spent quite some time being theoretical; the following sections will be tactical and involve some hands-on container experience. This tutorial covers installation on the Windows operating system in particular.
There are multiple ways to install on Windows; you can opt for the command line or the graphical user interface. Either way, ensure you meet the following requirements.
Your hardware needs at least 2GB of memory for the master node and 700MB for each worker node. On the software side, you need Hyper-V, Docker Desktop, a unique MAC address, and a unique product UUID for every node. Here’s the step-by-step approach.
Installing and Setting Up Hyper-V
Hyper-V is Windows’ default virtualization software; essentially, it is VirtualBox on steroids. It lets you manage virtual machines through either the Microsoft GUI or the command line. To enable Hyper-V, follow these steps.
- Open the Control Panel.
- Click on `Programs` in the left panel.
- Under the Programs and Features page, click `Turn Windows features on or off`.
- Select the Hyper-V and Windows Hypervisor Platform features.
- Click OK; your machine should restart to activate the new settings.
Occasionally, your PC may restart several times to ensure everything is properly configured. You can verify that the installation succeeded by running the following command in PowerShell.
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V
Confirm that the output shows an `Enabled` state.
Installing Docker
As you have learned, K8s is a container orchestration tool built on top of a container runtime; in this case, Docker is a good choice. K8s communicates with Docker and manages everything at an enterprise level. Get moving by downloading Docker Desktop for Windows. If you’re wondering why Docker Desktop is necessary, it’s preferred because it simplifies developing, shipping, and running dockerized (containerized) applications.
It is also the fastest way to build Docker apps on Windows using Hyper-V and networking. After a successful installation, Docker is accessible from any terminal as long as it’s running. For a detailed installation guide, check out the official Docker documentation. If you encounter issues like hidden icons after installation, restarting your machine usually solves them.
Installing Kubernetes
The Docker Desktop GUI lets you configure settings and install and enable Kubernetes. To install K8s, follow these steps.
- Right-click the Docker tray icon and select `Settings`.
- On the left panel, choose `Kubernetes`.
- Check `Enable Kubernetes` and click `Apply`.
Docker will then install additional packages and dependencies. The process takes about five to ten minutes, depending on your internet speed. You can use the Docker app to confirm that everything is working correctly.
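You can also confirm from a terminal that the single-node cluster Docker Desktop creates is responding; kubectl ships with Docker Desktop, so these standard commands should work as-is:

```shell
# Show client and server versions; a server line confirms the cluster responds
kubectl version

# The single node should report a Ready status
kubectl get nodes
```

If `kubectl get nodes` lists a node in the `Ready` state, Kubernetes is up and you can move on.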
Kubernetes apps can be deployed using the CLI alone, but you may want the K8s dashboard, which is not installed by default. Install the dashboard using the following steps.
- Download the dashboard’s YAML configuration (named `recommended.yaml` in the official releases).
- Deploy the application with:
kubectl apply -f .\recommended.yaml
- Confirm that all is set up with:
kubectl get -f .\recommended.yaml
To access the dashboard, run the following commands in PowerShell (not CMD).
- Generate an access token:
((kubectl -n kube-system describe secret default | select-string "token:") -split " +")[1]
- Copy the generated token, then start the proxy:
kubectl proxy
- In your browser, open
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
- Click on `Token` and paste your token.
- Sign in.
If you have made it here, bravo! Your screen should be populated with the K8s dashboard, and you can now manage your apps without doing the hard work through the CLI.
Also read: How to Install Kubernetes on Ubuntu 18
How to Create and Manage Kubernetes Cluster
If you have followed along to here, you should have successfully installed Kubernetes on your host. Next, follow these steps to create and perform simple management on your cluster:
- Configure networking – Set up networking between cluster nodes so they can communicate with each other.
- Set up cluster authentication – Create authentication and authorization mechanisms for cluster access.
- Set up master components – This involves the API server, scheduler, and controller manager.
- Join worker nodes – Connect worker nodes to the cluster using configuration files provided by the cluster setup process.
- Deploy add-ons – You can install extensions to enhance the cluster’s functionality.
- Manage workloads – It’s time for you to deploy your apps.
While this is just an overview, cluster creation involves many steps and several commands. The official documentation guide on creating clusters should be your guiding hand before deployment.
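As a rough sketch of the steps above using kubeadm (one common bootstrapping tool; the CIDR, master IP, token, and hash below are placeholders you would replace with your own values):

```shell
# On the master node: initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster (as instructed by kubeadm init's output)
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

# Deploy a pod network add-on (a CNI plugin of your choice) before scheduling workloads

# On each worker node: join the cluster using the token kubeadm init printed
sudo kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

Managed services and local distributions automate most of this, which is why beginners rarely need to run these commands by hand.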
How to Deploy Your First Application Using Kubernetes
The most common command pattern when using K8s is `kubectl action resource`, which performs a specific action, like creating or deleting, on a specified resource.
If you’re stuck, append `--help` to a particular subcommand for additional information, for instance `kubectl get nodes --help`.
Deploy your first K8s app using the following command:
kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
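Once the deployment exists, a few standard kubectl follow-up commands let you inspect, expose, and clean it up (assuming the deployment name used above):

```shell
# List the deployment and the pods it created
kubectl get deployments
kubectl get pods

# Expose the deployment on a NodePort so you can reach it from outside the cluster
kubectl expose deployment kubernetes-bootcamp --type=NodePort --port=8080

# Clean up when you're done experimenting
kubectl delete service kubernetes-bootcamp
kubectl delete deployment kubernetes-bootcamp
```

This create–inspect–expose–delete loop is the everyday rhythm of working with Kubernetes workloads.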
Final Words
This guide has been an entry point into Kubernetes technology. You have learned the benefits, features, and architecture of Kubernetes and seen how things work under the hood, even if you had to refer to a few external resources along the way.
While the whole tech stack may seem overwhelming to grasp as a beginner, this post has been a smooth guide to getting started with K8s. You’ll need a bit of practice to get confident with this technology, so keep the official Kubernetes documentation as your side-by-side reference. The more you practice, the faster you’ll become an expert on K8s.