Kubernetes is the go-to solution for microservices and container-based production deployments, trusted by the community and by large web-scale companies. It’s backed and used by titans like Red Hat, IBM and Microsoft. Let’s start the first article of our Kubernetes Tutorials series, The Easy Way.
When I first got to know Kubernetes, it was not a simple concept to grasp, as we were working on bare-metal and hypervisor-based production systems every day. According to Kelsey Hightower, a Google staff advocate and a towering personality in the Kubernetes world, Kubernetes will be the next big thing after the hypervisor and cloud eras. So I thought of simplifying the Kubernetes concepts in an easy-to-understand manner to help others who are eager to learn. I will explain the main components and their practical usage, using engaging visuals wherever possible.
Please come out of your comfort zone to ace Kubernetes, even if you don’t have any basic understanding or prior experience. There might be a lot of questions along the way, but don’t give up 🙂
History of Kubernetes
Kubernetes was born out of Google’s decade of experience managing containerized systems at large scale using Borg and Omega. Google ran hundreds of thousands of jobs, from many thousands of different applications, across many clusters. Later, Google introduced Kubernetes as an open-source system inspired by Borg.
Kubernetes and Containers
Containers (like Docker, rkt, containerd) allow you to create, deploy, and run applications in a very efficient way. A container packages an application with its required libraries and other dependencies, and ships it all out as one unit. Unlike virtual machines, containers share the same host operating system kernel.
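To illustrate that packaging idea, here is a minimal, hypothetical Dockerfile. The file names (`app.py`, `requirements.txt`) and the Python base image are placeholders for whatever your application actually uses:

```dockerfile
# Package the app and its dependencies into one shippable image.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # install the app's library dependencies
COPY app.py .
CMD ["python", "app.py"]              # what the container runs when started
```

Everything the application needs travels inside the image, so the container runs the same way on a laptop and on a production node.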
In production, we need to ensure that all services run with maximum availability. If a container dies, another container should spawn and continue the workload. But how to achieve this is a real question if you are new to the container-centric world. A container orchestration system that provisions, schedules, and manages containers in a scalable manner is the ideal choice.
That’s where Kubernetes comes in as the trusted platform to orchestrate, scale, and fail over containerized applications. It adds features like automated app deployments and updates, health checks, and self-healing of apps through auto-restart, auto-replication, and auto-scaling.
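As a small taste of those features, here is a sketch of a Kubernetes Deployment manifest. The name `web` and the `nginx` image are assumptions for illustration; the interesting parts are `replicas` (auto-replication and self-healing) and the `livenessProbe` (health check with auto-restart):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # keep three copies running; a dead one is auto-replaced
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        livenessProbe:     # health check: restart the container if it stops responding
          httpGet:
            path: /
            port: 80
```

You declare the desired state, and Kubernetes continuously works to keep the cluster in that state.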
Kubernetes Tutorial – The Easy Way
Kubernetes contains a few main components, as you can see from the above simplified diagram, which depicts the typical operation of a Maritime Port facility. I wanted to make this Kubernetes tutorial series easy to understand for everyone by turning it into an interesting journey, so I will explain the main concepts using the above diagram as the starting point.
You can see that the Ships are doing the hard work of moving containers across the sea. The Main Control Center is responsible for communicating with, managing the containers on, and monitoring the Ships.
In our Kubernetes analogy, the Ships are the Worker Nodes, which carry the containers.
The Main Control Center loads the containers onto the Ships and decides which containers should go onto which Ships. In addition, it plans how to load the containers, stores information about the containers and ships, and monitors the containers and ships, including the loading and unloading process.
The Main Control Center has different departments to handle various tasks, such as loading and moving containers between ships and monitoring the containers and workload. It also has tools, such as cranes to move containers, and devices for communication between the ships and the Main Control Center.
The Main Control Center corresponds to the Kubernetes Master in Kubernetes terminology.
Master – The Kubernetes Master is the main component for managing a Kubernetes cluster deployment. In the diagram, the Main Control Center of the port is our Master node. The Master has a few more components, such as the ETCD cluster, Kube API Server, Kube Controller Manager, and Kube Scheduler, which work together to control the Kubernetes cluster.
ETCD Cluster – There is a lot of information you need to store about the port’s daily operations: the number of ships that come to the port, the number of containers loaded and unloaded, container load and unload timestamps, which ships handled which containers, and so on. We need to ensure this data is recorded somewhere and available on demand. The Data Store Facility in the Port stores all of this data.
In Kubernetes terms, the Data Store Facility in the port corresponds to the ETCD cluster. It’s basically a key-value based distributed data store. It stores the critical data related to the Kubernetes cluster, such as config data, cluster state, and metadata. Kubernetes uses ETCD’s watch functionality to monitor cluster changes. When you interact with the Kubernetes cluster through the API (kubectl), the values you read (kubectl get commands) come from ETCD. In the same way, when you use the API to create Kubernetes resources (kubectl create commands), the changes are written back to ETCD. So if you want to back up cluster data, ETCD is the right pick.
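To make the read/write paths concrete, here is a sketch of the commands involved. It assumes a running cluster with kubectl already configured, and `my-pod.yaml` is a placeholder manifest name; the backup command assumes the etcdctl v3 client is available on a master node:

```
# Reads go through the API server, which fetches state from ETCD:
kubectl get pods

# Writes go through the API server, which persists the new object in ETCD:
kubectl create -f my-pod.yaml

# Backing up the cluster data means snapshotting ETCD:
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db
```

Notice that you never talk to ETCD directly in day-to-day use; the API server is always the gatekeeper.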
Kube API Server – The API server acts as the communication channel between the Kubernetes master and external users. You can use the kubectl CLI tool to manage cluster deployments via the API. The Communication Tower in the port is our API Server, facilitating communication between the ships and the control center. The API server is also the front end of the cluster’s control plane: the other master components and the worker nodes all talk to the cluster through it.
Kube Controller Manager – The port has departments that continuously watch operations and react when something goes wrong: if a crane breaks down, another takes over; if a shipment falls behind, replacements are arranged. In Kubernetes, the Kube Controller Manager runs the controllers, such as the Node Controller, which notices when nodes go down, and the Replication Controller, which maintains the desired number of containers, continuously driving the actual cluster state toward the desired state.
Kube Scheduler – Just as the port’s planning department decides which containers should be loaded onto which ships based on capacity and destination, the Kube Scheduler decides which worker node each newly created container should run on, based on its resource requirements, the available capacity of each node, and any placement constraints.
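As an example of the inputs the scheduler works with, here is a sketch of a pod spec. The pod name, image, and the `disktype=ssd` node label are hypothetical; the point is that the scheduler only places this pod on a node that has enough free CPU and memory and carries the matching label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cargo-app          # hypothetical name
spec:
  nodeSelector:
    disktype: ssd          # constraint: only nodes labeled disktype=ssd qualify
  containers:
  - name: app
    image: nginx:1.25      # placeholder image
    resources:
      requests:
        cpu: "500m"        # reserve half a CPU core on the chosen node
        memory: "256Mi"    # reserve 256 MiB of memory
```

In port terms: a refrigerated container can only go on a ship with powered slots and enough free deck space.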
Worker Nodes – Workers run the cluster workload as containers, which are spawned via the Master node’s API server. There can be thousands of worker nodes in a high-end Kubernetes cluster, and we need at least three worker nodes to run a production Kubernetes cluster effectively. The Ships are the worker nodes in our diagram. Each worker has important components such as the Kubelet “Node Agent” and the Kube Network Proxy.
Kubelet – The Kubelet is the node agent of a worker node. It reports the information the Master needs to make container-management decisions, such as the available underlying server resources and the state of running containers, and it carries out the Master’s instructions, starting and stopping containers on its node as directed via the API server.
Kubernetes Tutorial – What’s Next?
Stay tuned for the Part 2 of Kubernetes Tutorial, The Easy Way.