In the previous blog post, we went directly into the installation and configuration of a Kubernetes cluster. In this one, we will get into the fundamentals and some explanations. One of the challenging topics is Microservices, Docker and Kubernetes.
Microservices – A microservice architecture is a style or methodology that structures an application as a collection of independently deployable services. Instead of having one big monolithic application, it is split into separate services and processes. For example: a web application's authentication, login and user management can be split into different processes that communicate with each other over HTTPS secured with TLS.
So, this is where Kubernetes comes into the picture. It is a container-orchestration system for automating application deployment, scaling, and management, and it fits the microservices style naturally. Some of the advantages of this approach are:
Reliable language interoperability – Let's say one application has been written in Java and another in Go; as long as they communicate over HTTPS and speak the same protocol correctly, there will be no issue.
Working in teams – Let's say one team works in Python and another in Go. This way specialization occurs and responsibilities are shared cleanly, compared to a monolithic approach where everything is inter-dependent.
Fault debugging and identification – When services are split up, faults are identified more rapidly, and isolating the failure is much faster with microservices. We can also fail over to another instance of the microservice to absorb the failure.
So just imagine a microservice running in a Docker container: sometimes it needs more resources and sometimes it does not. This makes scaling very fast and efficient, without downtime.
There are also disadvantages when running microservices.
Just imagine many microservices depending on many others: it can be complex to debug and trace where the issue is. Complexity is the problem here, so documentation is important. Tracing a request from point A to Z is very hard, and in terms of knowledge and architecture it is very important to understand the whole containerized system. In a monolithic approach, this situation would be easier to solve.
Docker allows you to build, ship and run distributed applications, whether on bare metal or virtual machines. Applications are now shipped as microservices, hence the importance of learning Docker and Kubernetes. Containers are where the distributed applications are deployed, and Kubernetes orchestrates the distributed application. The idea is that the Docker container runs the image and Kubernetes orchestrates the containers.
Something that I have seen confuse many people is that a container is not a virtual machine. A virtual machine provides an abstract machine, with device drivers targeting that abstract machine, while a container provides an abstract OS.
They might share the same principles, but they are different. In a VM, we have the entire package: the guest OS, binaries, libraries, etc., whereas with Docker the containers share the host kernel and only carry their own binaries and libraries. With Docker, much more space is saved, whereas with VMs we are not cutting down on size.
To create a Docker image, we define it in a Dockerfile. In this article, I gave more ideas on how to build the image.
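As a minimal sketch (the base image, file names and port here are illustrative placeholders, not taken from the linked article), a Dockerfile for a small web service could look like this:

```dockerfile
# Base image pulled from Docker Hub
FROM python:3.12-slim

# Working directory inside the image
WORKDIR /app

# Copy the application code into the image
COPY app.py .

# Install the dependency the hypothetical app needs
RUN pip install flask

# Document the port the service listens on
EXPOSE 5000

# Command run when the container starts
CMD ["python", "app.py"]
```

You would then build and tag the image with `docker build -t my-service:1.0 .` so Kubernetes can later pull and run it.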
There are many container orchestration tools, and Kubernetes is one of them. We have seen the installation and configuration of a Kubernetes cluster in the article: Crash blog to start with kubernetes installation and configuration part1.
So, before going further, there are some key terms to get familiar with. Let's see them individually:
A node is an instance of a machine running Kubernetes. For example, in that article, master-node, worker-node1 and worker-node2 are nodes. They run the kubelet, the agent responsible for communication between the node and the master-node.
At step 9 in the previous article, before running kubeadm join, systemctl would show the kubelet service as down.
A node runs pods, and a pod contains one or more containers.
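As a sketch of what this looks like in practice (all names and the image are placeholders of my own), a minimal pod manifest with a single container is:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web          # label used later by services and deployments
spec:
  containers:
  - name: web
    image: nginx:1.25 # any container image works here
    ports:
    - containerPort: 80
```

Applying it with `kubectl apply -f pod.yaml` asks the cluster to schedule this pod on one of the nodes.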
A service handles requests, whether coming from inside the Kubernetes cluster, from one node to another, or routed from outside the network/cluster to hit the microservice.
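A hedged example of such a service (names, labels and port numbers are illustrative): a NodePort service routes outside traffic to any pod carrying the matching label.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort      # NodePort exposes the service outside the cluster
  selector:
    app: web          # traffic is routed to pods with this label
  ports:
  - port: 80          # port other pods use inside the cluster
    targetPort: 80    # container port on the pod
    nodePort: 30080   # port opened on every node (30000-32767 range)
```

Inside the cluster, other pods simply reach it at `web-service:80`; from outside, any node's IP on port 30080 hits the microservice.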
A deployment defines the desired state. For example, if we want 5 copies of a specific container in the Kubernetes cluster, we declare that and Kubernetes takes care of the rest.
The Kubernetes controller on the master-node contains the replication controller, which ensures the desired number of pod copies are present and running. It also keeps a history of whatever happens to the pods. For example, if you initially deployed 5 copies of a pod and then changed it to 10, it will keep that in mind. A deployment file is usually YAML, though it can be written in JSON as well. I use YAML because it is very human-readable and has the same structure as an Ansible playbook. Cool, right?
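To make the "5 copies" idea concrete, here is a sketch of a deployment manifest (again, all names and the image are placeholders I chose for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 5            # desired state: 5 copies of the pod
  selector:
    matchLabels:
      app: web           # manage pods carrying this label
  template:              # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Changing `replicas: 5` to `replicas: 10` and re-running `kubectl apply -f deployment.yaml` is all it takes to scale; Kubernetes reconciles the actual state with the desired one.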
The intra-cluster communication between pods on the worker nodes is handled by kube-proxy.
Kube API server
This is responsible for all communication between the workers and the master.
Kube scheduler
This is responsible for deciding on which node each pod, and therefore its microservices or Docker containers, should run. The decision can be based on resource criteria such as CPU or RAM.
Remember, as a good principle of a robust platform: always have more than one master, which ensures robustness and redundancy.
We can also use Kubernetes to roll out new updates and restart a container if it crashes. Because of the way updates are handled, with rolling updates, there is no downtime.
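The no-downtime behaviour can be tuned in the deployment spec itself; this fragment (values purely illustrative) tells Kubernetes to replace pods one at a time so capacity never drops to zero:

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most 1 pod down during the update
      maxSurge: 1        # at most 1 extra pod above the desired count
```

A new rollout is then triggered simply by changing the container image, for example with `kubectl set image deployment/web-deployment web=nginx:1.26`, and its progress can be watched with `kubectl rollout status`.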
I hope this article is useful and helps. In the next article, I will focus on the deployment of a Kubernetes master and two nodes using Ansible. We will also see how to build two masters and three workers.