
Get Started with Kubernetes: Introduction to Container Orchestration

Daniel Dimitrov
DevOps & Cloud Engineer
24.06.2022
Reading time: 4 mins.
Last Updated: 08.01.2024


What is Kubernetes?

Kubernetes is an open-source orchestration platform for managing containerized applications across different deployment environments, together with services that facilitate automation. It is typically used to organize cloud-based microservice applications. An orchestrator is a system that deploys and manages applications on your behalf.

What are the benefits of using Kubernetes?

  • Scalability: Applications deployed on Kubernetes can be scaled up and down dynamically based on load or other custom metrics (see the sketch after this list).
  • Recovery from minor failures: Applications can recover easily when a problem occurs, such as node issues or running out of memory. Kubernetes monitors Pods for such events, regularly checks each Pod’s health, and restarts containers when it notices an issue.
  • High availability: The deployment process can be configured to prevent downtime, so the application is essentially always accessible to users.
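
As a minimal sketch of the scalability point above (the Deployment name web-app and the CPU threshold are illustrative assumptions, not values from this post), a HorizontalPodAutoscaler can scale a Deployment automatically based on load:

# Hypothetical example: keep the web-app Deployment between 3 and 10 replicas,
# adding Pods when average CPU utilization rises above 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70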

What problem does Kubernetes solve?

The shift to microservices architecture drove increased use of container technologies, as containers are an ideal host for small, independent applications. Containerization is a good approach for packaging and launching applications.

Managing that many applications across hundreds or thousands of containers, spread over multiple environments, with scripts and home-grown tools can be extremely hard. In a production environment, you must keep the containers that run your applications healthy: if a container goes down, for example, you would have to restart it. This is precisely what made container orchestration technologies necessary. Kubernetes comes to the rescue by automating the process and providing a framework for building distributed systems.

How does Kubernetes work? Basic Kubernetes features

An orchestrator exposes its functionality through a set of primitives and abstractions, much as object-oriented programming is built on concepts such as objects, classes, inheritance, encapsulation, packages, and polymorphism. In the same spirit, Kubernetes introduces new primitives and abstractions for designing distributed systems, and it lets you control the behavior of your applications by defining some of the following primitives:

  • Pod – Pods are the smallest deployable units in Kubernetes. A Pod is an instance of a running process in the cluster, and it also provides shared network and storage resources for its containers:
      • Network – Each Pod automatically receives a unique IP address. The containers in a Pod can communicate with each other through localhost.
      • Storage – A set of storage volumes can be defined inside a Pod so that certain paths can be shared between containers or persisted across restarts.
  • Service – A Service is a primitive that provides load balancing and stable access to a set of Pods. Like a Pod, it is a REST object. Each Service you create automatically receives an Endpoints object: a dynamic list of all healthy Pods in the cluster that match the Service’s label selector.
  • Deployment – A Deployment is a mechanism for the declarative release of an application. It defines how the Pods that run a containerized application are created or modified. Deployments can scale the number of Pods, roll out updated code in a controlled way, or roll back to a previous version if necessary.
  • ConfigMap – A ConfigMap stores configuration data as key-value pairs. It can be consumed as environment variables by Pods or used in other, more sophisticated ways (see the sketch after this list).
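
To make the ConfigMap primitive more concrete, here is a minimal sketch (all names and values are illustrative, not taken from a real cluster): a ConfigMap holding two key-value pairs, consumed as environment variables by a Pod.

# Hypothetical ConfigMap with two key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"
  LOG_LEVEL: "info"
---
# A Pod that loads every key from the ConfigMap as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: nginx:1.25
      envFrom:
        - configMapRef:
            name: app-config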

What does the Kubernetes Basic Architecture look like?

A Kubernetes cluster is made up of at least one Master Node and one or more Worker Nodes connected to it. Each node runs a kubelet process: the agent that lets the cluster communicate with the node and actually execute tasks on it, such as running application processes. Each Worker Node hosts containers for the different applications deployed to the cluster.

Kubernetes basic architecture: Master Node and Worker Nodes

The Master Node is the first and most vital component, responsible for managing the Kubernetes cluster. It is the entry point for all administrative tasks. It is good practice to have at least 3 Master Nodes in a highly available cluster. The Master Node runs several components: the API Server, Controller Manager, Scheduler, and etcd.

The architecture also includes Worker Nodes. They are another essential component and contain all the services needed for networking between containers, for communicating with the Master Node, and for allocating resources to the scheduled containers. Workloads are placed on Worker Nodes by the Kubernetes Scheduler, which constantly tracks the available resources in the cluster and picks the most suitable node for a given workload based on its declared resource requests.
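
As a sketch of what the Scheduler looks at (the values below are purely illustrative), a container can declare the CPU and memory it needs, and the Scheduler only places the Pod on a node with enough free capacity:

# Illustrative Pod spec: the Scheduler uses the requests section
# to pick a node with enough free CPU and memory.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "250m"      # a quarter of a CPU core
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"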

How to use YAML files to define Kubernetes objects

Below you may review a YAML file for a Deployment, which defines how Pods are created. In this case it specifies the container image and tag, the number of Pod replicas (in our case 3), and the container port.

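The original manifest was shown as an image; the following is a minimal sketch that matches the description above (the application name web-app and the nginx image are assumptions):

# Sketch of the Deployment described above: 3 replicas of a containerized
# web application, with the container image tag and port specified.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25
          ports:
            - containerPort: 80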

Even with the Deployment in place, the web application is not yet reachable over the network. For this purpose it is necessary to define a Kubernetes Service, whose role is to provide access to the existing Pods. Shown below is such a Kubernetes Service.

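The Service manifest was also shown as an image; a minimal sketch that selects the Pods of the Deployment above might look like this (the port numbers are assumptions):

# Sketch of a Service that exposes the web-app Pods inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80          # port the Service listens on
      targetPort: 80    # containerPort of the Pods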

Here is another Kubernetes Service, this time of type LoadBalancer, for the same application:

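Again, the original manifest is not reproduced here; a sketch of the same application exposed through a cloud load balancer simply sets the type field (names and ports remain assumptions):

# Sketch of a Service exposed externally via a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80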

Kubernetes setup

In our blog post, you can learn how to easily set up a Multi-Master Kubernetes cluster in highly available mode with persistent storage on Amazon Web Services.
