High Availability (MULTI-MASTER) KUBERNETES CLUSTER Hosted on AWS


ITGix Team
Passionate DevOps & Cloud Engineers
12.09.2018
Reading time: 6 mins.
Last Updated: 25.04.2023


This is the first post of a mini-series dedicated to running Kubernetes hosted on AWS. For an introduction to Kubernetes and container orchestration, you may check out our Beginner Guide. This first post covers the considerations we make when proposing a production-ready, enterprise-grade Kubernetes environment to our clients. I will get more technical, covering the tools and AWS services we use, in the next blog post; here I will focus on the problems we are solving.


High availability is a characteristic we want our system to have: an agreed level of operational performance (uptime) sustained for a higher-than-normal period. These are the principles we follow in the system design:

  • Elimination of single points of failure. This means adding redundancy to the system so that failure of a component does not mean failure of the entire system. 
  • Reliable crossover. In redundant systems, the crossover point itself tends to become a single point of failure. Reliable systems must provide for reliable crossover. 
  • Detection of failures as they occur. If the two principles above are observed, then a user may never see a failure, but the maintenance activity must still detect and address it.

The diagram below shows the Kubernetes Master components used to set up a cluster. We will go through them one by one:

[Diagram: Kubernetes master components]

Ensuring ETCD availability

ETCD is the API object store used by Kubernetes. If we lose the data here, we lose our cluster. Here is what we need to keep in mind when running ETCD in a highly available setup:

  • Run ETCD as a cluster with an odd number of members. An ETCD cluster needs a majority of nodes, a quorum, to agree on updates to the cluster state. For a cluster with n members, the quorum is (n/2)+1 (see the sketch after this list).
  • ETCD is a leader-based distributed system. Ensure that the leader periodically sends heartbeats on time to all of the followers to keep the cluster stable. 
  • Ensure that no resource starvation occurs. The performance and stability of the cluster are sensitive to network and disk IO. Any resource starvation can lead to heartbeat timeout, causing instability of the cluster. An unstable ETCD indicates that no leader is elected. Under such circumstances, a cluster cannot make any changes to its current state, which implies no new pods can be scheduled.
  • Keeping ETCD stable is critical to the stability of Kubernetes clusters. Therefore, run ETCD on dedicated machines or in isolated environments to guarantee its resource requirements.
  • We also need to ensure that our data stays intact in case the running host crashes for whatever reason. Since we are running our cluster in Amazon, the most convenient and reliable service we can use is the Elastic Block Store (EBS). In this way, if we stop, terminate, or decide to do any migration, we can easily do so by attaching the EBS to another instance. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability.
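
To make the quorum arithmetic from the first point concrete, here is a minimal Python sketch (purely illustrative) of why odd-sized ETCD clusters are preferred:

```python
# Minimal sketch of the ETCD quorum arithmetic: quorum = (n // 2) + 1.
def quorum(members: int) -> int:
    return members // 2 + 1

for n in (1, 2, 3, 4, 5):
    print(f"{n} member(s): quorum={quorum(n)}, tolerates {n - quorum(n)} failure(s)")
```

Note that going from three to four members does not improve fault tolerance (both tolerate a single failure), which is why odd cluster sizes are the norm.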


Building a reliable API Server

If ETCD is the brain of our cluster, then the Kubernetes API server is like the rest of the nervous system. The API server acts as the go-between for all data entering and exiting ETCD – from you and from the worker nodes that your application is deployed on. The great thing about the API server is that considerations for it are mostly driven by your considerations for ETCD. Since each master node on your cluster has an ETCD membership and acts as an API server, you will have high availability for both the data your API server consumes, as well as the functionality of the API server.


In this manual setup the API server communicates with ETCD through localhost, so you never have to worry about the API server getting disconnected from its data. Our work here is to install an API server on each master and set up a load balancer in front of them, so the load is distributed across the entire system. The load balancer is easily set up using Amazon ELB. Once we have the load balancer address, we will need it when setting up the rest of the master components.
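
As a rough illustration of that step, here is a boto3 sketch that creates a classic ELB in front of the API servers. The names, subnets, and instance IDs are hypothetical placeholders, and the listener assumes the default secure API port 6443:

```python
import boto3

# Hypothetical names/IDs for illustration only.
elb = boto3.client("elb", region_name="eu-west-1")

# TCP listener in front of the API servers.
elb.create_load_balancer(
    LoadBalancerName="k8s-apiserver",
    Scheme="internal",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
    Listeners=[{
        "Protocol": "TCP", "LoadBalancerPort": 6443,
        "InstanceProtocol": "TCP", "InstancePort": 6443,
    }],
)

# Health-check the API server port so an unhealthy master is taken out of rotation.
elb.configure_health_check(
    LoadBalancerName="k8s-apiserver",
    HealthCheck={"Target": "TCP:6443", "Interval": 10, "Timeout": 5,
                 "UnhealthyThreshold": 2, "HealthyThreshold": 2},
)

# Register the three master instances behind the load balancer.
elb.register_instances_with_load_balancer(
    LoadBalancerName="k8s-apiserver",
    Instances=[{"InstanceId": i} for i in ("i-0aaa1111", "i-0bbb2222", "i-0ccc3333")],
)
```

The DNS name of this load balancer is the address you later hand to the controller manager, the scheduler, and the kubelets.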


Setting up the Controller Manager and Scheduler

Nothing we have set up so far actually modifies the cluster state; that is the job of the controller manager and scheduler. For reliability, we want only one actor modifying the state of the cluster at a time, yet we also want replicated instances of these actors in case a machine dies. A lease-lock in the API is used to perform a master election. The scheduler and controller-manager can be configured to communicate using the load-balanced IP address of the API servers; regardless of how they are configured, both complete the leader election process by using the --leader-elect flag.
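
To illustrate the idea (this is not the real implementation, which stores the lease as an object in the API and lets all replicas race to update it through the API server), here is a simplified Python sketch of a lease-lock election:

```python
import time

LEASE_DURATION = 15  # seconds a lease stays valid without renewal
RETRY_PERIOD = 5     # how often each replica tries to acquire or renew

class LeaseLock:
    """Toy stand-in for the lease object the real components store via the API."""
    def __init__(self):
        self.holder = None
        self.renewed_at = 0.0

    def try_acquire(self, candidate: str) -> bool:
        now = time.time()
        expired = (now - self.renewed_at) > LEASE_DURATION
        if self.holder in (None, candidate) or expired:
            self.holder, self.renewed_at = candidate, now
            return True
        return False

def run(lease: LeaseLock, replica: str, rounds: int = 1) -> None:
    # Every controller-manager/scheduler replica runs this loop; only the
    # current lease holder actually acts on the cluster.
    for _ in range(rounds):
        if lease.try_acquire(replica):
            print(f"{replica}: I am the leader, reconciling cluster state")
        else:
            print(f"{replica}: standing by, leader is {lease.holder}")
        time.sleep(RETRY_PERIOD)

if __name__ == "__main__":
    lock = LeaseLock()
    run(lock, "controller-manager-eu-west-1a")  # acquires the lease
    run(lock, "controller-manager-eu-west-1b")  # stands by until the lease expires
```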


Kubernetes worker components in detail

[Diagram: Kubernetes worker components]

If ETCD is the brain and the API server is the rest of the nervous system, the workers are the arms and legs. Your workers are where your applications reside, they are what everyone else interacts with, and in some ways they are the least important part of the life of your cluster. Workers can die and be replaced with little, if any, consequence to your cluster. If you have multiple workers, your application can retain functionality in most situations. You always want multiple workers, but keep in mind that scaling them is extremely quick and easy. It is generally safe to start with a small number and add more later if you start running low on disk, memory, or CPU.
The kubelet gets the configuration of a pod from the API server and ensures that the described containers are up and running. It is the worker service responsible for communicating with the master node: it reports the state of the node and its pods back to the API server, which persists that information in ETCD.
kube-proxy acts as a network proxy and a load balancer for a service on a single worker node. It takes care of the network routing for TCP and UDP packets.
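
If you have a kubeconfig pointing at the cluster, a quick sketch with the official Kubernetes Python client shows what the kubelets report back through the API server:

```python
from kubernetes import client, config

# Assumes a local kubeconfig with access to the cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

# Print each node and its kubelet-reported Ready condition.
for node in v1.list_node().items:
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(f"{node.metadata.name}: Ready={ready}")
```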


Multi-zone setup    

Multi-zone support is deliberately limited: a single Kubernetes cluster can run in multiple zones, but only within the same region (and cloud provider). One limitation of a multi-zone setup is that the zones need to be located close to each other on the network, because Kubernetes does not perform any zone-aware routing. In particular, traffic that goes via services might cross zones (even if some pods backing that service exist in the same zone as the client), and this may incur additional latency and cost.
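
You can see how your nodes are spread across zones by reading the zone label the cloud provider sets on each node. A short sketch (the exact label name depends on your Kubernetes version):

```python
from collections import defaultdict
from kubernetes import client, config

# Newer clusters use "topology.kubernetes.io/zone"; older ones use the
# "failure-domain.beta.kubernetes.io/zone" label.
ZONE_LABELS = ("topology.kubernetes.io/zone", "failure-domain.beta.kubernetes.io/zone")

config.load_kube_config()
nodes_by_zone = defaultdict(list)
for node in client.CoreV1Api().list_node().items:
    labels = node.metadata.labels or {}
    zone = next((labels[l] for l in ZONE_LABELS if l in labels), "unknown")
    nodes_by_zone[zone].append(node.metadata.name)

for zone, names in sorted(nodes_by_zone.items()):
    print(zone, names)
```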


Talking about scalability…

I have barely touched on how you are going to scale out your applications. We want this to be as flexible as possible. We have two types of instances that need scaling.
Scaling the Kubernetes Master: whether you are a small startup or a big enterprise, when going for high availability in Amazon you want your masters spread across the entire region. In the given example, I am using eu-west-1 (Ireland). In each Availability Zone (eu-west-1a, eu-west-1b, eu-west-1c) we will place one Kubernetes Master, each assigned to its own Auto Scaling group.

Why is each master in its own Auto Scaling group? The main reason is to keep us safe from replicating an unhealthy instance.
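
A minimal boto3 sketch of that layout, assuming a launch configuration for the masters already exists and using hypothetical subnet IDs:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# One single-instance Auto Scaling group per Availability Zone, so a failed
# master is replaced in place rather than replicated elsewhere.
masters = {
    "eu-west-1a": "subnet-aaaa1111",  # hypothetical subnet IDs
    "eu-west-1b": "subnet-bbbb2222",
    "eu-west-1c": "subnet-cccc3333",
}

for az, subnet in masters.items():
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName=f"k8s-master-{az}",
        LaunchConfigurationName="k8s-master-lc",  # assumed to exist already
        MinSize=1, MaxSize=1, DesiredCapacity=1,
        VPCZoneIdentifier=subnet,
        HealthCheckType="EC2",
        HealthCheckGracePeriod=300,
        Tags=[{"Key": "Role", "Value": "k8s-master", "PropagateAtLaunch": True}],
    )
```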


Scaling Kubernetes Worker 

As described a few lines above, our Kubernetes worker node is our workhorse. It is enough to set it up and configure it once; after that, it is easy to assign it to an Auto Scaling group. Raising the "Desired" number of worker instances will then bring up identical hosts with exactly the same worker components. Their kubelets will all be configured the same and will register with the Kubernetes master. Running out of resources on one of the worker instances will bring up a fresh one in another Availability Zone. This mechanism is one of the strongest scaling features in Amazon.
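
In practice, that scale-out is a single API call against the workers' Auto Scaling group. A hedged sketch with a hypothetical group name:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Bring the worker fleet to five identical nodes; new instances boot with the
# same kubelet/kube-proxy configuration and register with the master.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="k8s-workers",
    DesiredCapacity=5,
    HonorCooldown=False,
)
```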


All of this leads us to the architecture below:

[Diagram: high-availability Kubernetes cluster architecture on AWS]

This is how easy it is to set up a Kubernetes cluster by hand in a highly available mode with persistent storage in Amazon 🙂
The next blog post will get more technical: we will set up a Kubernetes cluster in Amazon from scratch, with a new Amazon IAM account, and then I will show how easy this process is by using kops and its magic.
