Blog

Kubernetes Load Balancers

Stefan Tsankov
DevOps and Cloud Engineer
11.05.2023
Reading time: 5 mins.
Last Updated: 01.11.2023

Today, we will explore how to access applications running on Kubernetes. Specifically, we will look at Kubernetes services of type LoadBalancer, their use cases, and their integration with cloud provider load balancers. We will also discuss the concept of Ingress and how it relates to LoadBalancer services.

Furthermore, we will briefly examine finalizers and their role in making sure that cloud resources are cleaned up, rather than orphaned, when load balancers are deleted in Kubernetes.

Services in Kubernetes

Kubernetes runs applications in Pods, each with its own IP address. However, Pods are ephemeral: when Kubernetes recreates a Pod, it receives a new IP address, which poses a problem when attempting to access an application running on it. To address this issue, Kubernetes Services offer a dependable, stable IP address through which we can reach the Pod-hosted application.

(Figure: Services in Kubernetes)

By default, Kubernetes uses ClusterIP as its service type, which is only accessible from inside the cluster. On the other hand, NodePort and LoadBalancer service types enable external cluster access.
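As a minimal sketch of the latter, a LoadBalancer service could look like this (the service name, label, and ports are illustrative):

```yaml
# Hypothetical example: exposes pods labeled app=my-app externally.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer     # ClusterIP is the default; LoadBalancer requests an external LB
  selector:
    app: my-app          # traffic is routed to pods carrying this label
  ports:
    - port: 80           # port exposed by the load balancer
      targetPort: 8080   # port the application listens on inside the pod
```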

Ingress vs Service Type LoadBalancer

Upon initial inspection, Kubernetes services of type LoadBalancer and Ingresses may appear to perform identical tasks. Let’s take a closer look at whether that is the case and how they differ.

A LoadBalancer service in Kubernetes is accessible through external load balancers located outside of your Kubernetes cluster. These load balancers can interact with your pods provided the pods can be accessed externally. This feature is natively available in Google Cloud and AWS. In the case of AWS, this service maps directly to an Elastic Load Balancer (ELB). When running on AWS (EKS), Kubernetes can automatically set up and configure an ELB instance for each deployed LoadBalancer service.

The following example demonstrates how to access pods and their applications on Kubernetes using a LoadBalancer service type with AWS.

(Figure: Kubernetes LoadBalancer service type with AWS)

In situations where you run Kubernetes on-premise, relying on a cloud provider for load balancing is not an option. In such cases, alternatives like MetalLB are available. MetalLB is a load-balancer implementation designed specifically for Kubernetes clusters running on bare-metal infrastructure. It allocates external IP addresses to services of type LoadBalancer from pools you configure and announces them on the network, but it does not terminate or route application traffic itself. A common setup is to combine MetalLB with HAProxy as an external load balancer: MetalLB allocates IP addresses to services within the Kubernetes cluster, and HAProxy then uses those IP addresses to route traffic to the appropriate backend pods.
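As an illustrative sketch (the pool name, namespace objects, and address range are assumptions to adapt to your network), a MetalLB configuration in layer-2 mode consists of an IPAddressPool and an L2Advertisement:

```yaml
# Hypothetical address pool; pick a range that is unused on your network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: on-prem-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.240-192.168.10.250
---
# Announce the pool's addresses on the local network (ARP/NDP).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: on-prem-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - on-prem-pool
```

With this in place, any service of type LoadBalancer receives an external IP from the pool automatically.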

On the other hand, an Ingress is essentially a collection of routing rules that need to be passed on to a controller specifically designed to receive them – an Ingress Controller. It’s possible to deploy numerous Ingress rules, but without a controller to process them, they will remain inactive. In practice, the Ingress Controller itself is typically exposed through a LoadBalancer service, which receives the external traffic that the Ingress rules then route.

Let’s consider an example Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: itgix-ingress-example
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "bar.foo.com"
    http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: service2
            port:
              number: 80

Each Ingress rule is made up of the following components:

  • Host
  • Paths
  • Backend

In this example, the host is specified for both Ingress rules, making them applicable to inbound HTTP traffic with the host header being either foo.bar.com or bar.foo.com.

The paths section contains a list of paths; in this example, each Ingress rule defines a single path. Every path is associated with a backend.

The backend defines a service that should receive the traffic. HTTP(S) requests that match the rule’s host and path are directed to the specified backend.

Let’s take a look at a second Ingress resource and a visual representation of it. This time, the Ingress consists of three rules that all apply to the same host, example.com. The rules differ by path, which means that an incoming request with the host header set to example.com and a path of /s1 will be forwarded to service1:

(Figure: Ingress resource with three path-based rules)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-example
spec:
  rules:
  - host: "example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/s1"
        backend:
          service:
            name: service1
            port:
              number: 80
  - host: "example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/s2"
        backend:
          service:
            name: service2
            port:
              number: 80
  - host: "example.com"
    http:
      paths:
      - pathType: Prefix
        path: "/s3"
        backend:
          service:
            name: service3
            port:
              number: 80

But how to choose the correct method?

First, let’s have a look at the LoadBalancer service type, as it is the default method for directly exposing a service. All traffic on the specified port is forwarded to the service, with no filtering or routing. This means that nearly any type of traffic, such as HTTP, TCP, UDP, WebSockets, gRPC, and so on, can be sent to it.

The main disadvantage of the LoadBalancer service type is that each exposed service receives its own IP address, and you pay for a separate cloud load balancer per exposed service, which can become costly.

Moving on to Ingress: it is likely the most powerful way to expose services, but it can also be the most complex. There are several Ingress controllers available, including the Google Cloud Load Balancer controller, Nginx, Istio, and others. Additionally, there are companion tools for Ingress controllers, such as cert-manager, that can automatically provision SSL certificates for services.
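For example, with the Nginx Ingress controller and cert-manager installed, an Ingress can request a certificate automatically via an annotation. This is a sketch: the issuer name, host, and service name are assumptions for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-example
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # hypothetical ClusterIssuer
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-com-tls  # cert-manager stores the issued certificate here
  rules:
    - host: "app.example.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: app-service
                port:
                  number: 80
```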

Ingress is most beneficial when multiple services must be exposed under the same IP address, and when these services use the same L7 protocol (usually HTTP). If using the native GCP integration, only one load balancer needs to be paid for, and because Ingress is “intelligent,” many features are included out of the box (such as SSL termination, authentication, and routing).

Finally, let’s take a look at AWS Load Balancer Controller. As previously discussed, the LoadBalancer service type requires an external load balancer to operate, which is typically provided by a cloud provider.

The setup process for this varies depending on the cloud provider being used. For example, if you are using AWS, you will need to use the AWS Load Balancer Controller.

This controller is responsible for creating an ELB (Elastic Load Balancer) on AWS whenever a LoadBalancer service is created in your Kubernetes cluster.
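As a sketch, the controller can also be instructed via service annotations to provision a Network Load Balancer that targets pod IPs directly; the service name, label, and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nlb
  annotations:
    # Ask the AWS Load Balancer Controller to provision an NLB
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    # Register pod IPs directly as targets instead of node ports
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```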

(Figure: AWS Load Balancer Controller provisioning an ELB)

When a service of type LoadBalancer is created, the AWS Load Balancer Controller will communicate with AWS to ensure that the corresponding ELB is created.

If we delete a service of type LoadBalancer, ideally we would also want to delete the actual load balancer in the cloud, as it serves no purpose without the service.

The good news is that this is exactly what happens when the AWS Load Balancer Controller is deployed in your Kubernetes cluster. You will notice that all services of type LoadBalancer have a metadata section added to them that looks like this:

finalizers:
  - service.k8s.aws/resources

Likewise, all ingresses present in your cluster will also contain a finalizers section:

finalizers:
  - ingress.k8s.aws/resources

The finalizers mentioned above ensure that the associated cloud resources are not deleted until the AWS Load Balancer Controller completes its cleanup process. Although a finalizer’s name, such as service.k8s.aws/resources, is just a string to Kubernetes, it is understood by the load balancer controller.

If you add a finalizer with a string that is not recognized by any controller:

finalizers:
  - my.random.itgix.string/doesnotexist

then the resource will remain stuck in a Terminating state indefinitely. In such cases, you must remove the finalizer manually from the resource definition in order to delete it.
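One way to do that is a merge patch that empties the finalizers list. The snippet below is a sketch that only prints the command for review rather than running it; the service name and namespace are placeholders:

```shell
# Placeholders: replace with the stuck resource's name and namespace.
SVC=my-stuck-service
NS=default
# A merge patch that removes all finalizers from the resource.
PATCH='{"metadata":{"finalizers":null}}'
# Print the kubectl command instead of executing it, so it can be reviewed first.
echo "kubectl patch service $SVC -n $NS --type=merge -p '$PATCH'"
```

Once the finalizers list is empty, Kubernetes completes the pending deletion immediately.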

For more expert content, check out our blog page, and don’t forget to subscribe to our newsletter!
