Case Study

Driving Cloud Efficiency – How EKS Auto Mode Cut Our Client’s Cloud Costs by 37%

Victoria Dimitrova
DevOps and Cloud Engineer
14.04.2025
Reading time: 3 mins.
Last Updated: 15.04.2025


This case study explores how our team introduced AWS's latest Kubernetes offering, EKS Auto Mode, into a client's existing infrastructure. By transitioning from the standard EKS setup to EKS Auto Mode, we secured a significant cost reduction for our customer while improving the scalability of the environment. Below, we discuss the initial challenges and the benefits realized from this AWS-driven optimization.

Our client is revolutionizing the education industry by providing innovative and user-friendly technology solutions. Their high-traffic management platform (~2 million users) is built on a microservices-based architecture in Amazon EKS. Given the educational nature of the web-based service, traffic patterns are highly predictable. To optimize costs effectively, the client required the flexibility to scale resources down during periods of low activity. They also aimed to introduce Spot Instances for further infrastructure cost reduction while maintaining a highly scalable and reliable environment, which led us to seek a more adaptive resource management solution.

A typical Amazon EKS setup uses EC2 worker nodes provisioned through managed EKS Node Groups, which consist of Auto Scaling Groups (ASGs) and predefined Launch Templates. The primary limitation of this approach is that each node group requires fixed instance types defined upfront, and these cannot be modified after creation. This makes it difficult to dynamically manage diverse workloads, such as frequently switching between larger and smaller instance types. Additionally, incorporating spot instances becomes complicated, as handling instance availability, interruptions, and replacements demands creating multiple Node Groups and extensive manual management.
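To illustrate this limitation, the sketch below shows how a managed node group is typically created with boto3: the instance types are locked in at creation time, which is exactly the constraint described above. The cluster name, role ARN, and subnet IDs are placeholders rather than the client's actual values.

```python
import boto3

# Hypothetical names for illustration only; substitute your own cluster,
# node role ARN, and subnet IDs.
eks = boto3.client("eks", region_name="eu-west-1")

response = eks.create_nodegroup(
    clusterName="education-platform",
    nodegroupName="general-workloads",
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],
    # Instance types are fixed at creation time; changing them later means
    # creating (and draining workloads onto) a brand-new node group.
    instanceTypes=["m5.xlarge"],
    scalingConfig={"minSize": 2, "maxSize": 10, "desiredSize": 3},
    capacityType="ON_DEMAND",
)
print(response["nodegroup"]["status"])
```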

The Cluster Autoscaler can only add nodes from these node groups, limiting flexibility. It is challenging to right-size the infrastructure because the autoscaler lacks the freedom to choose the most fitting instance type for each scenario. In short, the cluster was not taking full advantage of the diverse EC2 instance offerings AWS provides (from memory-optimized to compute-optimized, or newer GPU and Graviton instances).

After exploring various alternatives, we decided to evaluate migrating from Kubernetes Cluster Autoscaler to Karpenter.

Karpenter is an open-source node provisioning tool (backed by AWS) developed specifically for Kubernetes workloads on AWS. Its purpose is to dynamically provision EC2 instances directly based on real-time pod resource requests. Unlike Cluster Autoscaler, Karpenter does not rely on static Launch Templates and ASGs. Instead, it intelligently selects the most suitable instance types from a broad range, enhancing flexibility and reducing overhead.
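To illustrate the idea (this is a simplified sketch, not Karpenter's actual implementation), the example below picks the cheapest instance type that can absorb the aggregate resource requests of pending pods. The instance catalogue and prices are invented sample values.

```python
# Simplified illustration of request-driven instance selection, in the
# spirit of Karpenter. Real Karpenter bin-packs pods, honours scheduling
# constraints, and consults live EC2 offerings; the figures below are
# made-up sample values.
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    vcpu: float
    memory_gib: float
    hourly_usd: float  # sample prices, not real quotes

CATALOGUE = [
    InstanceType("t3.medium", 2, 4, 0.05),
    InstanceType("m5.large", 2, 8, 0.10),
    InstanceType("m5.xlarge", 4, 16, 0.20),
    InstanceType("r5.xlarge", 4, 32, 0.30),
]

def pick_instance(pending_pods: list[dict]) -> InstanceType:
    """Choose the cheapest instance type that fits all pending requests."""
    need_cpu = sum(p["cpu"] for p in pending_pods)
    need_mem = sum(p["memory_gib"] for p in pending_pods)
    candidates = [
        i for i in CATALOGUE if i.vcpu >= need_cpu and i.memory_gib >= need_mem
    ]
    if not candidates:
        raise RuntimeError("no single instance type fits; split across nodes")
    return min(candidates, key=lambda i: i.hourly_usd)

# Example: three pods requesting 0.5 vCPU / 2 GiB each -> m5.large is chosen.
print(pick_instance([{"cpu": 0.5, "memory_gib": 2}] * 3))
```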

Around this time, AWS released EKS Auto Mode, prompting us to evaluate and ultimately bet on this innovative feature, which turned out to be a game-changer for Kubernetes management. It simplifies operational responsibilities by automating the provisioning, scaling, and management of Kubernetes worker nodes according to AWS best practices.
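As a rough sketch of what enabling the feature on an existing cluster can look like with boto3 (the cluster name and node role ARN are placeholders, and the exact required fields should be verified against the current EKS API documentation), Auto Mode is toggled through the cluster's compute, block-storage, and load-balancing configuration:

```python
import boto3

eks = boto3.client("eks", region_name="eu-west-1")

# Hypothetical cluster name and node role ARN. EKS Auto Mode expects the
# compute, block-storage, and elastic-load-balancing capabilities to be
# enabled together; check the current API reference for your EKS version.
response = eks.update_cluster_config(
    name="education-platform",
    computeConfig={
        "enabled": True,
        "nodePools": ["general-purpose", "system"],
        "nodeRoleArn": "arn:aws:iam::123456789012:role/eksAutoNodeRole",
    },
    storageConfig={"blockStorage": {"enabled": True}},
    kubernetesNetworkConfig={"elasticLoadBalancing": {"enabled": True}},
)
print(response["update"]["status"])
```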

Adopting EKS Auto Mode led to the following improvements and achievements:

  • Cost Optimization: By right-sizing instances and introducing spot capacity where applicable, we achieved a combined infrastructure cost reduction of around 37% across both prod and non-prod environments. Further cost reductions are expected as we continue this transition (a sketch of a spot-only node pool follows this list).
  • Automated Patching & Security Hardening, leading to a 40% reduction in operational effort: AWS takes care of regular updates and security patches for both the Kubernetes control plane and the worker nodes in EKS Auto Mode. The cluster's nodes run on immutable, AWS-managed AMIs (based on Bottlerocket OS) that enforce security best practices. This removes the manual effort of updating node OS versions or applying security patches. Even Kubernetes version upgrades and component updates are handled for you (honoring disruption budgets), so keeping the cluster up to date requires minimal intervention.
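As referenced in the cost-optimization point above, a custom node pool restricted to spot capacity might look roughly like the sketch below, here registered with the official Kubernetes Python client. The pool name and the exact NodeClass reference are assumptions to adapt to your cluster; EKS Auto Mode node pools follow the Karpenter NodePool API, so verify the schema against the version running in your cluster before applying.

```python
# Sketch of registering a spot-only NodePool via the Kubernetes API using
# the official Python client. Field names follow the Karpenter v1 NodePool
# schema as used by EKS Auto Mode; the pool name is a placeholder.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

spot_nodepool = {
    "apiVersion": "karpenter.sh/v1",
    "kind": "NodePool",
    "metadata": {"name": "spot-workloads"},
    "spec": {
        "template": {
            "spec": {
                # EKS Auto Mode ships a built-in "default" NodeClass.
                "nodeClassRef": {
                    "group": "eks.amazonaws.com",
                    "kind": "NodeClass",
                    "name": "default",
                },
                # Restrict this pool to spot capacity only.
                "requirements": [
                    {
                        "key": "karpenter.sh/capacity-type",
                        "operator": "In",
                        "values": ["spot"],
                    }
                ],
            }
        }
    },
}

api.create_cluster_custom_object(
    group="karpenter.sh", version="v1", plural="nodepools", body=spot_nodepool
)
```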

ITGix is an accredited Advanced Tier and Well-Architected Partner of AWS, delivering cutting-edge cloud solutions tailored for innovation, scalability, and security. Through our partnership with AWS, we empower businesses to modernize their infrastructure, accelerate cloud adoption, and optimize performance with a wide range of DevOps, managed services, and cloud-native technologies. From seamless cloud migrations to proactive monitoring and infrastructure automation, we ensure high availability and compliance across every environment. Backed by certified expertise and a customer-first approach, we are the trusted partner for enterprises looking to unlock the full potential of AWS.
