Our client, a leading payment software solutions provider for a major automobile company in Germany, approached us to build an automated integration test environment. The environment had to be provisioned automatically on any of the public clouds as well as on the customer’s private cloud. The goal was to replicate, in a fully automated way, the functionality already present on the less security-restricted public cloud: deploy multiple microservices that work together, run validation tests, and gather logs and metrics. The validation is considered successful once every Python and Selenium QA test has finished without failing; the temporary environment/namespace is then torn down. Code that passes the test pipeline is tagged as a release candidate and transferred to the client-hosted Docker registry in the form of Docker images. Despite the tight deadlines, the project had to follow DevOps best practices and adhere to the stringent regulatory requirements of the client’s industry.
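The per-run lifecycle described above can be sketched as a handful of pipeline steps. The names below (namespace prefix, registry host, chart path, the `qa-runner` pod) are illustrative assumptions rather than the client’s actual values, and the commands are printed rather than executed:

```shell
#!/usr/bin/env bash
# Sketch of one integration-test run: deploy, validate, tear down, promote.
# All names are hypothetical; real values come from the Jenkins job parameters.
set -euo pipefail

RUN_ID="${RUN_ID:-42}"              # supplied by the wrapper job in practice
NS="itest-${RUN_ID}"                # one temporary namespace per run
TAG="rc-${RUN_ID}"                  # release-candidate tag
REGISTRY="registry.example.com"     # hypothetical client-hosted Docker registry

deploy()   { echo "helm install env-${RUN_ID} ./charts/integration -n ${NS} --create-namespace"; }
validate() { echo "kubectl -n ${NS} exec qa-runner -- pytest --junitxml=report.xml"; }
teardown() { echo "helm uninstall env-${RUN_ID} -n ${NS} && kubectl delete ns ${NS}"; }
promote()  { echo "docker tag app:${RUN_ID} ${REGISTRY}/app:${TAG} && docker push ${REGISTRY}/app:${TAG}"; }

deploy; validate; teardown; promote   # printed, not executed, in this sketch
```

In the real pipeline each step is a Jenkins stage, and promotion to the registry happens only if validation exits cleanly.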
THE CHALLENGE
The project presented a few noteworthy challenges that required our team’s expertise and creativity. Firstly, obtaining access to the hybrid-cloud Kubernetes cluster was a significant challenge due to the stringent security requirements in the financial and fintech industries. Our team worked closely with the client to establish trust and navigate the necessary security approvals to gain access to the cluster.
Secondly, we faced the task of quickly building a Kubernetes cluster that could facilitate the automatic provisioning of new integration test environments. Our solution leveraged the extensive knowledge base and resources available on the AWS platform to construct a Kubernetes cluster that could run multiple microservices, gather metrics, and conduct validation tests. This approach allowed us to meet the tight deadline and provide the client with the high-performance environment they needed.
Overall, the unique challenges of this project highlighted our team’s ability to overcome obstacles and devise innovative solutions that meet the complex requirements and regulations of the financial and fintech industries.
THE SOLUTION
→ Kubernetes initialization
The goal was a fail-proof environment that needs no constant maintenance and works without hiccups.
To achieve this, we combined the strengths of Kubernetes, Jenkins, and AWS services. Starting from the Linux distribution, we built our own AMI based on the client’s security and versioning requirements. This gave us the ability to spawn a production-ready Kubernetes cluster running in its own VPC with private subnets, domains, and ELBs.
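As a sketch, spawning such a private-topology cluster from a custom AMI could look like the following kops invocation. The cluster name, state bucket, zones, and AMI ID are placeholders; the real setup followed the client’s naming and security conventions:

```shell
#!/usr/bin/env bash
# Hypothetical kops command for a private-topology cluster built from a custom AMI.
set -euo pipefail

CLUSTER="itest.k8s.example.com"     # placeholder cluster DNS name
STATE="s3://example-kops-state"     # placeholder kops state-store bucket
AMI="ami-0123456789abcdef0"         # custom hardened AMI (placeholder ID)

KOPS_CMD="kops create cluster ${CLUSTER} --state ${STATE} --zones eu-central-1a,eu-central-1b --topology private --networking calico --image ${AMI} --api-loadbalancer-type internal"
echo "${KOPS_CMD}"    # printed here; 'kops update cluster --yes' would apply it
```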
→ Jenkins server
Jenkins is our product of choice when it comes to automating deployment processes and streamlining them into pipelines. It handles all of the steps necessary for the environment to be created, tested, and then torn down with no need for human intervention.
→ Helm utilization
Having Helm (helm.sh) at our disposal makes everything much easier. First, we use it to bring up the Kubernetes infrastructure components, such as:
• Prometheus;
• Grafana;
• Alertmanager;
• Elasticsearch;
• Fluentd;
• Kibana;
• Ingress controller;
• Quay-enterprise (Docker registry).
Then we use Helm to template the integration test environment and later customize it as needed from the pipelines. Having every chart stored in the Git repo brings even more flexibility and control over the expected outcome.
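A minimal sketch of how a pipeline run might render the templated environment with run-specific overrides; the chart path, release name, and values keys below are assumptions, and the command is printed rather than executed:

```shell
#!/usr/bin/env bash
# Hypothetical per-run Helm customization of the integration test environment.
set -euo pipefail

NS="itest-17"       # temporary namespace for this run (placeholder)
TAG="rc-17"         # image tag under test (placeholder)

HELM_CMD="helm upgrade --install itest-17 ./charts/integration-env --namespace ${NS} --create-namespace --set image.tag=${TAG}"
echo "${HELM_CMD}"  # printed, not executed, in this sketch
```

Because the chart lives in Git, the same `--set` overrides can be replayed or pinned to a chart version for a reproducible environment.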
→ Pipeline concept
We chose a chain of Jenkins jobs as our automation approach. Instead of separate jobs for deploying each application, we had a single “deploy” job that took variables as input from the primary wrapper job and worked for every application. One of the many challenges was to make all jobs and environments able to run in parallel, so that the only limit on parallel testing would be the Kubernetes resources actually available. All jobs had to be designed with scalability in mind. In the end, we had two ways to launch integration tests: via a Jenkins URL call, which allows multiple tests to run in parallel, or through an ECR scanner job that permits only one integration test at a time. We used Helm as our Kubernetes provisioning tool; it carries nice versioning and is easily manageable in a repository.
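The Jenkins URL-call approach relies on Jenkins’ standard remote-trigger API for parameterized jobs. The host, job name, and parameter below are hypothetical; the actual call (commented out) also needs an API token:

```shell
#!/usr/bin/env bash
# Triggering the parameterized wrapper job remotely (Jenkins buildWithParameters API).
set -euo pipefail

JENKINS_URL="https://jenkins.example.com"   # placeholder Jenkins host
JOB="integration-test-wrapper"              # placeholder job name
APP="payment-gateway"                       # placeholder parameter value

TRIGGER_URL="${JENKINS_URL}/job/${JOB}/buildWithParameters?APP=${APP}"
echo "${TRIGGER_URL}"
# Real call (requires credentials):
# curl -X POST --user "${JENKINS_USER}:${JENKINS_TOKEN}" "${TRIGGER_URL}"
```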
THE CONCLUSION
We successfully implemented an automated build and deploy environment using AWS and Kubernetes that met all of the client’s requirements and adhered to DevOps best practices. We overcame several challenges, including obtaining the necessary security approvals and building the Kubernetes cluster within tight deadlines. Through our expertise in Kubernetes, Jenkins, and AWS services, we created a fail-proof environment that is fully automated and requires no human intervention. Our use of Helm allowed for greater flexibility and control over the expected outcome, and our pipeline concept enabled parallel testing and scalability. In the end, we delivered a Kubernetes environment with a Jenkins configuration that can easily be deployed on any private or public cloud and work out of the box, while also ensuring compliance with industry regulations in the financial and fintech sectors.