Blog

  • Infrastructure as Code with Terraform – From declarative language and templates to a simple programming language

    Infrastructure as Code with Terraform – From declarative language and templates to a simple programming language

    Our passion at ITGix is programming and working with cloud services and tools. Across our diverse set of projects, our motivation and experience are inspired by technologies like Docker, Kubernetes, Icinga, Ansible, Chef, SaltStack, GitLab, Jenkins, AWS, Google Cloud and many more tools and services. Recently I got keen on Infrastructure as Code, and for that purpose we are using Terraform. Terraform is a versatile tool for automating and codifying your infrastructure and for deploying resources across most of the major cloud service providers. It uses a declarative language developed by HashiCorp, called HCL (HashiCorp Configuration Language), which many people consider a limited and not particularly innovative programming language. Personally, I don't share that opinion; however, I agree that conditional logic in Terraform can be a bit tricky, and certain types of tasks become very difficult without access to a full programming language.
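
    To illustrate the kind of workaround this leads to, conditional resource creation in HCL is usually emulated with a ternary expression on count. This is a minimal sketch, where the variable name, resource name and AMI id are purely illustrative (and the exact interpolation syntax varies between Terraform versions):

    ```hcl
    # HCL has no if-statement; a ternary on "count" emulates one.
    # "enable_bastion" and the AMI id are illustrative placeholders.
    variable "enable_bastion" {
      default = false
    }

    resource "aws_instance" "bastion" {
      count         = "${var.enable_bastion ? 1 : 0}"
      ami           = "ami-12345678"
      instance_type = "t2.micro"
    }
    ```

    When the flag is false, the resource list is empty and nothing is created — a stand-in for the if/else that a full programming language would give you.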

    READ MORE
  • Performance tuning of a fully automated AWS environment started only on schedule

    Performance tuning of a fully automated AWS environment started only on schedule

    We were contacted to performance-tune a huge AWS environment. It hosts the server side of a mobile application for a TV show that lets viewers vote on various questions during the show. Since the show airs only once per week, it is a perfect use case for an on-demand cloud environment that is brought up only for the show; after it ends, all VMs are shut down or destroyed. In this way costs are cut to their minimum, with no need to keep expensive physical servers running at all times in a private data center. You can imagine the load: the full audience, usually between 200,000 and 300,000 viewers, needs to vote in parallel within one minute on a given question. That can be a serious challenge for the system.

    READ MORE
  • High availability (Multi-master) Kubernetes cluster hosted on AWS

    High availability (Multi-master) Kubernetes cluster hosted on AWS

    This is the first post of a mini-series dedicated to running Kubernetes hosted on AWS. It covers the considerations we make when proposing a production-ready, enterprise-grade Kubernetes environment to our clients; I will get more technical, with the tools and AWS services we use, in the next blog post. Here I will try to cover which problems we are solving.

    High availability is a characteristic we want our system to have: we aim to ensure an agreed level of operational performance (uptime) for a higher-than-normal period. These are the principles we follow in the system design:
    - Elimination of single points of failure. This means adding redundancy to the system so that failure of a component does not mean failure of the entire system.
    - Reliable crossover. In redundant systems, the crossover point itself tends to become a single point of failure. Reliable systems must provide for reliable crossover.
    - Detection of failures as they occur. If the two principles above are observed, a user may never see a failure — but the maintenance activity must.

    The graph below shows the Kubernetes master components used for setting up a cluster. We will go through them one by one:

    READ MORE
  • SaltStack - Configuration Management and Remote Execution

    SaltStack - Configuration Management and Remote Execution

    What is the purpose of a tool like SaltStack and, a better question, what problem does it solve? The two main purposes of SaltStack are configuration management and remote execution. You have probably heard of, or used, one of the more popular alternatives to SaltStack: Ansible, Puppet or Chef. All of them accomplish pretty much the same goal. I like Salt in particular because it is written in Python and is relatively lightweight. It uses ZeroMQ as its communication layer, which makes it really fast, and it uses PyYAML for its configuration management recipes (called states). In a nutshell, if you manage any number of servers and you need to do something on them, you would have to log in to each one at a time and do your task. Even if that task is a small one, like restarting an instance or checking its uptime, or a larger one like installing and configuring something, you would still have to do it one server at a time. If you manage a lot of servers, that means a lot of manual work. This is where SaltStack comes in, automating your work and providing the ability to remotely execute commands on any number of machines. Salt works either in a master/minion setup, where you have a master node from which you execute commands on the minion nodes, or using salt-ssh, which is pretty much what it sounds like: it lets you execute anything you normally would on a configured minion, on any machine over SSH, whether or not Salt is installed there.
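
    To give a flavour of the states mentioned above, here is a minimal one. The package and file path are our illustrative choices, not taken from a real deployment:

    ```yaml
    # /srv/salt/nginx/init.sls - install nginx and keep the service running
    nginx:
      pkg.installed: []
      service.running:
        - enable: True
        - require:
          - pkg: nginx
    ```

    Applied from the master with "salt '*' state.apply nginx", this enforces the same state on every targeted minion at once, instead of one SSH session per server.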

    READ MORE
  • Introduction to Docker Trusted Registry

    Introduction to Docker Trusted Registry

    Since you are here, you have probably heard of Docker. When you search for it in Google, the first result is: "Docker - Build, Ship, and Run Any App, Anywhere" - don't believe me, try it yourself. That sounds great, but in terms of privacy and protecting your intellectual property, it doesn't. This is because of the question "Ship to where?" - to Docker Hub, where the whole world can do a simple pull and have all your work at its disposal? In this blog post I will introduce you to the Docker Trusted Registry and its benefits. It is a registry service that you can run on-premise or in a virtual private cloud, where it is safe behind your company firewall. From there it is easy to store and manage your Docker images, which are the building blocks of your application stack. Trusted Registry is available in conjunction with a commercially supported Docker Engine to provide the peace of mind and support needed for your application environment. It is part of the Docker Datacenter subscription, which also includes the Universal Control Plane. The Docker Trusted Registry is easy to install and integrate with your existing infrastructure.

    READ MORE
  • Installing PeerVPN with Ansible

    Installing PeerVPN with Ansible

    In addition to the article about PeerVPN installation and configuration, I will now show you a more advanced and quite 'modern' way to provision several servers and get your VPN client up really fast. You have probably heard of Ansible already. Well, one of its use cases is exactly what we need here: configuration management. Many of us have experienced The Headache of having to install, configure and then administer a whole environment - repeating the same steps on hundreds of servers with different OS distributions, application versions and all kinds of dependencies, which inevitably leads to problems. Ansible is here to help you with all of that. You can choose, set and customize anything a specific environment requires to suit its needs. So let us start with an introduction to Ansible, its structure and components. In my opinion there are two approaches when you first start with Ansible. The first is to read the official introduction to Ansible, which explains a lot about its structure, and then start with a simple playbook that you later extend into a role. The second is to make use of Ansible Galaxy, which offers a lot of community-provided roles open for use. Not every role is as scalable and flexible as you might want, so you can simply combine both approaches: take an already built role and extend its functionality. If you learn all that quickly and it gets boring, you can start building your own Ansible modules.
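
    As a teaser of what such a playbook looks like, here is a minimal sketch. The hosts group, package and template names are illustrative placeholders, not the actual PeerVPN role:

    ```yaml
    # peervpn.yml - illustrative sketch, not the real role
    - hosts: vpn_nodes
      become: yes
      tasks:
        - name: Install a PeerVPN dependency
          package:
            name: libssl-dev
            state: present
        - name: Deploy the PeerVPN configuration
          template:
            src: peervpn.conf.j2
            dest: /etc/peervpn/peervpn.conf
          notify: restart peervpn
      handlers:
        - name: restart peervpn
          service:
            name: peervpn
            state: restarted
    ```

    Run with "ansible-playbook -i inventory peervpn.yml", the same play configures every server in the group in one go, regardless of how many there are.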

    READ MORE
  • Levitation in Virtual world or how to convert Xen images to KVM

    Levitation in Virtual world or how to convert Xen images to KVM

    The rule of the "cloud" has already been established, and now we have multiple vendors fighting for market share. Many companies have started relying on the cloud, seeking more and more automation and cloud services. It looks easy: you just select a cloud provider, use its services, and you have several virtual machines or containers within minutes. Sounds like magic? Well, here is the question you are probably asking yourself: what is behind it all? The answer is good old virtualization, strong APIs and scripting. I guess most of you who are already familiar with virtualization to some extent, and have an affinity for working with open-source technologies, have used Xen. It was the very first in the open-source world, that is for sure. Its first release was in 2002, and it definitely became one of the dominant virtualization solutions in the open-source world. If we take a look at major vendors like Oracle, for example, we will see that behind OVM is again Xen. At the same time, a lot of companies started using another solution you might have heard of - Citrix - and the company that created it became very well known.

    READ MORE
  • OpenStack NFS Backend Causing Total Hangs

    OpenStack NFS Backend Causing Total Hangs

    I'm not a big NFS fan, ever since I worked as a Linux/Unix administrator back in the good old days. Sometimes, when the NFS server hung, lost network connectivity or something else happened, all clients that had mounts from that server blocked completely, waiting for it to come back up, because NFS sits so deep in the kernel. All commands, even "ls", froze, and the only cure was forcibly rebooting the clients to get them back online. Neat, eh? When NFS v4.1 emerged back in 2010, hopes were that it would fix everything. I was a bit sceptical but decided to give it a shot, and indeed many fixes in the protocol and implementation enhanced its stability. Among them: blocking locks that allow the client to ping the server to check whether a lock has been released, rather than only waiting for notifications; a timeout for an unavailable server; and parallel access. From what I saw, I couldn't really break it beyond repair. As time went by, OpenStack offered the option of NFS as a storage backend. We decided to use it for one deployment where we saw this technology as appropriate, because we didn't need highly available storage with replication that occupies twice the space, but we did need Cinder volumes to be mountable across the hypervisors. I had a feeling that something could go wrong during the installation, because I remembered all those nights rebooting servers over iLO/IPMI.

    READ MORE
  • Backing up your virtual machines in OpenStack

    Backing up your virtual machines in OpenStack

    Backup is an essential part of IT infrastructure management. Having HA solutions, RAID arrays, etc. doesn't free you from the need for backup: in case of human error, none of those techniques will save you - only the backup will. However, as the saying goes, "your backups are only as good as your restores", so we have to think about regularly checking our backups for consistency. In OpenStack it is highly recommended to use Cinder as the main storage provider. Cinder gives you the ability to create block volumes and attach them to your virtual machines. Best practice is to keep all your application data on volumes rather than on the instance disk; the instance disk should hold only the operating system files that come from the OS image (of course, packages installed from repositories will also go there). In this article we will show you more reasons to do so. What you typically want from a good backup solution is: online backup capability, easy restores, consistency, easy management, and using as little space as possible. Although it is possible to install a traditional backup solution on every virtual machine, OpenStack offers other options for backing up our data using snapshots. The downside is that you cannot have an "incremental" snapshot copy yet: you have to store the full size of your snapshots every time you back up. However, the simplicity of backups and, more importantly, of restores far outweighs supporting an "in-VM" backup solution with incremental backups.

    READ MORE
  • Configuring multiple block storage backends in OpenStack Cinder

    Configuring multiple block storage backends in OpenStack Cinder

    If you're an administrator of virtualized environments, you have definitely run into I/O performance issues; I/O is the first bottleneck one hits. Luckily, persistent storage has evolved throughout the years, and lately we see high-performance SSDs at a reasonable price - though still too expensive for organizations to migrate fully to SSD. Hybrid environments are becoming more and more popular, as they combine the low cost of traditional HDDs with the high performance of solid-state drives. One of the key features of Cinder, the block storage component of OpenStack, is the flexibility to have more than one storage backend on our storage node. This lets us separate I/O-heavy applications from the more compute-oriented ones that are heavier on CPU usage. A typical example is to configure a database VM to run from an SSD-backed volume, while the application server cluster sits on normal storage, since it is read-heavy only during startup. Here is how to achieve that with OpenStack and Cinder.
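
    A sketch of what this looks like in /etc/cinder/cinder.conf - the backend names and volume groups are illustrative, and the exact LVM driver path can vary between OpenStack releases:

    ```ini
    [DEFAULT]
    enabled_backends = lvm-ssd,lvm-hdd

    [lvm-ssd]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes-ssd
    volume_backend_name = SSD

    [lvm-hdd]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes-hdd
    volume_backend_name = HDD
    ```

    Volume types are then mapped to a backend via the volume_backend_name extra spec, so users simply pick an "ssd" or "hdd" type when creating a volume and the scheduler places it accordingly.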

    READ MORE
  • Containerization with Docker

    Containerization with Docker

    INTRODUCTION TO DOCKER

    If you have been following the "cloud" trends, you have probably heard of Docker. It is an open-source implementation of LXC (Linux Containers) used for packaging an application and its dependencies into a container that can be deployed and replaced easily. Containerization in Docker is achieved via resource isolation (cgroups), kernel namespaces (isolating the application's view of the OS, process trees, etc.) and a union-capable file system (such as aufs - mounting multiple directories into one that appears to contain their combined contents). Using containers removes the overhead of having to create, deploy and maintain full VMs for running your applications, while also providing completely identical PROD, staging, QA and DEV environments. In some cases you can even move a container from one server to another, making it ideal for spinning up a quick instance of your PROD environment on a separate server to run a quick test without touching the actual PROD environment.
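
    As a quick illustration of that packaging, a minimal Dockerfile bundles an application together with its dependencies. The Python app and file names here are purely illustrative:

    ```dockerfile
    # Build a self-contained image for a small Python app
    FROM python:3-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    COPY . .
    CMD ["python", "app.py"]
    ```

    "docker build -t myapp ." produces one image that runs identically on any Docker host with "docker run myapp" - the same artifact in DEV and PROD.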

    READ MORE
  • ITGix is starting its own DevOps platform!

    ITGix is starting its own DevOps platform!

    ITGix is proud to announce the start of our innovative DevOps Platform. It aims to enable companies to integrate the latest DevOps practices into their environments, while keeping everything well organized and automated.

    READ MORE