How to Set Up Jenkins on a Kubernetes Cluster

22.07.2022
Reading time: 6 mins.
Last Updated: 08.01.2024

Introduction to Jenkins in Kubernetes

There comes a time in every DevOps adventurer’s life when they have to face the good ol’ Jenkins in all its glory: the Jenkins VM that started with the best intentions but, with all those legacy pipelines, ended up a true embodiment of the Jenkinstein meme. And, of course, the long-forgotten Jenkins instances that keep running but that everyone is afraid to touch, because it’s very hard to tell what a small change in a pipeline may lead to.

Jokes aside, Jenkins is a great tool. Many other CI/CD tools have appeared in the last few years, but I still haven’t encountered a case where Jenkins falls short functionality-wise. This is largely because Jenkins is modular and a big part of its functionality comes from first- or third-party plugins. The plugin we’ll focus on in this blog is the Jenkins Kubernetes plugin.

PS: If a plugin’s function can be achieved with a few lines of code (e.g. an API call), it’s better to write it yourself. Keeping Jenkins as lean as possible makes it easier to update and less prone to vulnerabilities.
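For example, a chat notification that would otherwise pull in a whole plugin can often be a couple of lines in a shared library. This is only a sketch; the webhook URL is a made-up placeholder:

```groovy
// Sketch only: post a message to a chat webhook with plain curl instead of
// installing a notification plugin. The URL below is a placeholder.
def notifyChat(String message) {
  sh """
    curl -s -X POST -H 'Content-Type: application/json' \\
      -d '{"text": "${message}"}' \\
      'https://chat.example.com/hooks/placeholder-webhook-id'
  """
}
```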

The Jenkins-Kubernetes plugin

This plugin allows you to seamlessly integrate the regular Jenkins that you know and love into Kubernetes. It hooks Jenkins into the Kubernetes API so that a new pod is spawned for every job, and the worker pods live only for the duration of the pipeline execution. That saves a lot of the computing resources that regular Jenkins agents would historically spend sitting idle until needed. An additional benefit of dynamic workers is that they can scale up and down in count as much as necessary; however, I’d advise limiting the number based on the available resources in the cluster.

Let’s split the focus into a few main topics:

  • Jenkins Master installation;
  • Setting up your Jenkins worker pod template;
  • Useful pipeline tips for running in a Kubernetes worker.

Setting Up Jenkins on a Kubernetes Cluster

1. Install Jenkins on Kubernetes: Jenkins Master Installation

There are pre-packaged Docker images for Jenkins that can be used. In the past, we’ve installed Jenkins in k8s by creating a custom Helm chart with custom additions to the Jenkins Docker image, but the year is 2022 and no one has time to redo what’s already been done by many. The official Jenkins Helm chart is open source, so I’d advise starting with it.
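Assuming Helm is installed and your kubeconfig already points at the target cluster, the installation boils down to roughly the following (namespace and release names are just examples):

```shell
# Add the official Jenkins chart repository and install the chart
# into a dedicated namespace, with customizations in values.yaml.
helm repo add jenkins https://charts.jenkins.io
helm repo update
helm install jenkins jenkins/jenkins \
  --namespace jenkins --create-namespace \
  -f values.yaml
```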

The chart is self-explanatory, although I wouldn’t rely on its backup function. By default it copies files sequentially; if its parallelism parameter is used, it eventually runs out of memory, and when you have north of a million files to back up it can run for longer than 24 hours, which is not at all useful.

As with other Java apps, set Xms and Xmx according to your use case and the resources given to the pod.
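In the chart this is done through the controller’s Java options. The key names below assume the current layout of the official jenkins/jenkins chart, so double-check them against your chart version, and treat the numbers as illustrative only:

```yaml
# values.yaml fragment (key names assume the official jenkins/jenkins chart)
controller:
  # Heap sized below the container memory limit to leave headroom
  # for metaspace, threads, and off-heap allocations.
  javaOpts: "-Xms2g -Xmx2g"
  resources:
    requests:
      cpu: "1"
      memory: "3Gi"
    limits:
      cpu: "2"
      memory: "3Gi"
```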

The chart comes with the Jenkins “configuration-as-code” plugin built in. This allows the instance to initialize with a list of specified plugins and settings without having to write custom Groovy init scripts.

I’d advise playing around with a few installations, but eventually setting the “initializeOnce” variable to true. This will ensure that the init container doesn’t overwrite or download new plugins, which may lead to broken dependencies.
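A values fragment for that part might look like the following. Again, the key names assume the official jenkins/jenkins chart, and the plugin list is only an example:

```yaml
# values.yaml fragment (key names assume the official jenkins/jenkins chart)
controller:
  # After the initial experiments, pin the setup so the init container
  # doesn't overwrite or re-download plugins on every restart.
  initializeOnce: true
  installPlugins:
    - kubernetes:latest            # pin exact versions in practice
    - workflow-aggregator:latest
    - git:latest
    - configuration-as-code:latest
```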

2. Setting up your Jenkins worker pod template

Setting up the worker pod may be frustrating initially, especially if you try to use a file for your Pod YAML configuration. The two ways that I’d recommend are the following:

If you’re using a shared pipeline library (recommended), you can specify the pod YAML as a libraryResource, like so:

  pipeline {
    agent {
      kubernetes {
          defaultContainer 'jnlp'
          yaml libraryResource("podTemplates/default.yaml")
      }
    }
    ...
  }

This will look for a file in your library located under resources/podTemplates/default.yaml. It’s the most convenient approach and can even be parameterized.

More on Jenkins pipeline libraries, next time I find time to write a blog, no more than 5 years from now.

If you’re not using a pipeline library, you can set up the Pod YAML as a multiline string in your pipeline, like so:

  pipeline {
    agent {
      kubernetes {
        defaultContainer 'jnlp'
        yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: jenkins-worker
spec:
  containers:
  - name: jenkins-builder
  ...
"""
      }
    }
  }

This achieves the same end result but is a lot uglier, and the YAML lives directly in your Jenkinsfile.

Notable things about the worker pod:

  • There will always be a JNLP container running in your worker pod. It doesn’t need to be specified in the pod template and it’s used mainly for establishing the connection to the Jenkins master.
  • The different containers in the pod also share a workspace volume by default, which is great, because you don’t need to pass built artifacts between different steps of the pipeline.
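The shared workspace means that whatever one container produces is immediately visible to the next. A rough sketch (the container names and commands assume they exist in your pod template and project):

```groovy
stage('Build and package') {
  steps {
    container('node15') {
      // Writes build output to ./dist inside the shared workspace.
      sh 'npm ci && npm run build'
    }
    container('docker') {
      // The same workspace is mounted here, so ./dist is already present.
      sh 'ls dist/'
    }
  }
}
```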

How to set up Docker-in-Docker in your pod template

Here’s an example of how to set up Docker-in-Docker in your pod template. The approach is very similar in other pipeline tools: a “dind” container runs alongside the others, and the containers that need it are pointed to it via the DOCKER_HOST environment variable.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: jenkins-worker
spec:
  containers:
  - name: python
    image: python:3.8.12-alpine3.13
    command:
    - cat
    tty: true
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375

  - name: docker
    image: docker:19.03.0-dind
    tty: true
    env:
    - name: DOCKER_TLS_CERTDIR
      value: ""
    securityContext:
      privileged: true
    volumeMounts:
      - name: dind-cache
        mountPath: /var/lib/docker
  volumes:
  - name: dind-cache
    emptyDir: {}
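With the template above, a stage that builds an image from inside the python container might look like the sketch below. Note that this assumes the docker CLI is available in that container image (the stock python image doesn’t ship it), and the registry path is a placeholder:

```groovy
stage('Build image') {
  steps {
    container('python') {
      // DOCKER_HOST points at the dind sidecar, so the docker CLI here
      // talks to the daemon listening on localhost:2375.
      sh 'docker build -t registry.example.com/my-app:latest .'
    }
  }
}
```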

3. Deploying and Scaling Jenkins on Kubernetes: Useful pipeline tips for running in a Kubernetes cluster

As already stated above, the main configuration you need is the podTemplate for the worker, defined in the Kubernetes agent config.

To build different programming languages, or different versions of the same one, you can specify several containers within the pod and choose which one to build in depending on the app. For example:

script {
  if (env.APP_NAME == "a-node10-app") {
    container("node10") {
      funcs.codeBuild() // <-- This is a function from a shared library
    }
  } else if (env.APP_NAME == "another-node15-app") {
    container("node15") {
      funcs.codeBuild() // <-- This is a function from a shared library
    }
  }
}

The above assumes that you have containers specified with the names node10 and node15.


A fully functioning backup job to replace the Helm chart functionality

As I mentioned, the backup functionality provided by the chart is not great, if it works at all, so here’s an example of how you can back up your Jenkins pod with a Jenkins job. The example is from a pipeline library.

This is the function that we’re calling from the pipeline.

src/org/itgix/Utilities.groovy

package org.itgix

def jenkinsBackup() {
  sh "aws eks update-kubeconfig --name {cluster_name}"
  // Note: $(...) is escaped as \$(...) so that the shell, not Groovy,
  // expands the date; an unescaped $( won't compile in a GString.
  sh """
  kubectl exec -i -n jenkins jenkins-0 -- /bin/bash -c "cd /var/ && tar --exclude='*/backups' --exclude='*/workspace/*' --exclude='*/branches/*' --exclude='*/fingerprints' --exclude='*/org.jenkinsci.plugins.github_branch_source.GitHubSCMProbe.cache' -zcvf /var/jenkins_home/backups/jenkins_home_\$(date +%F).tar.gz jenkins_home/"
  """
  sh "kubectl cp --retries=-1 jenkins/jenkins-0:/var/jenkins_home/backups/jenkins_home_\$(date +%F).tar.gz ${WORKSPACE}/jenkins_home_\$(date +%F).tar.gz"
  sh "ls -lh"
  sh "aws s3 cp ${WORKSPACE}/jenkins_home_\$(date +%F).tar.gz s3://${JENKINS_BACKUPS_BUCKET}/backups/\$(date +%F)/"
}
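Restoring is essentially the same steps in reverse. A rough sketch, where the bucket name matches the example above and the date is a placeholder for the backup you want:

```shell
# Download a chosen backup archive and unpack it back into the Jenkins pod.
aws s3 cp s3://an-s3-used-for-jenkins-backups/backups/2024-01-08/jenkins_home_2024-01-08.tar.gz .
kubectl cp jenkins_home_2024-01-08.tar.gz jenkins/jenkins-0:/var/jenkins_home/restore.tar.gz
kubectl exec -i -n jenkins jenkins-0 -- /bin/bash -c \
  "cd /var && tar -zxvf /var/jenkins_home/restore.tar.gz"
```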

Here’s the “Jenkinsfile”, or pipeline config, that calls the pipeline below:

@Library('jenkins-shared-lib') _

JenkinsBackupJob {
    runnerTemplate = "default.yaml"
}

Here’s the pipeline for it.

vars/JenkinsBackupJob.groovy

def call(body) {
  def pipelineParams = [:]
  body.resolveStrategy = Closure.DELEGATE_FIRST
  body.delegate = pipelineParams
  body()
  def funcs = new org.itgix.Utilities()

  pipeline {
    agent {
      kubernetes {
          defaultContainer 'jnlp'
          yaml libraryResource("podTemplates/${pipelineParams.runnerTemplate}")
      }
    }
    environment {
      AWS_ACCOUNT_ID = "123456789000"
      AWS_DEFAULT_REGION = "eu-central-1"
      JENKINS_BACKUPS_BUCKET = "an-s3-used-for-jenkins-backups"
    }
    stages {
      stage("Backup Jenkins Home") {
        steps {
          container('build') {
            script {
              // Credentials that need to have access to the cluster and S3
              withCredentials([usernamePassword(credentialsId: 'aws-jenkins-ci-user', 
                                                passwordVariable: 'AWS_SECRET_ACCESS_KEY', 
                                                usernameVariable: 'AWS_ACCESS_KEY_ID')]) {

                  funcs.jenkinsBackup()

              } // withCredentials
            } // script
          } // container
        } // steps
      } // stage
    } //stages
    post {
      failure {
        container('python') {
          script {
            env.BUILD_STAT = "FAILURE"
            env.JENKINS_ENV = "ITGix-Jenkins"
            env.MAIL_LIST = "person@itgix.com,person2@itgix.com"
            env.CHANGESET = funcs.changeLogHtmlTable(currentBuild.changeSets) // <-- Another of our self-built functions, utilizing a Python script
            funcs.notifyEmail()
          } //script
        } // container
      } // failure
    } // post
  } // pipeline
} // body

Conclusion

In conclusion, Jenkins is still viable even if you’re using a pipeline-capable Git provider like GitLab or GitHub. Jenkins’s strengths are most visible in the edge cases where you’d otherwise need a lot of workarounds to achieve what the development process requires. It handles challenging requirements fairly easily, because the pipeline logic can be extended with the Groovy programming language.

I’d recommend using it for such edge cases, or after you’ve built a robust pipeline library that can be reused across projects.
