Jenkins Kubernetes Plugin: Using the plugin in your pipelines
Joshua Zangari & Thom Duran | June 21, 2021

How we run agents with different containers across separate clusters and regions from our pipeline code. See part one of this tutorial here.

In our first post we went over setting up the Kubernetes Plugin: the basic installation, configuration, and the permissions it needs to function. In this post we will go over how to leverage the plugin to generate agent pods.

At Moogsoft most of our pipelines are scripted and are built inside of, or from parts of, a Jenkins Shared Library of functions that we maintain. Because of this, most of these techniques work best in scripted pipelines, but they can be applied to declarative pipelines as well if you leverage Jenkins Shared Libraries.

Below is how we leverage the plugin, including code snippets from our functions library. Note that our use case is cross-cluster agents: we run agents local to a remote cluster for jobs that are impacted by network latency, such as moving large amounts of data around. The plugin can also run dynamic agents in the same cluster as Jenkins, and the same configuration described below applies.

It is worth noting that pod templates can be created via the UI. However we have found it far more manageable and repeatable to create our pod templates via code and configuration files. This allows us to easily make changes and retain history via Git.
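As a hypothetical sketch of what that can look like (assuming a multibranch pipeline, where the readTrusted step is available; the file path is illustrative), the pod manifest can live in the repository and be loaded at build time:

```groovy
// Hypothetical: load a pod manifest versioned in the repo alongside the
// pipeline code, so changes to the agent definition go through Git review
podTemplate(yaml: readTrusted('ci/agent-pod.yaml')) {
  // node() {} block here referencing pod's node label
}
```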

STOP: If you haven’t read part one of our series please read that first. It goes over setting up the Jenkins Kubernetes Plugin. This will need to be set up before you start leveraging pods as agents as described below.

Pod Templates

You can define Pod Templates inside of the plugin configuration UI. The documentation goes over mechanisms for extending pod templates and plenty of other complexity, but we preferred to keep it simple with our implementation.

The podTemplate step defined by the plugin can take various inputs to templatize how to build an agent pod, but because most of the SREs at Moogsoft are more familiar with YAML than Groovy, we use YAML. The yaml parameter for podTemplate is defined in the plugin documentation as "yaml: yaml representation of the Pod, to allow setting any values not supported as fields". To build that YAML we wrote some functions in our Jenkins Shared Library, and we call them like this:

podTemplate(yaml: agentPodYaml.getWithContainers('java', 'helm')) {
  // node() {} block here referencing pod's node label
}

Our function breaks a Kubernetes pod manifest YAML down into four sections: the "head", the "resource limits", the "containers", and the "tail". Below are some redacted code samples from our 'agentPodYaml.groovy' shared library script.

This is an example of an entry point. We know certain containers imply certain build processes that may require more memory, so we let the list of desired containers in a pod determine its resource requests and limits. If no argument is provided, the result is a basic busybox container we named "bash" to make it more universal.

def getWithContainers(String... containers){
  // Groovy varargs cannot take a default value, so default to 'bash' here
  if (!containers) {
    containers = ['bash'] as String[]
  }
  def podYaml = getYamlHead()
  containers.each { containerName ->
    switch (containerName){
      case 'java':
        podYaml += getJavaAgentSpec()
        break
      // ...additional cases such as 'helm' redacted
      case 'bash':
      default:
        podYaml += getBasicAgentSpec()
    }
  }
  podYaml += getResourceLimits(containers as List)
  podYaml += getYamlTail()
  return podYaml
}

This YAML "head" function just returns the top of a pod manifest, with the list of containers yet to be added.

def getYamlHead(){
  '''apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: awesome-agent
spec:
  containers:
'''
}

We pin our resource limits because of how Jenkins handles them for the JVM.

def getResourceLimits(List containers){
  '''    resources: # We "pin" these requests and limits because Jenkins will always use the request amount for the jvm memory.
      limits:
        cpu: 3
        memory: 7Gi
      requests:
        cpu: 3
        memory: 7Gi
'''
}

This "tail" to the yaml being built really just defines the secret we use to pull images from a private repository.

def getYamlTail(){
  '''  imagePullSecrets:
    - name: awesome-docker-secret
'''
}

Container yaml function for a basic bash busybox agent.

def getBasicAgentSpec(){
  '''  - name: bash
    image: busybox
    command:
      - cat
    tty: true
    imagePullPolicy: Always
'''
}

This is our Java agent spec, which actually includes two containers: a custom Java agent container and a *-dind image, because we use some Docker coolness in our tests.

The DOCKER_HOST environment variable in the 'java' container points at the 'dind-daemon' container: all containers within the same pod share a network namespace, so "localhost" is pod-scoped.

def getJavaAgentSpec(){
  '''  - name: java
    image: awesome.docker.repository/jenkins-java-agent
    command:
      - cat
    tty: true
    imagePullPolicy: Always
    env:
      - name: DOCKER_HOST
        value: tcp://localhost:2375
  - name: dind-daemon
    image: awesome.docker.repository/docker:17.06.0-dind
    securityContext:
      privileged: true
'''
}

The result of the original call, agentPodYaml.getWithContainers('java', 'helm'), would look like this:

apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: awesome-agent
spec:
  containers:
  - name: java
    image: awesome.docker.repository/jenkins-java-agent
    command:
     - cat
    tty: true
    imagePullPolicy: Always
    env:
      - name: DOCKER_HOST
        value: tcp://localhost:2375 
  - name: dind-daemon
    image: awesome.docker.repository/docker:17.06.0-dind
    securityContext:
        privileged: true
  - name: helm
    image: awesome.docker.repository/jenkins-helm-agent
    command:
     - cat
    tty: true
    imagePullPolicy: Always
    resources: 
      limits:
          cpu: 3
          memory: 7Gi
      requests:
          cpu: 3 
          memory: 7Gi 
  imagePullSecrets:
    - name: awesome-docker-secret

Clouds

The cloud parameter selects which "Kubernetes cloud" to use for the agent. It isn't required, and defaults to the string value 'kubernetes'. We leverage this because when Jenkins is deployed via the Helm chart, the target cluster is automatically added as the "kubernetes" entry in the plugin. That means some of our pipelines, which run local to our tooling cluster, do not need to provide a cloud parameter.

// Runs in the tools cluster local to the Jenkins controller.
podTemplate(yaml: agentPodYaml.getWithContainers('java', 'helm')) {
  // node() {} block here referencing pod's node label
}

// Runs in the prod cluster in eu-west-2
podTemplate(cloud: 'prod-eu-west-2', yaml: agentPodYaml.getWithContainers('java', 'helm')) {
  // node() {} block here referencing pod's node label
}

// Runs in the dev cluster in us-west-2
podTemplate(cloud: 'dev-us-west-2', yaml: agentPodYaml.getWithContainers('java', 'helm')) {
  // node() {} block here referencing pod's node label
}

Containers

When the plugin builds the agent pod it will always add a jnlp agent container to the manifest for the Jenkins agent itself. The plugin exposes a mechanism to customize exactly which container image to use for it. The sh step, and any other steps that make calls out to native operating system programs like bash or mysql, will execute within this container by default.
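To customize that agent container, the plugin's documentation describes declaring a container named "jnlp" in the pod YAML, which replaces the default image. A minimal sketch, assuming a custom agent image (the image name here is a placeholder):

```yaml
# Hypothetical: a container named "jnlp" overrides the default agent container
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: awesome.docker.repository/custom-inbound-agent
    args: ['$(JENKINS_SECRET)', '$(JENKINS_NAME)']
```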

Switching Container Context

To run within the context of another container, the container block must be used. For example, if you included a container named "mysql" in your podTemplate, to run within its context you'd need to do something like:

container('mysql'){
  sh "mysql -h $MY_HOST -P $MY_PORT -u $USER -p$PASSWORD -e 'SHOW DATABASES'"
}

Putting it all together

This little "pipeline" will run in the configured Kubernetes cloud, and will print "Hello World" statements from both the jnlp agent container that is in every agent pod and the python container.

podTemplate(cloud: 'not-jenkins-source', label: 'hello-world', yaml: agentPodYaml.getWithContainers('python')) {
  node('hello-world') {
    sh "echo 'Hello World, I am running inside the Jenkins agent container'"
    container('python'){
      sh 'echo \'print("Hello from the python side.")\' > hello-world.py'
      sh 'python hello-world.py'
    }  
  }
}

Preventing Collisions

POD_LABEL can be leveraged to automatically label a pod, which in turn prevents collisions when running multiple agents. Note that per the plugin documentation this is only available in plugin versions 1.17.0 and newer: "Please note the POD_LABEL is a new feature to automatically label the generated pod in versions 1.17.0 or higher, older versions of the Kubernetes Plugin will need to manually label the podTemplate."

Below is a stub of what it looks like when calling your node. As you saw in the previous section, we explicitly labeled our node when calling it.

node('hello-world') {

If instead we used the ‘POD_LABEL’ syntax it would have automatically generated a label for us.

node(POD_LABEL) {

This is helpful if you plan on running multiple instances of the same job. This occurs in our environment during deployments. Since the deployment job uses the same agent definition every run we would see collisions without different labels. Using ‘POD_LABEL’ we can ensure that each instance of that agent has a separate pod label.

Note: We specify ‘POD_LABEL’ explicitly. However per the documentation it can be omitted and will be set as the default. We prefer to explicitly set values when possible to avoid confusion around functionality for new users. From the docs, “label: The label of the pod. Can be set to a unique value to avoid conflicts across builds, or omitted and POD_LABEL will be defined inside the step.”
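Combining the pieces, a sketch of a deployment-style job using POD_LABEL (assuming our agentPodYaml shared library from earlier; the cloud name is illustrative) looks like this:

```groovy
// Each concurrent build gets a uniquely labeled pod, so two runs of the
// same job never collide on an agent
podTemplate(cloud: 'prod-eu-west-2', yaml: agentPodYaml.getWithContainers('java', 'helm')) {
  // POD_LABEL is defined by the podTemplate step and is unique to this build
  node(POD_LABEL) {
    container('java') {
      sh 'java -version'
    }
  }
}
```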

Wrapping it up

The plugin has many additional features, some of which we leverage and others we don't. This should give you an idea of how to use the plugin to at least get agents running in other clusters. From there you can do various things, such as localizing jobs that suffer from network latency, or tightening down access for your API users while providing broader access to the agents within the cluster.

For us personally, we implemented this to migrate customer data via an agent within the region. This made our network calls, dumps and restores much faster than running it from the same cluster as Jenkins. This is especially true when crossing oceans. Even if you don’t have global clusters, spinning up agents on demand is extremely useful. Hope you found this helpful, or at least interesting!


About the author

Joshua Zangari & Thom Duran

Thom Duran is a Director of SRE at Moogsoft, where he leads a team of SREs that are focused on building the platform for Moogsoft Observability Cloud, as well as spreading best practices to enable a DevOps culture. Focusing on stability and automation has been part of Thom's life for the past decade. He got his start in the trenches of a traditional NOC. From there he moved into more traditional SRE roles focused on availability and monitoring at GoDaddy. At Moogsoft Thom's goals are always about driving efficiency, making sure people love what they do, and that people have a place where they can excel. Joshua Zangari is an SRE Delivery Lead at Moogsoft, where he focuses his wizardry on a self service developer efficiency platform built in house to support pipelines, performance testing, and security. He has nearly 15 years of experience in software, and worked in IT before that. When he isn’t writing code you can generally find him reading, in the woods, or under his Jeep.
