Jenkins Kubernetes Plugin: Running Agents In Other Clusters
Michael Crosby & Thom Duran | June 21, 2021

How to get the Kubernetes Plugin up and running, configure an Nginx Ingress, and configure your first cloud in Jenkins. See part two of this tutorial here.

At Moogsoft we use Jenkins to implement our CI/CD pipelines. We run Jenkins where we run almost everything else: Kubernetes. This is made possible by the community-maintained Kubernetes plugin, though you don't need to have Jenkins running on Kubernetes to use it.

Recently we needed to run agents not only in the cluster local to Jenkins, but also in other clusters across different regions. This allowed us to automate moving data between databases without incurring a hit from network latency. In part one of this article we will go over installing the Kubernetes Plugin.

In this first part we will be discussing how to get the Kubernetes Plugin up and running, configure an Nginx Ingress, and configure your first cloud in Jenkins.

In part two we will discuss how we run agents with different containers across separate clusters and regions from our pipeline code. For those not working across regions you can still use this plugin to run dynamic agents on Kubernetes.

Installing and Configuring the Kubernetes Plugin

Once you have Jenkins up and running, the first step is to make sure you have the Kubernetes plugin installed. You can do this via your own custom install mechanism, or using the Jenkins plugin manager UI.


Jenkins plugin manager UI


After you finish installing the Kubernetes plugin you can find its configuration at <JENKINS_URL>/configureClouds/. You can also get there by clicking "Build Executor Status", then "Configure Clouds".




Giving the Plugin Access to the Target Cluster

Slow down! Before you click "Add a new cloud" we have to get all our ducks in a row. First and foremost is the creation of a Namespace, ServiceAccount, Role, and RoleBinding in the target Kubernetes cluster.

Creating the Namespace

This is the namespace Jenkins will use to run agents in the target Kubernetes cluster. We tend to build all of this using Terraform/Terragrunt (specifically the Kubernetes provider), but you can also use kubectl. We will stick to kubectl to keep this article simple.

  1. Using kubectl, make sure you are in the correct cluster context.
    • kubectl create ns jenkins-agents


A ServiceAccount, Role, and Rolebinding must be created, and the token for the service account must be referenced from the plugin.

Creating a Service Account in the remote Kubernetes Cluster:

The Jenkins controller will use a Service Account to authenticate to the remote Kubernetes cluster where you created your namespace:

  1. Using kubectl, make sure you are in the correct cluster context
    • kubectl create serviceaccount jenkins-agent -n jenkins-agents
    • output: serviceaccount/jenkins-agent created
  2. Get the Service Account Token:
• kubectl get secret $(kubectl get sa jenkins-agent -n jenkins-agents -o jsonpath={.secrets[0].name}) -n jenkins-agents -o jsonpath={.data.token} | base64 --decode
    • Add the Token as a Jenkins Credential by creating a new one of type "Secret text"


Add the Token as a Jenkins Credential


Create a Role and Rolebinding

This gives the Service Account created in the prior step the minimum authorization needed to create Jenkins agents on the Kubernetes cluster, in only the specified namespace.

  1. Create Role:
    • Create a yaml file with the below details inside
      • jenkins-agent.yaml

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: jenkins-agent
        namespace: jenkins-agents
      rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["create","delete","get","list","patch","update","watch"]
      - apiGroups: [""]
        resources: ["pods/exec"]
        verbs: ["create","delete","get","list","patch","update","watch"]
      - apiGroups: [""]
        resources: ["pods/log"]
        verbs: ["get","list","watch"]

    • kubectl apply -f jenkins-agent.yaml
  2. Create RoleBinding: Now that we have created the Role, we need to bind it to the Service Account. We do that using a RoleBinding: Create a yaml file with the below details inside
    • jenkins-agent-role-binding.yaml

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: jenkins-agent
        namespace: jenkins-agents
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: jenkins-agent
      subjects:
      - kind: ServiceAccount
        name: jenkins-agent
        namespace: jenkins-agents

    • kubectl apply -f jenkins-agent-role-binding.yaml
  3. Validation: Double check that everything was created with the below commands (make sure you are in the correct cluster context for kubectl):
    • kubectl get rolebinding -n jenkins-agents
      NAME            ROLE                 AGE
      jenkins-agent   Role/jenkins-agent   19s
    • kubectl get role -n jenkins-agents
      NAME            CREATED AT
      jenkins-agent   2021-04-15T20:13:37Z
    • kubectl get serviceaccount -n jenkins-agents
      NAME            SECRETS   AGE
      default         1         25m
      jenkins-agent   1         22m

Jenkins Role and Role Binding Limitations

The Role and RoleBinding configuration demonstrated here limits Jenkins to the specified namespace, but you can open your permissions up a bit with ClusterRoles and ClusterRoleBindings. The Jenkins agents can then manipulate the wider cluster for other purposes like Continuous Deployment. You can read a little more about the difference here: Role and ClusterRole and here: RoleBinding and ClusterRoleBinding. Our use case did not require this, so we limited each pod's access to the namespace it lands in.

This is made possible because kubectl commands executed on a pod without a kubeconfig use the ServiceAccount of the pod they run on.
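To illustrate, a pod that sets `serviceAccountName: jenkins-agent` inherits that account's permissions automatically; kubectl inside the pod reads the token Kubernetes mounts into the container. This is only a sketch: the pod name and image are illustrative, not part of our setup.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sa-check            # illustrative pod name
  namespace: jenkins-agents
spec:
  serviceAccountName: jenkins-agent
  containers:
  - name: kubectl
    image: bitnami/kubectl   # assumption: any image with kubectl installed works
    # No kubeconfig needed: kubectl falls back to the ServiceAccount token
    # mounted at /var/run/secrets/kubernetes.io/serviceaccount/
    command: ["kubectl", "get", "pods", "-n", "jenkins-agents"]
```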

Connectivity between agents and the Jenkins controller

Note: From here on out we are going to be discussing how to connect agents back to Jenkins from remote clusters. If your use case only requires agents within the same cluster as Jenkins you can move on to the next section.

The Kubernetes plugin exposes a few options to simplify setting up this connectivity, like WebSockets or direct connection. However, since we could create a "private" ingress exposing the agent TCP port to other clusters in our peered VPCs, we took that option. Here is how that works for us:

To paint a verbal picture of our architecture: the Jenkins controller runs on a Kubernetes cluster inside a VPC in the us-west-2 region. The target Kubernetes cluster, where we wanted to deploy a Jenkins agent, is in a VPC in the us-east-2 region. The two VPCs are peered, and we wanted to ensure that the agent-management traffic between the clusters stayed private.

Below is a high-level diagram of the connectivity. Note that it ignores the load balancer for simplicity, as it simply passes traffic through and is configured as part of the Nginx Ingress.




To accomplish this we created a "private" ingress using an NGINX ingress controller and AWS Load Balancer annotations. The Kubernetes manifest for the ingress is:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-agent
  namespace: jenkins
  annotations:
    kubernetes.io/ingress.class: nginx-private
spec:
  backend:
    serviceName: jenkins
    servicePort: 8080
  rules:
  - host:
    http:
      paths:
      - path: /tcpSlaveAgentListener/
        pathType: ImplementationSpecific
        backend:
          serviceName: jenkins
          servicePort: 8080

Routing Ports for WebSocket connections

After the Jenkins agent makes its initial connection on port 8080, all subsequent connections come over port 50000 with no explicit path. For the ingress to respond on TCP port 50000, which Jenkins leverages, we have to expose the TCP port. This is done via a ConfigMap. See the Exposing TCP UDP Services doc.

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: infra
data:
  50000: "jenkins/jenkins-agent:50000"

Note the difference in namespace for this ConfigMap. While the ingress is created in the same namespace as our Jenkins deployment, the TCP ConfigMap is deployed to the namespace where the Nginx pods are running. We did this because in the next step we will configure the '--tcp-services-configmap' flag to point at our ConfigMap in the same namespace as the Nginx deployment.

Once this has been done, you need to ensure the '--tcp-services-configmap' argument is set when the ingress controller pod launches. This argument specifies where the ConfigMap you created lives, as described in the documentation above.


The argument is added to the ingress controller's Deployment spec so that each pod launches with it.
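A sketch of the relevant fragment of the ingress controller's Deployment (the container name and first arg are illustrative of a typical ingress-nginx deployment; the flag value matches the ConfigMap created above in the 'infra' namespace):

```yaml
# Fragment of the ingress-nginx controller Deployment
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller   # illustrative container name
        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=infra/tcp-services
```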

Now if you are deploying Nginx via the Helm Chart, adding your ports and paths becomes much easier.

TCP Port:

  tcp:
    50000: "jenkins/jenkins-agent:50000"

With that one option we have defined that we want to leverage TCP routing, the port to listen on, and the namespace/service:port to hit when a request comes in on that port.

By specifying a TCP port we can explicitly route by port the connection comes in on rather than paths. This is important for connecting the agent as it will initially hit the ‘/tcpSlaveAgentListener/’ path, but all subsequent requests will leverage websockets over port ‘50000’ which has no path attached to it.

More details on the '--tcp-services-configmap' flag can be found in the cli arguments documentation.

Nginx Ingress Gotcha

While implementing the Nginx Ingress we initially specified a Network Load Balancer (NLB) in the AWS annotations. Our goal was to have explicit health checks for each TCP port we were leveraging. What we found, however, is that the TCP check coming from the load balancer resulted in EOF errors on port 50000, as Jenkins was expecting a payload. Since we could not specify a payload in the check (AWS load balancer health checks are simple ping tests), we could not stop these errors from spamming our logs. We also noted that this undue load caused issues with other agents and at one point prevented Jenkins from restarting until we stopped the health check.

We ended up going back to an Elastic Load Balancer (ELB), which is standard for the Nginx Ingress. The downside is that we are only explicitly checking the HTTP port for Jenkins, not port 50000 that the agents communicate on. In effect we are assuming that if the HTTP port is up, everything else is functional. We are still investigating whether there is a better way to do this, and we may push improvement requests regarding health checks on port 50000 in Jenkins. If you have found a way to work around this, please let us know!

Back to the Beginning

Now that we have the ingress and proper access configured we can go back to configuring our clouds. Head over to <JENKINS_URL>/configureClouds/ and click on ‘Add a new cloud’.

Here you will name your cloud and provide the URL to connect to your Kubernetes cluster, the namespace to run agents in, and other required details for how to connect back to Jenkins. Below is everything you will need to configure to move forward.

Name: The friendly name of your cloud. Will be used when deploying a pod to that cluster.

Kubernetes URL: The URL to use to reach your cluster (use https://kubernetes.default for the local cluster).

Kubernetes Certificate: Only needed when connecting to a remote cluster.

Kubernetes Namespace: The namespace to deploy agent pods.

Credentials: Only required for remote clusters. Specify the Kubernetes credentials stored in a Jenkins secret.

Jenkins URL: The URL to use for requests back to Jenkins from the agent.

Jenkins tunnel: Only specify this if your tunnel endpoint is different from your Jenkins URL. Otherwise it defaults to the same host on port 50000.
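If you manage Jenkins with Configuration as Code (JCasC), the same cloud settings can be expressed in YAML. This is a hedged sketch: the cloud name, URLs, and credential ID are placeholders for your own values.

```yaml
jenkins:
  clouds:
  - kubernetes:
      name: "us-east-2"                              # placeholder cloud name
      serverUrl: "https://your-cluster-endpoint"     # placeholder Kubernetes URL
      namespace: "jenkins-agents"
      credentialsId: "jenkins-agent-token"           # the "Secret text" credential created earlier
      jenkinsUrl: "https://jenkins.example.com"      # placeholder Jenkins URL
      jenkinsTunnel: "jenkins-agent.example.com:50000"  # placeholder tunnel endpoint
```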

Once you have entered all the required details, you should be able to test your connection. After this is successful, you're ready to move on to creating pods!

For additional details on configuring your clouds see the official documentation.


To leverage these newly configured clusters you have to tie your pipelines to the cluster using the concept of a Pod Template in the Kubernetes plugin. Check out the next post in this series HERE to see how we use the Jenkins Kubernetes Plugin in our pipelines and leverage the configurations defined in this article.

Moogsoft is the AI-driven observability leader that provides intelligent monitoring solutions for smart DevOps. Moogsoft delivers the most advanced cloud-native, self-service platform for software engineers, developers and operators to instantly see everything, know what’s wrong and fix things faster.

About the author


Michael Crosby & Thom Duran

Thom Duran is a Director of SRE at Moogsoft, where he leads a team of SREs that are focused on building the platform for Moogsoft Observability Cloud, as well as spreading best practices to enable a DevOps culture. Focusing on stability and automation has been part of Thom's life for the past decade. He got his start in the trenches of a traditional NOC. From there he moved into more traditional SRE roles focused on availability and monitoring at GoDaddy. At Moogsoft Thom's goals are always about driving efficiency, making sure people love what they do, and that people have a place where they can excel. Michael is one of our crack Site Reliability Engineers at Moogsoft. Passionate and detail oriented, he enjoys automating tasks using Python and challenging himself to learn and practice new technologies.

