Continuous Integration & Delivery @ Moogsoft: GitLab and Jenkins Integration
Joshua Zangari | June 8, 2020

Here’s how we trigger our Jenkins pipelines from GitLab using the GitLab Jenkins CI Integration and the Jenkins GitLab Plugin.

One of the SRE team’s goals at Moogsoft is to make sure our feature teams have an easy path from local code changes to production. Changes rolling out to production mean new features, bug fixes, optimizations, and more, which translates into value added for our customers. In short, at Moogsoft we are all about making sure our product is continually evolving, and one way the SRE group helps is by building shared Jenkins functionality our engineers understand and can use quickly. In this article of our series, we introduce how we trigger our Jenkins pipelines from GitLab using the GitLab Jenkins CI Integration and the Jenkins GitLab Plugin.


“But Josh!?” you ask, “GitLab already provides CI/CD mechanisms and Kubernetes integration.” I would respond: “You are correct, and given your situation you may choose to use them, but we did not.” Here’s why: our deployment architecture, Kubernetes cluster configuration, network security, and a host of other factors made it much simpler to perform CI/CD (Continuous Integration & Delivery) from a system located closer to our pre-production and production Kubernetes clusters: a separate Kubernetes cluster we already had running for “tools” of this nature, one that already had good RBAC and network access to the other clusters. This was evident in the fact that all the old CI/CD tooling we had in GitLab simply triggered Jenkins jobs, because we had an older Jenkins installation running on AWS EC2 with easier visibility of, and access to, said clusters.

The older Jenkins was insufficient for our needs, though. It didn’t scale or perform well, and builds took ages when it did work. The pipelines running on it weren’t well structured either, and relied heavily on shell/bash scripts for logic that could be simplified using Groovy. For these reasons we ended up setting up a new Jenkins instance on our “tools” cluster. We also leverage its Kubernetes plugin to scale our Jenkins agents horizontally using specific Docker images, defining the pod YAML for each agent in the pipelines themselves.


Going into this, our intent was for developers to trigger builds through their interactions with Git, such as pushing changes to a branch or creating a “Merge Request,” or MR. (A Pull Request by any other name will still get ignored :D) We wanted them to have a deployable build for each change pushed to a branch, including master. We divided these builds into four types: dev, qa, staging, and prod.

Build types are generated depending on what action occurred and serve different purposes. The ‘dev’ build type triggers when pushing to an in-progress feature branch. These development builds are meant to run fast and only perform unit and other “build time” tests via our Gradle plugin. Our ‘qa’ builds are the result of creating an MR or pushing changes to a branch with an open MR. They deploy a QA instance and spawn automated end-to-end testing. When master is updated by a branch being merged in an MR, ‘staging’ builds are generated and smoke tested. If they check out, they are promoted to ‘prod’ type builds and deployed. All our build types are separated by Artifactory repositories.
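As a sketch, the mapping from GitLab activity to build type could look roughly like the following. The env-var values come from the GitLab plugin; the function name and exact logic are illustrative, not our actual implementation:

```groovy
// Illustrative sketch: derive the build type from the GitLab plugin's
// environment variables. resolveBuildType is an assumed name.
def resolveBuildType(String actionType, String sourceBranch) {
    if (actionType == 'MERGE') {
        return 'qa'        // MR created, or push to a branch with an open MR
    }
    if (actionType == 'PUSH' && sourceBranch == 'master') {
        return 'staging'   // master updated by a merge; promoted to 'prod' after smoke tests
    }
    return 'dev'           // push to an in-progress feature branch
}
```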


To realize our goals we used the most common tooling for the purpose: the GitLab Jenkins CI Integration and the Jenkins GitLab Plugin. On the GitLab side of the house, the integration allows you to direct GitLab repository events at Jenkins Jobs in order to trigger them.

Setting up Jenkins

Install the plugin via the plugin manager, configure it with a GitLab connection in “Manage Jenkins”, and during creation of a new pipeline job check the box for the job to be triggered by GitLab.



You may notice additional checkboxes: UI options that can also be configured in scripted or declarative pipelines. We leave these at their defaults and let our pipelines set their values when they trigger. Make sure that Push Events is selected initially.

Setting up GitLab

Navigate to the Jenkins CI Integration by selecting “Integrations” from the left menu under “Settings,” then selecting “Jenkins CI.” The screenshot below shows how we set up our integrations for a repository. The Gradle plugin manages versioning by creating and pushing tags of master when it’s built for production, so we don’t leverage those events.



When you click Test settings and save changes, GitLab will send a test push event for the master branch to Jenkins.

What else?

If you’ve made it this far you are probably thinking, “Well, yeah, I can see all of that in the UI and it looks simple enough, so why did you write this article? Is that all?” Then again, if you found this article you may have run into the same frustrations we did. Unlike most other Git integrations for Jenkins I have seen or tried, the GitLab plugin doesn’t do much more than set up some environment variables for a pipeline to access. It also doesn’t play well with multi-branch pipelines, so it’s much easier to use simple pipelines. Notably, those environment variables are not available to the default SCM step in the job configuration that most other plugins use. If you read the documentation you will find that it expects integrators to use those environment variables in their pipelines to clone the repository; the full list of available variables is in the plugin’s documentation.


This did not pass our “simple and easy for developers to configure and use” sniff test, so enter Jenkins Shared Libraries!

These environment variables closely map to the plugin’s event model, which is documented in its README.
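For example, a pipeline can clone using those variables with the standard checkout/GitSCM step. This is a minimal sketch; the credentials ID is a placeholder, not something from our setup:

```groovy
// Clone the repository using the variables the GitLab plugin sets.
// 'gitlab-credentials' is a placeholder for your stored credential ID.
checkout([$class: 'GitSCM',
          branches: [[name: env.gitlabSourceBranch ?: 'master']],
          userRemoteConfigs: [[url: env.gitlabSourceRepoHttpUrl,
                               credentialsId: 'gitlab-credentials']]])
```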

Library for Triggering Pipelines

There are two general ways to load a Jenkins shared library. Implicit libraries are loaded for every job that runs, at a preconfigured branch; explicit libraries must be loaded via a step in the pipeline. Once loaded, the “version” (branch) of the library cannot be changed. We built two shared library functions to help developers easily set up their pipelines when they create a new service. The first is an implicitly loaded variable, so our developers can enter a single simple line of Groovy into their Jenkins job configurations’ pipeline scripts. This function also defines a branch parameter for selecting a branch when run manually, and it contains some of the default GitLab plugin configurations for the job itself. You can find triggerPipeline.groovy below (try and spot the function we load explicitly for the second half of the trigger).

/**
 * @param args Map of arguments: libraryBranch, loadJenkinsfile, triggerOnPush, etc…
 */
def call(args = [:]) {
    def libraryBranch = args.libraryBranch ?: 'master'
    def loadJenkinsfile = args.loadJenkinsfile ?: false // default
    def triggerOnPush = args.containsKey('triggerOnPush') ? args.triggerOnPush : true // default

    library "explicitly-loaded-library@${libraryBranch}" // JSLs are preconfigured then referenced by name

    properties([ // Set up defaults for all our GitLab-triggered pipelines.
            pipelineTriggers([[
                    $class                        : 'GitLabPushTrigger',
                    triggerOnPush                 : triggerOnPush,
                    triggerOnMergeRequest         : true,
                    triggerOnPipelineEvent        : false,
                    triggerOnAcceptedMergeRequest : false,
                    triggerOnClosedMergeRequest   : false,
                    triggerOnApprovedMergeRequest : false,
                    triggerOnNoteRequest          : false,
                    noteRegex                     : '',
                    skipWorkInProgressMergeRequest: false,
                    triggerOpenMergeRequestOnPush : 'both',
                    acceptMergeRequestOnSuccess   : true,
                    branchFilterType              : "NameBasedFilter",
                    includeBranchesSpec           : "",
                    excludeBranchesSpec           : ""
            ]]),
            [$class: 'ParametersDefinitionProperty', parameterDefinitions: [
                    [$class     : 'StringParameterDefinition', defaultValue: 'master',
                     description: 'Branch of the repository to run the pipeline for.',
                     name       : 'branch']
            ]]
    ])

    // Trigger if the gitlab environment variable for action is present
    if (env.gitlabActionType) {
        triggerMechanisms.gitlabEventsTrigger(loadJenkinsfile, args)
    } else {
        // Manual trigger, pass in the job name and branch.
        triggerMechanisms.parameterizedTrigger(env.JOB_BASE_NAME, loadJenkinsfile, params.branch, args)
    }
}
Did you spot triggerMechanisms.groovy being called in the last few lines of the function? Before we cover its methods, though, let’s look at how triggerPipeline.groovy is used. The developers put some variety of the following code into their pipelines:

node() { triggerPipeline() }

There are a few commonly used arguments to this function you probably also noticed, but since we pass the entire args map down to the pipeline, developers have also come up with some that aren’t used in the trigger itself.

  • loadJenkinsfile (Boolean) – We define a default pipeline which leverages our custom Gradle plugin for most services, but developers are free to write their own and put them in the root of their repository in the traditional “Jenkinsfile”. If this argument is true, the pipeline will check out and load that Jenkinsfile instead of the default.
  • libraryBranch (String) – This is actually a job parameter, but it is passed into the second half of the trigger for manual builds.
  • autoDeploy (Boolean) – When set to false it will disable the continuous production deployment for prod type builds. Typically used for services that still have a WIP flag, or on the rare occasion you don’t want to continuously deliver (they happen… I guess).
  • triggerOnPush (Boolean) – This directly affects the GitLab configuration of the same name. Some folks only work off MR events. (You could technically do this for every one of the GitLab plugin configs, but we wanted a tighter control on the behavior so our devs didn’t need to think about what kind of build would come from which activity.)
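Putting a couple of those arguments together, a service that opts out of push-triggered builds and supplies its own Jenkinsfile might configure its job with something like this (argument values are illustrative):

```groovy
// One-line job configuration, passing a few of the arguments above.
node() {
    triggerPipeline(loadJenkinsfile: true, triggerOnPush: false, autoDeploy: false)
}
```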


When loaded by Jenkins, triggerMechanisms.groovy becomes a singleton instance with two methods: one for handling GitLab events and the other for handling a manual trigger via the Jenkins UI.

def gitlabEventsTrigger(Boolean loadJenkinsfile = false, Map args = [:]) {...}
def parameterizedTrigger(String repoName = null, Boolean loadJenkinsfile = false, String branch = '', Map args = [:]) {...}

The gitlabEventsTrigger function takes fewer arguments than the manual trigger. While it does modify the build description, its main responsibility is to decipher the build type from the GitLab environment variables and call the services’ pipelines. Important note: GitLab will send two separate events when an update is pushed to a branch with an open MR: a Push event and an MR event. We have logic that ignores and aborts any triggers from a Push event on an open MR, which requires a query back to the GitLab API.

def activeMRs = httpRequest(
        // GitLab API: list the project's open merge requests. gitlabApiUrl is your
        // instance's API base, e.g. https://gitlab.example.com/api/v4
        url: "${gitlabApiUrl}/projects/${urlEncodedRepoPath}/merge_requests?state=opened",
        customHeaders: [[name: 'PRIVATE-TOKEN', value: gitlabApiToken, maskValue: true]],
        validResponseCodes: "100:499")

The $urlEncodedRepoPath variable is simply the path portion of your repository URL, URL-encoded (minus the .git). We get this with a simple pattern match against one of the GitLab env variables, or a URL we derive. Example:

def matcher = gitlabSourceRepoHttpUrl =~ /^https?:\/\/[^\/]+\/(\S+)\.git$/
def urlEncodedRepoPath = URLEncoder.encode("${matcher[0][1]}", "UTF-8")
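Putting those pieces together, the skip logic might look roughly like this. It’s a sketch only: the response parsing and the exact abort behavior are assumptions, and readJSON comes from the Pipeline Utility Steps plugin:

```groovy
// Sketch: abort Push-event builds for branches that already have an open MR,
// based on the GitLab API response above. The MR event will build them instead.
def mrs = readJSON(text: activeMRs.content)
def hasOpenMR = mrs.any { it.source_branch == env.gitlabSourceBranch }
if (env.gitlabActionType == 'PUSH' && hasOpenMR) {
    currentBuild.result = 'ABORTED'
    error("Ignoring push event: ${env.gitlabSourceBranch} has an open MR")
}
```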

The parameterized trigger does the same check-and-ignore process on the Push/MR events, but it also makes a more generic call to the GitLab API to build up some of the variables the GitLab plugin would ordinarily supply. That logic depends on how your repositories are laid out, as you need to be able to determine the repository URL from the job name, so I will only share the final call: we simply set up the same GitLab environment variables we’d already been using elsewhere before we added the manual trigger.

withEnv([ // …the other GitLab variables, set the same way…
        "gitlabSourceRepoHttpUrl=$gitlabSourceRepoHttpUrl"]) {
    currentBuild.description = "Manually Triggered $buildType build of $branch in ${repoName}"
    // …
}

Calling the pipelines

The last thing we do as a part of the trigger mechanism is actually run the pipelines. As I’ve mentioned, some of our services share an implementation of a pipeline, but we can also load custom-defined ones. A key difference/limitation is that we can only pass the original arguments from triggerPipeline down into the shared pipeline implementation when the trigger doesn’t require loading a Jenkinsfile. We make the buildType available as an environment variable to all builds, in addition to the GitLab environment variables set by the plugin or parameterized trigger. Lastly, if we are loading a Jenkinsfile we use a convenience function we wrote encapsulating the majority of our use cases for doing a sparse checkout of it.

def runPipelineForRepo(Map args = [:]) {
    Boolean loadJenkinsfile = args.loadJenkinsfile ?: false
    echo "Args: ${args}"
    withEnv(["buildType=${args.buildType}"]) {
        try {
            if (!loadJenkinsfile) {
                defaultPipeline(args) // Illustrative name: our shared default pipeline implementation.
            } else {
                checkoutSource(args.buildType, env.gitlabSourceRepoHttpUrl, env.gitlabSourceBranch,
                        env.gitlabTargetBranch, true) // The final argument here tells this to do a sparse checkout of the Jenkinsfile only.
                load "Jenkinsfile"
            }
        } finally {
            // Clean-up and notification logic lives here.
        }
    }
}
We provide a “checkoutSource” function in our Jenkins shared libraries that works off our build type, Git repository/branches, and a flag for a sparse checkout of the Jenkinsfile only.
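A sketch of what such a helper could look like, assuming the Git plugin’s SparseCheckoutPaths extension. The signature mirrors the call above, but the branch-selection logic and credentials ID are assumptions, not our actual implementation:

```groovy
// Sketch of a sparse-checkout helper (assumed internals).
def checkoutSource(String buildType, String repoUrl, String sourceBranch,
                   String targetBranch, Boolean jenkinsfileOnly = false) {
    def extensions = []
    if (jenkinsfileOnly) {
        // Only fetch the Jenkinsfile from the repository root.
        extensions << [$class: 'SparseCheckoutPaths',
                       sparseCheckoutPaths: [[path: 'Jenkinsfile']]]
    }
    // Assumed rule: staging/prod builds come from the target (master) branch.
    def branch = (buildType in ['staging', 'prod']) ? targetBranch : sourceBranch
    checkout([$class: 'GitSCM',
              branches: [[name: branch]],
              extensions: extensions,
              userRemoteConfigs: [[url: repoUrl, credentialsId: 'gitlab-credentials']]])
}
```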

Conclusion / TLDR

GitLab and Jenkins are a little more involved to integrate than some other Git providers out there, but with a little study of the plugin and integration, it’s simple enough to provide a solid method for engineers to manage their Continuous Integration & Delivery with minimal time wasted in setting it up. There are other gaps in your processes that will need to be closed, which we can cover in later articles if there is interest: artifact management, quality in process, or version management. Please be sure to leave feedback if you’re interested!

Thank you for taking the time to read this article. I know as an engineer crunched for time you are reading for answers, so I hope I’ve managed to give them to you without too much interruption to your day.

TLDR: If you take anything away from this, please go read a little about reusing code in Jenkins via implicit and explicit shared libraries here: Extending with Shared Libraries. You should also become familiar with some of the limitations of the Jenkins/GitLab integration mechanisms by reading the documentation on GitHub.

We’re also eager for feedback on a newer plugin we haven’t tried yet: as it wasn’t released when we originally designed our current method, it may be a topic in a future post about multi-branch pipelines.

Moogsoft is the AI-driven observability leader that provides intelligent monitoring solutions for smart DevOps. Moogsoft delivers the most advanced cloud-native, self-service platform for software engineers, developers and operators to instantly see everything, know what’s wrong and fix things faster.

About the author


Joshua Zangari

Joshua Zangari is an SRE Delivery Lead at Moogsoft, where he focuses his wizardry on a self service developer efficiency platform built in house to support pipelines, performance testing, and security. He has nearly 15 years of experience in software, and worked in IT before that. When he isn’t writing code you can generally find him reading, in the woods, or under his Jeep.
