Harness the Power of Jenkins Shared Libraries

With the growing adoption of the Jenkins Pipeline in organizations, common patterns emerge, necessitating the sharing of reusable components to enhance code management and eliminate redundancies. Jenkins’ “Shared Libraries” feature facilitates this by allowing the definition of libraries in external repositories, seamlessly integrated into existing Pipelines.

In this post, I will guide you through the process of harnessing the full potential of shared functions in Groovy pipelines. By implementing these shared functions, you will be able to unlock scalability, maintainability, and efficiency in your pipelines, resulting in heightened productivity and accelerated development processes.

What should you expect to learn?

  1. Achieving consistency and modularity with Shared Libraries in DevOps processes.
  2. Orchestrating pipeline workflows by modularizing components.
  3. Reusability and benefits of shared common functions for collaboration.

Consistency through Modularization

By leveraging Shared Libraries, DevOps teams can modularize processes across the entire project lifecycle, enabling tasks like configuration changes, quality checks, and dependency gathering to be encapsulated within reusable library functions. This approach minimizes error risks and fosters consistency throughout the development process.
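To make this concrete, here is a minimal sketch of what such an encapsulated task could look like as a global library step. The file name runQualityChecks.groovy, its parameters, and the scanner command are hypothetical, for illustration only:

/vars/runQualityChecks.groovy (hypothetical)

#!/usr/bin/env groovy
// Hypothetical shared step that encapsulates a quality check,
// so every pipeline runs it the same way
def call(Map args = [:]) {
    // Fall back to a default scanner when none is specified
    def tool = args.tool ?: "sonar-scanner"
    echo "Running quality checks with ${tool}"
    sh "${tool} -Dsonar.projectKey=${args.projectKey}"
}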


If you’re interested in the modularity and benefits of the Jenkins Shared Library in our pipelines, check out my YouTube tutorial, which shows how to incorporate this modular approach effectively.

Orchestrating the Pipeline Workflow

As DevOps becomes the central focal point for a myriad of requests to create and customize pipelines, it is crucial to move toward a more sophisticated orchestration approach. Rather than writing numerous repetitive pipelines, we can deconstruct workflow processes into modular components.

(root)
+- src
|   +- org
|       +- foo
|           +- functions
|           |   +- dockerLibs.groovy
|           +- pipelines
|               +- backend.groovy
+- vars
|   +- commonPipeline.groovy
+- resources
    +- org
        +- foo
            +- docker.yaml (not discussed yet)

Unveiling the skeleton pipeline

Let’s lay the foundation for a standardized skeleton pipeline that provides developers with a clear framework to follow.

Here is the Core Pipeline Blueprint:

/vars/commonPipeline.groovy

#!/usr/bin/env groovy

def call(sharedLibrary, svcName, buildCommands, pod, slackChannel) {
    ...
    pipeline {
        agent {
            ...
        }
        stages {
            stage('Initialization') {
                ...
            }
            stage('Compilation') {
                // Run only when this pipeline's configuration enables the stage
                when { expression { buildCommands['compileData'].run } }
                steps {
                    echo "Starting Compilation stage"
                    script {
                        try {
                            // Delegate the actual work to the team-specific workflow class
                            sharedLibrary.executeStage("compile", buildCommands['compileData'])
                        } catch (Exception e) {
                            echo "Failed in compilation stage: ${e.toString()}"
                            throw e
                        }
                    }
                }
            }
            stage('Unit test') {
                ...
            }
            stage('Build and Upload Artifact') {
                ...
            }
            stage('Integration Tests') {
                ...
            }
            stage('Deployment') {
                ...
            }
        }
        post {
            ...
        }
    }
}
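To show how this skeleton is consumed, here is a sketch of a Jenkinsfile invoking the shared step. The library name my-shared-library and all argument values are placeholder assumptions, not part of the original setup:

// Hypothetical Jenkinsfile; 'my-shared-library' stands for whatever name
// the library is registered under in Jenkins
@Library('my-shared-library') _

// Instantiate the team-specific workflow class (defined later in this post)
def sharedLibrary = new org.foo.pipelines.backend()
def buildCommands = [compileData: [run: true]]

// Arguments: workflow class, service name, stage configuration, pod template, Slack channel
commonPipeline(sharedLibrary, 'my-service', buildCommands, 'default-pod', '#ci-alerts')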

The Reusability of Common Functions

Unleashing the benefits of shared common functions fosters collaboration and helps maintain high development standards.

Here is an example of a frequently used function:

/src/org/foo/functions/dockerLibs.groovy

def buildImage(Map args = [:]) {
    // Default to the "latest" tag when no version is supplied
    if (!args.version) args.version = "latest"
    return docker.build("${args.imageName}:${args.version}")
}

The buildImage function is responsible for building a Docker image.

Will this function be used frequently throughout the lifespan of our workflow?

If the answer is yes, you should consider organizing the function within the common-functions section, ensuring it is logically structured in a hierarchical manner.

In our scenario, we created a folder where we store shared functions related to Docker. This is where we keep all the Docker functions for our various use cases. We can then import the dockerLibs.groovy class and conveniently reuse all the functions it contains.
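As a brief sketch, a pipeline script or another library class could instantiate and reuse it like this; the image name and version are illustrative values:

// Instantiate the shared Docker functions class from the library
def dockerLib = new org.foo.functions.dockerLibs()
// Reuse buildImage with hypothetical arguments
def image = dockerLib.buildImage(imageName: "naturalett/my-service", version: "1.0.0")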

Building the pipeline workflow

The workflow class lives in the shared library and is tailored to the specific needs of a team or product. It is an individualized class within the library and can use other shared classes, such as the dockerLibs.groovy shown earlier. Essentially, the pipeline workflow incorporates and builds upon the skeleton structure we established previously.

Here is an example:

/src/org/foo/pipelines/backend.groovy

package org.foo.pipelines

import groovy.transform.Field

// Derive the service name from the repository URL and pin the organization name
@Field String svcName = (scm.getUserRemoteConfigs()[0].getUrl().tokenize('/')[3].split("\\.")[0]).toLowerCase()
@Field String organization = "naturalett"

def docker_library = new org.foo.functions.dockerLibs()

// Route each stage of the skeleton pipeline to its team-specific implementation
def executeStage(stageName, stageData, tag = "") {
    switch (stageName) {
        case "initialization":
            this.initialization(stageData)
            break
        case "compile":
            this.compile(stageData)
            break
        case "test":
            this.test(stageData)
            break
        case "artifact":
            this.artifact(stageData)
            break
        case "int-test":
            this.intTest(stageData)
            break
        case "deployment":
            this.deployment(stageData)
            break
    }
}

def initialization(stageData) {
    echo "TODO"
}

def compile(stageData) {
    def image = docker_library.buildImage(
        imageName: "${organization}/${svcName}"
    )
    // Alternative way to build the image: def image = docker.build("${organization}/${svcName}")
}

def test(stageData) {
    echo "TODO"
}

def artifact(stageData) {
    echo "TODO"
}

def intTest(stageData) {
    echo "TODO"
}

def deployment(stageData) {
    echo "TODO"
}

def successStep() {
    echo "TODO"
}

return this

The differentiation between each pipeline occurs within the workflow itself. Each pipeline defines its own workflow and utilizes the shared classes from the shared library. This approach allows us to have a tailored workflow for each team or product while also promoting the reusability of functions.
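For example, a second team could add its own workflow class next to backend.groovy. The following frontend.groovy is purely hypothetical and only sketches how stages might diverge while the skeleton and shared functions stay common:

/src/org/foo/pipelines/frontend.groovy (hypothetical)

package org.foo.pipelines

def executeStage(stageName, stageData, tag = "") {
    switch (stageName) {
        case "compile":
            // A frontend team might run an npm build instead of a Docker build
            sh "npm ci && npm run build"
            break
        case "deployment":
            echo "TODO: frontend-specific deployment"
            break
    }
}

return this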

Summary

We’ve explored the concept of Jenkins Shared Libraries and their benefits in optimizing Jenkins Pipelines. Shared Libraries enable the reuse of components across projects, enhancing code management and eliminating redundancies. By modularizing processes and encapsulating them within reusable library functions, organizations can achieve consistency across their DevOps processes.

Our next steps involve…

There are additional practices that deserve a closer look. In the upcoming post, I will delve into automated pipeline decisions, including:

  1. Configuring project language for initial setup.
  2. Jenkinsfile implementation.

Are you looking to enhance your DevOps skills and learn how to implement continuous integration using Jenkins container pipelines? Join my course, where you’ll gain practical experience in various pipeline scenarios and master production skills. Take the first step towards becoming a DevOps expert today.

Hands-On Mastering DevOps — CI and Jenkins Container Pipelines
