15 JUL, 2020
Continuous Integration (CI) and Continuous Deployment (CD) are the pillars of an Agile ecosystem. CI/CD tools allow us to automate the tasks that follow every change to a project's codebase, from automated testing and static and dynamic code analysis to deployment.
Know Your Basics – What is a Jenkins Pipeline?
Jenkins is a free and open source automation server. It helps automate the parts of software development related to the build, test and deployment of code, facilitating continuous integration and continuous delivery. (Ref: Wikipedia – Jenkins (Software))
In computing, a pipeline is a set of connected data-processing elements, where the output of one element is the input of the next. It is easy to see the relation to a CI job, whose execution is composed of several steps, each with its own configuration and dependent on the results of the previous ones.
Jenkins Pipeline (or simply “Pipeline” with a capital “P”) is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. (Ref: What is a Pipeline?)
A Pipeline takes code from your source control repository and helps you run tests and make changes to enable the application to be “production-ready”.
Pipeline Project
Within a Pipeline Project (read: plugin), Jenkins introduces a domain-specific language (DSL) based on Groovy, which can be used to define a new pipeline as a script. A flow that would typically require many "standard" Jenkins jobs chained together can be expressed as a single script. The Groovy-based DSL thus combines the best of both worlds: the convenience of Jenkins' built-in automation steps and the flexibility of a general-purpose scripting language.
This gives us a few important benefits –
- Pipeline adds a powerful set of automation tools onto Jenkins.
- Setting up a Pipeline project means writing a script that sequentially applies steps or stages to obtain the intended output for our deployment.
- We can use conditionals (if/then/else), loops (for, while), variables, and so on, as shown in the sketch after this list. Since Groovy is an integral part of Jenkins, we can also use it to access almost any existing plugin, or even Jenkins core features.
- By storing the Pipeline scripts in, let's say, Git, we can apply the same process as with any other code: commit them to the repository, use pull requests, code reviews, and so on.
- Furthermore, the Multibranch Pipeline plugin allows us to store the script in a Jenkinsfile and define a different flow inside each branch.
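To illustrate the Groovy constructs mentioned in the list above, here is a minimal scripted-style sketch; the environment names ('dev', 'qa', 'staging') are purely illustrative:

node {
    // illustrative deployment targets, not part of any real setup
    def targets = ['dev', 'qa', 'staging']
    for (target in targets) {
        stage("Deploy to ${target}") {
            if (target == 'staging') {
                echo 'Running extra verification before staging'
            }
            echo "Deploying to ${target}"
        }
    }
}

Because this is plain Groovy, the list of stages can be computed at runtime rather than hard-coded, something a chain of freestyle jobs cannot easily do.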
A Jenkinsfile (also known as a Pipeline script) is a text file that contains the definition of a Jenkins Pipeline and is “checked in” to a project’s source control repository.
Creating a Jenkinsfile and committing it to source control provides a few immediate benefits:
- It automatically creates a Pipeline build process for all branches and pull requests.
- The Pipeline can be code-reviewed and iterated upon along with the remaining source code.
- It enables an audit trail for the deployment pipeline.
- It serves as a single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.
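To make this concrete, here is a minimal sketch of a Jenkinsfile that could be committed to the root of a repository; the make commands are placeholders for your project's actual build and test commands:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // placeholder build command; replace with your project's own
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                // placeholder test command
                sh 'make test'
            }
        }
    }
}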
Inline Editor
The inline editor lets you quickly test a new job configuration: edit the pipeline script in a text field on the job configuration page, save the configuration, and run the job.
Versioned Pipeline
As Pipelines grow, they become difficult to maintain using only the text area on the Jenkins job configuration page. A better option is to store them in dedicated files versioned within your project repository.
By doing so, every change to the job configuration is versioned and can be updated like any other script. In this case, the pipeline content is no longer typed directly into the configuration page; it lives in the SCM tool being used. We therefore need to specify the repository parameters for the job and the path, inside the repository, of the file containing the pipeline definition. Compared to traditional (freestyle) jobs, pipelines provide some awesome features –
- Visualization: Pipelines provide a better visualization of the status of several parts that comprise a Jenkins job
- Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline
- Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins master
- Pausable: Pipelines can optionally stop and wait for human input or approval before continuing the Pipeline run
- Versatile: Pipelines support complex real-world continuous delivery requirements, including the ability to fork/join, loop, and perform work in parallel
- Extensible: The Pipeline plugin supports custom extensions to its DSL (Domain-Specific Language) and multiple options for integration with other plugins
Exceptions
The basic statements and expressions which are valid in Declarative Pipeline follow the same rules as Groovy syntax with the following exceptions:
- The top level of the Pipeline must be a block, specifically pipeline { }
- No semicolons should be used as statement separators; each statement must be on its own line
- Blocks must only consist of sections, directives, steps, or assignment statements
- A property reference statement is treated as a no-argument method invocation; for example, input is treated as input()
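As a quick illustration of these rules, the sketch below is valid Declarative syntax: the top level is a pipeline block, and every statement sits on its own line without semicolons:

pipeline {
    agent any
    stages {
        stage('Syntax') {
            steps {
                echo 'one statement per line'
                echo 'no semicolons as separators'
            }
        }
    }
}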
Pipeline Terms
Declarative Pipeline encourages a declarative programming model and imposes limitations on the user through a much stricter, pre-defined structure, which is ideal for simpler continuous delivery pipelines. It provides a simplified and friendlier syntax with specific statements for defining pipelines, without requiring you to learn Groovy.
All valid Declarative Pipelines must be enclosed within a pipeline block, for example:
pipeline {
    /* insert Declarative Pipeline here */
}
The agent section specifies where the entire Pipeline, or a specific stage, will execute in the Jenkins environment, depending on where the agent section is placed. The section must be defined at the top level inside the pipeline block, while stage-level usage is optional.
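The following sketch shows both placements; the 'linux' label is an assumed agent label, not one that exists by default:

pipeline {
    agent any    // top-level agent: the default for every stage
    stages {
        stage('Build') {
            agent { label 'linux' }    // stage-level agent overrides the default for this stage
            steps {
                echo 'Running on an agent labelled linux'
            }
        }
    }
}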
The environment directive specifies a sequence of key-value pairs, which will be defined as environment variables for all the steps, or stage-specific steps, depending on where the environment directive is located within the Pipeline.
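Here is a small sketch showing both placements; the variable names and values are illustrative:

pipeline {
    agent any
    environment {
        APP_NAME = 'demo-app'    // visible to all stages
    }
    stages {
        stage('Build') {
            environment {
                BUILD_MODE = 'release'    // visible only within this stage
            }
            steps {
                echo "Building ${APP_NAME} in ${BUILD_MODE} mode"
            }
        }
    }
}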
The parameters directive provides a list of parameters which a user should provide when triggering the Pipeline.
These parameters are presented when you select the 'Build with Parameters' option. The values of the user-specified parameters are made available to Pipeline steps via the params object. This block is allowed only once inside the pipeline block. Parameter types include string, text, choice, and booleanParam.
This section supports other parameter types as well, such as credentials and passwords.
class="lang:default decode:true " >pipeline { agent any parameters { string(name: 'NAME', defaultValue: '', description: 'What is your Full Name?') text(name: 'ADDRESS', defaultValue: '', description: 'Enter some information about your address') choice(name: 'GENDER', choices: ['Male', 'Female', 'Others'], description: 'Select your Gender') } stages { stage('Example') { steps { echo "Hello ${params.NAME}" echo "Address: ${params.ADDRESS}" echo "Gender: ${params.GENDER}" } } }
Stages allow us to group job steps into distinct parts, and they are the main components of a pipeline. The stages section contains a sequence of one or more stage directives; it is recommended to have at least one stage directive for each discrete part of the delivery process. This makes it easy to see the duration of a single stage across multiple job runs. Also, if an error occurs in any stage, the failed stage is highlighted and the following stages are not executed. Typical stages are Build, Test, and Deploy.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building the project'
            }
        }
        stage('Test') {
            steps {
                echo 'Executing Testcases'
            }
        }
    }
}
If you want a stage to run only when a specific condition is true, you can use the when directive. The when directive lets the Pipeline decide whether the stage should be executed, depending on the given condition. The directive is optional, lives inside a stage block, and must contain at least one condition.
pipeline {
    agent any
    parameters {
        choice(name: 'RELEASE_ENVIRONMENT', choices: "Build\nTest", description: '')
    }
    stages {
        stage('Build') {
            when {
                expression { "${params.RELEASE_ENVIRONMENT}" == 'Build' }
            }
            steps {
                echo 'Building'
            }
        }
    }
}
A ‘step’ defines a single action for a stage to perform. Steps are grouped inside a steps block, which is allowed inside each separate stage block, and every stage must contain at least one step. Specific commands as well as shell scripts can be run as steps.
pipeline {
    agent any
    stages {
        stage('CI') {
            steps {
                echo 'Building the project'
                echo 'Testing the project'
            }
        }
        stage('CD') {
            steps {
                echo 'Deploying on QA'
                echo 'Deploying on Stage'
                echo 'Deploying on Prod'
            }
        }
    }
}
Snippet Generator
To help you learn the Pipeline DSL, a snippet generator is provided along with the Pipeline plugin. The generator lets us create snippets of code for practically all the steps available within a pipeline.
Interestingly, it is aware of the Jenkins environment and provides error checking and additional capabilities depending on the plugins installed on the system. The snippet generator also integrates with existing build steps, allowing configuration similar to that used in traditional Jenkins jobs.
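For instance, selecting the git step in the generator and filling in the repository fields produces a ready-to-paste snippet like the following (the URL and branch are placeholders):

git branch: 'main', url: 'https://github.com/example/project.git'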
You can find detailed information on the above narrative here.
About the Author: Ashwarya Joshi is a result-oriented and dedicated individual working as a cloud engineer at Tavisca Solutions. She has extensive experience with AWS, Docker, ECS, Kubernetes, IaC, Grafana, Redis, Cassandra, RabbitMQ, Jenkins, and Consul.