The pros and cons of implementing Jenkins pipelines
With multibranch pipelines, Jenkins has entered the battle of the next generation CI/CD server. But with contestants such as Concourse and CircleCI, there is no clear winner.
Eficode Praqma has been working with Jenkins (née Hudson) as the de facto continuous integration server for nearly a decade now. And for a long time, Jenkins' leadership was unchallenged. But recently a plethora of competitors have entered the scene, giving good ol' Jenkins a run for its money. To combat this, Jenkins has introduced pipelines: a Groovy DSL to control your CI flow by code. With CloudBees investing heavily in pipelines, it has become the future of Jenkins. If you haven't looked at it yet, go visit the Jenkins pipeline page.
So should you just convert all your jobs to pipeline, and then live happily ever after? The answer is, as in almost every other aspect of life: it depends.
This post shows examples using the Phlow and Eficode Praqma's Pretested Integration Plugin. If you do not know them, go read up on them first.
At one of our customers we wanted to create a pipeline that built on multiple OSes in parallel. The solution should block the integration job until all compilation and unit tests were done. The original pipeline was made through the Jenkins UI, so we needed to turn it into code one way or another. No more pointy pointy, clicky clicky!
We could either convert the existing solution to JobDSL, or try to make a Jenkins pipeline version of the same build flow. We went for Jenkins pipelines in order to gain some of the benefits listed in our slide deck on future pipelines.
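The build-in-parallel part of such a flow can be sketched roughly like this in scripted pipeline (the node labels and build commands are hypothetical placeholders, not the customer's actual setup):

```groovy
// Hypothetical sketch: compile and unit test on two OSes in parallel.
parallel linux: {
    node('linux') {
        checkout scm
        sh 'make build test'
    }
}, windows: {
    node('windows') {
        checkout scm
        bat 'build.cmd && test.cmd'
    }
}
// The parallel step blocks here until both branches are done,
// so the integration step below only runs after all builds pass.
```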
Pipeline jobs come in two variants: normal and multibranch. Common to both of them is the language: a Groovy-based DSL.
We went with the multibranch pipeline, as it accomplishes having the pipeline embedded as code in the same repository (use case #1 in the slide deck above). Also, in a standard pipeline, there is no way of knowing which branch activated the build, as Jenkins checks out a SHA, not a branch.
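In a multibranch pipeline the Jenkinsfile lives in the repository itself, and the branch that triggered the build is exposed to the script. A minimal sketch:

```groovy
// Minimal multibranch Jenkinsfile sketch; env.BRANCH_NAME holds the
// branch that triggered this build (only set in multibranch jobs).
node {
    checkout scm
    echo "Building branch: ${env.BRANCH_NAME}"
    sh 'make build'
}
```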
A reduced example of the pipeline in production can be found here.
As Eficode Praqma's current version of the Pretested Integration Plugin isn't Pipeline compatible, we needed to script our way to the same functionality. I will not go through the pipeline script here, but instead talk about the problems we ran into while developing it.
As pipelines are a new way of interacting with Jenkins, you cannot leverage all the plugins and functions that a normal freestyle job offers.
For example, the Git checkout routine can be used for the cloning part of pretested integration, but when we need to push the code back to a branch there is no help.
An issue has been raised concerning this, but until it is resolved, you need to use bare Git commands wrapped in the sshagent plugin.
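The push part can look roughly like this (the credentials ID and branch are placeholders, not our actual configuration):

```groovy
// Hypothetical sketch: push the integrated result back to the branch
// with bare Git commands, using the sshagent plugin for credentials.
sshagent(credentials: ['jenkins-ssh-key']) {
    sh 'git push origin HEAD:master'
}
```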
As illustrated below, the old freestyle jobs can easily be made into an overview when set to trigger one another.
The overview gives a quick glance at which branch is building right now, and you can identify persistent errors in a pipeline by scrolling down the list of executions.
With Jenkins Pipeline, the per-master view approach we can take with PIP and freestyle jobs is lost.
The overview is repository-centric, meaning all branches are treated equal, resulting in one view for all branches.
To give an example, both master and /version_2.x/master will now be in the same view.
When going into an individual build, we get a much better overview of parallel builds and easier navigation to the relevant output than the old style overview gave. Each icon is clickable, and shows you the list of commands and corresponding logs.
So if this is a must-have feature, you need to build some kind of dashboard that can give some of the same information as the old way (look at dashing.io or the Pipeline Aggregator View plugin).
Another way is to split the pipeline into two parts.
That way you get the per-master split you want, but you sacrifice end-to-end traceability by developer commit.
Unless you have multiple stages running on the same node at the same time, you do not get a new workspace; Jenkins automatically reuses the old one.
This results in a lot of deleteDir() calls in your pipeline. Otherwise you get weird errors when stashing and unstashing.
You can wrap your node inside a function to make sure that you always have a clean workspace.
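Such a wrapper can be sketched like this (the helper name is our own, not a built-in step):

```groovy
// Hypothetical helper: always start work on a node with a clean workspace.
def cleanNode(String label, Closure body) {
    node(label) {
        deleteDir()   // wipe whatever a previous build left behind
        body()
    }
}

// Usage:
cleanNode('linux') {
    unstash 'source'
    sh 'make build'
}
```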
Retriggering a pipeline from a given job is again easy in freestyle jobs.
When dividing everything into individual jobs, we have the possibility to retrigger a given pipeline from a specific point.
When using Jenkins Pipeline, it's all or nothing, making it impossible to do a retrigger with PIP, as you do not want to retrigger the integration step.
CloudBees has made a proprietary plugin called Checkpoints, which allows you to restart a pipeline from a given checkpoint. Unfortunately, they have neither open sourced it nor closed any of the issues highlighting this.
That is also an argument for splitting the pipeline in two: one for integration to master, the other for the build pipeline itself.
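A two-part setup can be wired together with the build step, roughly like this (the job name and parameter are hypothetical):

```groovy
// Hypothetical sketch: the integration pipeline triggers the build
// pipeline for the freshly integrated commit.
build job: 'build-pipeline',
      parameters: [string(name: 'GIT_SHA', value: env.GIT_COMMIT)],
      wait: false   // fire and forget; the build pipeline has its own view
```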
If you need to perform some manual validation before proceeding to the next stage in your pipeline, use the input step in your Jenkinsfile. If it's only a signal, put it on a flyweight executor so it doesn't take up an executor slot on a node.
stage('Promotion') {
    input 'Deploy to Production?'
}
There are some disadvantages to this workflow. In the Jenkins UI it looks like the pipeline is still executing, even though it is only waiting for manual input. I would have loved to see it shown as parked instead of actively running, but that is what we have for now.
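A common mitigation is to bound the wait with a timeout, so an ignored prompt eventually aborts the build instead of hanging forever (the duration here is just an example):

```groovy
// Abort automatically if nobody answers the prompt within a week.
timeout(time: 7, unit: 'DAYS') {
    input 'Deploy to Production?'
}
```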
Jenkins Pipelines is definitely a step in the right direction, and to be totally honest with you: I like it, but I also see problems!
It is less than a month ago that Jenkins removed the 'BETA' tag from Blue Ocean, the new UI that matches Pipelines.
Some of the problems described here could be solved a year from now, and some we will simply learn to deal with in more elegant ways.
To sum up the experience, here is a TL;DR of the pros and cons discussed above:
Pros:
Cons:
- /ready branches are deleted from the overview after they are run (and merged)