An interactive Jenkins showcase
The conference calendar for software developers in Gothenburg is, quite frankly, a little sparse. So I was delighted when Praqma decided to hold one of their hands-on “Day of…” events here. This time it was Continuous Integration, and Jenkins in particular, on the agenda.
We were really excited about hosting some major league Jenkins developers and they really lit up the event with their insights and expertise. The tagline for the conference was ‘BYOL’ - Bring Your Own Laptop - as most of the sessions involved hands-on coding and learning how to use various Jenkins features. This wasn’t just any old run-of-the-mill training course. These speakers work at the Jenkins coalface - they’re exploring what you can do with Jenkins on a daily basis. Their deep knowledge of how Jenkins fits together underneath, coupled with their expertise in modern development practices, meant this definitely wasn’t a beginner class. This was more of a showcase: check out what Jenkins looks like when you’re doing it really well.
We’d sold all the available tickets, so it was standing room only by the time Christopher Orr stepped up to the podium to speak about “The latest and greatest in the world of Jenkins.” This was hotly anticipated stuff. He gave us a potted history of the Jenkins project, complete with anecdotes about Kohsuke Kawaguchi, the original developer, and a picture of him drawing a sword (to kill bugs with, of course!).
Christopher brought us all up to date with the main features in Jenkins 2, released in 2016. I think there were many in the audience who were still using a much older version, and will be going back to the office to do some upgrading very soon.
Next up was Andrey Devyatkin to discuss “Doing Jenkins as CoDe.” This was an intensive, hands-on session. Starting from a completely fresh, empty cloud instance, we got Jenkins set up and ready to roll in a matter of minutes. What’s more, we had a Jenkins job that could tear everything down, rebuild it from version-controlled source, and restart all the servers. It was a true Phoenix system - any modern DevOps engineer would be impressed!
This technique allows Jenkins administrators to give teams the freedom to experiment and set up their jobs in different ways while keeping control over the essentials. If something stops working it’s easy to discard the changes.
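Rebuilding jobs from version-controlled source is typically done with a seed job and the Job DSL plugin. As a minimal sketch (the job name and repository URL here are hypothetical, not from the session), a seed script might look like this:

```groovy
// Hypothetical Job DSL seed script, kept in version control.
// A seed job runs this script to (re)create the pipeline job,
// so the whole Jenkins configuration can be torn down and rebuilt.
pipelineJob('example-app') {
    description('Recreated from source by the seed job')
    definition {
        cpsScm {
            scm {
                git {
                    remote { url('https://github.com/example/example-app.git') }
                    branch('main')
                }
            }
            scriptPath('Jenkinsfile')
        }
    }
}
```

Because the script lives in version control, discarding a broken experiment is just a matter of reverting the commit and re-running the seed job.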
Robert Sandell is a very experienced Jenkins developer, and in the next session he took us on a really interesting journey. He showed us how to build a declarative pipeline that builds and deploys a modern application. From scratch!
Not only did we see the various syntax and language features we could use; he also showed us how to work: build up the pipeline a little at a time and test it at every stage. Put in ‘hello world’ steps until you have the structure right, then gradually replace them with meaningful code. The final pipeline is available on GitHub, but you really had to be there to truly appreciate how he built it.
It was beautifully done.
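The incremental approach described above might start from a skeleton like this (my own sketch of the idea, not Robert’s actual pipeline):

```groovy
// Declarative pipeline skeleton: every stage is a placeholder.
// Get the structure right first, then replace each echo with
// real build, test, and deploy steps one at a time.
pipeline {
    agent any
    stages {
        stage('Build')  { steps { echo 'hello build' } }
        stage('Test')   { steps { echo 'hello test' } }
        stage('Deploy') { steps { echo 'hello deploy' } }
    }
}
```

Each time you swap a placeholder for a real step, you run the pipeline again, so a mistake is always isolated to the most recent change.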
Another long-time Jenkins contributor, Oleg Nenashev, took to the stage next. He gave us an in-depth look at the “Dark side of Jenkins: Dealing with agent connectivity issues.”
Being able to distribute builds to an army of agents (previously called slaves) was one of Jenkins’ early killer features. It meant you could get more builds done in less time by throwing hardware at the problem. So, the first thing Oleg asked us was whether we thought the agent controls the build. Surprisingly, it doesn’t! The Jenkins master is constantly in communication with the agents to control what they’re doing. If the connection is flaky it can have a really negative impact on the result.
Oleg has clearly done a lot of work in this area and has seen every imaginable problem in the wild. This was a very technical session and it made everyone appreciate the fact that someone like Oleg is working to improve this area. There was more information here than I hope I will ever need!
The last hands-on session of the day saw Julien Pivotto take us through building our own shared libraries. This seems to me like a really useful feature - a bit like plugins, but at a lower level.
You can create code snippets encapsulating useful functionality, available to all your various jobs, and of course you can share them with the open source community too. Over time the power available in a declarative pipeline is just going to increase as more and more people contribute.
I was really pleased to realise that I could progress from a simple helper shell script used in one build job to a fully fledged program that I could unit test and document, and, of course, re-use in several other jobs.
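To give an idea of the shape of a shared library step: a global variable is a Groovy file under `vars/` in the library repository, and its name becomes a step you can call from any pipeline. This is a hypothetical example of my own, not one from the session:

```groovy
// vars/notifyBuild.groovy in a hypothetical shared library repo.
// After the library is configured in Jenkins, a Jenkinsfile can use it:
//   @Library('my-shared-lib') _
//   notifyBuild('SUCCESS')
def call(String status) {
    // env is provided by the pipeline runtime
    echo "Build ${env.JOB_NAME} #${env.BUILD_NUMBER} finished with status: ${status}"
}
```

Unlike an inline helper script, a library like this can have its own repository, unit tests, and documentation, and be reused across many jobs.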
The last half hour was dedicated to questions from the audience. Several people wanted to know more about specific Jenkins features and what was on the agenda for future development work.
One question that was on many people’s minds was raised near the end though, ‘I have a fairly old Jenkins installation that mostly works but it’s all hand-configured jobs and older plugins. Should I throw it out and re-write it with all the latest features you’ve just demonstrated?’ The answer from the speakers was clear - YES!
Well… except don’t throw it all out at once. Backwards compatibility is a cherished feature of Jenkins, and you can migrate it one job at a time. From what I’ve seen at the “Day of Jenkins”, I have a feeling that there will be a lot of developers in Gothenburg doing just that.
Find out more about the event on www.code-conf.com
See the speakers’ slides.