The conference calendar for software developers in Gothenburg is, quite frankly, a little sparse. So I was delighted when Praqma decided to hold one of their hands-on “Day of…” events here. This time it was Continuous Integration, and Jenkins in particular, on the agenda.
We were really excited about hosting some major league Jenkins developers and they really lit up the event with their insights and expertise. The tagline for the conference was ‘BYOL’ - Bring Your Own Laptop - as most of the sessions involved hands-on coding and learning how to use various Jenkins features. This wasn’t just any old run-of-the-mill training course. These speakers work at the Jenkins coalface - they’re exploring what you can do with Jenkins on a daily basis. Their deep knowledge of how Jenkins fits together underneath, coupled with their expertise in modern development practices, meant this definitely wasn’t a beginner class. This was more of a showcase: check out what Jenkins looks like when you’re doing it really well.
Developments From 2005 to 2017
We’d sold all the available tickets, so it was standing room only by the time Christopher Orr stepped up to the podium to speak about “The latest and greatest in the world of Jenkins.” This was hotly anticipated stuff. He gave us a potted history of the Jenkins project, complete with anecdotes about Kohsuke Kawaguchi, the original developer, and a picture of him drawing a sword (to kill bugs with, of course!).
Christopher brought us all up to date with the main features in Jenkins 2, released in 2016. I think there were many in the audience who were still using a much older version, and will be going back to the office to do some upgrading very soon.
Jenkins Makes Like A Phoenix
Next up was Andrey Devyatkin to discuss “Doing Jenkins as CoDe.” This was an intensive, hands-on session. Starting from a completely fresh, empty cloud instance, we got Jenkins set up and ready to roll in a matter of minutes. What’s more, we had a Jenkins job that could tear everything down, rebuild it from version-controlled source, and restart all the servers. It was a true Phoenix system - any modern DevOps engineer would be impressed!
It’s a technique that allows Jenkins administrators to give teams the freedom to experiment and set up their jobs in different ways while keeping control over the essentials. If something stops working, it’s easy to discard the changes and rebuild from a known-good state.
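To give a flavour of the idea, a ‘phoenix’ job might look something like the sketch below. This isn’t Andrey’s actual setup - the script names are assumptions - but it shows the principle: everything needed to destroy and recreate the Jenkins instance lives in version control.

```groovy
// Hypothetical 'phoenix' pipeline: tear everything down, then rebuild
// the whole Jenkins setup from version-controlled sources.
pipeline {
    agent any
    stages {
        stage('Teardown') {
            steps {
                // Assumed helper script kept in the repo alongside this Jenkinsfile
                sh './destroy-instance.sh'
            }
        }
        stage('Rebuild') {
            steps {
                sh './provision-instance.sh'  // recreate the cloud instance from scratch
                sh './configure-jenkins.sh'   // apply the version-controlled configuration
            }
        }
    }
}
```

Because the job itself is just code in the repository, throwing the instance away costs nothing - running the job brings it all back.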
Build A Declarative Pipeline. From Scratch!
Robert Sandell is a very experienced Jenkins developer, and in the next session he took us on a really interesting journey. He showed us how to build a declarative pipeline that builds and deploys a modern application - from scratch!
Not only did we see the various syntax and language features we could use, he also showed us how to work: build the pipeline up a little at a time and test it at every stage. Put in ‘hello world’ steps until you have the structure right, then gradually replace them with meaningful code. The final pipeline is available here on GitHub, but you really had to be there to truly appreciate how he built it.
It was beautifully done.
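For readers who weren’t in the room, a pipeline grown that way might start out as nothing more than a skeleton of placeholder stages - the stage names below are illustrative, not Robert’s actual pipeline:

```groovy
// Step one of the 'grow it gradually' approach: a declarative pipeline
// whose steps are all placeholders. Once this structure runs cleanly,
// each echo is swapped for a real build, test, or deploy command.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'hello from Build'   // later: compile and package the app
            }
        }
        stage('Test') {
            steps {
                echo 'hello from Test'    // later: run the test suite
            }
        }
        stage('Deploy') {
            steps {
                echo 'hello from Deploy'  // later: push to the target environment
            }
        }
    }
}
```

The appeal of this approach is that the pipeline is runnable from the very first commit, so every refinement can be tested immediately.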
Surprise: the agents don’t control the build!
Another long-time Jenkins contributor, Oleg Nenashev, took to the stage next. He gave us an in-depth look at the “Dark side of Jenkins: Dealing with agent connectivity issues.”
Being able to distribute builds to an army of agents (previously called slaves) was one of Jenkins’ early killer features. It meant you could get more builds done in less time by throwing hardware at the problem. So, the first thing Oleg asked us was whether we thought the agent controls the build. Surprisingly, it doesn’t! The Jenkins master is constantly in communication with the agents to control what they’re doing. If the connection is flaky it can have a really negative impact on the result.
Oleg has clearly done a lot of work in this area and has seen every imaginable problem in the wild. This was a very technical session and it made everyone appreciate the fact that someone like Oleg is working to improve this area. There was more information here than I hope I will ever need!
Sharing is Caring
The last hands-on session of the day saw Julien Pivotto take us through building our own shared libraries. This seems to me like a really useful feature - a bit like plugins, but at a lower level.
You can create code snippets encapsulating useful functionality, available to all your various jobs, and of course you can share them with the open source community too. Over time the power available in a declarative pipeline is just going to increase as more and more people contribute.
I was really pleased to realise that I could progress from a simple helper shell script used in one build job to a fully fledged program that I could unit test and document, and, of course, re-use in several other jobs.
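As a taste of what a shared library looks like, here’s a minimal sketch of a custom step. The library and step names are made up for illustration; the mechanism - a file in the library’s vars/ directory whose call method becomes a pipeline step - is the standard one.

```groovy
// vars/notifyBuild.groovy in a hypothetical shared library.
// Any file in vars/ defines a global step named after the file,
// callable from every pipeline that loads the library.
def call(String status = 'STARTED') {
    // env.JOB_NAME and env.BUILD_NUMBER are standard Jenkins environment variables
    echo "Build ${env.JOB_NAME} #${env.BUILD_NUMBER}: ${status}"
}
```

In a Jenkinsfile you would then load the library with `@Library('my-shared-library') _` (the library name here is an assumption) and call `notifyBuild('SUCCESS')` as if it were a built-in step - which is exactly the progression from one-off helper script to tested, documented, reusable code.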
The last half hour was dedicated to questions from the audience. Several people wanted to know more about specific Jenkins features and what was on the agenda for future development work.
One question that was on many people’s minds was raised near the end though, ‘I have a fairly old Jenkins installation that mostly works but it’s all hand-configured jobs and older plugins. Should I throw it out and re-write it with all the latest features you’ve just demonstrated?’ The answer from the speakers was clear - YES!
Well… except don’t throw it all out at once. Backwards compatibility is a cherished feature of Jenkins, and you can migrate it one job at a time. From what I’ve seen at the “Day of Jenkins”, I have a feeling that there will be a lot of developers in Gothenburg doing just that.
Find out more about the event on www.code-conf.com
See the speakers’ slides.