An automated release train, operated directly from my shell?
Imagine a workflow so sophisticated that you couldn’t break the integration branch even if you tried. And imagine being able to manage all of the issues, promotions, and deploys without leaving your terminal.
More than a year ago we presented a story about how we operate our Git workflow directly from the command line using a handful of useful aliases. Since then we've turned these aliases into a self-contained Git extension named Git phlow (pronounced /fləʊ/, like in flow). The extension is available as both Brew and Chocolatey packages (install instructions here).
The Phlow encourages you to tie all your commits to tasks. This gives you so many benefits that once you’ve tried it you won’t be able to remember how you ever lived without it.
Beware: the Phlow is not related to the blog post “The Git Flow - a Successful Branching Strategy” which made the internet unsafe a couple of years back. Actually, the Phlow was originally created to remedy the flaws in that approach. The Phlow implements a pure release train, and in that sense it represents the simplest, coolest Git branching strategy.
But, maybe the coolest feature of the Phlow is that it’s designed to be automated. The pipeline is a first-class citizen in the Phlow.
Behind the desire for an automated flow lies a subtle critique of the Pull-Request-based workflows that seem to have taken over the world, particularly since the success of GitHub and Atlassian. The Pull-Request approach works really well when one benevolent dictator and a few trusted lieutenants guard the repository while everyone else has to submit Pull-Requests. However, it's used extensively even in teams where everyone contributes to the code on equal terms.
You could argue that a Pull-Request represents unplanned work from the perspective of the person who is asked to review it, and queued work from the perspective of the developer. You could even say it's a paradigm that values control or policy over quality. Some say that Pull-Requests are good because they help teach the people involved. That is obviously beneficial, but it could be accomplished just as easily during development, through pair programming or simply chatting with colleagues. So we must assume that, as a quality gate, Pull-Requests exist to stop faulty code from reaching the integration branch by means of validation.
When we say we'd like an automated flow, we're referring to a process where the software being developed has a verifiable definition of done, and where we have implemented that definition as a pipeline. So we find it more natural to test whether the developer's code change can be verified in the pipeline.
Automatically. So we need a robot.
The software development process is LEAN. It resembles the process of optimizing a factory floor more than it resembles the process of building a building or a bridge. We’re in pursuit of a flow, and if it’s not an optimized flow we will improve it - continuously.
The Continuous Integration process (grey area) is triggered automatically on new commits and is optimized for speed. It integrates the code onto the master branch. The Continuous Delivery process (orange area) is also triggered automatically. It implements the complete definition of done and strives to verify a true release candidate. If something goes wrong, it means the software isn’t done yet.
As you can see from the picture, we put validation after we have had the robot do all the verification. There's no need to spend time on validation if the code isn't even verified.
The approach utilized by the Phlow is really simple. It builds on git branches and naming-conventions and isn’t dependent on a specific vendor technology - it’s just Git.
Seen from the developer’s terminal it’s really just three more git commands:
git workon 321
git wrapup
git deliver
The logic in this is that you use git workon followed by the number of the issue you want to work on. This will establish your entire setup locally: a dedicated branch to work on, named after the issue's heading and tied to the issue (currently GitHub Issues and Jira are supported), and it will mark the issue as work in progress.
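To make this concrete, here is a rough sketch in plain Git of what git workon automates. The issue number (321) and heading (“Fix login bug”) are hypothetical examples, and the throwaway origin repository is only scaffolding so the snippet runs on its own:

```shell
set -e
tmp=$(mktemp -d)                              # throwaway scaffolding for the demo
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.name demo && git config user.email demo@example.com
git symbolic-ref HEAD refs/heads/master       # be on master regardless of git defaults
git commit -q --allow-empty -m "initial"
git push -q origin master

# Roughly what `git workon 321` does: branch off the latest master,
# named after the issue number and its slugified heading.
git fetch -q origin
git checkout -q -b 321-fix-login-bug origin/master
git branch --show-current
```

The extension additionally talks to the issue tracker to label the issue as in progress, which plain Git obviously cannot do.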
When the work is done locally, the developer uses git wrapup. It adds the files that were changed or removed in the workspace to the index, and the commit message automatically closes the issue by referencing a keyword when it enters the master branch.
The next thing you do is git deliver, which pushes your local branch to origin but prefixes the branch name with a keyword (by default we use ready/....). This is what the CI server is looking for, and when it finds it the process kicks into action.
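In plain Git, the two commands could be sketched roughly like this, continuing the hypothetical issue #321 on a branch named 321-fix-login-bug (again, the throwaway origin is just scaffolding to make the snippet self-contained):

```shell
set -e
tmp=$(mktemp -d)                              # throwaway origin + clone for the demo
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.name demo && git config user.email demo@example.com
git symbolic-ref HEAD refs/heads/master
git commit -q --allow-empty -m "initial"
git push -q origin master
git checkout -q -b 321-fix-login-bug
echo fix > login.txt                          # pretend we did the work

# wrapup: stage everything and commit with an issue-closing keyword
git add -A
git commit -q -m "Fix login bug close #321"

# deliver: push the branch under the ready/ prefix the robot watches for
git push -q origin HEAD:ready/321-fix-login-bug
git ls-remote --heads origin ready/321-fix-login-bug
```

The commit message and branch name here are illustrative; the extension generates both from the issue for you.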
The robot is looking for “ready” branches
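The robot's side of the bargain (pretested integration) can be sketched like this: integrate the ready branch onto master, run the verification, and only publish if it passes. This is an illustration, not the plugin's actual implementation; the pipeline run is a placeholder and the repositories are throwaway scaffolding:

```shell
set -e
tmp=$(mktemp -d)                              # throwaway repos for the demo
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/dev" 2>/dev/null
cd "$tmp/dev"
git config user.name demo && git config user.email demo@example.com
git symbolic-ref HEAD refs/heads/master
git commit -q --allow-empty -m "initial"
git push -q origin master
git checkout -q -b 321-fix-login-bug
echo fix > login.txt && git add -A && git commit -q -m "Fix login bug close #321"
git push -q origin HEAD:ready/321-fix-login-bug

# -- the robot: integrate the ready branch, verify, publish --
git clone -q "$tmp/origin.git" "$tmp/ci" 2>/dev/null
cd "$tmp/ci"
git config user.name robot && git config user.email robot@example.com
git checkout -q master
git merge -q --no-ff origin/ready/321-fix-login-bug -m "Merge 321-fix-login-bug"
true                                          # placeholder: run the full pipeline here
git push -q origin master                     # only pushed if verification passed
git push -q origin :ready/321-fix-login-bug   # clean up the consumed ready branch
git log --oneline -1 origin/master
```

If the pipeline fails, the robot simply does not push, the ready branch stays put, and master is never broken.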
The only long-lived branch in the setup is master. This is “the release train”. If you want to get your code to production, you need to get it onto master - there's no other way. During your work you might want to indicate several levels of promotion. You can do that with more branches, provided they are restricted to fast-forwards only. It's simple to achieve: if you set up a rule that any promotion branch can only be fed from one other branch, the merges are guaranteed to be fast-forwards. In reality it's just an easy way to implement a floating label. So master forwards to stable, which forwards to release. If you need permanent retrievable states, such as your historic releases, you must tag them - maybe even using signed tags.
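A minimal sketch of this promotion scheme, using the branch names from above and a hypothetical v1.0.0 tag. Each promotion branch is only ever fed from the one before it, so --ff-only succeeds by construction (and fails loudly if the rule is ever broken):

```shell
set -e
tmp=$(mktemp -d)                              # throwaway repo for the demo
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.name demo && git config user.email demo@example.com
git symbolic-ref HEAD refs/heads/master
git commit -q --allow-empty -m "initial"
git branch -q stable                          # promotion branches start as labels on master
git branch -q release
git commit -q --allow-empty -m "verified change"

# promote: each branch only ever fast-forwards from the one before it
git checkout -q stable
git merge -q --ff-only master
git checkout -q release
git merge -q --ff-only stable
git tag -a v1.0.0 -m "Release 1.0.0"          # permanent, retrievable state (-s to sign)
git log --oneline -1 release
```

Because every promotion is a fast-forward, stable and release never contain commits that are not already on master - they really are just floating labels.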
From the perspective of the release train, all branches look the same. The orange line into master indicates that the process is automated. It can be a really simple case of deliver, or it can be a branch with a longer life span. It can even be a maintenance branch, which also turns out to be like any other development branch, except that its input is guarded by the same automated processes. It's a micro-setup of the grander scheme - same pipeline, same quality - only it's allowed to create maintenance releases to production. But when changes made here need to go to the release train, they follow the same procedure as any simple branch - because that's essentially what it is.
The automated integration gives the developers all the freedom they could possibly wish for. They can utilize the full potential of Git, pushing, sharing, and collaborating on any branch they want. They can even reuse branches if they're relevant, typically when an issue is reopened. The maintenance branch can continue to develop like any other branch. If it diverges from master, and manual work is required to sort out merge conflicts or errors, that is dealt with on an intermediate development branch in the development scope, where it can be integrated automatically. When order is restored on a maintenance branch, the following deliveries can again run automatically and directly to the master branch. If a branch is not contributing to the next upcoming release, it has no business on the master branch, but it can of course still keep in sync so that a future integration will not contain any surprises.
By following only 10 simple commitments when you work in Git, you can have a branching strategy that will support a software project of any size. And the git-phlow extension supports the Phlow out of the box. If you are using Jenkins, we've even made it really easy to set up the whole automation using the Pretested Integration Plugin, which has been updated in its most recent release to support Jenkins Pipelines.
We have made a poster - just write to us if you want one to hang in your office. It makes life so much easier.