Lessons from deploying to Kubernetes automatically
Anyone who has worked with Docker containers, or any container technology for that matter, has at some point considered using them in production, simply because of their ease of use. Then, as you take the first steps towards a containerized production setup, quite a few problems arise.
I would like to share some lessons learned from containerization. Docker is assumed throughout, but follow along regardless of your container solution!
The first time anyone tries to put a container in production, it usually goes fairly smoothly. You build from a Dockerfile, you put the image in production, and it runs as expected. Fantastic. Then you try to do the same thing for a full application, and that is when the problems usually trickle in.
Without going into too much detail, many of these problems occur because state is required, and containers are supposed to be stateless. Orchestration solves many of them, and the solid candidates for an orchestration cluster are Swarm and Kubernetes. In this post the choice is Kubernetes, but the principles apply to any container orchestrator in a pipeline context.
Kubernetes and Swarm both offer tools for dealing with state in containerized production environments, but that is out of scope for this blogpost.
When working with containers in a continuous delivery pipeline, one of the differences is the maintenance of artifacts.
Instead of keeping a catalogue of binaries, you now keep a catalogue of container images with their entire state and dependencies baked in. This opens up possibilities that were not feasible previously. Given proper versioning, every commit on the delivery branch can output a Docker image with its state at that deployment. This is a subtle difference from a binary, but it means that given the same container images, set up in the same way, you will always get the same results.
Simply put - you can recreate the staging or production environment for a commit half a year old, without having to bring a machine back to the state it was in half a year ago.
While the choice of pipeline is a matter of taste, Concourse is an interesting newcomer. It has some subtle differences that force good practices when it comes to building pipelines.
Regardless of the build engine you have chosen, handling Docker images for a staging environment comes down to a few steps in any pipeline: check out the source, build the image from the Dockerfile, tag it with a version, and push it to a registry.
The Concourse configuration below does exactly that:
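A sketch of what such a Concourse configuration could look like follows; the repository URL, registry and resource names are hypothetical, and credentials are referenced as pipeline variables:

```yaml
# Hypothetical pipeline snippet: build the image on every commit and push it,
# tagged with the version, to a registry.
resources:
- name: app-source
  type: git
  source:
    uri: https://github.com/example/app.git     # hypothetical repository
    branch: master
- name: app-image
  type: docker-image
  source:
    repository: registry.example.com/app        # hypothetical registry
    username: ((registry-username))
    password: ((registry-password))

jobs:
- name: build-and-push
  plan:
  - get: app-source
    trigger: true
  - put: app-image
    params:
      build: app-source                         # directory containing the Dockerfile
      tag: app-source/version                   # file holding the version to tag with
```

Because the tag comes from a version file in the repository, every commit on the branch produces a uniquely addressable image.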
This process is quite straightforward for most pipelines. The next, and slightly more challenging, step is to spin up a complete staging environment for the application, including any dependencies the application might have on databases, other applications and so on.
Concourse was chosen this time because everything is maintained as code, which makes it easy to recreate. The Concourse pipeline in this blogpost can be found in my GitHub repository, which runs on an image provided by Stark and Wayne.
Having produced a Docker image and pushed it to a registry, the next bit is the deployment to a staging area. Concourse offers a community Kubernetes resource, which was tested out; the part that would be shared by ALL pipelines is access to an (HTTPS) cluster:
KUBECTL="/usr/local/bin/kubectl --server=$KUBE_URL --namespace=$NAMESPACE"

# configure SSL certs if available
if [[ "$KUBE_URL" =~ https.* ]]; then
  KUBE_CA=$(jq -r .source.cluster_ca < /tmp/input)
  KUBE_KEY=$(jq -r .source.admin_key < /tmp/input)
  KUBE_CERT=$(jq -r .source.admin_cert < /tmp/input)

  CA_PATH="/root/.kube/ca.pem"
  KEY_PATH="/root/.kube/key.pem"
  CERT_PATH="/root/.kube/cert.pem"

  echo "$KUBE_CA" | base64 -d > $CA_PATH
  echo "$KUBE_KEY" | base64 -d > $KEY_PATH
  echo "$KUBE_CERT" | base64 -d > $CERT_PATH

  KUBECTL="$KUBECTL --certificate-authority=$CA_PATH --client-key=$KEY_PATH --client-certificate=$CERT_PATH"
fi
Basically it is necessary to provide kubectl, a server URL and the namespace to deploy to. Given it is supposed to be a production setting, if the Kubernetes cluster in question was minikube for example, KUBE_CA would be ca.crt, KUBE_KEY would be apiserver.key and KUBE_CERT would be apiserver.crt. The author of the Kubernetes resource has chosen to base64 encode the certificates. That is not a necessary step, but it allows keeping each certificate as a one-liner in a credentials file.
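As a minimal sketch of that encoding step, the snippet below turns a PEM block into the base64 one-liner the credentials file expects, and decodes it again the way the task script does at deploy time. The certificate content is a placeholder, not a real certificate:

```shell
#!/bin/sh
# Placeholder PEM content standing in for a real cluster certificate.
CERT="-----BEGIN CERTIFICATE-----
placeholder-certificate-body
-----END CERTIFICATE-----"

# Encode and strip newlines so the value fits on one line in a credentials file.
ONELINER=$(printf '%s' "$CERT" | base64 | tr -d '\n')

# Decoding recovers the original multi-line PEM, as the task script does.
DECODED=$(printf '%s' "$ONELINER" | base64 -d)

printf '%s\n' "$DECODED"
```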
The /tmp/input file is an implementation detail of how Concourse passes configuration to resources.
At deployment time, however, one problem with the Concourse resource is that it simply does a rolling update of an existing deployment, changing its image. While this gets the job done, it makes recreating the deployment a headache. At this point, I decided to create a small project that generates yml, which can be used to deploy the container to Kubernetes alongside a service and an ingress.
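As an illustration, the generated yml for a given version could contain something along these lines; the names, port and domain are hypothetical, and the API versions are those current at the time of writing:

```yaml
# Hypothetical generated file: a deployment, service and ingress per version.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-1-0-42
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-1-0-42
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0.42   # the versioned image from the pipeline
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-1-0-42
spec:
  selector:
    app: app-1-0-42
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-1-0-42
spec:
  rules:
  - host: app-1-0-42.staging.example.com         # picked up by Træfik
    http:
      paths:
      - backend:
          serviceName: app-1-0-42
          servicePort: 80
```

Because every resource is named after the version, deployments of different versions can coexist, and each one can be created or deleted from its own file.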
Together with Træfik (which was covered in another blogpost), this means a domain is accessible that hits the newly deployed application. This makes any kind of end-to-end testing significantly easier, and mimics how the system would behave in a real production setting. A side effect of having everything as code is that the deployment to Kubernetes can be checked into source control, and then checked out again to recreate the exact same staging/production environment down the line!
The obvious benefit of this is that it massively helps a QA or operations team by allowing them to go back to ANY release and test it, which is fantastic if you have to maintain old releases.
The community Kubernetes resource lacked a few things, so I forked it and extended it. Instead of only updating a deployment's image, I allowed the usual:
kubectl create -f generated-file.yml
but also allowed automatic cleanup, via the equivalent:
kubectl delete -f generated-file.yml
Using the forked Concourse resource, this is the configuration used:
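A sketch of how such jobs could look follows; the resource names, the task file and the shape of the kubectl parameter are assumptions based on the fork described above:

```yaml
# Hypothetical jobs using the forked resource: create the deployment, run the
# end-to-end tests against it, then delete it again from the same files.
- name: deploy-to-staging
  plan:
  - get: generated-yml                 # the generated deployment/service/ingress files
    trigger: true
  - put: kubernetes-staging
    params:
      kubectl: create -f generated-yml/generated-file.yml
- name: end-to-end-test
  plan:
  - get: generated-yml
    passed: [deploy-to-staging]
    trigger: true
  - task: run-e2e-tests
    file: generated-yml/ci/e2e.yml     # hypothetical task definition
  - put: kubernetes-staging
    params:
      kubectl: delete -f generated-yml/generated-file.yml
```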
Notice that the end-to-end test also cleans up the deployed jobs from the exact same files.
The application used for testing did not have a database. But in principle, nothing prevents a database from being deployed alongside the application inside the cluster in exactly the same manner, which allows a full spin-up of a production-like system - and then a full tear-down - effectively creating a staging environment as close to production as you can get.
The full staging pipeline as made in Concourse only has one step left, which is to fast-forward the stable branch with the generated .yml files.
Similarly to how the staging environment is created, the same logic can be applied to production deployments by just changing the Kubernetes namespace to the production equivalent. Whatever accompanies a release should also be taken care of; in this pipeline, that means documentation and release notes. The steps for production are therefore almost identical to staging.
The pipeline is the same, but in addition to the release image, an image serving documentation and release notes is built and deployed alongside it to production. With proper versioning, that also means that the history of documentation and release notes becomes trivial to keep.
In turn, this means that a given version of the application can be delivered with the corresponding documentation, staging environment, production environment and release notes at that point in time, allowing a sales team to sell a specific version of the software, or an engineering team to consume an old version of the same application, without having to depend on anyone.
This approach is highly recommended, but the problem can be solved in many other ways. Simply serving documentation and release notes on an internal company page has the same effect, so it is a matter of taste.
The last bit of the Concourse pipeline looks like this, with documentation and release notes deployed alongside:
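A sketch of how this last job could look follows; as before, the resource and file names are hypothetical:

```yaml
# Hypothetical production job: deploy the release image together with the
# documentation and release notes image, in the production namespace.
- name: release-to-production
  plan:
  - get: generated-yml
    passed: [end-to-end-test]
  - get: docs-yml                      # generated yml for the docs/release notes image
    passed: [end-to-end-test]
  - put: kubernetes-production         # same resource, pointed at the production namespace
    params:
      kubectl: create -f generated-yml/generated-file.yml
  - put: kubernetes-production
    params:
      kubectl: create -f docs-yml/generated-docs.yml
```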
Here it is assumed that the level of testing is high enough that a release can be launched automatically, and scaled up using Kubernetes to ensure the new feature or bug fix does not impact the product negatively.
The next step is to tap into Kubernetes in a way where releasing means scaling up slowly, with heavy monitoring - which will be covered in another blogpost, so stay tuned!