Kubernetes as part of a (Concourse) delivery pipeline

Lessons from deploying to Kubernetes automatically

Anyone who has worked with Docker containers, or any container technology for that matter, has at some point considered using them in production, simply because of their ease of use. And then, as you take the first steps towards containerized production, quite a few problems arise.

Dealing with containerized production

I would like to share some lessons learned from containerization. Docker is assumed throughout, but follow along regardless of your container solution!

The first time anyone tries to put a container in production, it usually goes fairly smoothly. You build from a Dockerfile, you put the image in production, and it runs as expected. Fantastic. Then you try to do the same thing for a full application, which is when the problems usually trickle in.

Without going into too much detail, quite a lot of these types of problems happen because state is required, and containers are supposed to be stateless. Many of these problems are solved by orchestration, where the solid candidates for an orchestration cluster are Swarm and Kubernetes. In this post the choice is Kubernetes, but the principles apply to any container orchestration in a pipeline context.

Dealing with the state of containerized production environments is something that Kubernetes and Swarm offer tools for, but that is out of scope for this blogpost.

Opportunities with containers

In the context of a continuous delivery pipeline with containers, one of the key differences is how artifacts are maintained.

Instead of keeping a catalogue of binaries, it is now a catalogue of container images with their entire state and dependencies baked into the image. This opens up some new possibilities that were not feasible previously. Given proper versioning, every commit on the delivery branch can output a Docker image capturing its state at that deployment. This is a subtle difference from a binary, but it means that given the same container images, set up in the same way, you will always get the same results.

Simply put - you can recreate the staging or production environment for a commit that is half a year old, without having to bring a machine back to the state it was in half a year ago.

Enter Concourse

While the choice of pipeline is a matter of taste, Concourse is an interesting newcomer. It has some subtle differences which force good practices when it comes to building pipelines.

Regardless of the build engine you have chosen, handling of Docker images for a staging environment comes down to a few steps in any pipeline:

  • Versioning is required. Each Docker image produced should have a different version, embedded in its registry tag
  • Building the Docker image itself
  • Managing the artifact in a Docker registry

The Concourse configuration below does exactly that:

Concourse releasing application
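Since the actual configuration is kept in the repository, here is a rough sketch of what such a pipeline can look like. The resource names, URIs, registry address and credential variables are placeholders, not the exact configuration shown above:

resources:
- name: app-source
  type: git
  source:
    uri: https://github.com/example/app.git
    branch: master

- name: version
  type: semver                              # keeps track of the version to embed in the tag
  source:
    driver: git
    uri: git@github.com:example/app-version.git
    branch: version
    file: version
    private_key: ((version-repo-key))

- name: app-image
  type: docker-image                        # the artifact in the Docker registry
  source:
    repository: registry.example.com/example/app
    username: ((registry-user))
    password: ((registry-password))

jobs:
- name: build-and-push
  plan:
  - get: app-source
    trigger: true
  - put: version
    params: { bump: patch }                 # every build produces a new version number
  - put: app-image
    params:
      build: app-source                     # directory containing the Dockerfile
      tag_file: version/version             # embed the version in the registry tag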

This process is quite straightforward for most pipelines. The next, and slightly more challenging, step is to spin up a complete staging environment for the application, including any dependencies the application might have on databases, other applications and so on.

  • Deploy to staging environment - in our case that is Kubernetes
  • Deploy any dependencies alongside the application
  • Run all the relevant tests!

Concourse was chosen this time because everything is maintained as code, which makes the pipeline easy to re-instantiate. The Concourse pipeline in this blogpost can be found in my GitHub repository, and it runs on an image provided by Stark & Wayne.

Examining the community resources

Having produced a Docker image and pushed it to a registry, the next step is deployment to a staging area. Concourse offers a community Kubernetes resource, which was tested out; the part that would be shared by ALL pipelines is access to an (HTTPS) cluster:

KUBECTL="/usr/local/bin/kubectl --server=$KUBE_URL --namespace=$NAMESPACE"
# configure SSL Certs if available
if [[ "$KUBE_URL" =~ https.* ]]; then
    KUBE_CA=$(jq -r .source.cluster_ca < /tmp/input)
    KUBE_KEY=$(jq -r .source.admin_key < /tmp/input)
    KUBE_CERT=$(jq -r .source.admin_cert < /tmp/input)
    CA_PATH="/root/.kube/ca.pem"
    KEY_PATH="/root/.kube/key.pem"
    CERT_PATH="/root/.kube/cert.pem"

    echo "$KUBE_CA" | base64 -d > $CA_PATH
    echo "$KUBE_KEY" | base64 -d > $KEY_PATH
    echo "$KUBE_CERT" | base64 -d > $CERT_PATH

    KUBECTL="$KUBECTL --certificate-authority=$CA_PATH --client-key=$KEY_PATH --client-certificate=$CERT_PATH"
fi


Basically it is necessary to provide kubectl, a server URL and the namespace to deploy to. Even though this is meant for a production setting, if the Kubernetes cluster in question were minikube, for example, KUBE_CA would be ca.crt, KUBE_KEY apiserver.key and KUBE_CERT apiserver.crt. The author of the Kubernetes resource has chosen to base64 encode the certificates; that is not strictly necessary, but it allows keeping them in a credentials file as a one-liner per certificate.

The /tmp/input file is an implementation detail of the Concourse resource: it holds the JSON payload the resource receives from Concourse, which is where the source values above are read from.
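For completeness, the resource declaration feeding the script above can look roughly like this. The certificate field names follow the jq lookups in the script; the resource type name and the server-URL/namespace field names are assumptions, and all values are placeholders:

resources:
- name: kubernetes-staging
  type: kubernetes                               # the community Kubernetes resource type
  source:
    cluster_url: https://kube-api.example.com    # ends up as $KUBE_URL
    namespace: staging                           # ends up as $NAMESPACE
    cluster_ca: ((base64-encoded-ca-crt))        # base64-encoded, decoded to ca.pem
    admin_key: ((base64-encoded-apiserver-key))  # base64-encoded, decoded to key.pem
    admin_cert: ((base64-encoded-apiserver-crt)) # base64-encoded, decoded to cert.pem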


The actual deployment

However, at deployment time one of the problems with the Concourse resource is that it simply does a rolling update of an existing deployment, changing the image. While this gets the job done, it means that recreating the deployment later is a headache. At this point, I decided to create a small project which can generate yml that deploys the container to Kubernetes alongside a service and an ingress.

Together with Træfik (which was covered in another blogpost) this means a domain becomes available that routes to the newly deployed application. This makes any kind of end-to-end testing significantly easier, and mimics how the system would behave in a real production setting. A side effect of having everything as code is that the deployment to Kubernetes can be checked into source control and then checked out again to recreate the exact same staging/production environment down the line!

Kubernetes generated yml
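The generated yml has roughly the shape sketched below: a Deployment, a Service and an Ingress per application version. Names, labels, ports and the host are placeholders, and the API versions are the current ones rather than what the generator produced at the time:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-1-2-3                       # one uniquely named deployment per version
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp, version: "1.2.3" }
  template:
    metadata:
      labels: { app: myapp, version: "1.2.3" }
    spec:
      containers:
      - name: myapp
        image: registry.example.com/example/app:1.2.3
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-1-2-3
spec:
  selector: { app: myapp, version: "1.2.3" }
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-1-2-3
spec:
  rules:
  - host: myapp-1-2-3.staging.example.com # the domain Træfik routes to this deployment
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-1-2-3
            port:
              number: 80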

The obvious benefit of this is that it massively helps a QA or operations team by allowing them to go back to ANY release and test it, which is fantastic if you have to maintain old releases.

The community Kubernetes resource missed a few things, so I forked it and extended it. Instead of only updating a deployment's image, it now allows the usual:

kubectl create -f generated-file.yml

and also allows automatic cleanup with the equivalent:

kubectl delete -f generated-file.yml

With the forked Concourse resource, the configuration looks like this:

Deploy to Kubernetes
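The actual job definition is shown above and kept in the repository. Purely as an illustration, the shape of such a job could be the following; the put parameter names and task file paths are placeholders standing in for "kubectl create -f" and "kubectl delete -f", not the forked resource's real interface:

jobs:
- name: deploy-and-test-staging
  plan:
  - get: app-source
    passed: [build-and-push]
  - get: app-image
    passed: [build-and-push]
    trigger: true
  - task: generate-yml                   # runs the small yml-generating project
    file: app-source/ci/generate-yml.yml # outputs a generated-yml directory
  - put: kubernetes-staging              # equivalent of: kubectl create -f generated-file.yml
    params:
      command: create
      file: generated-yml/generated-file.yml
  - task: end-to-end-tests
    file: app-source/ci/e2e-tests.yml
    ensure:                              # clean up even if the tests fail
      put: kubernetes-staging            # equivalent of: kubectl delete -f generated-file.yml
      params:
        command: delete
        file: generated-yml/generated-file.yml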

Notice that the end-to-end test also cleans up the deployed resources using the exact same files.

The application used for testing did not have a database, but in principle there is nothing preventing a database from being deployed alongside the application inside the cluster in the exact same manner. This allows a full spin-up of a production-like system - and then a full tear-down - which effectively gives a staging environment as close to production as you can get.
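As an assumed example, a throwaway database could simply be another Deployment and Service in the same generated manifests - for instance a PostgreSQL container the application reaches by service name:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-db-1-2-3
spec:
  replicas: 1
  selector:
    matchLabels: { app: myapp-db, version: "1.2.3" }
  template:
    metadata:
      labels: { app: myapp-db, version: "1.2.3" }
    spec:
      containers:
      - name: postgres
        image: postgres:9.6
        env:
        - name: POSTGRES_PASSWORD
          value: staging-only-password  # acceptable for a disposable staging database
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-db-1-2-3                  # the application connects to the database via this name
spec:
  selector: { app: myapp-db, version: "1.2.3" }
  ports:
  - port: 5432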

The full staging pipeline as built in Concourse only has one step left, which is to fast-forward the stable branch with the generated .yml files.

Full staging pipeline
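The fast-forward itself can be done with the standard git resource, assuming a previous task has committed the generated .yml files; resource and input names here are placeholders:

# A plain git push only succeeds as a fast-forward, matching the step described above.
- put: stable-branch                     # a git resource whose source points at the stable branch
  params:
    repository: repo-with-generated-yml  # input where a previous task committed the .yml files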

The impact this has on production

Similarly to how the staging environment is created, the same logic can be applied to deployments to production by just changing the namespace in Kubernetes to its production equivalent. Whatever goes with a release should also be taken care of; in this pipeline, documentation and release notes are included. The steps for production are therefore:

  • Similarly to the staging environment, after running tests it is possible to deploy directly to the production namespace
  • (Optional) Deploy documentation with release
  • (Optional) Deploy release notes with release

The pipeline is the same; just as it produces a release image, it can produce an image serving documentation and one serving release notes, deployed alongside the production image to production. Because of proper versioning, keeping the history of documentation and release notes also becomes trivial.
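A sketch of the production counterpart: the same Kubernetes resource with only the namespace changed, plus two extra image resources following the same build-and-push pattern as the application. Names, URLs and the resource type are placeholders:

resources:
- name: kubernetes-production
  type: kubernetes
  source:
    cluster_url: https://kube-api.example.com
    namespace: production                # the only real difference from the staging resource
    cluster_ca: ((base64-encoded-ca-crt))
    admin_key: ((base64-encoded-apiserver-key))
    admin_cert: ((base64-encoded-apiserver-crt))

- name: docs-image                       # image serving the documentation for this version
  type: docker-image
  source:
    repository: registry.example.com/example/app-docs

- name: release-notes-image              # image serving the release notes for this version
  type: docker-image
  source:
    repository: registry.example.com/example/app-release-notes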

In turn, this means that a given version of the application can be delivered together with the corresponding documentation, staging environment, production environment and release notes at that point in time, allowing a sales team to sell a specific version of the software, or an engineering team to consume an old version of the same application, without having to depend on anyone.

This approach is highly recommended, but the same can be achieved in many other ways. Simply serving documentation and release notes on an internal company page has the same effect, so it is a matter of taste.

The last bit of the Concourse pipeline looks like this, with documentation and release notes deployed alongside:

Full production pipeline

Here it is assumed that the level of testing is high enough that a release can be launched automatically, and scaled up using Kubernetes to ensure the new feature or bug fix does not impact the product negatively.

The next step is to tap into Kubernetes in a way where releasing means scaling up slowly, with heavy monitoring - which will be covered in another blogpost, so stay tuned!