An immutable infrastructure approach
Immutable infrastructure as code reduces inconsistency and makes deployments faster and easier. We can provision immutable infrastructure with Packer & Terraform. Let’s use them to provision Jenkins Windows build slaves.
In a previous post we discussed how to deploy Jenkins on a Kubernetes cluster using Helm and how to connect Windows build slaves to it. Due to the immaturity of Windows support in Kubernetes, we placed the Windows build slaves in a separate VPC. But manually provisioning and booting those slaves is far from ideal in the Continuous Delivery and DevOps age! In this post we will discuss how to automate the provisioning of Windows build slaves on AWS using Packer and Terraform.
While we use AWS as the IaaS provider in this post, both Packer and Terraform support many other cloud providers.
It is possible to provision Windows build slaves using configuration management tools (e.g. Chef or Puppet). However, configuration management tools build a mutable infrastructure in which your servers can drift out of sync with each other over time. This is referred to as configuration drift. Packer and Terraform (among other tools) allow you to create immutable infrastructure instead. You build a pre-baked image containing all the software you need using Packer, then provision all your cloud resources from that image using Terraform. When an update is due, you apply it to your images and provision new cloud resources from them. The old resources are then disposed of.
You might want to keep your old resources a little longer while you make sure the new, updated resources behave as intended.
You can read more about why you should not use configuration management here.
Please note that there are cases where configuration management tools are still needed. For example, to install your OS of choice on bare-metal infrastructure.
Packer enables us to create a Windows VM image containing all the dependencies needed to run the build slave. Meanwhile, Terraform enables us to launch and manage instances on AWS from an Amazon Machine Image (AMI) created by Packer.
Packer generates machine images from code templates. A template defines how to build the image using builders (which are specific to the target environment, e.g. AWS or VirtualBox) and provisioners, which install software and perform the required setup inside the image. We use a Packer template which launches an EC2 instance from the latest Windows Server 2016 image in AWS and installs Docker and Java on it. It then downloads the Jenkins slave.jar from the Jenkins master and creates a script that starts the slave and connects it to the master. Once provisioning is done, Packer registers the new AMI in your AWS account and cleans up the resources it used.
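As a rough sketch, such a template could look like the following. Note that the region, instance type, AMI name and provisioning script names here are illustrative placeholders, not the exact values from our repository:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "instance_type": "t2.medium",
    "source_ami_filter": {
      "filters": {
        "name": "Windows_Server-2016-English-Full-Base-*",
        "platform": "windows"
      },
      "owners": ["amazon"],
      "most_recent": true
    },
    "communicator": "winrm",
    "winrm_username": "Administrator",
    "user_data_file": "ec2-userdata.ps1",
    "ami_name": "jenkins-windows-slave-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "powershell",
    "scripts": ["install-docker.ps1", "install-java.ps1", "setup-jenkins-slave.ps1"]
  }]
}
```

The `source_ami_filter` picks the most recent official Windows Server 2016 base AMI, so the template keeps working as Amazon releases updated base images.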
By installing Docker into the AMI, Jenkins jobs can be executed in containers. This means creating a custom build environment for specific jobs is as simple as creating a Dockerfile and building a Docker image from it.
Terraform is an infrastructure-as-code tool used to build, evolve and version infrastructure on cloud providers. Terraform uses version-controlled code templates, which allows sharing and re-use of the infrastructure definition. Using a few Terraform templates, we create a VPC in AWS, launch a number of Windows instances from the Packer-built AMI, and start the build slaves on them.
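A minimal sketch of such a template is shown below; the CIDR blocks, instance count and variable names are assumptions for illustration, and the AMI id is the one produced by the Packer build:

```hcl
resource "aws_vpc" "build_slaves" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "build_slaves" {
  vpc_id     = "${aws_vpc.build_slaves.id}"
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "windows_slave" {
  count         = 2
  ami           = "${var.packer_ami_id}"   # the AMI built by Packer
  instance_type = "t2.medium"
  subnet_id     = "${aws_subnet.build_slaves.id}"

  # Runs on first boot and starts the Jenkins slave
  user_data = "${file("start-slave.ps1")}"
}
```

Running `terraform apply` with a new `packer_ami_id` is then all it takes to roll out updated slaves.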
The key difference when using Packer to create Windows images rather than Linux ones is the communicator: Packer talks to the Windows instance while it is being configured over WinRM (Windows Remote Management) instead of SSH.
Unlike SSH on Linux, WinRM needs to be configured on the Windows instance before Packer can connect to it to provision software. This is done through user data. When Packer launches a new instance on AWS - to be used for building the AMI - it can pass a user data script to AWS, which will run that script on the instance while it is booting. The ec2-userdata.ps1 script contains the WinRM setup for opening the required ports and adding listeners. More details about this setup can be found here.
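A sketch of such a user data script, based on the commonly used Packer example for Windows (this opens WinRM up for the build instance only; it is not hardened for long-lived servers):

```powershell
<powershell>
# Configure WinRM so Packer can connect during provisioning
winrm quickconfig -q
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
winrm set winrm/config/service/auth '@{Basic="true"}'

# Open the WinRM HTTP port in the Windows firewall
netsh advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow

# Make sure the WinRM service is running and starts on boot
net stop winrm
sc.exe config winrm start=auto
net start winrm
</powershell>
```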
If you face problems with WinRM, and want to check that it is running, have a look at this page for some useful commands.
Packer creates the AMI by launching an instance and performing all the required provisioning on it before capturing it as an image. On AWS, when a Windows instance boots for the first time, it is initialized using EC2Launch (or EC2Config for Windows versions older than Windows Server 2016). The initialization involves creating a random admin password and executing any specified user data. The initialization is registered as a service to be executed at boot time, and when it completes the service is deregistered. This means further boots of the instance will not generate new admin passwords and will not execute user data scripts.
When you create an instance from the generated Packer AMI it inherits the admin password of the instance used to create the AMI. It also skips executing any new user data because, as just discussed, the initialization is deregistered. What if you want to execute any user data scripts on the instances you launch from the Packer-generated AMI?
You need to configure the instance used for creating the AMI (during Packer provisioning) to treat the next boot as a new launch (which means allocating a new random password and executing any new user data script). On Windows Server 2016, this is done using EC2Launch (older Windows versions are configured using EC2Config).
The EC2Launch scripts triggering the instance initialization can be found (on the Windows instance) in C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\
You can configure the Packer-generated AMI to run the initialization scripts on the next boot by executing the following command on the Windows instance (during Packer provisioning):
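Per the AWS EC2Launch documentation, scheduling initialization for the next boot is done by running the InitializeInstance.ps1 script with the -Schedule flag, for example from a Packer powershell provisioner:

```powershell
& C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
```

After this, the first boot of any instance launched from the resulting AMI will generate a fresh admin password and execute its own user data.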
Now we have an immutable infrastructure managed by Terraform. Want to upgrade the Java version used on the Windows instances? Simply update your Packer template, build a new AMI, and run the terraform apply command. Voila! Your upgraded build slaves will be up and running in a few minutes.
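The upgrade loop boils down to two commands (the template name and variable are illustrative placeholders):

```shell
packer build windows-slave.json                  # bake a new AMI with the updated Java version
terraform apply -var "packer_ami_id=ami-..."     # replace the old instances with new ones
```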
Want to have a custom environment for specific build jobs? Build a Docker image (using a Dockerfile) within your Packer template and generate an AMI before updating your infrastructure with Terraform. Now your build slaves will be able to run containers to provide whichever custom environment you want. Simple!
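For example, a hypothetical Dockerfile for a Maven build environment on Windows might look like this (the base image tag and the Chocolatey-based tooling are assumptions for illustration):

```dockerfile
# Hypothetical custom build environment for Maven jobs (Windows containers)
FROM mcr.microsoft.com/windows/servercore:ltsc2016

# Install Chocolatey, then the build tools the jobs need
RUN powershell -Command \
    "Set-ExecutionPolicy Bypass -Scope Process -Force; \
     iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
RUN choco install -y maven jdk8
```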
The complete code is available on GitHub.