An immutable infrastructure approach
Immutable infrastructure as code reduces inconsistency and makes deployments faster and easier. We can provision immutable infrastructure with Packer & Terraform. Let’s use them to provision Jenkins Windows build slaves.
In a previous post we discussed how to deploy Jenkins on a Kubernetes cluster using Helm and how to connect Windows build slaves to it. Due to the immaturity of Windows support in Kubernetes, we placed the Windows build slaves in a separate VPC. But manually provisioning and booting those slaves is far from ideal in the age of Continuous Delivery and DevOps! In this post we will discuss how to automate the provisioning of Windows build slaves using Packer and Terraform on AWS.
While we use AWS as the IaaS provider in this post, both Packer and Terraform support many other cloud providers.
It is possible to provision Windows build slaves using configuration management tools (e.g. Chef or Puppet). However, configuration management tools build a mutable infrastructure in which your servers can drift out of sync with each other over time. This is referred to as configuration drift. Packer and Terraform (among other tools) allow you to create immutable infrastructure instead. You build a pre-baked image with all the software you need using Packer, then provision all your cloud resources from that image using Terraform. When an update is due, you apply it to your images and provision new cloud resources from them. The old resources are then disposed of.
You might want to keep your old resources a little longer while you make sure the new, updated resources are behaving as intended.
You can read more about why you should not use configuration management here.
Please note that there are cases where configuration management tools are still needed. For example, to install your OS of choice on bare-metal infrastructure.
Packer enables us to create a Windows VM image containing all the dependencies needed to run the build slave. Meanwhile, Terraform enables us to launch and manage instances on AWS from an Amazon Machine Image (AMI) created by Packer.
Packer is used to generate machine images from code templates. The templates define how to build the image using builders (which are specific to the target environment, e.g. AWS or VirtualBox) and provisioners which are used to provision software and perform the required setup in the image. We use a Packer template which launches an EC2 instance of the latest Windows Server 2016 version (in AWS) and installs Docker and Java on it. Further, it downloads the Jenkins slave.jar from the Jenkins master and creates a script that will start the slave and connect it to the master. Once the provisioning is done, Packer will register the new AMI to your AWS account and clean up the resources it has used.
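As an illustration, a Packer template along these lines might look like the following sketch. The region, instance type, and provisioning script names are assumptions for the sake of the example, not the actual values used in this post:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "instance_type": "t2.medium",
    "source_ami_filter": {
      "filters": { "name": "Windows_Server-2016-English-Full-Base-*" },
      "owners": ["amazon"],
      "most_recent": true
    },
    "ami_name": "jenkins-windows-slave-{{timestamp}}",
    "communicator": "winrm",
    "winrm_username": "Administrator",
    "user_data_file": "ec2-userdata.ps1"
  }],
  "provisioners": [{
    "type": "powershell",
    "scripts": ["install-docker.ps1", "install-java.ps1", "setup-jenkins-slave.ps1"]
  }]
}
```

The `source_ami_filter` with `most_recent` is what picks up the latest Windows Server 2016 base image, and the `powershell` provisioner runs the setup scripts over WinRM once the instance is up.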
By installing Docker into the AMI, Jenkins jobs can be executed in containers. This means creating a custom build environment for specific jobs is as simple as creating a Dockerfile and building a Docker image from it.
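For example, a hypothetical build environment for jobs that need Node.js and Git could be defined in a Dockerfile like this (the base image and the use of Chocolatey are illustrative choices, not part of the original setup):

```dockerfile
# Hypothetical Windows build environment for specific Jenkins jobs
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]
# Install the Chocolatey package manager, then the toolchain the jobs need
RUN Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
RUN choco install -y nodejs git
```

Building this image on the slave (or pulling it from a registry) gives jobs a reproducible environment without touching the AMI itself.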
Terraform is an infrastructure-as-code tool which is used to build, evolve and version infrastructure on cloud providers. Terraform uses version controlled code templates which allow sharing and re-use of the infrastructure. Using some terraform templates, we create a VPC in AWS and launch a number of Windows instances and start the build slaves on them.
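A sketch of what such Terraform templates might contain is shown below. The resource names, CIDR blocks, and instance count are assumptions for illustration, and the syntax shown is modern Terraform (0.12+):

```hcl
# The AMI id produced by the Packer build is passed in as a variable
variable "slave_ami" {}

resource "aws_vpc" "build" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "build" {
  vpc_id     = aws_vpc.build.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "windows_slave" {
  count         = 2
  ami           = var.slave_ami
  instance_type = "t2.medium"
  subnet_id     = aws_subnet.build.id
}
```

Raising `count` (or moving to an autoscaling group) is then all it takes to add more build slaves.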
When using Packer to create Windows rather than Linux images, the key difference is the communicator: Packer talks to the Windows instance over WinRM (Windows Remote Management) while the instance is being configured.
Unlike SSH on Linux, WinRM needs to be configured on the Windows instance before Packer can connect to it to provision software. This is done through user data. When Packer launches a new instance on AWS to be used for building the AMI, it can pass a user data script to AWS, which will run that script on the instance while it is booting. The ec2-userdata.ps1 script contains the WinRM setup for opening the required ports and adding listeners. More details about this setup can be found here.
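A minimal version of such a user data script might look like the following. This is an illustrative sketch and deliberately permissive (basic auth, unencrypted transport) to keep it short; a production setup should use HTTPS listeners and tighter firewall rules:

```powershell
<powershell>
# Enable and configure WinRM so Packer can connect during provisioning
winrm quickconfig -q
winrm set winrm/config/service/auth '@{Basic="true"}'
winrm set winrm/config/service '@{AllowUnencrypted="true"}'
# Open the default WinRM HTTP port in the Windows firewall
netsh advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow
# Make sure the WinRM service starts automatically and is running
sc.exe config winrm start=auto
net start winrm
</powershell>
```

The `<powershell>` tags are how EC2 user data marks a script to be run with PowerShell on Windows instances.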
If you face problems with WinRM, and want to check that it is running, have a look at this page for some useful commands.
Packer creates the AMI by launching an instance and performing all the required provisioning on it before capturing it as an image. On AWS, when a Windows instance boots for the first time, it is initialized by EC2Launch (or EC2Config for Windows versions older than Windows Server 2016). The initialization involves creating a random admin password and executing any specified user data. The initialization is registered as a service to be executed at boot time, and when it completes the service is deregistered. This means subsequent boots of the instance will not generate new admin passwords and will not execute user data scripts.
When you create an instance from the generated Packer AMI it inherits the admin password of the instance used to create the AMI. It also skips executing any new user data because, as just discussed, the initialization is deregistered. What if you want to execute any user data scripts on the instances you launch from the Packer-generated AMI?
You need to configure the instance used for creating the AMI (during Packer provisioning) to treat the next boot as a new launch (which means allocating a new random password and executing any new user data script). On Windows Server 2016, this is done using EC2Launch (older Windows versions are configured using EC2Config).
The EC2Launch scripts triggering the instance initialization can be found (on the Windows instance) in C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\
You can configure the Packer-generated AMI to run the initialization scripts on the next boot by executing the following command on the Windows instance (during Packer provisioning):
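For EC2Launch, the AWS-documented way to do this is to schedule the initialization tasks for the next boot:

```powershell
# Schedule EC2Launch initialization (new password, user data) to run at next boot
C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeInstance.ps1 -Schedule
```

Running this as the last provisioning step means every instance launched from the resulting AMI behaves like a fresh launch.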
Now we have an immutable infrastructure managed by Terraform. Want to upgrade the Java version used on the Windows instances? Simply update your Packer template, build a new AMI, and run the Terraform apply command. Voilà! Your upgraded build slaves will be up and running in a few minutes.
Want to have a custom environment for specific build jobs? Build a Docker image (using a Dockerfile) within your Packer template and generate an AMI before updating your infrastructure with Terraform. Now your build slaves will be able to run containers to provide whichever custom environment you want. Simple!
The complete code is available on GitHub.