A setup for working with Windows build slaves
Installation and management of CI servers is a critical task for any IT team. Kubernetes and its package manager (Helm) provide an easy way to customize Jenkins installations. Let’s see how to do this and add Windows build slaves.
Container solutions (e.g. Docker) enable your developers to develop applications and ship them for testing or deployment without having to worry about environment setup or facing the “works-on-my-machine” phenomenon.
Kubernetes (aka k8s) provides an orchestration framework for containerized applications. This framework ensures the availability of the containerized application through health monitoring and the recovery of failed containers.
Containers and k8s speed up your dev-build-test-deploy cycle by abstracting away most of the infrastructure management overhead. New tools build on top of k8s and containers to enable infrastructure, configuration, and even application deployment to be created and managed as code. Such tools include Kops (a k8s cluster management tool) and Helm (a k8s package manager). They allow you to automate the creation and management of your k8s cluster, and to easily deploy and manage your applications on it.
Helm enables you to manage application deployments on your k8s cluster. It provides a set of stable packages, called charts, which you can deploy to your k8s cluster out of the box. Alternatively, Helm allows you to create your own charts and reuse them.
In this post, we will deploy a Jenkins CI server on a k8s cluster (on AWS) using Helm, and then connect Windows slaves to it. The following steps are needed to deploy Jenkins on k8s.
Note: While this post uses AWS as the cloud provider, the concepts and steps discussed here can be mapped to other cloud providers offering similar services.
Once you have created the cluster, you need to install kubectl, and then install Helm, on the machine you use to manage your cluster. Once installed, Helm needs to be initialized; this automatically installs its server-side component, called Tiller, on the k8s cluster you are using.
Now that you have a cluster, and Helm is installed and initialized, you can install the stable Jenkins chart on the k8s cluster using the following command:
helm install --name <your-release-name> stable/jenkins
That’s it! You now have a Jenkins master deployed on your cluster and exposed to the rest of the world through a load balancer (an AWS ELB in this case). Helm’s output tells you how to extract the URL of your Jenkins master and the admin password.
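For illustration, the commands Helm prints look roughly like the following (the release name "my-jenkins" and the "default" namespace are hypothetical; the Helm 2-era chart names its resources "<release>-jenkins"):

```shell
# ELB hostname of the Jenkins master service:
kubectl get svc my-jenkins-jenkins \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# Auto-generated admin password (stored as a base64-encoded secret):
kubectl get secret my-jenkins-jenkins \
  -o jsonpath='{.data.jenkins-admin-password}' | base64 --decode
```

These must be run on a machine whose kubeconfig points at the cluster; trust the exact commands in your own Helm output over this sketch.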
The figure below shows what has happened. Helm has created a deployment and a couple of services on your k8s cluster. The deployment defines a desired state for the pod containing the Jenkins master, which allows k8s to bring up a new pod running a Jenkins master inside a Docker container if the current pod dies. Since pods are mortal, the services created in this step allow other pods and applications to reach the Jenkins master using a stable DNS (service) name instead of the IP of the underlying pod. The Jenkins master port is exposed externally (accessible from anywhere on the internet) through the cloud provider’s load balancer (in this case, an AWS ELB). So, no matter what happens to the pods inside the cluster, you will always be able to reach your Jenkins master through the load balancer’s address. The Jenkins agent service, however, is exposed with the ClusterIP service type, meaning it is only accessible from within the cluster.
You need not worry if the defaults do not suit you: the Jenkins Helm chart has a configurable parameters (values) file which you can customize. For example, you can specify which plugins should be installed with your Jenkins and which Docker images to use for both master and slaves.
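As a sketch, a custom values file might pin the images and the plugin list. The key names below follow the Helm 2-era stable/jenkins chart and may differ between chart versions; the plugin versions are purely illustrative:

```shell
# Write a hypothetical values override file for the stable/jenkins chart.
# Key names follow the Helm 2-era chart; plugin versions are illustrative.
cat > jenkins-values.yaml <<'EOF'
Master:
  Image: "jenkins/jenkins"
  ImageTag: "lts"
  InstallPlugins:
    - kubernetes:1.12.6
    - workflow-aggregator:2.6
    - git:3.9.1
Agent:
  Image: "jenkins/jnlp-slave"
  ImageTag: "3.27-1"
EOF

# Then install the chart with the overrides:
# helm install --name my-jenkins -f jenkins-values.yaml stable/jenkins
```

Run `helm inspect values stable/jenkins` to see the full list of parameters your chart version actually supports.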
You might be wondering - where are the slaves?
We used the default configuration provided with the Jenkins Helm chart. In that configuration, Jenkins comes with the Kubernetes plugin installed. This plugin allows the master to create slaves on demand as Docker containers running in pods. This means that whenever there is a job, the master creates a slave on the fly inside your k8s cluster; the slave performs the job and is then discarded.
Now, you might be wondering: can you have Windows slaves too?
In a previous post, we discussed the current state of Windows support in k8s and concluded that, at this point in time, it is not mature enough for production-like environments when you use Kops to manage your k8s cluster. As a result, we recommended that Windows hosts be created outside of the cluster and allowed to communicate with applications deployed on it. The following diagram illustrates the architecture we use:
We created a separate AWS VPC to host Windows VMs hosting permanent Jenkins slaves.
Helm has exposed the Jenkins agent service (responsible for connecting slaves) to the cluster only, using the ClusterIP service type. Helm assumes that all slaves will connect from within the k8s cluster. However, the Windows slaves will connect from outside the cluster, as described above.
To make the Jenkins agent service accessible by Windows slaves connecting from the Windows VPC, there are two options:
Exposing the agent service using the NodePort service type
NodePort exposes the service on a specific port across all cluster nodes. This makes the service accessible from within the cluster using its ClusterIP, from within the k8s VPC using the nodes’ AWS private IPs, and externally (from outside the k8s VPC) using the public IP of any k8s node. The problem with this approach is that k8s nodes are mortal: when they die they are replaced with new nodes, and consequently with new external IPs.
Or, exposing the agent service using the LoadBalancer service type
A load balancer gives the Windows slaves a stable endpoint for the Jenkins agent service, regardless of the internal IP of the service or of the nodes backing it. Again, you will need to allow traffic from the Windows VPC only, for security reasons.
Note: for the first option, you need to allow traffic on the Jenkins slave listener port (default 50000) from your Windows VPC in your k8s nodes’ security group. By default, this security group blocks connections from outside the k8s cluster’s VPC.
Note: If you have multiple services that need to be accessible from outside your k8s cluster, you might consider using an HTTP reverse proxy (e.g. Træfik) instead, to avoid paying for a separate AWS ELB per service.
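Either option boils down to changing the agent service’s type. As a sketch, a replacement Service manifest for the second option might look like the following; the release name, selector label, port, and CIDR are all hypothetical, so adapt them to what `kubectl describe svc` shows for your release:

```shell
# A hypothetical Service manifest exposing the agent listener through an
# AWS ELB, restricted to the Windows VPC CIDR (all values are illustrative).
cat > jenkins-agent-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-jenkins-jenkins-agent
spec:
  type: LoadBalancer          # or NodePort for the first option
  selector:
    app: my-jenkins-jenkins
  ports:
    - name: slavelistener
      port: 50000
      targetPort: 50000
  loadBalancerSourceRanges:
    - 10.20.0.0/16            # allow the Windows VPC CIDR only
EOF

# Apply against your cluster:
# kubectl apply -f jenkins-agent-svc.yaml
```

On AWS, `loadBalancerSourceRanges` is translated into security group rules on the ELB, which covers the "allow traffic from the Windows VPC only" requirement.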
Now you can have a Jenkins master deployed on your k8s cluster that is able to create Linux slaves on demand inside the cluster. Further, it can connect to permanent Windows slaves hosted on VMs outside the k8s cluster. In an upcoming post we will talk about provisioning Jenkins Windows build slaves using Packer and Terraform. Stay tuned!