Getting control of your development environment
Most people have good days at work and not-so-good days. A good day is when you get to produce code. A bad day is when you spend all your time just figuring out what is going on.
When we produce software, there are various environments involved. There are the developer workstations, where we write code, build, and test locally. Then there are shared environments for integrating code and doing more extensive testing.
Not everyone pays much attention to these environments and how they are used throughout the development lifecycle. Here is a story from such a place told by Tom, a software developer at Yoretech.
When Tom joined his first project in the company, he was told to follow a wiki page with detailed instructions for setting up his development environment. He was also told the guide was probably outdated - after all, it only got corrected when a new guy joined the project.
He checked out source code, but to build or run it, he needed to install a bunch of support software on his PC. Then he needed to set some environment variables and Windows registry keys. Phew.
When he got it working, after a full day, he realized the wiki guide was in fact not quite up to date. So he corrected it with whatever he could remember from a day spent just getting to a point where he could write code and build software.
At the standup next morning, Tom feebly mentioned that perhaps the onboarding process could be made easier. The general consensus was that it wasn’t worth it, because the rate of new team members joining was low. Tom let it go - after all, it was his first week on the job and it was working now.
The project did continuous integration. They used Jenkins to build code and run tests. But the server was configured manually - both the master's environment and the build jobs. One day the Jenkins machine died. Tom asked the team members where the backup was and how old it was. Then he asked who was actually responsible for Jenkins, because he had never been told. It turned out no one was responsible; somebody had just gotten it working some years ago. So there was no backup, and Tom's spirit died just a little.
Well, they needed a build server, so off he went to create a new one. It took most of the day to get a new machine running, producing green builds of the code. He hoped the configuration was identical to the old one, but he really did not know.
Establishing environments, whether a developer’s computer or a project’s CI server, was so hard that people just wanted to get it done, not having to look at it again, and definitely not own it.
Then came the time to get ready to release The Software. The team had done agile sprints for months, automatically building their commits on a CI server (when it was running). The functional testing had mostly been done on a developer's PC, in small bits to confirm a user story had been implemented. Now they needed to install their own software on a machine like a customer would. First, they had to book a physical machine within the company. The machines were shared between projects, so Tom hoped not too many other projects were doing release testing at the same time. At least not on Windows Server 2012 R2, because only a few of the available machines could run it.
Tom's team got a machine and Tom went on to install the software. He started by finding a recent green build on the CI server and wrote down the build number. Again, he used a long document describing which versions of things to install, how to write which configuration files, and in which order. After all, things like Windows registry keys, JRE versions, and Tomcat settings really matter. Tom ensured that services started up as they should and that there were no errors in the logs. After a full day's work, Tom thought to himself that it shouldn't be so hard, but finally, the team could begin release testing the next day.
Release testing was an entire phase in the software development cycle. Everyone knew you couldn't just create an environment whenever you felt like it. Tom wondered whether virtual machines could be an option, but he was told they couldn't be trusted for performance. He then asked how the customers ran the software and was told that no one really knew.
Then he got an assignment of validating a GUI tool that was part of the delivery. It wasn't actually exploratory testing. He just had to follow - you guessed it - a list of steps of where to click and what to type, and make notes if anything unusual showed up. He thought it was a boring couple of hours and hoped he wouldn't be picked for that task again anytime soon.
When testing was complete, the team assembled the release. They searched their CI server for the latest green build. They trusted that to be good, since it was newer than the one used when release testing had begun. Then they found the corresponding documentation on a network drive. The documentation wasn't stored in version control. There were files with different version numbers in their names, so they just took the newest.
After the first release, Tom mentioned the pains of release testing in a retrospective and there was a lengthy discussion of the problems. The team had discussed this many times before, but, at the end of the day, customers paid for features. There wasn’t really any time for improving the development environment.
Looking at Tom's story, we see that besides writing code there are plenty of incidental tasks. He had to spend time on manual, tedious work. You may think that is normal - after all, we need these machines set up. But if you love writing code, start creating your infrastructure as code too.
You do this by describing the environment in code: the operating system, the installed software, and the configuration. It works even for physical machines, and you can move to virtual machines or containers later.
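As a minimal sketch of what this looks like, the kind of setup Tom performed by hand - a specific JRE, a specific Tomcat, configuration files - could be captured in a Dockerfile. The image tag, Tomcat version, and file names below are illustrative assumptions, not Tom's actual stack:

```dockerfile
# Pin the exact JRE version in code instead of in a wiki page
FROM eclipse-temurin:17-jre

# Install a pinned Tomcat version (version number is illustrative)
ENV TOMCAT_VERSION=9.0.85
RUN apt-get update && apt-get install -y curl \
 && curl -fsSL "https://archive.apache.org/dist/tomcat/tomcat-9/v${TOMCAT_VERSION}/bin/apache-tomcat-${TOMCAT_VERSION}.tar.gz" \
    | tar -xz -C /opt \
 && ln -s /opt/apache-tomcat-${TOMCAT_VERSION} /opt/tomcat

# Configuration is a file in version control, not a manual step
COPY server.xml /opt/tomcat/conf/server.xml

EXPOSE 8080
CMD ["/opt/tomcat/bin/catalina.sh", "run"]
```

Every developer - and the CI server - builds the same environment from this one file, instead of following a document that only gets corrected when someone new joins.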
Put the infrastructure code in version control, just like the rest of your code. You no longer need to be afraid of losing a machine or guess about the content of an environment. It is described in the code and the version control gives you the history of its evolution. You get a reproducible way to create environments. You will be able to create environments as often as you wish, so it is a path towards continuous delivery. The infrastructure code can be reviewed. It can be owned by developers. The definition of environments can be shared between developers and operations, fostering collaboration.
There are many popular tools out there - including Packer, Vagrant, Chef, and Docker - and you can get started for free with the help of online tutorials.
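To give a taste of one of these tools, here is a minimal Vagrantfile sketch describing a reproducible virtual machine. The box name and package list are assumptions chosen for illustration:

```ruby
# Vagrantfile - a development VM described entirely in code
Vagrant.configure("2") do |config|
  # Base box; the exact box name is illustrative
  config.vm.box = "ubuntu/jammy64"

  # Provisioning runs the same script for every developer, every time
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y openjdk-17-jre tomcat9
  SHELL
end
```

With a file like this in version control, `vagrant up` replaces a day of following a wiki page, and losing a machine is no longer a disaster.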