Managing versioned source-level dependencies
Splitting dependencies is the holy grail of software. Breaking up a monolith into reusable components and services changes everything, including your approach to version control.
Assembling the components into a usable and useful whole can happen at the source, binary, or service level, each with different tradeoffs. Generally, binary dependencies differ from source dependencies in format and purpose: source dependencies come in the shape of raw code, whereas binary dependencies come as compiled artifacts. Tools like Maven, Ant, and Gradle have made working with binary dependencies far more pleasant. Source dependencies, on the other hand, haven't received the same tool support. Despite the more cumbersome work environment, source dependencies still serve a purpose in modern software development, especially when you actively develop on separate repositories in parallel. This post makes a case for using thin-shell repos when dealing with source dependencies in a flat, multi-repo project, focusing on what thin-shell repos are, when to use them, and the benefits of doing so.
+---------+
| REPO_A  |-----+
+---------+     |     +-------------+
                +---> | ComponentAB |
+---------+     |     +-------------+
| REPO_B  |-----+
+---------+
The background for my approach to thin-shell repos was a client in the following situation: the company had decided on a flat multi-repo structure for their code base, the initial motivation being that each release object and shared library had its own git repository. To build the components locally, they copied the libraries from the sibling folders in the flat repo structure. On Jenkins, they used the multiple SCMs plugin and cloned the necessary repos into the same repo structure in the Jenkins workspace. The obvious Achilles' heel of this setup was its dependency on Jenkins, specifically the storage of old jobs in order to keep track of which git SHAs were used in each build.
So, the problem was: “How do we allow local development and testing, using source dependencies, in a flat repo structure, while ensuring a traceable and robust CI build system?”
When I described the situation to a colleague, he advised me to look into thin-shell repos. Disappointed by the search results for ‘thin shell repos in git’, I decided to write this post.
The thin-shell repo is a separate git repository whose only function is to version control other repos. As the name suggests, it works as a shell around the repos in focus. Given the client's situation, with a ‘ComponentA’ using ‘Library1’ and ‘Library2’ as building blocks, the thin-shell repo would look like this, imitating the flat repo structure used in development:
ComponentA_thin
|--- ComponentA
|--- Library1
|--- Library2
The thin-shell repo has its origin in Mercurial, an alternative SCM, where the repos are added as subrepos. In git, we use the equivalent: submodules.
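To make this concrete, here is roughly what the resulting .gitmodules file in the thin-shell repo would look like, assuming the repos live under your GitHub user as in the script further down:
[submodule "ComponentA"]
	path = ComponentA
	url = https://github.com/<gituser>/ComponentA.git
[submodule "Library1"]
	path = Library1
	url = https://github.com/<gituser>/Library1.git
[submodule "Library2"]
	path = Library2
	url = https://github.com/<gituser>/Library2.git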
Git submodules normally define a parent-child dependency, but using a thin shell lets us avoid the hierarchical structure. If you’re thinking “I’d rather not bother with submodules”, I get you. However, the point of this setup is for the submodules to be self-contained, so the usual struggles with submodules are hardly ever noticed.
With “Everything as code!” as the Eficode Praqma motto, we are naturally pointed in the direction of thin-shell repos as code. The point is to have a template script where you only fill in the name of the repo and the source dependencies needed; the shell script does the rest. Here is an excerpt of a potential script.
#!/usr/bin/env bash
# Add your github username
GIT_USER=<gituser>
# List the relevant repos needed, by their exact names; the component goes first
SUBMODS=(ComponentA Library1 Library2)
# Name of the thin-shell repo
GIT_REPO="${SUBMODS[0]}_thin"
# Create the thin-shell repo remotely (GitHub's endpoint for creating a repo under the authenticated user)
curl -u "${GIT_USER}" https://api.github.com/user/repos -d "{\"name\":\"${GIT_REPO}\"}"
# Create the local repo
mkdir "${GIT_REPO}"
cd "${GIT_REPO}"
git init
# Add the remote git server
git remote add origin "https://github.com/${GIT_USER}/${GIT_REPO}.git"
# Add each source dependency as a submodule
for SUB in "${SUBMODS[@]}"
do
  git submodule add "https://github.com/${GIT_USER}/${SUB}.git" "${SUB}"
done
# Commit the submodule references and push
git commit -m "Add submodules: ${SUBMODS[*]}"
git push --set-upstream origin master
Ideally configured, the thin-shell repo works under the hood. The developers continue to push their commits to ComponentA’s repo as usual, but Jenkins, in turn, builds ComponentA from the thin-shell repo, where it fetches the necessary submodules. This is easily configured with a Jenkins job that triggers on changes in the ComponentA repo. The trigger job then executes a downstream job, building ‘ComponentA’ through its thin-shell repo.
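As a minimal sketch, and assuming the GitHub layout from the script above plus a Makefile-based build with a hypothetical ‘all’ target, the downstream job’s shell step could look like this:
# Sketch of the downstream job's shell step (repo URL and make target are assumptions)
git clone --recurse-submodules https://github.com/<gituser>/ComponentA_thin.git
cd ComponentA_thin
# The flat repo structure is now in place, so the ordinary build entry point applies
make all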
How Jenkins handles the submodules is a question of company policy and preferences. In this case, the company wanted to focus on the daily development, meaning that any commit to master in ‘ComponentA’, ‘Library1’ or ‘Library2’ would trigger the thin job to collect the latest master commit from all submodules. A shell snippet ensures this functionality:
git submodule foreach 'git fetch origin && git checkout origin/master'
Jenkins ends the build by committing and pushing the updated submodule references to GitHub, referencing the build job in its commit message:
git commit -am "Jenkins job # ${BUILD_NUMBER}"
git push
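Put together, and assuming a workspace where the submodules may still need initializing, the thin job’s full shell step might look something like this sketch:
# Hypothetical consolidated shell step for the thin job
git submodule update --init    # make sure all submodules are checked out
git submodule foreach 'git fetch origin && git checkout origin/master'
git commit -am "Jenkins job # ${BUILD_NUMBER}"
git push origin HEAD:master    # pushing HEAD explicitly also works from a detached checkout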
In the daily development process, the thin-shell repo is in practice invisible, but it continuously controls the source dependencies used in ComponentA’s build. In addition, the developers don’t need to maintain the submodules; that is taken care of by Jenkins.
The main benefit of this setup is obviously its traceability. The thin-shell repo is fully version controlled, and the submodules point to specific commit SHAs. The command
git submodule status
in the thin-shell repo will list the submodules that are present and which commits they point to. The other strength of a thin-shell repo is that it imitates the flat repo structure, allowing the same build scripts and Makefiles locally and remotely, reducing the uncertainty from differences between local and remote builds.
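For illustration, the output looks roughly like this (the SHAs here are made up):
$ git submodule status
 f4d2a91be2c1... ComponentA (heads/master)
 09c1e77d53ab... Library1 (heads/master)
 5b3fa20c98e4... Library2 (heads/master)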
Lastly, when releasing, you can release a version of the thin repo, i.e. a combination of the submodules that is thoroughly tested and easily reproducible.
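A minimal sketch of what that could look like, using an annotated tag and a hypothetical version number:
# Tag the thin-shell repo; the tag pins the exact submodule SHAs of a tested combination
git tag -a v1.0.0 -m "Release 1.0.0 of ComponentA"
git push origin v1.0.0
# Anyone can later reproduce the released combination:
git clone --recurse-submodules --branch v1.0.0 https://github.com/<gituser>/ComponentA_thin.git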