Scratching the container networking itch
What to do when you need more than just ping to reach a container.
We know that the idea behind a Docker container is that it should have just enough software to run a particular process or service: for example, a web server, a Java application server, or a database server.
Images are designed to be minimalistic and lean in nature. If a container will only ever run a single process, why bother filling it up with unused software? Great! But because they are so lean, they can also be difficult to troubleshoot.
I have often needed more than just ping to reach a container running on a particular host on a particular container network.
Recently I was working on a Kubernetes cluster with service names set up using the SkyDNS addon, but I was not able to resolve the service names. I had nginx running as a container and, being minimalistic by nature, it had no tools inside except ping. I installed nslookup with the usual apt-get update and apt-get install dnsutils, but it still did not give me enough information about name resolution. It was not until I installed dig that I figured out what was going on. It took many container restarts and apt-get commands before things became clear.
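For anyone hitting the same wall, the query that finally surfaces this kind of answer looks roughly like the sketch below. The DNS server IP and service name are illustrative placeholders, not values from the original cluster; substitute your own kube-dns service IP and service name.

```shell
# Build the dig query to send straight at the cluster's DNS addon.
# 10.0.0.10 and nginx.default.svc.cluster.local are placeholders.
DNS_SERVER="10.0.0.10"
SERVICE="nginx.default.svc.cluster.local"
DIG_CMD="dig @${DNS_SERVER} ${SERVICE} +short"
# Run the printed command inside the pod to see exactly which
# records (if any) the cluster DNS returns for the service name.
echo "${DIG_CMD}"
```

Unlike nslookup, dig shows the full answer, authority, and additional sections, which is what makes the failure mode visible.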
It was a nasty itch and I needed a solution.
Being a big fan and user of multitools, such as the Leatherman Wave I carry as EDC, I wanted a container image with all the necessary tools installed in it. One I could use at will, without getting into the apt-get mess. I also wanted the image to run as a standard pod, so I could achieve two things: simply docker exec bash into it, and not have to remember complex kubectl commands to run it in interactive mode.
I went ahead and created praqma/network-multitool. I am a Red Hat fan, so I based my image on centos:7. Initially I used Apache as the web server, but later replaced it with nginx, which is very lightweight and fast.
The image can be used in any container environment. Here are a few examples of how you can use it.
[kamran@kworkhorse ~]$ docker run --rm -it praqma/network-multitool bash
[root@92288413e051 /]# nslookup yahoo.com
Server:         192.168.100.1
Address:        192.168.100.1#53

Non-authoritative answer:
Name:    yahoo.com
Address: 184.108.40.206
Name:    yahoo.com
Address: 220.127.116.11
Name:    yahoo.com
Address: 18.104.22.168
[root@92288413e051 /]#
[kamran@kworkhorse ~]$ docker run -P -d praqma/network-multitool
a76d156c674f2b61c9b9fb10f87c645620c4fcbe88a13162546379abc9a87f14
[kamran@kworkhorse ~]$ docker ps
CONTAINER ID   IMAGE                      COMMAND             CREATED          STATUS          PORTS                                           NAMES
a76d156c674f   praqma/network-multitool   "/start_nginx.sh"   31 seconds ago   Up 30 seconds   0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp   silly_franklin
[kamran@kworkhorse ~]$ docker exec -it silly_franklin bash
[root@a76d156c674f /]# curl -I yahoo.com
HTTP/1.1 301 Redirect
Date: Sun, 16 Apr 2017 16:09:20 GMT
Via: https/1.1 ir28.fp.ne1.yahoo.com (ApacheTrafficServer)
Server: ATS
Location: https://www.yahoo.com/
Content-Type: text/html
Content-Language: en
Cache-Control: no-store, no-cache
Connection: keep-alive
Content-Length: 304
[root@a76d156c674f /]#
First, run the container image as a deployment:
[kamran@kworkhorse ~]$ kubectl run multitool --image=praqma/network-multitool
deployment "multitool" created
[kamran@kworkhorse ~]$
Then find the pod name and connect to it in interactive mode:
[kamran@kworkhorse ~]$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
multitool-2814616439-hd8p6   1/1       Running   0          1m
[kamran@kworkhorse ~]$ kubectl exec -it multitool-2814616439-hd8p6 bash
[root@multitool-2814616439-hd8p6 /]# traceroute google.com
traceroute to google.com (22.214.171.124), 30 hops max, 60 byte packets
 1  gateway (10.112.1.1)  0.044 ms  0.014 ms  0.009 ms
 2  wa-in-f102.1e100.net (126.96.36.199)  0.716 ms  0.701 ms  0.896 ms
[root@multitool-2814616439-hd8p6 /]# exit
exit
[kamran@kworkhorse ~]$
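If you would rather not leave a deployment behind at all, a throwaway interactive pod also works. The sketch below composes such a one-shot invocation using standard kubectl flags; the pod name is arbitrary.

```shell
# Compose a one-shot interactive run; --rm deletes the pod on exit,
# so nothing is left behind in the cluster afterwards.
POD_NAME="multitool"
IMAGE="praqma/network-multitool"
RUN_CMD="kubectl run -it --rm ${POD_NAME} --image=${IMAGE} -- bash"
echo "${RUN_CMD}"
```

The trade-off is exactly the one mentioned above: the deployment approach means a simple docker/kubectl exec later, while the one-shot form requires remembering these flags.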
Creating this network multitool image has completely soothed my itch. Now I use it to solve all sorts of problems: packet capture, curl, you name it! I hope you will enjoy using this multitool as much as we do at Praqma.
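As an example of the packet-capture use case, a capture inside the container might be composed like this. This assumes tcpdump is among the installed tools, and the interface name and filter are placeholders to adjust for your environment.

```shell
# Compose a capture of plain HTTP traffic on the container's interface.
# eth0 and "port 80" are illustrative; pick your own interface/filter.
IFACE="eth0"
FILTER="port 80"
CAPTURE_CMD="tcpdump -i ${IFACE} -nn ${FILTER}"
# Run the printed command inside the multitool pod to watch traffic live.
echo "${CAPTURE_CMD}"
```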