Follow our blog from the 2017 Eiffel Summit
Let’s talk about Eiffel - the traceability protocol, not the programming language. As we are attending the Eiffel Summit, we let you in on how the big companies work with traceability, and on the latest collaborative projects.
“I attended the Eiffel Summit in Linköping on November 8th. This blog was posted the day before with good resources to introduce you to Eiffel, and on November 8th it was updated with highlights from the event itself.”
I assume you’re not new to traceability, so you want to know where your commit is in your build and delivery flow. Or whether the commit is on its way to a release? Or which bug fixes went into the last release?
Eiffel is a protocol formulated and open-sourced by Ericsson, used to describe traceability events: what was created, how, and when it happened. Rather than trying to reinvent our own data formats and protocols, we can use Eiffel. It is well proven within Ericsson and was recently released publicly together with some useful tools.
If you’re new to Eiffel, here are some resources that are a good start:
The image is the final result of explaining Eiffel in this must-see video presenting what this Eiffel thing really is.
In our world of Continuous Delivery and the daily work with industrial customers working with embedded software, a thing like traceability is a common area of interest and investment. We have been involved in many similar projects over the years, from research to practical simple solutions.
When I mention traceability in this post, it is not bound to any strict definition. Indeed, it can relate to very formal product life-cycle management frameworks like OSLC, or to the simpler approaches of trying to measure Continuous Delivery and producing simple traces of daily changes.
Recently these simpler solutions have received quite a lot of attention, as they are available by just creating JSON data snippets and throwing them into something like Elasticsearch, combined with Kibana for visualization.
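To make the "JSON snippets into Elasticsearch" idea concrete, here is a minimal sketch of such a trace snippet. The field names are purely illustrative, not any official schema; the serialized document is what you would POST to an Elasticsearch index and then explore in Kibana.

```python
import json
from datetime import datetime, timezone

# A minimal, hypothetical trace event -- field names are illustrative,
# not taken from any official schema.
event = {
    "type": "ArtifactCreated",
    "time": datetime(2017, 11, 7, tzinfo=timezone.utc).isoformat(),
    "data": {"artifact": "my-component-1.4.2.jar", "commit": "abc123"},
}

# Serialize to a JSON snippet; this document is what you would index
# into Elasticsearch and visualize in Kibana.
doc = json.dumps(event)
print(doc)
```

The appeal is exactly this low barrier: one small JSON document per interesting occurrence, and the search and dashboard tooling does the rest.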
But in real life it is not that simple, and we’re not done-done.
I’m taking over the project from my former colleague, Andrey Devyatkin, who did a proof-of-concept using the Eiffel protocol. Tracey is our name for what we did back then, and it can all be found on Github under Tracey. To be honest, I’m unsure if this is still the way to do it today, since many new efforts have been published in the community since last year, and some seem to replace our POC.
However, I do believe that two efforts of ours still seem very relevant and unique:
I simply want to explore the possibility of Eiffel being the final answer to our traceability questions. Is it the solution that stops us re-inventing customized data formats to obtain almost the same metrics and traceability?
This is one of the many reasons that made me join the Summit along with one of our customers.
I have seen too many small efforts and solutions from our customers and my fellow consultants to create traceability and measure Continuous Delivery. What concerns me is that we often have the same end-goal on the horizon, but we get there in many different ways and share neither the effort nor the costs.
Praqma will also focus on this topic next week, when our CoDe Alliance gathers for the 7th time to jointly roadmap concepts, ideas and open source tools. This time with Traceability as a theme.
The welcome slide. Implementing this kind of traceability actually has other benefits.
I was looking forward to today, but also a bit anxious about being an Eiffel newbie. That was just my own perspective, though; I would say I’m pretty familiar with traceability and what our customers want.
The program was tight. Daniel Ståhl from Ericsson, who must be said to be the community lead on Eiffel, first set the common ground and introduced Eiffel. Then came presentations from companies using Eiffel, ranging from what we could call expert users to beginners - one even said they were just a prospect.
After lunch we split into breakout sessions on suggested topics, with summaries and conclusions presented during a wrap-up. The day ended with a short community wrap-up where we tried to organize ourselves, and we also got recommendations through a video conference with Bas Peters from Github, who is an expert in building successful communities. I think we got a fine to-do list, thanks!
Since the attendees were at many different levels - users and non-users of Eiffel alike - setting the scene was excellent, and it made for a splendid introduction to Eiffel. I would say it was basically a summary of all the videos on the Youtube channel. I took special note of the point where our pipeline metaphor breaks down.
The valuable term, or metaphor, we often use within continuous delivery - the pipeline, often a build and delivery pipeline - breaks down when you go from components to products. You’re familiar with continuous delivery, and you go from commit to releasable artifact using your automated build and delivery process. But what happens after your team delivers? How far do you take responsibility for what you delivered? Within open source you’re not responsible for downstream use of what you build, but within companies you often deliver a piece of a product - we call those components - and are expected to take responsibility all the way to the end. This means you need to know what happens with your delivery when others start to use it. And this is usually something your pipeline doesn’t cover, as it is the other team’s pipeline - the product pipeline. Now you start getting angry emails about your component not working, maybe even in something you didn’t know it was used for.
You can get this transparency by proxy: if the other team cares to use Eiffel events and links back to your events, you can suddenly get that information and act on it.
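The mechanism that crosses the pipeline boundary is the links between events. A minimal Python sketch of the idea follows; note that real Eiffel events carry a much richer `meta` section and specific typed links, so the structure, the `COMPOSITION` link type, and the `pkg:` identities here are simplifications for illustration.

```python
import json
import uuid

def make_event(event_type, data, links=()):
    """Build a simplified Eiffel-style event. Real Eiffel events have a
    richer 'meta' section; this sketch keeps only what the trace needs."""
    return {
        "meta": {"id": str(uuid.uuid4()), "type": event_type},
        "data": data,
        "links": [{"type": t, "target": target} for t, target in links],
    }

# Your team publishes a component artifact ...
component = make_event("EiffelArtifactCreatedEvent",
                       {"identity": "pkg:my-component@1.4.2"})

# ... and the product team's event links back to it, so the trace
# crosses the boundary between your pipeline and theirs.
product_build = make_event("EiffelArtifactCreatedEvent",
                           {"identity": "pkg:product@2.0.0"},
                           links=[("COMPOSITION", component["meta"]["id"])])

print(json.dumps(product_build, indent=2))
```

Following the link from the product event back to the component event is what lets you see what happened downstream, without the two pipelines knowing about each other directly.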
This slide from the presentation gives a very good overview, also of what is actually available publicly.
In the middle we put the message bus, to which we either subscribe or publish Eiffel events. Remrem is a service that helps you easily generate or publish events, while many CI servers subscribe to or publish events directly, probably through some small scripts. To persist events for later analysis or historical reasons, a JSON database is used as an event repository. A newly published tool is Vici, which can visualize those persisted events in a very nice fashion, showing aggregated data and your whole pipeline process. Eiffel Intelligence is another tool that seems to do intelligent analysis and use of events.
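The key property of this architecture is that publishers and subscribers are completely decoupled through the bus. A tiny in-memory stand-in for the real message bus (which would be something like RabbitMQ) can illustrate the shape of it; the class and handler wiring here are my own sketch, not any Eiffel tool's API.

```python
from collections import defaultdict

class EventBus:
    """A tiny in-memory stand-in for the message bus (e.g. RabbitMQ).
    Publishers and subscribers never know about each other directly."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event):
        for handler in self.subscribers[event["meta"]["type"]]:
            handler(event)

bus = EventBus()

# The "event repository" role: persist every event for later analysis.
repository = []
bus.subscribe("EiffelArtifactCreatedEvent", repository.append)

# A CI server (or a small script, or a service like Remrem) publishes.
bus.publish({"meta": {"type": "EiffelArtifactCreatedEvent", "id": "1"},
             "data": {"identity": "pkg:my-component@1.4.2"}})
```

A visualization tool like Vici would simply be another subscriber, or read from the repository after the fact; nothing in the publisher has to change to support it.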
A good thing about Eiffel and the way it is designed is that you can start out simple. Use a few events, get value, and extend to more advanced stuff from there, applying intelligence and more.
I also remember a comparison to an upcoming tool called Ontrack, which is obviously opinionated but seems reasonable in the context of today: Ontrack is the centralized approach with a central database, while Eiffel takes the approach of a completely decoupled and scalable architecture - a choice made from hard lessons learned over the years.
I think we could benefit from a public roadmap somewhere, since there have been changes since the last Eiffel event. They were mostly relevant for existing users. It is good to hear there is activity.
One of the recent changes is security, with event properties to use for implementing it. Basically, asymmetric key-pair support to encrypt digests and attach them to the messages. It is then left to the recipient to decide whether to trust the message.
What about an operations extension? Another very interesting new thing on its way is a set of ideas around operations and deployment. It was a bit above my current Eiffel user level, but definitely something relevant for all those who deploy stuff into production and want to trace further what happens. I’ll try to give more information from the breakout session that was held in the afternoon.
Michael Frick showed the latest and greatest brew of Eiffel as a live demo. As I had heard from earlier events, it’s impressive when it’s all bound together, but some of the components are unfortunately not publicly available - they are Ericsson-internal only. A shame, because they seem very useful. As the architecture figure above also shows, some of the non-public components are the artifact repository plugins and the Jenkins plugin. We actually also have a not officially released Jenkins plugin that can subscribe to RabbitMQ messages, so you can trigger Jenkins jobs based on Eiffel events. Find it publicly available on our Github in the Tracey Jenkins Trigger Plugin repository. Tracey is the name of our proof-of-concept tools for getting Eiffel into action. If I remember correctly, the Ericsson Jenkins plugin can also create and send events. We should coordinate our efforts!
The demo concentrated on the Vici tool, which gives a great visualization of Eiffel events and their traces. Hopefully I’ll get a picture to show soon, but meanwhile you can think of a complete visualization of your entire build and delivery process - including the flow from those using your component. The aggregated view in Vici shows recognizable icons for common things, like an action taken, an artifact as a package, etc.
I think this visualization has great potential. I hope a public demo will be available very soon.
Tieto, which is heavily involved with Ericsson and Eiffel, explained their approach on how they could use Eiffel outside Ericsson in their SaaS offerings.
Next up was my presentation with Grundfos, where Flemming Ask Sørensen showed the result of our collaborative effort of enabling Eiffel messages in a pipeline using the Tracey proof-of-concept tools we made in Praqma last year. I would say it is a good entry-level roll-out of Eiffel that every company would be able to do, as it tracks four Eiffel events: first when a developer commits a change (EiffelSourceChangeCreatedEvent), then when it is integrated into an integration branch (EiffelSourceChangeSubmittedEvent), then when an artifact is built from it (EiffelArtifactCreatedEvent), and finally when we make the artifact available to others by emitting the event of an artifact being published (EiffelArtifactPublishedEvent).
We use the Tracey CLI generator for creating messages easily and the RabbitMQ CLI for sending them - both custom-tailored jar command-line tools we made. We’re considering whether they still make sense just a year later, as Remrem has been published since. But the easy-to-use approach might still be very valid - we need to look into that.
For example, one could run this from your Git repo, even just in your Jenkins job:

java -jar tracey-protocol-eiffel-cli-generator.jar -h EiffelSourceChangeCreatedEvent -p Praqma/tracey-protocol-eiffel-cli-generator -c HEAD~1 -f msg.json

and send it using:

java -jar tracey-cli-rabbitmq.jar say -f msg.json

where msg.json is the message generated just before.
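The chain of the four events above can be sketched as linked JSON documents. This is a pared-down illustration in Python: real Eiffel events use a `meta`/`data`/`links` structure with specific typed links, so the single generic `CAUSE` link and flat fields here are simplifications of my own.

```python
import uuid

def event(event_type, previous=None):
    """A pared-down Eiffel-style event. Real events carry meta/data/links
    sections with specific typed links; one generic link suffices here."""
    e = {"id": str(uuid.uuid4()), "type": event_type, "links": []}
    if previous:
        e["links"].append({"type": "CAUSE", "target": previous["id"]})
    return e

scc = event("EiffelSourceChangeCreatedEvent")         # developer commits a change
scs = event("EiffelSourceChangeSubmittedEvent", scc)  # merged to integration branch
artc = event("EiffelArtifactCreatedEvent", scs)       # artifact built from it
artp = event("EiffelArtifactPublishedEvent", artc)    # artifact made available

# Walking the links backwards from the published artifact answers
# "which commit went into this artifact?"
chain = [artp, artc, scs, scc]
```

Even this minimal four-event chain already answers the questions from the top of the post: where a commit is in the flow, and which changes made it into a published artifact.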
I owned the second part of the agenda in the Grundfos presentation, and took the perspective of sharing my view on rolling out traceability using Eiffel with one customer, along with the challenges I saw and could foresee if deploying to more customers. I admit I’m a beginner within Eiffel, but that just emphasizes what I try to point out: we need to lower the barrier to using Eiffel. I was happy about the comment from the audience that what I described is much like the challenges they faced in the beginning. Exactly. It was also pointed out earlier, during the introduction, that for a protocol to be successful it needs to be used, and it needs to be easy to apply. Even Bas Peters from Github had a side note for us later that day about how easy it should be to grasp and understand how to apply the protocol - was it one hour?
I asked Ola Söder from Axis a question related to my presentation:
Would it be easier to get resources and focus for an Eiffel project if it was easy to show something simple first with only a little effort?
So what I want to point out in one line is: We need to make it much easier to get going with Eiffel and get a simple setup up and running.
Break-out sessions proposed:
Coordinating development efforts (within the Getting started session)
I was proposed as responsible for coordinating development efforts. Thanks - it fits well with what I really like to do with our customers as well: avoiding re-inventing the wheel. We have our open source roadmap and customer alliance, the Continuous Delivery Alliance, which can be relevant for the Eiffel community as well.
We were only two people interested in this breakout session, but don’t worry - I’ll try to coordinate it going forward anyway, through some transparency about what we do with our customers.
Instead we joined the Getting started session, as there would probably also be a need for coordinating efforts on tools and projects that make it easier to get started.
The getting started session was the largest group, and it was good that there were a lot of beginners and prospects of using Eiffel, as well as some very knowledgeable people who have worked with Eiffel for a long time.
My takeaways from the session were:
Finally, we agreed on a kind of hackathon or meetup very soon - January was proposed, but to me that seems far away ;-) - where we could actually create the material that would get new users started, and build a playground for demonstrations. A good group could consist of some of the consultants, knowledgeable Eiffel people, and some people from companies that would like to look further into Eiffel but haven’t started yet.
During the wrap-up I think I heard that Daniel would like to supply the demo from Michael Frick as a sandbox. That would be really interesting.
Extending Eiffel into operations
How can Eiffel be used from the time when your deployment starts? Today it usually ends before deployment. Eiffel should not interfere with the actual deployment, just track it.
I personally think this sounds interesting, but also that it will take some time to actually figure out what we would want it to be.
At least the idea of not stopping after deployment, but continuing the traceability, is sound.
Visualizing the traces and data seems to be something everyone wants - but how exactly do we do that?
The group presented their discussions, and I liked their approach of looking at what different groups could wish for:
I can summarize that today we have the publicly available Vici tool to show Eiffel events and their traces, and we at Praqma have a very simple Neo4j bridge that can show events as nodes and links as relations.
The group reported some use cases for ML:
They also discussed how to extract the right information from the events, as events do not necessarily contain all the information needed - and it could differ from flow to flow.
Thanks! It was a great day, and I felt welcome which is important in a community.
I learned a lot during the day, and I’m happy that my challenges seem to be shared by other newcomers. We will fix that, so that next year we’ll need an even bigger venue because the popularity has exploded.
I’m also taking with me that Eiffel is definitely something to consider really seriously if you’re looking into traceability and any kind of visualization or measurement of your process. Though it might not be obvious to you yet, and it will be hard to get started, I think it should be tried for real before abandoning the concept. It is a concept that has been battle-proven for years.
Let’s together make it really easy to use and really easy to understand!