Welcome to the inside scoop on how Adadot uses Adadot. That’s right, we eat our own dog food, and guess what? It tastes great.
Don’t believe us? Read on.
We used Adadot to become a more data-driven engineering team, resulting in a massive uptick in throughput. Three of our insights were:
1: We could influence the ‘lines per MR’ metric by breaking work down more
2: Our speed dropped in the latter part of our sprints, so we focused on harder tasks first
3: Making data visible alone improves performance
Does data driven engineering improve performance?
In the spirit of the question, let’s start by reviewing the data. Below is our own development team’s cumulative flow, with a clear shift up in February.
Did the team size grow? No.
Did team members change? No.
Did we start empowering the team with data? Yes, yes we did.
And what a difference we saw. In the three months between November and February we completed circa 10 issues, yet in just two months between February and April we completed in excess of 100. That’s ten times as many issues, in less time.
We made one significant change that created this wave of productivity: we started using Adadot.
What is Adadot?
Adadot is the only analytics tool that helps techies achieve their strategic goals by analyzing productivity and collaboration data.
It’s the fitness tracker for work. Instead of steps and heart rate, it reads work, collaboration and wellbeing metrics to help developers be the best version of themselves.
The data Adadot surfaces is grounded in Google’s DORA (DevOps Research and Assessment) State of DevOps research program, which represents seven years of research and data from over 32,000 professionals worldwide.
Using DORA as a stepping stone, Adadot has developed its own proprietary data model connecting the dots between working practices, collaboration patterns, and wellbeing.
How did we use Adadot?
Prior to using Adadot we reviewed our velocity and cumulative flow directly in Jira, but we were not living and breathing data-driven engineering. We knew our team was ready to go to the next level, and the Adadot dashboard was now ready to be used. We quickly identified three levers we could pull on the back of the metrics Adadot surfaced.
1: Improving the ‘lines per MR’ metric
Very quickly we were able to see some areas that needed greater focus. For example, the ‘lines per MR’ metric, which shows the number of lines of code changed per merge request, revealed that we had some work to do around chunking our merges into smaller increments. We took several actions on the back of this:
- We agreed to break bigger stories down into smaller ones. We noticed that the deviation between forecast and actual was much bigger when stories were too large. For example, a story estimated at a week might deviate from its forecast by only a day, whereas a story estimated at around a month could deviate by a whole week, proportionately much bigger.
- If a story was taking over three days, the person working on it had to explain why. This wasn’t done to shame anyone; quite the opposite. We have all experienced work items that bloat into something bigger than we expected. What this facilitated was a hive mentality: the team could swarm on the story, pushing the tricky work through more quickly and ensuring no team member was left to struggle alone.
- We introduced three amigos sessions. We agreed that before any work was started, three developers would meet up and discuss the best approach together, before any one of them started on the actual work. This helped us avoid black-box solutionizing.
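To make the ‘lines per MR’ metric concrete, here is a minimal sketch of how such a number could be computed. The merge request data and field names here are entirely made up for illustration; Adadot surfaces the real figure automatically from your repositories.

```python
# Hypothetical sketch: computing a 'lines per MR' style metric from
# merge request data. The MRs below are invented for illustration.
merge_requests = [
    {"id": 101, "additions": 420, "deletions": 180},  # a large, hard-to-review MR
    {"id": 102, "additions": 35, "deletions": 10},    # a small, focused MR
    {"id": 103, "additions": 60, "deletions": 25},
]

def lines_per_mr(mrs):
    """Average number of lines changed (added + deleted) per merge request."""
    total = sum(mr["additions"] + mr["deletions"] for mr in mrs)
    return total / len(mrs)

print(f"Lines per MR: {lines_per_mr(merge_requests):.1f}")
```

A falling average over time would suggest the team is successfully chunking work into smaller, easier-to-review merges.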
Taking these actions saw a notable increase in throughput from that sprint onwards, showing the value of reviewing the metric and planning how to influence it together.
2: Starting high and drilling in
Another way we used Adadot was to scan the graphs and indexes it provides at a summary level, looking for any dips or rises. We would then delve deeper to understand these changes and solutionize as a team. One example: we noticed our ‘speed’ rating was very high at the start of a given sprint, but dipped dramatically in the second half.
When drilling into this we realized our merge speed had dropped, which indicated blockers were occurring. Discussing it together, we realized we were bringing in easier tasks first, meaning the harder tasks, which are more liable to get blocked, were left until later in the sprint, when it was too late to push them to done. We changed this by encouraging people to ‘eat the frog’: take on the most daunting work first. This saw a notable increase in our speed, as we were able to resolve blockers earlier in the sprint and create a more sustainable, reliable velocity.
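As an illustration of the merge-speed comparison described above, here is a hedged sketch that splits merge requests by sprint half and compares average open-to-merge time. All dates and MRs are invented for the example; Adadot derives its ‘speed’ rating from real repository data.

```python
from datetime import datetime

# Hypothetical two-week sprint boundaries (made up for illustration).
SPRINT_MID = datetime(2022, 2, 14)

# Invented merge requests: opened/merged timestamps only.
mrs = [
    {"opened": datetime(2022, 2, 8, 9),   "merged": datetime(2022, 2, 8, 15)},   # 6h
    {"opened": datetime(2022, 2, 10, 10), "merged": datetime(2022, 2, 11, 10)},  # 24h
    {"opened": datetime(2022, 2, 15, 9),  "merged": datetime(2022, 2, 18, 9)},   # 72h, blocked
    {"opened": datetime(2022, 2, 16, 14), "merged": datetime(2022, 2, 18, 17)},  # 51h, blocked
]

def avg_merge_hours(mrs):
    """Mean hours from MR opened to MR merged."""
    hours = [(mr["merged"] - mr["opened"]).total_seconds() / 3600 for mr in mrs]
    return sum(hours) / len(hours)

first_half = [mr for mr in mrs if mr["opened"] < SPRINT_MID]
second_half = [mr for mr in mrs if mr["opened"] >= SPRINT_MID]

print(f"First half avg merge time:  {avg_merge_hours(first_half):.1f}h")
print(f"Second half avg merge time: {avg_merge_hours(second_half):.1f}h")
```

A second-half average that is several times the first-half average, as in this toy data, is the kind of pattern that prompted us to pull the harder work forward.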
3: Making data visible
As the Scrum Guide suggests, practicing empiricism by making performance data transparent transforms team performance. Simply by sharing the Adadot dashboard metrics with the team regularly throughout the sprint, the team became more self-organizing and could inspect and adapt their approach continually, thanks to the constant feedback loops the data provided. This alone allowed them to better understand and forecast work, ultimately resulting in far greater productivity.
Over to you
Using our own product transformed how our team delivers that very product, creating a massive shift in productivity levels. The metrics we leveraged in our own team are all metrics DORA has said matter for some years now; the difference was how easily Adadot surfaced them, making them instantly accessible and digestible for the team.
Have a go yourself and try Adadot free now – data driven engineering made easy.