
Beyond Code: Measuring Developer Experience

Discover how we measure developer experience at Adevinta to understand the productivity, performance and satisfaction of our engineering teams across the globe.

Not all metrics are created equal. Developer experience is often assessed only through activity metrics like committed lines of code, pull requests and story points. Unfortunately, these data points offer little insight into the actual productivity and well-being of your developers.

Most recently, McKinsey entered the debate, saying: “Yes, you can measure software developer productivity”, only to be met with harsh criticism of their approach.

In this article, we look into the concept and importance of measuring developer experience and explain how Adevinta got started doing so with a simple tool.

Who We Are

In the Cloud & Infrastructure organisation at Adevinta, we offer a suite of platform and data products to developers across all Adevinta’s marketplaces. With a workforce of around 3000 employees working in Product & Tech, spread across multiple brands, countries and languages, our goal is to meet all their needs and preferences. For us, it is key to understand what brings them together and what sets them apart in terms of developing software.

Only with this understanding can we plan and execute a great strategy to improve our services and the developer experience for everyone at Adevinta.

The Role of Developer Experience

The quest to quantify and optimise developer experience and productivity is a priority for many organisations. Understanding productivity is often crucial to the bottom line of your business: more productive developers can enable a faster time-to-market and reduced costs. Recent tech layoffs paint a picture of increasing pressure on businesses to become more efficient. Companies are trying to do more with the same number of developers, or even fewer.

Unfortunately, software development is often seen as a cost centre that many (understandably) claim is hard to quantify. After all, it is often a collaborative, complex and creative process. All of these aspects make it difficult to measure, requiring different metrics at different levels (individual, team, organisational, etc.).

A common approach is to focus on measuring output. Here we find the worst metrics in the software development world. Some fall into the trap of measuring lines of code or story points and mistaking these for created value, when they’re merely a byproduct of the process.

Even worse, gaming these metrics is easy: simply contribute lines of code all year and give higher estimates for story points, all without creating any business value. Relying solely on these metrics boils down to Goodhart’s Law.

Goodhart’s Law — When a measure becomes a target, it ceases to be a good measure.

The reality is that developing software is not only writing code but also reviewing it, collaborating with others and writing documentation. If you measured an individual on their lines of code, you’d overlook the time they spend helping a colleague with an important project. Worse, you’d encourage them to focus on their personal productivity gauge and ignore collaboration, losing out on the larger benefits it often brings.

Making a Case for Qualitative Data

Developer experience should be measured by capturing developers’ perceptions as well as their workflows in the various systems and processes that their work involves. To this end, Adevinta gathers data from both individuals and systems to build a comprehensive view of software delivery processes. For example, system data may be able to tell you how long code reviews take. But without perceptual data, you won’t know whether reviews are blocking developers and slowing down development, or whether developers are receiving the high-quality feedback they need to drive valuable enhancements and improvements.

Developer Experience: System-based metrics vs perceptual metrics

System-based data is available to many software enterprises almost by default — lines of code, pull requests and completed tasks can be extracted from most version control systems (e.g. GitHub) and project management systems (e.g. Jira).

At Adevinta, we even built a tool called “Ledger,” which integrates with our systems and enables teams to measure DORA metrics. Ledger helps teams understand and improve their software delivery process based on the available system data. They merely need to sign up, and metrics like lead time, deployment frequency, failure rate and mean time to recovery will be calculated for them automatically.
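Ledger’s internals aren’t shown here, but each DORA metric reduces to simple arithmetic once deployment events are available. Below is a minimal Python sketch under that assumption; the record shape (merge time, deploy time, failure flag, restoration time) is hypothetical and not Ledger’s actual data model:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (merged_at, deployed_at, failed, restored_at)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 15), False, None),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 3, 11), True, datetime(2024, 1, 3, 12)),
    (datetime(2024, 1, 4, 8), datetime(2024, 1, 4, 9), False, None),
]

def dora_metrics(deployments, period_days=7):
    # Lead time for changes: average time from merge to production
    lead_times = [deployed - merged for merged, deployed, _, _ in deployments]
    lead_time = sum(lead_times, timedelta()) / len(lead_times)
    # Deployment frequency: deployments per day over the observed period
    frequency = len(deployments) / period_days
    # Change failure rate: share of deployments that caused a failure
    failures = [d for d in deployments if d[2]]
    failure_rate = len(failures) / len(deployments)
    # Mean time to recovery: average time from failed deploy to restoration
    mttr = (sum((restored - deployed for _, deployed, _, restored in failures),
                timedelta()) / len(failures)) if failures else timedelta()
    return lead_time, frequency, failure_rate, mttr
```

In practice, a tool like Ledger extracts these events from version control and deployment systems automatically; the value it adds is the plumbing, not the formulas.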

But to get a holistic view of developer experience, we needed complementary data. We wanted to understand what each developer has to say about more aspects of their work such as collaboration, communication, flow and efficiency. So we asked them.

Start Simple

To measure this perceptual data, we opted for a survey. To allow for quick prototyping, we chose Google Forms. The survey included around 30 questions based on, but not limited to, the popular SPACE framework and other research done by DX. SPACE is an acronym that stands for:

  • Satisfaction
  • Performance
  • Activity
  • Communication (and Collaboration)
  • Efficiency (and Flow)

The SPACE framework aims to measure developer experience holistically in tech organisations, by looking at these metrics on individual and team levels.

Most survey questions were to be answered on a 1–7 Likert scale. At the end of the survey, developers were able to provide more feedback in free text format.
All responses were collected anonymously to encourage our developers to speak truthfully and without the fear of repercussions.

Our questions covered the following categories:

  • Tools & Services
  • Development Processes
  • Production Monitoring
  • Collaboration & Team Processes
  • Trade-offs
  • Code Health

Our strategy to collect as many survey responses as possible was simple. We spoke to engineering leaders across all of our brands about the survey and used every communications channel available to us. It also helped that the survey itself only took 5–10 minutes to complete. For reference, developer surveys at companies like Google and Microsoft typically require around 30 minutes to complete.

Deciphering Results

Note: Developer experience varies a lot across the Adevinta brands. Every brand has its own development teams with their own culture and values, built around a central internal marketplace. The following results are an average of all participants in Adevinta. Similar key findings exist per brand/marketplace, and we will try to highlight local team vs. global team problems and opportunities for improvement.

Approximately a third of our developers responded to the survey. For a first attempt at collecting internal developer data using a survey, we considered this a success. With future iterations, we intend to collect more perspectives and encourage more developers to contribute and help us help them.

The survey shed light on areas where developers expressed high satisfaction, such as purpose, ease of releasing software, a good understanding of the production environment, and the ability to unblock themselves if they’re stuck.

However, it also pinpointed specific pain points, revealing a desire for more focus on technical debt, the ease of contributing to the codebase and the interruptions they experience daily.

These findings not only provided us with valuable insights into the current state of developer experience, but also catalysed targeted improvements in 2024. These insights will allow us to guide Adevinta towards a more inclusive and supportive ecosystem for its development teams.

Areas for improvement

To visualise and determine highlights and areas for improvement, we used a diverging stacked bar chart. This visualisation lets us define an axis that we designate as “neutral.” In our case, this was the true neutral value of responses; however, we could have drawn the diverging line at any data value had we found responses to be disproportionately positive or negative.

In the chart below, every bar represents a question with the weight of responses distributed horizontally. Ordering questions by the amount of positive vs. negative responses results in the relative ranking of perceived successes and areas for improvement.
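As a sketch of how such a ranking can be derived from raw responses (the question names and scores below are hypothetical, not our actual survey data):

```python
# Hypothetical Likert responses (1-7) per survey question
responses = {
    "Ease of release": [6, 7, 5, 6, 4, 7, 5],
    "Technical debt focus": [2, 3, 1, 4, 2, 3, 5],
    "Code review quality": [5, 4, 6, 3, 5, 6, 4],
}

def diverging_summary(responses):
    """Share of negative (1-3), neutral (4) and positive (5-7) answers per
    question, ranked by net sentiment (positive share minus negative share)."""
    rows = []
    for question, scores in responses.items():
        n = len(scores)
        neg = sum(1 for s in scores if s <= 3) / n
        neu = sum(1 for s in scores if s == 4) / n
        pos = sum(1 for s in scores if s >= 5) / n
        rows.append((question, neg, neu, pos))
    # Most positive questions first; the bottom of the ranking flags pain points
    return sorted(rows, key=lambda r: r[3] - r[1], reverse=True)
```

Each row maps directly onto one bar of the diverging chart: the negative share is drawn to the left of the neutral axis and the positive share to the right.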

Survey Responses — Adevinta All Brands

Things got interesting when we compared the total Adevinta average to the local average of our brands. We were able to determine in what aspects our local brands outperform and underperform the global average. To achieve this, we kept the global question ranking and applied a filter to responses by brand/marketplace.
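A minimal sketch of this comparison, assuming hypothetical flat records of (brand, question, score) — the brand and question names are illustrative only:

```python
# Hypothetical flat survey records: (brand, question, score on a 1-7 scale)
records = [
    ("brand_a", "Collaboration", 6),
    ("brand_a", "Collaboration", 7),
    ("brand_b", "Collaboration", 4),
    ("brand_a", "Deploy experience", 3),
    ("brand_b", "Deploy experience", 6),
    ("brand_b", "Deploy experience", 5),
]

def brand_vs_global(records, brand):
    """Per question: the brand's mean score minus the global (all-brand) mean.
    Positive deltas mark where the brand outperforms the Adevinta average."""
    global_scores, brand_scores = {}, {}
    for b, question, score in records:
        global_scores.setdefault(question, []).append(score)
        if b == brand:
            brand_scores.setdefault(question, []).append(score)
    mean = lambda xs: sum(xs) / len(xs)
    return {q: mean(brand_scores[q]) - mean(scores)
            for q, scores in global_scores.items() if q in brand_scores}
```

Keeping the question ranking fixed to the global ordering while plotting each brand’s filtered responses makes the over- and under-performing areas stand out at a glance.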

Survey Responses — Adevinta Brand #1
Survey Highlights — Adevinta Brand #1

Some engineering cultures in Adevinta have great collaborative processes but a slightly worse experience when deploying software. They also spend more time learning from mistakes, yet consider their projects poorly scoped.

This is only one example of the distinctions we found across our brands. Together with local teams, we try to solve global and local pain points collaboratively. The importance of this can’t be overstated: in the Cloud & Infrastructure organisation, we can’t possibly cater to all developers in Adevinta at once. We work with local Tech Hubs that serve as platform teams for the local development teams and are more engaged with local needs.

What’s next

As we reflect on this initiative, it becomes clear that prioritising developer experience is not just a one-time endeavour but an ongoing commitment to excellence. Together with our stakeholders, we aim to address local and global pain points and improve the developer experience throughout Adevinta.

Based on the results, one thing we’re prioritising next is the implementation of a global developer portal. This initiative will help developers contribute to their respective codebases more easily, by establishing a common way of developing software, cataloguing services and providing the right documentation at the right time.

In conclusion, combining survey results with quantitative information from our systems has proven instrumental in shaping our strategy for 2024. Embracing transparency and a commitment to improvement, we will continue the trend of surveying our developers at a regular cadence to shape their experience to the best of our ability.
