By Fernando Cuenca

How Agile Are We?


3 Metrics for Service Delivery Agility

Most organizations going through some sort of “Agile transition” will eventually ask “How Agile are we?”, and will want the answer backed by some “objective” form of measurement that can be used for various purposes, such as gauging the progress of the transformation effort, or even justifying its existence.

Oftentimes, we see attempts to answer it by counting the number of agile teams launched, the number of people sent to agile training, or the number of certifications obtained. Other popular metrics focus on engineering activities (such as degree of automation, code coverage or build times), or on indicators of team output (such as story points, or velocity.) These metrics are seductive, because they are relatively simple to capture and track, but (as I wrote about earlier here) they also tell us very little about the impact the agile transition is making on the Business the IT organization serves, or how well it is addressing its needs.


A Customer-Focused Dashboard

A couple of years ago, while we were working with a common client, Alexei Zheglov proposed an alternative model as a way to reframe the conversation from a customer-focused point of view, avoiding inward-looking, team-centric metrics and replacing them with indicators of the things customers of a delivery service really care about.

The metaphor of an “agility dashboard” was fashionable at the time, so Alexei one day walked up to a whiteboard and drew the following sketch:

“Imagine that you could have something like this”, he said.

The first characteristic of this dashboard is that it’s not for a team, but for a “delivery service”: from the perspective of the Customers you’re serving, what’s important is that their needs (and oftentimes, explicit requests) are satisfactorily handled. More often than not, this requires the involvement of various teams and other individuals, and therefore agility needs to be measured at that level, through a more “systemic” lens that encompasses all the actors involved in satisfying customer requests end-to-end.

A second implication of adopting this “service delivery” point of view is that the unit of measure for this dashboard must then be some “customer-recognizable” work item: the kind of work unit that crosses the customer/service interface and that becomes the unit of commitment and delivery. In some contexts this may be small user stories, but more often it will be larger features, change requests or even entire projects.


Measuring What Customers Care About

“Customers’ purpose, whatever it might be, is very often linked to their timeline and the impact of other events or lost opportunity over time” (“Fit for Purpose”, by David J. Anderson & Alexei Zheglov, pg. 84.)

When it comes to delivery expectations, we know that the most common expectation customers have is connected to delivery times. It should then come as no surprise that “Lead Time” (also referred to in many contexts as “Time to Market”) takes a prominent, central role in the dashboard.

The important aspect to show here is that the answer to the question “how long will this take?” is not a number, but a probability distribution: elapsed time from commitment to delivery for similar work types will vary within some range, with some values being more frequently observed than others. “We know the forces of nature that contribute to the shape of this curve”, Alexei pointed out: it’s all about delays, more than it is about effort.

By showing time to market represented as a curve (or perhaps a histogram), we can help answer the question of “how fast” our service delivery is (on average), but also “how predictably” we deliver around that average (indicated by the shape of the tail to the right.)
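
To make this concrete, here is a minimal sketch of how such lead time readings could be computed, assuming you can export commitment and delivery dates for each work item from your system of record. All dates and names below are purely illustrative, not from any real client:

```python
from datetime import date
from statistics import quantiles

# Purely illustrative work items: (commitment date, delivery date).
# In practice, these would come from your system of record.
work_items = [
    (date(2023, 1, 9), date(2023, 2, 20)),
    (date(2023, 1, 16), date(2023, 3, 6)),
    (date(2023, 2, 1), date(2023, 2, 24)),
    (date(2023, 2, 13), date(2023, 5, 29)),
    (date(2023, 3, 6), date(2023, 4, 10)),
]

# Lead time: elapsed calendar days from commitment to delivery.
lead_times = [(delivered - committed).days for committed, delivered in work_items]

# The answer to "how long will this take?" is a distribution, not a
# number: report percentiles rather than a single average.
pct = quantiles(lead_times, n=100)
print(f"50th percentile: {pct[49]:.0f} days")  # "how fast", typically
print(f"85th percentile: {pct[84]:.0f} days")  # a common forecasting level
print(f"95th percentile: {pct[94]:.0f} days")  # the right tail: "how predictable"
```

Plotting the raw `lead_times` as a histogram gives the curve on the dashboard; the gap between the 50th and 95th percentiles is a quick numeric proxy for the length of that right tail.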

Much has been written about how Agile doesn’t necessarily mean faster delivery times, but the ability to react more quickly and gracefully to changes in the environment. From a Customer’s perspective, this will be perceived as the service’s ability to take in new requests when they arise, and the Customer’s ability to take more frequent delivery of results. On the dashboard, these two aspects are represented with the Replenishment and Release Frequency dials, which show current capability on a time-scale of “Yearly”, “Quarterly”, “Monthly”, “Weekly”, “Daily” and “Hourly.”
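
As a sketch of how those two dials could be derived, the snippet below classifies a stream of events (commitments taken in, or releases shipped) by the average interval between them. The band cut-offs and the `dial_reading` helper are my own rough assumptions; the article doesn’t prescribe exact thresholds:

```python
from datetime import date

# Dial bands from the dashboard. The day thresholds are rough
# assumptions of mine, not values from the article.
BANDS = [
    ("Hourly", 0.5),
    ("Daily", 3),
    ("Weekly", 14),
    ("Monthly", 45),
    ("Quarterly", 135),
]

def dial_reading(event_dates):
    """Classify a stream of events by the average gap between them.
    (A real implementation would use timestamps, not dates, so the
    "Hourly" band could actually be resolved.)"""
    events = sorted(event_dates)
    gaps = [(b - a).days for a, b in zip(events, events[1:])]
    avg_gap = sum(gaps) / len(gaps)
    for label, max_days in BANDS:
        if avg_gap <= max_days:
            return label
    return "Yearly"

# Illustrative data: dates when new requests were committed to
# (replenishment) and dates when results reached production (release).
replenishments = [date(2023, 1, 9), date(2023, 2, 6), date(2023, 3, 13)]
releases = [date(2023, 2, 20), date(2023, 2, 27), date(2023, 3, 6)]

print("Replenishment dial:", dial_reading(replenishments))  # Monthly
print("Release frequency dial:", dial_reading(releases))    # Weekly
```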


How Agile Do You Need to Be?

It is very often assumed that an “Agile Transformation” must move in the direction of adopting all the technical practices in the Agile toolkit. An “agility dashboard” based on metrics such as “number of teams doing agile”, or “number of commits to master per hour” may indeed encourage that kind of thinking, but the reality is that many agile practices are not trivial to implement and can come at a significant cost. A “service delivery agility” dashboard may, instead, help direct the conversation in a different direction.

The main force guiding an agile transition strategy needs to be a deep understanding of the needs of the Customers, and of what makes the service fit for their purposes. This understanding can, in turn, shape the “go to market” strategy, which can then be used to evaluate the degree to which current capability meets Customers’ needs.

We can imagine the dashboard having these “magic sliders” which control the practices and processes in place that produce the readings in the dials and charts above. The red markers in the picture represent the position we need on each slider to realize the envisioned “go to market” strategy. We can then see what kind of adjustment is needed on each aspect, moving the sliders up or down, and thus metaphorically selecting the appropriate agile techniques and practices to achieve the results we want.

An additional clue lies in the shading behind the dials, which provides some guidance around the practices that can be used to support different frequencies. If the service is currently replenishing at, say, monthly or quarterly intervals, then the organization can “get away” with the practice of “Big Design Up Front” (BDUF); for a replenishment frequency around the “weekly” mark, implementing the common Agile practice of weekly (or bi-weekly) sprint planning might suffice; but if the need sits at “daily” and trends towards “hourly”, then more aggressive, “Just In Time” planning and replenishment practices will be required. Similarly, on the Release side of the cycle, when releasing monthly or less frequently, it’s possible to sustain that rate without Continuous Integration; higher-frequency delivery cycles will make CI/CD a necessity.
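
One way to read that shading is as a simple lookup from dial position to the practices that can sustain it. The pairings below follow the paragraph above; the table structure itself is just an illustration:

```python
# Practices the shading associates with each replenishment band,
# per the discussion above; the structure here is just illustrative.
REPLENISHMENT_SHADING = {
    "Yearly":    "Big Design Up Front (BDUF) is still workable",
    "Quarterly": "Big Design Up Front (BDUF) is still workable",
    "Monthly":   "Big Design Up Front (BDUF) is still workable",
    "Weekly":    "weekly or bi-weekly sprint planning may suffice",
    "Daily":     "Just-In-Time planning and replenishment required",
    "Hourly":    "Just-In-Time planning and replenishment required",
}

RELEASE_SHADING = {
    "Yearly":    "sustainable without Continuous Integration",
    "Quarterly": "sustainable without Continuous Integration",
    "Monthly":   "sustainable without Continuous Integration",
    "Weekly":    "CI/CD becomes a necessity",
    "Daily":     "CI/CD becomes a necessity",
    "Hourly":    "CI/CD becomes a necessity",
}

print(REPLENISHMENT_SHADING["Weekly"])  # weekly or bi-weekly sprint planning may suffice
print(RELEASE_SHADING["Monthly"])       # sustainable without Continuous Integration
```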


Building the Dashboard

Given its customer-focused nature, the first question to ask in order to get started building a dashboard akin to the one described here may be “Who is your Customer?”, followed by “What are you delivering to them?” With this information, we can go to our system of record and dig out information about those work items to help us determine Lead Times, Replenishment and Release Frequencies.

When doing this kind of analysis for one of my clients recently, I found that the unit of commitment with their customers was the “Project” (a set of functional capabilities, bundled together and committed to as a whole), but the unit of development and release was the “Phase” (a portion of a project, usually addressing a logical subset of capabilities.) Commitments were being made roughly once a year (coinciding with the beginning of the Fiscal Year), resulting in a “yearly replenishment frequency” for this work type. Phases would usually have a 3-to-9 month lead time to be ready for release, but by keeping several of those running in parallel, the organization could sustain a release frequency of one production release every 1-to-2 months.
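
The article’s dashboard doesn’t invoke it, but Little’s Law (average WIP = throughput × average lead time) offers a quick sanity check on those numbers, suggesting how many phases had to be running in parallel to sustain that release rate:

```python
# Little's Law: average WIP = throughput x average lead time.
# A sanity check on the example above, not a method from the article.
avg_lead_time_months = (3 + 9) / 2   # phases take 3-to-9 months
releases_per_month = 1 / 1.5         # one release every 1-to-2 months

phases_in_flight = releases_per_month * avg_lead_time_months
print(f"Phases in flight: ~{phases_in_flight:.0f}")  # ~4 running in parallel
```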

The jury is still out on whether all this is fit-for-purpose for this organization’s Customers: is it adequate to take in new work only once every year? Is it OK for the Business to wait 3-to-9 months to see some of that promise delivered? Is it adequate for them to have new functionality available roughly every month?

That said, having that conversation in terms of these three metrics is certainly more productive than attempting to get the same insight by analyzing the number of agile teams, sorted by code coverage and grouped by number of build failures per week. 😉
