False sense of certainty: what is the alternative?
If you have ever worked in a fairly traditional company, you will likely connect with this easily. But I would bet it shows up more often than we imagine even in fairly digital-first companies that, in principle, adhere to agile principles and all of that. The moment pressure is high, a lot is at stake, and things are not progressing as we would like - that is when we see leaders doubling down in pursuit of some level, or at least a sense, of certainty.
And to be fair - that is not only natural, but one could argue it is something they are entitled to, at least in terms of getting some perspective, perhaps starting with some healthy transparency. Fair enough - but there are ways and ways of doing it.
Here's what we, unfortunately, observe more often than not:
A reinforcement of accountability at the individual level. In other words, someone who can be held accountable for delivering something, or for each piece of what needs to happen, is asked to step in and drive things forward (typically framed as being "empowered" to do so). There is nothing fundamentally wrong with the idea of accountability as such, but the risk is saving the day through (individual) heroics while not structurally fixing the underlying issues in how things work in the organization (in its "system"). So you are only "good" for now, for this instance - not necessarily set up for success in the next initiative.
Planning, re-planning, more and more scrutiny… There will likely be a call for more transparency and expectation setting. That tends to take the shape of asking the relevant teams to do their planning (meaning putting dates on things… even if it is accepted that those will change as things evolve - after all, we are agile, aren't we?!), and re-planning… as many times as needed, ultimately applying quite some scrutiny at a high transactional cost of coordination (overhead). Again, the issue here is not with the concept as such, but with the fact that these are too often, at best, educated guesses… especially when there are many dependencies across teams. Just to illustrate the point: the moment you have 4 dependencies to deliver a piece of work, and assuming each dependency independently has roughly a coin-flip (50%) chance of landing on schedule, basic probability puts the chance that everyone is on time at 0.5^4 ≈ 6% - less than 7% (see the small sketch right after this list).
If you bother asking those involved, many people on the floor can point out where the issues are, yet nothing changes for far too long. And by the time the topic becomes hot enough at the levels that can actually do something about it, it is likely already too late to act structurally (e.g., to simplify the dependency landscape by deliberately cutting corners, like having the teams on the critical chain collaborate more closely and efficiently).
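To make the dependency arithmetic above concrete, here is a minimal sketch. The per-dependency on-time chances are illustrative assumptions, not measured figures:

```python
# Probability that ALL dependencies land on time, assuming each one is
# independent and has the same chance of being on schedule.
def all_on_time(p_single: float, n_dependencies: int) -> float:
    return p_single ** n_dependencies

# Illustrative assumption: each dependency is a coin flip (50% on time).
print(all_on_time(0.5, 4))  # 0.0625 -> ~6%, i.e., less than 7%

# Even with fairly reliable teams (80% on time each), four dependencies
# still leave you missing the combined date most of the time:
print(all_on_time(0.8, 4))  # ~0.41 -> on time in only ~41% of cases
```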
There's got to be a better way - you may say! The good news is that there is… The bad news is that it is, as usual, much more easily said than done, and there is no precise recipe that will always work in all contexts. Which means we have to start somewhere and take it from there: evolve, experiment with alternative ways, and act on what you can do or at least influence.
Now, I don't intend to thoroughly elaborate on all possible alternatives here and now - if only because that would make for far too long a text. But let's go through some useful patterns that can serve as alternatives to the dysfunctions highlighted above.
Instead of estimating, try forecasting
The difference here is more fundamental than mere semantics. Estimation involves the use of assumptions and judgment - it is what we do when we put our best effort into assessing and stating by when we think we can finish something, or when we work out a relative sense of size (like using story points). Forecasting, on the other hand, means making predictions about future events on the basis of available data/information and trends.
The latter requires making use of something more tangible, preferably something that is not applicable only in one context (e.g., story points could in principle be used for forecasting, but only within the context in which their relative sizing is comparable). That is why I personally prefer to work with whatever tangible notion of work items exists (e.g., stories, epics…) and their historical patterns of throughput. While these are still subject to contextual variability (e.g., how one team slices stories may differ from another), they still allow for projections that go beyond a single context.
Another clear advantage of throughput is that it is a lagging indicator, and thus it inherits the variability of the underlying system (e.g., a period with a higher level of dependencies will tend to yield reduced throughput). If you try to do something similar with, say, story points, you will typically end up mixing two different concepts - the leading, initial best-judgment estimation made early on with the lagging velocity in story points - which is to say, unnecessarily adding complexity.
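To make "available data" concrete: in practice it can be as plain as a weekly throughput history and a few projections derived from it. A minimal sketch, where the history and the percentile choices are made-up assumptions for illustration:

```python
import statistics

# Hypothetical history: work items finished per week over a quarter.
weekly_throughput = [3, 5, 2, 6, 4, 4, 7, 1, 5, 4, 3, 6]

# Deciles give us P10..P90 cut points from the raw history.
deciles = statistics.quantiles(weekly_throughput, n=10)

typical = statistics.median(weekly_throughput)  # median: robust to outliers
worst = deciles[0]   # P10: a pessimistic-but-plausible week
best = deciles[8]    # P90: an optimistic-but-plausible week
print(f"typical={typical}, worst={worst:.1f}, best={best:.1f} items/week")
```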
It is (only) OK to set dates if you are explicit about the levels of uncertainty linked to them
To be super clear, this has to be more than just some sort of "confidence vote" (a feeling, in subjective terms) from the team on the date given. What I mean is communicating in uncertainty ranges that can be sensibly understood - and, more importantly, turning those uncertainty ranges into simple yet effective simulations that serve as a means to communicate that inherent uncertainty.
To illustrate, here is a very simple approach that I have been using, and training others with, to convey the overarching concept:
Estimate the scope using that same, more tangible reference mentioned above (like the number of items to be worked on, instead of hours or story points…). Do it in a way that acknowledges uncertainty - e.g., with a low estimate (best case) and a high estimate (worst case, where you can cater for a sensible risk level, such as how much you expect the scope to still evolve, or how clear/unclear it is).
Make use of the historical patterns of throughput to simulate scenarios. Here too you want to acknowledge uncertainty (or variability) in how much you can get done in a time frame. You can work with a few throughput projections - e.g., a typical one (an average or median - watch out, as the former is more easily influenced by outliers), a best case (e.g., the maximum or a very high percentile like P90), and a worst case (e.g., the minimum or a very low percentile like P10).
Visualize the simulations of all combinations of scope estimates and throughput projections, and keep yourself honest by following up on how the actuals are trending against those simulated scenarios. It could look something like the sketch below:
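Here is a minimal, self-contained sketch of the kind of simulation I mean - all numbers are illustrative assumptions, not a prescription:

```python
import math
import statistics

# --- Inputs (all numbers are illustrative assumptions) -----------------
scope_low, scope_high = 40, 60  # best case vs. worst case (incl. scope growth)
weekly_throughput = [3, 5, 2, 6, 4, 4, 7, 1, 5, 4, 3, 6]  # items/week history

# Throughput projections: worst (P10), typical (median), best (P90).
deciles = statistics.quantiles(weekly_throughput, n=10)
projections = {
    "worst (P10)": deciles[0],
    "typical (median)": statistics.median(weekly_throughput),
    "best (P90)": deciles[8],
}

# --- Simulate every scope x throughput combination ---------------------
for scope_name, scope in [("low scope", scope_low), ("high scope", scope_high)]:
    for tp_name, throughput in projections.items():
        weeks = math.ceil(scope / throughput)
        print(f"{scope_name:10} x {tp_name:16}: ~{weeks} weeks")
```

The spread between the most optimistic and most pessimistic combination - rather than a single date - is the honest answer to "when will it be done?", and tracking actual throughput against these scenarios week by week is what keeps the follow-up grounded.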
Fairly simple ideas like these, which are less subject to emotionally-driven discussions, can really go a long way. They won't solve the underlying problems (of why we are constantly late) by themselves, but that was never the point. The point is to provide a reflection mechanism so that people and teams can do the right thing and solve the problems themselves.
In the end, it is not really about following up on progress, but about what you make of what you observe on a more practical basis (hence the more tangible reference), using it as a mirror, a feedback mechanism, so as to have (more) meaningful conversations.
(By the way, there are in fact "on steroids" versions of this, like the well-known Monte Carlo-based throughput forecaster from Troy Magennis - but those mainly add robustness to your ability to make projections with confidence levels and risks linked to uncertainties; they don't necessarily add conceptually to the point I am trying to make here.)
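For the curious, here is a toy sketch of that Monte Carlo idea - my simplified illustration of the general technique, not Troy Magennis's actual tool, and all inputs are made-up assumptions:

```python
import random
import statistics

def monte_carlo_weeks(history, scope_low, scope_high, runs=10_000):
    """Toy Monte Carlo forecast: for each run, draw a scope from the
    estimated range, then replay randomly resampled historical weeks
    until that scope is exhausted; collect the weeks needed."""
    results = []
    for _ in range(runs):
        scope = random.uniform(scope_low, scope_high)
        done, weeks = 0.0, 0
        while done < scope:
            done += random.choice(history)  # resample one historical week
            weeks += 1
        results.append(weeks)
    return results

history = [3, 5, 2, 6, 4, 4, 7, 1, 5, 4, 3, 6]  # same illustrative history
outcomes = monte_carlo_weeks(history, scope_low=40, scope_high=60)
percentiles = statistics.quantiles(outcomes, n=100)
print(f"50% confidence: ~{percentiles[49]:.0f} weeks; "
      f"85% confidence: ~{percentiles[84]:.0f} weeks")
```

Instead of three hand-picked projections, you get a full distribution of outcomes, which is what lets you speak in confidence levels ("85% confident by week X") rather than single dates.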
P.S.: Over the next couple of weeks, I will be enjoying summer holidays in stunning Colombia (my wife's home country). I may therefore miss writing an issue or two (I will try to at least minimize it). Fully back on track around July 20th.
By Rodrigo Sperb - feel free to connect (I only decline invites from people with a clear agenda to 'coldly' sell me something); happy to engage and interact.