TL;DR: Obsessing over perfection can paralyze progress, especially in complex situations. Focus on what you can control through pragmatic steps like limiting work-in-progress (WIP), reducing batch sizes, and planning with realistic forecasts, to drive meaningful outcomes without getting stuck in “what-ifs”.
Often the enemy of progress is a kind of idealistic pursuit of the perfect. When we fixate on the ‘what-ifs’ of what could happen and get stuck, instead of focusing on a pragmatic path of what we can do about it now, a lot can go wrong (entropy being the natural direction of things, as physics would tell us).
“Progress over perfection”, in that sense, is not only a sensible maxim to live by, but also a way to avoid falling into a trap that is a recipe for frustration.
In my experience, that risk of a downward spiral tends to come into the picture when we perceive things to be (too) complex. It’s not that complexity doesn’t exist; it does, and often there’s not much you can do about it, other than focus on the reasonable next steps that can take us towards what seems directionally right.
Or as the character Anna in Frozen II (yes, the children’s princess movie – I have two daughters after all…) would put it:
“All I can do is to do the next right thing!”
There’s a whole bunch of ways I could go next, since that piece of advice has fundamental implications, if you think about it enough. But let me do what I often do here: start with a general, society-level example, and then move on to something closer to the realm of my writing here, a work-related theme (with a bias to product / knowledge work).
Here’s one of the practical ways we have evolved, as a civilized society, to deal with complex problems:
We understand we can’t solve everything around a problem, so we can’t take an idealistic perspective on it;
We weigh trade-offs in an attempt to find a leverage point that will take us in the direction we want to go, as a society;
We design and implement policies focused on that leverage point – and hopefully keep monitoring things to adjust where needed.
At the risk of repeating myself, while acknowledging the power of consistent examples (again, trade-offs…), as I have written before:
When dealing with misbehavior by people, often all it takes is making an example of a few, throwing all the laws on the books at them, so that everyone else knows what could happen. Then most people will not engage in that behavior anymore.
Another simple example: do laws against alcohol consumption by minors entirely prevent teenagers from having a drink? That hasn’t been my experience. Do we still think they are valuable because they create a sensible barrier? Of course – and that’s really the point.
After all, the parallel idealistic universe of attempting to thoroughly prevent all misbehavior would require an absurd level of external control by (e.g., governmental) entities, which I am personally not willing to trade off on – hopefully you are not either.
What about work, then?!
What could be examples where we see similar things playing out, for instance teams losing that precious opportunity to focus on the things they can control? Even while accepting that there’s some complexity, and therefore things won’t always be exactly as theory would inform us.
Here are a few close to my heart (if you have been following this for a while, you will probably recognize them):
WIP: How many goals, and how much work, a team is going after at a time;
Batch size: How big, or hopefully rather small, are the chunks of work a team is developing against;
Planning: How they make plans and keep planning (reacting to changes in circumstances) in ways that set them up for success – be it speed, predictability, or whatever is valued most in your context.
You may be asking yourself now – “fine, I get that, but can you get more practical and specific on what we can do about those things?”. Fair enough, and in the best spirit of “progress over perfection” (since there’s quite a lot that could be said, much more than can fit in a single post, or even in all the posts I have ever written here, for that matter), perhaps a nice combination of well-backed theory and some practical examples can come in handy and spark an insight.
Controlling WIP
When it comes to WIP, the biggest thing is to understand the implications of taking on too many tracks in parallel. That should make product people mad: we take on more risk (due to batch-like delivery), and the priorities that were set might not dictate the order in which things actually come out. A fairly simple simulation (shown here with a game) illustrates that idea pretty nicely.
We can also use Little’s Law to estimate (with some sensible assumptions) the implications of adding additional WIP on the expected flow time of work (i.e., the average time to complete work). In the example below (from an actual team), we can see that the average expected flow time, in this case for a team’s portfolio-level deliverables (think of them as designed projects to unlock some value or achieve some goal), is currently around 8 weeks. But adding 3 more parallel tracks of work pushes it all the way to more than 15 weeks.
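If you like to see the arithmetic, here is a minimal Python sketch of that Little’s Law estimate. The throughput and WIP values are made-up assumptions chosen to roughly reproduce the 8-to-15-weeks picture, not the actual team’s data:

```python
# Minimal sketch of the Little's Law estimate above.
# Throughput and WIP numbers are illustrative assumptions, not real team data.

def expected_flow_time(avg_wip: float, avg_throughput_per_week: float) -> float:
    """Little's Law: average flow time = average WIP / average throughput."""
    return avg_wip / avg_throughput_per_week

throughput = 0.42   # hypothetical: deliverables completed per week, on average
current_wip = 3.4   # hypothetical: parallel tracks of work today
extra_tracks = 3    # the "let's just start three more things" scenario

print(f"Current expected flow time: "
      f"{expected_flow_time(current_wip, throughput):.1f} weeks")
print(f"With {extra_tracks} extra tracks: "
      f"{expected_flow_time(current_wip + extra_tracks, throughput):.1f} weeks")
```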
Will the implication always be precisely that? Not quite. We are, after all, talking about a context with inherent variability and a fair amount of complexity (software development). But it gives us an insight that is both theory-informed and data-representative of the specific context (a kind of “useful model”, since all models are wrong anyway).
And look, there will always be a “good enough” excuse to start something new – one I often hear is that we are stuck on some dependency somewhere else. But that’s precisely where the insight becomes powerful – OK, but is that enough to trade off more than two weeks of expected flow time?! Do we expect to be blocked for that long? Isn’t there something else useful the blocked people can do in the meantime (hint: what about helping somewhere else, or picking up some incremental engineering-excellence activity?) that doesn’t imply opening up a new front of work!? I hardly think that will tend to be the case.
Controlling Batch Size
Here’s an insight I shared with a group of people the other day, representing a system spanning 12 different teams, based on actual historical delivery data at the same team portfolio level as in the example above…
There may be all kinds of (good enough) reasons why things sometimes take longer. But the good old principles of flow still apply – and batch size is a key one!
Being the data guy I am, I haven’t just stated that; I’ve demonstrated it with an insight. Taking three reference delivery timelines (as possible expectations of how long it could take to deliver something), I calculated how often we have delivered within each of those timelines depending on the batch size. For the sake of illustration, let’s compare the two extremes: the smaller batches vs. the bigger ones (here defined as 2.5x or more in number of work items delivered). It turns out that, with the exception of the longest reference timeline (which we tend to meet anyway, in good 80/20 Pareto fashion), if we want to iterate any faster (and we should, because that means value in the hands of users sooner, or at least faster learnings), smaller batches were delivered within those timelines about 50% more often. Quite a difference, isn’t it? The power of slicing work right there.
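As a rough illustration of how such a comparison can be computed, here is a small Python sketch. The dataset, column names, reference batch size and timelines below are all hypothetical; this shows the shape of the approach, not the actual 12-team analysis:

```python
# Minimal sketch of a batch-size analysis, on a hypothetical dataset with one
# row per delivered portfolio item. All values and thresholds are illustrative.
import pandas as pd

df = pd.DataFrame({
    "item_count":      [3, 5, 12, 4, 15, 6, 20, 2, 9, 14],   # work items per deliverable
    "flow_time_weeks": [4, 6, 14, 5, 18, 7, 22, 3, 9, 16],   # time to deliver
})

REFERENCE_BATCH = 6  # hypothetical reference batch size, in work items
df["batch"] = df["item_count"].apply(
    lambda n: "big (>= 2.5x ref)" if n >= 2.5 * REFERENCE_BATCH else "small"
)

# Share of deliverables completed within each of three reference timelines,
# split by batch-size bucket.
for horizon_weeks in (6, 10, 16):
    within = (df["flow_time_weeks"] <= horizon_weeks).groupby(df["batch"]).mean()
    print(f"Share delivered within {horizon_weeks} weeks:\n{within.round(2)}\n")
```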
Controlling Planning
Here’s yet another area where teams have more agency than they perhaps sometimes think: how to put their destiny in their own hands when making plans and commitments towards stakeholders. A pragmatic way I help teams hold up a kind of practical mirror here is to use throughput-based forecasting (via Monte Carlo simulation) to understand a team’s capacity to deliver, bounded to their context and historical performance.
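To make that concrete, here is a minimal sketch of the kind of throughput-based Monte Carlo forecast I’m referring to. The historical throughput values and the period length are hypothetical assumptions:

```python
# Minimal sketch of throughput-based Monte Carlo forecasting, assuming we have
# historical weekly throughput samples (items finished per week) for a team.
# The numbers are illustrative, not real team data.
import random

historical_weekly_throughput = [2, 0, 3, 1, 4, 2, 2, 1, 3, 0, 2, 5]  # hypothetical
weeks_in_period = 12     # length of the upcoming planning period
simulations = 10_000

totals = []
for _ in range(simulations):
    # Resample one throughput value per week from history and add them up.
    totals.append(sum(random.choice(historical_weekly_throughput)
                      for _ in range(weeks_in_period)))
totals.sort()

for confidence in (0.85, 0.75, 0.50, 0.35):
    # "X% confidence" = the amount of work completed in at least X% of the runs.
    idx = int((1 - confidence) * simulations)
    print(f"{int(confidence * 100)}% confidence: ~{totals[idx]} items or more")
```

Comparing a team’s planned scope for the period against those percentiles is what gives you the confidence levels in the Team A / Team B scenarios below.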
And let me just ask you this: considering the two actual scenarios below, which team do you think is better set up for success in the upcoming period they planned their work against?
Team A so far has only planned to the equivalent of a 75-85% confidence level (in other words, upfront, with what they know today, they would have to deliver an amount that matches a minimum throughput reached 75 to 85% of the time historically).
Team B has done the same and, from only what they (can) know upfront, they are already at a level of about 35% confidence.
I know where my bet would be: right there with Team A, which is prepared to face the unknowns that will surely come about, because they have the slack to do so, as opposed to Team B, which is just kind of hoping for the best.
By Rodrigo Sperb. Feel free to connect; I'm happy to engage and interact. I’m passionate about leading to achieve better outcomes with better ways of working. How can I help you?