Synergy between the reification error and confirmation bias


In deciding whether to undertake technical debt retirement projects, organizations risk making inappropriate decisions because of a synergy between the reification error and confirmation bias. Together, these two errors of thought make it difficult to commit appropriate levels of resources. And when organizations do commit resources, they tend to underestimate costs. That underestimate can elevate the chance of failure in technical debt retirement projects.

The reification error and confirmation bias

As explained elsewhere in this blog, the reification error is an error of reasoning in which we treat an abstraction as if it were a real, concrete, physical thing. Because technical debt is an abstraction, we risk committing the reification error when we deal with it. (See “Metrics for technical debt management: the basics”)

Confirmation bias is a cognitive bias that causes us to seek and favor information that confirms our preconceptions, and to avoid or discount information that disconfirms them. (See “Confirmation bias and technical debt”)

How the reification error affects management

The reification error might be responsible, in part, for a widely used management practice that often appears in the exploratory stages of undertaking projects. Let’s start with an illustration from the physical world.

In the physical world, when we want cherries, we go to a market and check the price per pound or kilo. Then we decide how much we want. If the price is high, we might decide to buy fewer cherries. If the price is low, we might buy more cherries. We have in mind a total cost target, and we adjust the weight of the cherries to meet the target. In the physical world, we can often adjust what we purchase to match our willingness to pay.

Retiring technical debt doesn’t work like that, in part, because technical debt is an abstraction. But we try anyway; here’s how it goes. Management decides to retire a particular class of technical debt. They ask an engineer for an estimate of the cost. Sometimes Management reveals the target they have in mind if they have one; sometimes not. The estimate comes back as Total ± Uncertainty. Management decides that’s too high, or the Uncertainty is too great. They then ask the engineer to find a way to do it for less, or to reduce the Uncertainty.

Management—the “customer” in this scenario—makes this request, in part, based on the belief that adjusting the work is possible. Management hopes that the engineer can adjust the work to meet a (possibly unstated) target, in analogy to buying cherries. That thinking is an example of the reification error. In this dynamic, we rarely take into account the fact that retiring technical debt isn’t exactly like buying cherries.

How confirmation bias affects engineering estimates

Return now to the interaction between Management and the engineer/estimator. The engineer now suspects that Management does have a target in mind. Some engineers might ask what the target is. Some don’t. In any case, the engineer makes a lower estimate, which might still be too high. This process repeats until either Management decides against retiring the debt, or accepts the lowest Total ± Uncertainty.

In adjusting their estimates, engineers have a conflict of interest that can compromise their objectivity through the action of confirmation bias. For technical debt retirement efforts, engineers are usually highly motivated to gain Management approval of the project, in part because of frustration with the engineering productivity the debt costs them. And since engineers typically sense that Management approval is contingent on finding an estimate that’s low enough, they carry a preconception into the work: they have an incentive to convince themselves that Management’s adjustments to budget and schedule are reasonable. Confirmation bias then does its work. Engineers tend to seek justifications for the adjustments, and they tend to avoid seeking reasons to believe that the adjustments might not be feasible.

How synergy between the reification error and confirmation bias comes about

Because of the reification error, Management tends to believe that retiring technical debt is a more adjustable activity than it actually is. Because of confirmation bias, engineers tend to believe that Management’s proposed cost and schedule are feasible. Too often, the synergy between the two errors of thinking provides a foundation for disaster.

Why this synergy creates conditions for disaster in technical debt retirement projects

Management usually equates estimates with commitments. Engineers don’t. Management usually forgets or ignores the upside Uncertainty. Typically, when Management accepts an estimate, the engineering team finds that it has made a commitment to deliver the work for the cost Total, with zero upside Uncertainty. Rarely does Management make this explicit. An analogous problem occurs with schedule.

By ignoring the Uncertainty, Management (the buyer) transfers the uncertainty risk to the project team. That strategy might work to some extent with conventional development or maintenance projects, where we can adjust scope and risk before the work begins. But for technical debt retirement projects, this practice creates problems for two reasons.
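
To see what ignoring the Uncertainty implies, here’s a minimal sketch in Python with entirely hypothetical numbers. The symmetric triangular cost distribution and the specific figures are assumptions for illustration only, not data from any real project.

```python
import random

# Hypothetical estimate: Total = 100 cost units, Uncertainty = +/- 30.
# The team is held to the Total, with zero upside allowance.
TOTAL = 100.0
UNCERTAINTY = 30.0
TRIALS = 100_000

overruns = 0
for _ in range(TRIALS):
    # Assume actual cost varies symmetrically around the estimate.
    # Real cost distributions tend to be right-skewed, which is worse.
    actual_cost = random.triangular(TOTAL - UNCERTAINTY, TOTAL + UNCERTAINTY, TOTAL)
    if actual_cost > TOTAL:
        overruns += 1

print(f"Probability of exceeding the committed Total: {overruns / TRIALS:.0%}")
# Roughly 50%. Ignoring the upside Uncertainty doesn't eliminate the risk;
# it merely transfers it to the project team.
```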

Adjusting the scope of debt retirement projects is difficult

First, with technical debt retirement we’re less able to adjust scope. To retire a class of technical debt, we must retire it in toto. Retiring only a portion of the class leaves the asset in a mixed state that can actually increase MICs. So it’s usually best to retire the entire class, leaving the asset in a uniform state.

Debt retirement efforts are notoriously unpredictable

Second, the work involved in retiring a particular class of technical debt is more difficult to predict than is the work involved in more conventional projects. (See “Useful projections of MPrin might not be attainable”) Often, we must work with older assets, or older portions of younger assets. The people who built them aren’t always available, and documentation can be sparse or unreliable. Moreover, it’s notoriously difficult to predict with accuracy when or for how long affected assets will be out of production. Revenue stream interruptions, which can comprise a significant portion of total costs, can be difficult to schedule or predict. Thus, technical debt retirement projects tend to be riskier than other kinds of projects. They have wider uncertainty bands. Ignoring the Uncertainty, or trying to transfer responsibility for it to the project team, is foolhardy.

A strategy for reducing the effects of this synergy

To intervene in this dynamic, we must find a way to limit how the consequences of the reification error and the consequences of confirmation bias can interact. That limits the ability of one phenomenon to reinforce the other. The task is well suited to Donella Meadows’ concept of leverage points [Meadows 1999]. See “Leverage points for technical debt management.”

In that post, I summarized Meadows’ concepts of using leverage points to alter the behavior of complex systems. One can intervene at one or more of 12 categories of leverage points. These are elements in the system that govern the behavior of the people and institutions that comprise the system. In that post, I sketched the use of Leverage Point #9, Delays, to alter the levels of technical debt in an enterprise.

In what follows I sketch the use of interventions at Leverage Point #8, “The strength of negative feedback loops, relative to the impacts they are trying to correct against.”

Our strategy is this: strengthen the feedback loops that govern decision makers’ budget performance so that those loops register not just the direct cost of the retirement project, but also the enterprise-wide consequences of retiring the debt, or of declining to retire it.

A feedback loop that now provides budgetary control in most organizations

One feedback loop at issue in this case, illustrated in the diagram below, influences managers who might otherwise overrun their budgets. It does so by triggering some sort of organizational intervention when a manager overruns his or her budget. And it leads to increases in the size and stature of the portfolios of managers who handle their budgets responsibly. Presumably, that’s one reason why managers compel estimators to find approaches that cost less. The feedback loop to which managers are exposed causes them to establish another feedback loop involving the engineer/estimator, and later the engineering team. That second loop causes engineers to hold down their estimates, and later to limit actual expenditures.

A diagram of effects analysis

A feedback loop that now provides budgetary control in most organizations.

We can use a diagram of effects [Weinberg 1992] to illustrate the feedback mechanism commonly used to control the performance of managers who are responsible for portfolios of project budgets. In the diagram (above), the oval blobs represent quantities indicated by their respective captions. Each of these quantities is assumed to be measurable, though their precise values and the way we measure them are unimportant for our rather qualitative argument.

What the arrows mean

Notice that arrows connect the blobs. The arrows represent the effect of changes in the value represented by one blob on the value represented by another. The blob at the base of the arrow is the effector quantity. The blob at the point of the arrow is the affected quantity. Thus, the arrow running from the blob labeled “Actual Spend” to the blob labeled “Overspend” expresses the idea that a positive (or negative) change in the amount of actual spending on projects causes a positive (or negative) change in Overspend. When a change in the effector quantity causes a like-signed change in the affected quantity, we say that their relationship is covariant.

Because increases in Budget Authority tend to decrease Overspend, all other things being equal, the relationship between Budget Authority and Overspend is contravariant. We represent a contravariant relationship between the effector quantity and the affected quantity as an arrow with a filled circle on it.

Finally, notice that the arrow from Overspend (effector) to Promotion Probability (affected) has a filled Delta on it. This represents the idea that as Overspend increases, it negatively affects the probability that the manager will be promoted at some point in the future. The Delta indicates a delayed effect; that the Delta is filled indicates a contravariant relationship. (An unfilled Delta would indicate a delayed covariant effect.)
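
For readers who prefer code to pictures, here’s a minimal sketch in Python of the diagram as data. The quantity names come from the diagram; the encoding (a sign for polarity, a flag for the Delta) is my own convention rather than Weinberg’s notation, and the arrow from Promotion Probability back to Budget Authority is inferred from the loop description that follows.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Effect:
    effector: str   # blob at the base of the arrow
    affected: str   # blob at the point of the arrow
    polarity: int   # +1 for covariant, -1 for contravariant (filled circle)
    delayed: bool   # True when the arrow carries a Delta

DIAGRAM = [
    Effect("Actual Spend", "Overspend", +1, False),
    Effect("Budget Authority", "Overspend", -1, False),
    Effect("Overspend", "Promotion Probability", -1, True),
    # Inferred from the prose that follows: Promotion Probability feeds back
    # covariantly into Budget Authority (any delay isn't specified in the post).
    Effect("Promotion Probability", "Budget Authority", +1, False),
]

# A loop is self-reinforcing when the product of its polarities is positive,
# i.e., when it contains an even number of contravariant arrows.
loop_edges = [e for e in DIAGRAM if e.effector != "Actual Spend"]
polarity_product = 1
for edge in loop_edges:
    polarity_product *= edge.polarity

print("Self-reinforcing loop" if polarity_product > 0 else "Balancing loop")
```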

Loops in the diagram of effects

This diagram contains a loop connecting Budget Authority, Overspend, and Promotion Probability, and that loop has the potential to “run away.” That is, as we go around the loop, we find self-reinforcement, because the loop contains an even number of contravariant relationships. It works as follows:

As Overspend increases, after a delay, the Probability of Promotion decreases. This causes reductions in Budget Authority because, presumably, the organization has reduced faith in the manager’s performance. Reductions in Budget Authority make Overspend more likely, and round and round we go.

Similarly:

As Overspend decreases, after a delay, the Probability of Promotion increases. This causes increases in Budget Authority because, presumably, the organization has increased faith in the manager’s performance. Increases in Budget Authority make Overspend less likely, and round and round we go.

Fortunately, other effects usually intervene when these self-reinforcing phenomena grow too large, but that’s beyond the scope of this argument. For now, all we need to observe is that managers who manage their budgets effectively tend to rise in the organization; those who don’t, don’t.
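
To make the run-away tendency concrete, here’s a toy Python simulation of the loop. Only the signs of the relationships come from the diagram; the update rules, coefficients, and starting values are illustrative assumptions, and the delay on the promotion effect isn’t modeled.

```python
def run(initial_overspend: float, steps: int = 8) -> list[float]:
    """Iterate the Budget Authority -> Overspend -> Promotion Probability loop."""
    budget_authority = 100.0
    overspend = initial_overspend
    history = [round(overspend, 1)]
    for _ in range(steps):
        # Contravariant: more overspend, lower promotion probability.
        promotion_probability = 0.5 - 0.01 * overspend
        # Covariant: better promotion prospects, more budget authority.
        budget_authority += 40.0 * (promotion_probability - 0.5)
        # Contravariant: less budget authority, more overspend.
        overspend += -0.2 * (budget_authority - 100.0)
        history.append(round(overspend, 1))
    return history

print("Starts over budget: ", run(initial_overspend=10.0))
print("Starts under budget:", run(initial_overspend=-10.0))
# Each trajectory drifts further in its initial direction: overspending breeds
# more overspending, and underspending breeds still less spending.
```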

The result is that managers limit spending to avoid overspending their budget authority. And that’s one reason why they push engineers to produce lower estimates for technical debt retirement projects.

How this feedback loop overlooks important drivers of technical debt formation

To break the connection between the managers’ reification error and the engineers’ confirmation bias, our intervention must cause the managers and the engineers to make calculations differently. We can accomplish this by requiring that they consider more than the mere cost of retiring the class of technical debt under consideration. They must estimate the consequences of not retiring that technical debt, and they must also estimate costs beyond the cost of retiring the debt. In what follows, I’ll use the shorthand TDBCR to mean the class of Technical Debt Being Considered for Retirement.

Under current practice, estimates for technical debt retirement projects cover only the cost of performing the work required to retire the TDBCR. Management then decides whether, when, and to what extent to commit resources to execute the project. The primary considerations are budgetary.

Since the debt retirement project can provide benefits beyond the manager’s own portfolio, declining to undertake the project can have negative consequences beyond that portfolio as well. Managers who decline to undertake debt retirement projects are responsible for those consequences. But accountability for these decisions is rare. That’s the heart of the problem. So let’s look at some examples of relevant considerations.

Adjustments that would help these feedback loops gain control of technical debt

In allocating resources for a technical debt retirement project, decision makers must weigh considerations beyond the cost of retiring the debt. A responsible decision is possible only if other kinds of estimates are also available. Here are some examples of the estimates we need:

  • The effects of retiring TDBCR on the cost of executing any other development or maintenance efforts
  • The effects of retiring TDBCR on revenue and market share for all existing assets that directly produce revenue and which could be affected by retiring TDBCR
  • The revenue that would become available (and timing thereof) from any new products or services that become possible because of retiring TDBCR
  • The effects of retiring TDBCR on the cost of executing other technical debt retirement efforts

These items might not be related to anything for which the decision maker is responsible. But the feedback loop we now use to influence the decision maker excludes them, even though the decision maker’s decisions affect them. Until we install feedback loops that cause the decision maker to consider these indirect consequences, or until we make these decisions at levels of the organization that encompass them, the effects of the decision maker’s decisions remain uncontrolled, and those decisions might not be optimal for the enterprise.
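
Here’s a minimal sketch, in Python and with entirely hypothetical figures, of the kind of comparison such a feedback loop would have to surface. The categories mirror the list above; none of the numbers comes from this post or from any real project.

```python
# Hypothetical figures, in the same cost units as the retirement estimate.
RETIREMENT_COST = 400  # the only number most estimates contain

CONSEQUENCES_OF_RETIRING_TDBCR = {
    "reduced cost of other development and maintenance efforts": 180,
    "revenue and market share protected on affected assets": 150,
    "new revenue enabled by retiring TDBCR": 220,
    "reduced cost of other debt retirement efforts": 60,
}

total_benefit = sum(CONSEQUENCES_OF_RETIRING_TDBCR.values())
net_effect = total_benefit - RETIREMENT_COST

print(f"Cost of retiring TDBCR:       {RETIREMENT_COST}")
print(f"Enterprise-wide benefit:      {total_benefit}")
print(f"Net effect of retiring TDBCR: {net_effect:+}")
# A decision based on RETIREMENT_COST alone rejects the project whenever the
# budget looks tight; a decision based on the net effect can reach the
# opposite, enterprise-optimal conclusion.
```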

References

[McConnell 2006] Steve McConnell. Software Estimation: Demystifying the Black Art. Redmond, WA: Microsoft Press, 2006.

[Meadows 1999] Donella H. Meadows. “Leverage Points: Places to Intervene in a System.” Hartland, VT: The Sustainability Institute, 1999. Retrieved: June 2, 2018.

[Weinberg 1992] Gerald M. Weinberg. Quality Software Management, Volume 1: Systems Thinking. New York: Dorset House, 1992. This volume contains a description of the “diagram of effects.”
